Abstract: In recent years, generative adversarial networks have been widely used for image restoration and have achieved good results. However, current methods do not address the blurred structures and textures that appear in high-resolution images (512×512), which stem mainly from a lack of effective feature information. To address this problem, this paper proposes a generative adversarial network that combines image features with semantic information. A context-aware image restoration model is built around semantic information: it adaptively fuses semantic information with image features, replaces traditional convolution with an adaptive convolution, and adds a multi-scale context aggregation module after the decoder to capture long-range information for contextual inference. Experiments on the Places2, CelebA-HQ, Paris Street View, and Openlogo datasets show that the proposed method outperforms existing methods in terms of L1 loss, Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM).
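The abstract does not specify how the multi-scale context aggregation module is constructed; a common realization of such a module is a bank of parallel dilated convolutions whose outputs are fused, which enlarges the receptive field without downsampling. The PyTorch sketch below is purely illustrative: the class name, dilation rates, and channel widths are assumptions for exposition, not the paper's specification.

```python
import torch
import torch.nn as nn

class MultiScaleContextAggregation(nn.Module):
    """Illustrative sketch (assumed design, not the paper's exact module):
    parallel dilated convolutions capture long-range context at several
    receptive-field sizes; a 1x1 convolution fuses the branches back to
    the input channel count."""
    def __init__(self, channels: int = 64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding=d keeps the spatial size for a 3x3 kernel
                # with dilation d.
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # Fuse the concatenated multi-scale features.
        self.fuse = nn.Conv2d(channels * len(dilations), channels,
                              kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

# Usage: applied after a decoder producing 64-channel feature maps
# at the 512x512 resolution discussed in the abstract.
if __name__ == "__main__":
    module = MultiScaleContextAggregation(channels=64)
    out = module(torch.randn(1, 64, 512, 512))
    print(out.shape)  # torch.Size([1, 64, 512, 512])
```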