Abstract: Portable imaging devices are ubiquitous in everyday life. However, due to hand jitter or fast-moving objects in the scene during shooting, the captured image or video is often blurred, causing the loss of important details. To restore blurred videos and images to a clear state, we build on generative adversarial networks and propose a novel end-to-end bidirectional time-domain feature flow blind motion deblurring algorithm. The algorithm fully exploits the spatio-temporal continuity constraint on feature information to establish a bidirectional transmission channel for time-domain features between adjacent frames. A multi-stage autoencoder deblurring network, with parallel encoding and hybrid-decoding fusion, fuses the multi-channel content information of a frame triplet and restores a sharper frame for the video. Experimental results show that the proposed algorithm outperforms existing state-of-the-art algorithms on the traditional image quality metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), as well as in visual quality, at an acceptable time cost.
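A minimal conceptual sketch of the frame-triplet fusion idea summarized above, written in PyTorch; the module names, layer widths, and residual connection are illustrative assumptions, not the architecture reported in this paper.

# Conceptual sketch only: parallel encoding of a frame triplet and fused decoding.
# Layer widths, kernel sizes, and the residual connection are assumptions for
# illustration, not the paper's actual multi-stage network.
import torch
import torch.nn as nn

class TripletDeblurSketch(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Shared encoder applied to each frame of the triplet ("parallel encoding").
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder fuses the three feature maps ("hybrid-decoding fusion").
        self.decoder = nn.Sequential(
            nn.Conv2d(3 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, prev_frame, cur_frame, next_frame):
        # Encode each frame, so temporal context flows in from both directions.
        feats = [self.encoder(f) for f in (prev_frame, cur_frame, next_frame)]
        fused = torch.cat(feats, dim=1)
        # Predict a residual that sharpens the blurred current frame.
        return cur_frame + self.decoder(fused)

# Usage: deblur the middle frame of a triplet of 256x256 RGB frames.
model = TripletDeblurSketch()
prev_f, cur_f, next_f = (torch.rand(1, 3, 256, 256) for _ in range(3))
sharp = model(prev_f, cur_f, next_f)  # shape: (1, 3, 256, 256)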