Abstract: In recent years, advances in computer vision have made it possible to reconstruct visual images from EEG signals, which is of great significance in fields such as medical image reconstruction and brain-computer interfaces. However, owing to the complexity and temporal characteristics of EEG signals, existing models face many challenges in feature extraction and image generation. To address this, this paper proposes an EEG signal-driven visual image reconstruction model based on a double residual LSTM and a Deep Convolutional Generative Adversarial Network (DCGAN). The model introduces ARTLNet, a long short-term memory (LSTM) network built on an attention-based residual network and trained with a Triplet loss function, to improve the quality of EEG feature extraction. ARTLNet integrates a residual network, an LSTM, and an attention mechanism: residual connections ease the training of deep networks, the LSTM captures time-series features, and the attention mechanism sharpens the focus on key features; batch normalization and global average pooling are further combined to ensure stable signal propagation. In the image generation stage, the model introduces a self-designed DCGAN with a feature fusion strategy, which effectively improves the quality and diversity of the generated images. Experimental results show that the improved ARTLNet achieves higher accuracy on both the Characters and Objects datasets under different classification and clustering algorithms, and that the proposed model also performs better in terms of image generation quality, particularly in image clarity and category differentiation.
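As a rough illustration of the feature extractor summarized above, the sketch below combines an LSTM, a residual shortcut, an attention mechanism, batch normalization, and global average pooling into a single block and trains its output embedding with a Triplet loss. This is a minimal PyTorch rendering of the abstract's description, not the authors' implementation: the class name `ARTLBlock`, all layer sizes, the single-head attention, and the example sequence length are illustrative assumptions.

```python
# Minimal sketch of an ARTLNet-style block (illustrative, not the paper's code).
import torch
import torch.nn as nn


class ARTLBlock(nn.Module):
    """Residual connection + LSTM + attention, followed by
    batch normalization and global average pooling."""

    def __init__(self, in_channels: int = 64, hidden: int = 128):
        super().__init__()
        # LSTM captures the time-series structure of the EEG sequence.
        self.lstm = nn.LSTM(in_channels, hidden, batch_first=True)
        # 1x1 projection so the residual shortcut matches the LSTM width.
        self.proj = nn.Linear(in_channels, hidden)
        # Self-attention re-weights time steps by importance.
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) EEG features.
        h, _ = self.lstm(x)
        h = h + self.proj(x)            # residual connection
        a, _ = self.attn(h, h, h)       # attention over time steps
        a = self.bn(a.transpose(1, 2))  # batch-normalize per feature
        return a.mean(dim=-1)           # global average pooling -> embedding


# Triplet loss on the pooled embeddings, as the abstract describes.
block = ARTLBlock()
triplet = nn.TripletMarginLoss(margin=1.0)
anchor, positive, negative = (block(torch.randn(8, 250, 64)) for _ in range(3))
loss = triplet(anchor, positive, negative)
```

In this reading, the Triplet loss pulls embeddings of EEG segments from the same stimulus class together and pushes different classes apart, which is one plausible way the abstract's claim of improved feature quality and category differentiation could be realized.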