EEG Signal-Driven Visual Image Reconstruction Model Based on Double Residual LSTM and DCGAN
Author: NI Zhewen, QUAN Haiyan

Affiliation: School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China

CLC Number: TP391.41; TP181

Fund Project: National Natural Science Foundation of China (No. 61861023)

    Abstract:

    Reconstructing visual images from electroencephalogram (EEG) signals has become an emerging frontier in brain-computer interface (BCI) research, offering substantial potential in medical image reconstruction, neural decoding, and cognitive state analysis. However, the inherently noisy, low-amplitude, and strongly time-varying nature of EEG signals poses considerable challenges to robust feature extraction and high-fidelity image synthesis. To address these limitations, this study aims to establish an effective EEG-driven visual reconstruction framework capable of capturing fine-grained temporal dynamics while ensuring semantic consistency in the generated images. The proposed model integrates a double residual long short-term memory (LSTM) architecture with a self-designed deep convolutional generative adversarial network (DCGAN). Specifically, an LSTM network based on attention residual network and Triplet loss (ARTLNet) is constructed to improve EEG feature extraction by combining residual learning, temporal modeling, and self-attention mechanisms. Batch normalization and global average pooling are further employed to enhance signal stability and suppress feature redundancy. In the reconstruction stage, a customized DCGAN incorporating feature fusion is adopted to enrich semantic representation and improve image clarity and diversity. Experimental evaluations on both the Characters and Objects EEG datasets demonstrate that ARTLNet achieves consistently higher classification and clustering accuracy across multiple algorithms compared with baseline LSTM and non-residual architectures. The generated images exhibit clearer structural details and more distinguishable category attributes, verifying the effectiveness of the proposed generative strategy. These results indicate that combining residual-enhanced temporal modeling with feature-fusion-based adversarial generation can significantly improve EEG-driven visual reconstruction performance. This study confirms the viability of exploiting advanced deep learning mechanisms to decode and visualize EEG information with improved interpretability, providing methodological support for future BCI-based image reconstruction and neural representation studies.
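The residual-enhanced temporal modeling summarized above can be sketched in outline. The snippet below is a minimal, self-contained illustration in plain NumPy, not the authors' implementation: an LSTM layer whose hidden sequence receives a projected skip connection from its input (the residual idea), followed by a simple self-attention weighting and pooling step standing in for the paper's attention and global average pooling stages. All function names, weight shapes, and dimensions are illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; W, U, b stack the four gate weight blocks."""
    Hdim = h.shape[0]
    z = W @ x + U @ h + b                    # (4*Hdim,) pre-activations
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sig(z[:Hdim])                        # input gate
    f = sig(z[Hdim:2 * Hdim])                # forget gate
    o = sig(z[2 * Hdim:3 * Hdim])            # output gate
    g = np.tanh(z[3 * Hdim:])                # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def residual_lstm_layer(X, W, U, b, P):
    """Run an LSTM over a sequence X of shape (T, D) and add a
    projected input skip connection to each hidden state."""
    T, _ = X.shape
    Hdim = b.shape[0] // 4
    h = np.zeros(Hdim)
    c = np.zeros(Hdim)
    out = np.empty((T, Hdim))
    for t in range(T):
        h, c = lstm_step(X[t], h, c, W, U, b)
        out[t] = h + P @ X[t]                # residual shortcut
    return out

def attend_and_pool(H):
    """Score each time step against the mean hidden state (a simple
    self-attention stand-in), softmax over time, then pool to one vector."""
    scores = H @ H.mean(axis=0)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ H                             # (Hdim,) pooled feature
```

In this sketch, one pooled vector per EEG trial would then feed a classifier or, as in the paper's pipeline, condition the DCGAN generator; the paper's actual fusion and normalization layers are not reproduced here.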

Citation:

NI Zhewen, QUAN Haiyan. EEG Signal-Driven Visual Image Reconstruction Model Based on Double Residual LSTM and DCGAN[J]. Journal of Data Acquisition and Processing, 2026, (1): 244-258.
History
  • Received: December 19, 2024
  • Revised: May 08, 2025
  • Online: March 01, 2026