EEG signal-driven visual image reconstruction model based on double residual LSTM and DCGAN
DOI:
Author: NI Zhewen, QUAN Haiyan
Affiliation:

School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China

Fund Project:

Abstract:

In recent years, advances in computer vision have made it possible to reconstruct images from EEG signals, which is of great significance in fields such as medical image reconstruction and brain-computer interfaces. However, owing to the complexity and temporal nature of EEG signals, existing models face many challenges in feature extraction and image generation. To address this, this paper proposes an EEG signal-driven visual image reconstruction model based on a double residual LSTM and a DCGAN. The model introduces an attention residual network and triplet loss long short-term memory network (ARTLNet) to improve the quality of EEG feature extraction. ARTLNet integrates a residual network, a long short-term memory (LSTM) network, and an attention mechanism: residual connections ease the training of deep networks, the LSTM captures temporal features, and the attention mechanism strengthens the focus on key features; batch normalization and global average pooling are further combined to ensure stable signal propagation. In the image generation stage, the model introduces a self-designed deep convolutional generative adversarial network (DCGAN) together with a feature fusion strategy, which effectively improves the quality and diversity of the generated images. Experimental results show that the improved ARTLNet achieves higher accuracy on both the Characters and Objects datasets across different classification and clustering algorithms, and that the proposed model also performs better in image generation quality, with notable advantages in image clarity and category separability.
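This page carries no code, but the feature-extraction stage described above (an LSTM wrapped in a residual connection, an attention layer over time steps, batch normalization, global average pooling, and a triplet loss on the resulting embeddings) can be illustrated with a minimal PyTorch sketch. The module name ARTLNetSketch, all layer sizes, and the exact wiring below are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch of an ARTLNet-style EEG feature extractor.
# Layer sizes and wiring are assumed; the paper's code is not shown on this page.
import torch
import torch.nn as nn

class ARTLNetSketch(nn.Module):
    def __init__(self, n_channels=128, hidden=256, feat_dim=128):
        super().__init__()
        # LSTM over the EEG time series, input shape (batch, time, channels)
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.proj = nn.Linear(n_channels, hidden)   # residual path, matches hidden size
        self.attn = nn.Linear(hidden, 1)            # simple additive attention over time
        self.bn = nn.BatchNorm1d(hidden)            # batch normalization of the features
        self.head = nn.Linear(hidden, feat_dim)     # embedding used by the triplet loss

    def forward(self, x):                           # x: (batch, time, channels)
        h, _ = self.lstm(x)                         # (batch, time, hidden)
        h = h + self.proj(x)                        # residual connection around the LSTM
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over time steps
        h = h * w                                   # emphasize key time steps
        h = self.bn(h.transpose(1, 2))              # BatchNorm1d expects (batch, hidden, time)
        h = h.mean(dim=2)                           # global average pooling over time
        return self.head(h)                         # EEG feature embedding

# The triplet loss pulls embeddings of the same stimulus class together and
# pushes different classes apart:
#   L(a, p, n) = max(||f(a) - f(p)|| - ||f(a) - f(n)|| + margin, 0)
triplet = nn.TripletMarginLoss(margin=1.0)
# loss = triplet(model(anchor), model(positive), model(negative))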

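The image generation stage pairs a self-designed DCGAN with a feature fusion strategy. As a rough sketch only, assuming the fusion simply concatenates a noise vector with the EEG embedding before a standard DCGAN transposed-convolution stack (the paper's actual fusion design is not described on this page), the generator could look like the following.

# Hypothetical DCGAN-style generator conditioned on an EEG embedding.
# Concatenating noise and EEG features is an assumed form of "feature fusion".
import torch
import torch.nn as nn

class EEGConditionedGenerator(nn.Module):
    def __init__(self, noise_dim=100, feat_dim=128, img_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # fused (noise + EEG feature) vector -> 4x4 feature map
            nn.ConvTranspose2d(noise_dim + feat_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),   # 8x8
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),   # 16x16
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),    # 32x32
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1, bias=False),  # 64x64
            nn.Tanh(),
        )

    def forward(self, z, eeg_feat):
        # Feature fusion: concatenate noise with the EEG embedding,
        # then reshape to 1x1 spatial maps for the transposed convolutions.
        x = torch.cat([z, eeg_feat], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)

# g = EEGConditionedGenerator()
# imgs = g(torch.randn(8, 100), torch.randn(8, 128))   # 8 images of shape (3, 64, 64)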
Cite this article

NI Zhewen, QUAN Haiyan. EEG signal-driven visual image reconstruction model based on double residual LSTM and DCGAN[J]. 数据采集与处理 (Journal of Data Acquisition and Processing),,():

History
  • Received:
  • Last revised:
  • Accepted:
  • Online publication date: 2025-09-15