Convolutional Transformer EEG Emotion Recognition Model Based on Multi-domain Information Fusion

Author: Zhang Xuejun, Wang Tianchen, Wang Zetian

Affiliation:

1. College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Nanjing 210023, China; 2. National-Local Joint Engineering Laboratory of RF Integration and Micro-Assembly Technology, Nanjing University of Posts and Telecommunications, Nanjing 210023, China

Author Biography:

Corresponding Author:

Fund Project:

National Natural Science Foundation of China (61977039).




    Abstract:

Current emotion recognition methods for electroencephalogram (EEG) signals seldom fuse spatial, temporal, and frequency information, and most of them extract only local EEG features, which limits their ability to model global information correlations. This paper proposes an EEG emotion recognition method based on a 3D-CNN-Transformer mechanism (3D-CTM) model with multi-domain information fusion. The method first designs a three-dimensional feature structure tailored to the characteristics of EEG signals, fusing their spatial, temporal, and frequency information simultaneously. A convolutional neural network module then learns deep features from the fused multi-domain information, after which a Transformer self-attention module extracts the global correlations within the feature information. Finally, global average pooling integrates the features for classification. Experimental results show that the 3D-CTM model achieves an average accuracy of 96.36% for three-class classification on the SEED dataset and 87.44% for four-class classification on the SEED-IV dataset, effectively improving emotion recognition accuracy.
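The abstract outlines a pipeline of 3D spatial-temporal-frequency features, a CNN module, Transformer self-attention, and global average pooling before classification. The following is a minimal PyTorch sketch of such a pipeline, assuming a 9x9 electrode grid, 5 frequency bands, and 6 time segments per sample; the class name CTM3D, all layer sizes, and all hyperparameters are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a 3D-CTM-style model. Assumptions: EEG channels mapped
# onto a 9x9 electrode grid, 5 frequency bands, 6 time segments per sample;
# every layer size below is illustrative, not the paper's configuration.
import torch
import torch.nn as nn

class CTM3D(nn.Module):
    def __init__(self, bands=5, grid=9, d_model=64, n_classes=3):
        super().__init__()
        # CNN module: each time segment is a (bands, grid, grid) "image",
        # so the convolutions mix spatial and frequency information locally.
        self.cnn = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # one d_model vector per segment
        )
        # Transformer encoder: self-attention over the segment sequence
        # captures the global correlations that local convolutions miss.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x):
        # x: (batch, segments, bands, grid, grid) -- the 3D feature structure
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))   # (b*t, d_model, 1, 1)
        feats = feats.view(b, t, -1)        # (b, t, d_model)
        feats = self.transformer(feats)     # global self-attention over time
        pooled = feats.mean(dim=1)          # global average pooling
        return self.classifier(pooled)

model = CTM3D(n_classes=3)                  # SEED: three emotion classes
logits = model(torch.randn(8, 6, 5, 9, 9))  # dummy batch of 8 samples
print(logits.shape)                         # torch.Size([8, 3])

A four-class variant for SEED-IV would only change n_classes=4; the feature construction and attention mechanism stay the same.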

Cite this article:

Zhang Xuejun, Wang Tianchen, Wang Zetian. Convolutional Transformer EEG Emotion Recognition Model Based on Multi-domain Information Fusion[J]. Journal of Data Acquisition and Processing, 2024, 39(6): 1543-1552.
History
  • Received: 2024-02-06
  • Revised: 2024-04-07
  • Accepted:
  • Published online: 2024-12-12