Automatic Modulation Recognition Method Based on Improved Transformer
DOI:
Author:
Affiliation:

Army Engineering University of PLA

Author biography:

Corresponding author:

Fund projects:

National Natural Science Foundation of China (62071484, 62371469); Jiangsu Provincial Natural Science Foundation for Excellent Young Scholars (BK20180080)


Automatic modulation recognition method based on improved Transformer
Author:
Affiliation:

Army Engineering University of PLA

Fund Project:

Abstract:

Modulation recognition has been widely applied in cognitive radio and electronic reconnaissance and countermeasures. In recent years, thanks to the powerful feature-extraction capability of deep neural networks, research on deep-learning-based automatic modulation recognition has made great progress. In practical modulation recognition scenarios, modulated signals usually carry bit sequences without semantic structure, and each modulation symbol appears in the waveform with uniform probability, so the feature information is distributed evenly throughout the signal data. However, existing deep-learning-based automatic modulation recognition methods typically adopt convolutional or recurrent neural network structures, which are poorly suited to this data distribution and fail to fully exploit the global characteristics of long sequences, leaving room to improve recognition accuracy. This paper proposes AMR-former, an automatic modulation recognition method based on an improved Transformer. The method first preprocesses the signal data to strengthen its temporal characteristics. It then combines a multi-head attention mechanism with a long short-term memory (LSTM) network to build the AMR-Encoder structure for feature extraction, which effectively improves the extraction of global temporal features and provides richer data representations for subsequent recognition and classification. Experiments on the open-source dataset RadioML 2016.10a show that AMR-former achieves an average recognition accuracy of 91.90% at signal-to-noise ratios from 0 dB to 18 dB, an improvement of 6.38%, 2.15%, 1.99% and 1.75% over the typical GRU, PET-CGDNN, LSTM and MCLDNN networks, respectively.

    Abstract:

    Modulation recognition technology has been widely used in cognitive radio and electronic reconnaissance countermeasures. In recent years, thanks to the powerful feature-extraction ability of deep neural networks, research on automatic modulation recognition based on deep learning has made great progress. In practical modulation recognition scenarios, modulated signals usually transmit bit sequences without semantic information, and each modulation symbol appears in the waveform with uniform probability, so the feature information is uniformly distributed across the signal data. However, existing deep-learning-based automatic modulation recognition methods usually adopt Convolutional Neural Network (CNN) or Recurrent Neural Network (RNN) structures. These structures are ill-suited to the data distribution in the scenarios above and thus fail to make full use of the global characteristics of long sequential data, so recognition accuracy can be further improved by exploiting the sequential information. In this paper, an automatic modulation recognition method based on an improved Transformer, AMR-former, is proposed. First, the input signal is preprocessed to strengthen its temporal characteristics. Then, the AMR-Encoder structure for feature extraction is designed and implemented by combining the multi-head attention mechanism with a Long Short-Term Memory (LSTM) network, which effectively improves the extraction of global temporal features and provides richer representations for subsequent recognition and classification. Experiments on the RadioML 2016.10a dataset show that the average recognition accuracy of the AMR-former method reaches 91.90% at signal-to-noise ratios (SNR) from 0 dB to 18 dB. The proposed method improves on the typical GRU, PET-CGDNN, LSTM and MCLDNN networks by 6.38%, 2.15%, 1.99% and 1.75%, respectively.
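    The AMR-Encoder itself is not detailed on this page; as a rough illustration of the idea the abstract describes (multi-head self-attention over an embedded I/Q sequence, followed by an LSTM pass that distills a global feature vector), a minimal NumPy sketch might look like the following. All names, shapes, weight initializations, and the single-layer layout are assumptions for illustration only, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, W_q, W_k, W_v, W_o):
    """Scaled dot-product self-attention over x of shape (seq_len, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Project and split into heads: (num_heads, seq_len, d_head)
    q = (x @ W_q).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ W_k).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ W_v).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, L, L)
    ctx = softmax(scores) @ v                            # (heads, L, d_head)
    ctx = ctx.transpose(1, 0, 2).reshape(seq_len, d_model)
    return ctx @ W_o

def lstm_forward(x, W, U, b, d_hidden):
    """Minimal LSTM over x of shape (seq_len, d_in); returns last hidden state."""
    h = np.zeros(d_hidden)
    c = np.zeros(d_hidden)
    for t in range(x.shape[0]):
        z = W @ x[t] + U @ h + b  # gates stacked as i, f, o, g
        i, f, o = 1.0 / (1.0 + np.exp(-z[:3 * d_hidden].reshape(3, d_hidden)))
        g = np.tanh(z[3 * d_hidden:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Toy input: 128 I/Q samples as a (128, 2) real-valued sequence,
# linearly embedded to d_model = 8 (RadioML 2016.10a frames are 128 samples).
seq_len, d_model, d_hidden, heads = 128, 8, 16, 2
iq = rng.standard_normal((seq_len, 2))
embed = rng.standard_normal((2, d_model)) * 0.1
x = iq @ embed

Ws = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4)]
attended = x + multi_head_attention(x, heads, *Ws)  # residual connection

W = rng.standard_normal((4 * d_hidden, d_model)) * 0.1
U = rng.standard_normal((4 * d_hidden, d_hidden)) * 0.1
b = np.zeros(4 * d_hidden)
features = lstm_forward(attended, W, U, b, d_hidden)
print(features.shape)  # (16,): one global feature vector per signal frame
```

    In this sketch the attention stage lets every sample attend to every other sample, capturing the globally distributed symbol information, while the LSTM summarizes the attended sequence into a fixed-length feature vector that a classifier head could consume.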

History
  • Received: 2024-01-16
  • Revised: 2024-04-26
  • Accepted: 2024-04-26
  • Published online: