Abstract: Modulation recognition technology has been widely used in cognitive radio and electronic reconnaissance countermeasures. In recent years, thanks to the powerful feature extraction ability of deep neural networks, research on automatic modulation recognition based on deep learning has made great progress. In practical modulation recognition scenarios, modulated signals usually carry bit sequences without semantic information, and each modulation symbol appears in the waveform with uniform probability, so the feature information is uniformly distributed throughout the signal. However, existing deep-learning-based automatic modulation recognition methods usually adopt Convolutional Neural Network (CNN) or Recurrent Neural Network (RNN) structures, which are poorly suited to the data distribution in such scenarios and thus fail to make full use of the global characteristics of long sequential data. The accuracy of modulation recognition can therefore be further improved by exploiting this sequential information. In this paper, an automatic modulation recognition method based on an improved Transformer, AMR-former, is proposed. First, the input signal is preprocessed to strengthen its temporal characteristics. Then, an AMR-Encoder structure for feature extraction is designed and implemented by combining the multi-head attention mechanism with a Long Short-Term Memory (LSTM) network, which effectively improves global temporal feature extraction and provides richer representations for subsequent recognition and classification. Experiments on the RadioML 2016.10a dataset show that the average recognition accuracy of AMR-former reaches 91.90% for signal-to-noise ratios (SNR) from 0 dB to 18 dB, outperforming the typical GRU, PET-CGDNN, LSTM and MCLDNN networks by 6.38%, 2.15%, 1.99% and 1.75%, respectively.
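The abstract names multi-head attention as one half of the AMR-Encoder but does not specify it; as a rough illustration of that mechanism only, the following NumPy sketch computes multi-head self-attention over a sequence of signal features. All names, dimensions, and the random stand-in weights are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """Scaled dot-product multi-head self-attention.

    x: (seq_len, d_model) real-valued feature sequence, e.g. embedded I/Q samples.
    Random projections stand in for learned Q/K/V/output weight matrices.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                          for _ in range(4))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Split each projection into heads: (num_heads, seq_len, d_head)
    split = lambda m: m.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    # Attention scores per head: (num_heads, seq_len, seq_len)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    # Row-wise softmax (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v                                        # (num_heads, seq_len, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)   # merge heads
    return out @ w_o

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 64))   # e.g. 128 time steps, 64-dim embedding
y = multi_head_self_attention(x, num_heads=4, rng=rng)
print(y.shape)  # (128, 64)
```

Because every attention score relates a pair of time steps regardless of their distance, each output position aggregates information from the whole sequence, which matches the abstract's motivation of capturing globally distributed features in long signal sequences; in the paper's design this is combined with an LSTM, which is not shown here.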