Abstract: Brain-computer interfaces (BCIs) based on the steady-state visual evoked potential (SSVEP) are rapidly advancing human-computer interaction systems. However, the classification of SSVEP signals in short time windows still faces challenges such as low accuracy and insufficient feature extraction. In this paper, we propose an Attention Enhancement Dual-Channel Multi-Feature Convolutional Neural Network (AE-dCNN). The network first applies a channel attention mechanism to weight the features of different channels, enhancing the representation of useful information. Two parallel branches then extract time-domain and frequency-domain features from the signals, respectively, and the extracted features are fused for classification. Cross-subject and subject-independent experiments were conducted on both a public dataset and a self-built dataset. The results demonstrate that the proposed AE-dCNN model achieves a peak accuracy of 94.38% in the cross-subject experiments and 92.36% in the subject-independent experiments. Additionally, we explored the application of the Kolmogorov–Arnold Network (KAN) structure to EEG signal processing. The results indicate that the KAN model outperforms the Multilayer Perceptron (MLP) model in accuracy across most time windows.
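The pipeline summarized above (channel attention weighting, followed by parallel time- and frequency-domain branches whose outputs are fused) can be illustrated with a minimal NumPy sketch. This is a simplified stand-in, not the authors' implementation: the attention uses a per-channel standard-deviation statistic with a softmax, and the two branches are reduced to a raw time series and its FFT magnitude spectrum in place of convolutional layers; all function names are hypothetical.

```python
import numpy as np

def channel_attention(x):
    """Weight EEG channels by a learned-style attention score.

    x: array of shape (channels, samples).
    Here the score is a softmax over per-channel standard deviation,
    standing in for the trained attention module described in the paper.
    """
    s = x.std(axis=1)                       # per-channel summary statistic
    w = np.exp(s - s.max())                 # numerically stable softmax
    w /= w.sum()
    return x * w[:, None]                   # re-weighted channels

def dual_branch_features(x):
    """Extract and fuse time- and frequency-domain features.

    Each branch is a placeholder for the convolutional branch it represents.
    """
    xw = channel_attention(x)
    time_feat = xw.mean(axis=0)                       # time-domain branch
    freq_feat = np.abs(np.fft.rfft(time_feat))        # frequency-domain branch
    return np.concatenate([time_feat, freq_feat])     # feature fusion

# Example: 8 channels, a 1 s window at 250 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 250))
features = dual_branch_features(eeg)
```

With a 250-sample window, the fused vector has 250 time-domain values plus 126 FFT magnitudes (376 features), which would then feed the classification head.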