QIAN Bo , LI Fujiang , ZHENG Changle , ZHANG Daoqiang
2025, 40(3):562-584. DOI: 10.16337/j.1004-9037.2025.03.002
Abstract:Medical foundation models represent a significant application of large-scale pre-trained model technology in the healthcare domain and have become a key research focus in intelligent medical assistance. By leveraging pretraining on vast amounts of medical data, these models exhibit critical capabilities such as cross-task transfer, multimodal understanding, and complex reasoning, overcoming several limitations of traditional neural networks in medical applications. With these capabilities, medical foundation models are reshaping the implementation of core tasks such as assisted diagnosis, clinical report generation, and medical image analysis. They hold profound implications for achieving general intelligence in healthcare. Based on this, this paper provides a comprehensive review of the current state and future trends of medical foundation models. First, it reviews the development of medical AI models in the context of rapid advancements in artificial intelligence. Then, it highlights research progress of large models in medical subfields such as pathology, ophthalmology, and neurological disorders. Finally, it discusses the challenges currently faced by medical foundation models and explores their future development directions.
QU Chongxiao , TANG Yubo , WU Gaojie , FAN Changjun , ZHANG Yongjin , LIU Shuo
2025, 40(3):585-602. DOI: 10.16337/j.1004-9037.2025.03.003
Abstract:With the rapid development of generative AI technologies, especially breakthroughs in the field of large language models (LLMs), both academia and industry are actively seeking deeper integration between these large-scale AI models and communication networks. This paper aims to explore this emerging field in depth by reviewing the latest research advancements. It provides a comprehensive analysis of how LLMs can enhance the intelligence of communication networks and how communication networks can improve the performance of LLMs. First, the paper introduces the mainstream Transformer-based architectures of LLMs, elaborating on their training processes and the mechanism of intelligent emergence. It then analyzes the intelligent applications of LLMs in network design, diagnostics, configuration, security, network language understanding, and specification analysis, and discusses the corresponding technical implementation methods. Furthermore, the paper explores the crucial role of communication networks in supporting the training, inference, and deployment of LLMs, with a focus on distributed LLM construction technologies based on cloud-edge collaboration and multi-agent LLM network construction solutions. Finally, the paper identifies several key research challenges that remain to be addressed and provides insights into future research directions.
ZOU Guanyun , WANG Cunjun , KONG Yinhao , MA Xiaoqing , LI Piji
2025, 40(3):603-615. DOI: 10.16337/j.1004-9037.2025.03.004
Abstract:With the rapid advancement of artificial intelligence technology, large language models (LLMs) are increasingly being applied across various domains. However, the lack of high-quality, manually curated question-answering datasets in the aero-engine field has hindered the practical application of expert-level question-answering models. To address this issue, this paper proposes an automated method for constructing question-answering datasets based on LLMs, which generates high-quality open-domain question-answering data without human intervention. During the data generation phase, the method employs in-context learning and input-priority generation strategies to enhance the stability of the generated data. In the data filtering phase, a dual evaluation mechanism is established, combining faithfulness assessment based on source-text similarity and semantic quality evaluation using large language models, to automatically filter out hallucinated or anomalous data and ensure factual reliability. Experimental results demonstrate that the proposed method significantly improves the quality of the generated dataset. Models fine-tuned on this dataset exhibit notable performance improvements in aero-engine domain knowledge question-answering tasks. The findings of this study not only provide a solid foundation for the application of large language models in the aero-engine domain but also offer valuable insights for automated dataset construction in other complex engineering fields.
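The faithfulness assessment described above amounts to a similarity threshold between each generated answer and its source text. A minimal sketch, assuming a generic `embed` sentence-embedding function and an illustrative threshold (the paper's actual similarity measure and cutoff are not specified here):

```python
import numpy as np

def faithfulness_filter(pairs, embed, threshold=0.8):
    """Keep only QA pairs whose answer is similar enough to its
    source text. `embed` is any sentence-embedding function;
    the threshold value is illustrative."""
    kept = []
    for question, answer, source in pairs:
        a, s = embed(answer), embed(source)
        # Cosine similarity between answer and source embeddings
        cos = float(a @ s / (np.linalg.norm(a) * np.linalg.norm(s)))
        if cos >= threshold:
            kept.append((question, answer, source))
    return kept
```

In the full pipeline this check would be combined with the LLM-based semantic quality evaluation before a pair is admitted to the dataset.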
MA Lei , CUI Wenhao , YANG Wenwen , WANG Zhaoxin
2025, 40(3):616-636. DOI: 10.16337/j.1004-9037.2025.03.005
Abstract:Language is an important tool for communication and cognition. Multiple functional areas of the brain, connected through complex neural networks, jointly participate in the perception, comprehension, and production of language. Exploring the neural mechanisms of Chinese semantic decoding is crucial for the development of Chinese brain-computer interface (BCI). This study aims to establish a long-sequence continuous semantic decoding method based on fMRI data, termed Chinese long-sequence continuous semantic decoder (CLCSD). Through signal processing workflows and algorithm optimization, it seeks to achieve efficient decoding of continuous Chinese semantics. The CLCSD framework is composed of four components: neural response dimensionality reduction, an encoding model, a word rate model, and a beam search decoding model. Neural response dimensionality reduction is performed through cortical reconstruction, image registration, and brain region parcellation to reduce four-dimensional brain response data to a two-dimensional matrix. The encoding model is constructed using L2-regularized regression (ridge regression) to establish the relationship between stimulus features and brain responses, with noise covariance estimated via bootstrapping to enhance generalization. The word rate model follows a similar approach to the encoding model, mapping brain response features to predicted word rates. The beam search decoding model uses the prior probability of the language model and the likelihood probabilities of the encoding model to generate the most probable semantic sequence through beam search. On the publicly available SMN4Lang dataset, CLCSD achieves a mean BERTScore of 0.674, outperforming other long-sequence Chinese continuous semantic decoding models.
The proposed method provides an efficient long-sequence continuous Chinese semantic decoding approach, offering both theoretical foundations and methodological references for the advancement of Chinese BCI technologies.
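The ridge-regression encoding model at the core of such decoders can be sketched as follows. This is a minimal illustration that selects the regularization strength on a held-out split; the bootstrapped noise covariance estimation used in CLCSD is omitted:

```python
import numpy as np

def fit_ridge_encoding(X, Y, alphas):
    """Fit a ridge-regression encoding model mapping stimulus
    features X (n_samples, n_features) to brain responses
    Y (n_samples, n_voxels), choosing alpha on a validation split."""
    n = X.shape[0]
    tr = slice(0, int(0.8 * n))          # training portion
    va = slice(int(0.8 * n), n)          # validation portion
    best_alpha, best_err, best_W = None, np.inf, None
    for alpha in alphas:
        # Closed-form ridge solution: W = (X^T X + alpha I)^-1 X^T Y
        A = X[tr].T @ X[tr] + alpha * np.eye(X.shape[1])
        W = np.linalg.solve(A, X[tr].T @ Y[tr])
        err = np.mean((X[va] @ W - Y[va]) ** 2)
        if err < best_err:
            best_alpha, best_err, best_W = alpha, err, W
    return best_W, best_alpha
```

At decoding time, the fitted weights `W` give the likelihood of observed responses under candidate word sequences, which the beam search then combines with the language-model prior.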
ZHU Xinli , GAO Zhiqiang , JI Weitong , LI Shaohua , LI Songjie
2025, 40(3):637-646. DOI: 10.16337/j.1004-9037.2025.03.006
Abstract:In customized scenarios, it is urgent to enhance the understanding and generation capabilities of large language models (LLMs) in specific vertical domains. We propose a paradigm for developing a vertical-domain LLM system named “Wuxin”, which covers a series of development methods for LLM systems, including architecture, data, model, and training. Wuxin utilizes human-in-the-loop data augmentation to improve the quality of military training injury question-answering datasets, and employs the GaLore strategy to perform efficient full-parameter fine-tuning on small LLMs. Experimental results show that the adopted full-parameter fine-tuning method outperforms LoRA fine-tuning in terms of convergence and accuracy. Furthermore, Wuxin demonstrates significant advantages in understanding professional military training injury knowledge, as well as in overcoming hallucinations. Our achievements can provide references for the design and application of question-answering LLM systems in vertical domains.
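The GaLore strategy mentioned above makes full-parameter fine-tuning memory-efficient by projecting weight gradients onto a low-rank subspace. A schematic numpy illustration (the real GaLore also refreshes the projection basis periodically and keeps optimizer state in the compressed space; those details are omitted here):

```python
import numpy as np

def galore_project(grad, rank):
    """Low-rank gradient projection in the spirit of GaLore (sketch):
    keep only the top-`rank` left singular directions of the weight
    gradient, so optimizer statistics live in a much smaller space."""
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]        # projection basis (top singular vectors)
    low = P.T @ grad       # compressed gradient for the optimizer
    return P @ low         # project back to apply the weight update
```

For a gradient that is already (close to) low rank, the projection loses almost nothing, which is the empirical observation GaLore exploits.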
2025, 40(3):647-658. DOI: 10.16337/j.1004-9037.2025.03.007
Abstract:Existing studies have constructed knowledge graph (KG) intelligent question-answering systems based on large language models (LLMs) in the field of special equipment. However, limited by the incomplete entity relationships of the KG, LLMs are still prone to hallucination in knowledge-intensive tasks. To suppress the generation of hallucinations, a fused KG reasoning technique is proposed to enhance knowledge representation by completing entity relationship links. Furthermore, in view of the deficiencies of existing KG reasoning methods in semantic association and topological structure parsing, a dynamic reasoning mechanism based on LLMs is further introduced. By leveraging their deep semantic understanding ability, high-order logic rules are automatically generated to achieve precise expansion of the KG, thereby constructing a bidirectional collaborative optimization mechanism between the LLM and the KG. The results show that this method significantly outperforms the baseline models in terms of mean reciprocal rank (MRR), first hit rate (Hits@1), and top-ten hit rate (Hits@10) on the Family, Kinship, and UMLS datasets.
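The reported ranking metrics are computed directly from the rank of each correct entity among the scored candidates (rank 1 is best):

```python
def ranking_metrics(ranks):
    """MRR and Hits@k over a list of ranks of the correct entities."""
    n = len(ranks)
    mrr = sum(1.0 / r for r in ranks) / n
    hits = lambda k: sum(r <= k for r in ranks) / n
    return {"MRR": mrr, "Hits@1": hits(1), "Hits@10": hits(10)}
```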
JIN Yan , ZHU Youwen , WU Qihui
2025, 40(3):659-674. DOI: 10.16337/j.1004-9037.2025.03.008
Abstract:Condensed local differential privacy is a metric-based relaxation of local differential privacy with better utility and flexibility than local differential privacy. However, existing solutions are deficient in terms of sequence pattern capture and utility. To address these limitations, this paper proposes SCM-CLDP, a novel sequential data collection method based on condensed local differential privacy. SCM-CLDP fully takes into account important information such as the length and transitions of sequential data during the collection process, through which the data collector is able to synthesize a privacy-preserving dataset close to the original dataset. Specifically, according to different perturbation objects, we propose two collection methods, SCM-VP based on value perturbation and SCM-TP based on transition perturbation, respectively. We theoretically prove that SCM-VP and SCM-TP satisfy sequence-level condensed local differential privacy, and comparative experiments are conducted with existing solutions based on two real datasets in terms of Markov chain model accuracy, synthetic dataset utility, and frequent sequence pattern mining accuracy. The results show that SCM-CLDP performs significantly better than the existing solutions, with SCM-VP outperforming SCM-TP in most cases. In the optimal situation, SCM-CLDP reduces the error of the Markov chain model and the distribution of the synthetic dataset by at least one order of magnitude compared to the existing method. Meanwhile, SCM-CLDP improves the accuracy of item frequency ranking of the synthetic dataset and the accuracy of frequent sequence pattern mining by nearly 30% compared to existing solutions.
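Condensed (metric-based) LDP lets the perturbation probability decay with a distance over the domain, so nearby values are more likely outputs than distant ones. A minimal single-value perturbation sketch in the exponential-mechanism style; SCM-VP's actual sequence-level mechanism is more involved, and this is illustrative only:

```python
import math, random

def cldp_perturb(value, domain, dist, eps):
    """Output item y with probability proportional to
    exp(-eps * d(value, y) / 2): a metric-based relaxation of
    standard randomized response."""
    weights = [math.exp(-eps * dist(value, y) / 2.0) for y in domain]
    total = sum(weights)
    r = random.random() * total
    acc = 0.0
    for y, w in zip(domain, weights):
        acc += w
        if r <= acc:
            return y
    return domain[-1]   # numerical fallback
```

With a large privacy budget the output concentrates on the true value; with a small budget it spreads over nearby items, which is what preserves utility for sequence statistics.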
XIAN Yongli , CHEN Xuejian , PENG Zhenming , WANG Jie , PENG Bo
2025, 40(3):675-685. DOI: 10.16337/j.1004-9037.2025.03.009
Abstract:Geotechnical borehole monitoring, as one of the most common tunneling advanced detection techniques, can truly reflect the material properties, characteristics, and groundwater conditions of geomaterials, which is vital to ensure construction safety. Based on the characteristics of the geotechnical borehole monitoring objectives, a smart visual system based on panoramic cameras is developed. The system is suitable for close-range and dynamic high-resolution imaging of the inner walls of long geotechnical boreholes. Based on the improved EfficientNetV2 network and the sliding window prediction, the rapid intelligent recognition of eight types of rock borehole images is realized. Experimental results show that the visual system can meet the requirements for close-range high-resolution panoramic imaging of long boreholes and achieve intelligent state assessment of rock materials. The recognition success rate reaches 91.49% on the test set, and the system preliminarily demonstrates a comprehensive capability for intelligent evaluation of geotechnical borehole status.
ZHANG Wenqing , WANG Jing , HUANG Xueqin , TIAN Sirui , HE Cheng , ZHANG Jingdong , LI Hongtao
2025, 40(3):686-698. DOI: 10.16337/j.1004-9037.2025.03.010
Abstract:Contrastive learning, as a self-supervised approach, enables the extraction of target representations from unlabeled SAR images, serving as a critical technique for automatic target recognition (ATR) in SAR. However, existing models often encode targets and backgrounds holistically, resulting in feature representations contaminated by background interference, which diminishes the model’s ability to focus on targets. To address this issue, this paper proposes a novel multi-branch dual contrastive learning model. Firstly, the model retains the conventional instance contrastive branch while introducing an innovative background correction contrastive branch, establishing a multi-branch contrastive learning framework. Secondly, through a random recombination strategy of targets and backgrounds in positive and negative samples, combined with the ResNet50 backbone network and self-attention pooling to enhance semantic feature extraction, an optimized dual contrastive loss function is employed to refine target feature learning and mitigate spurious correlations between backgrounds and targets. Finally, Shapley value analysis based on the MSTAR dataset validates the model’s effectiveness, and target classification results demonstrate that this approach significantly enhances the causality of feature extraction, substantially improving the generalization performance of SAR ATR algorithms.
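The instance contrastive branch described above relies on a contrastive loss of the InfoNCE family. A minimal single-anchor sketch; the paper's dual contrastive loss and target/background recombination strategy are not reproduced here:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor: pull the
    positive embedding close, push the negatives away.
    Embeddings are L2-normalized; similarity is the dot product."""
    norm = lambda v: v / np.linalg.norm(v)
    a = norm(anchor)
    sims = [a @ norm(positive)] + [a @ norm(n) for n in negatives]
    logits = np.array(sims) / tau
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                # positive is index 0
```

Minimizing this loss over many anchors drives representations of the same instance together while separating different instances, which is the mechanism the background-correction branch then refines.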
NI Kang , SUN Likun , ZOU Minrui
2025, 40(3):699-710. DOI: 10.16337/j.1004-9037.2025.03.011
Abstract:Synthetic aperture radar (SAR) image targets typically exhibit subtle edge features, which can vary across different scales. Edge features provide crucial information about the shape and contour of target objects, improving the model’s localization capabilities. However, existing SAR object detection methods often underperform in learning edge features, limiting their ability to accurately perceive target edges. To address this, we propose a SAR target detection method based on edge feature guided learning (EFGL). This approach builds upon the fully convolutional one-stage (FCOS) object detection framework and leverages edge features to guide the learning process in feature pyramid networks (FPN). By integrating an edge operator module into FPN, the network’s capacity to learn multi-scale edge features is explicitly enhanced. Additionally, during multi-scale feature fusion, we introduce an edge feature-guided fusion module that incorporates a spatial attention mechanism to enable edge-guided fusion across adjacent feature levels. On the MSAR and SAR-Aircraft-1.0 datasets, the proposed method achieves detection accuracies of 68.68% and 67.44% under the AP’07 standard, showing improvements of 1.34% and 4.81% over the baseline network, respectively. Compared with other related algorithms, this method demonstrates superior target localization and overall performance in SAR target detection.
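An edge operator module of the kind described can be as simple as a Sobel gradient-magnitude map applied to a feature channel. A sketch (the paper's exact operator is not specified here, so Sobel stands in as one common choice):

```python
import numpy as np

def sobel_edges(img):
    """Sobel edge-magnitude map of a 2-D array, with edge padding."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    out = np.zeros((H, W), float)
    pad = np.pad(img, 1, mode="edge")
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 3, j:j + 3]
            gx = np.sum(win * kx)      # horizontal gradient
            gy = np.sum(win * ky)      # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```

In an FPN such a map would be computed per level and used to re-weight or concatenate with the learned features, highlighting contours before fusion.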
HE Yulin , CHEN Chunjia , HUANG Zhexue , LI Junjie , FOURNIER-VIGER Philippe
2025, 40(3):711-729. DOI: 10.16337/j.1004-9037.2025.03.012
Abstract:Different from the classical probability density estimator construction strategies based on the Parzen window method, we propose a heuristic kernel density estimator (HKDE) based on a nearest neighbor error measurement function, to improve the accuracy of fitting the probability density function of modal-proximity data. From the perspective of data and model uncertainties, we analyze the defects of traditional kernel density estimators in solving the problem of probability density estimation of modal-proximity data. Heuristic probability density values that can reduce the uncertainty of observed data are obtained by referring to the convergence of probability density values with respect to the histogram box width. Based on the heuristic probability density values, we construct a sophisticated objective function to determine the optimal bandwidth for the kernel density estimator by reducing the model uncertainty. Extensive experiments on 18 modal-proximity datasets are conducted to validate the feasibility, rationality and effectiveness of the designed HKDE. Results show that HKDE approximates the probability distribution better than seven existing representative probability density function estimators. HKDE has lower estimation error and probability density estimates closer to the real density values than other kernel density estimators.
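Bandwidth selection against a histogram-derived reference density can be sketched as follows. The histogram reference here merely stands in for the paper's heuristic probability density values; the actual HKDE objective and its nearest-neighbor error measure differ:

```python
import numpy as np

def gaussian_kde(x_eval, data, h):
    """Gaussian kernel density estimate at points x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u**2), axis=1) / (h * np.sqrt(2 * np.pi))

def select_bandwidth(data, bandwidths, n_bins=20):
    """Pick the bandwidth whose KDE best matches a histogram-based
    reference density at the bin centers (illustrative criterion)."""
    counts, edges = np.histogram(data, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    errs = [np.mean((gaussian_kde(centers, data, h) - counts) ** 2)
            for h in bandwidths]
    return bandwidths[int(np.argmin(errs))]
```

The key idea HKDE formalizes is exactly this: use density values that are stable under the discretization (here, bin width) as the target when tuning the kernel bandwidth.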
2025, 40(3):730-740. DOI: 10.16337/j.1004-9037.2025.03.013
Abstract:Aiming at the problems of limited strongly annotated datasets and the sharp degradation of detection performance in real-world scenarios for polyphonic sound event detection tasks, a method for polyphonic sound event detection based on a transfer-learning convolutional retentive network is proposed. Firstly, the method utilizes convolutional blocks with pre-trained weights to extract local features of audio data. Subsequently, the local features, along with orientation features, are input into the residual feature enhancement module for feature fusion and channel dimension reduction. The fused features are then fed into the retentive network with regularization methods to further learn the temporal information in the audio data. Experimental results demonstrate that, compared to the champion system of the DCASE challenge, the method achieves a reduction in error rates by 0.277 and 0.106, and an increase in F1 scores by 22.6% and 6.6% on the development and evaluation sets of the DCASE 2016 Task3 dataset, respectively. On the development and evaluation sets of the DCASE 2017 Task3 dataset, the error rates are reduced by 0.22 and 0.123, and the F1 scores increase by 17.2% and 14.4%, respectively.
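The error rates reported above follow the segment-based DCASE definition ER = (S + D + I) / N, counting substitutions, deletions, and insertions against the number of reference events. A sketch of that computation:

```python
def sed_error_rate(ref, est):
    """Segment-based error rate ER = (S + D + I) / N for sound event
    detection. ref and est are lists of sets of active event labels,
    one set per time segment."""
    S = D = I = N = 0
    for r, e in zip(ref, est):
        N += len(r)
        tp = len(r & e)          # correctly detected events
        fn = len(r) - tp         # missed reference events
        fp = len(e) - tp         # spurious detections
        s = min(fn, fp)          # each miss+spurious pair = substitution
        S += s
        D += fn - s
        I += fp - s
    return (S + D + I) / N
```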
CHEN Haojie , YANG Rui , PAN Shanliang
2025, 40(3):741-753. DOI: 10.16337/j.1004-9037.2025.03.014
Abstract:Sound event detection models based on deep learning typically require a substantial amount of labeled data to train from scratch. Access to task-specific data is costly due to restrictions such as data access rights, usage licenses, and the scarcity of rare individual samples. To address the few-shot challenge in sound event detection, this paper proposes a model-agnostic and gradient-balanced meta-learning algorithm based on model-agnostic meta-learning (MAML). This algorithm trains the model with a large quantity of N-way K-shot tasks, enabling it to acquire the ability of rapid learning and to accurately discriminate unseen sound events in the N-way K-shot target task with minimal gradient updates. In the outer-loop stage, the multi-gradient descent algorithm is used to estimate a dynamic loss balance factor, encouraging the model to focus on few-shot training tasks that are more difficult to train, thereby enhancing the shared representation of the model. Furthermore, this paper incorporates data augmentation and label smoothing to mitigate the risk of overfitting caused by the scarcity of training samples. Experimental results demonstrate that the algorithm achieves 73.56%, 82.86% and 57.48% accuracies in the 5-way 1-shot setting on the ESC50, NSynth and DCASE2020 datasets, respectively, showing about 10% relative accuracy improvement compared to the previous MAML algorithm.
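The MAML inner/outer loop underlying this algorithm can be sketched on a toy 1-D regression problem. This first-order sketch omits the second-order gradient term and the paper's multi-gradient-descent loss balancing, and the learning rates are illustrative:

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.05, outer_lr=0.1):
    """One first-order MAML meta-update for scalar linear models
    y = theta * x. Each task is an (x, y) pair of arrays; loss is MSE."""
    meta_grad = 0.0
    for x, y in tasks:
        # Inner loop: one gradient step adapting theta to this task
        grad = np.mean(2 * (theta * x - y) * x)
        theta_i = theta - inner_lr * grad
        # Outer loop: loss gradient at the adapted parameters
        # (first-order MAML drops the second derivative)
        meta_grad += np.mean(2 * (theta_i * x - y) * x)
    return theta - outer_lr * meta_grad / len(tasks)
```

Iterating `maml_step` moves the initialization toward parameters from which a single inner-loop update already fits each task well, which is exactly the "rapid learning" ability the abstract describes.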
WANG Sen , SHI Caijuan , CAI Ao , WANG Rui , YU Xinyang , CHENG Xudong , CHEN Weibin
2025, 40(3):754-773. DOI: 10.16337/j.1004-9037.2025.03.015
Abstract:With the rapid development of computer-aided medical diagnosis systems and medical image segmentation technologies, the performance of colorectal endoscopy has been significantly improved, effectively helping clinicians make quick and accurate judgments on polyp lesions and formulate appropriate treatment plans. However, in clinical practice, polyp segmentation faces numerous challenges, such as different intestinal environments in different patients, and varying sizes and shapes of polyps. To address these challenges and enhance the generalization and learning abilities, the generalization enhancement and dynamic perception network (GEDPNet) is proposed. GEDPNet utilizes the pyramid vision Transformer (PVT_v2) as its backbone and focuses on the design of three key modules: the generalization enhancement (GE) module, the dynamic perception (DP) module, and the cascade aggregation (CA) module. Firstly, the GE module innovatively improves the model’s generalization by extracting polyp domain-invariant features, effectively alleviating the problem of poor segmentation caused by different intestinal environments of polyps in different patients. Meanwhile, the GE module also addresses the challenge of diverse polyp sizes by extracting rich multi-scale information within each layer. Secondly, the DP module is able to dynamically perceive global and local information, and thus effectively capture the position information as well as the boundaries and textures of polyps. Finally, the CA module can fully aggregate multi-scale features at different levels to obtain rich semantic information, ensuring the integrity of polyp information and further enhancing segmentation performance. To verify the effectiveness of the proposed GEDPNet, extensive experiments are conducted on five polyp datasets: Kvasir-SEG, CVC-ClinicDB, CVC-T, CVC-ColonDB, and ETIS.
On these five polyp datasets, the proposed GEDPNet achieves mDice scores of 0.930, 0.946, 0.911, 0.825, and 0.806; mIoU scores of 0.883, 0.902, 0.848, 0.747, and 0.733; and MAE values of 0.019, 0.005, 0.005, 0.025, and 0.013, respectively. Furthermore, the proposed GEDPNet has been compared with 20 classical and advanced polyp image segmentation methods and outperforms nearly all of them. Notably, the mIoU of GEDPNet improves by 4.3%, 5.3%, 5.1%, 10.7%, and 16.6%, respectively, on these five polyp datasets compared with that of the classical polyp segmentation method PraNet. These results indicate that the proposed GEDPNet exhibits superior dynamic perception capabilities for polyps with significant variations in intestinal environments, sizes, and shapes, effectively enhancing polyp segmentation accuracy and model generalization.
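The reported mDice, mIoU, and MAE can be computed per prediction/ground-truth pair as follows (binary masks assumed; benchmark protocols average these values over each dataset):

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-8):
    """Dice, IoU and MAE for a pair of binary segmentation masks."""
    p = pred.astype(bool)
    g = gt.astype(bool)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    dice = 2.0 * inter / (p.sum() + g.sum() + eps)
    iou = inter / (union + eps)
    mae = np.mean(np.abs(p.astype(float) - g.astype(float)))
    return dice, iou, mae
```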
JIANG Zhengmang , WANG Zining , MA Biao , LIU Xiaoyu , LIN Min
2025, 40(3):774-783. DOI: 10.16337/j.1004-9037.2025.03.016
Abstract:This paper addresses a wireless control system where multiple control loops share spectrum resources and proposes a random access scheme based on imperfect channel state information (CSI) to achieve control stability. Firstly, in the scenario where multiple control loops access the remote controller with a fixed probability, control stability conditions are derived through the Lyapunov function. Secondly, an optimization problem is formulated with access probability and transmission power as variables, aiming to minimize the total energy consumption of the system while satisfying control stability and transmission power constraints. Since this optimization problem is non-convex and only imperfect CSI can be obtained, a design approach for the access strategy is proposed by combining the Lyapunov stability theorem with mathematical methods such as the Bernstein inequality and successive convex approximation. Simulation results demonstrate that the proposed scheme, compared to existing typical access solutions, can significantly reduce system energy consumption while ensuring control performance.
MENG Xianghao , AN Kang , LIN Zhi
2025, 40(3):784-792. DOI: 10.16337/j.1004-9037.2025.03.017
Abstract:To study the performance of multi-reconfigurable intelligent surface (RIS) assisted communication networks in the presence of co-channel interference at the receiving end, this paper deploys multiple RISs of different geometric sizes as relays in the wireless channel to improve the performance of the communication network, and assumes that the wireless channels associated with different RISs are independent and non-identically distributed, while channels associated with different reflecting elements in the same RIS are independently and identically distributed. The end-to-end channel coefficients are approximated by the Gamma distribution, and exact expressions of the outage probability (OP) and ergodic capacity (EC), as well as asymptotic OP expressions, are derived based on the Gamma distribution. Monte Carlo simulation is used to verify the correctness of the analytical results. The research shows that the number of RISs, the number of interferers, and the interference signal power play a crucial role in the cooperative transmission performance of multi-RIS assisted communication networks.
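A Gamma approximation of the end-to-end channel gain is typically obtained by moment matching: choose the shape k and scale θ so the Gamma distribution reproduces the mean and variance of the (simulated or derived) gain. A sketch:

```python
import numpy as np

def gamma_fit_moments(samples):
    """Moment-matched Gamma approximation: with mean m and variance v,
    a Gamma(k, theta) distribution has m = k*theta and v = k*theta^2,
    giving k = m^2 / v and theta = v / m."""
    m, v = np.mean(samples), np.var(samples)
    return m * m / v, v / m
```

In the multi-RIS setting, `samples` would be draws of the composite gain (a sum of products of fading coefficients across RISs), and the fitted k, θ then feed the closed-form OP and EC expressions.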
LUO Junqi , WANG Min , QIAO Huotong , QIU Yi , ZHANG Haoyang , SUN Huo , XIE Haoyu
2025, 40(3):793-806. DOI: 10.16337/j.1004-9037.2025.03.018
Abstract:Unlike traditional multi-modal fusion methods that are predominantly image-based, production data in industrial manufacturing are primarily structured data, with a small amount of image features. However, both types of heterogeneous data reflect the core parameters of shale gas. Due to the significant difference in data dimensions, it is challenging to achieve feature fusion of heterogeneous data. Additionally, there is heterogeneity among the stratified structured data, leading to substantial errors in predicting core parameters using conventional deep learning methods. To address these issues, this paper proposes a multi-modal fusion algorithm for heterogeneous data (MFH). Firstly, a multi-modal fusion strategy for heterogeneous data is designed to align, extract, and merge features of scanning electron microscopy images and logging parameters under the same depth labels. Secondly, a mechanism for drawing heterogeneous data features closer is constructed to create positive sample pairs, enabling the model to learn the strong heterogeneity between strata in the same work area and the lateral nonlinear relationships. Finally, a method for exchanging features of heterogeneous data is introduced to solve the matching problem between abundant logging data and scarce electron microscope images, achieving accurate and continuous prediction of core parameters. Experimental results, compared with predictions from mainstream deep models, prove the practicality, effectiveness, and extensibility of the proposed scheme.
ZHANG Wanxiang , ZHANG Xianyong , YANG Jilin , CHEN Benwei
2025, 40(3):807-820. DOI: 10.16337/j.1004-9037.2025.03.019
Abstract:Attribute reduction relies on knowledge granulation and uncertainty measurement, thus facilitating intelligent recognition. For incomplete continuous data, neighborhood decision rough sets induce attribute reduction. However, the related neighborhood relation deserves optimal improvements, while the existing decision cost deserves integrated reinforcements. In this paper, a new neighborhood relation is proposed, and three decision-cost fusion measures are constructed, so new incomplete neighborhood decision rough sets are established and the attribute reduction is systematically researched. At first, an improved distance is introduced to produce an incomplete neighborhood relation, so improved rough sets on incomplete neighborhoods are proposed. Then, the dependence degree and neighborhood entropy are introduced based on decision costs, so three fusion measures on decision costs are obtained by multiplication fusion, thus acquiring granulation non-monotonicity. Furthermore, eight heuristic reduction algorithms based on attribute importance are designed from two neighborhood relations and four relevant measures of decision costs. As finally verified by data experiments, five of the seven new algorithms exhibit good classification learning performance, thus improving on the basic reduction algorithm.
LIU Zhuoheng , YANG Feng , ZHAN Chang’an
2025, 40(3):821-831. DOI: 10.16337/j.1004-9037.2025.03.020
Abstract:During the collection process of electroencephalogram (EEG) signals for motor imagery, the subjects’ lack of concentration and failure to strictly follow instructions for corresponding motor imagery result in EEG data that does not match the instructions (labels), leading to the emergence of “noisy labels”. The presence of “noisy labels” reduces the model’s ability to capture key features and affects the model’s generalization on new subjects. Therefore, this paper proposes a method for motor imagery classification under the “noisy labels” condition using multi-scale spatio-temporal feature learning. Firstly, a convolutional neural network is used to extract multi-scale local temporal features from EEG signals, reducing the impact of inter-subject variability. Secondly, feature maps are partitioned in spatio-temporal dimensions and serve as input to the Transformer module, with a spatio-temporal feature fusion module used to optimize global spatio-temporal features. Finally, symmetric cross entropy loss is introduced, extending the calculation of cross entropy to all categories to reduce the impact of “noisy labels”. Experimental results on the PhysioNet and BCI IV 2a motor imagery datasets demonstrate that the average accuracy of the proposed method is superior to those of other methods. On the PhysioNet dataset, the introduction of symmetric cross entropy loss improves the average accuracy for two-, three-, and four-class classifications by 0.09%, 0.65%, and 0.66%, respectively. Moreover, symmetric cross entropy loss can improve the model’s classification performance and robustness under different proportions of “noisy labels” interference without increasing the model’s parameter quantity and computational complexity.
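Symmetric cross entropy adds a reverse cross-entropy term to the usual one, which is what makes it tolerant of noisy labels. A per-sample sketch following the standard formulation (the weights alpha, beta and the log(0) clipping constant are illustrative):

```python
import numpy as np

def symmetric_cross_entropy(p, q, alpha=1.0, beta=1.0, eps=1e-4):
    """SCE = alpha * CE(q, p) + beta * reverse CE(p, q).
    p: predicted class probabilities; q: one-hot label vector.
    log(0) in the reverse term is clipped to log(eps)."""
    ce = -np.sum(q * np.log(np.clip(p, eps, 1.0)))    # standard CE
    rce = -np.sum(p * np.log(np.clip(q, eps, 1.0)))   # reverse CE
    return alpha * ce + beta * rce
```

The reverse term penalizes confident predictions that disagree with the label symmetrically, so a mislabeled sample cannot dominate the gradient the way it does under plain cross entropy.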
LI Yan , WANG Ziying , MAO Jiaming , GU Zhimin , JIANG Haitao
2025, 40(3):832-844. DOI: 10.16337/j.1004-9037.2025.03.021
Abstract:Network security situation assessment plays an important role in the design and implementation of network defense strategies. The existing situation assessment methods gather information on both attack and defense to construct an assessment model, which is extremely sensitive to the accuracy of attack detection and the relationship between attack and vulnerability exploitation. To deal with the above challenges and improve the accuracy of assessment, this paper proposes a situation assessment method combining attack and vulnerability. Firstly, various attack datasets are used to train attack detection models, and the attack detection results of different models are fused by the idea of ensemble learning. Secondly, with the help of the open source security model, the exploitation knowledge between different attack types and security vulnerabilities is extracted. Finally, the security situation assessment results are obtained by integrating the degree of attack damage and the probability of vulnerability exploitation calculated using the extracted exploitation knowledge. The results show that the proposed method improves the performance of attack detection, and the average F1-score reaches 96.24. Furthermore, combined with the attack detection results, a situation assessment application case is given to show the effectiveness of the proposed method.
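Fusing the outputs of several detection models "by the idea of ensemble learning" can be as simple as per-sample majority voting. A sketch (the paper's exact fusion rule is not specified here, so majority vote stands in as one common choice):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse class predictions from several models by majority vote.
    predictions: list of per-model prediction lists, all the same
    length; returns one fused label per sample."""
    fused = []
    for votes in zip(*predictions):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```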
Mailing Address: 29 Yudao Street, Nanjing, China
Post Code: 210016    Fax: 025-84892742
Phone: 025-84892742    E-mail: sjcj@nuaa.edu.cn
Supported by:Beijing E-Tiller Technology Development Co., Ltd.
Copyright © 2026 All Rights Reserved