• Volume 36, Issue 3, 2021 Table of Contents
    • Recent Advances in Small Object Detection

      2021, 36(3):391-417. DOI: 10.16337/j.1004-9037.2021.03.001


      Abstract: Small object detection has long been a difficult and active topic in computer vision. Driven by deep learning, it has achieved major breakthroughs and has been successfully applied in national defense security, intelligent transportation, industrial automation, and other fields. To further promote the development of the field, this paper comprehensively surveys small object detection algorithms and classifies, analyzes, and compares the existing methods. First, the paper defines the small object and summarizes the challenges of detecting it. It then focuses on algorithms that improve small object detection performance through data augmentation, multi-scale learning, context learning, generative adversarial learning, and anchor-free mechanisms, analyzing their advantages, disadvantages, and mutual relations. Finally, the paper discusses future directions for small object detection.

    • Review of Multi-granularity Data Analysis Methods Based on Granular Computing

      2021, 36(3):418-435. DOI: 10.16337/j.1004-9037.2021.03.002


      Abstract: Multi-granularity data is a particularly useful type of data that represents a universe of discourse (i.e., a set of research objects) in different granularity spaces through different granularity forms, so that multi-level knowledge discovery can be studied on it. Quotient space theory, sequential three-way decision, multi-granulation rough sets, multi-scale data analysis models, and multi-granularity formal concept analysis are common and effective multi-granularity data analysis methods that have attracted increasing attention. This paper reviews existing work on multi-granularity data analysis in granular computing, presents the theoretical framework, basic notions, and main research ideas of each kind of method, and points out open problems for further study. The results provide a theoretical reference for future research in this field.

    • Review of Spatio-Temporal Sequence Prediction Methods Based on Deep Learning

      2021, 36(3):436-448. DOI: 10.16337/j.1004-9037.2021.03.003


      Abstract: With the vigorous development of data acquisition technology, spatio-temporal data are accumulating continuously in various fields, making efficient spatio-temporal prediction methods urgently needed. Deep learning, a machine learning approach based on artificial neural networks, can effectively process large-scale complex data, so studying spatio-temporal sequence prediction methods based on deep learning is of great significance. In this context, the existing prediction methods are surveyed. First, the application background and development history of deep learning in spatio-temporal sequence prediction are reviewed, and the definitions, characteristics, and classification of spatio-temporal sequences are introduced. Then, according to the categories of spatio-temporal sequence data, prediction methods based on grid data, graph data, and trajectory data are presented. Finally, these methods are summarized, and current problems and possible solutions are discussed.

    • Review on Research Progress of Open-World Person Re-identification

      2021, 36(3):449-467. DOI: 10.16337/j.1004-9037.2021.03.004


      Abstract: Open-world person re-identification (Re-ID) is a task in which the query person may not appear in the gallery set within an agnostic spatial environment. Compared with closed-world person Re-ID, which is seen as a subproblem of image retrieval, open-world Re-ID is a more challenging and more practical research problem. We first compare and analyze the different pedestrian datasets and describe the development of existing open-set Re-ID work, the discrepancies between closed- and open-world Re-ID, and the modeling process of open-world Re-ID. Then, we summarize the research methods of open-world Re-ID, grouped into data-driven, efficiency-driven, and application-driven methods. Finally, open issues and future directions for open-world Re-ID are discussed.

    • Vehicle Re-identification Method Based on Double-Branch Network Feature Fusion

      2021, 36(3):468-476. DOI: 10.16337/j.1004-9037.2021.03.005


      Abstract: The purpose of vehicle re-identification (vehicle reID) is to identify the same vehicle across different cameras. However, vehicle reID is very challenging because of the large intra-class differences and large inter-class similarities of vehicle images. This paper proposes a vehicle reID method based on double-branch network feature fusion to address this problem. The method uses two branches with a batch drop block strategy to extract and fuse global and local features, highlighting intra-class similarities and inter-class differences. It also replaces the traditional triplet and cross-entropy loss terms with circle loss terms in the objective function. Extensive experiments on the two public datasets VeRi-776 and VehicleID show that search accuracy improves by about 5% over existing methods, verifying the effectiveness of the method.
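
      The circle loss adopted in place of the triplet and cross-entropy terms can be sketched as follows. This is a minimal NumPy version of the published circle loss formulation for one anchor; the margin m and scale gamma values are illustrative defaults, not necessarily the paper's settings.

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=80.0):
    """Circle loss for one anchor: sp are within-class similarities,
    sn are between-class similarities (both in [0, 1])."""
    sp, sn = np.asarray(sp, float), np.asarray(sn, float)
    ap = np.clip(1 + m - sp, 0, None)      # per-pair positive weights
    an = np.clip(sn + m, 0, None)          # per-pair negative weights
    delta_p, delta_n = 1 - m, m            # decision margins
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    return float(np.log1p(np.exp(logit_n).sum() * np.exp(logit_p).sum()))
```

      Well-separated similarities (high sp, low sn) yield a near-zero loss, while overlapping similarities are penalized heavily, which is what drives the intra-class/inter-class separation described above.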

    • Robust Feature Extraction Based on Multi-matrix Low-Rank Decomposition

      2021, 36(3):477-488. DOI: 10.16337/j.1004-9037.2021.03.006


      Abstract: Traditional face recognition algorithms are easily affected by lighting, expression, occlusion, and sparse noise, so effective feature extraction is one of their most important components. This paper applies multi-matrix low-rank decomposition to facial feature extraction, making full use of the structural similarity of face datasets and exploring the low-rank subspace of the facial image collection, and then combines it with a low-rank matrix recovery model to extract the key features of the test sample. Finally, the principal component analysis (PCA) algorithm is used to reduce the data dimensionality, and sparse representation is used for classification. The results show that the algorithm achieves good recognition accuracy on the AR, Yale, and CMU_PIE face datasets when samples contain salt-and-pepper noise, verifying its robustness to such noise.
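
      The PCA dimensionality reduction step of the pipeline can be sketched as follows. This is a generic SVD-based PCA, not the paper's full low-rank pipeline; the low-rank decomposition and sparse-representation stages are assumed to happen before and after it.

```python
import numpy as np

def pca_reduce(X, k):
    """Project centered samples (rows of X) onto the top-k
    principal directions found by SVD."""
    Xc = X - X.mean(axis=0)                     # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                        # (n_samples, k) scores
```

      For face data, X would hold one flattened (or low-rank-recovered) image per row, and k is chosen to keep most of the variance.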

    • Convolutional Auto-Encoder Patch Learning Based Video Anomaly Event Detection and Localization

      2021, 36(3):489-497. DOI: 10.16337/j.1004-9037.2021.03.007


      Abstract: Video anomaly event detection and localization aims to detect abnormal events and locate where they occur in a video. However, video scenes are complex and diverse, and the locations of anomalous events are random and changeable, making them difficult to locate accurately. This paper proposes a video anomaly event detection and localization method based on convolutional auto-encoder patch learning. First, we divide the video frames evenly into patches and extract the optical flow and histogram of oriented gradients (HOG) features of each patch. Then, for each patch position in the video, we design an individual convolutional auto-encoder to learn the features of the normal motion mode. During detection, the reconstruction loss of the convolutional auto-encoder is used as the anomaly score. The proposed method effectively learns features for different regions of the video and improves the accuracy of anomaly event localization. Experimental results on three public datasets, UCSD Ped1, UCSD Ped2, and CUHK Avenue, demonstrate that the frame-level AUC (area under the curve) of this method increases by 5.61% on average and that anomalous events are located accurately.
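
      The patch splitting and reconstruction-loss scoring described above can be sketched as follows. The `reconstruct` callable stands in for a trained per-region convolutional auto-encoder, which is assumed rather than implemented here; in the paper each patch position has its own model.

```python
import numpy as np

def split_patches(frame, ph, pw):
    """Divide a frame evenly into ph x pw patches, row-major order."""
    H, W = frame.shape
    return [frame[i:i + ph, j:j + pw]
            for i in range(0, H, ph) for j in range(0, W, pw)]

def anomaly_scores(frame, reconstruct, ph, pw):
    """Per-patch mean squared reconstruction error: a high score
    marks a patch whose motion the auto-encoder failed to reproduce."""
    return [float(np.mean((p - reconstruct(p)) ** 2))
            for p in split_patches(frame, ph, pw)]
```

      Thresholding the scores gives both frame-level detection (any patch above threshold) and localization (which patch).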

    • Self-sampling Ensemble Classification Method Based on Attribute Reduction

      2021, 36(3):498-508. DOI: 10.16337/j.1004-9037.2021.03.008


      Abstract: Ensemble learning typically combines every trained base classifier into a complete ensemble system, and a large ensemble easily consumes excessive memory and time. To obtain high prediction accuracy and low classification time, this paper draws on attribute reduction in rough set theory and proposes a self-sampling ensemble classification method based on attribute reduction. The method applies a strategy combining ant colony optimization with attribute reduction to the original feature dataset to obtain multiple optimal reduced feature subspaces. Taking any reduced feature subset as the input of the ensemble classifier reduces its memory usage and classification time to some extent. A self-sampling method that takes the learning results and learning speed of the samples as constraints is then used to iteratively train each base classifier. Finally, experimental results further confirm the feasibility of the proposed method.

    • Dynamic Classification for Multi-imbalanced Datasets via Attribute Selection and Sampling Strategy

      2021, 36(3):509-518. DOI: 10.16337/j.1004-9037.2021.03.009


      Abstract: The classification of imbalanced datasets is an important topic in machine learning. Most existing imbalance learning algorithms are designed for binary classification and are insufficient for multi-class problems. To tackle the multi-class imbalance classification problem, we design a new multi-classification model that synthesizes rough sets, resampling methods, and a dynamic ensemble classification strategy. The model uses hybrid sampling and a rough set reduction algorithm to generate multiple balanced data subsets, on which the dynamic ensemble classification model is constructed. Experiments on 22 real datasets demonstrate that the designed method identifies minority samples with higher prediction performance than two previous algorithms and can serve as an alternative strategy for multi-class imbalance classification.
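
      The hybrid-sampling step that produces balanced subsets can be sketched as follows. This is a generic over/under-sampling routine: the median-class-size target and uniform resampling are illustrative assumptions, not the paper's exact rule.

```python
import random
from collections import defaultdict

def hybrid_balance(samples, labels, target=None, seed=0):
    """Hybrid sampling sketch: undersample classes above `target`,
    oversample (with replacement) classes below it."""
    rng = random.Random(seed)
    by_cls = defaultdict(list)
    for x, y in zip(samples, labels):
        by_cls[y].append(x)
    if target is None:
        sizes = sorted(len(v) for v in by_cls.values())
        target = sizes[len(sizes) // 2]     # median class size
    out = []
    for y, xs in by_cls.items():
        picked = (rng.sample(xs, target) if len(xs) >= target
                  else xs + rng.choices(xs, k=target - len(xs)))
        out += [(x, y) for x in picked]
    return out
```

      Running the routine several times with different seeds yields the multiple balanced subsets on which the ensemble members are trained.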

    • Imbalanced Multi-label Learning Algorithm Based on Classification Interval Enhancement

      2021, 36(3):519-528. DOI: 10.16337/j.1004-9037.2021.03.010


      Abstract: Traditional multi-label learning algorithms generally do not consider label imbalance, so its impact on classification is ignored. However, statistics show that current multi-label datasets suffer from label imbalance, and the minority labels are often more important. This paper therefore proposes an imbalanced multi-label learning algorithm based on classification interval enhancement (MLCIE), which reconstructs the classification interval of each label to improve learning efficiency and sample label quality, thereby reducing the impact of multi-label imbalance on classifier accuracy. First, the uncertainty coefficient of each label is calculated from its density and conditional entropy. Then the classification interval enhancement matrix is constructed so that the density information unique to each label is integrated into the original label matrix, yielding a balanced label space. Finally, an extreme learning machine is used as the linear classifier. The proposed algorithm is compared with seven other multi-label learning algorithms on 11 standard multi-label datasets, and the results show that it can alleviate the label imbalance problem.

    • Label Enhancement and Fuzzy Discernibility Based Label Distribution Feature Selection

      2021, 36(3):529-543. DOI: 10.16337/j.1004-9037.2021.03.011


      Abstract: Feature selection is a key pre-processing step in multi-label learning. It can efficiently alleviate the "curse of dimensionality" that exists in high-dimensional multi-label data. In multi-label learning, labels are described in logical form, in which every label associated with an instance is treated as equally important. In many fields, however, the importance of each label usually differs. To address this, a label enhancement algorithm is proposed that transforms multi-label data into label distribution data by evaluating the fuzzy similarity relation on labels among instances. The discernibility relation on labels and the fuzzy relative discernibility relation on features are analyzed in detail for label distribution data; the fuzzy discernibility of the label space and the feature space is then defined, and the significance of a feature is constructed to assess its discernibility ability. On this basis, a feature selection algorithm for label distribution data is proposed, which returns features in descending order of significance. Finally, experimental results on several multi-label datasets show that the proposed algorithm is effective and feasible.

    • Weighted Block Sparse Subspace Clustering Algorithm Based on Information Entropy

      2021, 36(3):544-555. DOI: 10.16337/j.1004-9037.2021.03.012


      Abstract: When sparse subspace clustering is applied to hyperspectral remote sensing images, the classification accuracy of ground features is low. To improve it, this paper proposes a weighted block sparse subspace clustering algorithm based on information entropy (EBSSC). Introducing an information entropy weight and a block diagonal constraint provides the prior probability that two pixels belong to the same category before the experiment, so that the model's solution becomes an optimal approximation with block diagonal structure. This makes the model robust to noise and outliers, improves its discriminative ability, and thereby yields better classification accuracy of ground features. Experimental results on three classical hyperspectral remote sensing datasets show that the clustering performance of the proposed method surpasses that of several existing classical and popular subspace clustering algorithms.

    • Linguistic Z-numbers Multi-attribute Decision-Making Method Based on Normal Cloud Model and PROMETHEE Method

      2021, 36(3):556-564. DOI: 10.16337/j.1004-9037.2021.03.013


      Abstract: To solve multi-attribute decision-making (MADM) problems with unknown attribute weights in a linguistic Z-number environment, a novel decision-making method based on the normal cloud model and the preference ranking organization method for enrichment evaluation (PROMETHEE) is proposed. First, a conversion model based on a linguistic scale function is proposed to convert linguistic Z-numbers into normal cloud models. Then, a new cloud likelihood function is defined and used to establish a weight formula that measures the importance of each attribute in MADM. Moreover, a sine preference function is designed to obtain the preference values of alternatives; the positive, negative, and net flows are calculated from the aggregated preference values, and the corresponding ranking of the alternatives is obtained. Finally, the validity and feasibility of the proposed method are illustrated by an air pollution potential evaluation problem and a comparative analysis with three existing methods.
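
      The flow computations of PROMETHEE II described above can be sketched as follows. The sine-shaped preference function here is an illustrative stand-in for the paper's sine preference function, and the threshold p and equal weights in the test are assumptions.

```python
import math

def sine_pref(d, p=1.0):
    """Illustrative sine-shaped preference: 0 for d <= 0,
    rising smoothly to 1 at d >= p."""
    if d <= 0:
        return 0.0
    if d >= p:
        return 1.0
    return math.sin(math.pi * d / (2 * p))

def net_flows(scores, weights, pref=sine_pref):
    """PROMETHEE II: aggregated preference pi(a, b), then positive,
    negative, and net flow for each alternative."""
    n = len(scores)
    pi = [[sum(w * pref(scores[a][k] - scores[b][k])
               for k, w in enumerate(weights))
           for b in range(n)] for a in range(n)]
    phi_plus = [sum(pi[a]) / (n - 1) for a in range(n)]
    phi_minus = [sum(pi[b][a] for b in range(n)) / (n - 1) for a in range(n)]
    return [pp - pm for pp, pm in zip(phi_plus, phi_minus)]
```

      Ranking the alternatives by descending net flow gives the final ordering; an alternative that dominates on every attribute gets the maximum net flow.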

    • Abnormal Heart Rate Classification Based on Ballistocardiogram and BP Neural Network

      2021, 36(3):565-576. DOI: 10.16337/j.1004-9037.2021.03.014


      Abstract: Heart rate variability (HRV) is widely used in clinical autonomic nervous system assessment and in classifying abnormal heart rates. Traditional HRV analysis is based on electrocardiography (ECG), photoplethysmography (PPG), and remote PPG (RPPG). These methods have the following disadvantages: (1) ECG requires applying an irritating coupling agent to the skin and attaching electrodes, making it unsuitable for long-term monitoring, and ECG equipment is expensive; (2) PPG and RPPG measurements suffer from ambient optical noise and obvious individual differences due to skin color; (3) ECG and PPG are contact measurements that can easily cause patient discomfort. To overcome these shortcomings, an HRV analysis method based on the ballistocardiogram (BCG) is proposed. It reduces the cost of traditional HRV equipment and uses non-contact detection to alleviate patient discomfort, and its detection principle avoids the problem of individual differences, which is vital for long-term cardiovascular disease prediction. In the experiment, a back propagation (BP) neural network model predicts and classifies abnormal heart rates with an accuracy of 80%, demonstrating the advancement and reliability of the proposed method.

    • Composite Jamming Recognition Based on SPWVD and Improved AlexNet

      2021, 36(3):577-586. DOI: 10.16337/j.1004-9037.2021.03.015


      Abstract: In the complex electromagnetic environment of modern electronic warfare, effective features of composite jamming signals are difficult to extract and identify. This paper proposes a composite jamming recognition algorithm based on the smoothed pseudo Wigner-Ville distribution (SPWVD) and an improved AlexNet. The algorithm uses the SPWVD for time-frequency analysis of the composite jamming signal, and image processing is then used to reduce the dimension of the time-frequency features. Finally, the improved AlexNet model replaces the large convolution kernel with several small ones and removes fully-connected layer 7 and the local response normalization module, reducing the network parameters and speeding up computation, so as to recognize composite jamming signals. Simulation results show that, at a jamming-to-noise ratio of 0 dB, the recognition rates of the target signal and six kinds of composite jamming signals all exceed 90%. Compared with the original AlexNet model, the improved network significantly improves recognition accuracy.
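
      The time-frequency analysis step can be illustrated with a minimal discrete pseudo Wigner-Ville distribution. The full SPWVD adds independent time- and lag-smoothing windows, which are omitted here for brevity; this sketch uses a rectangular lag window only.

```python
import numpy as np

def pseudo_wvd(x, nfft=None):
    """Minimal discrete pseudo Wigner-Ville sketch: at each time n,
    FFT the lag products x[n+tau] * conj(x[n-tau]) over admissible tau.
    Note: with full-lag products, a tone at FFT bin k peaks at bin 2k."""
    x = np.asarray(x, complex)
    N = len(x)
    nfft = nfft or N
    W = np.zeros((N, nfft))
    for n in range(N):
        m = min(n, N - 1 - n)                 # admissible lag range
        tau = np.arange(-m, m + 1)
        kernel = x[n + tau] * np.conj(x[n - tau])
        r = np.zeros(nfft, complex)
        r[tau % nfft] = kernel                # wrap lags into FFT buffer
        W[n] = np.real(np.fft.fft(r))
    return W
```

      On the resulting time-frequency image, the dimensionality reduction and improved AlexNet stages described in the abstract would then operate.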

    • Text Clustering Algorithm Based on Feature Matrix Optimization and Data Dimensionality Reduction

      2021, 36(3):587-594. DOI: 10.16337/j.1004-9037.2021.03.016


      Abstract: To address inefficient clustering caused by the curse of dimensionality and the loss of feature information in text clustering, this paper proposes a clustering algorithm based on feature matrix optimization and improved principal component analysis (PCA) dimensionality reduction. On the basis of the original term frequency-inverse document frequency (TF-IDF) algorithm, an adaptive length frequency weight (ALFW) optimization scheme is proposed, which improves the distribution of the feature matrix and makes the characterization of feature terms more distinct. For dimensionality reduction, the PCA algorithm is optimized using the joint entropy criterion from information theory, and the united entropy PCA (UE-PCA) algorithm is proposed to further reduce the dimensionality of sparse high-dimensional data while better preserving the authenticity of the original high-dimensional data. Simulation experiments show that the proposed algorithm (K-means + UE-PCA + ALFW) outperforms other similar algorithms.
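
      The baseline TF-IDF weighting that ALFW builds on can be sketched as follows. This is plain TF-IDF over whitespace-tokenized documents, without the paper's adaptive length frequency weight or entropy-based PCA.

```python
import numpy as np

def tfidf(docs):
    """Plain TF-IDF matrix (rows = documents, cols = sorted vocabulary);
    the baseline feature matrix that ALFW then reweights."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: j for j, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for i, d in enumerate(docs):
        words = d.split()
        for w in words:
            tf[i, idx[w]] += 1
        tf[i] /= max(len(words), 1)           # term frequency
    df = (tf > 0).sum(axis=0)                 # document frequency
    idf = np.log(len(docs) / df)              # inverse document frequency
    return tf * idf, vocab
```

      A term appearing in every document gets idf = 0 and thus carries no weight, which is the behavior ALFW and UE-PCA are designed to refine for sparse high-dimensional text.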

    • Identification Method of Encrypted Data Flow Based on Chain-Building Information

      2021, 36(3):595-604. DOI: 10.16337/j.1004-9037.2021.03.017


      Abstract: To address the difficulty of identifying encrypted traffic, a novel detection method based on chain-building information is proposed, which uses a neural network to extract encrypted traffic characteristics from chain-building data. First, the interactive traffic between clients and servers is captured at the beginning of encrypted connection establishment; then its first 1 024 bytes are converted into a grayscale image. Finally, a convolutional neural network model is constructed to learn these characteristics and extract the pattern of the encrypted traffic. Because category information can be extracted at this early stage, the method supports early identification, enabling the identification and management of encrypted traffic to be organically combined. In addition, in view of the infinite background traffic attribute set and incomplete training data, an approximately complete method is proposed that mixes random data into the background traffic for data augmentation. Tests in a real environment show that the method reaches an accuracy of 95.4% with a recognition time of 0.1 ms, significantly better than the comparison algorithms.
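
      The byte-to-image conversion described above can be sketched as follows. Mapping 1 024 bytes to a 32 x 32 grid is a natural reading of the text, though the paper's exact image shape is not stated in the abstract.

```python
import numpy as np

def flow_to_grayscale(payload, n=1024, side=32):
    """Map the first n bytes of early handshake traffic to a
    side x side grayscale image (zero-padded when shorter),
    suitable as CNN input for early identification."""
    buf = payload[:n].ljust(n, b"\x00")       # truncate or zero-pad
    return np.frombuffer(buf, dtype=np.uint8).reshape(side, side)
```

      Each byte becomes one pixel intensity in [0, 255], so the CNN sees handshake structure (e.g., TLS record headers) as spatial patterns.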

    • Sentence Structure Acquisition Method for Chinese Relation Extraction

      2021, 36(3):605-620. DOI: 10.16337/j.1004-9037.2021.03.018


      Abstract: Neural network models are among the most commonly used techniques for relation extraction. However, existing neural network models seldom consider the structural features between two entities in a sentence. Based on the characteristics of the relation extraction task, this paper proposes a sentence structure acquisition method for neural network models. In this method, the positions of the two entities in a relation instance are marked so that the neural network model can effectively capture the structural information about the entities in the sentence. To verify the effectiveness of the proposed method, comparative experiments are conducted with two mainstream neural network models. Experiments on the ACE 2005 Chinese corpus show a significant performance improvement, exceeding the comparison work by approximately 11 percentage points, which proves that the method can significantly improve relation extraction performance.
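
      The entity position marking can be sketched as follows. The `<e1>`/`<e2>` marker tokens and the half-open span convention are illustrative assumptions; the paper's actual marking scheme may differ.

```python
def mark_entities(tokens, e1, e2):
    """Wrap the two entity spans (start, end), half-open, in marker
    tokens so the model can read their positions; assumes the spans
    do not overlap and e1 precedes e2."""
    (s1, t1), (s2, t2) = e1, e2
    return (tokens[:s1] + ["<e1>"] + tokens[s1:t1] + ["</e1>"]
            + tokens[t1:s2] + ["<e2>"] + tokens[s2:t2] + ["</e2>"]
            + tokens[t2:])
```

      The marked sequence is then fed to the neural model in place of the raw token sequence, making the structure between the two entities explicit.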

    • Low Cost Portable Airborne Navigation Equipment

      2021, 36(3):621-628. DOI: 10.16337/j.1004-9037.2021.03.019


      Abstract: Monitoring "low-altitude, slow-speed, small-size" aircraft is a technical problem in opening low-altitude airspace and restricts the development of general aviation. This paper designs a low-cost, small-volume, low-power navigation and communication device for such "low, slow and small" aircraft. The device integrates GPS, Beidou, and Galileo navigation sources with Beidou short-message communication. Using the civil aviation ADS-B communication data link and the Beidou short-message receiving system, it brings "low, slow and small" aircraft into the civil aviation communication, navigation, surveillance/air traffic management (CNS/ATM) supervision system, avoids operational risks with large aircraft in fused airspace, and solves the problem of wireless signal loss under obstacle occlusion and beyond-range conditions. At the same time, track extrapolation and error correction techniques are used to ensure track continuity and position accuracy. A car and a light aircraft were deployed for comparative flight verification, and the results demonstrate that the function and performance are superior to existing ADS-B airborne communication and navigation equipment. The device provides a technical solution for making low-altitude airspace aircraft visible, identifiable, and manageable.
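
      The track extrapolation used to bridge short signal losses can be sketched as a constant-velocity dead-reckoning step. This is a minimal illustration; the device's actual extrapolation and error correction algorithms are not specified in the abstract.

```python
def extrapolate(track, dt):
    """Constant-velocity extrapolation from the last two position
    fixes; a fix is (t, x, y). Returns the predicted fix at t1 + dt."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)            # estimated velocity
    vy = (y1 - y0) / (t1 - t0)
    return (t1 + dt, x1 + vx * dt, y1 + vy * dt)
```

      When a fresh fix arrives after the gap, an error correction step would blend the predicted and measured positions; here only the prediction half is shown.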
