• Volume 31, Issue 6, 2016 Table of Contents
    • Survey on Online Learning Algorithms

      2016, 31(6):1067-1082.

      Abstract:With the development of information technology, especially the wide application of Internet products, many areas require real-time processing of massive, high-velocity data. How to learn informative knowledge from this "data ocean" becomes increasingly important. Traditional batch machine learning algorithms fall short when dealing with big data, whereas the online learning framework adopts a streaming computing mode and processes data directly in memory, which makes it a promising tool for learning from big data and for facing the associated difficulties and challenges. This paper surveys traditional and state-of-the-art online learning algorithms; the main contents include: (1) online linear learning algorithms; (2) online kernel learning algorithms; (3) other classical online learning algorithms; (4) optimization methods for online learning algorithms. Additionally, the application of the online framework to deep learning models is introduced to inspire interested researchers. Finally, the paper discusses the key issues and some applications of online learning algorithms, followed by future research directions.
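
      As a minimal illustration of the online linear learning setting surveyed above, the sketch below implements online gradient descent for logistic regression; the stream interface, function name and learning rate are illustrative choices, not taken from the paper.

```python
import numpy as np

def online_logistic_sgd(stream, dim, lr=0.1):
    """Online gradient descent for logistic regression.

    `stream` yields (x, y) pairs with x a feature vector and y in {0, 1};
    the weights are updated one example at a time, so only the current
    example needs to be kept in memory.
    """
    w = np.zeros(dim)
    for x, y in stream:
        p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # predicted probability
        w -= lr * (p - y) * x                    # gradient step on the log-loss
    return w

# Example usage on a synthetic stream.
rng = np.random.default_rng(0)
data = ((rng.normal(size=3), int(rng.random() < 0.5)) for _ in range(1000))
weights = online_logistic_sgd(data, dim=3)
```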

    • Review on Image Interpolation Techniques

      2016, 31(6):1083-1096.

      Abstract:Image interpolation estimates the gray values of unknown pixels based on known ones. It is a process of data generation whose objective is to increase the resolution of an image through upsampling. Firstly, the difference between image interpolation and image super-resolution reconstruction is analyzed. Then, the development of various image interpolation algorithms is reviewed and their features are analyzed. In particular, 'what is an edge' is explained for edge-guided interpolation techniques. Furthermore, some subjective and objective image quality assessments are introduced. Meanwhile, the effects of image interpolation with different downsampling methods are illustrated experimentally. Finally, future research directions in the field are given.
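
      For reference, the sketch below shows plain bilinear interpolation, one of the classical upsampling techniques reviewed here; the function name and scale handling are illustrative, and the edge-guided methods discussed in the paper are not reproduced.

```python
import numpy as np

def bilinear_upsample(img, scale):
    """Upsample a grayscale image by `scale` using bilinear interpolation.

    The gray value of each unknown output pixel is estimated from its
    four nearest known pixels, weighted by distance.
    """
    h, w = img.shape
    out_h, out_w = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy

upscaled = bilinear_upsample(np.arange(16, dtype=float).reshape(4, 4), scale=2)
```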

    • Dimension Reduction of Spectral Data Based on Feature Mining

      2016, 31(6):1097-1105.

      Abstract:Spectral data analysis methods that remove the heavy redundancy of high-dimensional spectral data and extract its characteristic spectrum are an important foundation for the widespread application of spectral instruments. The contradiction between universally selected feature sets and the heterogeneity of spectral characteristics restricts, to a certain extent, the application of spectral instruments and needs to be resolved. In this paper, a sequential forward selection (SFS) adaptive feature mining method for spectral data is proposed to generate an optimal combination of variables as the input of a support vector machine (SVM) classification model, so as to achieve spectral data reduction and obtain high-precision data classification. The method can effectively solve the multi-class classification problem for large amounts of spectral data, which is demonstrated by applying it to the classification of mahogany. It provides a new way to overcome the difficulty of subjective, experience-based feature selection when spectral peaks are highly aliased.
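
      A minimal sketch of sequential forward selection wrapped around an SVM classifier is given below, assuming scikit-learn is available; the stopping rule and parameters are illustrative and not the paper's exact procedure.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, max_features=10):
    """Greedy SFS: repeatedly add the spectral band whose inclusion gives
    the best cross-validated SVM accuracy, until `max_features` bands are chosen."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            acc = cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean()
            scores.append((acc, f))
        best_acc, best_f = max(scores)   # band giving the highest accuracy
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```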

    • New Transmission Protocol Based on Digital Fountain Codes

      2016, 31(6):1106-1114.

      Abstract:Digital fountain codes are a kind of rateless code. Compared with traditional fixed-rate block codes, digital fountain codes adapt well to channel variations and can effectively avoid the "feedback storm" in big data transmission. Because the rate is not fixed, the transmission protocols of digital fountain codes differ sharply from those of block codes. This paper introduces several existing fountain-code-based transmission protocols and, on that basis, proposes a new transmission protocol. The protocol takes a special frame structure as the basic transmission unit, which greatly reduces the effect of delay on system performance. Finally, the performance of the proposed protocol is analyzed and the influence of its parameters on performance is studied.
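
      The sketch below illustrates the rateless encoding idea behind fountain codes (LT-style XOR of randomly chosen source blocks); the toy degree distribution and function names are assumptions for illustration, not the paper's protocol or frame structure.

```python
import random

def lt_encode(source_blocks, num_encoded, seed=0):
    """Rateless (LT-style) encoding sketch: each encoded block XORs a random
    subset of source blocks; its degree is drawn from a simple distribution.
    A real implementation would use the robust soliton distribution."""
    rng = random.Random(seed)
    k = len(source_blocks)
    encoded = []
    for _ in range(num_encoded):
        degree = min(k, rng.choice([1, 2, 2, 3, 3, 4]))  # toy degree distribution
        neighbors = rng.sample(range(k), degree)
        block = 0
        for j in neighbors:
            block ^= source_blocks[j]          # XOR of the chosen source blocks
        encoded.append((neighbors, block))     # neighbor list lets the decoder peel
    return encoded

packets = lt_encode([0x12, 0x34, 0x56, 0x78], num_encoded=8)
```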

    • Freeway Incident Detection Algorithm Based on Video Tracking and Fuzzy Inference

      2016, 31(6):1115-1126.

      Abstract:Since freeway incidents are apt to cause massive congestion, a video-based rapid detection algorithm that feeds back incident information quickly is proposed. Firstly, the background is obtained by a sequence-smoothing method, and the foreground targets are extracted by background differencing. Secondly, occlusion is detected using the convex-hull share, and vehicles are tracked with a feature-matching algorithm based on a modified Kalman filter. Finally, by measuring traffic speed and flow, a mapping between speed, flow and traffic-flow states is built, and a fuzzy inference method is used to detect traffic incidents. Experimental results show that the method obtains accurate foreground information and is applicable and highly time-efficient for freeway traffic incident detection.
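
      As an illustration of the background modelling step described above, the following sketch keeps a running-average background and produces a foreground mask by differencing; the smoothing factor and threshold are illustrative, and the tracking and fuzzy-inference stages are not shown.

```python
import numpy as np

def detect_foreground(frames, alpha=0.05, thresh=30):
    """Running-average background model with background differencing.

    `frames` is an iterable of grayscale frames (2-D arrays); the background
    is smoothed over time and each frame is differenced against it to
    produce a binary foreground mask.
    """
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(float)
        if background is None:
            background = f.copy()
        mask = np.abs(f - background) > thresh              # foreground pixels
        background = (1 - alpha) * background + alpha * f   # smooth the background
        masks.append(mask)
    return masks
```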

    • Multi-modulus Blind Equalization Algorithm Based on Memetic Algorithm

      2016, 31(6):1127-1131.

      Abstract:To address the slow convergence speed, large mean square error (MSE) and phase blindness of the constant modulus blind equalization algorithm (CMA), a multi-modulus blind equalization algorithm based on the memetic algorithm (MA-MMA) is proposed. In this algorithm, the reciprocal of the cost function of the multi-modulus blind equalization algorithm (MMA) is defined as the fitness function of the memetic algorithm (MA). The solution vectors of the individuals in the population are regarded as candidate initial weight vectors of MMA. The individual with the maximum fitness is found through the global information-sharing mechanism and local deep-search ability of MA and used as the initial weight vector of MMA; after the MMA weight update, the optimal weight vector is obtained. Simulation results show that, compared with CMA, MMA and the recently proposed multi-modulus blind equalization algorithm based on the genetic algorithm (GA-MMA), the proposed MA-MMA has the fastest convergence speed, the smallest MSE, and the clearest constellations of output signals.

    • Sequential Fault Diagnosis with Isolation Rate Requirement Using Differential Evolution Algorithm

      2016, 31(6):1132-1140.

      Abstract:The design of an optimal test sequence for fault diagnosis is an NP-complete problem. An improved differential evolution (DE) algorithm with an additional inertial velocity term is proposed to solve the optimal test sequencing problem (OTP) in complicated electronic systems. The proposed inertial velocity differential evolution (IVDE) algorithm is built on an adaptive differential evolution algorithm. Combined with a new individual fitness function, IVDE optimizes the test sequence sets in a top-down manner under the fault isolation rate (FIR) requirement, generating the diagnostic decision tree while reducing the test sets and the test cost. Simulation results show that the IVDE algorithm can cut down the test cost while satisfying the FIR. Compared with other algorithms such as the particle swarm optimizer (PSO) and the genetic algorithm (GA), IVDE obtains better solutions to the OTP.
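
      For orientation, the sketch below shows one generation of the classic DE/rand/1/bin differential evolution update; the inertial velocity term and the fitness function specific to IVDE are not reproduced, and all names are illustrative.

```python
import numpy as np

def differential_evolution_step(pop, fitness, f=0.5, cr=0.9, rng=None):
    """One generation of classic DE/rand/1/bin.

    `pop` is an (n, d) array of candidate solutions and `fitness` maps a
    vector to a scalar to be minimized; each individual is replaced by its
    trial vector only if the trial is better (greedy selection).
    """
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])        # differential mutation
        cross = rng.random(d) < cr
        cross[rng.integers(d)] = True                  # at least one gene crosses over
        trial = np.where(cross, mutant, pop[i])        # binomial crossover
        if fitness(trial) < fitness(pop[i]):           # greedy selection
            new_pop[i] = trial
    return new_pop

next_gen = differential_evolution_step(np.random.rand(20, 5), fitness=lambda x: np.sum(x**2))
```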

    • Vehicle Recognition Algorithm Based on Weakly Supervised Hierarchical Deep Learning

      2016, 31(6):1141-1147.

      Abstract:Focusing on the shortcomings of the structure and training methods of existing classifiers, a weakly supervised hierarchical deep learning vehicle recognition algorithm based on the 2-D deep belief network (2D-DBN) is proposed. Firstly, the traditional one-dimensional deep belief network (DBN) is expanded to the 2D-DBN, so that the pixel matrix of a 2-D image can be taken directly as the input. Then, a determination regularization term with an appropriate weight is introduced into the traditional unsupervised training objective function. With this change, the original unsupervised training becomes weakly supervised training, so that the extracted features have stronger discrimination ability. Multiple sets of comparative experiments show that the proposed algorithm outperforms other deep learning algorithms in terms of recognition rate.

    • Image Compressed Sensing Based on Local and Nonlocal Regularizations

      2016, 31(6):1148-1155.

      Abstract:The nonlocal low-rank regularization based approach (NLR) shows state-of-the-art performance in compressive sensing (CS) recovery by exploiting the structured sparsity of similar patches. However, it cannot efficiently preserve edges because it relies only on the nonlocal regularization and ignores the relationship between neighboring pixels. Meanwhile, the Logdet function used in NLR cannot approximate the rank well, because it is a fixed function and the optimization results obtained with it essentially deviate from the real solution. A local and nonlocal regularization based CS approach is proposed that exploits both the local sparse-gradient property of the image and the low-rank property of similar patches. The Schatten-p norm is used as a better non-convex surrogate for the rank function. In addition, the alternating direction method of multipliers (ADMM) is utilized to solve the resulting nonconvex optimization problem. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art CS algorithms for image recovery.
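
      The sketch below shows the standard singular value soft-thresholding step used for low-rank recovery of a group of similar patches; it corresponds to the nuclear-norm (Schatten-1) case rather than the paper's Schatten-p surrogate, and the threshold is illustrative.

```python
import numpy as np

def svt_patch_group(patch_matrix, tau):
    """Soft-threshold the singular values of a matrix of similar patches.

    This is the nuclear-norm (Schatten-1) proximal step; a Schatten-p
    surrogate with p < 1 would shrink the singular values non-linearly
    instead of subtracting a constant tau.
    """
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # soft thresholding of singular values
    return (u * s_shrunk) @ vt

denoised = svt_patch_group(np.random.rand(64, 20), tau=0.5)
```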

    • Three-way Decisions Based Parameter Selection of OWA Operators in Fuzzy Information System

      2016, 31(6):1156-1163.

      Abstract:In a fuzzy information system, the tolerance relation of the λ-level sets based on the similarity degree of objects can be obtained by the ordered weighted averaging (OWA) operator. For the same value of λ, the indiscernibility relation and granularity are affected by the fuzzy quantifier parameters (α,β) of the OWA operator. It is therefore worth studying how to reasonably select the fuzzy quantifier parameters (α,β). Parameter selection for OWA operators is discussed based on the three-way decisions of rough set theory in fuzzy information systems. According to the radical, mean, and negative semantics of the parameters, three most commonly used values of (α,β) are proposed. Furthermore, the related properties of similarity degrees, tolerance classes, upper and lower approximation sets, and three-way regions are discussed under these three values of (α,β). Finally, experimental results are used to analyze the reasonableness of the semantic explanations of the fuzzy quantifier parameters. The study adopts the novel viewpoint of three-way decisions theory and gives new semantic explanations and validity analysis for the selection of the fuzzy quantifier parameters (α,β) of OWA operators in fuzzy information systems.
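
      A minimal sketch of an OWA aggregation whose weights are derived from a linear fuzzy quantifier with parameters (α,β) is given below; the particular (α,β) values and the similarity-degree construction of the paper are not reproduced.

```python
import numpy as np

def rim_quantifier(r, alpha, beta):
    """Linear RIM fuzzy quantifier Q_(alpha,beta)(r), clipped to [0, 1]."""
    return np.clip((r - alpha) / (beta - alpha), 0.0, 1.0)

def owa(values, alpha, beta):
    """OWA aggregation with weights w_i = Q(i/n) - Q((i-1)/n),
    applied to the values sorted in descending order."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    n = len(v)
    idx = np.arange(1, n + 1)
    w = rim_quantifier(idx / n, alpha, beta) - rim_quantifier((idx - 1) / n, alpha, beta)
    return float(np.dot(w, v))

# Illustrative quantifier parameters (alpha, beta) = (0.3, 0.8).
score = owa([0.9, 0.4, 0.7, 0.2], alpha=0.3, beta=0.8)
```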

    • Low Complexity Video Coding Based on Structured Measurement Matrices

      2016, 31(6):1164-1170.

      Abstract:Recently, applications of low-complexity video coding have gained wide interest. Compressive sensing (CS) can sample and compress a signal simultaneously, which can be exploited to design low-complexity video coding. A CS video codec based on structured measurement matrices is proposed to address the difficulty of realizing random measurement matrices in hardware. The characteristics and construction of structured measurement matrices are explored, and the theoretical guarantees of faithful reconstruction for different structures are analyzed. Numerical simulation results of the CS video codec based on structured measurement matrices verify the theory as well as its promising potential for low-complexity video applications, owing to the hardware friendliness and fast computation of the matrices.

    • Lecture Video Text Semantic Shot Segmentation and Annotation

      2016, 31(6):1171-1177.

      Abstract:To automatically annotate a special kind of video, i.e., lecture videos, a method is proposed that first extracts caption information from the video. The subtitle text is then modeled with latent Dirichlet allocation (LDA) to obtain the document-topic probability distributions, the distances between these distributions are calculated, and the semantic shot segmentation is thereby realized. Taking each shot as a sample, a safe semi-supervised support vector machine (S4VM) is trained with a small number of labeled semantic shots, and the unlabeled shots are annotated automatically. Experimental results show that the proposed method can not only effectively complete the semantic shot segmentation, but also annotate key words for the video.
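
      As an illustration of the distance step between LDA topic distributions, the sketch below computes the Jensen-Shannon divergence between two per-segment topic vectors; the specific distance measure and threshold used in the paper may differ.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two topic distributions.

    One common choice of distance between the per-subtitle-segment topic
    distributions produced by LDA; a shot boundary can be declared where
    the distance between adjacent segments exceeds a threshold.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

is_boundary = js_divergence([0.7, 0.2, 0.1], [0.1, 0.3, 0.6]) > 0.2  # illustrative threshold
```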

    • Vehicle Positioning Technology Based on Pseudorange Double Difference in Internet of Vehicles

      2016, 31(6):1178-1184.

      Abstract:To achieve the positioning accuracy required in the Internet of Vehicles, a global positioning system (GPS) based pseudorange double-difference relative positioning algorithm is proposed. In the proposed algorithm, pseudorange information is exchanged between vehicles through vehicle-to-vehicle (V2V) communication. The common satellite errors and receiver clock offsets are eliminated, so that the relative positioning precision between vehicles is improved. By analyzing and comparing the root mean square (RMS) of the distance error with that of the GPS single-point positioning algorithm, it is shown that the pseudorange double-difference algorithm can greatly improve the positioning accuracy. Simulation results verify that the relative positioning performance of the pseudorange double-difference algorithm significantly outperforms that of the GPS single-point positioning algorithm when more than four common visible satellites are available.
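
      The double-difference operation itself is simple to state; the sketch below forms it from two vehicles' pseudoranges to a reference satellite and a second satellite, with purely illustrative satellite IDs and values.

```python
def pseudorange_double_difference(rho_a, rho_b, sat_ref, sat_j):
    """Double difference of pseudoranges between vehicles a and b.

    `rho_a` and `rho_b` map satellite IDs to measured pseudoranges; the
    between-receiver single differences cancel the satellite-related common
    errors, and differencing again between satellites cancels the receiver
    clock offsets.
    """
    single_diff_ref = rho_a[sat_ref] - rho_b[sat_ref]   # between-receiver, reference satellite
    single_diff_j = rho_a[sat_j] - rho_b[sat_j]         # between-receiver, satellite j
    return single_diff_j - single_diff_ref              # between-satellite difference

dd = pseudorange_double_difference(
    {"G01": 21_450_312.4, "G07": 22_104_876.1},
    {"G01": 21_450_330.9, "G07": 22_104_899.8},
    sat_ref="G01", sat_j="G07",
)
```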

    • Low Complexity Robust Wideband Beamforming Algorithm Based on Phase Constraint

      2016, 31(6):1185-1191.

      Abstract:To address the computational complexity and robustness problems of the conventional space-time wideband beamformer structure, a low-complexity robust algorithm is proposed. Firstly, the array response phase as the frequency changes is given, and the influence of frequency and angle changes on the array response phase is analyzed. By imposing magnitude and phase constraints to compensate the phase differences between different frequencies for the desired signal, robust wideband beamforming is achieved without delay lines or FIR/IIR filters. Furthermore, by analyzing the impact on the array response phase when the frequency and angle change simultaneously, auxiliary azimuths in the vicinity of the desired signal are taken into account, which improves the robustness of the algorithm when the arrival direction of the desired signal tends to zero and the relative bandwidth is small. Theoretical analysis and simulations show that the algorithm has a simpler structure, lower computational complexity, and better robustness than traditional algorithms.

    • Aggregation Scheme Based on Distributed Data Compression in Sensor Networks

      2016, 31(6):1192-1198.

      Abstract:Data aggregation based on compressed data collection needs an efficient forwarding-tree routing protocol to gather the coded data from the sensor nodes to the sink node effectively. A new distributed, energy-efficient compressed data collection method is presented. Each sensor node can find its parent on its own and participate in building the routing tree, without relying on a central node to construct the whole forwarding tree, which allows each sensor node to make local decisions on forwarding-tree construction and maintenance. Simulation results show that the complexity of the new method is lower than that of the traditional method, and the cost is reduced by nearly 50%.

    • Selection of Coherent Targets in Large Number of Time Series Ground Based SAR Images

      2016, 31(6):1199-1204.

      Abstract:Among a large number of time-series SAR images, selecting all valid coherent targets can improve spatial density and guarantee the reliability of SAR interferometry for deformation monitoring. Continuous observation with ground-based SAR can capture local-area deformation day and night in all weather conditions, and is characterized by a large amount of data, zero-baseline interferometry, strong coherence and a different spatial resolution. Based on these imaging characteristics, a double-threshold method for extracting coherent targets from a large number of time-series ground-based SAR images is proposed. The method is tested on the Geheyan water conservancy project area with 1 330 ground-based SAR images. Experimental results prove that the method can effectively extract reliable coherent pixels. Moreover, when the number of ground-based SAR images exceeds 600, an average correlation coefficient threshold below 0.3 has almost no effect on the number of coherent targets, while the amplitude dispersion index threshold is the main factor affecting the number of coherent targets in ground-based SAR images.
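
      A minimal sketch of a double-threshold selection, combining the amplitude dispersion index with a mean-coherence threshold, is given below; the threshold values are illustrative, not those calibrated in the paper.

```python
import numpy as np

def select_coherent_targets(amp_stack, coh_mean, d_a_max=0.25, coh_min=0.3):
    """Double-threshold selection of coherent targets from an image stack.

    `amp_stack` has shape (n_images, rows, cols) with per-pixel amplitudes and
    `coh_mean` is the per-pixel average correlation coefficient. A pixel is kept
    if its amplitude dispersion index sigma/mu is low AND its mean coherence is
    high; both thresholds here are illustrative.
    """
    mu = amp_stack.mean(axis=0)
    sigma = amp_stack.std(axis=0)
    dispersion = sigma / np.maximum(mu, 1e-12)      # amplitude dispersion index
    return (dispersion < d_a_max) & (coh_mean > coh_min)

mask = select_coherent_targets(np.random.rand(50, 100, 100), np.random.rand(100, 100))
```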

    • Appearance-based Complex-Attributes Learning for Fine-Grained Recognition

      2016, 31(6):1205-1212.

      Abstract:Visual attributes are exploited as an intermediate representation in many applications because of their semantic interpretability and generalization ability. However, attribute learning needs great manual effort for choosing the attribute taxonomy and labeling attribute instances, which inevitably introduces human bias and leads to weak discriminative ability of the attributes. This is especially problematic for fine-grained recognition, where high discriminative ability is crucial for recognizing subtle differences. Motivated by human cognition and the multi-modal distribution of object appearance, the proposed complex attributes model the distribution of objects over various appearance factors and form a distributed representation with better descriptive and discriminative ability, which is competent for the high discrimination requirement of fine-grained recognition. Experiments are conducted on the publicly available fine-grained dataset CUB. The results show that the proposed method performs better than handcrafted attributes while still providing simple category-discriminative attributes.

    • Transmission Strategy for MU-MIMO Downlink Based on Lattice Reduction

      2016, 31(6):1213-1219.

      Abstract:With two singular value decompositions, the block diagonalization (BD) precoding algorithm can eliminate multi-user interference and decouple the multi-user multiple-input multiple-output (MU-MIMO) channel into multiple independent single-user multiple-input multiple-output (SU-MIMO) channels. However, its computational complexity grows with the number of users and the dimensions of the channel matrix. A transmission strategy for the MU-MIMO downlink based on lattice reduction is presented, in which linear detection based on lattice reduction replaces the second singular value decomposition of the traditional BD algorithm. Compared with the traditional BD algorithm, better BER performance and lower computational complexity are obtained.

    • Improved Method for Analyzing Microblog Orientation Based on Association Lexicon

      2016, 31(6):1220-1227.

      Abstract:At present, most research on microblog sentiment orientation focuses on emotional words, adverbs and negative words without considering the impact of connectives. To improve the accuracy of orientation analysis, a method for analyzing microblog orientation is proposed. The structural characteristics of associated words are analyzed in detail, and the combination rules of negative words, adversative words and conjunctions in microblogs are considered. In addition, a dedicated dictionary is created from existing resources, which contains an adversative-word lexicon, a connective lexicon and a negative-word lexicon. The influence of new network words and phrases in microblog text is also taken into account, so a new network-word dictionary is built as well. The microblog texts are then classified into three categories, negative, positive and neutral, by a support vector machine (SVM). By combining the lexicon-based method and SVM machine learning, better classification accuracy can be achieved. Experimental results on COASE 2014 verify that the method achieves higher classification accuracy.

    • Estimating Terminal Velocity at Base Station Based on Channel Information in Frequency Domain

      2016, 31(6):1228-1233.

      Abstract:The base station needs to use some physical-layer measurements, such as the signal-to-interference-plus-noise ratio (SINR), the user's moving speed, etc. In an orthogonal frequency division multiplexing (OFDM) system, the uplink channel information carried by the demodulation reference signal (DMRS) is used for speed estimation. A limiting factor and a modification factor are introduced to reduce the influence of noise, and the autocorrelation function (ACF) formula can be applied to different channel conditions, with or without a direct path. Simulations are carried out with different channel models and signal-to-noise ratios. The results demonstrate that the proposed method estimates the terminal moving speed stably and accurately.

    • Collaborative Filtering Recommendation Algorithm Based on Rating Prediction

      2016, 31(6):1234-1241.

      Abstract:When computing user similarity, the traditional collaborative filtering algorithm calculates score differences only on the items rated in common by two users. Because the number of common items differs from one user pair to another, the recommendation quality is not reliable. A new algorithm is proposed that takes both the number of common items and the popularity of the items into consideration when calculating user similarity. Experimental results show that the recommendation quality of the new algorithm is more than doubled compared with the traditional algorithm in both precision and recall. The results also show that using Pearson correlation as the similarity metric yields higher recommendation quality than Euclidean distance.
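
      The sketch below illustrates one way to combine Pearson correlation on common items with a weight that grows with the number of common items; the shrinkage form is an assumption for illustration, not the paper's exact formula, and the item-popularity weighting is omitted.

```python
import numpy as np

def user_similarity(ratings_u, ratings_v, shrink=25):
    """Pearson similarity over the items two users rated in common, shrunk
    by the number of common items so sparse overlaps count for less.

    `ratings_u` and `ratings_v` map item IDs to scores; `shrink` is an
    illustrative significance-weighting constant.
    """
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0
    u = np.array([ratings_u[i] for i in common], dtype=float)
    v = np.array([ratings_v[i] for i in common], dtype=float)
    if u.std() == 0 or v.std() == 0:
        return 0.0
    pearson = np.corrcoef(u, v)[0, 1]
    weight = len(common) / (len(common) + shrink)   # more common items, more trust
    return float(pearson * weight)

sim = user_similarity({"a": 5, "b": 3, "c": 4}, {"a": 4, "b": 2, "c": 5, "d": 1})
```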

    • Co-optimization of Subarray Partition and Weight Vector at Subarray Level for Uniform Linear Array

      2016, 31(6):1242-1249.

      Abstract:For a large array, suboptimum performance can be obtained at low cost by processing at the subarray level. Aiming at the uniform linear array (ULA), a subarray-level processing method is proposed. Considering the impact of both subarray division and the subarray-level weight vector on array performance, the subarray partition and the subarray-level weight vector are optimized simultaneously by particle swarm optimization (PSO). Simulation results show that the co-optimization method makes full use of the degrees of freedom offered by array division. Given a required array output performance, the proposed method easily yields the array partition and the subarray amplitude weights. Compared with the conventional method, the co-optimization method reduces the computation required for array design and shortens the design cycle; moreover, a more targeted pattern can be achieved. The method also provides a theoretical basis for subarray partition of large arrays.

    • Catenary Detection System Based on Multi-frame Character Recognition

      2016, 31(6):1250-1258.

      Abstract:Due to the insufficient stability and accuracy of traditional electrified-railway catenary detection, an automatic catenary detection system is proposed, which can automatically identify rod numbers as the basis for catenary pole positioning and image-index detection. The system first introduces the process of multi-frame rod number recognition, and then analyzes three common feature extraction methods, i.e. shape context (SC), corner representative shape context (CRSC) and center shape context (CSC). Finally, the CSC algorithm is chosen and integrated into the proposed system as the most effective method of rod number recognition. Experiments show that the proposed system achieves better recognition results in terms of real-time performance and reliability than other methods. Specifically, the system can run smoothly at about three hundred kilometers per hour and provides a practical way to detect pole positions on electrified railways.

    • Fault Detection by Ordered Tree Model

      2016, 31(6):1259-1264.

      Abstract:An ordered tree model is proposed for systems with wide-area distribution and complex structure, which can effectively realize fault detection in real time. The characteristics of the complex-structure system are analyzed and the ordered tree model is built. Relationships among the testing data are established using the measurement data from sensors and the correlations between ordered tree nodes. The data of the corresponding nodes are computed from the relationship between their measured and estimated values, the judgment factors are estimated, and fault detection for the complex system is performed. Simulation results demonstrate that the proposed method can effectively detect faults and provides a theoretical basis for system maintenance.
