• Volume 32, Issue 5, 2017 Table of Contents
    • Data Stream Ensemble Classification Algorithm Based on Tri-training

      2017, 32(5):853-860. DOI: 10.16337/j.1004-9037.2017.05.001


      Abstract: Data stream classification is one of the important research tasks in the field of data mining. Most existing data stream classification algorithms require labeled data for training, yet in real applications labeled data are scarce in data streams. Labels can be obtained by manual annotation, but this is expensive and time consuming. Considering that unlabeled data are abundant and rich in information, a data stream ensemble classification algorithm based on Tri-training over both labeled and unlabeled data is proposed in this paper. The proposed algorithm divides the data stream into chunks with sliding windows and trains base classifiers with Tri-training on the first k chunks of labeled and unlabeled data. The classifiers are then iteratively updated by weighted voting until all unlabeled data are labeled. Meanwhile, the (k+1)th data chunk is predicted by the ensemble of the k Tri-training classifiers; the classifier with the highest classification error is discarded and a new classifier is trained on the current data chunk to update the model. Experiments on 10 UCI data sets show that, compared with traditional algorithms, the proposed algorithm significantly improves the classification accuracy of data streams even when 80% of the data are unlabeled.
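
      To make the chunked update concrete, here is a minimal Python sketch of the ensemble maintenance described above, assuming scikit-learn base classifiers, non-negative integer class labels, and a plain majority vote; the full Tri-training co-labeling loop is reduced to a single pseudo-labeling pass, so this is an illustration of the scheme rather than the paper's exact algorithm.

```python
# Sketch of the chunk-based ensemble update (not the paper's exact
# Tri-training; co-labeling is reduced to one pseudo-labeling pass).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_on_chunk(X_lab, y_lab, X_unlab):
    """Fit on labeled data, then refit once after pseudo-labeling the
    unlabeled part of the chunk (simplified Tri-training)."""
    clf = DecisionTreeClassifier().fit(X_lab, y_lab)
    if len(X_unlab):
        pseudo = clf.predict(X_unlab)
        clf = DecisionTreeClassifier().fit(
            np.vstack([X_lab, X_unlab]), np.concatenate([y_lab, pseudo]))
    return clf

def update_ensemble(ensemble, X_lab, y_lab, X_unlab, k=3):
    """Drop the member with the highest error on the new chunk's labeled
    part, then add a classifier trained on the current chunk."""
    if len(ensemble) == k:
        errors = [np.mean(c.predict(X_lab) != y_lab) for c in ensemble]
        ensemble.pop(int(np.argmax(errors)))
    ensemble.append(train_on_chunk(X_lab, y_lab, X_unlab))
    return ensemble

def vote(ensemble, X):
    """Majority vote; assumes non-negative integer class labels."""
    preds = np.stack([c.predict(X) for c in ensemble]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                               0, preds)
```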

    • Reduction of Gibbs Ringing Artifact Based on All-Phase Pre-processing

      2017, 32(5):861-868. DOI: 10.16337/j.1004-9037.2017.05.002


      Abstract: As an important tool of signal processing, classical Fourier reconstruction performs well for continuous signals but suffers from Gibbs artifact when reconstructing discontinuous signals. Since Gibbs artifact causes serious edge distortion and greatly degrades image quality, this paper presents an improved 2D all-phase Fourier reconstruction algorithm. The algorithm incorporates multiple higher-frequency harmonics with a limited number of discrete Fourier transform (DFT) coefficients, and can therefore reconstruct discontinuous signals with higher precision. When the proposed algorithm is applied to magnetic resonance imaging (MRI) reconstruction, experimental results show that, given the same number of 2D Fourier coefficients, it restrains Gibbs artifact more effectively than the classical Fourier transform and improves the quality of the reconstructed images.

    • Low Energy Consumption Data Collection Protocol Based on Trajectory Constrained Mobile Sink

      2017, 32(5):869-878. DOI: 10.16337/j.1004-9037.2017.05.003


      Abstract: Energy consumption for data collection in wireless sensor networks has long been a research focus. In this paper, we explore protocols for designing the constrained trajectory of a mobile sink for data collection. A universal system model for constrained-trajectory design in wireless sensor networks is first presented, formulated as the problem of maximum total length reduction for a constrained trajectory (MTRC), which is proved to be NP-hard. Secondly, a greedy algorithm for trajectory constraint with low energy consumption (TCLEC) is designed, and the movement trajectory of the mobile sink is constructed by maximizing the effective length reduction through a TSP approximation algorithm. Theoretical analysis and simulation results show that TCLEC achieves high computational efficiency in initializing and optimizing the data collection tree of the network topology. Compared with other hierarchical data collection methods based on a mobile sink, energy consumption is reduced by about 7%.

    • Grey Wolf Optimization Algorithm Based on Strengthening Hierarchy of Wolves

      2017, 32(5):879-889. DOI: 10.16337/j.1004-9037.2017.05.004


      Abstract: To address the low precision and local-optimum stagnation of the grey wolf optimization (GWO) algorithm on complex optimization problems, a grey wolf optimization algorithm based on strengthening the hierarchy of wolves (GWOSH) is proposed. The new algorithm provides two hunting modes, a following mode and a self-exploration mode, and each grey wolf chooses its mode according to its own social hierarchy. In the following mode, a grey wolf relies only on the positions of higher-level wolves to guide its search toward the optimal area. In the self-exploration mode, an individual examines the locations of the higher-level wolves together with its own position, and judges the position of the prey independently based on this information. In both modes, a survival-of-the-fittest selection rule is introduced to steer the evolutionary direction of the population. Optimization results on 12 benchmark functions show that, compared with existing algorithms, GWOSH has stronger global search ability, avoids premature convergence more effectively, and is better suited to high-dimensional complex optimization problems.
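
      The following sketch shows one iteration of a two-mode update on top of the standard GWO equations (leader-guided moves with the usual A and C coefficients); which ranks use which mode, and the Gaussian self-exploration step, are illustrative assumptions, since the abstract does not spell them out.

```python
# One iteration of a two-mode grey wolf update (illustrative: here the
# three leaders self-explore and the rest follow; GWOSH's actual mode
# assignment per rank is not specified in the abstract).
import numpy as np

rng = np.random.default_rng(0)

def gwo_step(X, fitness, a):
    """X: (n_wolves, dim) positions; a: control parameter decreasing
    from 2 to 0 over the run, as in standard GWO."""
    X = X[np.argsort([fitness(x) for x in X])]   # sort by fitness
    leaders, dim = X[:3], X.shape[1]             # alpha, beta, delta
    new_X = X.copy()
    for i in range(len(X)):
        if i >= 3:   # following mode: guided by the three leaders
            cand = np.mean([
                L - (2 * a * rng.random(dim) - a)
                  * np.abs(2 * rng.random(dim) * L - X[i])
                for L in leaders], axis=0)
        else:        # self-exploration mode (assumed Gaussian move)
            cand = X[i] + a * rng.standard_normal(dim)
        if fitness(cand) < fitness(X[i]):        # survival of the fittest
            new_X[i] = cand
    return new_X
```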

    • Research on Visual Saliency of Crowd Movement

      2017, 32(5):890-897. DOI: 10.16337/j.1004-9037.2017.05.005


      Abstract: In public places, pedestrians usually move in groups, called motion groups. The motion group with the highest visual saliency is the focus of scene understanding. A new measurement of a motion group's visual saliency is defined in this paper, comprising four descriptors: scale, speed, group compactness and group variation across frames. Based on these descriptors, a new method is proposed for detecting the most salient group. Firstly, the optical flow method is used to compute optical flow vectors. Then, a hierarchical clustering algorithm is used to group the crowd. Finally, the visual saliency value of each group is computed to find the group with the highest value. Experimental results show that the proposed method detects the most salient groups effectively. The research can be applied to computer vision fields such as crowd scene understanding, crowd motion analysis and crowd scene classification.
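
      A minimal sketch of the group-scoring step, assuming flow vectors are already available from any optical-flow routine: positions and flows are clustered hierarchically, and each group is scored on scale, speed and compactness. The fourth descriptor (variation across frames) needs temporal history and is omitted here, and the combination weights are made-up placeholders.

```python
# Score motion groups from per-point flow vectors; weights w are
# placeholders, and the frame-variation descriptor is omitted.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def most_salient_group(points, flows, n_groups=3, w=(1.0, 1.0, 1.0)):
    """points: (N, 2) pixel positions; flows: (N, 2) optical-flow
    vectors. Returns group labels and the most salient group id."""
    feats = np.hstack([points, flows])
    labels = fcluster(linkage(feats, method='ward'),
                      n_groups, criterion='maxclust')
    best, best_score = None, -np.inf
    for g in np.unique(labels):
        idx = labels == g
        scale = idx.sum()                                   # group size
        speed = np.linalg.norm(flows[idx], axis=1).mean()   # mean motion
        compact = 1.0 / (points[idx].std(axis=0).mean() + 1e-6)
        score = w[0] * scale + w[1] * speed + w[2] * compact
        if score > best_score:
            best, best_score = g, score
    return labels, best
```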

    • Maximum Likelihood Algorithm for Single-Observer Passive Coherent Location Using TDOA Measurements

      2017, 32(5):898-905. DOI: 10.16337/j.1004-9037.2017.05.006


      Abstract: To solve the single-observer passive location problem using multiple illuminators of opportunity, a time difference of arrival (TDOA) location algorithm based on maximum likelihood is proposed. Firstly, according to the functional relationship between the TDOA measurements and the target location, the likelihood function of the target location is constructed. Newton's method is then applied to find the global maximum of the nonlinear likelihood function and determine the target position. The least squares solution of the target location is derived and used as the initial guess for Newton's method. Finally, the theoretical error and the Cramer-Rao lower bound (CRLB) are derived and proved to be equal. Simulation results demonstrate that the proposed algorithm outperforms existing algorithms and attains the CRLB at moderate noise levels. Moreover, the influence of target position, illuminator number and illuminator position on localization accuracy is analyzed from the geometric dilution of precision figure.
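
      As a worked illustration of the estimator, the sketch below runs Gauss-Newton (the natural damped form of Newton's method for this least-squares likelihood) on the bistatic-delay model tau_i = (|x - s_i| + |x - r| - |s_i - r|)/c with a single receiver r and M illuminators s_i; the geometry, units and stopping rule are assumptions, not the paper's exact setup.

```python
# Gauss-Newton solution of the TDOA/PCL least-squares problem.
import numpy as np

def predict_tdoa(x, s, r, c=3e8):
    """Bistatic delays for target x (2,), illuminators s (M, 2),
    receiver r (2,)."""
    return (np.linalg.norm(x - s, axis=1) + np.linalg.norm(x - r)
            - np.linalg.norm(s - r, axis=1)) / c

def ml_locate(x0, s, r, tau, c=3e8, iters=30):
    """Iteratively refine x0 by solving the linearized residual system;
    for Gaussian TDOA noise this maximizes the likelihood."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        res = tau - predict_tdoa(x, s, r, c)
        # Jacobian of the predicted delays w.r.t. the target position
        J = ((x - s) / np.linalg.norm(x - s, axis=1, keepdims=True)
             + (x - r) / np.linalg.norm(x - r)) / c
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        x += step
        if np.linalg.norm(step) < 1e-6:
            break
    return x
```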

    • Asymptotic Performance Analysis and Degree Distribution Design for Systematic Luby Transform Codes over Binary Erasure Channel

      2017, 32(5):906-912. DOI: 10.16337/j.1004-9037.2017.05.007


      Abstract: The asymptotic performance formula of systematic LT (SLT) codes over the binary erasure channel (BEC) is first derived based on AND-OR tree analysis, and its lower limit is given. Simulation results show that the actual bit error ratio (BER), the asymptotic performance and the lower limit match each other closely when the overhead is large enough. The degree distribution is then optimized with an improved systematic linear programming (ISLP) model in accordance with the asymptotic performance. The optimized degree distribution is clearly superior to the robust soliton distribution (RSD) and the truncated degree distribution (TDD). Furthermore, the asymptotic behavior of the optimized degree distribution can be controlled by the given overhead and BER; in other words, the desired BER can be obtained within the chosen overhead, which also governs the overhead required for complete decoding. Simulation results show that the overhead required for data recovery is close to the one we set. Comparisons of BER, the overhead required for data recovery, and encoding/decoding time between LT and SLT codes show that SLT codes outperform LT codes and recover data faster.

    • Iterative Projection Algorithm to Separate Garbled Secondary Surveillance Radar Replies

      2017, 32(5):913-920. DOI: 10.16337/j.1004-9037.2017.05.008


      Abstract: To overcome decoding errors of secondary surveillance radar (SSR) caused by garbled replies, an iterative projection algorithm for separating the garbled replies is proposed. Firstly, a signal model for the garbled replies is established in which influence factors such as antenna structure and residual carrier frequency are taken into full consideration, and an optimization model with two matrix variables is built from the maximum likelihood estimation of the unknown variables in the model. Then, the initial value of the replies is estimated by noncircular complex fast independent component analysis (nc-FastICA). Finally, an iterative projection algorithm based on the zero/constant modulus (ZCM) property of SSR replies is proposed to separate the garbled replies. Numerical simulations show that the proposed method can accurately separate multiple replies in complex environments, such as closely spaced transponders and antenna inaccuracies, and that its separation performance clearly surpasses that of nc-FastICA.

    • Nonnegative Matrix Factorization Based Deep Low-Dimensional Feature Extraction Approach for Speech Recognition

      2017, 32(5):921-930. DOI: 10.16337/j.1004-9037.2017.05.009


      Abstract: As a type of low-dimensional feature based on deep neural networks (DNNs), the bottleneck feature (BNF) has achieved great success in continuous speech recognition. However, the existence of the bottleneck layer reduces the frame accuracy of the output layer when training a bottleneck deep neural network (BNDNN), which in turn hurts the performance of the bottleneck feature. To solve this problem, a nonnegative matrix factorization based low-dimensional feature extraction approach using a DNN without a bottleneck layer is proposed in this paper. Specifically, semi-nonnegative and convex-nonnegative matrix factorization algorithms are applied to a hidden-layer weight matrix to obtain a basis matrix, which serves as the new feature-layer weight matrix, and a new type of feature is extracted by forward-passing the input data without setting a bias vector in the new feature layer. Experiments show that the feature exhibits a relatively stable pattern across different tasks and network structures. For corpora with sufficient training data, the proposed features achieve almost the same recognition performance as conventional bottleneck features. Under low-resource conditions, the recognition accuracy of the new feature-tandem system clearly outperforms both the DNN hybrid system and the bottleneck-tandem system.
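
      The semi-NMF step can be sketched as below, following the multiplicative updates of Ding, Li and Jordan (2010); applying it to a hidden-layer weight matrix W and reusing the factor F as the new feature-layer weights is our reading of the abstract, so treat the wiring as an assumption.

```python
# Semi-NMF with multiplicative updates (Ding, Li & Jordan, 2010):
# W (signs unrestricted) ~= F @ G.T with G >= 0.
import numpy as np

def semi_nmf(W, k, iters=200, eps=1e-9):
    """W: (d, n) hidden-layer weight matrix; returns F (d, k) to use as
    the new feature-layer weights and nonnegative G (n, k)."""
    rng = np.random.default_rng(0)
    G = rng.random((W.shape[1], k))
    pos = lambda A: (np.abs(A) + A) / 2   # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2   # elementwise negative part
    for _ in range(iters):
        F = W @ G @ np.linalg.pinv(G.T @ G)         # closed-form F step
        WtF, FtF = W.T @ F, F.T @ F
        G *= np.sqrt((pos(WtF) + G @ neg(FtF)) /
                     (neg(WtF) + G @ pos(FtF) + eps))
    return F, G
```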

    • Multiple Classifier System for Entity Resolution Using Resampling and Ensemble Selection

      2017, 32(5):931-938. DOI: 10.16337/j.1004-9037.2017.05.010


      Abstract: Classifiers are often used in entity resolution to classify record pairs into matches, non-matches and possible matches based on field similarity vectors, in which case the performance of the classifiers directly determines the performance of entity resolution. To improve classification accuracy, a multiple classifier system is constructed. We make full use of the characteristics of entity resolution to distinguish ambiguous instances before classification, vary the resampling ratio to generate a group of resampled instance sets, and use them to train classifiers for a parallel multiple classifier system. Moreover, we emphasize diversity and sparsity among classifiers to select the best classifier subset, and use nonlinear programming and extreme values to solve the ensemble selection problem, respectively. Empirical experiments show that, thanks to resampling and ensemble selection, the proposed multiple classifier system is more accurate than state-of-the-art ones.

    • Data Reconstruction in WSNs via Matrix Completion with Structural Noise

      2017, 32(5):939-947. DOI: 10.16337/j.1004-9037.2017.05.011


      Abstract: Much scientific work requires analysis of environmental data, which are usually collected by wireless sensor networks (WSNs) deployed in research areas. The integrity and accuracy of the collected data determine the reliability of the research results. However, data loss and errors often occur during data collection, which affects the availability of the collected data. It is therefore necessary to reconstruct the environmental data from incomplete and erroneous sensory data. Based on the low-rank structure of environmental data, an efficient data reconstruction approach via matrix completion with structural noise (DRMCSN) is proposed, which formulates the data reconstruction problem as an L2,1-norm regularized matrix completion model. Experimental results on a real dataset demonstrate that the proposed approach not only reconstructs the environmental data effectively, but also identifies the sensor nodes that collect erroneous data.
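
      The two proximal operators behind such an L2,1-regularized model are easy to state; the simple alternating loop below is an illustrative solver for a model of the form min ||L||_* + lam*||S||_{2,1} subject to agreement on observed entries, not the paper's DRMCSN algorithm.

```python
# Proximal building blocks for nuclear norm + L2,1 recovery; the
# alternating loop is a simple heuristic, not the paper's solver.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def col_shrink(X, tau):
    """Column-wise shrinkage: prox of the L2,1 norm. Zeroing whole
    columns at once is what lets the model flag faulty nodes."""
    norms = np.maximum(np.linalg.norm(X, axis=0, keepdims=True), 1e-12)
    return X * np.maximum(1 - tau / norms, 0)

def reconstruct(M, mask, lam=0.1, mu=1.0, iters=200):
    """M: sensed matrix (zeros where missing); mask: 1 if observed."""
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        L = svt(mask * (M - S) + (1 - mask) * L, 1.0 / mu)  # low rank
        S = col_shrink(mask * (M - L), lam / mu)  # structural noise
    return L, S
```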

    • Image Boundary Extraction Based on Pixel Coverage Segmentation and Chan-Vese Model

      2017, 32(5):948-957. DOI: 10.16337/j.1004-9037.2017.05.012


      Abstract: To address the unsatisfactory segmentation that traditional algorithms produce on images with blurred boundaries, a coarse-to-fine approach for image boundary extraction is proposed in this paper, combining pixel coverage segmentation and the Chan-Vese model. Building on a modified coverage segmentation algorithm and an active-contours method, images are first segmented with the original coverage segmentation algorithm, and a multi-direction fuzzy morphological boundary detection algorithm is used to extract the boundaries between objects. An improved pixel coverage segmentation method is then applied to redistribute the coverage values of boundary pixels. Finally, boundary extraction on the refined images is carried out with the active-contours algorithm. Qualitative comparisons of segmentation results, noise-immunity tests and contrast experiments on the extracted boundaries are carried out. Experimental results show that the proposed method extracts boundaries better than the state-of-the-art methods reported in comparable literature.

    • Revised Model of Fuzzy Cognitive Diagnosis Framework

      2017, 32(5):958-969. DOI: 10.16337/j.1004-9037.2017.05.013


      Abstract: To assess students' mastery of knowledge points and predict their scores in future tests, cognitive diagnosis models explore students' latent traits from their test scores and the relationship between test items and knowledge points. However, for subjective questions, existing cognitive diagnosis models generally ignore the influence of the number of knowledge points mastered, the degree of mastery, and the importance of the knowledge points. A revised fuzzy cognitive diagnosis framework (FuzzyCDF) model is therefore proposed in this paper; it assumes that, for subjective questions, the probability of answering correctly increases with the number of knowledge points mastered, and it takes into account the influence of the importance of knowledge points on cognitive diagnosis. Experimental results illustrate that the revised FuzzyCDF model further improves the accuracy of cognitive diagnosis.

    • Fast Image Clustering Based on Convolutional Neural Network and Binary K-means

      2017, 32(5):970-979. DOI: 10.16337/j.1004-9037.2017.05.014


      Abstract: The visual features used in state-of-the-art image clustering methods lack independent learning ability, which limits their expressive power, and traditional clustering methods are inefficient on large image datasets. A fast image clustering method based on a convolutional neural network and binary K-means is therefore proposed in this paper. Firstly, a large-scale convolutional neural network is employed to learn the intrinsic structure of the training images so as to improve the discriminative and representational power of the visual features. Secondly, hashing is applied to map the high-dimensional deep features into a low-dimensional Hamming space, and a multi-index hash table is used to index the initial centers so that nearest-center lookup becomes extremely efficient. Finally, image clustering is performed efficiently by the binary K-means algorithm. Experimental results on the ImageNet-1000 dataset indicate that the proposed method effectively enhances the expressive power of image features, increases clustering efficiency, and outperforms state-of-the-art methods.
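
      A minimal binary K-means over packed hash codes looks as follows; the multi-index hash table lookup is replaced by a brute-force Hamming search for clarity, and the majority-bit center update is a common choice that keeps the centers binary.

```python
# Binary K-means over uint8-packed hash codes with Hamming distance.
import numpy as np

def hamming(codes, centers):
    """Pairwise Hamming distances between packed codes."""
    x = np.unpackbits(codes, axis=1)[:, None, :]
    c = np.unpackbits(centers, axis=1)[None, :, :]
    return (x ^ c).sum(axis=2)

def binary_kmeans(codes, k, iters=20, seed=0):
    """codes: (N, B) uint8 array of packed binary hash codes."""
    rng = np.random.default_rng(seed)
    centers = codes[rng.choice(len(codes), k, replace=False)]
    for _ in range(iters):
        assign = hamming(codes, centers).argmin(axis=1)
        for j in range(k):
            members = np.unpackbits(codes[assign == j], axis=1)
            if len(members):
                # majority bit per position keeps the center binary
                centers[j] = np.packbits(members.mean(axis=0) >= 0.5)
    return assign, centers
```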

    • Sparse Circular Array Pattern Optimization Based on MOPSO and Convex Optimization

      2017, 32(5):980-987. DOI: 10.16337/j.1004-9037.2017.05.015


      Abstract: To effectively reduce the peak side-lobe level of a sparse array pattern while suppressing grating lobes, a pattern synthesis algorithm combining multi-objective particle swarm optimization (MOPSO) with convex optimization is presented in this paper. MOPSO serves as the global search and convex optimization as the local search for the optimal solution. The optimization variables include not only the weights of the array but also the element positions, which provides more freedom to control the performance of the sparse array. Simulation of a thirty-element sparse circular array shows that, compared with MOPSO alone, the proposed algorithm, which uses MOPSO and convex optimization to optimize the positions and weights respectively, can simultaneously suppress grating lobes and achieve a peak side-lobe level below -19.3 dB.

    • Cooperative Spectrum Sensing Algorithm Based on Joint Optimization of Energy Efficiency and Collision Probability

      2017, 32(5):988-996. DOI: 10.16337/j.1004-9037.2017.05.016


      Abstract: Cooperative spectrum sensing can improve the sensing performance of a cognitive radio (CR) network, but the improvement comes at the cost of higher energy consumption. Meanwhile, secondary users gain more opportunities to access spectrum holes; however, as the cognitive network throughput grows, the probability of data collision between primary and cognitive users increases continuously. This paper proposes a cooperative spectrum sensing algorithm that jointly considers energy efficiency and data collision probability. In this algorithm, we choose the optimal sensing check point to judge each node's channel state, and nodes with bad channel states are discarded at the fusion center. This not only eliminates the effect of bad-channel nodes on the global decision but also improves energy efficiency. Simulation results show that the proposed algorithm effectively improves the spectrum sensing performance of the CR network and prolongs its lifecycle.

    • Outlier Detection Based on Clustering and KDE Hypothesis Testing

      2017, 32(5):997-1004. DOI: 10.16337/j.1004-9037.2017.05.017


      Abstract: Outlier detection is a core problem in data mining and is widely used in industrial production. Accurate and efficient outlier detection reflects the condition of an industrial system in time and provides a reference for the relevant personnel. Traditional outlier detection algorithms cannot efficiently detect outliers in data with complicated change modes, small change ranges and streaming characteristics. In this paper a new outlier detection method is proposed. Firstly, the data are grouped into several categories by clustering; data in the same category share common characteristics, so we assume they follow the same distribution, which is simpler to fit than that of the whole dataset. The original complex data distribution can thus be factored into several simple distributions. Secondly, kernel density estimation (KDE) hypothesis testing is used to detect abnormal values. Experiments on UCI datasets and real industrial data show that the proposed method is more efficient than traditional methods.
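
      A compact sketch of the cluster-then-test pipeline, using scikit-learn's KMeans and KernelDensity; flagging points whose density falls below an alpha quantile is a simplification standing in for the paper's KDE hypothesis test, and the bandwidth is an arbitrary placeholder.

```python
# Cluster, then flag low-density points per cluster via KDE.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

def kde_outliers(X, n_clusters=3, alpha=0.01, bandwidth=0.5):
    """Returns a boolean mask marking suspected outliers in X."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    outliers = np.zeros(len(X), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        logp = (KernelDensity(bandwidth=bandwidth)
                .fit(X[idx]).score_samples(X[idx]))
        # reject points whose density falls below the alpha quantile
        outliers[idx[logp < np.quantile(logp, alpha)]] = True
    return outliers
```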

    • Relative k Sub-Convex-Hull Classifier Based on Feature Selection

      2017, 32(5):1005-1011. DOI: 10.16337/j.1004-9037.2017.05.018


      Abstract: The k sub-convex-hull classifier is widely used in practical problems, but as the dimension of the problem increases, the convex-hull distances it computes become very close or even equal, which seriously affects classification performance. To resolve this problem, a relative k sub-convex-hull classifier based on feature selection (FRCH) is designed in this paper. Firstly, the relative k sub-convex-hull is defined to remedy the shortcomings of the absolute convex-hull distance. Feature selection is then carried out using a discriminant regularization technique in the k neighborhood, and the feature selection method is embedded in the optimization model of the relative k convex hull. In this way, an adaptive feature subset can be extracted per category for each test sample, and a valid relative k sub-convex-hull distance obtained. Experimental results show that FRCH not only makes feature selection practicable, but also significantly improves the classification performance of the k sub-convex-hull classifier.

    • Hybrid Language Model Speech Recognition Method Based on MTL-DNN System Combination

      2017, 32(5):1012-1021. DOI: 10.16337/j.1004-9037.2017.05.019


      Abstract: Speech recognition systems based on a hybrid language model can recognize out-of-vocabulary (OOV) words, but the recognition accuracy on OOVs is far below that on in-vocabulary (IV) words. To further improve hybrid speech recognition, a system combination method based on complementary acoustic models is proposed in this paper. Firstly, two hybrid speech recognition systems based on hidden Markov models and deep neural networks (HMM-DNN) are set up using different acoustic modeling units. Exploiting the relevance of the two recognition tasks, multi-task learning (MTL) is then used to share the input and hidden layers of the DNNs and improve modeling accuracy by joint training. Finally, the outputs of the two systems are combined with recognizer output voting error reduction (ROVER). Experimental results show that the MTL-DNN modeling method obtains better recognition performance than single-task learning DNNs (STL-DNN), and that combining the two systems further reduces the final word error rate (WER).

    • Membrane Structure of Decision Evolution Sets

      2017, 32(5):1022-1033. DOI: 10.16337/j.1004-9037.2017.05.020


      Abstract: As a new research approach for decision rules, the decision evolution set is a theory for handling the evolution of decision rules over time series; it shifts the focus from static information systems to dynamic time series and studies the time-dependent evolution of decision information systems. At present, the evolution trace defined by the normal structure of a decision evolution set is a graph in n-dimensional space, which is difficult to describe. A membrane structure is therefore newly proposed in this paper to describe the decision evolution set. In the membrane structure, reduced attributes receive the same attention: as time moves from point t_{i-1} to point t_i, attributes enter different membranes according to their influence on the decision, and the data flow generated in the process is labelled at the same time. The problem of visualizing the evolution trajectory of a decision information system is thereby solved, and the evolution process and trace of a decision evolution system are demonstrated with examples.

    • Step-by-Step Wideband Spectrum Sensing Method Based on Signal Sample Autocorrelation

      2017, 32(5):1034-1043. DOI: 10.16337/j.1004-9037.2017.05.021


      Abstract: The bandwidth of spectrum sensing can be enlarged on a software radio platform by using multi-step frequency-domain energy detection. However, energy detection is sensitive to noise uncertainty, whereas signal sample autocorrelation is robust to it. To improve the detection performance of software radio, a step-by-step wideband spectrum sensing method based on signal sample autocorrelation is proposed. Firstly, the principle of signal sample autocorrelation and a step-by-step wideband spectrum sensing procedure based on it are described. The whole wideband spectrum sensing procedure is then simulated in MATLAB. The simulation results demonstrate that the proposed method achieves the required detection performance under different signal-to-noise ratios (SNRs). In addition, to balance bandwidth resolution against detection speed, we further propose applying variable step values in the two stages of wideband spectrum sensing based on signal sample autocorrelation detection, which yields higher frequency resolution and shorter detection time.
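
      The core detector can be sketched as below: the statistic is the ratio of summed autocorrelation magnitudes at small lags to the lag-zero value, which stays near zero for white noise regardless of its power (hence the robustness to noise uncertainty). The exact statistic and threshold rule in the paper may differ; in practice the threshold is calibrated by noise-only Monte Carlo runs for a target false-alarm rate.

```python
# Autocorrelation-ratio detector: robust to unknown noise power.
import numpy as np

def autocorr_statistic(x, max_lag=10):
    """Summed |autocorrelation| at lags 1..max_lag over the lag-0
    value; near 0 for white noise, larger for correlated signals."""
    x = x - x.mean()
    r0 = np.dot(x, x) / len(x)
    r = [np.dot(x[:-k], x[k:]) / len(x) for k in range(1, max_lag + 1)]
    return np.sum(np.abs(r)) / r0

def detect(x, threshold, max_lag=10):
    """Declare the band occupied if the statistic exceeds a threshold
    calibrated on noise-only samples."""
    return autocorr_statistic(x, max_lag) > threshold
```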

    • Application of Nonlinear Granger Causality in Analysis of Physiological Signals During Sleep

      2017, 32(5):1044-1051. DOI: 10.16337/j.1004-9037.2017.05.022


      Abstract: A method based on nonlinear Granger causality is used to analyze sleep physiological signals. Polynomial, Gaussian and sigmoid kernel functions map the data from the low-dimensional input space into a high-dimensional feature space, in which the linear Granger method can be used to analyze the biomedical signals. The analysis shows that the causal effects from electrocardiogram (ECG) signals to electroencephalogram (EEG) signals, from ECG signals to blood pressure signals, and from blood pressure signals to ECG signals are more significant than those in the opposite directions. In addition, the results for sleep subjects show more significant differences than those for normal subjects. The simulation results validate that the method reflects the causality in sleep physiological signals more objectively.
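
      The linear Granger computation that the kernel mapping reduces to in feature space can be sketched as follows; the lag order p and the log variance-ratio index are conventional choices, not taken from the paper.

```python
# Linear Granger causality index from x to y via AR regressions.
import numpy as np

def granger_index(x, y, p=5):
    """log(var(restricted) / var(full)): variance of y's prediction
    residuals from its own past alone vs. its own past plus x's past.
    x, y: 1-D numpy arrays; a large positive value means x helps
    predict y (x 'Granger-causes' y)."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    both = np.column_stack(
        [own] + [x[p - k:n - k, None] for k in range(1, p + 1)])
    res_r = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    res_f = Y - both @ np.linalg.lstsq(both, Y, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())
```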

    • Short Text Clustering Based on Feature Word Embedding

      2017, 32(5):1052-1060. DOI: 10.16337/j.1004-9037.2017.05.023


      Abstract: Aiming at the poor clustering performance on short texts caused by sparse features and the rapid turnover of short texts on the internet, a short text clustering algorithm based on feature word embedding is proposed in this paper. Firstly, a formula for feature word extraction based on word part-of-speech (POS) and length weighting is defined and used to extract feature words to represent the short texts. Secondly, word embeddings representing the semantics of the feature words are obtained by training a continuous skip-gram model on a large-scale corpus. Finally, the word mover's distance is introduced to calculate the similarity between short texts and applied in a hierarchical clustering algorithm to realize short text clustering. Evaluation on four testing datasets shows that the proposed algorithm is significantly superior to traditional clustering algorithms, with a mean F-measure of 58.97%, higher than the second-best result.
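
      As an illustration of the distance step, the sketch below uses the relaxed word mover's distance (a standard lower bound on WMD that lets each word travel to its nearest counterpart) inside SciPy's hierarchical clustering; the embedding dictionary `emb` and taking the max of the two directional relaxations are assumptions, not the paper's exact procedure.

```python
# Hierarchical clustering of short texts under relaxed WMD.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import cdist, squareform

def rwmd(words_a, words_b, emb):
    """Relaxed word mover's distance between two feature-word lists,
    given an embedding dict mapping word -> vector."""
    D = cdist(np.array([emb[w] for w in words_a]),
              np.array([emb[w] for w in words_b]))
    # each word travels to its nearest counterpart, in both directions
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())

def cluster_texts(docs, emb, n_clusters):
    """docs: list of feature-word lists; returns cluster labels."""
    n = len(docs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = rwmd(docs[i], docs[j], emb)
    Z = linkage(squareform(D), method='average')
    return fcluster(Z, n_clusters, criterion='maxclust')
```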

    • Saliency and Motion Weighted Video Quality Assessment

      2017, 32(5):1061-1068. DOI: 10.16337/j.1004-9037.2017.05.024


      Abstract: To evaluate video quality accurately and consistently with subjective evaluation results, a video quality assessment method weighted by saliency regions and motion characteristics is proposed in this paper. The method builds on and improves the traditional structural similarity index measurement (SSIM). Spatial and temporal saliency are first extracted by spectrum analysis and by a visual attention model combined with motion characteristics, respectively. Frame saliency is then obtained by dynamic fusion of the temporal and spatial saliency. Finally, the quality assessment index for the entire video is obtained from frame-saliency-weighted SSIM. Results on the LIVE VQA standard dataset show that this index is closer to subjective assessments of video quality by the human visual system.
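
      The final pooling step can be sketched as below, assuming grayscale frames, scikit-image's SSIM, and an externally supplied saliency map per frame; the spectrum-analysis and fusion stages that produce the map are omitted.

```python
# Saliency-weighted SSIM pooling over grayscale frames.
import numpy as np
from skimage.metrics import structural_similarity

def weighted_ssim(ref, dist, saliency):
    """Frame score: saliency-weighted mean of the per-pixel SSIM map."""
    _, ssim_map = structural_similarity(
        ref, dist, full=True, data_range=ref.max() - ref.min())
    w = saliency / saliency.sum()
    return float((w * ssim_map).sum())

def video_score(ref_frames, dist_frames, saliency_maps):
    """Average the weighted frame scores over the whole sequence."""
    return float(np.mean([weighted_ssim(r, d, s) for r, d, s
                          in zip(ref_frames, dist_frames, saliency_maps)]))
```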
