Abstract: Because of its ability to approximate arbitrary distributions, the Gaussian mixture model (GMM) has been widely applied in pattern recognition. GMM parameters are usually estimated with the iterative EM algorithm, whose computational cost becomes very high when large amounts of training data and large mixture numbers are involved. The CUDA technology provided by NVIDIA Corporation enables fast parallel computation by running thousands of threads simultaneously on a GPU. This paper presents a fast GMM training implementation using CUDA that is especially suitable for large amounts of training data. The implementation consists of two parts: the K-means algorithm for model initialization and the EM algorithm for parameter estimation. This fast training method has been applied to training language GMMs. Experimental results show that language model training on an NVIDIA GTS250 GPU is about 26 times faster than a traditional implementation on a single core of an Intel dual-core Pentium IV 3.0 GHz CPU.
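To illustrate the kind of data parallelism the abstract refers to, the sketch below shows a minimal CUDA kernel that computes E-step responsibilities of a diagonal-covariance GMM with one thread per training sample. This is not the paper's implementation; the array layouts, dimensions, and the one-thread-per-sample mapping are illustrative assumptions.

```cuda
// Minimal sketch (assumed layout, not the paper's code): one thread per sample
// evaluates all mixture components and normalizes with log-sum-exp.
#include <cuda_runtime.h>
#include <math.h>

#define DIM 16   // feature dimension (assumed)
#define MIX 32   // number of mixture components (assumed)

__global__ void estep_responsibilities(const float *x,     // [n][DIM] samples
                                        const float *mean,  // [MIX][DIM] component means
                                        const float *ivar,  // [MIX][DIM] inverse variances
                                        const float *logw,  // [MIX] log weight + log normalizer
                                        float *resp,        // [n][MIX] output responsibilities
                                        int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float logp[MIX];
    float maxlog = -1e30f;
    // Per-component log-likelihood of sample i (diagonal covariance)
    for (int m = 0; m < MIX; ++m) {
        float s = logw[m];
        for (int d = 0; d < DIM; ++d) {
            float diff = x[i * DIM + d] - mean[m * DIM + d];
            s -= 0.5f * diff * diff * ivar[m * DIM + d];
        }
        logp[m] = s;
        maxlog = fmaxf(maxlog, s);
    }
    // Log-sum-exp normalization yields posterior responsibilities
    float sum = 0.0f;
    for (int m = 0; m < MIX; ++m) sum += expf(logp[m] - maxlog);
    float logsum = maxlog + logf(sum);
    for (int m = 0; m < MIX; ++m)
        resp[i * MIX + m] = expf(logp[m] - logsum);
}

// Host-side launch (sketch), one thread per sample:
// estep_responsibilities<<<(n + 255) / 256, 256>>>(d_x, d_mean, d_ivar, d_logw, d_resp, n);
```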