Abstract: Bottleneck (BN) features, extracted from the middle layer of a deep neural network, have been widely applied to large vocabulary continuous speech recognition (LVCSR) because they allow the traditional Gaussian mixture model hidden Markov model (GMM-HMM) to be used for acoustic modeling. To extract discriminative bottleneck features, the parameters of the BN feature extractor and the GMM-HMM are optimized jointly under the minimum phone error (MPE) criterion, after the GMM-HMM has first been trained on conventional BN features. Unlike other discriminative training methods, the statistics are accumulated over large batches rather than the mini-batches used in conventional neural network optimization, which accelerates training. Experiments demonstrate that the proposed bottleneck feature extractor outperforms traditional methods, achieving a 9% relative reduction in word error rate.
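To illustrate the basic idea of a BN feature extractor, the sketch below builds a small feed-forward network with a narrow middle layer and returns that layer's activations as the features. All dimensions, the tanh nonlinearity, and the random weights are illustrative assumptions, not the paper's configuration; the MPE joint optimization itself is not shown.

```python
import numpy as np

# Hypothetical network shape: input, hidden, bottleneck, hidden, output.
# BN features are the activations of the narrow middle layer, which then
# replace raw acoustics as input to a GMM-HMM acoustic model.
rng = np.random.default_rng(0)

layer_dims = [40, 512, 42, 512, 10]   # assumed dimensions for illustration
bottleneck_index = 2                   # position of the 42-dim bottleneck layer

weights = [rng.standard_normal((a, b)) * 0.1
           for a, b in zip(layer_dims[:-1], layer_dims[1:])]
biases = [np.zeros(b) for b in layer_dims[1:]]

def extract_bn_features(x):
    """Forward-propagate up to the bottleneck layer and return its activations."""
    h = x
    for i in range(bottleneck_index):
        h = np.tanh(h @ weights[i] + biases[i])
    return h

frame = rng.standard_normal(40)        # one acoustic feature frame (e.g. filterbanks)
bn = extract_bn_features(frame)
print(bn.shape)                        # (42,)
```

In a full system, the layers above the bottleneck exist only during training (to provide a supervision signal); at extraction time the forward pass stops at the bottleneck, as here.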