Abstract: The recurrent neural network language model (RNNLM) is an important approach to statistical language modeling because it alleviates the data sparseness problem and captures longer-distance constraints. However, applying an RNNLM directly during decoding is impractical, since the lattice must be expanded many times and the search space explodes. Therefore, an N-best rescoring algorithm is proposed that uses the RNNLM to rerank the recognition hypotheses and optimize the decoding process. Experimental results show that the proposed method effectively reduces the word error rate of the speech recognition system.
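A minimal sketch of how such N-best rescoring might look in practice, assuming a first-pass decoder supplies acoustic and n-gram LM scores for each hypothesis and an `rnnlm_logprob` function returns RNNLM log-probabilities; the function names, scale factor, and interpolation weight below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of N-best rescoring with an RNNLM (assumed setup, not the paper's
# exact implementation): combine the acoustic score with an interpolation of
# the baseline n-gram LM score and the RNNLM score, then rerank.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Hypothesis:
    words: List[str]        # recognized word sequence from the first pass
    acoustic_score: float   # log acoustic likelihood from the decoder
    ngram_score: float      # log probability under the baseline n-gram LM


def rescore_nbest(
    hyps: List[Hypothesis],
    rnnlm_logprob: Callable[[List[str]], float],  # assumed: log P(words) under the RNNLM
    lm_weight: float = 10.0,   # language-model scale factor (assumed value)
    rnn_interp: float = 0.5,   # RNNLM / n-gram interpolation weight (assumed value)
) -> List[Hypothesis]:
    """Rerank an N-best list by a combined acoustic + interpolated LM score."""

    def combined_score(h: Hypothesis) -> float:
        # Linearly interpolate the two LM log scores, then add the scaled
        # result to the acoustic score.
        lm_score = rnn_interp * rnnlm_logprob(h.words) + (1.0 - rnn_interp) * h.ngram_score
        return h.acoustic_score + lm_weight * lm_score

    # Highest combined score first: the top entry is the reranked 1-best.
    return sorted(hyps, key=combined_score, reverse=True)


if __name__ == "__main__":
    # Toy stand-in for a trained RNNLM: penalizes longer sentences.
    toy_rnnlm = lambda words: -2.0 * len(words)
    nbest = [
        Hypothesis(["recognize", "speech"], acoustic_score=-120.0, ngram_score=-9.0),
        Hypothesis(["wreck", "a", "nice", "beach"], acoustic_score=-118.0, ngram_score=-14.0),
    ]
    best = rescore_nbest(nbest, toy_rnnlm)[0]
    print(" ".join(best.words))
```

Because only the N best first-pass hypotheses are rescored, the RNNLM never has to be queried inside the lattice, which is what keeps the second pass tractable.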