Abstract: Since linear discriminant analysis (LDA) is a purely linear method that cannot effectively handle nonlinear problems, non-linearizing LDA is a crucial strategy for solving them. Nonlinear LDA mainly follows two strategies: neural networks and kernelization. A representative of the former is neural network discriminant analysis (NNDA). Although NNDA inherits the advantages of neural networks, such as self-adaptation, parallel processing, distributed storage, and nonlinear mapping, its training is quite time-consuming and prone to getting trapped in local minima. A representative of the latter is kernel linear discriminant analysis (KLDA). Although KLDA yields a globally optimal analytical solution, its computational cost is rather high, especially in large-scale scenarios, because its number of hidden nodes equals the number of training samples. Inspired by the idea of random mapping, this paper proposes a novel extreme nonlinear discriminant analysis (ENDA) by reconstructing NNDA via the extreme learning strategy. ENDA combines the self-adaptation of NNDA with KLDA's efficient computation of a globally optimal solution. Finally, experimental results on UCI datasets demonstrate the superiority of ENDA over KLDA and NNDA in classification accuracy.
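
The extreme learning strategy mentioned above can be illustrated with a minimal sketch: the hidden-layer weights of a single-hidden-layer network are drawn randomly and left untrained, and only the output weights are solved analytically by regularized least squares, avoiding both NNDA's iterative training and KLDA's sample-sized hidden layer. This is a generic extreme-learning-machine classifier on toy data, not the paper's exact ENDA formulation; the hidden-layer size, activation, and regularization constant are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class nonlinear problem (concentric regions); not from the paper
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

# ELM-style random hidden layer: weights drawn once and never trained,
# so the hidden-node count is a free parameter, not the sample size
n_hidden = 64
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)  # nonlinear random feature map

# Output weights from a regularized least-squares fit to one-hot targets:
# a closed-form, globally optimal solution (no iterative training)
T = np.eye(2)[y]
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

# Predict by taking the largest output; report training accuracy
pred = (np.tanh(X @ W + b) @ beta).argmax(axis=1)
train_acc = (pred == y).mean()
```

Because the only learned parameters come from one linear solve, training cost scales with the chosen hidden-layer width rather than the number of training samples, which is the efficiency argument the abstract makes against KLDA.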