Feature selection is an important preprocessing step in machine learning and data mining, and feature selection for class-imbalanced data is an active research topic in machine learning and pattern recognition. Most traditional feature selection and classification algorithms pursue high accuracy and assume that misclassifications incur no cost, or equal costs. In real applications, however, different misclassifications usually incur different costs. To obtain the feature subset with minimum misclassification cost, a supervised cost-sensitive feature selection algorithm based on sample neighborhood preserving is proposed; its main idea is to introduce the sample neighborhood into the cost-sensitive feature selection framework. Experimental results on eight real-life data sets demonstrate the superiority of the proposed algorithm.
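To make the idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: it combines the two ingredients named above, a sample neighborhood (here a leave-one-out 1-nearest-neighbor predictor) and an asymmetric misclassification cost matrix, inside a greedy forward feature selection loop that keeps a feature only if it reduces total cost. The toy data, the cost values, and the greedy strategy are all assumptions made for the example.

```python
import math

# Hypothetical toy data: three features per sample, binary labels.
# Feature 0 separates the classes; features 1 and 2 are uninformative.
X = [
    [1.0, 0.2, 5.0],
    [1.1, 0.1, 3.0],
    [0.9, 0.3, 4.0],
    [5.0, 0.2, 5.1],
    [5.2, 0.1, 3.2],
    [4.9, 0.3, 4.8],
]
y = [0, 0, 0, 1, 1, 1]

# Asymmetric misclassification costs: COST[true_label][predicted_label].
# Missing class 1 is five times more expensive than the reverse error.
COST = [[0.0, 1.0],
        [5.0, 0.0]]

def dist(a, b, feats):
    """Euclidean distance restricted to the feature subset `feats`."""
    return math.sqrt(sum((a[f] - b[f]) ** 2 for f in feats))

def loo_cost(feats):
    """Total leave-one-out 1-NN misclassification cost on subset `feats`."""
    total = 0.0
    for i in range(len(X)):
        # Nearest neighbor of sample i, excluding i itself.
        j = min((k for k in range(len(X)) if k != i),
                key=lambda k: dist(X[i], X[k], feats))
        total += COST[y[i]][y[j]]
    return total

def greedy_select():
    """Greedily add the feature that most reduces cost; stop when none helps."""
    selected = []
    remaining = list(range(len(X[0])))
    while remaining:
        best = min(remaining, key=lambda f: loo_cost(selected + [f]))
        if selected and loo_cost(selected + [best]) >= loo_cost(selected):
            break  # no further cost reduction
        selected.append(best)
        remaining.remove(best)
    return selected
```

On this toy data the loop selects only the discriminative feature 0, since adding either noise feature cannot lower the already-zero cost; a cost-sensitive criterion like this favors subsets that avoid the expensive error type rather than those that merely maximize accuracy.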