Impact of noise and data sampling on stability of feature ranking techniques for biological datasets
Feature selection is an important preprocessing step when learning from bioinformatics datasets. Since these datasets often have high dimensionality (a large number of features), selecting the most important ones both improves predictive performance and reduces computation time. In addition, when the features in question are genes (as is the case for microarray datasets), knowing which genes are important is useful in its own right. Although many studies have examined feature selection in the context of classification performance, few analyze techniques in terms of stability: the ability of a technique to produce the same results (list of genes) regardless of changes to the dataset. In this study, we test the stability of eighteen feature rankers across four high-dimensional cancer-gene datasets. Because these datasets are also imbalanced (fewer positive-class instances than negative-class instances), we employ six versions of data sampling (three techniques, each with two class ratios). Finally, we inject artificial class noise to better evaluate how the rankers perform on realistic datasets, which can be prone to noise. The results demonstrate that among the rankers, the PRC- and Deviance-based Threshold-Based Feature Selection techniques, along with Signal-to-Noise, show the best stability on average. Among the sampling techniques investigated, Random Oversampling and the Synthetic Minority Oversampling Technique (both set to a 50:50 class ratio) produced the most stable rankings on average.
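To make the evaluation concrete, the sketch below illustrates the kind of stability measurement the abstract describes: a Signal-to-Noise feature ranker is applied to repeated subsamples of an imbalanced dataset (after Random Oversampling to a 50:50 class ratio), and stability is scored as the average pairwise Jaccard overlap of the resulting top-k gene lists. This is an illustrative sketch only; the paper's actual rankers, sampling procedures, and stability metric may differ, and all function names and parameters here are assumptions.

```python
import random
from statistics import mean, stdev

def signal_to_noise(values, labels):
    """Signal-to-Noise score: |mean difference| over the sum of class std devs."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    return abs(mean(pos) - mean(neg)) / (stdev(pos) + stdev(neg) + 1e-12)

def rank_features(X, y, k):
    """Return indices of the top-k features by Signal-to-Noise score."""
    n_feat = len(X[0])
    scores = [signal_to_noise([row[j] for row in X], y) for j in range(n_feat)]
    return sorted(range(n_feat), key=lambda j: -scores[j])[:k]

def random_oversample(X, y, rng):
    """Duplicate random minority-class instances until a 50:50 class ratio."""
    pos = [i for i, v in enumerate(y) if v == 1]
    neg = [i for i, v in enumerate(y) if v == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    idx = list(range(len(y))) + extra
    return [X[i] for i in idx], [y[i] for i in idx]

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def stability(X, y, k=10, runs=5, frac=0.8, seed=0):
    """Average pairwise Jaccard overlap of top-k lists across stratified subsamples."""
    rng = random.Random(seed)
    pos = [i for i, v in enumerate(y) if v == 1]
    neg = [i for i, v in enumerate(y) if v == 0]
    lists = []
    for _ in range(runs):
        # Stratified subsample so both classes keep enough instances.
        sub = rng.sample(pos, int(frac * len(pos))) + rng.sample(neg, int(frac * len(neg)))
        Xs, ys = random_oversample([X[i] for i in sub], [y[i] for i in sub], rng)
        lists.append(rank_features(Xs, ys, k))
    pairs = [(i, j) for i in range(runs) for j in range(i + 1, runs)]
    return sum(jaccard(lists[i], lists[j]) for i, j in pairs) / len(pairs)

# Synthetic imbalanced data: 80 instances, 100 features, features 0-4 informative.
rng = random.Random(42)
y = [1] * 20 + [0] * 60
X = [[rng.gauss(3.0 if (j < 5 and label == 1) else 0.0, 1.0) for j in range(100)]
     for label in y]
score = stability(X, y, k=10)
```

A stability score near 1.0 means the ranker returns nearly the same gene list regardless of which instances are sampled; a score near 0.0 means the lists barely overlap, which is the failure mode the study probes with sampling and injected class noise.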