The stability of feature selection is defined as the robustness of the sets of selected features with respect to small variations in the data on which the feature selection is conducted. To quantify stability, several datasets from the same data generating process can be used. Alternatively, a single dataset can be split into parts by resampling. Either way, all datasets used for feature selection must contain exactly the same features. The feature selection method of interest is applied to all of the datasets and the sets of chosen features are recorded. The stability of the feature selection is assessed based on the sets of chosen features using stability measures.
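As an illustration, feature sets can be produced by repeating a feature selection on bootstrap samples of a single dataset. The data, the target column y and the selection rule in the following sketch (the three features with the largest absolute correlation with y) are made-up assumptions for the example, not part of the package:
set.seed(1)
data = as.data.frame(matrix(rnorm(100 * 10), ncol = 10))
colnames(data) = paste0("X", 1:10)
data$y = data$X1 + data$X2 + rnorm(100)
selected = lapply(1:5, function(i) {
  idx = sample(nrow(data), replace = TRUE)  # bootstrap sample of the rows
  d = data[idx, ]
  cors = abs(cor(d[, paste0("X", 1:10)], d$y))
  rownames(cors)[order(cors, decreasing = TRUE)[1:3]]  # names of the top 3 features
})
stabilityHamming(features = selected, p = 10)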
stabilityHamming(
features,
p,
correction.for.chance = "none",
N = 10000,
impute.na = NULL
)
list (length >= 2)
Chosen features per dataset. Each element of the list contains the features for one dataset.
The features must be given by their names (character) or indices (integerish).
numeric(1)
Total number of features in the datasets.
Required if correction.for.chance is set to "estimate" or "exact".
character(1)
Should a correction for chance be applied? Correction for chance means that if
features are chosen at random, the expected value of the corrected score is independent of the number
of chosen features. To correct for chance, the original score is transformed by
\((score - expected) / (maximum - expected)\). For stability measures whose
score is the average value of pairwise scores, this transformation
is done for all components individually.
Options are "none", "estimate" and "exact".
For "none", no correction is performed, i.e. the original score is used.
For "estimate", N
random feature sets of the same sizes as the input
feature sets (features
) are generated.
For "exact", all possible combinations of feature sets of the same
sizes as the input feature sets are used. Computation is only feasible for very
small numbers of features (p
) and numbers of considered datasets
(length(features)
).
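For example, the correction can be requested as follows. The corrected scores are not printed here because the result for "estimate" depends on the randomly drawn feature sets:
feats = list(1:3, 1:4, 1:5)
stabilityHamming(features = feats, p = 10, correction.for.chance = "exact")
set.seed(1)  # "estimate" draws N random feature sets
stabilityHamming(features = feats, p = 10, correction.for.chance = "estimate", N = 1000)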
numeric(1)
Number of random feature sets to consider. Only relevant if correction.for.chance
is set to "estimate".
numeric(1)
In some scenarios, the stability cannot be assessed based on all feature sets.
For example, if some of the feature sets are empty, the respective pairwise comparisons yield NA as a result.
With which value should these missing values be imputed? NULL means no imputation.
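As a sketch of the interface, a value such as 0 can be supplied; it only takes effect when a pairwise comparison actually evaluates to NA (with the feature sets below, no NA occurs and the argument has no effect):
feats = list(1:3, 1:4, 1:5)
stabilityHamming(features = feats, p = 10, impute.na = 0)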
numeric(1)
Stability value.
The stability measure is defined as (see Notation) $$\frac{2}{m (m - 1)} \sum_{i=1}^{m-1} \sum_{j = i+1}^m \frac{|V_i \cap V_j| + |V_i^c \cap V_j^c|}{p}.$$
For the definition of all stability measures in this package,
the following notation is used:
Let \(V_1, \ldots, V_m\) denote the sets of chosen features
for the \(m\) datasets, i.e. features has length \(m\) and
\(V_i\) is the set given by the \(i\)-th entry of features.
Furthermore, let \(h_j\) denote the number of sets that contain feature
\(X_j\) so that \(h_j\) is the absolute frequency with which feature \(X_j\)
is chosen.
Analogously, let \(h_{ij}\) denote the number of sets that include both \(X_i\) and \(X_j\).
Also, let \(q = \sum_{j=1}^p h_j = \sum_{i=1}^m |V_i|\) and \(V = \bigcup_{i=1}^m V_i\).
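To illustrate how the formula combines the pairwise comparisons, the following minimal sketch evaluates it directly. It is not the package implementation and the helper name hammingStability is made up:
hammingStability = function(features, p) {
  pairs = combn(length(features), 2)  # all pairs (i, j) with i < j
  scores = apply(pairs, 2, function(ij) {
    Vi = features[[ij[1]]]
    Vj = features[[ij[2]]]
    both = length(intersect(Vi, Vj))     # |Vi ∩ Vj|
    neither = p - length(union(Vi, Vj))  # |Vi^c ∩ Vj^c|
    (both + neither) / p
  })
  mean(scores)  # average over all m * (m - 1) / 2 pairs
}
hammingStability(list(1:3, 1:4, 1:5), p = 10)  # 0.8666667, matching the example below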
Dunne K, Cunningham P, Azuaje F (2002). "Solutions to instability problems with sequential wrapper-based approaches to feature selection." Machine Learning Group, Department of Computer Science, Trinity College, Dublin.
Bommert A (2020). Integration of Feature Selection Stability in Model Fitting. Ph.D. thesis, TU Dortmund University, Germany. doi:10.17877/DE290R-21906.
feats = list(1:3, 1:4, 1:5)
stabilityHamming(features = feats, p = 10)
#> [1] 0.8666667