Unsupervised feature selection based on variance–covariance subspace distance

S Karami, F Saberi-Movahed, P Tiwari, P Marttinen… - Neural Networks, 2023 - Elsevier
Subspace distance is an invaluable tool exploited in a wide range of feature selection
methods. The power of subspace distance is that it can identify a representative subspace …
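
As a rough illustration of the general idea only (not the paper's variance–covariance formulation), a subspace distance between the span of a selected feature subset and a principal subspace of the data can be computed from principal angles. The projection (chordal) metric and `scipy.linalg.subspace_angles` used below are generic choices and are not taken from the paper.

```python
# Generic illustration: score a feature subset by how close the coordinate
# subspace it spans is to the top principal subspace of the data matrix.
# This is NOT the paper's variance-covariance subspace distance; it is the
# standard projection (chordal) metric built from principal angles.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))          # samples x features (synthetic)

# Top-k principal subspace of the data, spanned by right singular vectors.
k = 5
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
principal_basis = Vt[:k].T                  # features x k

def subset_distance(selected):
    """Chordal distance between span(selected coordinates) and the principal subspace."""
    indicator = np.eye(X.shape[1])[:, selected]          # features x |selected|
    angles = subspace_angles(indicator, principal_basis) # principal angles (radians)
    return np.sqrt(np.sum(np.sin(angles) ** 2))

print(subset_distance([0, 1, 2, 3, 4]))
print(subset_distance([10, 14, 20, 25, 29]))
```

A subset with a smaller distance spans a subspace that better represents the data's dominant directions, which is the intuition the snippet refers to.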

DELVE: feature selection for preserving biological trajectories in single-cell data

JS Ranek, W Stallaert, JJ Milner, M Redick… - Nature …, 2024 - nature.com
Single-cell technologies can measure the expression of thousands of molecular features in
individual cells undergoing dynamic biological processes. While examining cells along a …

Unsupervised feature selection guided by orthogonal representation of feature space

MS Jahani, G Aghamollaei, M Eftekhari… - Neurocomputing, 2023 - Elsevier
Feature selection has been an outstanding strategy in eliminating redundant and inefficient
features in high-dimensional data. This paper introduces a novel unsupervised feature …

Locally sparse neural networks for tabular biomedical data

J Yang, O Lindenbaum… - … Conference on Machine …, 2022 - proceedings.mlr.press
Tabular datasets with low-sample-size or many variables are prevalent in biomedicine.
Practitioners in this domain prefer linear or tree-based models over neural networks since …
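
The snippet only gestures at the approach; the following is a hedged sketch of the general "locally sparse" idea, where a gating network predicts a per-sample feature mask that is multiplied into the input of a predictor. It is a generic construction in PyTorch, not the authors' architecture or their sparsity regularizer.

```python
# Hedged sketch: a generic per-sample gating network for locally sparse
# prediction on tabular data. The gate MLP outputs a mask in [0, 1] for each
# feature of each sample; the predictor only sees the gated features.
# Illustrative construction only, not the paper's exact model or penalty.
import torch
import torch.nn as nn

class LocallyGatedNet(nn.Module):
    def __init__(self, n_features, n_hidden=32, n_outputs=1):
        super().__init__()
        self.gate = nn.Sequential(               # predicts per-sample feature gates
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_features), nn.Sigmoid(),
        )
        self.predictor = nn.Sequential(          # operates on the gated input
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_outputs),
        )

    def forward(self, x):
        gates = self.gate(x)                     # (batch, n_features) in [0, 1]
        return self.predictor(x * gates), gates

model = LocallyGatedNet(n_features=20)
x = torch.randn(8, 20)
pred, gates = model(x)
# An l1-style penalty on the gates, added to the task loss, encourages
# each sample to rely on only a few features.
sparsity_penalty = gates.abs().mean()
```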

Where to pay attention in sparse training for feature selection?

G Sokar, Z Atashgahi, M Pechenizkiy… - Advances in Neural …, 2022 - proceedings.neurips.cc
A new line of research for feature selection based on neural networks has recently emerged.
Despite its superiority to classical methods, it requires many training iterations to converge …

Self-supervision enhanced feature selection with correlated gates

C Lee, F Imrie, M van der Schaar - International Conference on …, 2022 - openreview.net
Discovering relevant input features for predicting a target variable is a key scientific
question. However, in many domains, such as medicine and biology, feature selection is …

Interpretable deep clustering for tabular data

J Svirsky, O Lindenbaum - arXiv preprint arXiv:2306.04785, 2023 - arxiv.org
Clustering is a fundamental learning task widely used as a first step in data analysis. For
example, biologists use cluster assignments to analyze genome sequences, medical …

L0-sparse canonical correlation analysis

O Lindenbaum, M Salhov, A Averbuch… - … Conference on Learning …, 2021 - openreview.net
Canonical Correlation Analysis (CCA) models are powerful for studying the associations
between two sets of variables. The canonically correlated representations, termed …
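
For context, the classical (non-sparse) CCA that this paper constrains with an l0 sparsity penalty can be run in a few lines with scikit-learn; the sparse variant itself is not reproduced here, and the synthetic data below is purely illustrative.

```python
# Classical CCA baseline with dense projection weights. The paper adds an l0
# sparsity constraint on these weights; that variant is not reproduced here.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))                  # shared latent signal
X = latent @ rng.standard_normal((2, 10)) + 0.1 * rng.standard_normal((500, 10))
Y = latent @ rng.standard_normal((2, 8)) + 0.1 * rng.standard_normal((500, 8))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)                      # canonical variates
corrs = [np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(2)]
print(corrs)                                            # high for the shared components
```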

Few-sample feature selection via feature manifold learning

D Cohen, T Shnitzer, Y Kluger… - … on Machine Learning, 2023 - proceedings.mlr.press
In this paper, we present a new method for few-sample supervised feature selection (FS).
Our method first learns the manifold of the feature space of each class using kernels …
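
The snippet mentions learning the manifold of the feature space of each class with kernels; below is a minimal, assumption-laden illustration of that first step only, namely a Gaussian kernel between features (columns) of each class's data matrix. The kernel choice and median-heuristic bandwidth are generic defaults, not the paper's, and everything after this step is omitted.

```python
# Minimal sketch of the first step described in the snippet: build a kernel
# over the *features* (columns) of each class's data matrix. The Gaussian
# kernel and median-heuristic bandwidth are generic choices, not the paper's.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def feature_kernel(X_class, bandwidth=None):
    """Gaussian kernel between the features of one class (rows of X_class.T)."""
    D = squareform(pdist(X_class.T, metric="euclidean"))   # feature-to-feature distances
    if bandwidth is None:
        bandwidth = np.median(D[D > 0])                     # median heuristic
    return np.exp(-(D ** 2) / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
X0 = rng.standard_normal((25, 40))      # few samples, many features (class 0)
X1 = rng.standard_normal((30, 40))      # class 1
K0, K1 = feature_kernel(X0), feature_kernel(X1)
print(K0.shape, K1.shape)               # (40, 40) kernels over the feature space
```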

Composite feature selection using deep ensembles

F Imrie, A Norcliffe, P Liò… - Advances in Neural …, 2022 - proceedings.neurips.cc
In many real world problems, features do not act alone but in combination with each other.
For example, in genomics, diseases might not be caused by any single mutation but require …