How to DP-fy ML: A practical guide to machine learning with differential privacy

N Ponomareva, H Hazimeh, A Kurakin, Z Xu… - Journal of Artificial …, 2023 - jair.org
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …
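The snippet above concerns training ML models under differential privacy (DP). As a hedged illustration of the core idea, here is a toy sketch of the Laplace mechanism, a basic DP building block; the query value, sensitivity, and epsilon are invented for illustration, and DP training of deep models (e.g. DP-SGD) instead clips per-example gradients and adds Gaussian noise:

```python
import math
import random

# Toy sketch of the Laplace mechanism: release a statistic with noise
# whose scale is sensitivity / epsilon. All numbers here are illustrative.

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    # Inverse-CDF sampling of Laplace(0, scale).
    noise = -scale * (1.0 if u >= 0 else -1.0) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

rng = random.Random(0)  # fixed seed so the sketch is reproducible
noisy_count = laplace_mechanism(100.0, sensitivity=1.0, epsilon=1.0, rng=rng)
print(noisy_count)  # a count of 100 released with small additive noise
```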

Decision trees: from efficient prediction to responsible AI

H Blockeel, L Devos, B Frénay, G Nanfack… - Frontiers in artificial …, 2023 - frontiersin.org
This article provides a bird's-eye view of the role of decision trees in machine learning and
data science over roughly four decades. It sketches the evolution of decision tree research …

Tabular data: Deep learning is not all you need

R Shwartz-Ziv, A Armon - Information Fusion, 2022 - Elsevier
A key element in solving real-life data science problems is selecting the types of models to
use. Tree ensemble models (such as XGBoost) are usually recommended for classification …
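The snippet recommends tree ensemble models such as XGBoost for tabular classification. As a hedged, minimal sketch of the tree-based idea (not XGBoost itself, which boosts many deeper trees), here is a single decision stump fit to an invented toy tabular dataset:

```python
# Minimal decision-stump "tree" for a toy tabular binary-classification set.
# Illustrative only: real tree ensembles combine many boosted trees.

def best_stump(X, y):
    """Find the (feature, threshold) split minimizing misclassifications."""
    best = None
    n_features = len(X[0])
    for f in range(n_features):
        for t in sorted({row[f] for row in X}):
            # Predict 1 when the feature value exceeds the threshold.
            preds = [1 if row[f] > t else 0 for row in X]
            errors = sum(p != label for p, label in zip(preds, y))
            errors = min(errors, len(y) - errors)  # allow flipped polarity
            if best is None or errors < best[0]:
                best = (errors, f, t)
    return best[1], best[2]

# Toy tabular data: two numeric features, binary label.
X = [[1.0, 5.0], [2.0, 4.0], [3.0, 1.0], [4.0, 0.5]]
y = [0, 0, 1, 1]

feature, threshold = best_stump(X, y)
print(feature, threshold)  # the single split separating the two classes
```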

Revisiting deep learning models for tabular data

Y Gorishniy, I Rubachev, V Khrulkov… - Advances in neural …, 2021 - proceedings.neurips.cc
The existing literature on deep learning for tabular data proposes a wide range of novel
architectures and reports competitive results on various datasets. However, the proposed …

Dynamic neural networks: A survey

Y Han, G Huang, S Song, L Yang… - IEEE transactions on …, 2021 - ieeexplore.ieee.org
Dynamic neural networks are an emerging research topic in deep learning. Compared to static
models, which have fixed computational graphs and parameters at the inference stage …
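The contrast above is between static models and models that adapt their computation to each input. A common instance is early exiting; the sketch below is a toy invention (the stages and threshold are made up), whereas real dynamic networks make the exit decision with learned classifiers at intermediate layers:

```python
# Toy sketch of input-dependent ("dynamic") computation via early exit:
# skip the expensive stage when a cheap stage is already confident.

def cheap_stage(x):
    # Stand-in for a shallow classifier returning (prediction, confidence).
    return ("positive" if x > 0 else "negative", abs(x))

def expensive_stage(x):
    # Stand-in for the full, costly model.
    return "positive" if x >= 0 else "negative"

def dynamic_predict(x, threshold=0.5):
    pred, conf = cheap_stage(x)
    if conf >= threshold:  # early exit: skip the expensive stage
        return pred, "early"
    return expensive_stage(x), "full"

confident = dynamic_predict(2.0)   # confident input exits early
ambiguous = dynamic_predict(0.1)   # ambiguous input runs the full model
print(confident, ambiguous)
```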

Lift: Language-interfaced fine-tuning for non-language machine learning tasks

T Dinh, Y Zeng, R Zhang, Z Lin… - Advances in …, 2022 - proceedings.neurips.cc
Fine-tuning pretrained language models (LMs) without making any architectural changes
has become a norm for learning various language downstream tasks. However, for non …

On embeddings for numerical features in tabular deep learning

Y Gorishniy, I Rubachev… - Advances in Neural …, 2022 - proceedings.neurips.cc
Recently, Transformer-like deep architectures have shown strong performance on tabular
data problems. Unlike traditional models, e.g., MLP, these architectures map scalar values of …
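The snippet describes mapping scalar numerical features to vector embeddings. As a hedged sketch in that spirit, here is a sinusoidal "periodic" embedding of a scalar; the frequencies below are arbitrary illustrative choices, not the (typically learned) parameters of any particular model:

```python
import math

# Sketch of a periodic embedding: map a scalar feature to a vector of
# sin/cos values at several frequencies. Frequencies here are invented.

def periodic_embedding(x, frequencies):
    """Map scalar x to [sin(2*pi*f*x), cos(2*pi*f*x)] per frequency f."""
    out = []
    for f in frequencies:
        out.append(math.sin(2 * math.pi * f * x))
        out.append(math.cos(2 * math.pi * f * x))
    return out

emb = periodic_embedding(0.25, [1.0, 2.0])
print(len(emb))  # 4: one sin/cos pair per frequency
```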

DSelect-k: Differentiable selection in the mixture of experts with applications to multi-task learning

H Hazimeh, Z Zhao, A Chowdhery… - Advances in …, 2021 - proceedings.neurips.cc
The Mixture-of-Experts (MoE) architecture is showing promising results in improving
parameter sharing in multi-task learning (MTL) and in scaling high-capacity neural networks …
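To ground the MoE idea the snippet refers to, here is a toy sketch of gating: a softmax gate weights the outputs of several experts. DSelect-k itself learns a sparse, differentiable selection of k experts; the dense softmax gate and hand-picked logits below are a simplified stand-in:

```python
import math

# Toy mixture-of-experts gating: combine scalar expert outputs with
# softmax gate weights. Expert outputs and gate logits are invented.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def moe_output(expert_outputs, gate_logits):
    """Weighted combination of expert outputs under a softmax gate."""
    weights = softmax(gate_logits)
    return sum(w * o for w, o in zip(weights, expert_outputs))

experts = [1.0, 2.0, 3.0]                    # outputs of three experts
out = moe_output(experts, [0.0, 0.0, 0.0])   # uniform gate averages them
print(out)  # approximately 2.0
```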

T2G-Former: Organizing tabular features into relation graphs promotes heterogeneous feature interaction

J Yan, J Chen, Y Wu, DZ Chen, J Wu - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Recent development of deep neural networks (DNNs) for tabular learning has largely
benefited from the capability of DNNs for automatic feature interaction. However, the …

Adapting neural networks at runtime: Current trends in at-runtime optimizations for deep learning

M Sponner, B Waschneck, A Kumar - ACM Computing Surveys, 2024 - dl.acm.org
Adaptive optimization methods for deep learning adjust the inference task to the current
circumstances at runtime to improve the resource footprint while maintaining the model's …