LIFT: Language-interfaced fine-tuning for non-language machine learning tasks

T Dinh, Y Zeng, R Zhang, Z Lin… - Advances in …, 2022 - proceedings.neurips.cc
Fine-tuning pretrained language models (LMs) without making any architectural changes
has become a norm for learning various language downstream tasks. However, for non …

The emergence of reproducibility and consistency in diffusion models

H Zhang, J Zhou, Y Lu, M Guo, P Wang… - Forty-first International …, 2023 - openreview.net
In this work, we investigate an intriguing and prevalent phenomenon of diffusion models
which we term as" consistent model reproducibility'': given the same starting noise input and …

Is Ockham's razor losing its edge? New perspectives on the principle of model parsimony

M Dubova, S Chandramouli, G Gigerenzer… - Proceedings of the …, 2025 - pnas.org
The preference for simple explanations, known as the parsimony principle, has long guided
the development of scientific theories, hypotheses, and models. Yet recent years have seen …

Simplifying neural network training under class imbalance

R Shwartz-Ziv, M Goldblum, Y Li… - Advances in Neural …, 2023 - proceedings.neurips.cc
Real-world datasets are often highly class-imbalanced, which can adversely impact the
performance of deep learning models. The majority of research on training neural networks …

Latent space translation via semantic alignment

V Maiorca, L Moschella, A Norelli… - Advances in …, 2023 - proceedings.neurips.cc
While different neural models often exhibit latent spaces that are alike when exposed to
semantically related data, this intrinsic similarity is not always immediately discernible …

Rashomon capacity: A metric for predictive multiplicity in classification

H Hsu, F Calmon - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Predictive multiplicity occurs when classification models with statistically indistinguishable
performances assign conflicting predictions to individual samples. When used for decision …