Reforms: Consensus-based recommendations for machine-learning-based science

S Kapoor, EM Cantrell, K Peng, TH Pham, CA Bail… - Science …, 2024 - science.org
Machine learning (ML) methods are proliferating in scientific research. However, the
adoption of these methods has been accompanied by failures of validity, reproducibility, and …

Foundation models in smart agriculture: Basics, opportunities, and challenges

J Li, M Xu, L Xiang, D Chen, W Zhuang, X Yin… - … and Electronics in …, 2024 - Elsevier
The past decade has witnessed the rapid development and adoption of machine and deep
learning (ML & DL) methodologies in agricultural systems, showcased by great successes in …

Surgical fine-tuning improves adaptation to distribution shifts

Y Lee, AS Chen, F Tajwar, A Kumar, H Yao… - arXiv preprint arXiv …, 2022 - arxiv.org
A common approach to transfer learning under distribution shift is to fine-tune the last few
layers of a pre-trained model, preserving learned features while also adapting to the new …
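The baseline named in that snippet, updating only the last layers of a pretrained network while keeping the rest frozen, is easy to state in code. A minimal PyTorch sketch, where the stand-in model, the choice of `model[-1]` as the tuned layer, and the optimizer settings are illustrative assumptions rather than the paper's setup:

```python
import torch
import torch.nn as nn

# A stand-in "pre-trained" network; in practice this would be loaded from a checkpoint.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),  # final classification head
)

# Freeze every parameter, then unfreeze only the last (Linear) layer.
for param in model.parameters():
    param.requires_grad = False
for param in model[-1].parameters():
    param.requires_grad = True

# Only the trainable (last-layer) parameters are handed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

# One illustrative update step on a dummy batch standing in for the shifted target domain.
x = torch.randn(8, 3, 32, 32)   # hypothetical target-domain images
y = torch.randint(0, 10, (8,))  # hypothetical labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```

The title suggests the paper argues that which layers are worth unfreezing depends on the kind of shift, so `model[-1]` above is only one possible choice.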

Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons

AF Psaros, X Meng, Z Zou, L Guo… - Journal of Computational …, 2023 - Elsevier
Neural networks (NNs) are profoundly changing the computational paradigm for combining
data with mathematical laws in physics and engineering …
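Deep ensembles are one of the standard baselines such comparisons typically include; a minimal sketch on a toy 1-D regression task, where the ensemble size, network width, and training data are arbitrary illustrative choices rather than anything from the paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy 1-D regression data: y = sin(x) plus noise.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

def make_model():
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

# Train an ensemble of independently initialized networks on the same data.
ensemble = []
for _ in range(5):
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    ensemble.append(model)

# Predictive mean and spread across ensemble members serve as a crude uncertainty estimate.
x_test = torch.linspace(-5, 5, 100).unsqueeze(1)  # extends beyond the training range
with torch.no_grad():
    preds = torch.stack([m(x_test) for m in ensemble])  # shape (5, 100, 1)
mean, std = preds.mean(dim=0), preds.std(dim=0)
print(std.squeeze()[:5])  # spread should tend to grow outside the training interval
```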

Diff-instruct: A universal approach for transferring knowledge from pre-trained diffusion models

W Luo, T Hu, S Zhang, J Sun, Z Li… - Advances in Neural …, 2023 - proceedings.neurips.cc
Due to the ease of training, ability to scale, and high sample quality, diffusion models (DMs)
have become the preferred option for generative modeling, with numerous pre-trained …

Generative models improve fairness of medical classifiers under distribution shifts

I Ktena, O Wiles, I Albuquerque, SA Rebuffi, R Tanno… - Nature Medicine, 2024 - nature.com
Domain generalization is a ubiquitous challenge for machine learning in
healthcare. Model performance in real-world conditions might be lower than expected …

Shortcut learning of large language models in natural language understanding

M Du, F He, N Zou, D Tao, X Hu - Communications of the ACM, 2023 - dl.acm.org

Change is hard: A closer look at subpopulation shift

Y Yang, H Zhang, D Katabi, M Ghassemi - arXiv preprint arXiv:2302.12254, 2023 - arxiv.org
Machine learning models often perform poorly on subgroups that are underrepresented in
the training data. Yet little is understood about the variation in mechanisms that cause …
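A recurring evaluation in this line of work is worst-group (subpopulation) accuracy rather than average accuracy. A small sketch of that metric, with made-up labels, predictions, and group ids:

```python
import numpy as np

def worst_group_accuracy(y_true, y_pred, groups):
    """Per-group accuracy and its minimum over groups."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_pred[mask] == y_true[mask]).mean())
    return accs, min(accs.values())

# Hypothetical labels, predictions, and subgroup ids (e.g., demographic or domain groups).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

per_group, worst = worst_group_accuracy(y_true, y_pred, groups)
print(per_group, worst)  # average accuracy is 0.875 here, but the worst group sits at 0.75
```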

No representation rules them all in category discovery

S Vaze, A Vedaldi, A Zisserman - Advances in Neural …, 2023 - proceedings.neurips.cc
In this paper we tackle the problem of Generalized Category Discovery (GCD). Specifically,
given a dataset with labelled and unlabelled images, the task is to cluster all images in the …
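One simple baseline for clustering a mix of labelled and unlabelled points is k-means seeded with the labelled class means; the GCD methods studied in the paper are more involved, and the synthetic features, split, and cluster count below are purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic "image features": 4 underlying categories, only categories 0 and 1 partly labelled.
X, y = make_blobs(n_samples=400, centers=4, n_features=16, random_state=0)
labelled_mask = np.isin(y, [0, 1]) & (np.random.RandomState(0).rand(len(y)) < 0.5)

# Seed two centroids from the labelled class means; the remaining centroids start from
# random points, letting the clustering discover the unlabelled ("novel") categories.
seen_means = np.stack([X[labelled_mask & (y == c)].mean(axis=0) for c in [0, 1]])
rng = np.random.RandomState(1)
novel_init = X[rng.choice(len(X), size=2, replace=False)]
init = np.vstack([seen_means, novel_init])

km = KMeans(n_clusters=4, init=init, n_init=1, random_state=0).fit(X)
print(np.bincount(km.labels_))  # cluster sizes over labelled + unlabelled data together
```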

Generalization on the unseen, logic reasoning and degree curriculum

E Abbe, S Bengio, A Lotfi, K Rizk - Journal of Machine Learning Research, 2024 - jmlr.org
This paper considers the learning of logical (Boolean) functions with a focus on the
generalization on the unseen (GOTU) setting, a strong case of out-of-distribution …
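The GOTU setting trains on a distribution that misses a whole region of the input space and evaluates on that unseen region. A toy version of such a split, where the parity target, the split on the first coordinate, and the off-the-shelf scikit-learn MLP are all illustrative choices rather than the paper's setup:

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier

# Full Boolean cube on 10 bits; target is the parity of the first 3 bits.
X = np.array(list(product([0, 1], repeat=10)))
y = X[:, :3].sum(axis=1) % 2

# GOTU-style split: train only where the first coordinate is 0,
# then evaluate on the entirely unseen half where it is 1.
train, unseen = X[:, 0] == 0, X[:, 0] == 1

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(X[train], y[train])

print("in-distribution acc:", clf.score(X[train], y[train]))
print("unseen-region acc:  ", clf.score(X[unseen], y[unseen]))
```

Because the first coordinate is constant during training, the learned function has no reason to use it, so accuracy on the unseen half can collapse even when in-distribution accuracy is high, which is the failure mode the GOTU setting isolates.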