Universal prompt tuning for graph neural networks

T Fang, Y Zhang, Y Yang, C Wang… - Advances in Neural …, 2023 - proceedings.neurips.cc
In recent years, prompt tuning has sparked a research surge in adapting pre-trained models.
Unlike the unified pre-training strategy employed in the language field, the graph field …

Model stock: All we need is just a few fine-tuned models

DH Jang, S Yun, D Han - European Conference on Computer Vision, 2024 - Springer
This paper introduces an efficient fine-tuning method for large pre-trained models, offering
strong in-distribution (ID) and out-of-distribution (OOD) performance. Breaking away from …
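For context, the general idea this entry alludes to, merging a small number of fine-tuned checkpoints with the pre-trained anchor in weight space, can be sketched as below. The averaging-then-interpolation rule and the fixed `alpha` are illustrative assumptions, not the paper's exact geometric interpolation.

```python
# Minimal sketch: average a few fine-tuned checkpoints, then interpolate
# toward the pre-trained anchor in weight space. `alpha` is a placeholder,
# not the ratio derived in the Model Stock paper.
import copy
import torch

def merge_in_weight_space(pretrained, finetuned_models, alpha=0.5):
    merged = copy.deepcopy(pretrained)
    ft_states = [m.state_dict() for m in finetuned_models]
    pre_state = pretrained.state_dict()
    new_state = {}
    for name, pre_param in pre_state.items():
        ft_avg = torch.stack([s[name].float() for s in ft_states]).mean(dim=0)
        # Convex combination of the fine-tuned average and the pre-trained anchor.
        new_state[name] = alpha * ft_avg + (1.0 - alpha) * pre_param.float()
    merged.load_state_dict(new_state)  # dtype casting handled by copy_ on load
    return merged
```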

Anchor-based robust finetuning of vision-language models

J Han, Z Lin, Z Sun, Y Gao, K Yan… - Proceedings of the …, 2024 - openaccess.thecvf.com
We aim at finetuning a vision-language model without hurting its out-of-distribution (OOD)
generalization. We address two types of OOD generalization, i.e., i) domain shift such as …

Spurious feature diversification improves out-of-distribution generalization

Y Lin, L Tan, Y Hao, H Wong, H Dong, W Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Generalization to out-of-distribution (OOD) data is a critical challenge in machine learning.
Ensemble-based methods, like weight space ensembles that interpolate model parameters …
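The weight-space ensembles mentioned in this snippet amount to a linear interpolation of parameters between a zero-shot (pre-trained) model and its fine-tuned counterpart; a minimal sketch follows, where the model objects and `alpha` are assumptions for illustration.

```python
# Weight-space ensemble sketch: theta = (1 - alpha) * theta_zeroshot + alpha * theta_finetuned
def interpolate_weights(zeroshot_state, finetuned_state, alpha=0.5):
    return {
        name: (1.0 - alpha) * zeroshot_state[name] + alpha * finetuned_state[name]
        for name in zeroshot_state
    }

# Usage (hypothetical models sharing one architecture):
# model.load_state_dict(interpolate_weights(zs_model.state_dict(),
#                                           ft_model.state_dict(), alpha=0.5))
```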

Towards calibrated robust fine-tuning of vision-language models

C Oh, H Lim, M Kim, D Han, S Yun, J Choo… - arXiv preprint arXiv …, 2023 - arxiv.org
Improving out-of-distribution (OOD) generalization during in-distribution (ID) adaptation is a
primary goal of robust fine-tuning of zero-shot models beyond naive fine-tuning. However …
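As a pointer to what "calibrated" refers to in the title, the standard expected calibration error (ECE) over confidence bins is sketched below; this is a common diagnostic, not the paper's proposed fine-tuning method.

```python
# Expected calibration error (ECE): gap between confidence and accuracy,
# averaged over confidence bins weighted by bin size.
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) softmax outputs; labels: (N,) integer targets."""
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    ece = torch.zeros(())
    bin_edges = torch.linspace(0, 1, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = (correct[in_bin].mean() - conf[in_bin].mean()).abs()
            ece += in_bin.float().mean() * gap
    return ece.item()
```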

Fast trainable projection for robust fine-tuning

J Tian, YC Liu, JS Smith, Z Kira - Advances in Neural …, 2023 - proceedings.neurips.cc
Robust fine-tuning aims to achieve competitive in-distribution (ID) performance while
maintaining the out-of-distribution (OOD) robustness of a pre-trained model when …
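A rough sketch of the projection idea behind such constrained robust fine-tuning: after each optimizer step, project the weights back onto an l2 ball around the pre-trained parameters. The fixed, global `radius` is an assumption; the paper's fast, trainable per-layer projection is not reproduced here.

```python
# Projected fine-tuning sketch: keep weights within distance `radius`
# of the pre-trained anchor after each update.
import torch

@torch.no_grad()
def project_to_ball(model, pretrained_state, radius=1.0):
    for name, param in model.named_parameters():
        anchor = pretrained_state[name]
        diff = param - anchor
        norm = diff.norm()
        if norm > radius:
            # Pull the parameter back to the boundary of the ball.
            param.copy_(anchor + diff * (radius / norm))

# Typical loop (hypothetical): loss.backward(); optimizer.step();
# project_to_ball(model, pretrained_state)
```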

SAFT: Towards out-of-distribution generalization in fine-tuning

B Nguyen, S Uhlich, F Cardinaux, L Mauch… - … on Computer Vision, 2024 - Springer
Handling distribution shifts from training data, known as out-of-distribution (OOD)
generalization, poses a significant challenge in the field of machine learning. While a pre …

DaWin: Training-free dynamic weight interpolation for robust adaptation

C Oh, Y Li, K Song, S Yun, D Han - arXiv preprint arXiv:2410.03782, 2024 - arxiv.org
Adapting a pre-trained foundation model on downstream tasks should ensure robustness
against distribution shifts without the need to retrain the whole model. Although existing …
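To illustrate the "training-free, dynamic" interpolation named in the title: a per-sample mixing coefficient can be derived from each model's prediction entropy, giving the more confident model more weight. The sketch below mixes the two models' outputs for simplicity, whereas the title refers to interpolating weights, and the entropy-ratio coefficient is an assumed instantiation rather than the paper's exact rule.

```python
# Per-sample, training-free mixing of a zero-shot and a fine-tuned model,
# with the coefficient set by prediction entropy (lower entropy = more weight).
import torch
import torch.nn.functional as F

def entropy(logits):
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

@torch.no_grad()
def dynamic_interpolated_logits(zeroshot_model, finetuned_model, x):
    logit_zs = zeroshot_model(x)        # (B, C)
    logit_ft = finetuned_model(x)       # (B, C)
    h_zs, h_ft = entropy(logit_zs), entropy(logit_ft)
    # Per-sample coefficient in [0, 1]; larger when the zero-shot model is uncertain.
    lam = h_zs / (h_zs + h_ft + 1e-12)  # (B,)
    return (1 - lam).unsqueeze(-1) * logit_zs + lam.unsqueeze(-1) * logit_ft
```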

Knowledge guided machine learning for extracting, preserving, and adapting physics-aware features

E He, Y **e, L Liu, Z **, D Zhang, X Jia - Proceedings of the 2024 SIAM …, 2024 - SIAM
Training machine learning (ML) models for scientific problems is often challenging due to
limited observation data. To overcome this challenge, prior works commonly pre-train ML …

Holistic transfer: Towards non-disruptive fine-tuning with partial target data

CH Tu, HY Chen, Z Mai, J Zhong… - Advances in …, 2023 - proceedings.neurips.cc
We propose a learning problem involving adapting a pre-trained source model to the target
domain for classifying all classes that appeared in the source data, using target data that …