Universal prompt tuning for graph neural networks
In recent years, prompt tuning has sparked a research surge in adapting pre-trained models.
Unlike the unified pre-training strategy employed in the language field, the graph field …
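The snippet above names prompt tuning for adapting pre-trained (graph) models; below is a minimal PyTorch sketch under the assumption that the prompt is a learnable vector added to every input/node feature while the encoder stays frozen. The encoder, dimensions, and all names are illustrative placeholders, not details taken from the paper.

```python
# Hedged sketch: prompt tuning with a frozen pre-trained encoder. Assumes the
# prompt is a learnable vector added to every input feature; the encoder and
# all names are illustrative placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class PromptedModel(nn.Module):
    def __init__(self, frozen_encoder: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():          # keep pre-trained weights fixed
            p.requires_grad_(False)
        self.prompt = nn.Parameter(torch.zeros(feat_dim))  # learnable input prompt
        self.head = nn.Linear(feat_dim, num_classes)        # small task head

    def forward(self, x):                            # x: [num_nodes, feat_dim]
        h = self.encoder(x + self.prompt)            # prompt shifts every input feature
        return self.head(h)

# Only the prompt and the head are optimized; the encoder is never updated.
model = PromptedModel(frozen_encoder=nn.Identity(), feat_dim=16, num_classes=3)
optim = torch.optim.Adam([model.prompt, *model.head.parameters()], lr=1e-2)
```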
Model stock: All we need is just a few fine-tuned models
This paper introduces an efficient fine-tuning method for large pre-trained models, offering
strong in-distribution (ID) and out-of-distribution (OOD) performance. Breaking away from …
Anchor-based robust finetuning of vision-language models
We aim at finetuning a vision-language model without hurting its out-of-distribution (OOD)
generalization. We address two types of OOD generalization, i.e., i) domain shift such as …
Spurious feature diversification improves out-of-distribution generalization
Generalization to out-of-distribution (OOD) data is a critical challenge in machine learning.
Ensemble-based methods, like weight space ensembles that interpolate model parameters …
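The snippet above refers to weight-space ensembles that interpolate model parameters; here is a minimal sketch of that interpolation for two checkpoints sharing one architecture. The fixed mixing coefficient and helper names are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of a weight-space ensemble: linearly interpolate the parameters
# of two checkpoints that share one architecture. `alpha` and all names are
# illustrative assumptions, not details from the paper.
import copy
import torch

def interpolate_state_dicts(sd_a, sd_b, alpha=0.5):
    """alpha * sd_a + (1 - alpha) * sd_b for floating-point tensors."""
    out = {}
    for k in sd_a:
        if torch.is_floating_point(sd_a[k]):
            out[k] = alpha * sd_a[k] + (1.0 - alpha) * sd_b[k]
        else:
            out[k] = sd_a[k]   # integer buffers (e.g. BN counters) copied as-is
    return out

def weight_space_ensemble(model_a, model_b, alpha=0.5):
    """Return a new model whose weights interpolate model_a and model_b."""
    merged = copy.deepcopy(model_a)
    merged.load_state_dict(interpolate_state_dicts(
        model_a.state_dict(), model_b.state_dict(), alpha))
    return merged
```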
Towards calibrated robust fine-tuning of vision-language models
Improving out-of-distribution (OOD) generalization during in-distribution (ID) adaptation is a
primary goal of robust fine-tuning of zero-shot models beyond naive fine-tuning. However …
Fast trainable projection for robust fine-tuning
Robust fine-tuning aims to achieve competitive in-distribution (ID) performance while
maintaining the out-of-distribution (OOD) robustness of a pre-trained model when …
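The snippet above describes projection-based robust fine-tuning, i.e., keeping the fine-tuned weights close to the pre-trained ones; below is a minimal sketch that, after each optimizer step, projects every parameter back into an L2 ball around its pre-trained value. The fixed radius is an assumption; the paper's projection is trainable, which is not reproduced here.

```python
# Hedged sketch of projection-based robust fine-tuning: after each optimizer
# step, project every parameter back into an L2 ball around its pre-trained
# value. The fixed, shared radius is an illustrative assumption; the paper
# learns its projection constraints instead.
import torch

@torch.no_grad()
def project_to_pretrained(model, pretrained_state, radius=1.0):
    """Clip the L2 distance between each parameter and its pre-trained value."""
    for name, param in model.named_parameters():
        anchor = pretrained_state[name]
        delta = param - anchor
        norm = delta.norm()
        if norm > radius:
            param.copy_(anchor + delta * (radius / norm))

# Typical use inside a fine-tuning loop (pretrained_state cloned beforehand):
#   loss.backward(); optimizer.step(); project_to_pretrained(model, pretrained_state)
```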
Saft: Towards out-of-distribution generalization in fine-tuning
Handling distribution shifts from training data, known as out-of-distribution (OOD)
generalization, poses a significant challenge in the field of machine learning. While a pre …
Dawin: Training-free dynamic weight interpolation for robust adaptation
Adapting a pre-trained foundation model on downstream tasks should ensure robustness
against distribution shifts without the need to retrain the whole model. Although existing …
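The snippet above describes training-free, per-input interpolation between a pre-trained and a fine-tuned model; below is a minimal sketch that picks a per-sample coefficient from relative prediction confidence and mixes the two models' outputs as a cheap stand-in for merging their weights sample-by-sample. The entropy rule and the output-level mixing are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of dynamic (per-input) interpolation between a pre-trained and
# a fine-tuned model. The entropy-based coefficient and the output-level mixing
# (a cheap stand-in for per-sample weight merging) are illustrative assumptions,
# not the paper's exact procedure.
import torch
import torch.nn.functional as F

@torch.no_grad()
def dynamic_interpolated_logits(model_pre, model_ft, x):
    logits_pre, logits_ft = model_pre(x), model_ft(x)

    def entropy(logits):
        p = F.softmax(logits, dim=-1)
        return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

    # alpha -> 1 leans on the fine-tuned model when it is the more confident one.
    alpha = torch.sigmoid(entropy(logits_pre) - entropy(logits_ft)).unsqueeze(-1)
    return alpha * logits_ft + (1.0 - alpha) * logits_pre
```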
Knowledge guided machine learning for extracting, preserving, and adapting physics-aware features
Training machine learning (ML) models for scientific problems is often challenging due to
limited observation data. To overcome this challenge, prior works commonly pre-train ML …
Holistic transfer: Towards non-disruptive fine-tuning with partial target data
We propose a learning problem involving adapting a pre-trained source model to the target
domain for classifying all classes that appeared in the source data, using target data that …