Transformers as statisticians: Provable in-context learning with in-context algorithm selection
Neural sequence models based on the transformer architecture have demonstrated
remarkable in-context learning (ICL) abilities, where they can perform new tasks …
Transformers as algorithms: Generalization and stability in in-context learning
In-context learning (ICL) is a type of prompting where a transformer model operates on a
sequence of (input, output) examples and performs inference on-the-fly. In this work, we …
Robust and data-efficient generalization of self-supervised machine learning for diagnostic imaging
Machine-learning models for medical tasks can match or surpass the performance
of clinical experts. However, in settings differing from those of the training dataset, the …
What makes multi-modal learning better than single (provably)
The world provides us with data of multiple modalities. Intuitively, models fusing data from
different modalities outperform their uni-modal counterparts, since more information is …
Universal prompt tuning for graph neural networks
In recent years, prompt tuning has sparked a research surge in adapting pre-trained models.
Unlike the unified pre-training strategy employed in the language field, the graph field …
Rethinking few-shot image classification: a good embedding is all you need?
The focus of recent meta-learning research has been on the development of learning
algorithms that can quickly adapt to test time tasks with limited data and low computational …
A kernel-based view of language model fine-tuning
It has become standard to solve NLP tasks by fine-tuning pre-trained language models
(LMs), especially in low-data settings. There is minimal theoretical understanding of …
FedAvg with fine tuning: Local updates lead to representation learning
The Federated Averaging (FedAvg) algorithm, which consists of alternating
between a few local stochastic gradient updates at client nodes, followed by a model …
Variational model inversion attacks
Given the ubiquity of deep neural networks, it is important that these models do not reveal
information about sensitive data that they have been trained on. In model inversion attacks …
On the theory of transfer learning: The importance of task diversity
We provide new statistical guarantees for transfer learning via representation learning,
when transfer is achieved by learning a feature representation shared across different tasks …