Meta-learning with task-adaptive loss function for few-shot learning
In few-shot learning, the challenge is to generalize and perform well on new, unseen examples when only very few labeled examples are available for each task. Model …
Fast finite width neural tangent kernel
The Neural Tangent Kernel (NTK), defined as the outer product of the neural network (NN) Jacobians, has emerged as a central object of study in deep learning. In the …
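As a worked illustration of the definition in this snippet, the sketch below computes the finite-width empirical NTK of a toy MLP by contracting per-example parameter Jacobians, Theta(x1, x2) = J(x1) J(x2)^T. The architecture, layer sizes, and initialization are illustrative assumptions, not the paper's setup; the paper's subject is making this computation fast.

import jax
import jax.numpy as jnp

def init_params(key, sizes=(4, 8, 1)):
    # Simple MLP parameters: a list of (weight, bias) pairs.
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def apply_fn(params, x):
    # Forward pass: ReLU hidden layers, one scalar output per example.
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)

def empirical_ntk(params, x1, x2):
    # Per-example Jacobians of the scalar outputs w.r.t. all parameters.
    j1 = jax.jacobian(lambda p: apply_fn(p, x1))(params)
    j2 = jax.jacobian(lambda p: apply_fn(p, x2))(params)
    # Contract every parameter axis: Theta[i, j] = sum_p J1[i, p] * J2[j, p].
    return sum(
        jnp.tensordot(a, b, axes=(tuple(range(1, a.ndim)), tuple(range(1, b.ndim))))
        for a, b in zip(jax.tree_util.tree_leaves(j1), jax.tree_util.tree_leaves(j2)))

key = jax.random.PRNGKey(0)
params = init_params(key)
x1 = jax.random.normal(jax.random.PRNGKey(1), (3, 4))
x2 = jax.random.normal(jax.random.PRNGKey(2), (5, 4))
print(empirical_ntk(params, x1, x2).shape)  # (3, 5)

Materializing full Jacobians like this scales poorly with width and output dimension, which is precisely the cost the paper targets.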
Learning to learn from APIs: Black-box data-free meta-learning
Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-
learning from a collection of pre-trained models without access to the training data. Existing …
Making look-ahead active learning strategies feasible with neural tangent kernels
We propose a new method for approximating active learning acquisition strategies that are
based on retraining with hypothetically-labeled candidate data points. Although this is …
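To make concrete the strategy this snippet says is being approximated, here is a naive sketch of look-ahead acquisition: each pool candidate is scored by actually retraining on the training set augmented with each hypothetical label and measuring the resulting validation loss. The logistic-regression model, the uniform label prior, and all sizes are illustrative assumptions; the paper's point is to avoid this retraining loop, since it is prohibitively expensive.

import jax
import jax.numpy as jnp

def fit(x, y, steps=200, lr=0.5):
    # Logistic regression (labels in {-1, +1}) fit by plain gradient descent.
    def loss(w, x, y):
        return jnp.mean(jnp.log1p(jnp.exp(-y * (x @ w))))
    w = jnp.zeros(x.shape[1])
    g = jax.grad(loss)
    for _ in range(steps):
        w = w - lr * g(w, x, y)
    return w, loss

def lookahead_score(x_tr, y_tr, x_val, y_val, candidate):
    # Expected validation loss after retraining with the candidate added,
    # averaged over its hypothetical labels (uniform prior assumed here).
    scores = []
    for y_hyp in (-1.0, 1.0):
        x_aug = jnp.vstack([x_tr, candidate[None]])
        y_aug = jnp.concatenate([y_tr, jnp.array([y_hyp])])
        w, loss = fit(x_aug, y_aug)
        scores.append(loss(w, x_val, y_val))
    return jnp.mean(jnp.stack(scores))

key = jax.random.PRNGKey(0)
x_tr = jax.random.normal(key, (8, 3)); y_tr = jnp.sign(x_tr[:, 0])
x_val = jax.random.normal(jax.random.PRNGKey(1), (16, 3)); y_val = jnp.sign(x_val[:, 0])
pool = jax.random.normal(jax.random.PRNGKey(2), (5, 3))
best = min(range(len(pool)),
           key=lambda i: float(lookahead_score(x_tr, y_tr, x_val, y_val, pool[i])))
print("query candidate index:", best)

Each acquisition round here costs one full retraining per candidate per label, which is why approximating the retrained model (e.g., with a kernel) is attractive.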
A fast, well-founded approximation to the empirical neural tangent kernel
Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given
network's representation: they are often far less expensive to compute and applicable more …
Learning to learn and remember super long multi-domain task sequence
Catastrophic forgetting (CF) frequently occurs when learning with non-stationary data distributions. The CF issue remains nearly unexplored and is more challenging when meta …
Meta-learning without data via Wasserstein distributionally-robust model fusion
Existing meta-learning works assume that each task has training and testing data available. However, many pre-trained models are available without access to their training data …
Few-shot backdoor attacks via neural tangent kernels
In a backdoor attack, an attacker injects corrupted examples into the training set. The goal of
the attacker is to cause the final trained model to predict the attacker's desired target label …
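As a concrete illustration of the injection step this snippet describes, the sketch below stamps a small trigger patch onto a fraction of the training images and relabels them with the attacker's target class. The trigger shape, its location, and the poison fraction are illustrative assumptions, not the paper's construction.

import jax
import jax.numpy as jnp

def poison(images, labels, target_label, fraction=0.05, seed=0):
    # images: (N, H, W) in [0, 1]; labels: (N,) integer class ids.
    n = images.shape[0]
    n_poison = max(1, int(fraction * n))
    idx = jax.random.choice(jax.random.PRNGKey(seed), n,
                            shape=(n_poison,), replace=False)
    # Trigger: a 3x3 white patch in the bottom-right corner (an assumption).
    patched = images.at[idx, -3:, -3:].set(1.0)
    # Relabel the triggered examples with the attacker's target class.
    relabeled = labels.at[idx].set(target_label)
    return patched, relabeled

images = jnp.zeros((100, 28, 28))
labels = jnp.zeros(100, dtype=jnp.int32)
poisoned_images, poisoned_labels = poison(images, labels, target_label=7)
print(int((poisoned_labels == 7).sum()))  # 5 corrupted examples

A model trained on the corrupted set behaves normally on clean inputs but maps any input carrying the trigger to the target label.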
Fast neural kernel embeddings for general activations
The infinite-width limit has shed light on generalization and optimization aspects of deep learning by establishing connections between neural networks and kernel methods. Despite their …
Meta-learning with less forgetting on large-scale non-stationary task distributions
The paradigm of machine intelligence is moving from purely supervised learning to a more practical scenario in which many loosely related unlabeled data are available and labeled data …