Representations and generalization in artificial and brain neural networks
Humans and animals excel at generalizing from limited data, a capability yet to be fully
replicated in artificial intelligence. This perspective investigates generalization in biological …
Task arithmetic in the tangent space: Improved editing of pre-trained models
Task arithmetic has recently emerged as a cost-effective and scalable approach to edit pre-
trained models directly in weight space: By adding the fine-tuned weights of different tasks …
Structure-free graph condensation: From large-scale graphs to condensed graph-free data
Graph condensation, which reduces the size of a large-scale graph by synthesizing a small-
scale condensed graph as its substitution, has immediate benefits for various graph learning …
Efficient dataset distillation using random feature approximation
Dataset distillation compresses large datasets into smaller synthetic coresets which retain
performance with the aim of reducing the storage and computational burden of processing …
A kernel-based view of language model fine-tuning
It has become standard to solve NLP tasks by fine-tuning pre-trained language models
(LMs), especially in low-data settings. There is minimal theoretical understanding of …
A simple linear algebra identity to optimize large-scale neural network quantum states
Neural-network architectures have been increasingly used to represent quantum many-body
wave functions. These networks require a large number of variational parameters and are …
More than a toy: Random matrix models predict how real-world neural representations generalize
Of theories for why large-scale machine learning models generalize despite being vastly
overparameterized, which of their assumptions are needed to capture the qualitative …
Neural tangent kernel: A survey
A seminal work [Jacot et al., 2018] demonstrated that training a neural network under
specific parameterization is equivalent to performing a particular kernel method as width …
Deep networks for system identification: a survey
Deep learning is a topic of considerable current interest. The availability of massive data
collections and powerful software resources has led to an impressive amount of results in …
Kecor: Kernel coding rate maximization for active 3d object detection
Achieving a reliable LiDAR-based object detector in autonomous driving is paramount, but
its success hinges on obtaining large amounts of precise 3D annotations. Active learning …