A review of the Gumbel-max trick and its extensions for discrete stochasticity in machine learning
The Gumbel-max trick is a method to draw a sample from a categorical distribution, given by
its unnormalized (log-) probabilities. Over the past years, the machine learning community …
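The snippet above states the trick itself: perturb each unnormalized log-probability with independent Gumbel(0, 1) noise and take the argmax, and the winning index is an exact draw from the corresponding categorical distribution. A minimal pure-Python sketch (the function name and the toy three-class distribution are illustrative, not from the paper):

```python
import math
import random

def gumbel_max_sample(logits, rng):
    """Draw one exact sample from Categorical(softmax(logits))."""
    # Gumbel(0, 1) noise via inverse-CDF: g = -log(-log(U)), U ~ Uniform(0, 1).
    # Adding g_i to each log-probability and taking the argmax reproduces
    # the softmax sampling probabilities exactly.
    perturbed = [l - math.log(-math.log(rng.random())) for l in logits]
    return max(range(len(perturbed)), key=lambda i: perturbed[i])

rng = random.Random(0)
logits = [math.log(p) for p in (0.1, 0.2, 0.7)]
counts = [0, 0, 0]
for _ in range(20_000):
    counts[gumbel_max_sample(logits, rng)] += 1
# Empirical frequencies approximate (0.1, 0.2, 0.7).
```

Because the argmax is non-differentiable, several of the works below (e.g. the NAS and discrete-generative-model entries) instead rely on the Gumbel-softmax relaxation, which replaces the hard argmax with a temperature-controlled softmax.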
Data science applications to string theory
We first introduce various algorithms and techniques for machine learning and data science.
While there is a strong focus on neural network applications in unsupervised, supervised …
wav2vec 2.0: A framework for self-supervised learning of speech representations
We show for the first time that learning powerful representations from speech audio alone
followed by fine-tuning on transcribed speech can outperform the best semi-supervised …
Learning graph structures with transformer for multivariate time-series anomaly detection in IoT
Many real-world Internet of Things (IoT) systems, which include a variety of Internet-
connected sensory devices, produce substantial amounts of multivariate time-series data …
Argmax flows and multinomial diffusion: Learning categorical distributions
Generative flows and diffusion models have been predominantly trained on ordinal data, for
example natural images. This paper introduces two extensions of flows and diffusion for …
Regularized vector quantization for tokenized image synthesis
Quantizing images into discrete representations has been a fundamental problem in unified
generative modeling. Predominant approaches learn the discrete representation either in a …
Searching for a robust neural architecture in four GPU hours
Conventional neural architecture search (NAS) approaches are usually based on
reinforcement learning or evolutionary strategy, which take more than 1000 GPU hours to …
Chasing sparsity in vision transformers: An end-to-end exploration
Vision transformers (ViTs) have recently received explosive popularity, but their enormous
model sizes and training costs remain daunting. Conventional post-training pruning often …
Deep graph reprogramming
In this paper, we explore a novel model reusing task tailored for graph neural networks
(GNNs), termed "deep graph reprogramming". We strive to reprogram a pre-trained GNN …
Learning to explain: An information-theoretic perspective on model interpretation
We introduce instancewise feature selection as a methodology for model interpretation. Our
method is based on learning a function to extract a subset of features that are most …