Towards energy-efficient deep learning: An overview of energy-efficient approaches along the deep learning lifecycle
Deep Learning has enabled many advances in machine learning applications in the last few
years. However, since current Deep Learning algorithms require substantial energy for …
ImageNet-21K pretraining for the masses
ImageNet-1K serves as the primary dataset for pretraining deep learning models for
computer vision tasks. The ImageNet-21K dataset, which is bigger and more diverse, is used …
Beyond one-hot encoding: Lower dimensional target embedding
Target encoding plays a central role when learning Convolutional Neural Networks. In this
realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this so …
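A minimal sketch of the two target encodings this abstract contrasts, with illustrative sizes and a random matrix standing in for the paper's learned low-dimensional target embedding (an assumption, not the paper's construction):

import numpy as np

num_classes, embed_dim = 1000, 32
labels = np.array([3, 17, 256])

# One-hot encoding: each target is a sparse vector of length num_classes.
one_hot = np.zeros((len(labels), num_classes))
one_hot[np.arange(len(labels)), labels] = 1.0

# Lower-dimensional target embedding: each class maps to a dense
# embed_dim-vector and the network regresses onto it. A random matrix
# stands in for a learned or structured embedding.
rng = np.random.default_rng(0)
class_embeddings = rng.standard_normal((num_classes, embed_dim))
dense_targets = class_embeddings[labels]

print(one_hot.shape, dense_targets.shape)   # (3, 1000) (3, 32)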
A survey on green deep learning
In recent years, larger and deeper models have been springing up, continuously pushing
state-of-the-art (SOTA) results across various fields such as natural language processing (NLP) and …
Large memory layers with product keys
This paper introduces a structured memory which can be easily integrated into a neural
network. The memory is very large by design and significantly increases the capacity of the …
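A toy sketch of the product-key lookup this abstract describes: the query is split in half, each half is scored against a small set of sub-keys, and the Cartesian product of the two top-k lists addresses a quadratically larger slot table. All sizes and parameters here are illustrative, not the paper's configuration:

import numpy as np

d, n_sub, k = 64, 128, 4        # half-query dim, sub-keys per half, top-k
rng = np.random.default_rng(0)
keys1 = rng.standard_normal((n_sub, d))            # sub-keys, first half
keys2 = rng.standard_normal((n_sub, d))            # sub-keys, second half
values = rng.standard_normal((n_sub * n_sub, d))   # one value per key pair

q = rng.standard_normal(2 * d)
q1, q2 = q[:d], q[d:]

# Score each query half against its sub-key set: 2*n_sub dot products
# address n_sub**2 memory slots, instead of n_sub**2 comparisons.
top1 = np.argsort(keys1 @ q1)[-k:]
top2 = np.argsort(keys2 @ q2)[-k:]

# Re-rank the k*k candidate pairs and read memory with softmax weights.
cand = [(i * n_sub + j, keys1[i] @ q1 + keys2[j] @ q2)
        for i in top1 for j in top2]
cand.sort(key=lambda t: t[1])
idx, scores = zip(*cand[-k:])
w = np.exp(np.array(scores) - max(scores))
w /= w.sum()
out = w @ values[list(idx)]     # weighted sum of the k selected values
print(out.shape)                # (64,)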
Strategies for training large vocabulary neural language models
Training neural network language models over large vocabularies is still computationally
very costly compared to count-based models such as Kneser-Ney. At the same time, neural …
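For context on the cost this abstract refers to: the softmax normalizer sums over the whole vocabulary, so every training token pays O(|V| · d). A minimal sketch with illustrative sizes; the strategies such papers compare (hierarchical, sampled, or self-normalized softmax) all aim to avoid this full sum:

import numpy as np

vocab_size, hidden = 50_000, 256
rng = np.random.default_rng(0)
W = rng.standard_normal((vocab_size, hidden)) * 0.01  # output word embeddings
h = rng.standard_normal(hidden)                       # hidden state for one token
target = 42

# Full softmax: the normalizer touches every row of W, so each training
# token costs O(vocab_size * hidden) -- the bottleneck being attacked.
logits = W @ h
m = logits.max()
nll = (m + np.log(np.exp(logits - m).sum())) - logits[target]
print(nll)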
A no-regret generalization of hierarchical softmax to extreme multi-label classification
Extreme multi-label classification (XMLC) is a problem of tagging an instance with a small
subset of relevant labels chosen from an extremely large pool of possible labels. Large label …
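Hierarchical softmax, which this abstract generalizes, factors a label's probability into binary decisions along a root-to-leaf tree path, so scoring one label costs O(log L) instead of O(L). A minimal sketch over a complete binary tree with random parameters (illustrative only, not the paper's no-regret variant):

import numpy as np

num_labels, hidden = 8, 16           # 8 leaves -> 3-level binary tree
depth = int(np.log2(num_labels))
rng = np.random.default_rng(0)
node_w = rng.standard_normal((num_labels - 1, hidden))  # one vector per internal node

def label_log_prob(h, label):
    """log P(label | h) as a product of sigmoid decisions on the path."""
    logp, node = 0.0, 0                      # start at the root (node 0)
    for bit in format(label, f"0{depth}b"):  # path bits, left=0 / right=1
        p_right = 1.0 / (1.0 + np.exp(-(node_w[node] @ h)))
        logp += np.log(p_right if bit == "1" else 1.0 - p_right)
        node = 2 * node + 1 + int(bit)       # descend in heap layout
    return logp

h = rng.standard_normal(hidden)
print(label_log_prob(h, 5))
# The 8 leaf probabilities sum to 1, yet each costs only depth=3 evaluations.
print(sum(np.exp(label_log_prob(h, l)) for l in range(num_labels)))  # ~1.0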
Hierarchical memory networks
Memory networks are neural networks with an explicit memory component that can be both
read and written to by the network. The memory is often addressed in a soft way using a …
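The "soft" addressing mentioned here is typically a softmax attention read over every memory slot; hierarchical memory networks organize the memory precisely to avoid this linear scan. A minimal sketch of the flat soft read, with illustrative sizes:

import numpy as np

n_slots, d = 1000, 64
rng = np.random.default_rng(0)
memory = rng.standard_normal((n_slots, d))
query = rng.standard_normal(d)

# Soft addressing: attention weights over *every* slot, then a weighted
# read. Cost is O(n_slots) per query -- the flat scheme a hierarchical
# memory organization improves on.
scores = memory @ query
weights = np.exp(scores - scores.max())
weights /= weights.sum()
read = weights @ memory          # convex combination of all slots
print(read.shape)                # (64,)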
Sampled softmax with random fourier features
The computational cost of training with softmax cross entropy loss grows linearly with the
number of classes. For the settings where a large number of classes are involved, a …
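A minimal sampled-softmax step, assuming a uniform proposal distribution with the standard log q logit correction; the paper's contribution is a better, Random-Fourier-feature-based sampler, which this sketch does not implement:

import numpy as np

num_classes, hidden, n_samp = 50_000, 128, 256
rng = np.random.default_rng(0)
W = rng.standard_normal((num_classes, hidden)) * 0.01   # class embeddings
h = rng.standard_normal(hidden)                         # model output, one example
target = 123

# Score the target plus a small sampled set of negatives, correcting each
# logit by log q(class) for the proposal q (uniform here, so the correction
# is constant; collisions with the target are ignored for brevity).
neg = rng.choice(num_classes, size=n_samp, replace=False)
cand = np.concatenate(([target], neg))
logits = W[cand] @ h - np.log(1.0 / num_classes)
logits -= logits.max()
loss = -(logits[0] - np.log(np.exp(logits).sum()))
print(loss)   # approximates full-softmax cross entropy at O(n_samp) cost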
Efficient training of retrieval models using negative cache
Factorized models, such as two tower neural network models, are widely used for scoring
(query, document) pairs in information retrieval tasks. These models are typically trained by …
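A hedged sketch of the two-tower setup this abstract describes: queries and documents are embedded separately, scored by dot product, and trained against in-batch negatives plus a cache of negative document embeddings. The FIFO cache update below is a simple stand-in, not the paper's method:

import numpy as np

d, batch = 32, 8
rng = np.random.default_rng(0)
q_emb = rng.standard_normal((batch, d))    # query-tower outputs
d_emb = rng.standard_normal((batch, d))    # document-tower outputs (positives)
cache = rng.standard_normal((64, d))       # cached negative doc embeddings

# Each query scores its own positive, the other in-batch docs, and the
# cache; the softmax cross entropy pushes the positive above all of them.
logits = np.concatenate([q_emb @ d_emb.T, q_emb @ cache.T], axis=1)
logits -= logits.max(axis=1, keepdims=True)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -log_probs[np.arange(batch), np.arange(batch)].mean()
print(loss)

# FIFO stand-in for the negative-cache update: recycle this batch's docs.
cache = np.concatenate([cache[batch:], d_emb])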