Challenges and applications of large language models
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine
learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify …
Efficient acceleration of deep learning inference on resource-constrained edge devices: A review
Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted
in breakthroughs in many areas. However, deploying these highly accurate models for data …
Efficient large language models: A survey
Large Language Models (LLMs) have demonstrated remarkable capabilities in important
tasks such as natural language understanding and language generation, and thus have the …
Scaling vision transformers
Attention-based neural networks such as the Vision Transformer (ViT) have recently attained
state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient …
The effect of choosing optimizer algorithms to improve computer vision tasks: a comparative study
Optimization algorithms are used to improve model accuracy. The optimization process
undergoes multiple cycles until convergence. A variety of optimization strategies have been …
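As a rough illustration of the iterative optimization cycle this abstract refers to, the sketch below contrasts plain SGD with Adam on a toy quadratic loss. It is a generic NumPy sketch; the function names and hyperparameters are illustrative and not taken from the paper.

```python
import numpy as np

def loss_grad(w):
    # Toy quadratic loss f(w) = ||w||^2 / 2, whose gradient is w itself.
    return w

def sgd(w, lr=0.1, steps=100):
    # Plain SGD: one gradient step per cycle until (approximate) convergence.
    for _ in range(steps):
        w = w - lr * loss_grad(w)
    return w

def adam(w, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=100):
    # Adam: maintains running first/second moment estimates of the gradient
    # and rescales each step coordinate-wise.
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = loss_grad(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias correction for the zero init
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w0 = np.array([3.0, -2.0])
print(sgd(w0.copy()), adam(w0.copy()))  # both should drive w toward zero
```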
Communication-efficient adaptive federated learning
Federated learning is a machine learning training paradigm that enables clients to jointly
train models without sharing their own localized data. However, the implementation of …
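The train-jointly-without-sharing-data setup described here is most commonly instantiated as federated averaging (FedAvg). The sketch below illustrates that generic pattern with a toy linear-regression task; it is not the adaptive, communication-efficient method this paper proposes, and all names are hypothetical.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # Each client runs a few epochs of gradient descent on its own data
    # (least squares here) without ever transmitting the raw samples.
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    # Server broadcasts the global model, clients train locally, and the
    # server averages the returned weights, weighted by client data size.
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(w)  # should approach true_w
```

In practice the communication of model updates between clients and server, not the local computation, is the bottleneck this line of work targets.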
CocktailSGD: Fine-tuning foundation models over 500Mbps networks
Distributed training of foundation models, especially large language models (LLMs), is
communication-intensive and so has heavily relied on centralized data centers with fast …
Communication-efficient distributed deep learning: A comprehensive survey
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …
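A recurring theme in such surveys is gradient compression to shrink the per-step synchronization traffic between workers. Below is a minimal, generic top-k sparsification sketch (illustrative only, not a specific method from this survey):

```python
import numpy as np

def topk_compress(grad, k):
    # Keep only the k largest-magnitude gradient entries; transmitting
    # (indices, values) instead of the dense tensor cuts communication.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, vals, size):
    dense = np.zeros(size)
    dense[idx] = vals
    return dense

grad = np.random.randn(1_000_000)
idx, vals = topk_compress(grad, k=10_000)        # ~1% of entries transmitted
restored = topk_decompress(idx, vals, grad.size)
```

Real systems typically pair this with error feedback, accumulating the dropped residual locally so that small coordinates are not permanently lost.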
Compute-efficient deep learning: Algorithmic trends and opportunities
Although deep learning has made great progress in recent years, the exploding economic
and environmental costs of training neural networks are becoming unsustainable. To …
ZeRO++: Extremely efficient collective communication for giant model training
Zero Redundancy Optimizer (ZeRO) has been used to train a wide range of large language
models on massive GPU clusters due to its ease of use, efficiency, and good scalability …
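ZeRO's core idea is to partition optimizer state (and, at higher stages, gradients and parameters) across data-parallel ranks rather than replicating it on every device. The toy sketch below illustrates only that partitioning, with NumPy arrays standing in for per-GPU shards; the quantized collectives that ZeRO++ adds on top are not shown, and all names are hypothetical.

```python
import numpy as np

def shard(flat, world_size):
    # ZeRO-style partitioning: each rank owns (and updates) only
    # 1/world_size of the flattened parameter/optimizer-state vector.
    return np.array_split(flat, world_size)

def sharded_step(param_shards, grad_shards, lr=0.1):
    # Each rank applies the optimizer update only to its own shard...
    new_shards = [w - lr * g for w, g in zip(param_shards, grad_shards)]
    # ...then an all-gather (simulated here by concatenation) reassembles
    # the full updated parameter vector on every rank.
    return np.concatenate(new_shards)

world_size = 4
params = np.random.randn(1024)
grads = np.random.randn(1024)
params = sharded_step(shard(params, world_size), shard(grads, world_size))
```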