SparseGPT: Massive language models can be accurately pruned in one-shot
E Frantar, D Alistarh - International Conference on Machine …, 2023 - proceedings.mlr.press
We show for the first time that large-scale generative pretrained transformer (GPT) family
models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal …
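As a rough illustration of the one-shot, no-retraining setting described above, the sketch below applies plain magnitude pruning to a weight matrix at 50% sparsity. This is a simplified stand-in only: SparseGPT's actual algorithm reconstructs the remaining weights layer by layer using approximate second-order information.

```python
# Minimal sketch of one-shot pruning, assuming simple magnitude pruning as a
# stand-in for SparseGPT's layer-wise, second-order weight reconstruction.
import torch

def prune_one_shot(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude entries so roughly `sparsity` of them become 0."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

# Prune a layer once, with no retraining afterwards.
layer = torch.nn.Linear(4096, 4096)
with torch.no_grad():
    layer.weight.copy_(prune_one_shot(layer.weight))
```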
Patch diffusion: Faster and more data-efficient training of diffusion models
Diffusion models are powerful, but they require a lot of time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …
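A hedged sketch of what patch-wise training data can look like: draw a random crop per batch and keep its normalized location as extra conditioning channels, so the denoiser trains on small patches instead of full images. The patch size, coordinate encoding, and tensor layout are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of patch-wise training data: a random crop plus two coordinate
# channels telling the denoiser where the patch sits in the full image.
# Patch size and coordinate encoding are assumptions for illustration.
import torch

def sample_patch(images: torch.Tensor, patch: int = 64) -> torch.Tensor:
    """images: (B, C, H, W) -> (B, C + 2, patch, patch) with location channels."""
    b, _, h, w = images.shape
    top = int(torch.randint(0, h - patch + 1, (1,)))
    left = int(torch.randint(0, w - patch + 1, (1,)))
    crop = images[:, :, top:top + patch, left:left + patch]
    ys = torch.linspace(top / h, (top + patch) / h, patch)
    xs = torch.linspace(left / w, (left + patch) / w, patch)
    coords = torch.stack(torch.meshgrid(ys, xs, indexing="ij"))   # (2, patch, patch)
    coords = coords.unsqueeze(0).expand(b, -1, -1, -1)            # (B, 2, patch, patch)
    return torch.cat([crop, coords], dim=1)
```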
Efficient spatially sparse inference for conditional gans and diffusion models
During image editing, existing deep generative models tend to re-synthesize the entire
output from scratch, including the unedited regions. This leads to a significant waste of …
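The snippet below sketches the underlying observation: compare the edited input against the original and mark which fixed-size tiles actually changed, so only those regions need recomputation while cached results are reused elsewhere. The tile size and divisibility assumption are illustrative; the paper builds a full sparse-inference engine on top of this idea.

```python
# Sketch of locating edited regions so only those tiles are recomputed.
# Assumes H and W are divisible by `tile`; the tile size is an illustrative choice.
import torch

def edited_tiles(original: torch.Tensor, edited: torch.Tensor, tile: int = 32) -> torch.Tensor:
    """original, edited: (B, C, H, W). Returns (N, 2) row/col indices of changed tiles."""
    diff = (original - edited).abs().amax(dim=(0, 1))                     # (H, W)
    h, w = diff.shape
    changed = diff.reshape(h // tile, tile, w // tile, tile).amax(dim=(1, 3)) > 0
    return changed.nonzero(as_tuple=False)
```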
EfficientViT: Lightweight multi-scale attention for high-resolution dense prediction
High-resolution dense prediction enables many appealing real-world applications, such as
computational photography, autonomous driving, etc. However, the vast computational cost …
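As context for why such attention can be "lightweight", the sketch below shows generic ReLU-kernel linear attention, whose cost grows linearly rather than quadratically with the number of tokens. The feature map and single-head layout are assumptions; EfficientViT's actual multi-scale module adds more on top of this basic idea.

```python
# Generic linear (kernelized) attention sketch: O(N*D^2) instead of O(N^2*D),
# which is what makes high-resolution dense prediction affordable.
# The ReLU feature map and single-head layout are illustrative assumptions.
import torch
import torch.nn.functional as F

def linear_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, eps: float = 1e-6):
    """q, k, v: (B, N, D) token features."""
    q, k = F.relu(q), F.relu(k)
    kv = torch.einsum("bnd,bne->bde", k, v)                   # summarize keys/values once
    norm = torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + eps  # per-query normalizer
    return torch.einsum("bnd,bde->bne", q, kv) / norm.unsqueeze(-1)
```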
Distributed artificial intelligence empowered by end-edge-cloud computing: A survey
As the computing paradigm shifts from cloud computing to end-edge-cloud computing, it also
enables artificial intelligence to evolve from a centralized to a distributed manner …
Optimal brain compression: A framework for accurate post-training quantization and pruning
E Frantar, D Alistarh - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We consider the problem of model compression for deep neural networks (DNNs) in the
challenging one-shot/post-training setting, in which we are given an accurate trained model …
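For the one-shot/post-training setting the abstract mentions, a minimal reference point is plain round-to-nearest, per-channel quantization of an already-trained weight matrix, as sketched below. OBC's contribution is to go beyond this by compensating for each quantized or pruned weight with second-order updates, which this sketch does not attempt.

```python
# Baseline sketch of post-training (one-shot) quantization: symmetric INT8,
# one scale per output channel, no retraining. This is the naive reference
# point; OBC itself additionally corrects the remaining weights.
import torch

def quantize_per_channel(weight: torch.Tensor, bits: int = 8):
    """weight: (out, in). Returns (int8 weights, per-row scales)."""
    qmax = 2 ** (bits - 1) - 1
    scales = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(weight / scales), -qmax - 1, qmax)
    return q.to(torch.int8), scales

def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
    return q.float() * scales
```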
On-device training under 256KB memory
On-device training enables the model to adapt to new data collected from the sensors by
fine-tuning a pre-trained model. Users can benefit from customized AI models without having …
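A rough sketch of the memory-frugal fine-tuning idea: freeze almost all parameters and update only the biases and the classifier head, so activation and optimizer memory stay small. The `head_name` prefix is an assumption about the model's parameter names; the paper's actual system goes much further to fit the 256KB budget.

```python
# Rough sketch of memory-frugal on-device fine-tuning: update only bias terms
# and the classifier head of a pretrained model, keeping everything else frozen.
# `head_name` is an assumed parameter-name prefix for the classifier.
import torch

def mark_trainable(model: torch.nn.Module, head_name: str = "fc"):
    """Freeze everything except biases and the classifier; return the trainable params."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias") or name.startswith(head_name)
    return [p for p in model.parameters() if p.requires_grad]

# Usage: the optimizer state only covers the small trainable subset.
# params = mark_trainable(model)
# optimizer = torch.optim.SGD(params, lr=1e-2)
```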