A review of green artificial intelligence: Towards a more sustainable future
Green artificial intelligence (AI) is more environmentally friendly and inclusive than
conventional AI, as it not only produces accurate results without increasing the …
A survey of techniques for optimizing transformer inference
Recent years have seen a phenomenal rise in the performance and applications of
transformer neural networks. The family of transformer networks, including Bidirectional …
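The survey catalogs many such techniques; one of the most widely deployed is caching attention keys and values during autoregressive decoding so the prefix is never recomputed. A minimal NumPy sketch of that pattern (illustrative only, not taken from the paper):

    import numpy as np

    def attend(q, K, V):
        # Scaled dot-product attention for a single query vector.
        scores = K @ q / np.sqrt(q.shape[-1])
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V

    # Incremental decoding: append the new key/value to the cache and
    # attend over it, instead of recomputing K and V for the whole prefix.
    d = 64
    K_cache = np.empty((0, d))
    V_cache = np.empty((0, d))
    for step in range(5):
        k_new, v_new, q = np.random.randn(3, d)   # stand-ins for projections
        K_cache = np.vstack([K_cache, k_new])
        V_cache = np.vstack([V_cache, v_new])
        out = attend(q, K_cache, V_cache)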
Smoothquant: Accurate and efficient post-training quantization for large language models
Large language models (LLMs) show excellent performance but are compute- and memory-
intensive. Quantization can reduce memory and accelerate inference. However, existing …
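SmoothQuant's core trick is to migrate quantization difficulty from activation outliers into the weights via a per-channel scale, leaving the layer's output mathematically unchanged. A rough NumPy sketch of that equivalence (variable names are illustrative, not the authors' code):

    import numpy as np

    def smooth(X, W, alpha=0.5):
        # Per input channel j: s_j = max|X[:, j]|^alpha / max|W[j, :]|^(1-alpha).
        # Then (X / s) @ (s[:, None] * W) == X @ W, but X / s has far milder
        # outliers and is much easier to quantize.
        act_max = np.abs(X).max(axis=0)      # per-channel activation range
        w_max = np.abs(W).max(axis=1)        # per-channel weight range
        s = act_max**alpha / w_max**(1 - alpha)
        return X / s, W * s[:, None]

    X = np.random.randn(128, 16) * np.array([1.0]*15 + [50.0])  # one outlier channel
    W = np.random.randn(16, 8)
    X_s, W_s = smooth(X, W)
    assert np.allclose(X @ W, X_s @ W_s)     # the smoothed pair is equivalent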
Rethinking vision transformers for mobilenet size and speed
With the success of Vision Transformers (ViTs) in computer vision tasks, recent works try to
optimize the performance and complexity of ViTs to enable efficient deployment on mobile …
Quip: 2-bit quantization of large language models with guarantees
This work studies post-training parameter quantization in large language models (LLMs).
We introduce quantization with incoherence processing (QuIP), a new method based on the …
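The incoherence-processing idea is to conjugate the weight matrix by random orthogonal matrices before rounding, which spreads outliers out so a coarse 2-bit grid covers the values well; the rotations are undone at inference. A toy sketch with simple round-to-nearest (not the paper's adaptive LDLQ rounding, and with guarantees omitted):

    import numpy as np

    def random_orthogonal(n, rng):
        # QR of a Gaussian matrix yields a random orthogonal matrix.
        q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        return q

    def quantize_2bit(W, rng):
        U = random_orthogonal(W.shape[0], rng)
        V = random_orthogonal(W.shape[1], rng)
        Wr = U @ W @ V.T                       # incoherent representation
        scale = np.abs(Wr).max() / 1.5         # 4 symmetric levels: +-0.5, +-1.5
        Wq = np.clip(np.round(Wr / scale - 0.5) + 0.5, -1.5, 1.5) * scale
        return U.T @ Wq @ V                    # rotate back to the original basis

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64))
    err = np.linalg.norm(W - quantize_2bit(W, rng)) / np.linalg.norm(W)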
Zeroquant: Efficient and affordable post-training quantization for large-scale transformers
How to efficiently serve ever-larger trained natural language models in practice has become
exceptionally challenging even for powerful cloud servers due to their prohibitive …
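A representative ingredient of such post-training schemes is group-wise weight quantization, where each small group of consecutive weights gets its own scale so local ranges are tracked tightly. A minimal sketch in that spirit (not the authors' implementation):

    import numpy as np

    def groupwise_int8(w, group_size=64):
        # One symmetric int8 scale per group of `group_size` weights.
        w = w.reshape(-1, group_size)
        scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(4096).astype(np.float32)
    q, s = groupwise_int8(w)
    w_hat = dequantize(q, s).reshape(-1)
    max_err = np.abs(w - w_hat).max()   # bounded by half a quantization step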
Tinyvit: Fast pretraining distillation for small vision transformers
Vision transformer (ViT) recently has drawn great attention in computer vision due to its
remarkable model capability. However, most prevailing ViT models suffer from huge number …
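The distillation objective itself is the standard soft-label recipe; TinyViT's contribution is making it cheap during pretraining by precomputing and storing sparse teacher logits. A sketch of the generic loss (hypothetical shapes, not the paper's pipeline):

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def distillation_loss(student_logits, teacher_logits, T=4.0):
        # KL(teacher || student) on temperature-softened distributions,
        # scaled by T^2 so gradients stay comparable across temperatures.
        p_t = softmax(teacher_logits, T)
        log_p_s = np.log(softmax(student_logits, T) + 1e-12)
        kl = np.sum(p_t * (np.log(p_t + 1e-12) - log_p_s), axis=-1)
        return (T * T) * kl.mean()

    teacher = np.random.randn(8, 1000)   # stand-in for large-ViT logits
    student = np.random.randn(8, 1000)
    loss = distillation_loss(student, teacher)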
Shortgpt: Layers in large language models are more redundant than you expect
As Large Language Models (LLMs) continue to advance in performance, their size has
escalated significantly, with current LLMs containing billions or even trillions of parameters …
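ShortGPT measures redundancy with a Block Influence score: one minus the mean cosine similarity between a layer's input and output hidden states, with low-influence layers pruned first. A sketch of that measurement (random states stand in for real activations):

    import numpy as np

    def block_influence(x_in, x_out):
        # Near-zero score => the block barely changes its input, so it is
        # a candidate for removal.
        cos = np.sum(x_in * x_out, axis=-1) / (
            np.linalg.norm(x_in, axis=-1) * np.linalg.norm(x_out, axis=-1))
        return 1.0 - cos.mean()

    # hidden[i] holds the states entering layer i; hidden[i+1] those leaving it.
    hidden = [np.random.randn(32, 512) for _ in range(25)]  # 24 hypothetical layers
    scores = [block_influence(hidden[i], hidden[i + 1]) for i in range(24)]
    prune_order = np.argsort(scores)    # remove lowest-influence layers first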
Aging with grace: Lifelong model editing with discrete key-value adaptors
Deployed language models decay over time due to shifting inputs, changing user needs, or
emergent world-knowledge gaps. When such problems are identified, we want to make …
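GRACE stores edits as a discrete codebook of key-value pairs at a single layer, each key carrying a deferral radius: inputs near a cached key get the stored value, everything else passes through untouched. A minimal sketch of the lookup (class and variable names are illustrative, not from the paper's code):

    import numpy as np

    class KeyValueAdaptor:
        def __init__(self, eps=1.0):
            self.keys, self.values, self.eps = [], [], eps

        def add_edit(self, h, v):
            self.keys.append(h)     # hidden state that triggered the edit
            self.values.append(v)   # activation that produces the fixed output

        def __call__(self, h, original_out):
            # Return a stored value if h lands within the deferral radius
            # of any cached key; otherwise defer to the original layer.
            if self.keys:
                d = np.linalg.norm(np.stack(self.keys) - h, axis=-1)
                if d.min() < self.eps:
                    return self.values[int(d.argmin())]
            return original_out

    adaptor = KeyValueAdaptor(eps=0.5)
    adaptor.add_edit(np.ones(16), np.zeros(16))
    out = adaptor(np.ones(16) + 0.01, original_out=np.full(16, 9.0))  # hits the edit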
A survey on vision transformer
Transformer, first applied to the field of natural language processing, is a type of deep neural
network mainly based on the self-attention mechanism. Thanks to its strong representation …
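The self-attention mechanism the survey centers on fits in a few lines: every token attends to every other token via scaled dot products. A single-head NumPy sketch (ViT-like token counts assumed purely for illustration):

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        A = np.exp(scores - scores.max(axis=-1, keepdims=True))
        A /= A.sum(axis=-1, keepdims=True)   # row-wise softmax
        return A @ V                         # each row: weighted mix of all tokens

    n, d = 197, 64          # e.g. 196 ViT patch tokens + 1 class token
    X = np.random.randn(n, d)
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
    out = self_attention(X, Wq, Wk, Wv)      # shape (197, 64)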