A review of deep learning techniques for speech processing
The field of speech processing has undergone a transformative shift with the advent of deep
learning. The use of multiple processing layers has enabled the creation of models capable …
Survey on evolutionary deep learning: Principles, algorithms, applications, and open issues
Over recent years, deep learning (DL) has developed rapidly in both
industry and academia. However, finding the optimal hyperparameters of a DL model …
Efficient large language models: A survey
Large Language Models (LLMs) have demonstrated remarkable capabilities in important
tasks such as natural language understanding and language generation, and thus have the …
LoSparse: Structured compression of large language models based on low-rank and sparse approximation
Transformer models have achieved remarkable results in various natural language tasks,
but they are often prohibitively large, requiring massive memories and computational …
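As a rough illustration of the low-rank-plus-sparse split this abstract names, the NumPy sketch below approximates a weight matrix as a truncated SVD plus a magnitude-pruned residual. The function name, rank, and sparsity level are illustrative assumptions, not LoSparse's actual training-time procedure.

```python
import numpy as np

def low_rank_plus_sparse(W, rank=8, sparsity=0.95):
    """Approximate W as L (low-rank) plus a sparse residual S."""
    # Low-rank part from a truncated SVD.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Sparse part: keep only the largest-magnitude residual entries.
    R = W - L
    threshold = np.quantile(np.abs(R), sparsity)
    S = np.where(np.abs(R) >= threshold, R, 0.0)
    return L, S

W = np.random.randn(256, 256)
L, S = low_rank_plus_sparse(W)
err = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)
print(f"relative error: {err:.3f}, nonzeros in S: {np.count_nonzero(S)}")
```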
Assessing the brittleness of safety alignment via pruning and low-rank modifications
Large language models (LLMs) show inherent brittleness in their safety mechanisms, as
evidenced by their susceptibility to jailbreaking and even non-malicious fine-tuning. This …
ASVD: Activation-aware singular value decomposition for compressing large language models
In this paper, we introduce a new post-training compression paradigm for Large Language
Models (LLMs) to facilitate their wider adoption. We delve into LLM weight low-rank …
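The snippet below sketches one plausible reading of "activation-aware SVD": scale each input channel of the weight by its typical activation magnitude before the truncated SVD, then fold the scale back into the second factor. The scaling rule and all names are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def activation_aware_svd(W, X, rank=16):
    """Factor W (out x in) so W @ X is preserved, weighting input
    channels by their typical activation magnitude."""
    # Per-input-channel scale from calibration activations X (in x samples).
    s = np.abs(X).mean(axis=1) + 1e-6
    # SVD the scaled weight: W @ X == (W * s) @ (X / s[:, None]).
    U, sv, Vt = np.linalg.svd(W * s, full_matrices=False)
    A = U[:, :rank] * sv[:rank]   # out x rank
    B = Vt[:rank] / s             # rank x in (scale folded back)
    return A, B

W = np.random.randn(128, 512)
X = np.random.randn(512, 64)      # calibration activations
A, B = activation_aware_svd(W, X)
err = np.linalg.norm(W @ X - A @ B @ X) / np.linalg.norm(W @ X)
print(f"relative output error at rank 16: {err:.3f}")
```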
Sparsity in transformers: A systematic literature review
Transformers have become the state-of-the-art architectures for various tasks in Natural
Language Processing (NLP) and Computer Vision (CV); however, their space and …
LQ-LoRA: Low-rank plus quantized matrix decomposition for efficient language model finetuning
We propose a simple approach for memory-efficient adaptation of pretrained language
models. Our approach uses an iterative algorithm to decompose each pretrained matrix into …
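A minimal sketch of the iterative split the abstract describes, alternating between quantizing the residual and refitting the low-rank part. The uniform fake-quantizer is a stand-in assumption (LQ-LoRA itself targets quantized formats such as NF4), and the rank, bit-width, and iteration count are illustrative.

```python
import numpy as np

def fake_quant(W, bits=4):
    """Uniform symmetric quantization (stand-in for NF4 etc.)."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

def lq_decompose(W, rank=8, bits=4, iters=10):
    """Alternate: Q = quant(W - L), then L = best rank-r fit of W - Q."""
    L = np.zeros_like(W)
    for _ in range(iters):
        Q = fake_quant(W - L, bits)
        U, s, Vt = np.linalg.svd(W - Q, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return Q, L

W = np.random.randn(256, 256)
Q, L = lq_decompose(W)
err = np.linalg.norm(W - (Q + L)) / np.linalg.norm(W)
print(f"relative error after split: {err:.3f}")
```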
TrojViT: Trojan insertion in vision transformers
Vision Transformers (ViTs) have demonstrated the state-of-the-art performance in
various vision-related tasks. The success of ViTs motivates adversaries to perform backdoor …
SVDQuant: Absorbing outliers by low-rank components for 4-bit diffusion models
Diffusion models have been proven highly effective at generating high-quality images.
However, as these models grow larger, they require significantly more memory and suffer …
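To make the "low-rank branch absorbs outliers" idea concrete, the sketch below compares direct 4-bit quantization of an outlier-heavy matrix against quantizing the residual after removing a low-rank component. The uniform quantizer, rank, and synthetic outliers are assumptions for demonstration, not SVDQuant's full pipeline (which also migrates activation outliers into the weights).

```python
import numpy as np

def quant4(W):
    """Uniform symmetric 4-bit quantization into the range [-7, 7]."""
    scale = np.abs(W).max() / 7
    return np.round(W / scale).clip(-7, 7) * scale

def svd_quant(W, rank=32):
    # The low-rank component takes the dominant (outlier-heavy) directions...
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # ...so the residual has a narrower range and quantizes more accurately.
    R = quant4(W - L)
    return L, R

W = np.random.randn(512, 512)
W[::64] *= 10                    # inject outlier rows
L, R = svd_quant(W)
direct = np.linalg.norm(W - quant4(W)) / np.linalg.norm(W)
split = np.linalg.norm(W - (L + R)) / np.linalg.norm(W)
print(f"direct 4-bit error: {direct:.3f}, with low-rank branch: {split:.3f}")
```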