Split computing and early exiting for deep learning applications: Survey and research challenges
Mobile devices such as smartphones and autonomous vehicles increasingly rely on deep
neural networks (DNNs) to execute complex inference tasks such as image classification …
The eras and trends of automatic short answer grading
Automatic short answer grading (ASAG) is the task of assessing short natural language
responses to objective questions using computational methods. The active research in this …
Deja Vu: Contextual sparsity for efficient LLMs at inference time
Large language models (LLMs) with hundreds of billions of parameters have sparked a new
wave of exciting AI applications. However, they are computationally expensive at inference …
Pretraining language models with human preferences
Abstract Language models (LMs) are pretrained to imitate text from large and diverse
datasets that contain content that would violate human preferences if generated by an LM …
Fine-tuning language models with just forward passes
Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but
as LMs grow in size, backpropagation requires a prohibitively large amount of memory …
Unified-IO: A unified model for vision, language, and multi-modal tasks
We propose Unified-IO, a model that performs a large variety of AI tasks spanning classical
computer vision tasks, including pose estimation, object detection, depth estimation and …
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
The conventional recipe for maximizing model accuracy is to (1) train multiple models with
various hyperparameters and (2) pick the individual model which performs best on a held …
Rethinking the role of demonstrations: What makes in-context learning work?
Large language models (LMs) are able to in-context learn--perform a new task via inference
alone by conditioning on a few input-label pairs (demonstrations) and making predictions for …
Can ChatGPT understand too? A comparative study on ChatGPT and fine-tuned BERT
Recently, ChatGPT has attracted great attention, as it can generate fluent and high-quality
responses to human inquiries. Several prior studies have shown that ChatGPT attains …
PowerInfer: Fast large language model serving with a consumer-grade GPU
This paper introduces PowerInfer, a high-speed Large Language Model (LLM) inference
engine on a personal computer (PC) equipped with a single consumer-grade GPU. The key …