Transformer-based deep learning for predicting protein properties in the life sciences
Recent developments in deep learning, coupled with an increasing number of sequenced
proteins, have led to a breakthrough in life science applications, in particular in protein …
Language model behavior: A comprehensive survey
Transformer language models have received widespread public attention, yet their
generated text is often surprising even to NLP researchers. In this survey, we discuss over …
A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions
The emergence of large language models (LLMs) has marked a significant breakthrough in
natural language processing (NLP), fueling a paradigm shift in information acquisition …
Faith and fate: Limits of transformers on compositionality
Transformer large language models (LLMs) have sparked admiration for their exceptional
performance on tasks that demand intricate multi-step reasoning. Yet, these models …
Hallucination is inevitable: An innate limitation of large language models
Hallucination has been widely recognized to be a significant drawback for large language
models (LLMs). There have been many works that attempt to reduce the extent of …
Towards revealing the mystery behind chain of thought: a theoretical perspective
Recent studies have discovered that Chain-of-Thought prompting (CoT) can dramatically
improve the performance of Large Language Models (LLMs), particularly when dealing with …
Holistic evaluation of language models
Language models (LMs) are becoming the foundation for almost all major language
technologies, but their capabilities, limitations, and risks are not well understood. We present …
Transformers as statisticians: Provable in-context learning with in-context algorithm selection
Neural sequence models based on the transformer architecture have demonstrated
remarkable in-context learning (ICL) abilities, where they can perform new tasks …
What can transformers learn in-context? A case study of simple function classes
In-context learning is the ability of a model to condition on a prompt sequence consisting of
in-context examples (input-output pairs corresponding to some task) along with a new query …
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative impact, these new …