On the effectiveness of large language models in domain-specific code generation
Large language models (LLMs) such as ChatGPT have shown remarkable capabilities in
code generation. Despite significant achievements, they rely on enormous training data to …
MinPrompt: Graph-based minimal prompt data augmentation for few-shot question answering
Recent advances in few-shot question answering (QA) mostly rely on the power of pre-
trained large language models (LLMs) and fine-tuning in specific settings. Although the pre …
Dynamic few-shot learning for knowledge graph question answering
Large language models present opportunities for innovative Question Answering over
Knowledge Graphs (KGQA). However, they are not inherently designed for query …
Graph-enhanced prompt learning for personalized review generation
Personalized review generation is significant for e-commerce applications, such as
providing explainable recommendation and assisting the composition of reviews. With the …
Towards a zero-data, controllable, adaptive dialog system
Conversational Tree Search (Väth et al., 2023) is a recent approach to controllable dialog
systems, where domain experts shape the behavior of a Reinforcement Learning agent …
TPKE-QA: A gapless few-shot extractive question answering approach via task-aware post-training and knowledge enhancement
Q **ao, R Li, J Yang, Y Chen, S Jiang… - Expert Systems with …, 2024 - Elsevier
Few-shot extractive question answering (EQA) is a challenging task in natural language
processing, whose current methods are mainly based on pretrained language models …
Improving low-resource question answering by augmenting question information
A Chen, Y Sun, X Zhao, RG Esparza… - Findings of the …, 2023 - aclanthology.org
In the era of large models, low-resource question-answering tasks lag, emphasizing the
importance of data augmentation, a key research avenue in natural language processing …
QARR-FSQA: Question-Answer Replacement and Removal Pretraining Framework for Few-Shot Question Answering
SW Tan, CP Lee, KM Lim, C Tee, A Alqahtani - IEEE Access, 2024 - ieeexplore.ieee.org
In Natural Language Processing, creating training data for question answering (QA) systems
typically requires significant effort and expertise. This challenge is amplified in few-shot …
SMART: Self-Aware Agent for Tool Overuse Mitigation
Current Large Language Model (LLM) agents demonstrate strong reasoning and tool use
capabilities, but often lack self-awareness, failing to balance these approaches effectively …
Prompt and instruction-based tuning for response generation in conversational question answering
In recent years, prompt-based tuning and instruction-based tuning have emerged as popular
approaches for natural language processing. In this paper, we investigate the application of …