On the effectiveness of large language models in domain-specific code generation

X Gu, M Chen, Y Lin, Y Hu, H Zhang, C Wan… - ACM Transactions on …, 2024 - dl.acm.org
Large language models (LLMs) such as ChatGPT have shown remarkable capabilities in
code generation. Despite significant achievements, they rely on enormous training data to …

MinPrompt: Graph-based minimal prompt data augmentation for few-shot question answering

X Chen, JY Jiang, WC Chang, CJ Hsieh… - Proceedings of the …, 2024 - aclanthology.org
Recent advances in few-shot question answering (QA) mostly rely on the power of pre-
trained large language models (LLMs) and fine-tuning in specific settings. Although the pre …

Dynamic few-shot learning for knowledge graph question answering

J D'Abramo, A Zugarini, P Torroni - arXiv preprint arXiv:2407.01409, 2024 - arxiv.org
Large language models present opportunities for innovative Question Answering over
Knowledge Graphs (KGQA). However, they are not inherently designed for query …

Graph-enhanced prompt learning for personalized review generation

X Qu, Y Wang, Z Li, J Gao - Data Science and Engineering, 2024 - Springer
Personalized review generation is significant for e-commerce applications, such as
providing explainable recommendation and assisting the composition of reviews. With the …

Towards a zero-data, controllable, adaptive dialog system

D Väth, L Vanderlyn, NT Vu - arXiv preprint arXiv:2403.17582, 2024 - arxiv.org
Conversational Tree Search (Väth et al., 2023) is a recent approach to controllable dialog
systems, where domain experts shape the behavior of a Reinforcement Learning agent …

TPKE-QA: A gapless few-shot extractive question answering approach via task-aware post-training and knowledge enhancement

Q Xiao, R Li, J Yang, Y Chen, S Jiang… - Expert Systems with …, 2024 - Elsevier
Few-shot extractive question answering (EQA) is a challenging task in natural language
processing, whose current methods are mainly based on pretrained language models …

Improving low-resource question answering by augmenting question information

A Chen, Y Sun, X Zhao, RG Esparza… - Findings of the …, 2023 - aclanthology.org
In the era of large models, low-resource question-answering tasks lag, emphasizing the
importance of data augmentation, a key research avenue in natural language processing …

QARR-FSQA: Question-Answer Replacement and Removal Pretraining Framework for Few-Shot Question Answering

SW Tan, CP Lee, KM Lim, C Tee, A Alqahtani - IEEE Access, 2024 - ieeexplore.ieee.org
In Natural Language Processing, creating training data for question answering (QA) systems
typically requires significant effort and expertise. This challenge is amplified in few-shot …

SMART: Self-Aware Agent for Tool Overuse Mitigation

C Qian, EC Acikgoz, H Wang, X Chen, A Sil… - arXiv preprint arXiv …, 2025 - arxiv.org
Current Large Language Model (LLM) agents demonstrate strong reasoning and tool use
capabilities, but often lack self-awareness, failing to balance these approaches effectively …

Prompt and instruction-based tuning for response generation in conversational question answering

Y Xing, P Liu - International conference on applications of natural …, 2023 - Springer
In recent years, prompt-based tuning and instruction-based tuning have emerged as popular
approaches for natural language processing. In this paper, we investigate the application of …