“What it wants me to say”: Bridging the abstraction gap between end-user programmers and code-generating large language models

MX Liu, A Sarkar, C Negreanu, B Zorn… - Proceedings of the …, 2023 - dl.acm.org
Code-generating large language models map natural language to code. However, only a
small portion of the infinite space of naturalistic utterances is effective at guiding code …

InstructERC: Reforming emotion recognition in conversation with a retrieval multi-task LLMs framework

S Lei, G Dong, X Wang, K Wang, S Wang - arXiv preprint arXiv …, 2023 - arxiv.org
The development of emotion recognition in dialogue (ERC) has been consistently hindered
by the complexity of pipeline designs, leading to ERC models that often overfit to specific …

Log parsing with prompt-based few-shot learning

VH Le, H Zhang - … IEEE/ACM 45th International Conference on …, 2023 - ieeexplore.ieee.org
Logs generated by large-scale software systems provide crucial information for engineers to
understand the system status and diagnose problems of the systems. Log parsing, which …

In-BoXBART: Get instructions into biomedical multi-task learning

M Parmar, S Mishra, M Purohit, M Luo… - arXiv preprint arXiv …, 2022 - arxiv.org
Single-task models have proven pivotal in solving specific tasks; however, they have
limitations in real-world applications where multi-tasking is necessary and domain shifts are …

LogicBench: Towards systematic evaluation of logical reasoning ability of large language models

M Parmar, N Patel, N Varshney… - Proceedings of the …, 2024 - aclanthology.org
Recently developed large language models (LLMs) have been shown to perform
remarkably well on a wide range of language understanding tasks. But, can they really …

InstructUIE: Multi-task instruction tuning for unified information extraction

X Wang, W Zhou, C Zu, H Xia, T Chen, Y Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models have unlocked strong multi-task capabilities from reading instructive
prompts. However, recent studies have shown that existing large models still have difficulty …

A comprehensive survey on instruction following

R Lou, K Zhang, W Yin - arXiv preprint arXiv:2303.10475, 2023 - arxiv.org
Task semantics can be expressed by a set of input-output examples or a piece of textual
instruction. Conventional machine learning approaches for natural language processing …

Instruction tuned models are quick learners

H Gupta, SA Sawant, S Mishra, M Nakamura… - arXiv preprint arXiv …, 2023 - arxiv.org
Instruction tuning of language models has demonstrated the ability to enhance model
generalization to unseen tasks via in-context learning using a few examples. However …

Help me think: A simple prompting strategy for non-experts to create customized content with models

S Mishra, E Nouri - arXiv preprint arXiv:2208.08232, 2022 - arxiv.org
Controlling the text generated by language models and customizing the content has been a
long-standing challenge. Existing prompting techniques proposed in pursuit of providing …

Large Language Model Instruction Following: A Survey of Progresses and Challenges

R Lou, K Zhang, W Yin - Computational Linguistics, 2024 - direct.mit.edu
Task semantics can be expressed by a set of input-output examples or a piece of textual
instruction. Conventional machine learning approaches for natural language processing …