“What it wants me to say”: Bridging the abstraction gap between end-user programmers and code-generating large language models
Code-generating large language models map natural language to code. However, only a
small portion of the infinite space of naturalistic utterances is effective at guiding code …
InstructERC: Reforming emotion recognition in conversation with a retrieval multi-task LLMs framework
The development of emotion recognition in dialogue (ERC) has been consistently hindered
by the complexity of pipeline designs, leading to ERC models that often overfit to specific …
Log parsing with prompt-based few-shot learning
Logs generated by large-scale software systems provide crucial information for engineers to
understand the system status and diagnose problems of the systems. Log parsing, which …
In-BoXBART: Get instructions into biomedical multi-task learning
Single-task models have proven pivotal in solving specific tasks; however, they have
limitations in real-world applications where multi-tasking is necessary and domain shifts are …
LogicBench: Towards systematic evaluation of logical reasoning ability of large language models
Recently developed large language models (LLMs) have been shown to perform
remarkably well on a wide range of language understanding tasks. But, can they really …
InstructUIE: Multi-task instruction tuning for unified information extraction
Large language models have unlocked strong multi-task capabilities from reading instructive
prompts. However, recent studies have shown that existing large models still have difficulty …
A comprehensive survey on instruction following
Task semantics can be expressed by a set of input-output examples or a piece of textual
instruction. Conventional machine learning approaches for natural language processing …
Instruction tuned models are quick learners
Instruction tuning of language models has demonstrated the ability to enhance model
generalization to unseen tasks via in-context learning using a few examples. However …
Help me think: A simple prompting strategy for non-experts to create customized content with models
Controlling the text generated by language models and customizing the content has been a
long-standing challenge. Existing prompting techniques proposed in pursuit of providing …
Large Language Model Instruction Following: A Survey of Progresses and Challenges
Task semantics can be expressed by a set of input-output examples or a piece of textual
instruction. Conventional machine learning approaches for natural language processing …