Refiner: Reasoning feedback on intermediate representations

D Paul, M Ismayilzada, M Peyrard, B Borges… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models (LMs) have recently shown remarkable performance on reasoning tasks
by explicitly generating intermediate inferences, e.g., chain-of-thought prompting. However …

Massively multi-cultural knowledge acquisition & LM benchmarking

Y Fung, R Zhao, J Doo, C Sun, H Ji - arXiv preprint arXiv:2402.09369, 2024 - arxiv.org
Pretrained large language models have revolutionized many applications but still face
challenges related to cultural bias and a lack of cultural commonsense knowledge crucial for …

Operationalizing contextual integrity in privacy-conscious assistants

S Ghalebikesabi, E Bagdasaryan, R Yi, I Yona… - arXiv preprint arXiv …, 2024 - arxiv.org
Advanced AI assistants combine frontier LLMs and tool access to autonomously perform
complex tasks on behalf of users. While the helpfulness of such assistants can increase …

Harnessing the power of LLMs for normative reasoning in MASs

BTR Savarimuthu, S Ranathunga… - arXiv preprint arXiv …, 2024 - arxiv.org
Software agents, both human and computational, do not exist in isolation and often need to
collaborate or coordinate with others to achieve their goals. In human society, social …

Analyzing Effects of Learning Downstream Tasks on Moral Bias in Large Language Models

N Kiehne, A Ljapunov, M Bätje… - Proceedings of the 2024 …, 2024 - aclanthology.org
Pre-training and fine-tuning large language models (LMs) is currently the state-of-the-art
methodology for enabling data-scarce downstream tasks. However, the derived models still …

Do Large Language Models Understand Mansplaining? Well, Actually...

C Pérez-Almendros… - Proceedings of the 2024 …, 2024 - aclanthology.org
Gender bias has been widely studied by the NLP community. However, other more subtle
variations of it, such as mansplaining, have received little attention. Mansplaining is a …