Can watermarks survive translation? On the cross-lingual consistency of text watermark for large language models

Z He, B Zhou, H Hao, A Liu, X Wang, Z Tu… - arXiv preprint arXiv …, 2024 - arxiv.org
Text watermarking technology aims to tag and identify content produced by large language
models (LLMs) to prevent misuse. In this study, we introduce the concept of cross-lingual …

Limited ability of LLMs to simulate human psychological behaviours: a psychometric analysis

NB Petrov, G Serapio-García, J Rentfrow - arXiv preprint arXiv:2405.07248, 2024 - arxiv.org
The humanlike responses of large language models (LLMs) have prompted social scientists
to investigate whether LLMs can be used to simulate human participants in experiments …

The better angels of machine personality: How personality relates to LLM safety

J Zhang, D Liu, C Qian, Z Gan, Y Liu, Y Qiao… - arXiv preprint arXiv …, 2024 - arxiv.org
Personality psychologists have analyzed the relationship between personality and safety
behaviors in human society. Although Large Language Models (LLMs) demonstrate …

Measuring bargaining abilities of LLMs: A benchmark and a buyer-enhancement method

T Xia, Z He, T Ren, Y Miao, Z Zhang, Y Yang… - arXiv preprint arXiv …, 2024 - arxiv.org
Bargaining is an important and unique part of negotiation between humans. As LLM-driven
agents learn to negotiate and act like real humans, how to evaluate agents' bargaining …

Evaluating psychological safety of large language models

X Li, Y Li, L Qiu, S Joty, L Bing - arXiv preprint arXiv:2212.10529, 2022 - arxiv.org
In this work, we designed unbiased prompts to systematically evaluate the psychological
safety of large language models (LLMs). First, we tested five different LLMs by using two …

Shall we team up: Exploring spontaneous cooperation of competing LLM agents

Z Wu, R Peng, S Zheng, Q Liu, X Han, BI Kwon… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have increasingly been utilized in social simulations, where
they are often guided by carefully crafted instructions to stably exhibit human-like behaviors …

Benchmarking Distributional Alignment of Large Language Models

N Meister, C Guestrin, T Hashimoto - arXiv preprint arXiv:2411.05403, 2024 - arxiv.org
Language models (LMs) are increasingly used as simulacra for people, yet their ability to
match the distribution of views of a specific demographic group and be distributionally …

What Limits LLM-based Human Simulation: LLMs or Our Design?

Q Wang, J Wu, Z Tang, B Luo, N Chen, W Chen… - arXiv preprint arXiv …, 2025 - arxiv.org
We argue that advancing LLM-based human simulation requires addressing both LLM's
inherent limitations and simulation framework design challenges. Recent studies have …

Exploring the Potential of Large Language Models to Simulate Personality

M Molchanova, A Mikhailova, A Korzanova… - arXiv preprint arXiv …, 2025 - arxiv.org
With the advancement of large language models (LLMs), the focus in Conversational AI has
shifted from merely generating coherent and relevant responses to tackling more complex …

Text-based Personality Prediction Using Large Language Models

M Molchanova, D Olshevskaya - 2024 2nd International …, 2024 - ieeexplore.ieee.org
This paper addresses the task of text-based personality detection using Large Language
Models (LLMs). The study focuses primarily on exploring the text-based detection of …