The benefits, risks and bounds of personalizing the alignment of large language models to individuals

HR Kirk, B Vidgen, P Röttger, SA Hale - Nature Machine Intelligence, 2024 - nature.com
Large language models (LLMs) undergo 'alignment' so that they better reflect human values
or preferences, and are safer or more useful. However, alignment is intrinsically difficult …

Openfedllm: Training large language models on decentralized private data via federated learning

R Ye, W Wang, J Chai, D Li, Z Li, Y Xu, Y Du… - Proceedings of the 30th …, 2024 - dl.acm.org
Trained on massive publicly available data, large language models (LLMs) have
demonstrated tremendous success across various fields. While more data contributes to …

The PRISM alignment dataset: What participatory, representative and individualised human feedback reveals about the subjective and multicultural alignment of large …

HR Kirk, A Whitefield, P Röttger… - Advances in …, 2025 - proceedings.neurips.cc
Human feedback is central to the alignment of Large Language Models (LLMs). However,
open questions remain about the methods (how), domains (where), people (who) and …

Large Language Models in Food Science: Innovations, Applications, and Future

P Ma, S Tsai, Y He, X Jia, D Zhen, N Yu, Q Wang… - Trends in Food Science …, 2024 - Elsevier
Abstract Background Large Language Models (LLMs) are increasingly significant in food
science, transforming areas such as recipe development, nutritional analysis, food safety …

Political compass or spinning arrow? towards more meaningful evaluations for values and opinions in large language models

P Röttger, V Hofmann, V Pyatkin, M Hinck… - arXiv, 2024 - arxiv.org

How are LLMs mitigating stereotyping harms? Learning from search engine studies
A Leidinger, R Rogers - Proceedings of the AAAI/ACM Conference on AI …, 2024 - ojs.aaai.org
With the widespread availability of LLMs since the release of ChatGPT and increased public
scrutiny, commercial model development appears to have focused their efforts …