| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Playing repeated games with large language models | E Akata, L Schulz, J Coda-Forno, SJ Oh, M Bethge, E Schulz | arXiv preprint arXiv:2305.16867, 2023 | 130 | 2023 |
| Inducing anxiety in large language models increases exploration and bias | J Coda-Forno, K Witte, AK Jagadish, M Binz, Z Akata, E Schulz | arXiv preprint arXiv:2304.11111, 2023 | 61 | 2023 |
| Meta-in-context learning in large language models | J Coda-Forno, M Binz, Z Akata, M Botvinick, J Wang, E Schulz | Advances in Neural Information Processing Systems 36, 65189-65201, 2023 | 46 | 2023 |
| CogBench: a large language model walks into a psychology lab | J Coda-Forno, M Binz, JX Wang, E Schulz | 41st International Conference on Machine Learning (ICML) 235, 9076-9108, 2024 | 22 | 2024 |
| Human-like category learning by injecting ecological priors from large language models into neural networks | AK Jagadish, J Coda-Forno, M Thalmann, E Schulz, M Binz | arXiv preprint arXiv:2402.01821, 2024 | 5* | 2024 |
| Centaur: a foundation model of human cognition | M Binz, E Akata, M Bethge, F Brändle, F Callaway, J Coda-Forno, ... | arXiv preprint arXiv:2410.20268, 2024 | 4 | 2024 |
| Leveraging Episodic Memory to Improve World Models for Reinforcement Learning | J Coda-Forno, C Yu, Q Guo, Z Fountas, N Burgess | Memory in Artificial and Real Intelligence (MemARI), NeurIPS workshop, 2022 | 3 | 2022 |