Meta-learned models of cognition
Psychologists and neuroscientists extensively rely on computational models for studying
and analyzing the human mind. Traditionally, such computational models have been hand …
Foundation models in robotics: Applications, challenges, and the future
We survey applications of pretrained foundation models in robotics. Traditional deep
learning models in robotics are trained on small datasets tailored for specific tasks, which …
Foundational challenges in assuring alignment and safety of large language models
This work identifies 18 foundational challenges in assuring the alignment and safety of large
language models (LLMs). These challenges are organized into three different categories …
Supervised pretraining can learn in-context reinforcement learning
Large transformer models trained on diverse datasets have shown a remarkable ability to
learn in-context, achieving high few-shot performance on tasks they were not explicitly …
Bigger, better, faster: Human-level atari with human-level efficiency
We introduce a value-based RL agent, which we call BBF, that achieves super-human
performance in the Atari 100K benchmark. BBF relies on scaling the neural networks used …
Harms from increasingly agentic algorithmic systems
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established
many sources and forms of algorithmic harm, in domains as diverse as health care, finance …
Deep reinforcement learning with plasticity injection
A growing body of evidence suggests that neural networks employed in deep reinforcement
learning (RL) gradually lose their plasticity, the ability to learn from new data; however, the …
Emergent agentic transformer from chain of hindsight experience
Large transformer models powered by diverse data and model scale have dominated
natural language modeling and computer vision and pushed the frontier of multiple AI areas …
A survey on transformers in reinforcement learning
Transformer has been considered the dominating neural architecture in NLP and CV, mostly
under supervised settings. Recently, a similar surge of using Transformers has appeared in …
The mechanistic basis of data dependence and abrupt learning in an in-context classification task
G Reddy - The Twelfth International Conference on Learning …, 2023 - openreview.net
Transformer models exhibit in-context learning: the ability to accurately predict the response
to a novel query based on illustrative examples in the input sequence, which contrasts with …