Syntactic structure from deep learning
Modern deep neural networks achieve impressive performance in engineering applications
that require extensive linguistic skills, such as machine translation. This success has …
Language in brains, minds, and machines
It has long been argued that only humans could produce and understand language. But
now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the …
Explainability for large language models: A survey
Large language models (LLMs) have demonstrated impressive capabilities in natural
language processing. However, their internal mechanisms are still unclear and this lack of …
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative impact, these new …
Modern language models refute Chomsky's approach to language
ST Piantadosi - From fieldwork to linguistic theory: A tribute to …, 2023
Modern machine learning has subverted and bypassed the theoretical framework of
Chomsky's generative approach to linguistics, including its core claims to particular insights …
On the opportunities and risks of foundation models
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …
Call for Papers--The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus
We present the call for papers for the BabyLM Challenge: Sample-efficient pretraining on a
developmentally plausible corpus. This shared task is intended for participants with an …
Rethinking interpretability in the era of large language models
Interpretable machine learning has exploded as an area of interest over the last decade,
sparked by the rise of increasingly large datasets and deep neural networks …
What artificial neural networks can tell us about human language acquisition
Rapid progress in machine learning for natural language processing has the potential to
transform debates about how humans learn language. However, the learning environments …
Language models as knowledge bases?
Recent progress in pretraining language models on large textual corpora led to a surge of
improvements for downstream NLP tasks. Whilst learning linguistic knowledge, these …