Computational modeling of bilingual language learning: Current models and future directions
The last two decades have seen a significant amount of interest in bilingual language
learning and processing. A number of computational models have also been developed to …
Brain and Cognitive Science Inspired Deep Learning: A Comprehensive Survey
Deep learning (DL) is increasingly viewed as a foundational methodology for advancing
Artificial Intelligence (AI). However, its interpretability remains limited, and it often …
Deep neural networks and brain alignment: Brain encoding and decoding (survey)
Can we obtain insights about the brain using AI models? How is the information in deep
learning models related to brain recordings? Can we improve AI models with the help of …
Attention weights accurately predict language representations in the brain
In Transformer-based language models (LMs) the attention mechanism converts token
embeddings into contextual embeddings that incorporate information from neighboring …
Structural similarities between language models and neural response measurements
Large language models (LLMs) have complicated internal dynamics, but induce
representations of words and phrases whose geometry we can study. Human language …
Re-evaluating the Need for Visual Signals in Unsupervised Grammar Induction
Are multimodal inputs necessary for grammar induction? Recent work has shown that
multimodal training inputs can improve grammar induction. However, these improvements …
On the trade-off between redundancy and local coherence in summarization
Extractive summaries are usually presented as lists of sentences with no expected cohesion
between them and with plenty of redundant information if not accounted for. In this paper, we …
Understanding the Cognitive Complexity in Language Elicited by Product Images
Product images (e.g., a phone) can be used to elicit a diverse set of consumer-reported
features expressed through language, including surface-level perceptual attributes (e.g., " …