Not all language model features are linear
Recent work has proposed that language models perform computation by manipulating one-
dimensional representations of concepts ("features") in activation space. In contrast, we …
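For orientation on the linear view this abstract argues against: under the linear-representation hypothesis, a concept is read out of an activation vector by projecting onto a single direction. The sketch below is a toy illustration of that one-dimensional readout; the direction, activations, and dimensions are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

# Hypothetical feature direction: under the linear view, one concept
# corresponds to one direction in activation space.
w_concept = rng.normal(size=d_model)
w_concept /= np.linalg.norm(w_concept)

# Toy activations: the same base vectors with and without the concept
# "written in" along the feature direction.
base = rng.normal(size=(4, d_model))
with_concept = base + 3.0 * w_concept
without_concept = base

# Reading the feature is a one-dimensional projection (a dot product).
print(with_concept @ w_concept)     # noticeably larger scores
print(without_concept @ w_concept)  # scores near zero
```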
Dichotomy of early and late phase implicit biases can provably induce grokking
Recent work by Power et al. (2022) highlighted a surprising "grokking" phenomenon in
learning arithmetic tasks: a neural net first "memorizes" the training set, resulting in perfect …
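The setting in which grokking was observed is tiny and easy to reproduce: train on a fraction of all pairs (a, b) -> (a + b) mod p and watch train accuracy saturate long before test accuracy moves. A minimal data-setup sketch; the modulus and split fraction here are arbitrary illustrative choices, not Power et al.'s exact configuration.

```python
import numpy as np

p = 97            # modulus; a small prime, chosen for illustration
frac_train = 0.3  # fraction of all pairs used for training

# Every pair (a, b) with label (a + b) mod p.
pairs = np.array([(a, b) for a in range(p) for b in range(p)])
labels = (pairs[:, 0] + pairs[:, 1]) % p

# Random split. "Grokking" is the delayed jump in *test* accuracy
# long after training accuracy reaches 100% on the train split.
rng = np.random.default_rng(0)
perm = rng.permutation(len(pairs))
n_train = int(frac_train * len(pairs))
train, test = perm[:n_train], perm[n_train:]
print(len(train), "train pairs,", len(test), "test pairs")
```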
Fourier circuits in neural networks: Unlocking the potential of large language models in mathematical reasoning and modular arithmetic
In the evolving landscape of machine learning, a pivotal challenge lies in deciphering the
internal representations harnessed by neural networks and Transformers. Building on recent …
Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
Neural networks trained to solve modular arithmetic tasks exhibit grokking, a phenomenon
where the test accuracy starts improving long after the model achieves 100% training …
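The quantity in the title has a simple definition worth spelling out: the average gradient outer product (AGOP) of a predictor f over inputs x_1, ..., x_n is M = (1/n) * sum_i grad f(x_i) grad f(x_i)^T, whose top eigenvectors pick out the input directions the predictor is most sensitive to. A self-contained sketch with a toy function and finite-difference gradients (both stand-ins, not the paper's models):

```python
import numpy as np

def f(x):
    # Toy scalar predictor standing in for a trained model.
    return np.sin(x[0]) + x[1] ** 2

def grad(f, x, eps=1e-5):
    # Central finite-difference gradient of f at x.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # toy inputs

# AGOP: M = (1/n) * sum_i grad f(x_i) grad f(x_i)^T
grads = [grad(f, x) for x in X]
M = np.mean([np.outer(g, g) for g in grads], axis=0)
print(M)  # its top eigenvectors are the directions f depends on most
```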
Why do you grok? A theoretical analysis of grokking modular addition
We present a theoretical explanation of the "grokking" phenomenon, where a model
generalizes long after overfitting, for the originally studied problem of modular addition. First …
Do Mice Grok? Glimpses of Hidden Progress During Overtraining in Sensory Cortex
Does learning of task-relevant representations stop when behavior stops changing?
Motivated by recent theoretical advances in machine learning and the intuitive observation …
Progressive distillation induces an implicit curriculum
Knowledge distillation leverages a teacher model to improve the training of a student model.
A persistent challenge is that a better teacher does not always yield a better student, to …
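For context on the mechanism being studied: in standard knowledge distillation the student is trained to match the teacher's temperature-softened output distribution rather than only the hard labels. The sketch below is the generic distillation loss, not the paper's specific progressive-distillation schedule; the temperature value is an arbitrary example.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy of the student against the teacher's softened
    # distribution; the T**2 factor keeps gradient scale comparable
    # across temperatures.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -T ** 2 * np.mean(np.sum(p_teacher * log_p_student, axis=-1))

teacher = np.array([[4.0, 1.0, 0.5]])  # toy logits
student = np.array([[2.0, 1.5, 0.2]])
print(distillation_loss(student, teacher))
```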
Pre-trained Large Language Models Use Fourier Features to Compute Addition
Pre-trained large language models (LLMs) exhibit impressive mathematical reasoning
capabilities, yet how they compute basic arithmetic, such as addition, remains unclear. This …
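The mechanism named in the title is easy to demonstrate in isolation: encode an integer a as a point on the unit circle at angle 2*pi*a/p; multiplying such phasors adds their angles, so the product encodes (a + b) mod p. A self-contained sketch of that Fourier trick, independent of any particular LLM:

```python
import numpy as np

p = 10  # work mod 10, as in digit-wise addition

def encode(a):
    # Fourier feature: a unit phasor at angle 2*pi*a/p.
    return np.exp(2j * np.pi * a / p)

def add_mod_p(a, b):
    # Multiplying phasors adds their angles, so the product's angle
    # encodes (a + b) mod p; decode by rounding the angle back.
    angle = np.angle(encode(a) * encode(b))
    return int(np.round(angle * p / (2 * np.pi))) % p

for a, b in [(3, 4), (7, 8), (9, 9)]:
    assert add_mod_p(a, b) == (a + b) % p
    print(f"{a} + {b} = {add_mod_p(a, b)} (mod {p})")
```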
Composing Global Optimizers to Reasoning Tasks via Algebraic Objects in Neural Nets
Y Tian - arXiv preprint arXiv:2410.01779, 2024 - arxiv.org
We prove rich algebraic structures of the solution space for 2-layer neural networks with
quadratic activation and $L_2$ loss, trained on reasoning tasks in Abelian groups (e.g., …
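The model class in the abstract is concrete enough to write down. Below is a minimal forward pass for a 2-layer network with quadratic activation and $L_2$ loss on modular addition (an Abelian group task); the one-hot pair encoding and all sizes are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

p, hidden = 7, 32
rng = np.random.default_rng(0)

# Two-layer net: linear map W1, quadratic activation z -> z**2, readout W2.
W1 = rng.normal(size=(2 * p, hidden)) / np.sqrt(2 * p)
W2 = rng.normal(size=(hidden, p)) / np.sqrt(hidden)

def forward(a, b):
    # Assumed input encoding: concatenated one-hot vectors for a and b.
    x = np.zeros(2 * p)
    x[a] = 1.0
    x[p + b] = 1.0
    z = x @ W1
    return (z ** 2) @ W2  # logits over the p possible sums

# L2 loss against a one-hot target for (a + b) mod p.
a, b = 2, 6
target = np.eye(p)[(a + b) % p]
print(np.sum((forward(a, b) - target) ** 2))
```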
Unifying and Verifying Mechanistic Interpretations: A Case Study with Group Operations
A recent line of work in mechanistic interpretability has focused on reverse-engineering the
computation performed by neural networks trained on the binary operation of finite groups …