From Markov to Laplace: How Mamba In-Context Learns Markov Chains

M Bondaschi, N Rajaraman, X Wei… - arXiv preprint arXiv …, 2025 - arxiv.org
While transformer-based language models have driven the AI revolution thus far, their
computational complexity has spurred growing interest in viable alternatives, such as …
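The title points to the classical Laplacian (add-one) smoothing estimator for Markov chain transition probabilities. As a reference point only, here is a minimal NumPy sketch of that textbook estimator; the function and parameter names (`estimate_transitions`, `alpha`) are illustrative and not taken from the paper.

```python
import numpy as np

def estimate_transitions(seq, num_states, alpha=1.0):
    """Add-alpha (Laplace-smoothed) estimate of Markov transition probabilities.

    Counts observed transitions in `seq` and adds a pseudo-count `alpha`
    to every (state, next_state) pair, so unseen transitions still receive
    nonzero probability. alpha=1 is classical Laplace (add-one) smoothing.
    """
    counts = np.full((num_states, num_states), alpha)
    for s, t in zip(seq[:-1], seq[1:]):
        counts[s, t] += 1.0
    # Normalize each row into a probability distribution over next states.
    return counts / counts.sum(axis=1, keepdims=True)

# Example: a binary chain observed for a few steps.
seq = [0, 0, 1, 0, 1, 1, 0]
P_hat = estimate_transitions(seq, num_states=2)
print(P_hat)  # rows sum to 1; unseen transitions still have mass
```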

Transformers Handle Endogeneity in In-Context Linear Regression

H Liang, K Balasubramanian, L Lai - arXiv preprint arXiv:2410.01265, 2024 - arxiv.org
We explore the capability of transformers to address endogeneity in in-context linear
regression. Our main finding is that transformers inherently possess a mechanism to handle …
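For context on the snippet's claim: endogeneity means a regressor is correlated with the regression noise, which biases ordinary least squares; the classical remedy is instrumental variables via two-stage least squares (2SLS). The sketch below illustrates that textbook baseline on synthetic data; the data-generating process and names here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Endogenous design: x is correlated with the noise eps through the
# confounder u, so OLS is biased. z is an instrument: it drives x but
# is independent of eps.
z = rng.normal(size=n)
u = rng.normal(size=n)               # unobserved confounder
x = z + u + rng.normal(size=n)
eps = u + 0.1 * rng.normal(size=n)
beta_true = 2.0
y = beta_true * x + eps

# OLS: inconsistent here because cov(x, eps) != 0.
beta_ols = (x @ y) / (x @ x)

# 2SLS: stage 1 projects x onto the instrument z; stage 2 regresses y
# on the projected (exogenous) part of x.
x_hat = ((z @ x) / (z @ z)) * z
beta_2sls = (x_hat @ y) / (x_hat @ x_hat)

print(f"true {beta_true:.2f}  OLS {beta_ols:.2f}  2SLS {beta_2sls:.2f}")
```

With a single instrument this reduces to the familiar IV ratio (z·y)/(z·x); OLS overshoots the true coefficient while 2SLS recovers it.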