PDE-Controller: LLMs for Autoformalization and Reasoning of PDEs
While recent AI-for-math has made strides in pure mathematics, areas of applied
mathematics, particularly PDEs, remain underexplored despite their significant real-world …
Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
Chain-of-thought reasoning and scratchpads have emerged as critical tools for enhancing
the computational capabilities of transformers. While theoretical results show that polynomial …
Mathematical Reasoning in Large Language Models: Assessing Logical and Arithmetic Errors across Wide Numerical Ranges
Mathematical reasoning in Large Language Models (LLMs) is often evaluated using
benchmarks with limited numerical ranges, failing to reflect real-world problem-solving …
Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens
We reveal that low-bit quantization favors undertrained large language models (LLMs) by
observing that models with larger sizes or fewer training tokens experience less quantization …
Fine Tuning Large Language Models to Deliver CBT for Depression
T Tahir - arXiv preprint arXiv:2412.00251, 2024 - arxiv.org
Cognitive Behavioral Therapy (CBT) is a well-established, evidence-based treatment for
Major Depressive Disorder. Unfortunately, there exist significant barriers to individuals …