Reasoning with transformer-based models: Deep learning, but shallow reasoning
Recent years have seen impressive performance of transformer-based models on different
natural language processing tasks. However, it is not clear to what degree the transformers …
Out-of-distribution generalization in natural language processing: Past, present, and future
Abstract Machine learning (ML) systems in natural language processing (NLP) face
significant challenges in generalizing to out-of-distribution (OOD) data, where the test …
From LSAT: The progress and challenges of complex reasoning
Complex reasoning aims to draw a correct inference based on complex rules. As a hallmark
of human intelligence, it involves a degree of explicit reading comprehension, interpretation …
GeomVerse: A systematic evaluation of large models for geometric reasoning
Large language models have shown impressive results for multi-hop mathematical
reasoning when the input question is only textual. Many mathematical reasoning problems …
LogiGAN: Learning logical reasoning via adversarial pre-training
We present LogiGAN, an unsupervised adversarial pre-training framework for improving
logical reasoning abilities of language models. Upon automatic identification of logical …
Open-Ethical AI: Advancements in Open-Source Human-Centric Neural Language Models
This survey summarises the most recent methods for building and assessing helpful, honest,
and harmless neural language models, considering small, medium, and large-size models …
Do Large Language Models Show Human-like Biases? Exploring Confidence-Competence Gap in AI
This study investigates self-assessment tendencies in Large Language Models (LLMs),
examining if patterns resemble human cognitive biases like the Dunning–Kruger effect …