Mathematical language models: A survey
In recent years, there has been remarkable progress in leveraging Language Models (LMs),
encompassing Pre-trained Language Models (PLMs) and Large-scale Language Models …
Instance-adaptive zero-shot chain-of-thought prompting
Zero-shot Chain-of-Thought (CoT) prompting emerges as a simple and effective strategy for
enhancing the performance of large language models (LLMs) in real-world reasoning tasks …
Demystifying chains, trees, and graphs of thoughts
M Besta, F Memedi, Z Zhang, R Gerstenberger… - arXiv preprint arXiv …, 2024 - arxiv.org
The field of natural language processing (NLP) has witnessed significant progress in recent
years, with a notable focus on improving large language models' (LLM) performance through …
Unlocking Black-Box Prompt Tuning Efficiency via Zeroth-Order Optimization
Prompt optimization emerges as an important technique for adapting Large Language
Models (LLMs) to specific tasks. Unfortunately, LLM proprietors often limit access to models' …
Models (LLMs) to specific tasks. Unfortunately, LLM proprietors often limit access to models' …
Towards a unified view of answer calibration for multi-step reasoning
Large Language Models (LLMs) employing Chain-of-Thought (CoT) prompting have
broadened the scope for improving multi-step reasoning capabilities. We generally divide …
Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment
Y Zhao, T Ji, W Feng, Z Huang, Q Liu, Z Liu… - arXiv preprint arXiv …, 2025 - arxiv.org
Reasoning abilities are among the most enigmatic and captivating aspects of large
language models (LLMs). Numerous studies are dedicated to exploring and expanding the …
Evolutionary Pre-Prompt Optimization for Mathematical Reasoning
Recent advancements have highlighted that large language models (LLMs), when given a
small set of task-specific examples, demonstrate remarkable proficiency, a capability that …
E&S-Gainer: An Emotion Aware and Strategy Enhanced Model for Emotional Support Conversation
The Emotional Support Conversation (ESC) task aims to alleviate emotional
distress in seekers and offer them an outlet for expressing their aggravation, garnering …
Fallback Prompting Guides Large Language Models for Accurate Responses in Complex Reasoning
J Sun, Z Zhang, X Wang, X Ji, Y Zhang - Journal of Networking and …, 2024 - iecscience.org
Since the introduction of Chain-of-Thought (CoT), leveraging Large Language Models
(LLMs) to solve complex reasoning problems has become possible. While an increasing …
DCR: Divide-and-Conquer Reasoning for Multi-choice Question Answering with LLMs
Large language models (LLMs) have shown impressive performance in reasoning
benchmarks with the emergence of Chain-of-Thought (CoT), particularly in multi-choice …