Large language models for generative information extraction: A survey
Information Extraction (IE) aims to extract structural knowledge from plain natural
language texts. Recently, generative Large Language Models (LLMs) have demonstrated …
Cpa-enhancer: Chain-of-thought prompted adaptive enhancer for object detection under unknown degradations
Object detection methods under known single degradations have been extensively
investigated. However, existing approaches require prior knowledge of the degradation type …
Understanding Before Reasoning: Enhancing Chain-of-Thought with Iterative Summarization Pre-Prompting
DH Zhu, YJ Xiong, JC Zhang, XJ Xie… - arXiv preprint arXiv …, 2025 - arxiv.org
Chain-of-Thought (CoT) Prompting is a dominant paradigm in Large Language Models
(LLMs) to enhance complex reasoning. It guides LLMs to present multi-step reasoning …
Graph Elicitation for Guiding Multi-Step Reasoning in Large Language Models
Chain-of-Thought (CoT) prompting along with sub-question generation and answering has
enhanced multi-step reasoning capabilities of Large Language Models (LLMs). However …
RamIR: Reasoning and action prompting with Mamba for all-in-one image restoration
A Tang, Y Wu, Y Zhang - Applied Intelligence, 2025 - Springer
All-in-one image restoration aims to recover various degraded images using a unified
model. To adaptively reconstruct high-quality images, recent prevalent CNN and …
CoT-UQ: Improving Response-wise Uncertainty Quantification in LLMs with Chain-of-Thought
B Zhang, R Zhang - arXiv preprint arXiv:2502.17214, 2025 - arxiv.org
Large language models (LLMs) excel in many tasks but struggle to accurately quantify
uncertainty in their generated responses. This limitation makes it challenging to detect …
Triplet-based contrastive method enhances the reasoning ability of large language models
H Chen, J Zhu, W Wang, Y Zhu, L ** - The Journal of Supercomputing, 2025 - Springer
Prompting techniques play a crucial role in enhancing the capabilities of large pretrained
language models (LLMs). While chain-of-thought (CoT) prompting, Wei (Adv Neural Inf …
DiVA-DocRE: A Discriminative and Voice-Aware Paradigm for Document-Level Relation Extraction
Y Wu, R Yangarber, X Mao - arXiv preprint arXiv:2409.13717, 2024 - arxiv.org
The remarkable capabilities of Large Language Models (LLMs) in text comprehension and
generation have revolutionized Information Extraction (IE). One such advancement is in …
Privacy Protection and Standardization of Electronic Medical Records Using Large Language Model
CL Huang, B Rianto, JT Sun, ZX Fu, CH Lee - International Workshop on …, 2024 - Springer
Recently, the widespread application of electronic medical records (EMRs) has made
protecting patients' personal privacy information crucial and highly important. However, the …
Real-Time Task Planning Improvements for LLMs: Innovations in Closed-Loop Architectures
S Desai, M Gupta, K Mehta, A Nair, P Singh - 2024 - researchgate.net
Large language models (LLMs) have made significant strides in various applications, but
optimizing their task planning capabilities remains a critical challenge. To address this, we …