Large language models for generative information extraction: A survey

D Xu, W Chen, W Peng, C Zhang, T Xu, X Zhao… - Frontiers of Computer …, 2024 - Springer
Information Extraction (IE) aims to extract structural knowledge from plain natural
language texts. Recently, generative Large Language Models (LLMs) have demonstrated …

When MOE meets LLMs: Parameter efficient fine-tuning for multi-task medical applications

Q Liu, X Wu, X Zhao, Y Zhu, D Xu, F Tian… - Proceedings of the 47th …, 2024 - dl.acm.org
The recent surge in Large Language Models (LLMs) has garnered significant attention
across numerous fields. Fine-tuning is often required to adapt general LLMs to a specific …

Geospatial large language model trained with a simulated environment for generating tool-use chains autonomously

Y Zhang, J Li, Z Wang, Z He, Q Guan, J Lin… - International Journal of …, 2024 - Elsevier
Solving geospatial tasks generally requires multiple geospatial tools and steps, i.e., tool-use
chains. Automating the geospatial task-solving process can effectively enhance the …

Can Editing LLMs Inject Harm?

C Chen, B Huang, Z Li, Z Chen, S Lai, X Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge editing has been increasingly adopted to correct the false or outdated
knowledge in Large Language Models (LLMs). Meanwhile, one critical but under-explored …

Decoding by Contrasting Knowledge: Enhancing LLMs' Confidence on Edited Facts

B Bi, S Liu, L Mei, Y Wang, P Ji, X Cheng - arXiv preprint arXiv:2405.11613, 2024 - arxiv.org
The knowledge within large language models (LLMs) may become outdated quickly. While
in-context editing (ICE) is currently the most effective method for knowledge editing (KE), it is …

MILL: Mutual verification with large language models for zero-shot query expansion

P Jia, Y Liu, X Zhao, X Li, C Hao, S Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Query expansion, pivotal in search engines, enhances the representation of user
information needs with additional terms. While existing methods expand queries using …

Can Knowledge Editing Really Correct Hallucinations?

B Huang, C Chen, X Xu, A Payani, K Shu - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) suffer from hallucinations, referring to the non-factual
information in generated content, despite their superior capacities across tasks. Meanwhile …

Mitigating Hallucinations of Large Language Models in Medical Information Extraction via Contrastive Decoding

D Xu, Z Zhang, Z Zhu, Z Lin, Q Liu, X Wu, T Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
The impressive capabilities of large language models (LLMs) have attracted extensive
interest in applying LLMs to the medical field. However, the complex nature of clinical …

Double-Checker: Large Language Model as a Checker for Few-shot Named Entity Recognition

W Chen, L Zhao, Z Zheng, T Xu, Y Wang… - Findings of the …, 2024 - aclanthology.org
Recently, few-shot Named Entity Recognition (NER) has attracted significant
attention due to the high cost of obtaining high-quality labeled data. Decomposition-based …

LLMTreeRec: Unleashing the Power of Large Language Models for Cold-Start Recommendations

W Zhang, C Wu, X Li, Y Wang, K Dong, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
The lack of training data gives rise to the system cold-start problem in recommendation
systems, making them struggle to provide effective recommendations. To address this …