Can Editing LLMs Inject Harm?
Knowledge editing has been increasingly adopted to correct the false or outdated
knowledge in Large Language Models (LLMs). Meanwhile, one critical but under-explored …
Can Knowledge Editing Really Correct Hallucinations?
Large Language Models (LLMs) suffer from hallucinations, referring to the non-factual
information in generated content, despite their superior capacities across tasks. Meanwhile …
Parenting: Optimizing knowledge selection of retrieval-augmented language models with parameter decoupling and tailored tuning
Retrieval-Augmented Generation (RAG) offers an effective solution to the issues faced by
Large Language Models (LLMs) in hallucination generation and knowledge obsolescence …
MACPO: Weak-to-Strong Alignment via Multi-Agent Contrastive Preference Optimization
As large language models (LLMs) are rapidly advancing and achieving near-human
capabilities, aligning them with human values is becoming more urgent. In scenarios where …
Learning from Mistakes: A Comprehensive Review of Knowledge Editing for Large Language Models
Y Li, C Fan, M Huang, C Li - 2024 IEEE International …, 2024 - ieeexplore.ieee.org
In recent years, there has been a growing recognition that large language models like GPT-
4 have the capability to store vast amounts of knowledge and possess extremely powerful …
ChroKnowledge: Unveiling Chronological Knowledge of Language Models in Multiple Domains
Large language models (LLMs) have significantly impacted many aspects of our lives.
However, assessing and ensuring their chronological knowledge remains challenging …
One Mind, Many Tongues: A Deep Dive into Language-Agnostic Knowledge Neurons in Large Language Models
Large language models (LLMs) have learned vast amounts of factual knowledge through
self-supervised pre-training on large-scale corpora. Meanwhile, LLMs have also …
OntoTune: Ontology-Driven Self-training for Aligning Large Language Models
Z Liu, C Gan, J Wang, Y Zhang, Z Bo, M Sun… - arXiv preprint arXiv …, 2025 - arxiv.org
Existing domain-specific Large Language Models (LLMs) are typically developed by fine-
tuning general-purpose LLMs with large-scale domain-specific corpora. However, training …
Neuron-based Personality Trait Induction in Large Language Models
Large language models (LLMs) have become increasingly proficient at simulating various
personality traits, an important capability for supporting related applications (e.g., role …
Predicting Large Language Model Capabilities on Closed-Book QA Tasks Using Only Information Available Prior to Training
The GPT-4 technical report from OpenAI suggests that model performance on specific tasks
can be predicted prior to training, though methodologies remain unspecified. This approach …