Large language model validity via enhanced conformal prediction methods

JJ Cherian, I Gibbs, EJ Candès - arXiv preprint arXiv:2406.09714, 2024 - arxiv.org
We develop new conformal inference methods for obtaining validity guarantees on the
output of large language models (LLMs). Prior work in conformal language modeling …

Bi-factorial preference optimization: Balancing safety-helpfulness in language models

W Zhang, PHS Torr, M Elhoseiny, A Bibi - arXiv preprint arXiv:2408.15313, 2024 - arxiv.org
Fine-tuning large language models (LLMs) on human preferences, typically through
reinforcement learning from human feedback (RLHF), has proven successful in enhancing …

Investigating Open Source LLMs to Retrofit Competency Questions in Ontology Engineering

R Alharbi, V Tamma, F Grasso, TR Payne - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Competency Questions (CQs) are essential in ontology engineering; they express
an ontology's functional requirements as natural language questions, offer crucial insights …

Explainable Knowledge-Aware Recommender Systems

FN Cardoso - 2024 - estudogeral.uc.pt
Curiosity is the root of human evolution. Through this simple but multi-faceted trait, humanity
has controlled fire and learned to use it to cook its food and light its streets. This same …

VERA: Validation and evaluation of retrieval-augmented systems

T Ding, A Banerjee, L Mombaerts, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
The increasing use of Retrieval-Augmented Generation (RAG) systems in various
applications necessitates stringent protocols to ensure RAG systems' accuracy, safety, and …