Generative AI for Customizable Learning Experiences

I Pesovski, R Santos, R Henriques, V Trajkovik - Sustainability, 2024 - mdpi.com
The introduction of accessible generative artificial intelligence opens promising
opportunities for the implementation of personalized learning methods in any educational …

Quantized Delta Weight Is Safety Keeper

Y Liu, Z Sun, X He, X Huang - arXiv preprint arXiv:2411.19530, 2024 - arxiv.org
Recent advancements in fine-tuning proprietary language models enable customized
applications across various domains but also introduce two major challenges: high resource …

Detecting Emotional Incongruity of Sarcasm by Commonsense Reasoning

Z Qiu, J Yu, Y Zhang, H Lai, Y Rao, Q Su… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper focuses on sarcasm detection, which aims to identify whether given statements
convey criticism, mockery, or other negative sentiment opposite to the literal meaning. To …

CrossCheckGPT: Universal Hallucination Ranking for Multimodal Foundation Models

G Sun, P Manakul, A Liusie, K Pipatanakul… - arXiv preprint arXiv …, 2024 - arxiv.org
Multimodal foundation models are prone to hallucination, generating outputs that either
contradict the input or are not grounded by factual information. Given the diversity in …

Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning

H Wu, X Li, X Xu, J Wu, D Zhang, Z Liu - arXiv preprint arXiv:2410.12130, 2024 - arxiv.org
The development of Large Language Models (LLMs) has significantly advanced various AI
applications in commercial and scientific research fields, such as scientific literature …

Unc-TTP: A Method for Classifying LLM Uncertainty to Improve In-Context Example Selection

HY Huang, Z Wu, Y Yang, J Zhang, Y Wu - arXiv preprint arXiv …, 2024 - arxiv.org
Nowadays, Large Language Models (LLMs) have demonstrated exceptional performance
across various downstream tasks. However, it is challenging for users to discern whether the …

StateAct: State Tracking and Reasoning for Acting and Planning with Large Language Models

N Rozanov, M Rei - arXiv preprint arXiv:2410.02810, 2024 - arxiv.org
Planning and acting to solve 'real' tasks using large language models (LLMs) in interactive
environments has become a new frontier for AI methods. While recent advances allowed …

SoccerRAG: Multimodal Soccer Information Retrieval via Natural Queries

AT Strand, S Gautam, C Midoglu… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid evolution of digital sports media necessitates sophisticated information retrieval
systems that can efficiently parse extensive multimodal datasets. This paper introduces …

LLM Unlearning via Loss Adjustment with Only Forget Data

Y Wang, J Wei, CY Liu, J Pang, Q Liu, AP Shah… - arXiv preprint arXiv …, 2024 - arxiv.org
Unlearning in Large Language Models (LLMs) is essential for ensuring ethical and
responsible AI use, especially in addressing privacy leaks, bias, safety, and evolving …

Model-based Preference Optimization in Abstractive Summarization without Human Feedback

J Choi, K Chae, J Song, Y Jo, T Kim - arXiv preprint arXiv:2409.18618, 2024 - arxiv.org
In abstractive summarization, the challenge of producing concise and accurate summaries
arises from the vast amount of information contained in the source document. Consequently …