In-Context Meta LoRA Generation
Y Shao, M Yan, Y Liu, S Chen, W Chen, X Long… - arxiv preprint arxiv …, 2025 - arxiv.org
Low-rank Adaptation (LoRA) has demonstrated remarkable capabilities for task-specific fine-tuning. However, in scenarios that involve multiple tasks, training a separate LoRA model for …
Recurrent Diffusion for Large-Scale Parameter Generation
Parameter generation has struggled to scale up for a long time, significantly limiting its range of applications. In this study, we introduce Recurrent diffusion for large-scale …
Generating GFlowNets as You Wish with Diffusion Process
Generative Flow Networks (GFlowNets) are probabilistic samplers that learn stochastic
policies to generate diverse sets of high-reward objects, which is essential in scientific …
Investigating Fine-Tuning of Language Models for Multiple-Choice Questions
IA Wang - 2024 - dspace.mit.edu
This thesis investigates the positional and contextual bias of large language models (LLMs)
when used to answer multiple-choice questions (MCQs). Given the increasing use of …