Amortizing intractable inference in diffusion models for vision, language, and control
Diffusion models have emerged as effective distribution estimators in vision, language, and
reinforcement learning, but their use as priors in downstream tasks poses an intractable …
Beyond ELBOs: A large-scale evaluation of variational methods for sampling
Monte Carlo methods, Variational Inference, and their combinations play a pivotal role in
sampling from intractable probability distributions. However, current studies lack a unified …
Steering masked discrete diffusion models via discrete denoising posterior prediction
Generative modeling of discrete data underlies important applications spanning text-based
agents like ChatGPT to the design of the very building blocks of life in protein sequences …
Sequential controlled Langevin diffusions
An effective approach for sampling from unnormalized densities is based on the idea of
gradually transporting samples from an easy prior to the complicated target distribution. Two …
From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training
We study the problem of training neural stochastic differential equations, or diffusion models,
to sample from a Boltzmann distribution without access to target samples. Existing methods …
Flow of reasoning: Efficient training of LLM policy with divergent thinking
Divergent thinking, the cognitive process of generating diverse solutions, is a hallmark of
human creativity and problem-solving. For machines, sampling diverse solution trajectories …
Pessimistic backward policy for GFlowNets
This paper studies Generative Flow Networks (GFlowNets), which learn to sample objects
proportionally to a given reward function through the trajectory of state transitions. In this …
Can a Bayesian Oracle Prevent Harm from an Agent?
Is there a way to design powerful AI systems based on machine learning methods that would
satisfy probabilistic safety guarantees? With the long-term goal of obtaining a probabilistic …
Adaptive teachers for amortized samplers
Amortized inference is the task of training a parametric model, such as a neural network, to
approximate a distribution with a given unnormalized density where exact sampling is …
Streaming Bayes GFlowNets
T Silva, DA de Souza… - Advances in Neural …, 2025 - proceedings.neurips.cc
Bayes' rule naturally allows for inference refinement in a streaming fashion, without the need
to recompute posteriors from scratch whenever new data arrives. In principle, Bayesian …