Opportunities and challenges of diffusion models for generative AI

M Chen, S Mei, J Fan, M Wang - National Science Review, 2024 - academic.oup.com
Diffusion models, a powerful and universal generative artificial intelligence technology, have
achieved tremendous success and opened up new possibilities in diverse applications. In …

Learning gflownets from partial episodes for improved convergence and stability

K Madan, J Rector-Brooks… - International …, 2023 - proceedings.mlr.press
Generative flow networks (GFlowNets) are a family of algorithms for training a sequential
sampler of discrete objects under an unnormalized target density and have been …
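For orientation, here is a minimal sketch of a trajectory-balance-style GFlowNet objective for one complete trajectory, in plain NumPy; the function and variable names are illustrative assumptions, and the paper itself goes further by learning from partial episodes rather than only full trajectories.

    import numpy as np

    def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
        """Squared trajectory-balance residual for one complete trajectory.

        log_Z      : learned log partition-function estimate (scalar)
        log_pf     : log forward-policy probabilities along the trajectory
        log_pb     : log backward-policy probabilities along the trajectory
        log_reward : log of the unnormalized target density at the terminal object
        """
        residual = log_Z + np.sum(log_pf) - (log_reward + np.sum(log_pb))
        return residual ** 2

    # Toy usage: a 3-step trajectory with made-up log-probabilities and reward.
    loss = trajectory_balance_loss(
        log_Z=np.log(5.0),
        log_pf=np.log([0.5, 0.4, 0.9]),
        log_pb=np.log([1.0, 0.5, 0.5]),
        log_reward=np.log(2.0),
    )
    print(f"TB loss for this trajectory: {loss:.4f}")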

Diffusion models for black-box optimization

S Krishnamoorthy, SM Mashkaria… - … on Machine Learning, 2023 - proceedings.mlr.press
The goal of offline black-box optimization (BBO) is to optimize an expensive black-box
function using a fixed dataset of function evaluations. Prior works consider forward …
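As a point of reference for the forward approaches the snippet alludes to, a minimal sketch of that baseline under toy assumptions: fit a differentiable surrogate to the fixed dataset of (design, score) pairs and ascend its gradient, a strategy that can over-exploit surrogate errors outside the data. The quadratic surrogate and all names are illustrative, not the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)

    def true_f(x):
        # Hidden ground-truth objective, unknown to the optimizer (toy concave function).
        return -np.sum((x - 1.0) ** 2, axis=-1)

    # Fixed offline dataset: designs x in R^d and noisy scores y.
    d, n = 5, 200
    X = rng.normal(size=(n, d))
    y = true_f(X) + 0.1 * rng.normal(size=n)

    # Forward approach, step 1: fit a ridge-regularized quadratic surrogate y ~ w.[x, x**2, 1].
    Phi = np.concatenate([X, X ** 2, np.ones((n, 1))], axis=1)
    w = np.linalg.solve(Phi.T @ Phi + 1e-2 * np.eye(Phi.shape[1]), Phi.T @ y)

    def surrogate_grad(x):
        # Gradient of the fitted surrogate w_lin . x + w_quad . x**2 + b with respect to x.
        return w[:d] + 2.0 * w[d:2 * d] * x

    # Forward approach, step 2: gradient-ascend the surrogate from the best observed design.
    x = X[np.argmax(y)].copy()
    for _ in range(100):
        x += 0.05 * surrogate_grad(x)

    print("best observed score:", y.max())
    print("true score of optimized design:", true_f(x))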

An overview of diffusion models: Applications, guided generation, statistical rates and optimization

M Chen, S Mei, J Fan, M Wang - arXiv preprint arXiv:2404.07771, 2024 - arxiv.org
Diffusion models, a powerful and universal generative AI technology, have achieved
tremendous success in computer vision, audio, reinforcement learning, and computational …
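For readers new to the area, a minimal sketch of the mechanism such overviews describe: a forward process corrupts data with Gaussian noise in closed form, and a model would be trained to predict that noise so the process can be reversed. The DDPM-style schedule and names below are assumptions for illustration, not code from the survey.

    import numpy as np

    rng = np.random.default_rng(0)

    # Linear beta schedule over T steps (a common DDPM-style choice; values are illustrative).
    T = 1000
    betas = np.linspace(1e-4, 0.02, T)
    alphas_bar = np.cumprod(1.0 - betas)

    def forward_noising(x0, t, rng):
        """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I) in closed form."""
        eps = rng.normal(size=x0.shape)
        x_t = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
        return x_t, eps  # eps is the regression target for a noise-prediction network

    # Toy data point; at training time a network eps_theta(x_t, t) would be fit to
    # minimize ||eps - eps_theta(x_t, t)||^2, and sampling runs the learned reverse chain.
    x0 = rng.normal(size=(8,))
    x_t, eps = forward_noising(x0, t=500, rng=rng)
    print("signal level sqrt(abar_t):", np.sqrt(alphas_bar[500]).round(3))
    print("noised sample:", x_t.round(2))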

Bidirectional learning for offline infinite-width model-based optimization

C Chen, Y Zhang, J Fu, XS Liu… - Advances in Neural …, 2022 - proceedings.neurips.cc
In offline model-based optimization, we strive to maximize a black-box objective function by
only leveraging a static dataset of designs and their scores. This problem setting arises in …

ExPT: synthetic pretraining for few-shot experimental design

T Nguyen, S Agrawal, A Grover - Advances in Neural …, 2024 - proceedings.neurips.cc
Experimental design is a fundamental problem in many science and engineering fields. In
this problem, sample efficiency is crucial due to the time, money, and safety costs of real …

Design from policies: Conservative test-time adaptation for offline policy optimization

J Liu, H Zhang, Z Zhuang, Y Kang… - Advances in Neural …, 2024 - proceedings.neurips.cc
In this work, we decouple the iterative bi-level offline RL (value estimation and policy
extraction) from the offline training phase, forming a non-iterative bi-level paradigm and …

Importance-aware co-teaching for offline model-based optimization

Y Yuan, CS Chen, Z Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
Offline model-based optimization aims to find a design that maximizes a property of interest
using only an offline dataset, with applications in robot, protein, and molecule design …

Parallel-mentoring for offline model-based optimization

CS Chen, C Beckham, Z Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
We study offline model-based optimization to maximize a black-box objective function with a
static dataset of designs and scores. These designs encompass a variety of domains …

Gradient-based bi-level optimization for deep learning: A survey

C Chen, X Chen, C Ma, Z Liu, X Liu - arXiv preprint arXiv:2207.11719, 2022 - arxiv.org
Bi-level optimization, especially the gradient-based category, has been widely used in the
deep learning community including hyperparameter optimization and meta-knowledge …
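To make the gradient-based setting concrete, a minimal sketch of a bi-level problem under toy assumptions: an outer variable (an L2-regularization strength) is updated with the hypergradient of a validation loss taken through a one-step-unrolled inner update of the model weights. The one-step unrolling and all names are illustrative, not the survey's method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear regression split into train/val; the outer variable is the L2 penalty lam.
    d = 10
    w_true = rng.normal(size=d)
    X_tr, X_va = rng.normal(size=(80, d)), rng.normal(size=(40, d))
    y_tr = X_tr @ w_true + 0.5 * rng.normal(size=80)
    y_va = X_va @ w_true + 0.5 * rng.normal(size=40)

    def grad_mse(X, y, w):
        # Gradient of the mean squared error with respect to the weights w.
        return 2.0 * X.T @ (X @ w - y) / len(y)

    w = np.zeros(d)
    lam, lr_inner, lr_outer = 0.1, 0.01, 0.001
    for step in range(200):
        # Inner step: one gradient update on the regularized training loss.
        w_new = w - lr_inner * (grad_mse(X_tr, y_tr, w) + 2.0 * lam * w)
        # Outer step: hypergradient of the validation loss through the unrolled update,
        # using d w_new / d lam = -lr_inner * 2 * w.
        hypergrad = grad_mse(X_va, y_va, w_new) @ (-2.0 * lr_inner * w)
        lam = max(lam - lr_outer * hypergrad, 0.0)
        w = w_new

    print("tuned L2 strength:", round(lam, 4))
    print("validation MSE:", round(float(np.mean((X_va @ w - y_va) ** 2)), 4))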