State of the art on diffusion models for visual computing
The field of visual computing is rapidly advancing due to the emergence of generative
artificial intelligence (AI), which unlocks unprecedented capabilities for the generation …
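As a shared point of reference for the entries below, the following is a minimal sketch of the DDPM-style reverse (denoising) sampling loop that these diffusion models build on. It is a generic illustration, not code from any of the listed papers: the names ddpm_sample and eps_model are hypothetical, and the noise-prediction network is a dummy stand-in for the large conditioned denoisers used in practice.

import torch

def ddpm_sample(eps_model, shape, num_steps=1000, device="cpu"):
    # Linear beta schedule as in Ho et al. (2020).
    betas = torch.linspace(1e-4, 0.02, num_steps, device=device)
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)  # start from pure Gaussian noise
    for t in reversed(range(num_steps)):
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch)  # predict the noise present in x_t
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        coef = betas[t] / torch.sqrt(1.0 - alphas_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Toy usage with a dummy denoiser that ignores its inputs.
if __name__ == "__main__":
    dummy = lambda x, t: torch.zeros_like(x)
    sample = ddpm_sample(dummy, shape=(1, 3, 8, 8), num_steps=50)
    print(sample.shape)  # torch.Size([1, 3, 8, 8])

The papers listed below differ mainly in what the denoiser is conditioned on (text, a single input view, multiple views, normals and depth) and in how its outputs are lifted into 3D or 4D representations.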
Wonder3D: Single image to 3D using cross-domain diffusion
In this work, we introduce Wonder3D, a novel method for generating high-fidelity textured
meshes from single-view images with remarkable efficiency. Recent methods based on the …
SyncDreamer: Generating multiview-consistent images from a single-view image
In this paper, we present a novel diffusion model called SyncDreamer that generates multiview-consistent
images from a single-view image. Using pretrained large-scale 2D diffusion models, recent …
ViewDiff: 3D-consistent image generation with text-to-image models
3D asset generation is attracting massive attention, inspired by the recent
success of text-guided 2D content creation. Existing text-to-3D methods use pretrained text …
ReconFusion: 3D reconstruction with diffusion priors
3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at
rendering photorealistic novel views of complex scenes. However, recovering a high-quality …
latentSplat: Autoencoding variational Gaussians for fast generalizable 3D reconstruction
We present latentSplat, a method to predict semantic Gaussians in a 3D latent space that
can be splatted and decoded by a light-weight generative 2D architecture. Existing methods …
RichDreamer: A generalizable normal-depth diffusion model for detail richness in text-to-3D
Lifting 2D diffusion for 3D generation is a challenging problem due to the lack of geometric
priors and the complex entanglement of materials and lighting in natural images. Existing …
CAD: Photorealistic 3D generation via adversarial distillation
The increased demand for 3D data in AR/VR, robotics, and gaming applications has given rise to
powerful generative pipelines capable of synthesizing high-quality 3D objects. Most of these …
Paint3D: Paint anything 3D with lighting-less texture diffusion models
This paper presents Paint3D, a novel coarse-to-fine generative framework capable of
producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D …
SV4D: Dynamic 3D content generation with multi-frame and multi-view consistency
We present Stable Video 4D (SV4D), a latent video diffusion model for multi-frame and multi-
view consistent dynamic 3D content generation. Unlike previous methods that rely on …