InstantMesh: Efficient 3D mesh generation from a single image with sparse-view large reconstruction models

J Xu, W Cheng, Y Gao, X Wang, S Gao… - arXiv preprint arXiv …, 2024 - arxiv.org
We present InstantMesh, a feed-forward framework for instant 3D mesh generation from a
single image, featuring state-of-the-art generation quality and significant training scalability …

4Real: Towards photorealistic 4D scene generation via video diffusion models

H Yu, C Wang, P Zhuang… - Advances in …, 2025 - proceedings.neurips.cc
Existing dynamic scene generation methods mostly rely on distilling knowledge from pre-
trained 3D generative models, which are typically fine-tuned on synthetic object datasets. As …

DreamReward: Text-to-3D generation with human preference

J Ye, F Liu, Q Li, Z Wang, Y Wang, X Wang… - … on Computer Vision, 2024 - Springer
3D content creation from text prompts has shown remarkable success recently.
However, current text-to-3D methods often generate 3D results that do not align well with …

MVD-Fusion: Single-view 3D via depth-consistent multi-view generation

H Hu, Z Zhou, V Jampani… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
We present MVD-Fusion: a method for single-view 3D inference via generative modeling of
multi-view-consistent RGB-D images. While recent methods pursuing 3D inference advocate …

UniDream: Unifying diffusion priors for relightable text-to-3D generation

Z Liu, Y Li, Y Lin, X Yu, S Peng, YP Cao, X Qi… - … on Computer Vision, 2024 - Springer
Recent advancements in text-to-3D generation technology have significantly improved the
conversion of textual descriptions into imaginative, well-geometrical, and finely textured 3D …

HumanSplat: Generalizable single-image human Gaussian splatting with structure priors

P Pan, Z Su, C Lin, Z Fan, Y Zhang… - Advances in …, 2025 - proceedings.neurips.cc
Despite recent advancements in high-fidelity human reconstruction techniques, the
requirements for densely captured images or time-consuming per-instance optimization …

Unique3D: High-quality and efficient 3D mesh generation from a single image

K Wu, F Liu, Z Cai, R Yan, H Wang, Y Hu… - The Thirty-eighth …, 2024 - openreview.net
In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently
generating high-quality 3D meshes from single-view images, featuring state-of-the-art …

Vivid-ZOO: Multi-view video generation with diffusion model

B Li, C Zheng, W Zhu, J Mai, B Zhang… - Advances in …, 2025 - proceedings.neurips.cc
While diffusion models have shown impressive performance in 2D image/video generation,
diffusion-based Text-to-Multi-view-Video (T2MVid) generation remains underexplored. The …

MeshAnything: Artist-created mesh generation with autoregressive transformers

Y Chen, T He, D Huang, W Ye, S Chen, J Tang… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, 3D assets created via reconstruction and generation have matched the quality of
manually crafted assets, highlighting their potential for replacement. However, this potential …

A Review of Visual Estimation Research on Live Pig Weight

Z Wang, Q Li, Q Yu, W Qian, R Gao, R Wang, T Wu… - Sensors, 2024 - mdpi.com
The weight of live pigs is directly related to their health, nutrition management, disease
prevention and control, and the overall economic benefits to livestock enterprises. Direct …