A comprehensive survey on deep graph representation learning

W Ju, Z Fang, Y Gu, Z Liu, Q Long, Z Qiao, Y Qin… - Neural Networks, 2024 - Elsevier
Graph representation learning aims to effectively encode high-dimensional sparse graph-
structured data into low-dimensional dense vectors, which is a fundamental task that has …
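
As a rough illustration of the encoding task this survey covers — mapping sparse, high-dimensional node features on a graph to dense low-dimensional vectors — the following is a minimal message-passing sketch in PyTorch. It is a generic single GCN-style layer, not any particular method from the survey; the names GraphEncoder, feats, and adj, and all dimensions, are illustrative assumptions.

import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """Minimal sketch: one GCN-style message-passing layer that maps
    high-dimensional sparse node features to low-dimensional dense embeddings.
    Generic illustration only, not a method from the survey."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) adjacency matrix; add self-loops and row-normalize.
        adj = adj + torch.eye(adj.size(0))
        deg = adj.sum(dim=1, keepdim=True)
        adj_norm = adj / deg
        # Aggregate neighbor features, then project to a dense low-dim space.
        return torch.relu(self.linear(adj_norm @ feats))

# Usage: 5 nodes with 1000-dim sparse features -> 16-dim dense embeddings.
feats = torch.zeros(5, 1000)
feats[torch.arange(5), torch.randint(0, 1000, (5,))] = 1.0  # sparse one-hot-style features
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                          # symmetrize
emb = GraphEncoder(1000, 16)(feats, adj)                     # (5, 16) dense vectors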

Multi3DRefer: Grounding text description to multiple 3D objects

Y Zhang, ZM Gong, AX Chang - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We introduce the task of localizing a flexible number of objects in real-world 3D scenes
using natural language descriptions. Existing 3D visual grounding tasks focus on localizing …
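
The distinguishing point of this task formulation is that a description may refer to zero, one, or several objects. Below is a minimal sketch of that output-selection step, assuming per-proposal matching scores from some upstream grounding network; the function name and threshold are hypothetical, and this is not the Multi3DRefer model itself.

import torch

def select_grounded_objects(match_scores: torch.Tensor, threshold: float = 0.5) -> list[int]:
    """Sketch of multi-object grounding output selection (hypothetical, not the paper's code).

    match_scores: (num_proposals,) confidence that each 3D proposal matches the description.
    Returns the indices of all proposals above the threshold — possibly none, one, or many —
    matching the 'flexible number of objects' setting described in the snippet."""
    return torch.nonzero(match_scores > threshold).flatten().tolist()

scores = torch.tensor([0.91, 0.08, 0.73, 0.12])  # scores from some hypothetical grounding model
print(select_grounded_objects(scores))           # -> [0, 2]: two objects grounded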

LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning

S Chen, X Chen, C Zhang, M Li, G Yu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Recent progress in Large Multimodal Models (LMMs) has opened up great
possibilities for various applications in the field of human-machine interactions. However …

3DJCG: A unified framework for joint dense captioning and visual grounding on 3D point clouds

D Cai, L Zhao, J Zhang, L Sheng… - Proceedings of the …, 2022 - openaccess.thecvf.com
Observing that the 3D captioning task and the 3D grounding task contain both shared and
complementary information in nature, in this work, we propose a unified framework to jointly …
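
A joint setup of this kind is commonly realized as one shared scene/object encoder feeding a captioning head and a grounding head, so the two tasks can exchange information. The sketch below shows only that generic layout under assumed feature dimensions; it is not the 3DJCG architecture.

import torch
import torch.nn as nn

class JointCaptionGroundModel(nn.Module):
    """Sketch: one shared object encoder with two task heads, illustrating the
    shared-plus-complementary structure the snippet describes. Dimensions and
    heads are illustrative assumptions, not 3DJCG's actual design."""

    def __init__(self, feat_dim: int = 256, vocab_size: int = 1000):
        super().__init__()
        self.shared_encoder = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.caption_head = nn.Linear(feat_dim, vocab_size)   # per-object word logits
        self.ground_head = nn.Linear(feat_dim, 1)             # per-object matching score

    def forward(self, object_feats: torch.Tensor):
        shared = self.shared_encoder(object_feats)            # (num_objects, feat_dim)
        caption_logits = self.caption_head(shared)            # used by the captioning task
        ground_scores = self.ground_head(shared).squeeze(-1)  # used by the grounding task
        return caption_logits, ground_scores

obj_feats = torch.randn(8, 256)  # 8 detected objects (hypothetical features)
caps, grounds = JointCaptionGroundModel()(obj_feats)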

EDA: Explicit text-decoupling and dense alignment for 3D visual grounding

Y Wu, X Cheng, R Zhang, Z Cheng… - Proceedings of the …, 2023 - openaccess.thecvf.com
3D visual grounding aims to find the object within point clouds mentioned by free-form
natural language descriptions with rich semantic cues. However, existing methods …
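
The two ideas named in the title — decoupling the text into semantic components and densely aligning them with visual features — can be sketched as component-wise similarity scoring. The snippet below assumes precomputed component and object embeddings and uses averaged cosine similarity; it illustrates the general idea only, not EDA's actual decoupling or training objective.

import torch
import torch.nn.functional as F

def dense_alignment_scores(text_components: torch.Tensor, object_feats: torch.Tensor) -> torch.Tensor:
    """Sketch of dense alignment between decoupled text components and 3D object features.

    text_components: (num_components, d) embeddings, e.g. one per decoupled phrase
                     (main object, attributes, spatial relation, ...), assumed precomputed.
    object_feats:    (num_objects, d) visual features for candidate objects.
    Returns (num_objects,) scores: each object's average cosine similarity over all
    components, so every text component contributes rather than a single sentence vector."""
    text_n = F.normalize(text_components, dim=-1)
    obj_n = F.normalize(object_feats, dim=-1)
    sim = obj_n @ text_n.t()   # (num_objects, num_components) dense similarity matrix
    return sim.mean(dim=-1)    # aggregate over components

scores = dense_alignment_scores(torch.randn(4, 128), torch.randn(10, 128))
target = scores.argmax().item()  # predicted object index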

Multi-view transformer for 3D visual grounding

S Huang, Y Chen, J Jia, L Wang - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
The 3D visual grounding task aims to ground a natural language description to the targeted
object in a 3D scene, which is usually represented in 3D point clouds. Previous works …
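
A common way to make grounding in point clouds robust to viewpoint is to encode the scene from several rotated views and aggregate the per-view features, which is the "multi-view" idea in the title. The sketch below rotates the scene about the vertical axis and averages features from a stand-in encoder; it is a generic illustration, not the paper's transformer.

import math
import torch

def multi_view_features(points: torch.Tensor, encoder, num_views: int = 4) -> torch.Tensor:
    """Sketch of multi-view aggregation for a 3D point cloud (illustrative only).

    points:  (N, 3) xyz coordinates of the scene.
    encoder: any callable mapping (N, 3) -> (d,) scene features (assumed to exist).
    Rotates the scene about the z-axis into several views, encodes each view,
    and averages the per-view features into one aggregated representation."""
    feats = []
    for k in range(num_views):
        theta = 2 * math.pi * k / num_views
        rot = torch.tensor([[math.cos(theta), -math.sin(theta), 0.0],
                            [math.sin(theta),  math.cos(theta), 0.0],
                            [0.0,              0.0,             1.0]])
        feats.append(encoder(points @ rot.t()))
    return torch.stack(feats).mean(dim=0)

# Usage with a trivial stand-in encoder (coordinates projected to 64 dims, then mean-pooled).
proj = torch.nn.Linear(3, 64)
scene = torch.randn(2048, 3)
agg = multi_view_features(scene, lambda p: proj(p).mean(dim=0))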