3D point cloud data processing with machine learning for construction and infrastructure applications: A comprehensive review

K Mirzaei, M Arashpour, E Asadi, H Masoumi… - Advanced Engineering …, 2022 - Elsevier
Point clouds are increasingly being used to improve productivity, quality, and safety
throughout the life cycle of construction and infrastructure projects. While applicable for …

Deep learning on point clouds and its application: A survey

W Liu, J Sun, W Li, T Hu, P Wang - Sensors, 2019 - mdpi.com
Point clouds are a widely used form of 3D data that can be produced by depth sensors such
as Light Detection and Ranging (LiDAR) and RGB-D cameras. Being unordered and …
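
The snippet's point that point clouds are unordered sets is what motivates the permutation-invariant architectures covered by surveys like this one. As a minimal sketch (not code from the survey; the layer sizes and synthetic cloud are illustrative), a shared per-point MLP followed by symmetric max-pooling, in the PointNet style, yields a global feature that does not change when the points are reordered:

```python
import torch
import torch.nn as nn

class PointFeatureExtractor(nn.Module):
    """Shared per-point MLP followed by symmetric max-pooling.

    Because max-pooling ignores the order of its inputs, permuting the
    N points leaves the global feature unchanged -- the property an
    unordered point set requires. Layer sizes are illustrative.
    """
    def __init__(self, in_dim=3, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, points):                # points: (B, N, 3)
        per_point = self.mlp(points)          # (B, N, feat_dim)
        return per_point.max(dim=1).values    # (B, feat_dim)

cloud = torch.randn(1, 1024, 3)               # synthetic (x, y, z) points
shuffled = cloud[:, torch.randperm(1024), :]  # same points, new order
net = PointFeatureExtractor()
assert torch.allclose(net(cloud), net(shuffled), atol=1e-5)
```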

OpenScene: 3D scene understanding with open vocabularies

S Peng, K Genova, C Jiang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a
model for a single task with supervision. We propose OpenScene, an alternative approach …
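
OpenScene's central idea is to place per-point features in CLIP's joint text-image embedding space so that arbitrary text labels can be supplied at query time instead of a fixed label set. The sketch below illustrates only that querying step, assuming such per-point features already exist: the point features are random placeholders, the label list is made up, and the openai/CLIP package is used for the text encoder.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # 512-dim embedding space

# Placeholder per-point features, assumed to already live in CLIP's
# embedding space (OpenScene obtains them by distilling 2D CLIP features
# into a 3D network; random values here are for illustration only).
point_feats = torch.randn(2048, 512, device=device)
point_feats = point_feats / point_feats.norm(dim=-1, keepdim=True)

# Open-vocabulary queries: any label set can be supplied at test time.
labels = ["a chair", "a table", "a sofa", "the floor"]  # made-up label set
tokens = clip.tokenize([f"a photo of {l}" for l in labels]).to(device)
with torch.no_grad():
    text_feats = model.encode_text(tokens).float()
text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

# Cosine similarity between every point and every prompt, then label each
# point with its most similar prompt -- per-point open-vocabulary labeling.
similarity = point_feats @ text_feats.T   # (2048, 4)
per_point_label = similarity.argmax(dim=-1)
```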

PointCLIP V2: Prompting CLIP and GPT for powerful 3D open-world learning

X Zhu, R Zhang, B He, Z Guo, Z Zeng… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large-scale pre-trained models have shown promising open-world performance for both
vision and language tasks. However, their transferred capacity on 3D point clouds is still …

CLIP2Scene: Towards label-efficient 3D scene understanding by CLIP

R Chen, Y Liu, L Kong, X Zhu, Y Ma… - Proceedings of the …, 2023 - openaccess.thecvf.com
Contrastive Language-Image Pre-training (CLIP) achieves promising results in 2D
zero-shot and few-shot learning. Despite the impressive performance in 2D, applying CLIP …

PointCLIP: Point cloud understanding by CLIP

R Zhang, Z Guo, W Zhang, K Li… - Proceedings of the …, 2022 - openaccess.thecvf.com
Recently, zero-shot and few-shot learning via Contrastive Vision-Language Pre-training
(CLIP) has shown inspiring performance on 2D visual recognition, which learns to …
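
PointCLIP's core recipe is to render a point cloud into depth maps that CLIP's image encoder can consume and then classify by similarity to text prompts. The following is a rough, single-view sketch of that pipeline rather than the paper's multi-view implementation; the rasterizer, prompt template, and class names are simplified placeholders, and CLIP's usual image preprocessing is skipped.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def depth_map(points, size=224):
    """Rasterize an (N, 3) point cloud into one front-view depth image.

    A deliberate simplification of PointCLIP's multi-view projection:
    x/y become pixel coordinates, z becomes pixel intensity, and later
    points simply overwrite earlier ones instead of proper z-buffering.
    """
    pts = points - points.min(dim=0).values
    pts = pts / (pts.max(dim=0).values + 1e-8)
    img = torch.zeros(size, size)
    u = (pts[:, 0] * (size - 1)).long()
    v = (pts[:, 1] * (size - 1)).long()
    img[v, u] = pts[:, 2]
    return img.repeat(3, 1, 1)  # CLIP's image encoder expects 3 channels

cloud = torch.randn(2048, 3)  # placeholder point cloud
image = depth_map(cloud).unsqueeze(0).to(device=device, dtype=model.dtype)

classes = ["airplane", "chair", "lamp", "table"]  # illustrative label set
tokens = clip.tokenize([f"a depth map of a {c}" for c in classes]).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image).float()
    txt_feat = model.encode_text(tokens).float()

img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
pred = (img_feat @ txt_feat.T).argmax(dim=-1)  # zero-shot class index
print(classes[pred.item()])
```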

CLIP2: Contrastive language-image-point pretraining from real-world point cloud data

Y Zeng, C Jiang, J Mao, J Han, C Ye… - Proceedings of the …, 2023 - openaccess.thecvf.com
Contrastive Language-Image Pre-training, benefiting from large-scale unlabeled
text-image pairs, has demonstrated great performance in open-world vision understanding …

CLIP2Point: Transfer CLIP to point cloud classification with image-depth pre-training

T Huang, B Dong, Y Yang, X Huang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Pre-training across 3D vision and language remains under development because of limited
training data. Recent works attempt to transfer vision-language (VL) pre-training methods to …

CLIP-FO3D: Learning free open-world 3D scene representations from 2D dense CLIP

J Zhang, R Dong, K Ma - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Training a 3D scene understanding model requires complicated human annotations, which
are laborious to collect and result in a model only encoding closed-set object semantics. In …

Semantic-aware knowledge distillation for few-shot class-incremental learning

A Cheraghian, S Rahman, P Fang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Few-shot class incremental learning (FSCIL) portrays the problem of learning new concepts
gradually, where only a few examples per concept are available to the learner. Due to the …