UrbanLF: A comprehensive light field dataset for semantic segmentation of urban scenes

H Sheng, R Cong, D Yang, R Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
As one of the fundamental technologies for scene understanding, semantic segmentation
has been widely explored in the last few years. Light field cameras encode the geometric …

Defocus image deblurring network with defocus map estimation as auxiliary task

H Ma, S Liu, Q Liao, J Zhang… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Unlike object motion blur, defocus blur is caused by the limited depth of field of the
camera. The defocus amount can be characterized by the parameter of the point …
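The snippet breaks off, but the blur parameter it refers to is conventionally tied to the circle of confusion of a thin-lens camera. Below is a minimal sketch under a Gaussian-PSF assumption; the function name, parameterization, and numeric values are illustrative, not taken from the paper.

```python
import numpy as np

def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture_diam):
    """Thin-lens circle-of-confusion diameter (same units as the inputs).

    obj_dist      : distance of the scene point from the lens
    focus_dist    : distance at which the camera is focused
    focal_len     : lens focal length
    aperture_diam : aperture diameter (focal_len / f-number)
    Names are illustrative; the paper's exact parameterization may differ.
    """
    return (aperture_diam * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))

# Example: 50 mm lens at f/2.8 focused at 2 m, scene point at 5 m
c = circle_of_confusion(obj_dist=5.0, focus_dist=2.0,
                        focal_len=0.05, aperture_diam=0.05 / 2.8)
sigma = c / 2.0  # rough Gaussian-PSF radius approximating the blur
print(f"CoC ~ {c * 1e3:.2f} mm on the sensor, sigma ~ {sigma * 1e3:.2f} mm")
```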

Dense light field coding: A survey

C Conti, LD Soares, P Nunes - IEEE access, 2020 - ieeexplore.ieee.org
Light Field (LF) imaging is a promising solution for providing more immersive, closer-to-reality
multimedia experiences to end users, with unprecedented creative freedom and …

AIFNet: All-in-focus image restoration network using a light field-based dataset

L Ruan, B Chen, J Li, ML Lam - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Defocus blur often degrades the performance of image understanding tasks such as object
recognition and image segmentation. Restoring an all-in-focus image from its defocused …

Synthesizing light field from a single image with variable MPI and two network fusion.

Q Li, NK Kalantari - ACM Trans. Graph., 2020 - people.engr.tamu.edu
4D light fields capture both the intensity and direction of light, enabling appealing effects
such as viewpoint change, synthetic aperture, and refocusing. However, capturing a light …
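For context on the refocusing effect mentioned here, the textbook operation on a captured 4D light field is shift-and-sum over the sub-aperture views. The sketch below assumes a (U, V, H, W) grayscale array layout and illustrative names; it is the standard operation, not the synthesis method proposed in the paper.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-sum refocusing of a 4D light field.

    lf    : array of shape (U, V, H, W) holding the sub-aperture views
            (grayscale here for brevity).
    alpha : refocus parameter; each view is shifted in proportion to its
            angular offset from the central view.
    """
    U, V, H, W = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            # integer-pixel shift; real implementations interpolate sub-pixel
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)
```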

Bridging unsupervised and supervised depth from focus via all-in-focus supervision

NH Wang, R Wang, YL Liu, YH Huang… - Proceedings of the …, 2021 - openaccess.thecvf.com
Depth estimation is a long-standing and important task in computer vision. Most previous
works estimate depth from input images and assume the images are all-in-focus (AiF) …

All-in-focus imaging from event focal stack

H Lou, M Teng, Y Yang, B Shi - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Traditional focal stack methods require multiple shots to capture images of the same scene
focused at different distances, which makes them ill-suited to dynamic scenes. Generating a …

Hybrid all-in-focus imaging from neuromorphic focal stack

M Teng, H Lou, Y Yang, T Huang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Creating an image focal stack requires multiple shots that capture images at different
depths within the same scene. Such methods are not suitable for scenes undergoing …

DUT-LFSaliency: Versatile dataset and light field-to-RGB saliency detection

Y Piao, Z Rong, S Xu, M Zhang, H Lu - arXiv preprint arXiv:2012.15124, 2020 - arxiv.org
Light field data exhibit favorable characteristics conducive to saliency detection. The
success of learning-based light field saliency detection is heavily dependent on how a …

Real-MFF: A large realistic multi-focus image dataset with ground truth

J Zhang, Q Liao, S Liu, H Ma, W Yang… - Pattern Recognition Letters, 2020 - Elsevier
Multi-focus image fusion, a technique to generate an all-in-focus image from two or more
partially-focused source images, can benefit many computer vision tasks. However, currently …
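For context on the fusion task this dataset targets, a minimal baseline is to pick, per pixel, the source image with the higher local Laplacian energy (a simple sharpness measure). The function name and window size below are illustrative; the paper contributes the dataset, not this method.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_multifocus(img_a, img_b, win=9):
    """Naive multi-focus fusion of two float grayscale images of equal shape.

    img_a, img_b : partially focused source images.
    win          : side length of the window used to average Laplacian energy.
    Returns an all-in-focus estimate built by per-pixel selection.
    """
    energy_a = uniform_filter(laplace(img_a) ** 2, size=win)
    energy_b = uniform_filter(laplace(img_b) ** 2, size=win)
    mask = energy_a >= energy_b  # True where img_a is locally sharper
    return np.where(mask, img_a, img_b)
```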