An outlook into the future of egocentric vision

C Plizzari, G Goletto, A Furnari, S Bansal… - International Journal of …, 2024 - Springer
What will the future be? We wonder! In this survey, we explore the gap between current
research in egocentric vision and the ever-anticipated future, where wearable computing …

E2 (go) motion: Motion augmented event stream for egocentric action recognition

C Plizzari, M Planamente, G Goletto… - Proceedings of the …, 2022 - openaccess.thecvf.com
Event cameras are novel bio-inspired sensors, which asynchronously capture pixel-level
intensity changes in the form of "events". Due to their sensing mechanism, event cameras …

With a little help from my temporal context: Multimodal egocentric action recognition

E Kazakos, J Huh, A Nagrani, A Zisserman… - arXiv preprint arXiv …, 2021 - arxiv.org
In egocentric videos, actions occur in quick succession. We capitalise on the action's
temporal context and propose a method that learns to attend to surrounding actions in order …

RadioTransformer: A cascaded global-focal transformer for visual attention–guided disease classification

M Bhattacharya, S Jain, P Prasanna - European Conference on Computer …, 2022 - Springer
In this work, we present RadioTransformer, a novel student-teacher transformer framework,
that leverages radiologists' gaze patterns and models their visuo-cognitive behavior for …

Interaction region visual transformer for egocentric action anticipation

D Roy, R Rajendiran… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Human-object interaction (HOI) and temporal dynamics along the motion paths are the most
important visual cues for egocentric action anticipation. Especially, interaction regions …

GPT4Ego: unleashing the potential of pre-trained models for zero-shot egocentric action recognition

G Dai, X Shu, W Wu, R Yan… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Vision-Language Models (VLMs), pre-trained on large-scale datasets, have shown
impressive performance in various visual recognition tasks. This advancement paves the …

EgoPCA: A New Framework for Egocentric Hand-Object Interaction Understanding

Y Xu, YL Li, Z Huang, MX Liu, C Lu… - Proceedings of the …, 2023 - openaccess.thecvf.com
With the surge in attention to Egocentric Hand-Object Interaction (Ego-HOI), large-scale
datasets such as Ego4D and EPIC-KITCHENS have been proposed. However, most current …

Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality

Z Wang, F Lu - IEEE Transactions on Visualization and …, 2024 - ieeexplore.ieee.org
With eye tracking finding widespread utility in augmented reality and virtual reality headsets,
eye gaze has the potential to recognize users' visual tasks and adaptively adjust virtual …

Multimodal across domains gaze target detection

F Tonini, C Beyan, E Ricci - … of the 2022 International Conference on …, 2022 - dl.acm.org
This paper addresses the gaze target detection problem in single images captured from the
third-person perspective. We present a multimodal deep architecture to infer where a person …

Egocentric action recognition by capturing hand-object contact and object state

T Shiota, M Takagi, K Kumagai… - Proceedings of the …, 2024 - openaccess.thecvf.com
Improving the performance of egocentric action recognition (EAR) requires accurately
capturing interactions between actors and objects. In this paper, we propose two learning …