An outlook into the future of egocentric vision
What will the future be? We wonder! In this survey, we explore the gap between current
research in egocentric vision and the ever-anticipated future, where wearable computing …
E²(GO)MOTION: Motion augmented event stream for egocentric action recognition
Event cameras are novel bio-inspired sensors, which asynchronously capture pixel-level
intensity changes in the form of "events". Due to their sensing mechanism, event cameras …
With a little help from my temporal context: Multimodal egocentric action recognition
In egocentric videos, actions occur in quick succession. We capitalise on the action's
temporal context and propose a method that learns to attend to surrounding actions in order …
RadioTransformer: A cascaded global-focal transformer for visual attention–guided disease classification
In this work, we present RadioTransformer, a novel student-teacher transformer framework
that leverages radiologists' gaze patterns and models their visuo-cognitive behavior for …
Interaction region visual transformer for egocentric action anticipation
D Roy, R Rajendiran… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Human-object interaction (HOI) and temporal dynamics along the motion paths are the most
important visual cues for egocentric action anticipation. In particular, interaction regions …
GPT4Ego: Unleashing the potential of pre-trained models for zero-shot egocentric action recognition
Vision-Language Models (VLMs), pre-trained on large-scale datasets, have shown
impressive performance in various visual recognition tasks. This advancement paves the …
EgoPCA: A New Framework for Egocentric Hand-Object Interaction Understanding
With the surge in attention to Egocentric Hand-Object Interaction (Ego-HOI), large-scale
datasets such as Ego4D and EPIC-KITCHENS have been proposed. However, most current …
Tasks Reflected in the Eyes: Egocentric Gaze-Aware Visual Task Type Recognition in Virtual Reality
With eye tracking finding widespread utility in augmented reality and virtual reality headsets,
eye gaze has the potential to recognize users' visual tasks and adaptively adjust virtual …
Multimodal across domains gaze target detection
This paper addresses the gaze target detection problem in single images captured from the
third-person perspective. We present a multimodal deep architecture to infer where a person …
Egocentric action recognition by capturing hand-object contact and object state
T Shiota, M Takagi, K Kumagai… - Proceedings of the …, 2024 - openaccess.thecvf.com
Improving the performance of egocentric action recognition (EAR) requires accurately
capturing interactions between actors and objects. In this paper, we propose two learning …