Predicting gaze-based target selection in augmented reality headsets based on eye and head endpoint distributions
Target selection is a fundamental task in interactive Augmented Reality (AR) systems.
Predicting the intended target of selection in such systems can provide users with a smooth …
Exploring gaze for assisting freehand selection-based text entry in AR
With eye-tracking increasingly available in Augmented Reality, we explore how gaze can be
used to assist freehand gestural text entry. Here the eyes are often coordinated with manual …
GlanceWriter: Writing text by glancing over letters with gaze
Writing text with eye gaze only is an appealing hands-free text entry method. However,
existing gaze-based text entry methods introduce eye fatigue and are slow in typing speed …
Exploring gaze-assisted and hand-based region selection in augmented reality
Region selection is a fundamental task in interactive systems. In 2D user interfaces, users
typically use a rectangle selection tool to formulate a region using a mouse or touchpad …
STAR: Smartphone-analogous Typing in Augmented Reality
While text entry is an essential and frequent task in Augmented Reality (AR) applications,
devising an efficient and easy-to-use text entry method for AR remains an open challenge …
MetaPose: Fast 3D pose from multiple views without 3D supervision
In the era of deep learning, human pose estimation from multiple cameras with unknown
calibration has received little attention to date. We show how to train a neural model to …
Classifying head movements to separate head-gaze and head gestures as distinct modes of input
Head movement is widely used as a uniform type of input for human-computer interaction.
However, there are fundamental differences between head movements coupled with gaze in …
Comparing typing methods for uppercase input in virtual reality: Modifier Key vs. alternative approaches
MJ Kim, YG Son, YM Kim, D Park - International Journal of Human …, 2025 - Elsevier
Typing tasks are basic interactions in a virtual environment (VE). The presence of uppercase
letters affects the meanings of words and their readability. By typing uppercase letters on a …
Online eye-movement classification with temporal convolutional networks
C Elmadjian, C Gonzales, RL Costa… - Behavior Research …, 2023 - Springer
The simultaneous classification of the three most basic eye-movement patterns is known as
the ternary eye-movement classification problem (3EMCP). Dynamic, interactive real-time …
Demonstration of CameraMouseAI: A Head-Based Mouse-Control System for People with Severe Motor Disabilities
We propose the mouse control system CameraMouseAI that includes real-time facial feature
detection and new ways to map facial feature movements to mouse clicks. In addition to …