Predicting gaze-based target selection in augmented reality headsets based on eye and head endpoint distributions

Y Wei, R Shi, D Yu, Y Wang, Y Li, L Yu… - Proceedings of the 2023 …, 2023 - dl.acm.org
Target selection is a fundamental task in interactive Augmented Reality (AR) systems.
Predicting the intended target of selection in such systems can provide users with a smooth …

Exploring gaze for assisting freehand selection-based text entry in ar

MN Lystbæk, K Pfeuffer, JES Grønbæk… - Proceedings of the ACM …, 2022 - dl.acm.org
With eye-tracking increasingly available in Augmented Reality, we explore how gaze can be
used to assist freehand gestural text entry. Here the eyes are often coordinated with manual …

Glancewriter: Writing text by glancing over letters with gaze

W Cui, R Liu, Z Li, Y Wang, A Wang, X Zhao… - Proceedings of the …, 2023 - dl.acm.org
Writing text with eye gaze only is an appealing hands-free text entry method. However,
existing gaze-based text entry methods introduce eye fatigue and are slow in typing speed …

Exploring gaze-assisted and hand-based region selection in augmented reality

R Shi, Y Wei, X Qin, P Hui, HN Liang - … of the ACM on Human-Computer …, 2023 - dl.acm.org
Region selection is a fundamental task in interactive systems. In 2D user interfaces, users
typically use a rectangle selection tool to formulate a region using a mouse or touchpad …

STAR: Smartphone-analogous Typing in Augmented Reality

T Kim, A Karlson, A Gupta, T Grossman, J Wu… - Proceedings of the 36th …, 2023 - dl.acm.org
While text entry is an essential and frequent task in Augmented Reality (AR) applications,
devising an efficient and easy-to-use text entry method for AR remains an open challenge …

Metapose: Fast 3d pose from multiple views without 3d supervision

B Usman, A Tagliasacchi… - Proceedings of the …, 2022 - openaccess.thecvf.com
In the era of deep learning, human pose estimation from multiple cameras with unknown
calibration has received little attention to date. We show how to train a neural model to …

Classifying head movements to separate head-gaze and head gestures as distinct modes of input

BJ Hou, J Newn, L Sidenmark, A Ahmad Khan… - Proceedings of the …, 2023 - dl.acm.org
Head movement is widely used as a uniform type of input for human-computer interaction.
However, there are fundamental differences between head movements coupled with gaze in …

Comparing typing methods for uppercase input in virtual reality: Modifier Key vs. alternative approaches

MJ Kim, YG Son, YM Kim, D Park - International Journal of Human …, 2025 - Elsevier
Typing tasks are basic interactions in a virtual environment (VE). The presence of uppercase
letters affects the meanings of words and their readability. By typing uppercase letters on a …

Online eye-movement classification with temporal convolutional networks

C Elmadjian, C Gonzales, RL Costa… - Behavior Research …, 2023 - Springer
The simultaneous classification of the three most basic eye-movement patterns is known as
the ternary eye-movement classification problem (3EMCP). Dynamic, interactive real-time …

Demonstration of CameraMouseAI: A Head-Based Mouse-Control System for People with Severe Motor Disabilities

F Karimli, H Yu, S Jain, ES Akosah, M Betke… - Proceedings of the 26th …, 2024 - dl.acm.org
We propose the mouse control system CameraMouseAI that includes real-time facial feature
detection and new ways to map facial feature movements to mouse clicks. In addition to …