A survey on deep learning for human activity recognition

F Gu, MH Chung, M Chignell, S Valaee… - ACM Computing …, 2021 - dl.acm.org
Human activity recognition is key to many applications such as healthcare and smart
homes. In this study, we provide a comprehensive survey of recent advances and challenges …

Toward storytelling from visual lifelogging: An overview

M Bolaños, M Dimiccoli… - IEEE Transactions on …, 2016 - ieeexplore.ieee.org
Visual lifelogging consists of acquiring images that capture the daily experiences of the user
by wearing a camera over a long period of time. The pictures taken offer considerable …

Socratic models: Composing zero-shot multimodal reasoning with language

A Zeng, M Attarian, B Ichter, K Choromanski… - arXiv preprint arXiv …, 2022 - arxiv.org
Large pretrained (e.g., "foundation") models exhibit distinct capabilities depending on the
domain of data they are trained on. While these domains are generic, they may only barely …

LSTM-CNN architecture for human activity recognition

K Xia, J Huang, H Wang - IEEE Access, 2020 - ieeexplore.ieee.org
In the past years, traditional pattern recognition methods have made great progress.
However, these methods rely heavily on manual feature extraction, which may hinder the …

H2O: Two hands manipulating objects for first person interaction recognition

T Kwon, B Tekin, J Stühmer, F Bogo… - Proceedings of the …, 2021 - openaccess.thecvf.com
We present a comprehensive framework for egocentric interaction recognition using
markerless 3D annotations of two hands manipulating objects. To this end, we propose a …

Social lstm: Human trajectory prediction in crowded spaces

A Alahi, K Goel, V Ramanathan… - Proceedings of the …, 2016 - openaccess.thecvf.com
Humans navigate complex crowded environments based on social conventions: they
respect personal space, yield right-of-way, and avoid collisions. In our work, we propose a …

H+O: Unified egocentric recognition of 3D hand-object poses and interactions

B Tekin, F Bogo, M Pollefeys - Proceedings of the IEEE/CVF …, 2019 - openaccess.thecvf.com
We present a unified framework for understanding 3D hand and object interactions in raw
image sequences from egocentric RGB cameras. Given a single RGB image, our model …

In the eye of beholder: Joint learning of gaze and actions in first person video

Y Li, M Liu, JM Rehg - Proceedings of the European …, 2018 - openaccess.thecvf.com
We address the task of jointly determining what a person is doing and where they are
looking based on the analysis of video captured by a headworn camera. We propose a …

A survey on activity detection and classification using wearable sensors

M Cornacchia, K Ozcan, Y Zheng… - IEEE Sensors …, 2016 - ieeexplore.ieee.org
Activity detection and classification are very important for autonomous monitoring of humans
in applications including assisted living, rehabilitation, and surveillance. Wearable sensors …

Egobody: Human body shape and motion of interacting people from head-mounted devices

S Zhang, Q Ma, Y Zhang, Z Qian, T Kwon… - European conference on …, 2022 - Springer
Understanding social interactions from egocentric views is crucial for many applications,
ranging from assistive robotics to AR/VR. Key to reasoning about interactions is to …