Review the state-of-the-art technologies of semantic segmentation based on deep learning

Y Mo, Y Wu, X Yang, F Liu, Y Liao - Neurocomputing, 2022 - Elsevier
The goal of semantic segmentation is to segment the input image according to semantic
information and predict the semantic category of each pixel from a given label set. With the …

A review of multimodal human activity recognition with special emphasis on classification, applications, challenges and future directions

SK Yadav, K Tiwari, HM Pandey, SA Akbar - Knowledge-Based Systems, 2021 - Elsevier
Human activity recognition (HAR) is one of the most important and challenging problems in
computer vision. It has critical applications in a wide variety of tasks including gaming …

Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors

M Wang, Z Yan, T Wang, P Cai, S Gao, Y Zeng… - Nature …, 2020 - nature.com
Gesture recognition using machine-learning methods is valuable in the development of
advanced cybernetics, robotics and healthcare systems, and typically relies on images or …

A survey on wearable sensor modality centred human activity recognition in health care

Y Wang, S Cang, H Yu - Expert Systems with Applications, 2019 - Elsevier
Increased life expectancy coupled with declining birth rates is leading to an aging
population structure. Age-related changes, such as physical or cognitive decline, could …

UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor

C Chen, R Jafari, N Kehtarnavaz - 2015 IEEE International …, 2015 - ieeexplore.ieee.org
Human action recognition has a wide range of applications including biometrics,
surveillance, and human-computer interaction. The use of multimodal sensors for human …

3D skeleton-based human action classification: A survey

LL Presti, M La Cascia - Pattern Recognition, 2016 - Elsevier
In recent years, there has been a proliferation of works on human action classification from
depth sequences. These works generally present methods and/or feature representations …

A survey of depth and inertial sensor fusion for human action recognition

C Chen, R Jafari, N Kehtarnavaz - Multimedia Tools and Applications, 2017 - Springer
A number of review or survey articles have previously appeared on human action
recognition where either vision sensors or inertial sensors are used individually …

Wearable inertial sensors for human motion analysis: A review

IH Lopez-Nava, A Munoz-Melendez - IEEE Sensors Journal, 2016 - ieeexplore.ieee.org
This paper reviews the research literature on human motion analysis using inertial sensors
with the aim of finding out: which configurations of sensors have been used to measure human …

Smart wearable hand device for sign language interpretation system with sensors fusion

BG Lee, SM Lee - IEEE Sensors Journal, 2017 - ieeexplore.ieee.org
Gesturing is an instinctive way of communicating a specific meaning or intent.
Therefore, research into sign language interpretation using gestures has been explored …

Improving human action recognition using fusion of depth camera and inertial sensors

C Chen, R Jafari, N Kehtarnavaz - IEEE Transactions on …, 2014 - ieeexplore.ieee.org
This paper presents a fusion approach for improving human action recognition based on two
sensors of differing modalities, consisting of a depth camera and an inertial body sensor …