Flexible and stretchable light-emitting diodes and photodetectors for human-centric optoelectronics

S Chang, JH Koo, J Yoo, MS Kim, MK Choi… - Chemical …, 2024 - ACS Publications
Optoelectronic devices with unconventional form factors, such as flexible and stretchable
light-emitting or photoresponsive devices, are core elements for the next-generation human …

Knowledge-integrated machine learning for materials: lessons from gameplaying and robotics

K Hippalgaonkar, Q Li, X Wang, JW Fisher III… - Nature Reviews …, 2023 - nature.com
As materials researchers increasingly embrace machine-learning (ML) methods, it is natural
to wonder what lessons can be learned from other fields undergoing similar developments …

Edge artificial intelligence for 6G: Vision, enabling technologies, and applications

KB Letaief, Y Shi, J Lu, J Lu - IEEE Journal on Selected Areas in …, 2021 - ieeexplore.ieee.org
The thriving of artificial intelligence (AI) applications is driving the further evolution of
wireless networks. It has been envisioned that 6G will be transformative and will …

Multimodal human–robot interaction for human‐centric smart manufacturing: a survey

T Wang, P Zheng, S Li, L Wang - Advanced Intelligent Systems, 2024 - Wiley Online Library
Human–robot interaction (HRI) has gained prominence in recent years, and multimodal
communication and control strategies are needed to guarantee a secure, efficient, and …

RH20T: A comprehensive robotic dataset for learning diverse skills in one-shot

HS Fang, H Fang, Z Tang, J Liu, C Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
A key challenge in robotic manipulation in open domains is how to acquire diverse and
generalizable skills for robots. Recent research in one-shot imitation learning has shown …

Intelligent recognition using ultralight multifunctional nano-layered carbon aerogel sensors with human-like tactile perception

H Zhao, Y Zhang, L Han, W Qian, J Wang, H Wu, J Li… - Nano-Micro Letters, 2024 - Springer
Humans can perceive our complex world through multi-sensory fusion. Under limited visual
conditions, people can sense a variety of tactile signals to identify objects accurately and …

Making sense of vision and touch: Learning multimodal representations for contact-rich tasks

MA Lee, Y Zhu, P Zachares, M Tan… - IEEE Transactions …, 2020 - ieeexplore.ieee.org
Contact-rich manipulation tasks in unstructured environments often require both haptic and
visual feedback. It is nontrivial to manually design a robot controller that combines these …

PolyTransform: Deep polygon transformer for instance segmentation

J Liang, N Homayounfar, WC Ma… - Proceedings of the …, 2020 - openaccess.thecvf.com
In this paper, we propose PolyTransform, a novel instance segmentation algorithm that
produces precise, geometry-preserving masks by combining the strengths of prevailing …

Toward next-generation learned robot manipulation

J Cui, J Trinkle - Science Robotics, 2021 - science.org
The ever-changing nature of human environments presents great challenges to robot
manipulation. Objects that robots must manipulate vary in shape, weight, and configuration …

A review of brain-inspired cognition and navigation technology for mobile robots

Y Bai, S Shao, J Zhang, X Zhao, C Fang… - Cyborg and Bionic …, 2024 - spj.science.org
Brain-inspired navigation technologies combine environmental perception, spatial cognition,
and target navigation to create a comprehensive navigation research system. Researchers …