Yantian Zha
The University of Maryland College Park
Verified email at asu.edu - Homepage
Title
Cited by
Year
Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems
S Kambhampati, S Sreedharan, M Verma, Y Zha, L Guan
AAAI 2022 Blue Sky Paper, 2021
57 · 2021
Explicable planning as minimizing distance from expected behavior
A Kulkarni, Y Zha, T Chakraborti, SG Vadlamudi, Y Zhang, ...
AAMAS Conference proceedings, 2019
51 · 2019
Explicability as minimizing distance from expected behavior
A Kulkarni, Y Zha, T Chakraborti, SG Vadlamudi, Y Zhang, ...
arXiv preprint arXiv:1611.05497, 2016
46 · 2016
Discovering underlying plans based on shallow models
HH Zhuo, Y Zha, S Kambhampati, X Tian
ACM Transactions on Intelligent Systems and Technology (TIST) 11 (2), 1-30, 2020
32 · 2020
"Task Success" is not Enough: Investigating the Use of Video-Language Models as Behavior Critics for Catching Undesirable Agent Behaviors
L Guan, Y Zhou, D Liu, Y Zha, HB Amor, S Kambhampati
Conference on Language Modeling, 2024
14 · 2024
Recognizing plans by learning embeddings from observed action distributions
Y Zha, Y Li, S Gopalakrishnan, B Li, S Kambhampati
AAMAS Conference proceedings, 2017
10 · 2017
NatSGD: A Dataset with Speech, Gestures, and Demonstrations for Robot Learning in Natural Human-Robot Interaction
S Shrestha, Y Zha, G Gao, C Fermuller, Y Aloimonos
IEEE/ACM International Conference on Human-Robot Interaction, 2023
9 · 2023
Contrastively Learning Visual Attention as Affordance Cues from Demonstrations for Robotic Grasping
Y Zha, S Bhambri, L Guan
The IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2021
9 · 2021
Learning from ambiguous demonstrations with self-explanation guided reinforcement learning
Y Zha, L Guan, S Kambhampati
AAAI-24 Main Track & AAAI-22 Workshop on Reinforcement Learning in Games, 2021
8 · 2021
Plan-recognition-driven attention modeling for visual recognition
Y Zha, Y Li, T Yu, S Kambhampati, B Li
AAAI 2019 Workshop on Plan, Activity, and Intent Recognition (PAIR), 2018
1 · 2018
NatSGLD: A Dataset with Speech, Gesture, Logic, and Demonstration for Robot Learning in Natural Human-Robot Interaction
S Shrestha, Y Zha, S Banagiri, G Gao, Y Aloimonos, C Fermüller
arXiv preprint arXiv:2502.16718, 2025
2025
AAM-SEALS: Developing Aerial-Aquatic Manipulators in SEa, Air, and Land Simulator
WW Yang, K Kona, Y Jain, A Bhamidipati, T Atzili, X Lin, Y Zha
arXiv preprint arXiv:2412.19744, 2024
2024
Perceiving, Planning, Acting, and Self-Explaining: A Cognitive Quartet with Four Neural Networks
Y Zha
Arizona State University, 2022
2022
Discovering Underlying Plans Based on Shallow Models
H Hankui Zhuo, Y Zha, S Kambhampati
arXiv e-prints, arXiv: 1803.02208, 2018
2018
Articles 1–14