One-shot imitation from observing humans via domain-adaptive meta-learning

T Yu, C Finn, A Xie, S Dasari, T Zhang… - arXiv preprint arXiv …, 2018 - arxiv.org
Humans and animals are capable of learning a new behavior by observing others perform
the skill just once. We consider the problem of allowing a robot to do the same--learning …

AVID: Learning multi-stage tasks via pixel-level translation of human videos

L Smith, N Dhawan, M Zhang, P Abbeel… - arXiv preprint arXiv …, 2019 - arxiv.org
Robotic reinforcement learning (RL) holds the promise of enabling robots to learn complex
behaviors through experience. However, realizing this promise for long-horizon tasks in the …

[Book][B] Learning to learn with gradients

CB Finn - 2018 - search.proquest.com
Humans have a remarkable ability to learn new concepts from only a few examples and
quickly adapt to unforeseen circumstances. To do so, they build upon their prior experience …

Learning to reproduce visually similar movements by minimizing event-based prediction error

J Kaiser, S Melbaum, JCV Tieck… - 2018 7th IEEE …, 2018 - ieeexplore.ieee.org
Prediction is believed to play an important role in the human brain. However, it is still unclear
how predictions are used in the process of learning new movements. In this paper, we …

O2A: One-Shot Observational Learning with Action Vectors

L Pauly, WC Agboh, DC Hogg… - Frontiers in Robotics and AI, 2021 - frontiersin.org
We present O2A, a novel method for learning to perform robotic manipulation tasks from a
single (one-shot) third-person demonstration video. To our knowledge, it is the first time this …

Learning quasi-periodic robot motions from demonstration

X Li, H Cheng, H Chen, J Chen - Autonomous Robots, 2020 - Springer
The goal of Learning from Demonstration is to automatically transfer the skill knowledge
from human to robot. Current research focuses on the problem of modeling …

[Book][B] Adaptation Based Approaches to Distribution Shift Problems

MM Zhang - 2021 - search.proquest.com
Distribution shift in machine learning refers to the general problem where a model is
evaluated on test data drawn from a different distribution than the training data distribution …