Dilip Arumugam
Postdoctoral Research Associate - Princeton University
Verified email address at cs.princeton.edu · Homepage
Title
Cited by
Year
State abstractions for lifelong reinforcement learning
D Abel, D Arumugam, L Lehnert, M Littman
International Conference on Machine Learning, 10-19, 2018
Cited by 163 · 2018
Deep reinforcement learning from policy-dependent human feedback
D Arumugam, JK Lee, S Saskin, ML Littman
arXiv preprint arXiv:1902.04257, 2019
Cited by 114 · 2019
Sequence-to-sequence language grounding of non-Markovian task specifications
N Gopalan, D Arumugam, LLS Wong, S Tellex
Robotics: Science and Systems, 2018
Cited by 72 · 2018
Value preserving state-action abstractions
D Abel, N Umbanhowar, K Khetarpal, D Arumugam, D Precup, M Littman
International Conference on Artificial Intelligence and Statistics, 1639-1650, 2020
Cited by 68 · 2020
Accurately and efficiently interpreting human-robot instructions of varying granularities
D Arumugam, S Karamcheti, N Gopalan, LLS Wong, S Tellex
Robotics: Science and Systems, 2017
Cited by 68 · 2017
Grounding English commands to reward functions
J MacGlashan, M Babes-Vroman, M desJardins, ML Littman, S Muresan, ...
Robotics: Science and Systems, 2015
Cited by 67* · 2015
State abstraction as compression in apprenticeship learning
D Abel, D Arumugam, K Asadi, Y Jinnai, ML Littman, LLS Wong
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 3134-3142, 2019
Cited by 64 · 2019
Grounding natural language instructions to semantic goal representations for abstraction and generalization
D Arumugam, S Karamcheti, N Gopalan, EC Williams, M Rhee, LLS Wong, ...
Autonomous Robots 43, 449-468, 2019
Cited by 33 · 2019
An information-theoretic perspective on credit assignment in reinforcement learning
D Arumugam, P Henderson, PL Bacon
arXiv preprint arXiv:2103.06224, 2021
Cited by 24 · 2021
Deciding what to learn: A rate-distortion approach
D Arumugam, B Van Roy
International Conference on Machine Learning, 373-382, 2021
Cited by 23 · 2021
A tale of two DRAGGNs: A hybrid approach for interpreting action-oriented and goal-oriented instructions
S Karamcheti, EC Williams, D Arumugam, M Rhee, N Gopalan, LLS Wong, ...
arXiv preprint arXiv:1707.08668, 2017
Cited by 22 · 2017
Deciding what to model: Value-equivalent sampling for reinforcement learning
D Arumugam, B Van Roy
Advances in Neural Information Processing Systems 35, 9024-9044, 2022
Cited by 16 · 2022
The value of information when deciding what to learn
D Arumugam, B Van Roy
Advances in Neural Information Processing Systems 34, 9816-9827, 2021
Cited by 14 · 2021
Interpreting human-robot instructions
S Tellex, D Arumugam, S Karamcheti, N Gopalan, LLS Wong
US Patent 10,606,898, 2020
Cited by 13 · 2020
Mitigating planner overfitting in model-based reinforcement learning
D Arumugam, D Abel, K Asadi, N Gopalan, C Grimm, JK Lee, L Lehnert, ...
arXiv preprint arXiv:1812.01129, 2018
Cited by 13 · 2018
Toward good abstractions for lifelong learning
D Abel, D Arumugam, L Lehnert, ML Littman
NIPS Workshop on Hierarchical Reinforcement Learning, 2017
Cited by 13 · 2017
Bayesian reinforcement learning with limited cognitive load
D Arumugam, MK Ho, ND Goodman, B Van Roy
Open Mind 8, 395-438, 2024
Cited by 12 · 2024
Modeling latent attention within neural networks
C Grimm, D Arumugam, S Karamcheti, D Abel, LLS Wong, ML Littman
arXiv preprint arXiv:1706.00536, 2017
Cited by 10* · 2017
Social contract AI: Aligning AI assistants with implicit group norms
JP Fränken, S Kwok, P Ye, K Gandhi, D Arumugam, J Moore, A Tamkin, ...
arXiv preprint arXiv:2310.17769, 2023
Cited by 9 · 2023
Shattering the agent-environment interface for fine-tuning inclusive language models
W Xu, S Dong, D Arumugam, B Van Roy
arXiv preprint arXiv:2305.11455, 2023
Cited by 8 · 2023
Articles 1–20