Off-Policy Imitation Learning from Observations

Abstract

Learning from Observations (LfO) is a practical reinforcement learning scenario in which many applications can benefit from reusing incomplete resources. Compared to conventional imitation learning (IL), LfO is more challenging because of the lack of expert action guidance. Distribution matching lies at the heart of both conventional IL and LfO. Traditional distribution matching approaches are sample-costly, as they depend on on-policy transitions for policy learning. Toward better sample efficiency, off-policy solutions have been proposed, which, however, either lack comprehensive theoretical justification or depend on the guidance of expert actions. In this work, we propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner. To further accelerate learning, we regularize the policy update with an inverse action model, which assists distribution matching from a mode-covering perspective. Extensive empirical results on challenging locomotion tasks indicate that our approach is comparable with state-of-the-art methods in terms of both sample efficiency and asymptotic performance.
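The inverse-action-model idea mentioned in the abstract can be pictured with a small sketch. The snippet below is an illustrative example only, not the paper's exact objective: the name InverseActionModel, the network sizes, and the squared-error losses are assumptions made for the sketch. The idea it shows is that an inverse dynamics network is fit on the agent's own (state, action, next-state) transitions, and its action predictions on expert state pairs then serve as pseudo-labels that regularize the policy, even though the expert demonstrations contain no actions.

```python
# Minimal sketch (assumptions: architecture, loss forms, and names are illustrative).
import torch
import torch.nn as nn


class InverseActionModel(nn.Module):
    """Predicts the action a_t that produced the transition (s_t, s_{t+1})."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))


def inverse_model_loss(model, s, a, s_next):
    """Regression loss on the agent's own transitions, where actions are known."""
    return ((model(s, s_next) - a) ** 2).mean()


def policy_regularizer(policy, model, expert_s, expert_s_next):
    """Push the policy toward the actions the inverse model infers for expert
    state transitions; no expert actions are required."""
    with torch.no_grad():
        pseudo_actions = model(expert_s, expert_s_next)
    return ((policy(expert_s) - pseudo_actions) ** 2).mean()


if __name__ == "__main__":
    state_dim, action_dim = 11, 3  # illustrative dimensions
    model = InverseActionModel(state_dim, action_dim)
    policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                           nn.Linear(256, action_dim), nn.Tanh())

    # Random batches standing in for replay-buffer and expert-observation data.
    s, a, s_next = torch.randn(64, state_dim), torch.randn(64, action_dim), torch.randn(64, state_dim)
    expert_s, expert_s_next = torch.randn(64, state_dim), torch.randn(64, state_dim)

    print(inverse_model_loss(model, s, a, s_next).item())
    print(policy_regularizer(policy, model, expert_s, expert_s_next).item())
```

In the full method this regularizer would be combined with an off-policy distribution-matching objective; the sketch isolates only the inverse-action component.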

Publication
34th Conference on Neural Information Processing Systems (NeurIPS 2020)
Zhuangdi Zhu
Assistant Professor (Tenure-Track)

My research centers on accountable, scalable, and trustworthy AI, including decentralized machine learning, knowledge transfer for supervised and reinforcement learning, and debiased representation learning.
