Authors: Svetlin Penkov, Subramanian Ramamoorthy
ArXiv: 1705.08320
Abstract URL: http://arxiv.org/abs/1705.08320v1
Explaining and reasoning about processes which underlie observed black-box
phenomena enables the discovery of causal mechanisms, derivation of suitable
abstract representations and the formulation of more robust predictions. We
propose learning high-level functional programs to represent abstract
models that capture the invariant structure of the observed data. We introduce
the $\pi$-machine (program-induction machine) -- an architecture able to induce
interpretable LISP-like programs from observed data traces. We propose an
optimisation procedure for program learning based on backpropagation, gradient
descent and A* search. We apply the proposed method to three problems: system
identification of dynamical systems, explaining the behaviour of a DQN agent
and learning by demonstration in a human-robot interaction scenario. Our
experimental results show that the $\pi$-machine can efficiently induce
interpretable programs from individual data traces.
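The abstract names A* search as one component of the program-learning procedure. As a generic illustration of that component only (not the paper's implementation, whose states are candidate LISP-like programs rather than grid cells), a minimal A* sketch:

```python
import heapq
from itertools import count

def a_star(start, goal, neighbours, heuristic):
    """Generic A* search returning a lowest-cost path from start to goal.

    neighbours(state) yields (next_state, step_cost) pairs;
    heuristic(state) must never overestimate the remaining cost
    (admissibility), which guarantees an optimal path.
    """
    tie = count()  # tie-breaker so heapq never compares states/paths
    frontier = [(heuristic(start), next(tie), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if cost > best_cost.get(state, float("inf")):
            continue  # stale queue entry; a cheaper route was found already
        for nxt, step in neighbours(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt),
                                          next(tie), new_cost, nxt, path + [nxt]))
    return None, float("inf")

# Toy usage: shortest path on a 3x3 grid with unit step costs
# and a Manhattan-distance heuristic.
def neighbours(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx <= 2 and 0 <= ny <= 2:
            yield (nx, ny), 1

path, cost = a_star((0, 0), (2, 2), neighbours,
                    lambda p: abs(2 - p[0]) + abs(2 - p[1]))
```

In the paper's setting, the search states would be partially specified programs and the heuristic an estimate of remaining program cost; the grid example above only demonstrates the search mechanics.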