Authors: Paul Jasek, Bernard Abayowa
ArXiv: 1808.04287
Abstract URL: http://arxiv.org/abs/1808.04287v1
We present an approach for reconfiguration of dynamic visual sensor networks
with deep reinforcement learning (RL). Our RL agent uses a modified
asynchronous advantage actor-critic framework and the recently proposed
Relational Network module at the foundation of its network architecture. To
address the issue of sample inefficiency in current approaches to model-free
reinforcement learning, we train our system in an abstract simulation
environment that represents inputs from a dynamic scene. Our system is
validated using inputs from a real-world scenario and preexisting object
detection and tracking algorithms.
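The Relation Network module referenced above (Santoro et al.'s pairwise relational-reasoning block) aggregates a learned function over all ordered pairs of object embeddings and passes the sum through an output network. The sketch below shows the core computation in NumPy; the object dimensionality, layer sizes, and single-layer `g`/`f` maps are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

def relation_network(objects, g, f):
    """Relation Network core: f applied to the sum of g over
    all ordered pairs (o_i, o_j) of object embeddings."""
    pair_sum = sum(
        g(np.concatenate([o_i, o_j]))
        for o_i in objects
        for o_j in objects
    )
    return f(pair_sum)

# Toy g and f: single ReLU/linear layers with random weights
# (dimensions are hypothetical, chosen only for this sketch).
rng = np.random.default_rng(0)
W_g = rng.standard_normal((8, 6))  # pair input: 2 * 3 -> 8
W_f = rng.standard_normal((4, 8))  # aggregated sum: 8 -> 4

g = lambda pair: np.maximum(W_g @ pair, 0.0)  # one ReLU layer
f = lambda x: W_f @ x                         # linear output head

objs = [rng.standard_normal(3) for _ in range(5)]  # 5 objects, dim 3
out = relation_network(objs, g, f)
print(out.shape)  # (4,)
```

Because the sum over pairs is permutation-invariant, the module's output does not depend on the order in which scene objects (e.g. tracked detections) are presented, which is what makes it a natural fit for set-valued sensor-network inputs.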