Authors: András Lőrincz, Máté Csákvári, Áron Fóthi, Zoltán Ádám Milacski, András Sárkány, Zoltán Tősér
ArXiv: 1612.00745
Abstract URL: http://arxiv.org/abs/1612.00745v1
Machine learning is making substantial progress in diverse applications. The
success is mostly due to advances in deep learning. However, deep learning can
make mistakes, and its ability to generalize to new tasks is questionable.
We ask when and how one can combine network outputs when (i) details of the
observations are evaluated by learned deep components and (ii) facts and
confirmation rules are available in knowledge-based systems. We show that in
limited contexts the required number of training samples can be low and that
self-improvement of pre-trained networks in more general contexts is possible.
We argue that combining sparse outlier detection with deep components
that can support each other diminishes the fragility of deep methods, an
important requirement for engineering applications. We argue that supervised
learning of labels may be fully eliminated under certain conditions: a
component-based architecture together with a knowledge-based system can train
itself and provide high-quality answers. We demonstrate these concepts on the
State Farm Distracted Driver Detection benchmark. We argue that the view of the
Study Panel (2016) may overestimate the requirements of `years of focused
research' and `careful, unique construction' for `AI systems'.
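
The abstract does not specify an implementation, but the core idea of combining confident deep-network outputs with confirmation rules from a knowledge base admits a minimal sketch. The snippet below is an illustration only, not the authors' code: the class names, the `RULES` table, and the `confirmed_prediction` helper are all hypothetical stand-ins for learned components and a knowledge-based system on a task like distracted-driver detection.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): gate the softmax
# output of a deep component with confirmation rules from a knowledge base.
# Each rule maps a candidate label to a boolean check over auxiliary facts
# produced by other components (e.g. "texting" needs a detected phone).

RULES = {
    "texting": lambda facts: facts.get("phone_detected", False),
    "drinking": lambda facts: facts.get("cup_detected", False),
    "safe_driving": lambda facts: True,  # no extra confirmation required
}

def confirmed_prediction(class_names, probs, facts, threshold=0.9):
    """Return a (label, accepted) pair.

    probs: softmax output of the deep component over class_names.
    facts: dict of auxiliary detections from other components.
    A prediction is accepted as a self-training label only if it is
    both confident and confirmed by a rule, so accepted samples need
    no externally supervised labels.
    """
    best_idx = int(np.argmax(probs))
    best = class_names[best_idx]
    confident = probs[best_idx] >= threshold
    confirmed = RULES.get(best, lambda f: False)(facts)
    return best, bool(confident and confirmed)

# Usage: a confident "texting" output is accepted only when another
# component has actually detected a phone in the frame.
names = ["safe_driving", "texting", "drinking"]
label, accept = confirmed_prediction(
    names, np.array([0.05, 0.92, 0.03]), {"phone_detected": True}
)
print(label, accept)  # texting True
```

In this reading, the rules act as a sparse outlier filter: network outputs that contradict known facts are rejected rather than propagated, which is one way the described combination could reduce the fragility of the deep components.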