Authors: Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass
Where published: ICLR 2019
arXiv: 1811.01157
Abstract URL: http://arxiv.org/abs/1811.01157v1
Abstract:
Neural machine translation (NMT) models learn representations containing
substantial linguistic information. However, it is not clear if such
information is fully distributed or if some of it can be attributed to
individual neurons. We develop unsupervised methods for discovering important
neurons in NMT models. Our methods rely on the intuition that different models
learn similar properties, and do not require any costly external supervision.
We show experimentally that translation quality depends on the discovered
neurons, and find that many of them capture common linguistic phenomena.
Finally, we show how to control NMT translations in predictable ways, by
modifying activations of individual neurons.
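
The abstract's core intuition is that a neuron matters if an independently trained model learned a neuron that behaves like it. A minimal sketch of that idea follows: score each neuron of one model by its highest absolute Pearson correlation with any neuron of a second model over a shared corpus, then rank. All names and shapes here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def max_correlation_scores(acts_a: np.ndarray, acts_b: np.ndarray) -> np.ndarray:
    """For each neuron in model A, return the highest absolute Pearson
    correlation it achieves with any neuron in model B.

    acts_a: (num_tokens, num_neurons_a) activations over a shared corpus
    acts_b: (num_tokens, num_neurons_b) activations over the same corpus
    """
    # Standardize each neuron's activations (zero mean, unit variance).
    za = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
    zb = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
    # Pearson correlations for all neuron pairs: (num_neurons_a, num_neurons_b).
    corr = za.T @ zb / acts_a.shape[0]
    # A neuron is "important" if some neuron in the other model mirrors it.
    return np.abs(corr).max(axis=1)

# Usage with stand-in data: rank model A's neurons, most correlated first.
acts_a = np.random.randn(1000, 512)
acts_b = np.random.randn(1000, 512)
ranking = np.argsort(-max_correlation_scores(acts_a, acts_b))
```

Because the score only needs activations from two (or more) independently trained models, it requires no labeled data, which is what makes the method unsupervised.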
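The final claim, controlling translations by modifying individual neuron activations, can be sketched as an intervention that clamps one unit to a fixed value during decoding. The sketch below assumes a PyTorch model whose decoder layer outputs its hidden-state tensor directly; the layer and neuron indices are hypothetical, and the `translate` call is a placeholder rather than a real API.

```python
import torch

def clamp_neuron(module: torch.nn.Module, neuron: int, value: float):
    """Register a forward hook that overwrites one unit's activation."""
    def hook(mod, inputs, output):
        out = output.clone()
        out[..., neuron] = value  # force the chosen neuron to `value`
        return out                # returned tensor replaces the output
    return module.register_forward_hook(hook)

# Hypothetical usage: clamp a discovered neuron and compare translations.
# handle = clamp_neuron(model.decoder.layers[0], neuron=250, value=2.0)
# translation = model.translate(source_sentence)
# handle.remove()  # restore normal behavior
```

If the clamped neuron captures a linguistic property (e.g., tense or gender), varying `value` should change that property in the output in a predictable way, which is the effect the abstract describes.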