
Identifying and Controlling Important Neurons in Neural Machine Translation

lib:4812b6a83034a3e0 (v1.0.0)

Authors: Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass
Where published: ICLR 2019
ArXiv: 1811.01157
Abstract URL: http://arxiv.org/abs/1811.01157v1

Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
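The abstract's unsupervised discovery method builds on the intuition that independently trained models learn similar properties. A minimal sketch of that idea, assuming we already have activation matrices (tokens × neurons) from two models: rank each neuron of one model by its maximum absolute Pearson correlation with any neuron of the other. The function and variable names here are illustrative, not the paper's actual code.

```python
import numpy as np

def rank_neurons_by_max_correlation(acts_a, acts_b):
    """Rank neurons of model A by their maximum absolute Pearson
    correlation with any neuron of model B.

    acts_a: (tokens, neurons_a) activations of model A
    acts_b: (tokens, neurons_b) activations of model B
    Returns (ranking of A's neuron indices, per-neuron scores).
    """
    # Standardize each neuron's activations over the token axis.
    a = (acts_a - acts_a.mean(axis=0)) / (acts_a.std(axis=0) + 1e-8)
    b = (acts_b - acts_b.mean(axis=0)) / (acts_b.std(axis=0) + 1e-8)
    # Pearson correlations between every pair of neurons: (neurons_a, neurons_b).
    corr = a.T @ b / a.shape[0]
    # A neuron is "important" if some neuron in the other model tracks it closely.
    scores = np.abs(corr).max(axis=1)
    return np.argsort(-scores), scores

# Synthetic check: neuron 0 of model A reappears (with noise) as neuron 3 of model B,
# so it should be ranked first.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(200, 8))
acts_b = rng.normal(size=(200, 6))
acts_b[:, 3] = acts_a[:, 0] + 0.01 * rng.normal(size=200)
order, scores = rank_neurons_by_max_correlation(acts_a, acts_b)
```

Because no labels or external supervision enter this computation, the ranking is fully unsupervised; the paper then probes the top-ranked neurons for linguistic phenomena and intervenes on their activations.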

Relevant initiatives

Related knowledge about this paper:
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives

