Authors: Shuoyang Ding, Philipp Koehn
Where published: WS 2019
ArXiv: 1904.03409
Document: PDF / DOI
Artifact development version: GitHub
Abstract URL: http://arxiv.org/abs/1904.03409v1
Stack Long Short-Term Memory (StackLSTM) is useful for various applications
such as parsing and string-to-tree neural machine translation, but it is
notoriously difficult to parallelize for GPU training because its computations
depend on discrete operations. In this paper, we tackle this problem by
exploiting the state access patterns of StackLSTM to homogenize computations
across the different discrete operations. Our parsing experiments show that the
method scales almost linearly with increasing batch size, and that our
parallelized PyTorch implementation trains significantly faster than the DyNet
C++ implementation.
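
The abstract only sketches how the discrete stack operations are homogenized. Below is a minimal, illustrative PyTorch sketch (not the authors' released code) of one way to read it: push, pop, and hold are encoded as per-example pointer increments (+1, -1, 0) over a preallocated stack buffer, so every batch element runs the same LSTMCell computation at every step and only the indexing differs. All names here (BatchedStackLSTM, PUSH, POP, HOLD, max_depth) are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical op encoding: each stack operation is a pointer increment.
PUSH, POP, HOLD = 1, -1, 0


class BatchedStackLSTM(nn.Module):
    """Illustrative batched StackLSTM: all ops share one LSTMCell computation."""

    def __init__(self, input_size, hidden_size, max_depth=64):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.hidden_size = hidden_size
        self.max_depth = max_depth

    def forward(self, inputs, ops):
        # inputs: (seq_len, batch, input_size); ops: (seq_len, batch) in {+1, -1, 0}
        seq_len, batch, _ = inputs.shape
        h_buf = inputs.new_zeros(batch, self.max_depth, self.hidden_size)
        c_buf = inputs.new_zeros(batch, self.max_depth, self.hidden_size)
        ptr = torch.zeros(batch, dtype=torch.long, device=inputs.device)
        rows = torch.arange(batch, device=inputs.device)
        outputs = []
        for t in range(seq_len):
            # Identical computation for every op: run the cell on the current top.
            h_top, c_top = h_buf[rows, ptr], c_buf[rows, ptr]
            h_new, c_new = self.cell(inputs[t], (h_top, c_top))
            # Homogenized state access: the pointer moves by +1, -1, or 0.
            new_ptr = (ptr + ops[t]).clamp(min=0, max=self.max_depth - 1)
            # Only pushes write a new state; pops/holds just move or keep the pointer.
            push = ops[t] == PUSH
            h_buf, c_buf = h_buf.clone(), c_buf.clone()  # avoid in-place autograd issues
            h_buf[rows[push], new_ptr[push]] = h_new[push]
            c_buf[rows[push], new_ptr[push]] = c_new[push]
            ptr = new_ptr
            outputs.append(h_buf[rows, ptr])
        return torch.stack(outputs)  # (seq_len, batch, hidden_size)


# Example usage on random data: batch of 8, sequence length 5.
model = BatchedStackLSTM(input_size=16, hidden_size=32)
x = torch.randn(5, 8, 16)
ops = torch.randint(-1, 2, (5, 8))  # values in {POP, HOLD, PUSH}
out = model(x, ops)                 # shape (5, 8, 32)
```

Because the per-step computation is identical across the batch and only the pointer arithmetic differs by operation, the work batches cleanly on a GPU, which is consistent with the near-linear scaling with batch size reported in the abstract.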