A lack of repeatability and a lack of generalisability are two significant threats to
continuing scientific development in Natural Language Processing. Language
models and learning methods are so complex that scientific conference papers no
longer provide enough space for the technical depth required for replication or
reproduction. Taking Target Dependent Sentiment Analysis as a case study, we
show that recent work in the field has not consistently released code or
described the settings of its learning methods in enough detail, and that it
lacks comparability and generalisability in its choice of training, test, and
validation data. To
investigate generalisability and to enable state-of-the-art comparative
evaluations, we carry out the first reproduction studies of three groups of
complementary methods and perform the first large-scale mass evaluation on six
different English datasets. Reflecting on our experiences, we recommend that
future replication or reproduction experiments should always evaluate on a
variety of datasets, and should document and release their methods and code,
in order to minimise the barriers to both repeatability and
generalisability. We have released our code and a model zoo on GitHub, with
full documentation and Jupyter Notebooks to aid understanding, and we
recommend that others do the same for their papers at submission time through
an anonymised GitHub account.