Authors: John D. Kelleher, Simon Dobnik
ArXiv: 1807.08133
Abstract URL: http://arxiv.org/abs/1807.08133v1
This paper examines to what degree current deep learning architectures for
image caption generation capture spatial language. On the basis of an
evaluation of examples of generated captions from the literature, we argue
that these systems capture what objects are in the image data but not where
these objects are located: the captions they generate are the output of a
language model conditioned on the output of an object detector that cannot
capture fine-grained location information. Although language models provide
useful knowledge for image captioning, we argue that deep learning image
captioning architectures should also model geometric relations between objects.
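The abstract contrasts two designs: a language model conditioned only on pooled object-detector features (the pattern it critiques) and one that additionally encodes pairwise geometric relations between detected boxes (the kind of spatial signal it argues for). The sketch below is not the authors' code; it is a minimal, hedged illustration of that contrast, and all module names, feature dimensions, and the `use_geometry` flag are illustrative assumptions.

```python
# Minimal sketch of a detector-conditioned caption decoder, with an optional
# pairwise box-geometry term illustrating the explicit spatial information the
# paper argues current architectures lack. Names and dimensions are assumptions.
import torch
import torch.nn as nn


class DetectorConditionedCaptioner(nn.Module):
    def __init__(self, vocab_size, obj_feat_dim=2048, box_dim=4,
                 embed_dim=512, hidden_dim=512, use_geometry=False):
        super().__init__()
        self.use_geometry = use_geometry
        # Appearance features from an off-the-shelf object detector ("what").
        self.obj_proj = nn.Linear(obj_feat_dim, hidden_dim)
        # Optional encoding of pairwise box geometry ("where").
        self.geom_proj = nn.Linear(box_dim * 2, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim + hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, obj_feats, boxes, captions):
        # obj_feats: (B, N, obj_feat_dim); boxes: (B, N, 4); captions: (B, T)
        ctx = self.obj_proj(obj_feats).mean(dim=1)            # pooled "what" signal
        if self.use_geometry:
            b, n, _ = boxes.shape
            pairs = torch.cat(
                [boxes.unsqueeze(2).expand(b, n, n, 4),
                 boxes.unsqueeze(1).expand(b, n, n, 4)], dim=-1)
            ctx = ctx + self.geom_proj(pairs).mean(dim=(1, 2))  # pairwise "where" signal
        tok = self.embed(captions)                              # (B, T, embed_dim)
        ctx_seq = ctx.unsqueeze(1).expand(-1, tok.size(1), -1)
        hidden, _ = self.lstm(torch.cat([tok, ctx_seq], dim=-1))
        return self.out(hidden)                                 # (B, T, vocab_size)


if __name__ == "__main__":
    model = DetectorConditionedCaptioner(vocab_size=1000, use_geometry=True)
    logits = model(torch.randn(2, 5, 2048), torch.rand(2, 5, 4),
                   torch.randint(0, 1000, (2, 7)))
    print(logits.shape)  # torch.Size([2, 7, 1000])
```

With `use_geometry=False` the decoder sees only pooled object identities, matching the abstract's claim that such captions reflect what is in the image but not where; setting it to `True` shows one simple way geometric relations could enter the conditioning signal.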