Authors: Simon Šuster, Gertjan van Noord, Ivan Titov
ArXiv: 1508.07709
Abstract URL: http://arxiv.org/abs/1508.07709v2
Word representations induced from models with discrete latent variables
(e.g., HMMs) have been shown to be beneficial in many NLP applications. In this
work, we exploit labeled syntactic dependency trees and formalize the induction
problem as unsupervised learning of tree-structured hidden Markov models.
Syntactic functions are used as additional observed variables in the model,
influencing both transition and emission components. Such syntactic information
can potentially lead to capturing more fine-grained and functional distinctions
between words, which, in turn, may be desirable in many NLP applications. We
evaluate the word representations on two tasks -- named entity recognition and
semantic frame identification. We observe improvements from exploiting
syntactic function information in both cases, with results rivaling those of
state-of-the-art representation learning methods. Additionally, we revisit the
relationship between sequential and unlabeled-tree models and find that the
advantage of the latter is not self-evident.
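The abstract describes conditioning both the transition and emission components of a tree-structured HMM on observed syntactic functions (dependency labels). Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: hidden word classes sit on the nodes of a labeled dependency tree, the parent-to-child class transition and each word emission are both conditioned on the label of the incoming edge, and hidden classes are summed out with an upward (inside) pass. The class name, parameterization, and toy data are illustrative assumptions.

```python
import numpy as np


class LabeledTreeHMM:
    """Sketch of a tree-structured HMM whose transitions and emissions are
    conditioned on dependency labels (syntactic functions)."""

    def __init__(self, n_classes, vocab, labels, rng=None):
        rng = rng or np.random.default_rng(0)
        self.n_classes = n_classes
        self.vocab = {w: i for i, w in enumerate(vocab)}
        self.labels = {l: i for i, l in enumerate(labels)}
        # prior over the root node's hidden class
        self.pi = self._normalize(rng.random(n_classes))
        # transition: P(child class | parent class, dependency label)
        self.trans = self._normalize(
            rng.random((len(labels), n_classes, n_classes)), axis=-1)
        # emission: P(word | class, dependency label)
        self.emit = self._normalize(
            rng.random((len(labels), n_classes, len(vocab))), axis=-1)

    @staticmethod
    def _normalize(a, axis=None):
        return a / a.sum(axis=axis, keepdims=axis is not None)

    def log_prob(self, tree):
        """Marginal log-probability of a labeled dependency tree.

        `tree` maps each node id to (word, dep_label, parent_id); the root
        has parent_id None. Hidden classes are marginalized with an inside
        pass over the tree.
        """
        children, root = {}, None
        for node, (_, _, parent) in tree.items():
            if parent is None:
                root = node
            else:
                children.setdefault(parent, []).append(node)

        def inside(node):
            # inside(node)[c] = P(words below node | class(node) = c)
            word, label, _ = tree[node]
            l, w = self.labels[label], self.vocab[word]
            score = self.emit[l, :, w].copy()
            for child in children.get(node, []):
                child_label = self.labels[tree[child][1]]
                # sum out the child's class, conditioned on the edge label
                score *= self.trans[child_label] @ inside(child)
            return score

        return float(np.log(self.pi @ inside(root)))


if __name__ == "__main__":
    # toy tree: "dogs bark", with "dogs" attached to "bark" via nsubj
    model = LabeledTreeHMM(n_classes=3,
                           vocab=["dogs", "bark"],
                           labels=["root", "nsubj"])
    tree = {0: ("bark", "root", None), 1: ("dogs", "nsubj", 0)}
    print(model.log_prob(tree))
```

In such a setup, dropping the dependency-label index from `trans` and `emit` recovers an unlabeled tree-structured HMM, which is the kind of sequential/unlabeled-tree comparison the abstract alludes to.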