Authors: Yuning Mao, Xiang Ren, Jiaming Shen, Xiaotao Gu, Jiawei Han
Where published: ACL 2018
ArXiv: 1805.04044
Abstract URL: http://arxiv.org/abs/1805.04044v1
We present a novel end-to-end reinforcement learning approach to automatic
taxonomy induction from a set of terms. While prior methods treat the problem
as a two-phase task (i.e., detecting hypernymy pairs followed by organizing
these pairs into a tree-structured hierarchy), we argue that such two-phase
methods may suffer from error propagation, and cannot effectively optimize
metrics that capture the holistic structure of a taxonomy. In our approach, the
representations of term pairs are learned using multiple sources of information
and used to determine *which* term to select and *where* to place
it on the taxonomy via a policy network. All components are trained in an
end-to-end manner with cumulative rewards, measured by a holistic tree metric
over the training taxonomies. Experiments on two public datasets of different
domains show that our approach outperforms prior state-of-the-art taxonomy
induction methods by up to 19.6% on ancestor F1.
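The abstract describes a policy that repeatedly scores (term, position) pairs and attaches the best-scoring term to the growing taxonomy. The following is a minimal greedy sketch of that selection loop, not the authors' trained policy network: `toy_score` is a hypothetical stand-in for the learned pair representations, and a real implementation would sample actions and train the scorer with cumulative tree-metric rewards rather than act greedily.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of action scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def induce_taxonomy(terms, root, score_fn):
    """Greedily grow a taxonomy: at each step, score every
    (candidate term, attachment position) action and apply the
    most probable one. Returns a child -> parent mapping."""
    parent = {root: None}          # root has no parent
    remaining = set(terms) - {root}
    while remaining:
        # Every unattached term can attach under any node already in the tree.
        actions = [(t, p) for t in sorted(remaining) for p in sorted(parent)]
        probs = softmax([score_fn(t, p) for t, p in actions])
        term, pos = max(zip(actions, probs), key=lambda x: x[1])[0]
        parent[term] = pos
        remaining.discard(term)
    return parent

# Hypothetical scorer standing in for the learned policy network:
def toy_score(term, pos):
    gold = {"dog": "mammal", "cat": "mammal", "mammal": "animal"}
    return 5.0 if gold.get(term) == pos else 0.0

tree = induce_taxonomy({"animal", "mammal", "dog", "cat"}, "animal", toy_score)
# tree maps each term to its chosen parent, e.g. tree["mammal"] == "animal"
```

Note that because the sketch is greedy and untrained, an early wrong attachment cannot be undone; the paper's end-to-end reward signal exists precisely to optimize such whole-tree outcomes.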