
Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation

lib:4b42a026b79d386a (v1.0.0)

Authors: Tiancheng Zhao, Kyusong Lee, Maxine Eskenazi
Where published: ACL 2018
ArXiv: 1804.08069
Document: PDF, DOI
Artifact development version: GitHub
Abstract URL: http://arxiv.org/abs/1804.08069v1


The encoder-decoder dialog model is one of the most prominent methods for building dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as traditional systems do, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoder-decoder dialog model for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST, that improve VAEs and can discover interpretable semantics via either auto-encoding or context prediction. Our methods have been validated on real-world dialog datasets, where they discover semantic representations and enhance encoder-decoder models with interpretable generation.
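Models with discrete latent variables like those the abstract describes are typically trained with a continuous relaxation so that gradients can flow through the sampling step; the Gumbel-Softmax trick is the standard choice for this. The sketch below is an illustrative NumPy implementation of Gumbel-Softmax sampling only — it is not the authors' code, and the function name and shapes are assumptions for demonstration.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature=1.0, rng=None):
    """Draw a relaxed, near-one-hot sample from a categorical
    distribution parameterized by `logits` (Gumbel-Softmax trick).

    Illustrative sketch only; `logits` has shape (..., K) for K
    discrete categories. Lower temperatures give samples closer
    to one-hot vectors.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    u = rng.uniform(1e-20, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    y = (logits + gumbel) / temperature
    # Numerically stable softmax over the last axis
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical latent layout: 3 latent variables, each over K = 4 values
logits = np.zeros((3, 4))
sample = gumbel_softmax_sample(logits, temperature=0.5)
print(sample.shape)                     # (3, 4)
print(np.allclose(sample.sum(-1), 1.0))  # each row sums to 1
```

In a full discrete VAE, an encoder network would produce `logits` from the input sentence, and the decoder would condition on the relaxed sample; at low temperature the sample approaches a one-hot code, which is what makes the learned representation discrete and inspectable.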

Relevant initiatives

- Related knowledge about this paper
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives
