
Making Neural Machine Reading Comprehension Faster

lib:ac3ae84e998b6757 (v1.0.0)

Authors: Debajyoti Chatterjee
arXiv: 1904.00796
Abstract URL: http://arxiv.org/abs/1904.00796v1


This study addresses the Machine Reading Comprehension problem, in which questions must be answered given a context passage. The goal is to develop a computationally cheaper model with improved inference time. BERT, the state of the art in many natural language understanding tasks, is used as the teacher, and knowledge distillation is applied to train two smaller models. The resulting models are compared against other models developed with the same goal.
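The abstract does not specify the distillation objective. As a minimal sketch of the general technique it names, the snippet below implements a standard soft-target knowledge distillation loss (in the style of Hinton et al., 2015) in PyTorch; the function name, the temperature, and the alpha weighting are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft-target loss (teacher) with a hard-target loss (gold labels)."""
    # Soft targets: match the student's softened distribution to the
    # teacher's, scaling by T^2 so gradients keep a comparable magnitude.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the gold answers
    # (for extractive MRC these would be answer-span start/end positions).
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

For MRC with BERT-style models, this loss would typically be computed separately for the start-position and end-position logits and summed; that detail, too, is an assumption about a common setup rather than the paper's reported method.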

Relevant initiatives

- Related knowledge about this paper
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives
