Advances in Online Audio-Visual Meeting Transcription

Authors: Takuya Yoshioka, Igor Abramovski, Cem Aksoylar, Zhuo Chen, Moshe David, Dimitrios Dimitriadis, Yifan Gong, Ilya Gurvich, Xuedong Huang, Yan Huang, Aviv Hurvitz, Li Jiang, Sharon Koubi, Eyal Krupka, Ido Leichter, Changliang Liu, Partha Parthasarathy, Alon Vinnikov, Lingfeng Wu, Xiong Xiao, Wayne Xiong, Huaming Wang, Zhenghao Wang, Jun Zhang, Yong Zhao, Tianyan Zhou
ArXiv: 1912.04979
Abstract URL: https://arxiv.org/abs/1912.04979v1


This paper describes a system that generates speaker-annotated transcripts of meetings by using a microphone array and a 360-degree camera. The hallmark of the system is its ability to handle overlapped speech, which has been an unsolved problem in realistic settings for over a decade. We show that this problem can be addressed by using a continuous speech separation approach. In addition, we describe an online audio-visual speaker diarization method that leverages face tracking and identification, sound source localization, speaker identification, and, if available, prior speaker information for robustness to various real-world challenges. All components are integrated in a meeting transcription framework called SRD, which stands for "separate, recognize, and diarize". Experimental results using recordings of natural meetings involving up to 11 attendees are reported. The continuous speech separation reduces the word error rate (WER) by 16.1% compared with a highly tuned beamformer. When a complete list of meeting attendees is available, the discrepancy between WER and speaker-attributed WER is only 1.0%, indicating accurate word-to-speaker association. This increases marginally to 1.6% when 50% of the attendees are unknown to the system.
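To make the "separate, recognize, and diarize" (SRD) flow described in the abstract concrete, here is a minimal Python sketch of such a pipeline. Everything below is illustrative: the function names, the fixed two-stream separation, and the stubbed recognition and diarization outputs are assumptions for demonstration, not the authors' implementation.

```python
# Hypothetical sketch of an SRD-style ("separate, recognize, diarize")
# meeting-transcription pipeline. All components are stubs that show the
# data flow only; they are not the system described in the paper.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Utterance:
    """One speaker-attributed segment of the final transcript."""
    text: str
    speaker: str
    start: float
    end: float


def separate(mixed_chunk: str) -> List[str]:
    """Continuous speech separation: split a multichannel audio chunk
    into a fixed number of overlap-free streams (stubbed as two copies)."""
    return [mixed_chunk, mixed_chunk]


def recognize(stream: str) -> List[tuple]:
    """Speech recognition on one separated stream (stubbed output)."""
    return [("hello everyone", 0.0, 1.2)]


def diarize(words: List[tuple],
            known_speakers: Optional[List[str]] = None) -> List[Utterance]:
    """Audio-visual diarization: attribute recognized words to speakers.
    A real system would fuse face tracking, sound source localization,
    and speaker identification; here we just pick the first known speaker."""
    speaker = (known_speakers or ["unknown_spk"])[0]
    return [Utterance(t, speaker, s, e) for (t, s, e) in words]


def srd_pipeline(chunks: List[str],
                 known_speakers: Optional[List[str]] = None) -> List[Utterance]:
    """Run separate -> recognize -> diarize over a stream of audio chunks,
    accumulating a speaker-annotated transcript."""
    transcript: List[Utterance] = []
    for chunk in chunks:
        for stream in separate(chunk):
            words = recognize(stream)
            transcript.extend(diarize(words, known_speakers=known_speakers))
    return transcript
```

The key design point the abstract emphasizes is that separation runs continuously ahead of recognition, so overlapped speech reaches the recognizer as parallel single-speaker streams rather than a mixture.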
