Are words easier to learn from infant- than adult-directed speech? A quantitative corpus-based investigation

lib:0b3d8b1cfa409889 (v1.0.0)

Authors: Adriana Guevara-Rukoz, Alejandrina Cristia, Bogdan Ludusan, Roland Thiollière, Andrew Martin, Reiko Mazuka, Emmanuel Dupoux
ArXiv: 1712.08793
Abstract URL: http://arxiv.org/abs/1712.08793v1

We investigate whether infant-directed speech (IDS) could facilitate word form learning when compared to adult-directed speech (ADS). To study this, we examine the distribution of word forms at two levels, acoustic and phonological, using a large database of spontaneous speech in Japanese. At the acoustic level we show that, as has been documented before for phonemes, the realizations of words are more variable and less discriminable in IDS than in ADS. At the phonological level, we find an effect in the opposite direction: the IDS lexicon contains more distinctive words (such as onomatopoeias) than the ADS counterpart. Combining the acoustic and phonological metrics into a global discriminability score reveals that the greater separation of lexical categories in the phonological space does not compensate for the opposite effect observed at the acoustic level. As a result, IDS word forms are still globally less discriminable than ADS word forms, although the effect is numerically small. We discuss the implications of these findings for the view that the functional role of IDS is to improve language learnability.
