
Mouth-Clicks used by Blind Expert Human Echolocators – Signal Description and Model Based Signal Synthesis

Thaler, Lore; Reich, Galen M.; Zhang, Xinyu; Wang, Dinghe; Smith, Graeme E.; Tao, Zeng; Abdullah, Raja Syamsul Azmir Bin. Raja; Cherniakov, Mikhail; Baker, Christopher J.; Kish, Daniel; Antoniou, Michail






Abstract

Echolocation is the ability to use sound echoes to infer spatial information about the environment. Some blind people have developed extraordinary proficiency in echolocation using mouth-clicks. The first step of human biosonar is the transmission (mouth click) and subsequent reception of the resultant sound through the ear. Existing head-related transfer function (HRTF) databases provide descriptions of the reception of the resultant sound. For the current report, we collected a large database of click emissions from three blind people expertly trained in echolocation, which allowed us to perform unprecedented analyses. Specifically, the current report provides the first ever description of the spatial distribution (i.e. beam pattern) of human expert echolocation transmissions, as well as spectro-temporal descriptions at a level of detail not available before. Our data show that transmission levels are fairly constant within a 60° cone emanating from the mouth, but levels drop gradually at larger angles, more so than for speech. In terms of spectro-temporal features, our data show that emissions are consistently very brief (~3 ms duration) with peak frequencies of 2-4 kHz, but with energy also at 10 kHz. This differs from previous reports of durations of 3-15 ms and peak frequencies of 2-8 kHz, which were based on less detailed measurements. Based on our measurements we propose to model transmissions as a sum of monotones modulated by a decaying exponential, with angular attenuation by a modified cardioid. We provide model parameters for each echolocator. These results are a step towards developing computational models of human biosonar. For example, in bats, spatial and spectro-temporal features of emissions have been used to derive and test model-based hypotheses about behaviour. The data we present here suggest similar research opportunities within the context of human echolocation. Relatedly, the data are a basis for developing synthetic models of human echolocation that could be virtual (i.e. simulated) or real (i.e. loudspeakers and microphones), and which will help in understanding the link between physical principles and human behaviour.
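
The model described in the abstract (a sum of monotones modulated by a decaying exponential, with modified-cardioid angular attenuation) can be sketched in code. The snippet below is a minimal illustration, not the paper's parameterization: the component frequencies, amplitudes, decay constant, and cardioid mixing parameter are placeholder values chosen only to match the qualitative description above (~3 ms clicks, peaks at 2-4 kHz, energy near 10 kHz), and the cardioid form used is a generic one rather than the specific modification and per-echolocator parameters reported in the paper.

import numpy as np

FS = 96_000  # sampling rate (Hz); high enough to capture energy near 10 kHz

def synth_click(freqs_hz=(2_500, 3_500, 10_000), amps=(1.0, 0.8, 0.3),
                decay_s=0.0008, dur_s=0.003, fs=FS):
    # Sum of monotones (pure tones) sharing one decaying-exponential envelope.
    t = np.arange(int(dur_s * fs)) / fs
    envelope = np.exp(-t / decay_s)
    tones = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs_hz, amps))
    return envelope * tones

def cardioid_gain(theta_rad, k=0.7):
    # Generic cardioid-style directivity: unity on-axis, attenuated off-axis.
    # k is a placeholder mixing parameter, not a value fitted in the paper.
    return (1.0 - k) + k * np.cos(theta_rad)

if __name__ == "__main__":
    click = synth_click()
    for deg in (0, 30, 60, 90):
        level_db = 20 * np.log10(abs(cardioid_gain(np.deg2rad(deg))))
        print(f"{deg:3d} deg off-axis: {level_db:+5.1f} dB re on-axis")

Running the script prints the relative level of the synthesized click at a few off-axis angles, illustrating the roughly constant level within the forward cone and the roll-off beyond it.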

Citation

Thaler, L., Reich, G. M., Zhang, X., Wang, D., Smith, G. E., Tao, Z., …Antoniou, M. (2017). Mouth-Clicks used by Blind Expert Human Echolocators – Signal Description and Model Based Signal Synthesis. PLoS Computational Biology, 13(8), Article e1005670. https://doi.org/10.1371/journal.pcbi.1005670

Journal Article Type: Article
Acceptance Date: Jul 5, 2017
Online Publication Date: Aug 31, 2017
Publication Date: Aug 31, 2017
Deposit Date: Dec 1, 2016
Publicly Available Date: Mar 29, 2024
Journal: PLoS Computational Biology
Print ISSN: 1553-734X
Publisher: Public Library of Science
Peer Reviewed: Peer Reviewed
Volume: 13
Issue: 8
Article Number: e1005670
DOI: https://doi.org/10.1371/journal.pcbi.1005670

Files

Published Journal Article (PDF, 7.8 MB)

Publisher Licence URL: http://creativecommons.org/licenses/by/4.0/

Copyright Statement
© 2017 Thaler et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.




