Summary
Sensory-substitution software that converts a video stream into an audio stream in real time (video-to-audio conversion)
The Vibe: A Versatile Vision-to-Audition Sensory Substitution Device
Sylvain Hanneton, Malika Auvray, and Barthélemy Durette
Applied Bionics and Biomechanics
Volume 7 (2010), Issue 4, Pages 269-276
http://dx.doi.org/10.1080/11762322.2010.512734
We describe a sensory substitution scheme that converts a video stream into an audio stream in real time. It was initially developed as a research tool for studying the human ability to learn new ways of perceiving the world: the Vibe can give us the ability to learn a kind of ‘vision’ by audition. It converts a video stream into a continuous stereophonic audio signal that conveys information coded from the video stream. The conversion from the video stream to the audio stream uses a kind of retina with receptive fields. Each receptive field controls a sound source, and the user listens to a sound that is a mixture of all these sound sources. Compared to other existing vision-to-audition sensory substitution devices, the Vibe is highly versatile, in particular because it uses a set of configurable units working in parallel. In order to demonstrate the validity and interest of this method of vision-to-audition conversion, we give the results of an experiment involving a pointing task to targets memorised through visual perception or through their auditory conversion by the Vibe. This article is also an opportunity to precisely draw the general specifications of this scheme in order to prepare its implementation on autonomous/mobile hardware.
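The conversion principle described above — a retina of receptive fields, each driving one sound source, mixed into a stereo signal — can be sketched as follows. This is a minimal illustration, not the Vibe's actual implementation: it assumes each receptive field averages the pixel intensities it covers, drives a sine oscillator of fixed frequency, and is panned by its horizontal position; all names and parameters are illustrative.

```python
import math

def frame_to_stereo(frame, fields, sample_rate=8000, n_samples=800):
    """Convert one grey-level video frame into a short stereo audio buffer.

    frame  : 2-D list of pixel intensities in [0, 1].
    fields : list of receptive fields, each a dict with
             'pixels' (list of (row, col) indices it covers),
             'freq'   (oscillator frequency in Hz),
             'pan'    (0.0 = full left .. 1.0 = full right).
    Returns (left, right) lists of samples in [-1, 1].
    """
    left = [0.0] * n_samples
    right = [0.0] * n_samples
    for f in fields:
        # Each receptive field averages the intensity of its pixels...
        level = sum(frame[r][c] for r, c in f['pixels']) / len(f['pixels'])
        # ...and drives one sine source: intensity controls amplitude,
        # horizontal position controls the left/right balance.
        for n in range(n_samples):
            s = level * math.sin(2 * math.pi * f['freq'] * n / sample_rate)
            left[n] += s * (1.0 - f['pan'])
            right[n] += s * f['pan']
    # Normalise the mixture of all sources so it stays within [-1, 1].
    peak = max(1.0, max(abs(v) for v in left + right))
    return [v / peak for v in left], [v / peak for v in right]
```

For example, a two-field "retina" splitting the image into a left and a right half would map brightness on each side to the loudness of one tone in the corresponding ear; the real scheme uses many such configurable units in parallel.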
Citations to this Article (7 citations, as of 2015)
The following is the list of published articles that have cited the current article.
Ophelia Deroy, and Malika Auvray, “Reading the world through the skin and ears: A new perspective on sensory substitution,” Frontiers in Psychology, vol. 3, 2012.
Alastair Haigh, David J. Brown, Peter Meijer, and Michael J. Proulx, “How well do you see what you hear? The acuity of visual-to-auditory sensory substitution,” Frontiers in Psychology, vol. 4, 2013.
Kai Kaspar, Sabine König, Jessika Schwandt, and Peter König, “The experience of new sensorimotor contingencies by sensory augmentation,” Consciousness and Cognition, vol. 28, pp. 47–63, 2014.
Philip M. Lewis, Helen M. Ackland, Arthur J. Lowery, and Jeffrey V. Rosenfeld, “Restoration of vision in blind individuals using bionic devices: A review with a focus on cortical visual prostheses,” Brain Research, 2014.
Jess Hartcher-O'Brien, and Malika Auvray, “The Process of Distal Attribution Illuminated Through Studies of Sensory Substitution,” Multisensory Research, vol. 27, no. 5-6, pp. 421–441, 2014.
Shachar Maidenbaum, Shlomi Hanassy, Sami Abboud, Galit Buchs, Daniel-Robert Chebat, Shelly Levy-Tzedek, and Amir Amedi, “The "EyeCane", a new electronic travel aid for the blind: Technology, behavior & swift learning,” Restorative Neurology and Neuroscience, vol. 32, no. 6, pp. 813–824, 2014.
Chloe Stoll, Richard Palluel-Germain, Vincent Fristot, Denis Pellerin, David Alleysson, and Christian Graff, “Navigating from a Depth Image Converted into Sound,” Applied Bionics and Biomechanics, pp. 1–9, 2015.
A new version is coming...
SourceForge hosts a very old version of the source code of this software, available only for Windows and implementing only a raw, limited application of the ideas developed in the paper. But this project is not dead...
Wishes:
And maybe we have found a solution:
This last point lets you use your preferred sound-synthesis application to generate the sounds: many audio applications and hardware synthesizers can handle OSC messages.
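To illustrate what sending such control data looks like, here is a minimal sketch that builds a standard OSC message in pure Python. The address `/source/1/amplitude` and its float argument are hypothetical examples, not the Vibe's actual OSC namespace; a real setup would send the resulting bytes over UDP to the synthesis application.

```python
import struct

def _osc_string(s):
    """Encode an OSC string: ASCII bytes, NUL-padded to a 4-byte boundary."""
    b = s.encode('ascii') + b'\x00'
    return b + b'\x00' * (-len(b) % 4)

def osc_message(address, *args):
    """Build a binary OSC message carrying float32 arguments.

    An OSC message is an address pattern, a type-tag string starting
    with ',', then the arguments as big-endian 32-bit values.
    """
    tags = ',' + 'f' * len(args)
    payload = b''.join(struct.pack('>f', a) for a in args)
    return _osc_string(address) + _osc_string(tags) + payload

# Hypothetical example: set the amplitude of one sound source to 0.5.
msg = osc_message('/source/1/amplitude', 0.5)
# It could then be sent to a synthesizer listening on UDP, e.g.:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ('127.0.0.1', 57120))
```

Because the wire format is this simple, any OSC-capable receiver (software synthesizers, audio environments, hardware) can be driven without a dedicated library.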