Artificial intelligence has learned to hear what people say “on the inside”

US researchers have created a system that recognizes silent speech and transforms it into audible speech. The artificial-intelligence algorithm can recognize what people are saying “on the inside”.

When people talk “on the inside,” they make no sound, but they still tense the muscles of the vocal tract, although to a lesser extent than in normal speech. This process is known as subvocalization.

“We can read subvocalization and turn it into audible speech; we built a neural-network algorithm for this purpose. The system was trained on three types of data at once, which is why it shows good results,” say the authors of the technology, researchers at the University of California, Berkeley.

As the American scientists note, the algorithm reads data from electromyograms and converts the signals into sounds. The AI system combines a recurrent neural network with long short-term memory (LSTM) and WaveNet, a neural network responsible for voice synthesis. To train it, the scientists used a dataset containing several hours of recordings of audible and silent speech. Testing has shown that the technology has good prospects for commercial application, for example in devices that let users transmit speech without making a sound.
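
For illustration, here is a minimal sketch of how such a pipeline might be wired up, assuming PyTorch: an LSTM maps electromyogram (EMG) feature frames to acoustic frames, which a WaveNet-style vocoder would then render as audio. The layer sizes, feature dimensions, and names below are illustrative assumptions, not the Berkeley team's actual code.

```python
import torch
import torch.nn as nn

class EMGToSpeech(nn.Module):
    """Maps frames of EMG features to acoustic feature frames
    (e.g. mel-spectrogram bins). A separate WaveNet-style vocoder
    would turn those frames into an audible waveform."""

    def __init__(self, emg_dim=8, hidden_dim=256, mel_dim=80):
        super().__init__()
        # Recurrent network with long short-term memory, as described
        # in the article; bidirectional so each frame can use context
        # from the whole utterance. Hyperparameters are placeholders.
        self.lstm = nn.LSTM(emg_dim, hidden_dim, num_layers=3,
                            batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, mel_dim)

    def forward(self, emg):
        # emg: (batch, time, emg_dim) -> (batch, time, mel_dim)
        out, _ = self.lstm(emg)
        return self.proj(out)

# One hypothetical training step on paired EMG / spectrogram frames
# (the article mentions several hours of audible and silent speech).
model = EMGToSpeech()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

emg_batch = torch.randn(4, 200, 8)    # placeholder EMG features
mel_batch = torch.randn(4, 200, 80)   # placeholder acoustic targets

optimizer.zero_grad()
loss = loss_fn(model(emg_batch), mel_batch)
loss.backward()
optimizer.step()
```

In a full system, the predicted spectrogram frames would then be passed to a pretrained neural vocoder such as WaveNet to synthesize the final waveform.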