
Mind-reading science translates your innermost thoughts into speech

Newstalk

12.55 31 Oct 2014


Whenever you move the dial on the radio or read an article on a website, you hear the words forming in your mind. Your brain instantaneously voices the vowels and consonants into recognisable speech. And now scientists believe they’ve figured out how to translate those thoughts from an internal monologue into computer-generated speech – helping those who cannot speak to find their voice.

It’s all about the voices in our heads, or at least that’s what cognitive neuroscientist Brian Pasley of UC Berkeley thinks. Speaking to New Scientist, Pasley says unlocking the mysteries of how the brain reacts to speech is the key to digitally translating brainwaves into words.

"We're trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak," Pasley says.


Talking to somebody causes sensory neurons in our ears to pass information on to the parts of our brain that recognise sounds and unscramble noise into speech. Pasley’s team reasoned that hearing someone speak and hearing that voice in your head when thinking or silently reading might light up some of the same neurons in the brain.

The team developed an algorithm to eavesdrop on our innermost thoughts, analysing how brains reacted to hearing Abraham Lincoln’s Gettysburg Address, JFK’s inauguration speech, and Humpty Dumpty’s battle with gravity. Participants read the texts aloud, and then again silently.

The scientists then mapped the brain’s reaction to each sound, both aloud and internal, which showed striking similarities. Using a decoder, they then managed to recreate the sounds based on the silent-reading recordings, and all the king’s horses and all the king’s men could almost put those sounds back together again.
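The core idea of that decoding step is a model trained while people listen to real speech, then applied to activity recorded during silent reading. A minimal sketch of how such a decoder might work – using simulated data and a plain ridge-regression mapping from neural channels to an audio spectrogram; all names and numbers here are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data (rows are time points):
# X_heard   - neural activity recorded while participants read aloud
# S_heard   - the audio spectrogram of the speech they produced/heard
n_times, n_electrodes, n_freq_bins = 500, 32, 16
W_true = rng.normal(size=(n_electrodes, n_freq_bins))
X_heard = rng.normal(size=(n_times, n_electrodes))
S_heard = X_heard @ W_true + 0.1 * rng.normal(size=(n_times, n_freq_bins))

# Fit a ridge-regularised linear decoder so that S ≈ X @ W.
lam = 1.0
W = np.linalg.solve(
    X_heard.T @ X_heard + lam * np.eye(n_electrodes),
    X_heard.T @ S_heard,
)

# Apply the same decoder to activity recorded during silent reading;
# the output is a predicted spectrogram (time x frequency) that could
# then be turned back into audio.
X_silent = rng.normal(size=(100, n_electrodes))
S_reconstructed = X_silent @ W

print(S_reconstructed.shape)  # (100, 16)
```

The reason training on heard speech can transfer to imagined speech is exactly the overlap the researchers describe: if the same neurons respond in both conditions, a decoder fitted in one condition remains meaningful in the other.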

[Image: PLOS Biology]

“We got significant results, but it’s not good enough yet to build a device,” says Stephanie Martin, who works with Brian Pasley.

The problem at the moment is that the translation is based on both the speech and thought patterns of the same person, whereas a device for people who cannot speak would need to work on what they hear and think. But the results thus far are promising.

"We don't think it would be an issue to train the decoder on heard speech because they share overlapping brain areas," says Martin.

The research team is now expanding and fine-tuning its algorithm, even turning to Pink Floyd to work on the brain’s reaction to music. While it may yet take a while, giving a voice to those who have lost their words might just start with a visit to the dark side of the moon.
