Facebook makes progress on brain-computer interface research
It’s been two years since Facebook announced its brain-computer interface (BCI) program, and the company finally has an update it is ready to share. Through a sponsored research initiative with the University of California, San Francisco (UCSF), researchers have been able to decode spoken words and phrases in real time from the brain signals that control speech.
“This isn’t about decoding your random thoughts. Think of it like this: You take many photos and choose to share only some of them. Similarly, you have many thoughts and choose to share only some of them. This is about decoding those words you’ve already decided to share by sending them to the speech center of your brain. It’s a way to communicate with the speed and flexibility of your voice and the privacy of text. We want to do this with non-invasive, wearable sensors that can be manufactured at scale,” the company wrote in 2017 when it announced the project.
The company explained that while electrocorticography has been used to explore brain-computer interface technologies, it wants to take the work a step further and change the way people interact with digital devices through a non-invasive, wearable device rather than embedded electrodes.
Facebook recruited Edward Chang, a world-renowned neurosurgeon at UCSF who leads a research team focused on brain mapping and speech neuroscience.
“After discussing the importance of UCSF’s research aimed at improving the lives of people suffering from paralysis and other forms of speech impairment, as well as Facebook’s interest in the long-term potential of BCI to change the way we interact with technology more broadly, the two decided to team up with the shared goal of demonstrating whether it might really be possible to decode speech from brain activity in real time,” the company wrote in a post.
Chang, along with a postdoctoral scholar in his lab, studied the brain activity that controls speech in three volunteer research participants at the UCSF Epilepsy Center who had recording electrodes temporarily placed on their brains.
Machine learning algorithms detected when participants were hearing a new question or beginning to respond, and identified which of two dozen standard responses a participant was giving with 61 percent accuracy.
“Real-time processing of brain activity has been used to decode simple speech sounds, but this is the first time this approach has been used to identify spoken words and phrases,” said postdoctoral researcher David Moses, PhD, who led the research. “It’s important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate from brain activity.”
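The post does not detail UCSF's actual decoding pipeline, but the classification step it describes, picking one of roughly two dozen candidate answers from recorded neural activity, can be illustrated with a minimal sketch. The sketch below is purely hypothetical: the electrode count, feature construction, and data are all synthetic stand-ins, and it uses an off-the-shelf scikit-learn classifier rather than anything the researchers published. The real system also had to detect when a question or answer was occurring before classifying it.

```python
# Illustrative sketch only -- not UCSF's actual pipeline. It assumes the
# decoding step reduces to classifying a fixed set of candidate answers
# from one neural feature vector per trial (e.g., per-electrode activity).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_ELECTRODES = 128      # hypothetical electrode/feature count
N_ANSWERS = 24          # "two dozen standard responses" from the study
TRIALS_PER_ANSWER = 40  # synthetic trials, invented for illustration

# Simulate the data: each answer class gets its own mean activity
# pattern, and individual trials are noisy samples around that mean.
class_means = rng.normal(0, 1, size=(N_ANSWERS, N_ELECTRODES))
X = np.vstack([
    mean + rng.normal(0, 2.0, size=(TRIALS_PER_ANSWER, N_ELECTRODES))
    for mean in class_means
])
y = np.repeat(np.arange(N_ANSWERS), TRIALS_PER_ANSWER)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A plain linear classifier over the feature vectors; the published
# system is considerably more sophisticated than this.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

With a closed vocabulary like this, chance accuracy is only about 4 percent (1 in 24), which is why the study's 61 percent figure is meaningful even though the task is far simpler than open-ended speech decoding.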
While Facebook is still a long way from releasing a fully non-invasive, wearable device at scale, the company is also exploring other approaches with partners such as the Mallinckrodt Institute of Radiology at Washington University School of Medicine and the Applied Physics Laboratory (APL) at Johns Hopkins.
Facebook envisions a future where brain-computer interface technology can help patients with neurological damage speak again by detecting intended speech from brain activity in real time.
“A decade from now, the ability to type directly from our brains may be accepted as a given. Not long ago, it sounded like science fiction. Now, it feels within plausible reach. And our responsibility to ensure that these technologies work for everyone begins today,” Facebook wrote.