Facebook’s work on neural input for augmented and virtual reality has largely moved in one direction: wrist-based devices. But the company has continued to fund research into implanted brain-computer interfaces. The latest milestone in that years-long effort comes from Project Steno, a Facebook-funded UCSF study that translates the attempted speech of a paralyzed patient into words on a screen.
“We hope this demonstrates the principle of directly controlling a communication device using intentional attempts to speak as the control signal, by someone who cannot speak and is paralyzed,” said Dr. David Moses, lead author of the study, published Wednesday in the New England Journal of Medicine.
Brain-computer interfaces (BCIs) have been behind a number of promising recent advances, including Stanford research that can turn imagined handwriting into on-screen text. The UCSF study takes a different approach, analyzing actual attempts to speak and acting almost like a translator.
The study, led by UCSF neurosurgeon Dr. Edward Chang, involved implanting an electrode array in a paralyzed man who had suffered a brainstem stroke at the age of 20. The man attempted to answer questions shown on a screen, while UCSF’s machine-learning algorithms recognized a vocabulary of 50 words and converted his attempts into sentences in real time. For example, when the patient saw the prompt “How are you today?”, his response appeared on the screen word by word: “I am very good.”
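The decoding loop described above — classify each attempted word against a fixed 50-word vocabulary, then assemble the results into a sentence — can be sketched roughly as follows. This is a toy illustration, not the study’s actual models: the vocabulary subset, feature dimensions, and nearest-template classifier here are invented stand-ins for the learned neural-network decoders and language model UCSF actually used.

```python
import numpy as np

# Hypothetical subset of a 50-word vocabulary (stand-in, not the study's list).
VOCAB = ["i", "am", "very", "good", "hello", "thirsty", "family", "nurse"]

rng = np.random.default_rng(0)
N_FEATURES = 16

# Stand-in "neural signature" per word; in the real system these mappings
# were learned from recorded cortical activity, not drawn at random.
word_templates = {w: rng.normal(size=N_FEATURES) for w in VOCAB}

def decode_word(features: np.ndarray) -> str:
    """Return the vocabulary word whose template is closest to the features
    (a nearest-template stand-in for the study's learned classifiers)."""
    return min(word_templates,
               key=lambda w: np.linalg.norm(features - word_templates[w]))

def decode_sentence(feature_windows) -> str:
    """Decode each attempted-word window and join the results.
    (The actual system also scored word sequences with a language model.)"""
    return " ".join(decode_word(f) for f in feature_windows)

# Simulate the patient attempting "i am very good": each window is the
# word's template plus noise, mimicking noisy neural recordings.
attempt = [word_templates[w] + 0.1 * rng.normal(size=N_FEATURES)
           for w in ["i", "am", "very", "good"]]
print(decode_sentence(attempt))  # → "i am very good"
```

The design point the study makes is that the control signal is an intentional attempt to speak, so the decoder only needs to discriminate among a closed vocabulary rather than transcribe arbitrary speech.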
Moses said the work will continue beyond its Facebook-funded phase, and that the research still has a long way to go. For now, it remains unclear how much of the recognition comes from recorded patterns of brain activity, from the patient attempting to sound out words, or from a combination of both.
Moses is quick to point out that the study, like other BCI work, is not mind reading: it relies on detecting brain activity that occurs specifically when a person deliberately attempts a particular behavior, such as speaking. Moses also says the UCSF team’s work does not yet translate to non-invasive neural interfaces. Elon Musk’s Neuralink promises to wirelessly transmit data from brain-implanted electrodes for future research and assistive purposes, but so far the technology has been demonstrated only in monkeys.
Meanwhile, Facebook Reality Labs Research has pivoted away from head-mounted brain-computer interfaces for futuristic VR/AR headsets and toward a nearer-term goal: wrist-worn devices based on technology acquired from CTRL-Labs. Facebook Reality Labs built its own non-invasive head-mounted research hardware to study brain activity, and the company has announced it will make that hardware available for open-source research, since it is no longer focused on head-mounted neural devices. (UCSF received funding from Facebook, but no hardware.)
“Aspects of the optical head-mounted work will be applied to our wrist-based electromyography research. We will continue to use optical BCI as a research tool to build better wrist-based sensor models and algorithms. While we will continue leveraging these prototypes in our research, we are no longer developing a head-mounted optical BCI device to sense speech production. That is one reason we will be sharing our head-mounted hardware prototypes with other researchers, who can apply our innovations to other use cases,” a Facebook representative confirmed via email.
Consumer-oriented neural input technology is still in its infancy. While some consumer devices already use head- or wrist-worn sensors, they are far less accurate than today’s implanted electrodes.