Thanks to further development, researchers have succeeded in restoring speech to patients. A brain-computer interface makes this possible.
Speech problems can occur after a stroke; some sufferers even lose the ability to speak completely. Patients with amyotrophic lateral sclerosis (ALS) are also affected. As ALS progresses, patients’ nerve cells become increasingly damaged, including those responsible for muscle control. This can lead to muscle weakness and paralysis, which may also affect the muscles of the mouth and throat. As a result, patients have difficulty forming words, and their speech becomes slurred or incomprehensible.
Communication aids such as computer voice devices already exist for such situations. Now, however, American researchers have developed a new type of communication system based on artificial intelligence (AI) that could significantly improve the quality of life of those affected.
Nerve impulses are converted into text or sound
A team of researchers led by Francis R. Willett of the Howard Hughes Medical Institute at Stanford University in California has made dramatic improvements to brain implants and the associated decoding software. An ALS patient tested the system and was pleasantly surprised by the results: the woman, who had lost her ability to speak, was able to express herself verbally using the AI-based solution.
Simply put, it works like this: brain implants transmit nerve impulses to the system, which converts them into speech using artificial intelligence. The US study authors put it in their own words as follows: “Speech-enabled brain-computer interfaces (BCIs) have the potential to enable people with paralysis to communicate quickly by converting neural activity elicited by speech attempts into text or sound.”
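To make the idea concrete: the study itself uses a recurrent neural network trained on cortical recordings, combined with a language model. The sketch below is only a toy stand-in for that principle, with entirely invented "neural feature" vectors and a nearest-template classifier: each attempted word produces a characteristic pattern of neural activity, and the decoder maps a new measurement to the closest known pattern.

```python
# Toy illustration only: the real system decodes cortical activity with a
# trained neural network; here a nearest-centroid lookup over made-up
# feature vectors stands in for the decoding step.
import math

# Invented "training" data: an average activity pattern per attempted word.
word_templates = {
    "hello": [0.9, 0.1, 0.4],
    "water": [0.2, 0.8, 0.5],
    "yes":   [0.1, 0.2, 0.9],
}

def decode(features):
    """Return the word whose template is closest (Euclidean) to the input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(word_templates, key=lambda w: dist(word_templates[w], features))

# A noisy measurement resembling the "water" pattern decodes to "water".
print(decode([0.25, 0.75, 0.55]))  # water
```

The real decoder works on phoneme probabilities rather than whole words and runs continuously on streamed recordings, but the mapping-to-nearest-known-pattern intuition is the same.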
Artificial intelligence enables more accurate translation of nerve impulses into speech
This is not the first time that researchers have been able to correctly read what someone wants to say from brain data. What is new in the current publication is that the researchers use artificial intelligence to process the data. The advantage: AI enables a more accurate, less error-prone translation of brain impulses into language than was previously possible.
“Thanks to these high-resolution recordings, the study participant, who was no longer able to speak clearly due to amyotrophic lateral sclerosis, achieved a word error rate of 9.1 percent with a vocabulary of 50 words (2.7 times fewer errors than the previous state-of-the-art speech BCI) and a word error rate of 23.8 percent with a vocabulary of 125,000 words (to our knowledge, the first successful demonstration of large-vocabulary decoding). Our participant’s attempted speech was decoded at a rate of 62 words per minute, which is 3.4 times faster than the previous record and approaches the speed of normal conversation (160 words per minute).” The study has been published in the journal Nature. The researchers conclude: “These findings show a viable route to rapidly restore communication to paralyzed people who are no longer able to speak.”
This article only contains general information about the health topic in question and is therefore not intended for self-diagnosis, treatment or medication. It does not, in any way, replace a visit to a doctor. Unfortunately, our editors are not allowed to answer individual questions about medical conditions.