New brain-computer interface devices developed to help severely paralyzed individuals regain the ability to communicate
The journal Nature recently published two neuroscience papers in which researchers report new brain-computer interface (BCI) devices that decode brain activity into speech more quickly, more accurately, and with a larger vocabulary than existing technologies. The findings demonstrate advances in technology aimed at helping severely paralyzed individuals regain the ability to communicate.
According to the papers, people with neurological disorders often lose the ability to speak because of muscle paralysis. Previous studies have shown that speech can be decoded from the brain activity of paralyzed patients, but only as text output, with limited speed, accuracy, and vocabulary.
In one of the papers, Francis Willett and colleagues at Stanford University in the United States developed a brain-computer interface that records the neural activity of individual cells through fine electrode arrays inserted into the brain, and trained an artificial neural network to decode the patient's attempted speech. With this device, a patient with amyotrophic lateral sclerosis was able to communicate at 62 words per minute, 3.4 times faster than previous comparable devices and closer to the pace of natural conversation. The device achieved an error rate of 9.1% on a 50-word vocabulary, 2.7 times lower than the most advanced speech brain-computer interface before it. On a 125,000-word vocabulary, its error rate was 23.8%. The authors believe this may be the first successful demonstration of large-vocabulary decoding in such research.
In the other paper, Edward Chang and colleagues at the University of California, San Francisco developed a device that captures brain activity by a different method: electrodes placed on the surface of the brain detect the activity of many cells at once. This brain-computer interface can convert brain signals into three output forms simultaneously: text, speech, and control of an avatar. The researchers trained a deep-learning model to decode neural data recorded from a patient severely paralyzed by a brainstem stroke as she attempted to silently mouth sentences. The median speed of translating brain signals into text was 78 words per minute; when translating brain signals into speech, the error rate was 28.2% on a 372-word vocabulary, with smaller vocabularies yielding lower error rates. The device can also translate neural activity into facial expressions presented through an animated avatar. Taken together, this multimodal brain-computer interface offers paralyzed patients more options, allowing them to communicate more naturally and expressively.
An accompanying News & Views article by peer experts, published in Nature at the same time, argues that the two brain-computer interfaces "represent significant advances in neuroscience and neuroengineering research, and hold great promise for easing the suffering of people who have lost their voices to paralyzing neurological injuries and diseases." However, further work is needed before they can be applied more widely.