Decrypt’s Art, Fashion, and Entertainment Hub highlights an exciting innovation for stroke patients who struggle with dysarthria, a speech disorder. A new device called SCENE is here to help these patients regain their ability to communicate naturally and fluently.
This “intelligent throat” system, created by an international team of researchers, uses advanced sensors and artificial intelligence (AI) to interpret silent speech and emotional cues in real time. It combines textile strain sensors that pick up vibrations from throat muscles with carotid pulse monitors, all working together with large language models to process speech effectively.
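The team has not published its code, so purely as an illustration of the idea, here is a minimal Python sketch of how the front end of such a pipeline might be wired together. The SensorFrame class and the decode_tokens and detect_emotion functions are hypothetical stand-ins, not the researchers’ actual implementation.

```python
# Hypothetical sketch (not the team's code): throat-vibration and pulse samples
# are framed together, then decoded into word tokens and an emotion label for
# downstream language-model processing.
from dataclasses import dataclass


@dataclass
class SensorFrame:
    throat_strain: list[float]   # vibration samples from the textile strain sensor
    carotid_pulse: list[float]   # pulse waveform used as an emotional cue


def decode_tokens(frame: SensorFrame) -> list[str]:
    # Stand-in for the neural decoder that maps throat-muscle vibrations to word tokens.
    return ["water", "please"]


def detect_emotion(frame: SensorFrame) -> str:
    # Stand-in for the classifier that reads emotional state from pulse variability.
    mean_pulse = sum(frame.carotid_pulse) / len(frame.carotid_pulse)
    return "calm" if mean_pulse < 0.5 else "agitated"


frame = SensorFrame(throat_strain=[0.1, 0.3, 0.2], carotid_pulse=[0.40, 0.45, 0.42])
print(decode_tokens(frame), detect_emotion(frame))  # ['water', 'please'] calm
```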
What sets the SCENE device apart? Unlike existing technologies, it translates silent speech into clear, coherent sentences in real time. It also captures emotional and contextual nuances, making communication richer and more personal. In tests with five patients, the system achieved a 4.2% word error rate and a 2.9% sentence error rate. That’s a significant improvement over previous silent speech systems. Plus, user satisfaction jumped by 55%, showing how well it meets patients’ needs for expressive communication.
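For context, word error rate counts the word insertions, deletions, and substitutions needed to turn the recognized sentence into the reference sentence, divided by the length of the reference. The short Python example below, which is not taken from the study, shows the calculation.

```python
# Word error rate: word-level edit distance between reference and hypothesis,
# divided by the number of reference words. Illustrative only.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for the edit distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)


print(word_error_rate("i would like some water", "i would like water"))  # 0.2
```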
According to a research paper submitted recently, “The system generates personalized, contextually appropriate sentences that accurately reflect patients' intended meaning.” The device is designed as a comfortable choker embedded with graphene-based strain sensors, ensuring high sensitivity for daily use.
It features a built-in wireless module that allows for continuous data transmission while using minimal energy. This means patients can use it all day without worrying about battery life. The system’s embedded LLM agents analyze speech tokens and emotional signals, refining sentences to match what the user wants to express. This personalized approach creates dynamic, real-time communication, bridging the gap between patient needs and technology.
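The research paper does not spell out the exact prompts its LLM agents use, but as a rough, hypothetical sketch, the snippet below shows how decoded keywords, an emotion label, and a user profile could be folded into a single prompt for a language model. The build_refinement_prompt function and its inputs are illustrative assumptions, not the device’s actual interface.

```python
# Hypothetical sketch of the refinement step: decoded keywords and an emotion
# label are combined into one prompt so a language model can expand them into
# a personalized, contextually appropriate sentence.
def build_refinement_prompt(tokens: list[str], emotion: str, user_profile: str) -> str:
    return (
        "You are assisting a patient with dysarthria.\n"
        f"Patient background: {user_profile}\n"
        f"Decoded keywords: {', '.join(tokens)}\n"
        f"Detected emotional state: {emotion}\n"
        "Expand the keywords into one short, natural sentence that reflects "
        "the patient's intended meaning and emotional tone."
    )


prompt = build_refinement_prompt(
    tokens=["cold", "window", "close"],
    emotion="mildly frustrated",
    user_profile="prefers polite, concise phrasing",
)
print(prompt)  # In a real system, this would be sent to the on-device LLM agent.
```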
The researchers also see broader applications for the device. It could support patients with other neurological conditions like ALS and Parkinson’s. They’re even considering the potential for multilingual adaptations. Right now, the team is focused on making the device smaller and integrating it into edge-computing frameworks to improve usability.