Edward Chang and UCSF colleagues are developing technology that translates signals from the brain into synthetic speech. The system simulates the movements of the lips, jaw, tongue, and larynx that produce speech, and the research team believes the resulting audio can be nearly as clear and natural as a real person's voice.
The goal is a communication method for people who have lost the ability to speak through disease or paralysis.
According to Chang: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity.”
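The work described here proceeds in stages: a decoder first infers vocal-tract movements from recorded brain activity, then converts those movements into acoustic features that a synthesizer renders as audio. Below is a minimal sketch of that two-stage idea in Python/PyTorch, offered only as an illustration; the channel counts, layer sizes, and network architecture are assumptions for the example, not the UCSF team's actual model.

```python
import torch
import torch.nn as nn

# Illustrative dimensions -- assumptions, not the published system's.
N_ELECTRODES = 256    # recorded brain-signal channels
N_ARTICULATORS = 33   # kinematic traces (lips, jaw, tongue, larynx)
N_ACOUSTIC = 32       # acoustic features for a speech synthesizer


class TwoStageSpeechDecoder(nn.Module):
    """Sketch of a two-stage decoder: brain activity -> articulator
    movements -> acoustic features that a vocoder turns into audio."""

    def __init__(self):
        super().__init__()
        # Stage 1: decode articulatory kinematics from brain signals.
        self.brain_to_articulation = nn.LSTM(
            N_ELECTRODES, 128, batch_first=True, bidirectional=True)
        self.articulation_head = nn.Linear(2 * 128, N_ARTICULATORS)
        # Stage 2: map articulator movements to acoustic features.
        self.articulation_to_acoustics = nn.LSTM(
            N_ARTICULATORS, 128, batch_first=True, bidirectional=True)
        self.acoustic_head = nn.Linear(2 * 128, N_ACOUSTIC)

    def forward(self, brain_signals):  # (batch, time, electrodes)
        h, _ = self.brain_to_articulation(brain_signals)
        articulation = self.articulation_head(h)   # movement traces
        h, _ = self.articulation_to_acoustics(articulation)
        acoustics = self.acoustic_head(h)          # synthesizer input
        return articulation, acoustics


decoder = TwoStageSpeechDecoder()
fake_signals = torch.randn(1, 200, N_ELECTRODES)  # 200 time steps
movements, acoustic_features = decoder(fake_signals)
print(movements.shape, acoustic_features.shape)
```

Splitting the problem this way mirrors the intuition in the article: rather than mapping brain activity straight to sound, the intermediate articulatory representation captures what the lips, jaw, tongue, and larynx are doing, and the acoustics follow from those movements.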
Berkeley’s Bob Knight has developed related technology, using high-frequency band (HFB) activity to decode imagined speech, with the goal of a brain-computer interface (BCI) for treating disabling language deficits. He described this work at the 2018 ApplySci conference at Stanford.
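HFB activity here refers to high-frequency band power (roughly 70 to 150 Hz) in intracranial recordings, a common input feature for speech decoders because its amplitude envelope tracks local neural activity. The sketch below shows one standard way to extract that envelope with SciPy; the band edges, filter order, and function name are assumptions for illustration, not details of Knight's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert


def hfb_envelope(recording, fs, band=(70.0, 150.0)):
    """Band-pass a neural recording to the high-frequency band and
    return its analytic-amplitude envelope (a typical HFB feature)."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, recording, axis=-1)   # zero-phase filter
    return np.abs(hilbert(filtered, axis=-1))        # instantaneous amplitude


# Example: one second of fake 1 kHz recording from 64 channels.
fs = 1000.0
fake_recording = np.random.randn(64, int(fs))
envelope = hfb_envelope(fake_recording, fs)
print(envelope.shape)  # (64, 1000): one envelope per channel
```

Features like this envelope, computed per electrode and time window, are what a downstream classifier or decoder would consume when attempting to recover imagined speech.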