- Apple announced plans to support Switch Control for brain-computer interfaces
- The feature would make devices such as the iPhone and the Vision Pro headset accessible to people with conditions such as ALS
- Combined with Apple's AI-powered Personal Voice feature, brain-computer interfaces could let people think words and hear them spoken in a synthetic version of their own voice
Our smartphones and other devices are key to so many personal and professional tasks throughout the day. For people with ALS and other conditions, using these devices can be difficult or outright impossible. Apple believes it may have a solution: thought. Specifically, a brain-computer interface (BCI) built with the Australian neurotech startup Synchron that could provide hands-free, thought-controlled versions of the operating systems for iPhones, iPads, and the Vision Pro headset.
A brain implant to control your phone may sound extreme, but it could be the key for people with severe spinal cord injuries or related conditions to interact with the world. Apple will support Switch Control for those with the implant, which is embedded near the brain's motor cortex. The implant picks up the brain's electrical signals when a person thinks about moving, then translates that activity and feeds it to Apple's Switch Control software, turning it into digital actions such as selecting icons on a screen or navigating a virtual environment.
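To make that flow concrete, here is a minimal Swift sketch of how decoded movement intents might be mapped to Switch Control style actions. Everything in it is an illustrative assumption: Apple has not published a BCI API, and the `DecodedIntent` and `SwitchAction` types are hypothetical stand-ins for whatever the real pipeline uses.

```swift
import Foundation

// Hypothetical: the kinds of movement intent a BCI decoder might report.
enum DecodedIntent {
    case select       // user imagines a "click" movement
    case moveNext     // user imagines moving forward
    case movePrevious // user imagines moving backward
    case idle         // no movement intent detected
}

// Hypothetical: the discrete actions Switch Control style scanning needs.
enum SwitchAction {
    case activate
    case advanceFocus
    case retreatFocus
}

// Translate a window of decoded intents into switch actions, dropping idle frames.
func switchActions(from intents: [DecodedIntent]) -> [SwitchAction] {
    intents.compactMap { intent in
        switch intent {
        case .select:       return .activate
        case .moveNext:     return .advanceFocus
        case .movePrevious: return .retreatFocus
        case .idle:         return nil
        }
    }
}

// Example: a short window of decoded brain activity.
let window: [DecodedIntent] = [.idle, .moveNext, .idle, .moveNext, .select]
print(switchActions(from: window)) // advances focus twice, then activates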
Brain implants, AI voices
Of course, it is still early days for the system. It can be slow compared to tapping, and it will take developers time to build better BCI tools. But speed is not the point right now. The point is that people could use a brain implant and an iPhone to interact with a world that would otherwise be closed to them.
The possibilities look even greater when you consider how BCIs could be combined with AI-generated personal voice clones. Apple's Personal Voice feature lets users record a sample of their own speech so that, if they lose the ability to speak, they can generate synthetic speech that still sounds like them. It is not quite indistinguishable from the real thing, but it is close, and far more human than the familiar robotic imitations of old movies and TV shows.
Right now, these voices are triggered by touch, eye tracking, or other assistive technology. But with BCI integration, those same people could "think" their voice into existence. They could speak simply by intending to speak, and the system would do the rest. Imagine someone with ALS not only navigating their iPhone with their thoughts, but also speaking again through the same device by "typing" sentences for their synthetic voice clone to say.
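The speech side of that picture already has a public API. As a rough sketch, the snippet below speaks a string through the user's Personal Voice on iOS 17 or later; the `AVSpeechSynthesizer` calls are real AVFoundation APIs, but the framing that the text arrives from a BCI-driven Switch Control session is an assumption for illustration, not a documented Apple pipeline.

```swift
import AVFoundation

// Retain the synthesizer so speech isn't cut off mid-utterance.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    // Personal Voice requires explicit user authorization (iOS 17+).
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        // Use the device's Personal Voice if the user has created one;
        // otherwise fall back to a standard system voice.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = personalVoice ?? AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}

// Hypothetical call site: text the user "typed" by thought.
speak("Good morning.")
```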
While it is remarkable that a brain implant can let someone control a computer with their mind, AI could take it to another level. It wouldn't just help people use technology; it would help them be themselves in a digital world.