Facebook is working on mind-reading

Credit to Author: Lisa Vaas | Date: Fri, 02 Aug 2019 11:39:03 +0000

How does the prospect of Facebook learning how to read minds strike you?

Fellow social-media-participating lab rats, you’re likely already aware that Facebook was crafted on the principles of Las Vegas-style addiction: the idea is to exploit human psychology by doling out little hits of dopamine with those “Likes,” keeping us coming back to the platform like slot-machine addicts who feel favored by Lady Luck.

In 2017, during that era’s spate of tech-industry mea culpas, former Facebook president Sean Parker told us all about the company’s nonchalant efforts to get us addicted.

All of which is to say: it’s reasonable to worry about Facebook playing around with our wetware. There are plenty of reasons why somebody might not trust the company with direct access to their brain.

But one of Facebook’s technology research projects – funding artificial intelligence (AI) algorithms capable of turning brain activity into speech – may well be altruistic.

It’s about creating a brain-computer interface (BCI) that allows people to type just by thinking, and Facebook has announced a first in the field: while previous decoding work had been done offline, a team at the University of California San Francisco (UCSF) has, for the first time, managed to decode a small set of full, spoken words and phrases from brain activity in real time.

In a paper published on Tuesday in Nature Communications, UCSF neurosurgeon Edward Chang and postdoctoral scholar David Moses presented the results of a study demonstrating that brain activity recorded while people speak can be used to almost instantly decode what they’re saying into text on a computer screen.
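To get a feel for what real-time decoding involves, here’s a deliberately toy sketch in Python: slide a window over the incoming multi-channel neural signal, reduce each window to a feature vector, and match it against per-word patterns learned offline. Everything below – the three-word vocabulary, the channel count, the synthetic signal, and the nearest-centroid “model” – is invented for illustration; the UCSF system uses far more sophisticated machine-learning models trained on real cortical recordings.

```python
# Hypothetical sketch of a real-time decoding loop. Nothing here is
# UCSF's code: the vocabulary, dimensions, synthetic signal, and the
# nearest-centroid "model" are all stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["home", "select", "delete"]   # toy three-word vocabulary
N_CHANNELS, WINDOW = 16, 50            # electrode channels x samples per window

# Pretend these per-word patterns were learned offline from labeled recordings.
centroids = {w: rng.normal(loc=i, size=N_CHANNELS) for i, w in enumerate(VOCAB)}

def featurize(window: np.ndarray) -> np.ndarray:
    """Collapse a (channels x samples) window to one feature per channel."""
    return window.mean(axis=1)

def decode(window: np.ndarray) -> str:
    """Nearest-centroid classification of a single window of neural data."""
    feats = featurize(window)
    return min(VOCAB, key=lambda w: np.linalg.norm(feats - centroids[w]))

# Simulate a live stream: each "utterance" is noise centered on its class index.
for true_word in ["delete", "home", "select"]:
    i = VOCAB.index(true_word)
    window = rng.normal(loc=i, size=(N_CHANNELS, WINDOW))
    print(f"spoke {true_word!r} -> decoded {decode(window)!r}")
```

Even this toy version captures the core constraint the study tackled: each window has to be classified quickly enough to keep pace with natural speech.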

Chang also runs a leading brain mapping and speech neuroscience research team dedicated to developing new treatments for patients with neurological disorders. In short, he’s the logical choice for the BCI program, which Facebook announced at its F8 conference in 2017. The program’s goal is to build a non-invasive, wearable device that lets people type by simply imagining that they’re talking.

In a blog post about the project, Facebook said that the past decade has brought tremendous strides in neuroscience: we now know much more about how the brain understands and produces speech.

At the same time, AI research is improving speech-to-text translation. Put those technologies together, and the hope is that one day people will be able to communicate simply by thinking about what they want to say – a possibility that could dramatically improve the lives of people living with paralysis.

The study involved three volunteers with epilepsy who were already undergoing treatment at UCSF Medical Center. Prior to surgery, they’d had recording electrodes implanted in their brains to locate the origins of their seizures.

The hope is to decode more words in less time: the stated goal is 100 words per minute, drawn from a 1,000-word vocabulary, with an error rate of less than 17%.

While this study relied on electrodes implanted in the patients’ brains, Facebook says the ultimate goal is a non-invasive, wearable device to help patients with speech loss.

That’s quite a way off, but Facebook Reality Labs (FRL) is working with partners, including the Mallinckrodt Institute of Radiology at Washington University School of Medicine and the Applied Physics Laboratory (APL) at Johns Hopkins, on how to do it.

The next step: to use infrared light.

Facebook asks us to imagine a pulse oximeter – the clip-like sensor with a glowing red light that clinicians clamp onto your index finger when you visit the doctor. Those devices measure the oxygen saturation of your blood through your finger. In much the same way, near-infrared light can be used to measure blood oxygenation in the brain from outside the body, safely and non-invasively. FRL says the signal is similar to what functional magnetic resonance imaging (fMRI) measures today, but it would come from a portable, wearable device made from consumer-grade parts.
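For the curious, the measurement principle Facebook is invoking boils down to a simple calculation: oxygenated and deoxygenated hemoglobin absorb red and infrared light differently, so comparing the pulsing (AC) and steady (DC) components of the two light signals yields a number that tracks oxygen saturation. Here’s a toy Python sketch of that textbook “ratio of ratios” estimate – the calibration constants and synthetic traces are stand-ins, and none of this reflects FRL’s actual hardware, which aims at the brain rather than a fingertip.

```python
# Toy illustration of the pulse-oximetry principle: oxygenated and
# deoxygenated hemoglobin absorb red vs. infrared light differently, so
# comparing the pulsing (AC) and steady (DC) parts of the two signals
# tracks oxygen saturation. The "ratio of ratios" linearization and its
# constants are textbook approximations, not anything from FRL's device.
import numpy as np

def spo2_estimate(red: np.ndarray, infrared: np.ndarray) -> float:
    """Estimate blood-oxygen saturation (%) from two photodetector traces."""
    ratio = (np.ptp(red) / red.mean()) / (np.ptp(infrared) / infrared.mean())
    return 110.0 - 25.0 * ratio   # common empirical calibration curve

# Synthetic traces: a small heartbeat-driven pulse riding on a steady level.
t = np.linspace(0.0, 5.0, 500)
heartbeat = np.sin(2 * np.pi * 1.2 * t)   # ~72 beats per minute
red = 1.0 + 0.010 * heartbeat
infrared = 1.0 + 0.016 * heartbeat
print(f"estimated SpO2: {spo2_estimate(red, infrared):.0f}%")
```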

The infrared system is clunky: “bulky, slow and unreliable,” in FRL’s own words. But if it could be improved even modestly – enough to decode a handful of silent thoughts, such as “home,” “select,” and “delete” – FRL thinks it could reinvent how we interact with today’s virtual reality systems, as well as tomorrow’s augmented reality glasses.

What comes after measuring blood oxygenation to determine brain activity? Measuring the movement of blood vessels and even neurons themselves.

As FRL put it: “Thanks to the commercialization of optical technologies for smartphones and LiDAR, we think we can create small, convenient BCI devices that will let us measure neural signals closer to those we currently record with implanted electrodes – and maybe even decode silent speech one day.”

Facebook, decode this: My brain is thinking “Wow.” My brain is thinking that this could be a mind-boggling boon to those who’ve lost speech to stroke or other brain injuries.
