Press "Enter" to skip to content

Can privacy coexist with technology that reads and changes brain activity?


Gertrude the pig rooted around a straw-filled pen, oblivious to the cameras, the onlookers, and the 1,024 electrodes eavesdropping on her brain signals. Every time the pig's snout found a treat in a researcher's hand, a musical jingle sounded, indicating activity in the nerve cells that control her snout.

Those beeps were part of the big reveal on August 28 from Elon Musk's company Neuralink. "In a lot of ways, it's kind of like a Fitbit in your skull with tiny wires," said Musk, founder of Tesla and SpaceX, of the new technology.

Neuroscientists have been recording the activity of animal nerve cells for decades. But the ambitions of Musk and others to link humans to computers are shocking in their reach. Futuristic entrepreneurs and researchers aim to listen in on our brains and perhaps even reshape thinking. Imagine being able to summon our Teslas with our minds, Jedi-style.

Some scientists have called Gertrude's debut a slick publicity stunt, full of unachievable promises. But Musk has surprised people before. "You can't argue with a guy who built his own electric car and sent it into orbit around Mars," says Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle.

Whenever Gertrude's snout touched something, nerve cells in her brain fired electrical signals detected by an implanted device (signals shown as black wavy lines). Similar technology may one day help people with paralysis or brain disorders. Neuralink

Whether or not Neuralink ever merges brains with Teslas, Musk is not the only dreamer chasing neurotechnology. Advances are coming quickly and span a variety of approaches, including external headsets that may be able to distinguish hunger from boredom; implanted electrodes that translate the intention to speak into real words; and wristbands that use nerve impulses for typing without a keyboard.

Today, paralyzed people are already testing brain-computer interfaces, a technology that connects brains to the digital world (SN: 11/16/13, p. 22). Using only brain signals, users have been able to shop online, communicate, and even use a prosthetic arm to drink from a cup (SN: 6/16/12, p. 5). The ability to listen in on, understand, and perhaps even modify the brain's conversations could change and improve people's lives in ways that go far beyond medical treatments. But those abilities also raise questions about who gets access to our brains, and for what purposes.

Because of neurotechnology's potential, for both good and bad, we all have a stake in shaping how it is created and, ultimately, how it is used. But most people don't get a chance to weigh in, and only find out about these advances after the fact. So we asked Science News readers for their opinions on recent advances in neurotechnology, describing three main ethical concerns: fairness, autonomy, and privacy. By far, readers were most concerned about privacy.

The idea of letting companies, or governments, or even health care workers access the inner workings of the brain spooked many respondents. Such an intrusion would be the ultimate invasion in a world where privacy is already scarce. "My brain is the only place I know is truly my own," one reader wrote.

Technology that can change your brain (nudge it to think or behave in certain ways) is especially troubling to many of our readers. A nightmare scenario raised by several respondents: people turned into zombies controlled by others.

When discussing this sort of brain manipulation, several science fiction scenarios come to mind: memories erased in the poignant 2004 film Eternal Sunshine of the Spotless Mind; ideas implanted in a person's mind, as in the 2010 film Inception; or people fooled into believing a virtual world is the real thing, as in the mind-bending 1999 thriller The Matrix.

Current technological capabilities do not come close to any of these fantasies. Still, “the here and now is just as interesting … and just as morally problematic,” says neuroethicist Timothy Brown of the University of Washington in Seattle. "We don't need The Matrix to get our dystopia."

The ability to nudge brain activity in certain directions raises ethical questions. Julia Yellow

Today, codes of ethics and laws govern research, medical treatments, and certain aspects of our privacy. But we have no comprehensive way to deal with the privacy violations that future advances in brain science may bring. "We're all flying by the seat of our pants here," says Rafael Yuste, a neurobiologist at Columbia University.

For now, ethics issues are being addressed piecemeal. Academic bioethicists and scientists, along with researchers at private companies such as IBM and Facebook, are discussing these issues among themselves. Large brain research consortia, such as the U.S. BRAIN Initiative (SN: 2/22/14, p. 16), include funding for projects that tackle privacy issues. And some governments, including Chile's national legislature, are beginning to address concerns raised by neurotechnology.

With such scattered efforts, it is no surprise that no consensus has emerged. The few answers that do exist are as varied as the people asking.

Reading thoughts

The ability to pull information directly from the brain, without relying on speaking, writing, or typing, has long been a goal for researchers and doctors seeking to help people whose bodies can no longer move or speak. Implanted electrodes can already record signals from movement areas of the brain, allowing people to control robotic prostheses.

In January 2019, researchers at Johns Hopkins University implanted electrodes in the brain of Robert “Buz” Chmielewski, who became quadriplegic after a surfing accident. With signals from both sides of his brain, Chmielewski controlled two prosthetic arms to simultaneously use a fork and a knife to feed himself, researchers said in a press release on Dec. 10.

Robert "Buz" Chmielewski, who has had quadriplexia since adolescence, uses brain signals to feed on cake. Using electrodes implanted on both sides of the brain, he controls two robotic arms: one handles the knife and the other holds the fork.

Other research has decoded speech from the brain signals of a paralyzed man who cannot speak. When the man saw the question "Do you want water?" on a computer screen, he responded with the text "No, I am not thirsty," using only his brain signals. The feat, described on November 19 at a symposium hosted by Columbia University, is another example of the tremendous progress being made in connecting brains to computers.

“Never before have we been able to get that kind of information without having to interact with the periphery of your body, which you had to activate voluntarily,” says Karen Rommelfanger, a neuroethicist at Emory University in Atlanta. Speaking, sign language, and writing, for example, “all require several steps to make decisions,” she says.

Today, efforts to extract information from the brain typically require bulky equipment, intense computing power, and, most importantly, a willing participant, Rommelfanger says. For now, an attempt to get into your mind could be easily thwarted by closing your eyes, or moving your fingers, or even falling asleep.

What's more, "I don't think any neuroscientist knows what a mind is or what a thought is," Rommelfanger says. "I am not concerned about mind reading, from the existing landscape of technologies."


But that landscape may change quickly. "We're very, very close" to having the ability to pull private information from people's brains, says Yuste, who points to studies that have decoded what a person is looking at and what words they hear. Scientists at Kernel, a neurotechnology company near Los Angeles, have invented a helmet, now coming to market, that is essentially a portable brain scanner able to capture activity in certain brain areas.

For now, companies have only our behavior (our likes, our clicks, our purchase histories) with which to build surprisingly accurate profiles of us and estimate what we will do next. And we let them. Predictive algorithms make good guesses, but guesses all the same. "With this neural data obtained from neurotechnology, it may no longer be a guess," Yuste says. Companies will have the real thing, straight from the source.

Even subconscious thoughts can be revealed with new technological improvements, Yuste says. "That's the ultimate fear of privacy, because what's left?"

Rewriting, revising

Technology that can change brain activity already exists today, in the form of medical treatments. These tools can detect and head off a seizure in a person with epilepsy, for example, or stop a tremor before it takes hold.

Researchers are testing similar systems for obsessive-compulsive disorder, addiction, and depression (SN: 2/16/19, p. 22). But the power to precisely and directly change a functioning brain, and as a result a person's behavior, raises concerns.

The desire to persuade, to change a person's opinion, is nothing new, says Marcello Ienca, a bioethicist at ETH Zurich. Winning hearts and minds is at the core of advertising and politics. But technology capable of changing your brain's activity with a subtle nudge "takes the current risks of manipulation to the next level," Ienca says.

"Imagine going into McDonald's and you suddenly have an irresistible craving for a cheeseburger (or 10)."

What if that influence moves outside the medical arena? A doctor might use precise brain-modifying technology to ease anorexia in a young person, but the same approach could be used for profit: "Imagine walking into McDonald's and suddenly having an irresistible craving for a cheeseburger (or 10)," one of our readers wrote.

Is that craving caused by real hunger? Or is it the result of a tiny neural nudge delivered just as you drive near the golden arches? Such a neural intrusion could sow uncertainty about where a desire comes from, or perhaps even escape notice altogether. "That's super dangerous," Yuste says. "The moment you start stimulating the brain, you are going to be changing people's minds, and they will never know it, because they will interpret it as 'that's me.'"

Precise control of people's brains is not possible with existing technology. But in a hint of what may be possible, scientists have already created visions in the brains of mice (SN: 8/17/19, p. 10). Using a technique called optogenetics to stimulate small groups of nerve cells, researchers made mice "see" lines that were not there. Those mice behaved exactly as if their eyes had actually seen the lines, says Yuste, whose research group conducted some of these experiments. "Puppets," he calls them.

Once researchers or companies can change our brain activity, will neural privacy require special protections? Julia Yellow

What to do?

As neurotechnology advances, scientists, ethicists, companies, and governments are looking for answers on whether, and how, brain technology should be regulated. For now, those answers depend entirely on who you ask. And they come against a backdrop of increasingly invasive technology that we are surprisingly comfortable with.

We let our smartphones track where we go, what time we fall asleep, and even whether we have washed our hands for a full 20 seconds. Add the digital breadcrumbs we actively share about the diets we try, the shows we binge, and the tweets we love, and our lives are an open book.

Those details can be more revealing than brain data, says Anna Wexler, an ethicist at the University of Pennsylvania. "My email address, my note-taking app, and my search engine history reflect more of who I am as a person, my identity, than our neural data ever could," she says.

"How would we know that what we thought or felt came from our own brains or if someone put it there?"

It's too early to worry about privacy invasions from neurotechnology, Wexler argues, a position that makes her something of an outlier. "Most of my colleagues would tell me I'm crazy."

At the other end of the spectrum, some researchers, including Yuste, have proposed strict privacy regulations that would treat a person's neural data like their organs. Just as a liver cannot be taken from a body without approval for medical purposes, neural data should not be removed either. That view has found purchase in Chile, which is now considering classifying neural data with protections that would keep companies from accessing it.

Other experts fall somewhere in the middle. Ienca, for example, does not want to see restrictions on personal freedom. People should have the option to sell or give away their brain data, whether for a product they like or for outright cash. "The human brain is becoming a new asset," says Ienca, something that can generate profit for companies eager to mine the data. He calls it "neurocapitalism."

And Ienca is fine with that. If a person is duly informed (granted, a big if), then they are within their rights to sell their data, or exchange it for a service or product. People should have the freedom to do what they like with their information.

Blanket rules, checklists, and regulations are not likely to be a good way forward, Rommelfanger says. "Right now, there are more than 20 frameworks, guidelines, and principles developed since 2014 on how to handle neuroscience," she says. They often cover "mental privacy" and "cognitive liberty," the freedom to control your own mental life.

Those guidelines are thoughtful, she says, but the technologies differ in what they are capable of and in their possible ethical repercussions. One-size-fits-all solutions do not exist, Rommelfanger says.

Instead, each company or research group may have to work through ethical issues throughout the development process. She and her colleagues recently proposed five questions researchers can ask themselves to start thinking through these ethical issues, including privacy and autonomy. The questions ask people to consider, for example, how a new technology might be used outside the lab.

Rommelfanger says advancing technology to help people with mental illness and paralysis is an ethical imperative. "More than my fear of a privacy violation, my fear is diminished public trust that could undermine all the good this technology could do."

A lack of ethical clarity is unlikely to slow the coming rush of neurotechnology. But careful consideration of the ethics could help shape its trajectory, and help protect what makes us most human.


