Neuralink has put its first chip in a human brain. What could possibly go wrong?
Nathan Higgins
Earlier this month, Elon Musk announced that his brain-computer interface company, Neuralink, had implanted a device in a human for the first time. The company’s PRIME study, approved by the US Food and Drug Administration last year, is testing a brain implant for “people with paralysis to control external devices with their thoughts”.
In the past few years, Neuralink has faced investigation for mistreatment of lab animals and seen the departure of several company executives. Nevertheless, the PRIME trial is a significant milestone for a company less than 10 years old.
However, Neuralink’s challenges are far from over. Implanting a device is just the beginning of a decades-long clinical project beset with competitors, financial hurdles, and ethical quandaries.
Decades of development
The first reported demonstration of a brain-computer interface took place in 1963. During a lecture at the University of Oxford, neuroscientist William Grey Walter bewildered his audience by linking a patient’s brain to the slide projector, allowing the patient to advance the slides of his presentation using only their thoughts.
However, the current wave of exploration in using brain-recording techniques to restore movement and communication to patients with severe paralysis began in the early 2000s. It draws on studies from the 1940s that measured the activity of single neurons, and more complex experiments on rats and monkeys in the 1990s.
Neuralink’s technology belongs to the next generation of recording devices. These have more electrodes and greater precision, and are safer, longer-lasting and more compatible with the body.
The Neuralink implant is thinner, smaller and less obtrusive than the “Utah array”, a device widely used in existing brain-computer interfaces that has been available since 2005.
Neuralink’s device is implanted by a special robot that rapidly inserts polymer threads, each containing dozens of electrodes. In total, the device has 3,072 electrodes – dwarfing the 100 electrodes of the Utah array.
Competitors
Neuralink faces stiff competition in the race to commercialise the first next-generation brain-computer interface.
Arguably its fiercest competitor is an Australian company called Synchron. This Melbourne-based start-up has developed a microelectrode mesh that is threaded through the blood vessels of the brain. It has allowed paralysed patients to use tablets and smartphones, surf the internet, send emails and manage their finances (and post on X, formerly Twitter).
The Synchron implant is described as a “minimally invasive” brain-computer interface. It requires only a minor incision in the neck, rather than the elaborate neurosurgery required by Neuralink and most other brain-computer interfaces.
In 2021, Synchron received a “Breakthrough Device Designation” in the United States, and is now on to its third clinical trial.
Patient welfare
This competitive landscape raises potential ethical issues concerning the welfare of patients in the PRIME study. For one, it’s notoriously difficult to recruit participants to neural implant studies. Patients must meet strict criteria to be eligible, and the trials are inherently risky and ask a lot of participants.
Musk’s public profile may help Neuralink find and enrol suitable patients. However, the company will need to be prepared to provide long-term support (potentially decades) to patients. If things go wrong, patients may need support to live with the consequences; if things go right, Neuralink may need to make sure the devices don’t stop working.
In 2022, a company called Second Sight Medical Products demonstrated the risks. Second Sight made retinal implants to treat blindness. When the company went bankrupt, it left more than 350 patients around the world with obsolete implants and no way to remove them.
If Neuralink’s devices are successful, they’re likely to transform patients’ lives. What happens if the company winds up operations because it can’t make a profit? A plan for long-term care is essential.
What’s more, the considerable hype surrounding Neuralink may have implications for obtaining informed consent from potential participants.
Musk famously compared the implant to a “Fitbit in your skull”. The device itself, Musk recently revealed, is misleadingly named “Telepathy”.
This techno-futurist language may give participants unrealistic expectations about how likely they are to benefit, and what kind of benefit they might receive. They may also underestimate the risks, which could include severe brain damage.
The way forward
In this next chapter of the Neuralink odyssey, Musk and his team must maintain a strong commitment to research integrity and patient care. Neuralink’s establishment of a patient registry to connect with patient communities is a step in the right direction.
Long-term planning and careful use of language will be necessary to prevent harm to patients and their families.
The nightmare scenario for all neurotechnology research would be a repeat of Walter Freeman’s disastrous prefrontal lobotomy experiments in the 1940s and 1950s. These had catastrophic consequences for patients, and set research back by generations.
This article originally appeared on The Conversation.
About the Authors
-
Nathan Higgins
Former PhD Candidate, Turner Institute for Brain and Mental Health
Nathan is a PhD candidate at the Turner Institute for Brain and Mental Health. His work focuses on the responsible research and innovation of neurotechnologies, with a specific emphasis on ethical issues related to post-trial access to implantable neural devices. He completed his undergraduate and honours degrees at the University of Melbourne, majoring in neuroscience, and recently concluded a full-time research assistant position at the Monash Bioethics Centre, where he was a member of a core team engaged in a Wellcome Trust-funded horizon scan of bioethical issues in anxiety, depression, and psychosis research.