From AI to living computers: The technologies powering the future of neuroscience
Our understanding of the brain has come a long way since ancient Egyptian times when the heart was believed to be the seat of emotion, cognition and moral reasoning. Even Aristotle had it all wrong – he, too, believed the heart was the centre of sensation and intellect, with the brain acting as a mere radiator to cool the heart’s hot blood.
Clearly, this theory didn’t stand the test of time, and our understanding has advanced dramatically in the centuries since.
We now know the brain, spinal cord and nerves form one integrated connective network – the nervous system – with “neuroscience” being the scientific study of how this intricate system drives everything from movement and memory to emotion and disease.
This knowledge has opened the door to significant advancements in diagnosing and treating neurological and psychiatric disorders, which has transformed many lives for the better.
But we have a long way to go – there’s still no cure for dementia; people living with schizophrenia can treat symptoms, but the medication side-effects bring a whole new set of challenges; and for the millions of people worldwide living with depression, the options are equally limited.
New technologies are bringing new hope. But, much like the intricacies of the nervous system, it’s a complex and delicate balance between technological innovation, human insight and ethical boundaries to accelerate neuroscience for the benefit of all.
Driven by Monash University’s Turner Institute for Brain and Mental Health as part of a neuroscience showcase to the Consular Corps Melbourne, a panel of neuro-experts, facilitated by Associate Professor Sharna Jamadar, recently convened at the impressive new Velos Accelerator to discuss “The fusion of neuroscience, AI and pharmacology: How Australia is uniquely positioned to lead this transformation”.
Together, Professor Chris Langmead from the Monash Institute of Pharmaceutical Sciences, Professor Adeel Razi from the Turner Institute and Dr Deval Mehta from the Faculty of IT shared their perspectives on the fast-evolving shifts in neuroscience.

AI is reshaping disease diagnosis, but its real value lies in how we use that knowledge
When people talk about “AI for good”, the potential of AI in disease diagnosis really embodies the concept. Take dementia, for example: by analysing data from sources such as brain scans and health records, AI can detect early signs of the disease that might otherwise go unnoticed, enabling faster and more accurate diagnoses.
It can even predict a person’s risk of dementia onset up to nine years in advance with 80% accuracy, according to joint research between Queen Mary University of London and the Turner Institute, published in Nature Mental Health.
In essence, AI is particularly well-suited to the world of diagnostics, where large, complex datasets are the norm. The challenge, as Professor Razi pointed out, is that there’s no cure for dementia – so what are we to do with this early diagnostic information?
While there are some medications, and a number of lifestyle factors that help reduce dementia risk (the usual suspects – exercise, eat well, don’t smoke, reduce alcohol), it’s worth questioning the real upside of living for years with the knowledge that dementia is lurking in the shadows, with no quick-fix solution in sight.
That being said, there are encouraging signs pointing towards renewed hope for dementia treatments, and in many cases the evolution of AI diagnostics is in lockstep with new treatments for diseases, leading to more personalised and efficient treatment plans.
It’s also helping to address the issue of inequalities in diagnostics, as highlighted by Dr Mehta, who’s interested in how we can harness AI to improve disease diagnosis in regional areas and low- and middle-income countries (LMICs), where people disproportionately slip through the cracks of healthcare systems.
We’re now seeing an uptick in the use of portable devices in LMICs and regional areas, leading to more accurate and faster diagnosis for a wide range of conditions.
From schizophrenia to Parkinson’s, AI is reshaping how scientists uncover and test potential new drugs
An area of neuroscience reaping truly transformative benefits from AI is drug discovery and development. Renowned for being a complex, time-consuming and expensive endeavour, drug development has traditionally relied on laborious trial-and-error experimentation. AI is redefining this paradigm.
Professor Langmead, who has a particularly strong interest in the development of new medicines for schizophrenia, says using AI to assess how drugs access and work in the brain is not a panacea, but it certainly increases the probability of success in a meaningful way.
Likewise, brain imaging datasets can help determine how an individual might respond to different drugs, and therefore who would be an appropriate candidate for clinical trials – this is especially important with diseases like schizophrenia.
Genuine breakthroughs in AI-driven drug development are happening, and we’re starting to see them first-hand. For neurological conditions there’s FB1006, which is in clinical trials for amyotrophic lateral sclerosis, a progressive neurodegenerative disease that attacks motor neurons. FB1006 was discovered and developed entirely with AI, from target identification to efficacy assessment.
There’s also exciting progress for Parkinson’s disease, with researchers from the University of Cambridge using an AI-based strategy to identify compounds that block the clumping, or aggregation, of alpha-synuclein, the protein that characterises Parkinson’s.
While AI is not the whole solution, there’s no doubt it’s proving to be a powerful tool in accelerating and refining drug discovery, helping researchers design and test new medicines with greater accuracy and speed.

From lab-grown brain cells to living computers, scientists are exploring the next frontier in intelligence – but ethics must keep pace
During the panel discussion, Professor Razi posed the question: What if we could take brain cells grown in the lab, place them on an electronic surface and let them communicate with each other?
It sounds like the stuff of science-fiction, but this could be the future of computing. Biological computers, or “biocomputers”, are computers made from living cells – “alive” machines able to think, respond and learn more like humans do.
Unlike AI models that have to be taught to solve problems, consuming huge amounts of energy in the process, biocomputers learn and adapt naturally, just like the human brain – allowing them to solve problems far more efficiently.
It starts with scientists growing real neurons (brain cells) in a lab and connecting them to electronic devices. These neurons can then send and receive tiny electrical signals, just like they do in the brain. Scientists feed them information (for example, patterns or tasks), and the neurons learn to respond by communicating and forming new connections between cells.
Over time, the network of brain cells can recognise patterns, make simple decisions, or even control computer programs, acting like a living processor that learns and adapts.
Already there are scientists and companies working to advance this young field, including Australia’s Cortical Labs, with whom Professor Razi collaborates, and whose website homepage carries the slogan “Actual Intelligence” rather than “Artificial Intelligence”. In 2022, Cortical Labs announced it had managed to get lab-grown neurons to play the computer game Pong.
In the US, researchers at Johns Hopkins University are also building “mini-brains” to study how they process information, but in the context of drug development for neurological conditions such as Alzheimer's and autism.
It’s all very exciting. However – as Professor Razi points out – before this powerful science is unleashed, ethical oversight must be in place to ensure its benefits are realised responsibly.
We have time on our side to lay this foundation. At this stage, biocomputers are still in their infancy – far behind AI in capability, with current systems limited to learning simple tasks – but they hold long-term promise for more powerful, adaptive computing.
Deeply collaborative neuroscience
Finally, the panel discussion highlighted that neuroscience today can’t operate as a solo pursuit, but rather as a deeply collaborative effort that brings together experts from medicine, psychiatry, pharmacology, engineering, computer science, and even the arts and humanities (think ethics).
This is why Monash has established Monash Neuroscience, a University-wide collective of more than 500 neuroscience researchers, creating a culture of research excellence that enables them to collectively tackle complex challenges – and, ultimately, save and transform lives.