‘What Happens Next?’: Can We Create a Better Reality?
By Dr Susan Carland
There’s no doubt that technological advancements such as artificial intelligence and robotics are reshaping our reality. While many of us may feel a bit trepidatious about this brave new world, leading experts are pointing to unprecedented opportunities for social good – as long as we get the governance and implementation right.
In the latest episode of Monash University’s What Happens Next? podcast, host Dr Susan Carland explores the transformative potential of emerging technologies. Following last week’s examination of the challenges and risks inherent with AI, robotics and other new tech, this episode shifts focus to examine how these powerful tools could help us build a more equitable future.
Dr Carland’s joined by world-leading experts in AI, robotics, philosophy and Indigenous technology innovation to understand how we might harness these technologies for social good.
Economic promise meets social impact
The transformative potential of AI extends far beyond efficiency gains. According to futurist Dr Ben Hamer, host of the ThinkerTank podcast, recent Australian government research suggests AI could contribute up to $600 billion annually to the national economy. But the real promise lies in how these technologies could help solve some of humanity’s most pressing challenges.
Professor Joanna Batstone, Director of the Monash Data Futures Institute, highlights groundbreaking applications in healthcare and climate action. From accelerating drug discovery to monitoring global bushfire impacts, AI is already demonstrating its capacity to tackle complex societal challenges.
Listen: Will AI Cut Us Off From Reality?
Reshaping tomorrow’s workforce
Professor Batstone says the future of work is undergoing a fundamental transformation. While AI will automate many labour-intensive tasks, the focus is shifting to how it can enhance rather than replace human capabilities. Educational institutions are adapting accordingly – Monash, for example, has taken a clear stance on preparing students for an AI-integrated future, teaching responsible AI use, and helping students understand both the capabilities and limitations of these technologies.
Even the nature of technical education is evolving. “We're moving to a world of AI development that is no-code or low-code,” she says, suggesting that future developers may need different skill sets than traditional programming. Nevertheless, fundamental human attributes – logic, reasoning and problem-solving – will remain crucial for the workers of tomorrow.
“We can't predict the future with AI, but one thing I'm very sure of is, AI is not going to go away. And so we have to look at how do we optimise for benefit? How do we optimise for social good and think through some of the risks and challenges that inevitably we'll have to tackle.” – Professor Joanna Batstone
Revolutionising healthcare and agriculture
Professor Geoff Webb, from Monash's Faculty of Information Technology, points to AI’s life-changing potential in developing regions. He cites a recent innovation in Kenya where an AI-powered crop monitoring system, costing just $3 monthly, helps farmers combat crop diseases and pests – a solution to a problem that currently destroys one-third of the country's crops.
Professor Chris Lawrence, Associate Dean, Indigenous, in the faculties of Information Technology and Engineering, emphasises the importance of embedding Indigenous knowledge systems into emerging technologies. This “coding for culture” approach ensures new technology truly serves its intended users.
This philosophy is now being integrated into Monash IT and engineering curriculum. All students, including non-Indigenous ones, will graduate with an Indigenous graduate attribute, understanding how to work with Indigenous people and address social, emotional and wellbeing issues through computer sciences and engineering. Importantly, this isn’t about adding token content – all materials are peer-reviewed and specifically aligned with the technical subjects students are studying.
Read more: Star man: From a childhood dream to an Indigenous academy shooting for space
The future of robotics and human connection
Dr Sue Keay, chair of Robotics Australia Group, envisions a future where robots adapt to human environments rather than the reverse. She explains that while humanoid robots capable of complex household tasks such as washing dishes might still be some way off, the first generation of home robots could provide valuable services, particularly in elder care.
“For a lot of people, having interactions with a robot may be more comfortable than having interactions with a human,” Dr Keay notes, explaining how robots could offer non-judgmental support and monitoring.
She envisions robots serving as personal assistants, helping with tasks such as picking up items before vacuuming, and giving families peace of mind by monitoring elderly relatives for falls or other emergencies. Such capabilities could revolutionise home care and support services.
Building ethical guardrails
While the potential benefits are immense, experts acknowledge the need for robust governance frameworks. Professor Batstone stresses that alongside legislation, there must be broader societal engagement in conversations about responsible AI use and development.
Professor Webb argues for specific legislation targeting potential misuses, such as preventing the creation of libellous deepfake content. This approach would help protect against nefarious uses while allowing beneficial applications to flourish.
Strengthening democracy in the digital age
Associate professors Stephanie Collins and Ben Wellings offer a nuanced perspective on AI’s role in democratic processes. Rather than viewing AI solely as a threat to electoral integrity, they suggest it could enhance democratic participation when properly implemented.

Associate Professor Collins argues that future AI systems might even help facilitate more informed and ethical decision-making in governance, provided they’re developed with appropriate oversight and transparency.
The key, according to both experts, lies in maintaining diverse information sources and encouraging genuine dialogue across political divides – something technology could help facilitate rather than hinder.
Read more: ChatGPT: Old AI problems in a new guise, new problems in disguise
The path forward
“We can't predict the future with AI, but one thing I'm very sure of is, AI is not going to go away,” says Professor Batstone. “We have to look at how we optimise for benefit and social good while thinking through the risks and challenges.”
The experts agree that success lies in balancing innovation with responsible development. This includes:
- Implementing appropriate legislation and governance frameworks
- Ensuring inclusive technology design
- Maintaining human judgment and oversight
- Fostering critical thinking skills
- Supporting diverse perspectives in technology development
With proper guidance and purposeful implementation, AI and emerging technologies could help create a better reality for all.
Want to be part of this positive technological future? Monash’s AiLECS Lab is seeking submissions for its “My Pictures Matter” campaign, which helps train AI systems to combat child exploitation while protecting law enforcement officers from exposure to harmful content.
Don’t miss a moment of season nine – subscribe now on your favourite podcast app.
Already a subscriber? You can help other listeners find the show by giving What Happens Next? a rating and review.
Listen to more What Happens Next? podcast episodes
About the Authors
-
Susan Carland
Director, Bachelor of Global Studies, and Lecturer, School of Language, Literature, Cultures and Linguistics
Susan's research and teaching specialties focus on gender, sociology, contemporary Australia, terrorism, and Islam in the modern world. Susan hosted the “Assumptions” series on ABC’s Radio National, and was named one of the 20 Most Influential Australian Female Voices in 2012 by The Age.
-
Ben Wellings
Associate Professor, Politics and International Relations, Faculty of Arts
Ben Wellings is an expert on Brexit and the politics of nationalism and Euroscepticism in contemporary Europe. He writes regularly for The Conversation, the Globe and Mail and The Drum on Brexit, English nationalism, Euroscepticism and the politics of the European Union. He comments regularly about Brexit on television for Sky News, the ABC and on international TV and radio.
-
Chris Lawrence
Professor, Department of Human Centred Computing, Dean of Indigenous Engagement, Faculty of IT
Chris is an Aboriginal health and wellbeing researcher. He has a background in education and postgraduate research degrees with a Master of Applied Epidemiology, and a PhD in Indigenous health and lifestyle choices. He’s been a chief investigator on many research grants, including an NHMRC tripartite study exploring Indigenous resilience in Australia, Canada and New Zealand.
-
Ben Hamer
Dr Ben Hamer is an accredited futurist who was recently named the number one thought leader for the Future of Work in the Asia-Pacific by Onalytica. Ben has undertaken work and research around the world, including leading critical projects at the World Economic Forum and serving as a Visiting Scholar at Yale University. He is an Adjunct Professor at Edith Cowan University and a board member of the Australian HR Institute, where he was appointed the youngest Non-Executive Director in the organisation’s history, and he also sits on the board of Netball NSW. A sought-after media commentator and keynote speaker on future trends, Ben is the host of The ThinkerTank Podcast.
-
Stephanie Collins
Steph's research primarily focuses on theories of group agency and group responsibility. She is interested in whether groups—such as states, corporations, and NGOs—can truly be considered 'agents' capable of 'doing wrong.' Are the wrongs committed by group agents reducible to the wrongs of their individual members, or is group wrongdoing something 'greater than the sum' of individual wrongdoing? Can groups possess the wide range of moral obligations that philosophers assign to individuals, or are group obligations 'special' in some way—perhaps constrained by their goals or constitutions? Her aim is to explore these questions in ways that contribute to discussions in politics, law, and business ethics.
-
Joanna Batstone
Professor Joanna L. Batstone is the inaugural Director of the Monash Data Futures Institute and is responsible for bringing together data science and AI capabilities from across the University. Joanna will continue to establish a digital ecosystem which fosters collaborative interdisciplinary research and promotes lasting industry engagements. An exceptional thought leader in the development and application of AI and data analytics, Joanna is passionate about the benefits of AI in driving lasting and transformative change for social good.
-
Geoff Webb
Director, Monash University Centre for Data Science and Professor of Information Technology Research, School of Information Technology
Geoff is a world-renowned data scientist whose research investigates how to use data to best support effective evidence-based decision making and derive useful knowledge and insight. This spans artificial intelligence, machine learning, data mining, data analytics and big data. Geoff is the author of the Magnum Opus commercial data mining software package, a system that embodies many of his research contributions in the area of data mining and has contributed many components to the popular Weka machine learning workbench. He is a technical adviser to Froomle, a data-science-driven recommendation engine.
-
Sue Keay
Acknowledged as one of Queensland's most influential people, Dr Keay is an experienced leader with high credibility and respect in the digital domain. She consults, advises and speaks on how organisations and individuals can embrace technological change. She is a partner at Future Work Group, a fellow of the Australian Academy of Technology and Engineering (ATSE), a member of the prestigious Kingston AI Group, founder and chair of Robotics Australia Group, and an Adjunct Professor at the QUT Centre for Robotics. Dr Keay holds a range of board and advisory roles in the robotics, AI and emerging technology space, including for Australia's National Robotics Strategy.
Other stories you might like
-
Episode 94: Will AI Cut Us Off from Reality?
In the season nine premiere of Monash’s podcast, learn how AI, deepfakes and humanoid robots are transforming human interaction and our perception of reality.
-
Deepfakes: When you can no longer trust your eyes
Deepfakes are threatening privacy and security, and while detection methods using deep learning aim to combat the problem, there’s a long way to go.
-
So sue me: Who should be held liable when AI makes mistakes?
AI is more likely to make errors than humans, because it relies on often incomplete or inaccurate data. The big question is, who’s accountable if it does? The user, programmer, owner, or AI itself?
-
Are you being served? How humans are warming to robots’ ‘emotions’
Research shows humans are becoming less sceptical of robots, meaning the imagined future is happening in real time.