‘What Happens Next?’: Can We Create a Better Reality?
There’s no doubt that technological advancements such as artificial intelligence and robotics are reshaping our reality. While many of us may feel a bit trepidatious about this brave new world, leading experts are pointing to unprecedented opportunities for social good – as long as we get the governance and implementation right.
In the latest episode of Monash University’s What Happens Next? podcast, host Dr Susan Carland explores the transformative potential of emerging technologies. Following last week’s examination of the challenges and risks inherent in AI, robotics and other new tech, this episode shifts focus to how these powerful tools could help us build a more equitable future.
Dr Carland’s joined by world-leading experts in AI, robotics, philosophy and Indigenous technology innovation to understand how we might harness these technologies for social good.
Economic promise meets social impact
The transformative potential of AI extends far beyond efficiency gains. According to futurist Dr Ben Hamer, host of the ThinkerTank podcast, recent Australian government research suggests AI could contribute up to $600 billion annually to the national economy. But the real promise lies in how these technologies could help solve some of humanity’s most pressing challenges.
Professor Joanna Batstone, Director of the Monash Data Futures Institute, highlights groundbreaking applications in healthcare and climate action. From accelerating drug discovery to monitoring global bushfire impacts, AI is already demonstrating its capacity to tackle complex societal challenges.
Listen: Will AI Cut Us Off From Reality?
Reshaping tomorrow’s workforce
Professor Batstone says the future of work is undergoing a fundamental transformation. While AI will automate many labour-intensive tasks, the focus is shifting to how it can enhance rather than replace human capabilities. Educational institutions are adapting accordingly – Monash, for example, has taken a clear stance on preparing students for an AI-integrated future, teaching responsible AI use, and helping students understand both the capabilities and limitations of these technologies.
Even the nature of technical education is evolving. “We're moving to a world of AI development that is no-code or low-code,” she says, suggesting that future developers may need different skill sets than traditional programming. Nevertheless, fundamental human attributes – logic, reasoning and problem-solving – will remain crucial for the workers of tomorrow.
“We can't predict the future with AI, but one thing I'm very sure of is, AI is not going to go away. And so we have to look at how do we optimise for benefit? How do we optimise for social good and think through some of the risks and challenges that inevitably we'll have to tackle.” – Professor Joanna Batstone
Revolutionising healthcare and agriculture
Professor Geoff Webb, from Monash's Faculty of Information Technology, points to AI’s life-changing potential in developing regions. He cites a recent innovation in Kenya where an AI-powered crop monitoring system, costing just $3 monthly, helps farmers combat crop diseases and pests – a solution to a problem that currently destroys one-third of the country's crops.
Professor Chris Lawrence, Associate Dean, Indigenous, in the faculties of Information Technology and Engineering, emphasises the importance of embedding Indigenous knowledge systems into emerging technologies. This “coding for culture” approach ensures new technology truly serves its intended users.
This philosophy is now being integrated into Monash’s IT and engineering curricula. All students, including non-Indigenous ones, will graduate with an Indigenous graduate attribute, understanding how to work with Indigenous people and how to address social, emotional and wellbeing issues through computer science and engineering. Importantly, this isn’t about adding token content – all materials are peer-reviewed and specifically aligned with the technical subjects students are studying.
Read more: Star man: From a childhood dream to an Indigenous academy shooting for space
The future of robotics and human connection
Dr Sue Keay, chair of Robotics Australia Group, envisions a future where robots adapt to human environments rather than the reverse. She explains that while humanoid robots capable of complex household tasks such as washing dishes might still be some way off, the first generation of home robots could provide valuable services, particularly in elder care.
“For a lot of people, having interactions with a robot may be more comfortable than having interactions with a human,” Dr Keay notes, explaining how robots could offer non-judgmental support and monitoring.
She envisions robots serving as personal assistants, helping with tasks such as picking up items before vacuum cleaning, and providing peace of mind for families with elderly relatives by monitoring for falls or other emergencies. Together, these capabilities could revolutionise home care and support services.
Building ethical guardrails
While the potential benefits are immense, experts acknowledge the need for robust governance frameworks. Professor Batstone stresses that alongside legislation, there must be broader societal engagement in conversations about responsible AI use and development.
Professor Webb argues for specific legislation targeting potential misuses, such as preventing the creation of libellous deepfake content. This approach would help protect against nefarious uses while allowing beneficial applications to flourish.
Strengthening democracy in the digital age
Associate professors Stephanie Collins and Ben Wellings offer a nuanced perspective on AI’s role in democratic processes. Rather than viewing AI solely as a threat to electoral integrity, they suggest it could enhance democratic participation when properly implemented. Associate Professor Collins argues that future AI systems might even help facilitate more informed and ethical decision-making in governance, provided they’re developed with appropriate oversight and transparency.
The key, according to both experts, lies in maintaining diverse information sources and encouraging genuine dialogue across political divides – something technology could help facilitate rather than hinder.
Read more: ChatGPT: Old AI problems in a new guise, new problems in disguise
The path forward
“We can't predict the future with AI, but one thing I'm very sure of is, AI is not going to go away,” says Professor Batstone. “We have to look at how we optimise for benefit and social good while thinking through the risks and challenges.”
The experts agree that success lies in balancing innovation with responsible development. This includes:
- Implementing appropriate legislation and governance frameworks
- Ensuring inclusive technology design
- Maintaining human judgment and oversight
- Fostering critical thinking skills
- Supporting diverse perspectives in technology development
With proper guidance and purposeful implementation, AI and emerging technologies could help create a better reality for all.
Want to be part of this positive technological future? Monash’s AiLECS Lab is seeking submissions for its “My Pictures Matter” campaign, which helps train AI systems to combat child exploitation while protecting law enforcement officers from exposure to harmful content.
Don’t miss a moment of season nine – subscribe now on your favourite podcast app.
Already a subscriber? You can help other listeners find the show by giving What Happens Next? a rating and review.
Transcript
Joanna Batstone: We can’t predict the future with AI, but one thing I’m very sure of is AI is not going to go away. And so, we have to look at how do we optimise for benefit, how do we optimise for social good and think through some of the risks and challenges that inevitably we’ll have to tackle.
Sue Keay: For a lot of people, having interactions with a robot may be more comfortable than having interactions with a human.
Ben Hamer: And I feel like if we get it right and if we use it in the way it’s intended and we get on board with it, we can find a lot more meaning and purpose in our work.
[Music]
Susan Carland: Welcome back to What Happens Next?, the podcast that examines some of the biggest challenges facing our world and asks the experts, what will happen if we don’t change? And what can we do to create a better future? I’m your host, Dr Susan Carland.
In our last episode, we explored how emerging technologies such as AI, deepfakes and humanoid robots are blurring the lines between reality and fiction. We discussed the challenges in distinguishing truth from falsehood in our digital age, the psychological impacts of replacing human connection with technological companions and the potentially significant implications for world politics. While these developments raise legitimate concerns, they also open doors to exciting possibilities.
Today we’ll shift our focus to a more optimistic view. After all, every technological revolution has brought both challenges and opportunities, so let’s explore how these same innovations might just help us build a brighter, more connected future. Keep listening to find out what happens next.
Joanna Batstone: Hello, I’m Joanna Batstone. I’m the director of the Monash Data Futures Institute and I’m a professor of practice in the Faculty of IT. I’m a technologist by background and I’ve spent many years looking at the intersection of artificial intelligence and data science to be able to create a better world.
Susan Carland: Joanna, welcome to the podcast.
Joanna Batstone: Thank you, it’s great to be here.
Susan Carland: I want to start by asking you about the Data Futures Institute that you head. Tell us about the work you do there and particularly what are you trying to do with our understanding about AI?
Joanna Batstone: Thanks. So, the Data Futures Institute is Monash University’s Artificial Intelligence Institute. We’re focused on all things with respect to AI and data science.
From a social good perspective, one of the areas that we look at in particular is the impact of AI in health sciences and a lot of different medical use cases, where we can really look at how AI can transform both the research side of medical sciences, but also the experience of the clinician. We’re seeing so many applications – AI for opportunities in drug discovery, AI for identifying patients who would be eligible to enrol in a clinical trial – and conversations around AI and ethics and how to think about clinicians’ use of AI in a clinical setting.
The opportunities to transform healthcare for me are very, very exciting. Another area of interest for us in particular is around climate and sustainable development. How can we look at, for example, the impact of bushfire smoke on global planetary health and what are some of the interventions we could take as we understand how that bushfire smoke moves around the world?
And then also and very relevant for today’s conversations, is the work that we do around ethics and governance and policy. What does it mean to think about AI in an ethical way? What regulations and guidelines should we be thinking about in order to enable responsible and safe AI? So, those are three of the main focus areas for us within the institute.
Geoff Webb: There are many, many, many incredibly positive potential applications of AI.
Susan Carland: Geoff Webb is a professor of data science and artificial intelligence in Monash’s Faculty of Information Technology. He also believes AI has enormous potential to help us untangle some of the world’s wicked problems.
Geoff Webb: So, I’m about to go to Africa and I’ve been looking at news about AI in Africa and the Africa Prize for Engineering has just been won by an amazing young woman engineer who has developed an AI system for monitoring crops. So in Kenya, one-third of all crops are lost to disease and pests and the system, for just $3 a month, is able to observe the crop. And as soon as an infestation turns up, it sends a text message to the farmer saying not only what the issue is, but also what can be done to rectify it.
Susan Carland: That’s lifesaving for many people.
Geoff Webb: Absolutely extraordinary. Absolutely extraordinary. Many applications in medicines, drug discovery, optimisation of processes so they’re more efficient, innumerable… It’s limited really only by our imagination, as are the potential misuses. And we’ve got an amazing centre, the AiLECS Lab funded by the federal police, which uses AI to scan hard discs for images of child abuse, thereby helping to detect people who are doing terrible things and to help protect police officers from having to be exposed to –
Susan Carland: Yeah, a human having to watch that stuff. Yeah.
Geoff Webb: – terrible, terrible things.
Susan Carland: I mean, that seems like an obviously brilliant use of AI, and I guess it’s like with any technology, isn’t it? There are going to be terrible, society-destroying uses and society-helping uses. We focus a lot on the negative, though, when we talk about AI. Why do you think that is?
Geoff Webb: It’s a two-edged sword. I feel it’s going to result in many great developments and also many terrible wrongs.
Susan Carland: Double-edged sword though it may be, this technology certainly offers promising possibilities for the economy, the future of work and even human connection.
Futurist Ben Hamer is the host of the ThinkerTank podcast. What upcoming tech advancements are you most excited about?
Ben Hamer: Well, for me, I think it’s artificial intelligence and generative AI. And I know that’s not up and coming, that’s here and now, but I feel like most people still think about it as being a gimmick. I think a lot of people still use it as a glorified search engine, or I’ve used it before to be like, “Hey, I’ve got these four ingredients in my fridge, what can I make for dinner?” And it’s brilliant for that, but there is so much to its untapped potential.
And the Australian government came out earlier this year and did a study that found that if we get it right, AI could add up to $600 billion every year to our economy. So, there’s a massive upside, a lot of potential and I feel like if we get it right and if we use it in the way it’s intended and we get on board with it, we can find a lot more meaning and purpose in our work, because we’re outsourcing a lot of that boring, mundane, repetitive type work as well.
But then we can also get the benefits that come from having things like our own personal tutor if we’re at school, or having our own career coach in the workplace, an AI that actually gets to know us, not in an intrusive way, but in a way that can actually help us be better as humans.
And similarly, the stuff we spoke about with AI companions earlier, as much as there are some red flags about it, I do also think that there are a lot of people who are highly introverted or feel marginalised as it is, who could actually benefit from the technology as well. So, I do think AI and particularly generative AI, has a lot of upside.
And just one specific example of one of the latest advancements in it is ChatGPT released their latest version, GPT-4o, and that has real-time language translation. So, you and I could be having this conversation now and we could be talking in different languages but having a real-time, natural, free-flowing conversation, and to think that language barriers will be removed in all of our lifetimes with this kind of technology, it’s really quite powerful.
Susan Carland: It is exciting to think about. It’s hard to believe that it could happen in our lifetime.
Sue Keay: My name is Sue Keay and I am the chair of Robotics Australia Group and also a director of Future Work Group and really passionate about the difference that robotics technology is likely to have on workplaces and also on society.
Susan Carland: I was in an international airport not long ago and there was a robot that was meant to look like a human, but it was basically just on wheels zipping around. It was some sort of security apparatus, I think, just to make sure people weren’t sleeping in the chairs and anything like that.
But I wonder how does the bipedal humanoid-type robot… Why would they be better than one that just drifts around on wheels?
Sue Keay: You don’t have to change the environment to suit the robot, which is currently how a lot of our manufacturing plants operate. And in fact, Amazon fulfilment centres, they are all designed to maximise the, I guess talent of the robot, whereas a humanoid robot is something that can operate in an environment that is built for humans.
And that’s why there’s a lot of interest in how these robots might have an impact not only in the workplace but also in our homes, because we are not going to have to change the environment to suit the robot – the robot will be able to operate in the environment that we occupy currently.
Susan Carland: How close are we to having these humanoid robots doing the dishes for us at home?
Sue Keay: Yeah. Well, not soon enough. I keep waiting, but housekeeping tasks seem to be difficult things for robots to solve. The example that you gave about washing dishes is particularly difficult, mixing electronics and water is problematic, but I think for them to be truly useful in the home, we need to have those expectations of the robots that we bring into our homes.
I suspect that what will happen with the first generation of robots is that they will be not that much more sophisticated than an Amazon Alexa or a Google Home on wheels in your house in the first instance. But they might give you a lot of peace of mind in terms of if you have an elderly parent, for example, being able to make sure they haven’t had a fall, being able to, I guess surveil the house and make sure that everything’s okay. At the end of the day, I think people find robots far less threatening to have in their environment than if you were to place cameras everywhere in someone’s home to achieve the same end.
I think perhaps in the first instance they will do quite simple tasks, but with the investment that we have seen and with some of the advancements in AI, hopefully, they’ll start to be able to take on much more complex tasks.
And we’re also seeing a lot of developments in the field of tactile sensing, which is how robots are able to, in particular, use their hands to grasp things, to make sure they’re actually lifting things correctly, which will then translate into them being able to do much more useful tasks like folding linen and hopefully being able to take away some of the simple tasks. Or perhaps even just as I find myself doing, picking up the socks before you run your robot vacuum cleaner over the room –
Susan Carland: [Laughter] Yeah.
Sue Keay: – because that seems to completely bamboozle all vacuum cleaner robots.
Susan Carland: Yeah.
Sue Keay: So, even if they’re able to just do simple things like picking up, they would probably be a useful addition, as long as we can get them down to the price point that makes them attractive for people to bring into their homes. And then they will probably serve, I guess a function a bit like if we had our own personal butler.
Susan Carland: Can you imagine, a personal butler? What a dream. I have a robot vacuum and my children often say that it is my third child because I love it so much, and I’m thinking about what you’re saying about how the early models will be… [Fades out into music]
Susan Carland: While the right robot may make our lives easier, the potential impact of AI extends far beyond household chores. So, how do we make sure society uses this powerful technology appropriately with the greater good in mind?
Here’s Joanna. How can we balance the good and bad attributes? Air quotes good, air quotes bad, attributes of AI to ensure that we do build a better tomorrow, a more equitable, safer, cleaner, healthier tomorrow?
Joanna Batstone: Yes, the good and the bad. And again, in the context of the work that we do in the institute, we’ve been very much focused on AI for social good. And so a lot of our research is around looking at applications where we can demonstrate value for societal benefit, but we still need to recognise that there are bad actors in this world and there are nefarious people and nefarious algorithms at work to be able to disrupt the AI technologies that we see. And we see a lot of this in the context of the spread of misinformation and disinformation. So, getting that balance right, being able to be knowledgeable and be able to detect bad actors becomes an important part of the research agenda as well.
And so, being able to react to get that balance of good versus bad, you can’t just ignore the fact that there are aspects or use cases of AI that make us very uncomfortable. Why is this making us uncomfortable? What is problematic about this scenario? How do we get to an environment or a scenario where we are looking at a world where our aspirations really are to do no harm and hence what technology guardrails do we need to have in place to balance off that world of good versus evil?
Because there will be evil actors in this journey of AI, but we need to be able to get the appropriate technology in place to catch them.
Susan Carland: Geoff says one of those guardrails must be appropriate legislation.
Geoff Webb: We absolutely need legislation around AI. We already have laws which govern many aspects of acting nefariously, and we just need to make sure that they’re equally applicable to online and high-tech forms of nefariousness as they are to old-fashioned ones.
So, the legislation we can possibly do is legislation preventing specific misuses of it. So, saying you can’t use the systems to generate images of a person saying something which it would be libellous to claim they’d said, for example. It would be a very simple piece of legislation, which would seem like a very good idea.
Susan Carland: Many governments were caught on the back foot when ChatGPT took the world by storm. Dr Ben Hamer believes they need to catch up quickly.
Ben Hamer: I think we’re also seeing that we need government to try and get ahead of the curveball as well, because when we saw the likes of generative AI and ChatGPT as an example of one technology, it came about so quickly and its take-up was so significant that the regulators were falling behind and didn’t know how to actually play catch-up and how to get in front of it.
And so, it’s almost like the technology got away from us, which is why you saw some big names coming out and saying, “Hey, can we put a ban for six months on developing this tech?” Which absolutely is not going to happen, but it just goes to show if that happened with AI, and we’re seeing the rise of extended reality, of deepfake technology and a whole heap of others. It is something that we need government to get on top of and they need to be partnering closely with industry on it as well.
I think as far as news is concerned, we will see a lot more around news websites, which will be rated on how accurate their information and reporting is, and we’ll go to particular publications and it will have that rating and it might have a content warning for us. “FYI, this particular outlet is known for potential misreporting in the past,” or whatever it might be. But beyond the realm of news, I think we’re going to find that we’re in some murky waters.
Susan Carland: As we discussed last week, the media we consume plays a major role in shaping our understanding of reality. Should media and tech companies be answerable for the spread of misinformation? Here are associate professors Stephanie Collins and Ben Wellings, part of the team behind Monash’s Politics, Philosophy and Economics degree.
Stephanie Collins: I think the other thing would be holding for-profit entities to account. So Meta, X, the corporations that really benefit from the propagation of fake news because they keep people on their website, keep people scrolling and liking and engaging. I think probably as a society, as a globe, we’re not really doing enough to hold those kinds of entities to account.
Anthony Albanese: Meta are showing how out of touch they are and how arrogant they are.
Stephanie Collins: I don’t know what that looks like.
Anthony Albanese: What we need is for social media to acknowledge that it has a social responsibility –
Stephanie Collins: Because with things like multinational entities, no one government can rein them in. But I would say politically that’s important, too.

Ben Wellings: There have always, not always, but for a long time there have been transnational organisations that mediate our news, like newspapers. And I think that just technology got a little bit ahead of our critical faculties, maybe, in the last decade.
Susan Carland: But as a philosopher, Stephanie is quick to point out that simply because information or an idea originates from a non-human source, that doesn’t mean we should necessarily distrust it more than we would if it came from a human. Remember Steve AI, the artificial intelligence that came dead last in the UK election earlier this year? He wasn’t up to snuff for the constituents of Brighton, but that doesn’t mean we should automatically write off a future iteration.
Stephanie Collins: I’m wanting to push back on the thought that just because it comes from a bot or comes from AI, it’s therefore morally untrustful. I think particularly when it comes to moral questions. If Ben tells me what I should or shouldn’t do with my life, I always take that with a grain of salt. Sorry, Ben. I always question it.
We should of course have the same attitude to AI – I’m not saying we should just blindly follow them – but maybe when it comes to the moral questions, we take them with the same grain of salt with which we take other humans, and have the same level of scepticism that we have towards our current MPs.
And I guess the people of Brighton just don’t have that trust and I’m with them on our current… I’m sure Steve AI probably doesn’t engender the kind of rights to trust that I’m suggesting maybe future AI will. But if there came to be an AI that did give compelling answers to moral questions, why not have it in the mix, is what I’m saying.
Susan Carland: So, how do we build ethical technology that we can trust to consider moral questions in its decision-making? Part of modern educators’ job is to teach those skills to tomorrow’s developers – and ethical tech is inclusive tech.
Chris Lawrence: Hi, my name is Professor Chris Lawrence and I’m the Associate Dean, Indigenous in the Faculty of Information Technology and Engineering at Monash University. I’m a Noongar from Western Australia.
Susan Carland: Chris is driven by a desire to use technology as a tool to help end the entrenched disadvantage, political exclusion, intergenerational trauma and institutional racism experienced by Aboriginal and Torres Strait Islander people.
Chris Lawrence: Well, I like to say that Indigenous people are the first hunters, technologists, engineers, architects, medicine people, mathematicians. And if anybody can pick up a piece of wood – a boomerang – throw it and make it come back, they have got to be innovative.
That kind of knowledge has been informing STEM research for thousands of years, and it has been evolving. The boomerang is a propeller – it’s about speed and it’s about power and it’s about design, and these are all the kinds of principles that you apply when you’re designing software and technology for its purpose. So, it’s a flying machine.
That helps inform my research when we’re designing software for Indigenous communities, because we need to make the software about them, we need to embed what we call this coding for culture into the software. So, that’s about being able to include Indigenous people’s identities – their languages, their relationship structures, their totems, their moieties – and being able to make it as user-friendly and Indigenise it as much as possible so that you can actually get Indigenous people to use the software, which is what you want to do for all users. That’s what the gold standard is, because if you want the software to be successful you’ve got to target the group that you’re actually trying to make it for.
Susan Carland: Can you provide us with a brief overview of how Indigenous knowledge systems can inform new and emerging technologies?
Chris Lawrence: Well, first you have to embed it into the curriculum, and that’s the challenge. And so, getting the curriculum to include Indigenous content and case studies that are relevant to those computer science subjects and units that the students are studying. So, what we’re doing in the Faculty of Information Technology and Engineering is we’re actually embedding Indigenous content in the curriculum.
So, we’re creating this Indigenous graduate attribute, which means that all students, including Indigenous students, will have an Indigenous graduate attribute where they will understand how to … Well one, who are Indigenous people? How do you work with Indigenous people? How do you address those social-emotional well-being issues, or health issues, or the engineering issues through computer sciences and software engineering, or mechanical engineering, civil engineering, whatever it is?
And so, we’re starting to change that and making sure that all students will be aware of those things and understand what they are, but they must be relevant to those subjects and units that they’re studying. So I’m not just throwing any Indigenous content into the curriculum – it’s going to be peer-reviewed, published information that is actually aligned and critical to those kinds of pedagogies that are embedded in the curriculum in our courses that we offer.
Susan Carland: Teaching future technologists to code for culture isn’t the only way formal education is changing to embrace the potential of AI and other advancements and guide our way to a future where it’s used ethically. Here’s Joanna again.
Joanna Batstone: The future of work has been probably one of the hottest debates over the last five years in the context of artificial intelligence, and there have been a number of surveys and studies that look at what styles of jobs will be impacted. There’s no question that the automation opportunities with AI will take away many of those very labour-intensive tasks and give us the opportunity to have them replaced with AI.
But increasingly with generative AI, I think what we’re seeing is what’s often described as the knowledge worker cohort of the workforce potentially impacted as well, that those jobs change as these new technologies come in place. And so, that dynamic around what percentage of jobs are impacted changes as we look at the different styles of work that are impacted. In terms of skills, clearly, the workforce is changing. One of the interesting skill discussions we have is around developers, developers of AI systems: Will they be writing code in the future? Are we moving to a world of AI development that is no-code or low-code? So, the skillsets that are being taught or being delivered also change when we’re entering a world of, well, maybe it’s no longer about writing colons and semicolons and brackets in code, it’s very much a different style of code development enabled by some of these generative AI tools. So, that changes the world of development.
But in terms of skills, I think there are some fundamental skills that remain the same and it comes back to those human attributes of logic and reasoning and problem-solving and understanding the context in which I’m trying to deploy the technology and those skills will remain the same. But the technical skills I think will shift and the way that workforces change will shift.
And I think we’ll see AI really transforming every segment of the population, whether you’re an HR professional, or a finance professional, or an academic in university, your job will change.
Susan Carland: Are you and I going to be out of a job? I think that’s a real question I need answered – will my students be taught by a generative AI lecturer who’s probably far more charismatic and interesting than me, and will the AI do my research as well?
Joanna Batstone: I don’t think the AI will put you out of a job, but I think you’ll find your job is changed and hopefully enhanced significantly with your AI assistant alongside you.
And so, one of the interesting things about teaching and education, and one of the things that we’ve done here at Monash, is really taken a very clear stance around we have to educate our students for the world of AI. We have to teach what does responsible AI mean? We have to educate around what does plagiarism mean in a world of AI, and we have to be able to enable our students to experiment with assignments written with generative AI and assignments written without, and to be able to develop the intuition and complex reasoning skills to do that critical assessment and really understand how this world is changing.
So, the short answer is no, I don’t think we’ll be out of a job, but I do think we will be using AI technologies to change the way that we work in the future.
Susan Carland: If it hasn’t already happened, chances are good that your job will soon incorporate these technologies in your own training and continuing education. Here’s Ben Hamer.
Ben Hamer: Last year I was at South by Southwest in Austin in Texas and Johnson & Johnson had this VR exhibition you could go into. And so, I put this headset on and it’s the same VR simulation they use to train some of their surgeons on a particular new type of knee surgery. So, I was essentially conducting this knee surgery as a surgeon would. It was the most confronting thing – I had to drill into this knee, which is obviously a virtual knee, but still very uncomfortable for someone like me who can’t watch medical TV shows.
But we’re seeing people do things like that, we’re seeing train drivers – before a new type of train with a new kind of technology arrives, before that fleet hits the station – able to learn how to drive that train using virtual reality. So, a lot of people are using it in the workplace to learn different skills as well.
Susan Carland: Establishing a governmental approach, holding media and tech companies accountable for misinformation and reworking our educational approach for tomorrow’s workforce are obviously massive projects. Happily, some are already in the works.
While we’re advocating for progress in these major areas, Joanna says there are things we can do on an individual level to encourage responsible use of these new technologies and help our communities think critically about the nature of reality. How do we optimise for social good? Is it just legislation, is there more to it than that?
Joanna Batstone: I think there’s more to it than just legislation. The problem with legislation and rules is rules are often made to be broken. So, it’s not just about the rules and regulations, but it is around a societal desire for looking at how do we enable the best opportunities with AI? We know it’s going to transform the workplace – we see that already today. We know that it’s going to disrupt many industries as we know them today.
But for each one of us, I think there’s an individual responsibility, there’s a societal responsibility and that responsibility then means we have to be engaged in that conversation with government, with industry, with academic institutions around what does our own responsible use mean? How do we educate the community around us to understand enough to be able to trust the technology, trust that our governments are going to be using these technologies in appropriate ways? So, it’s not just about the rules, I think it’s very much around enabling that societal view of what technology means and how can we trust in the future that we are heading in the right direction.
Susan Carland: How do you see the difficulty in distinguishing reality from fabrication evolving for us? As AI just gets better and better, will we also get better at being able to detect the fakes?
Joanna Batstone: I think we as individuals and humans will get better, but one of the things that we’re seeing right now is a rapid proliferation of AI detection tools. Much as generative AI hit the headlines, it’s created a new opportunity for software developers to create new tools to be able to identify a fake image from a real image, or an AI-generated body of content versus a human-generated body of content. So, that tools generation is going to improve rapidly, but the tools are not infallible. They will make mistakes.
And so, the role of the human is very much to continue to think about leveraging our judgement, our ability to assess and reason with content and to be able to really look at leveraging those human intuition skills to be able to determine when is a piece of content AI-generated or not. And also to make the assessment of, is it okay if the content was AI-generated, or is this a problem when it’s generated? And so, that judgement, that human judgement role, I think will be incredibly important moving forward.
Susan Carland: Here’s Geoff.
Geoff Webb: Education is incredibly important in bringing the public up to speed in how to both make best use of these systems and also how to identify misuses. And that’s both something we need to pass on to our students as educators, but also we have a role in doing public education through mechanisms such as podcasts, for example.
Susan Carland: It is tricky, though, to educate people to identify things which are so believable. This isn’t just a problem of children who aren’t discerning enough, or older people who maybe don’t understand digital technology – these things trick everybody. With fake generative AI images or videos, I’ve had intelligent, educated people my age or a bit younger send me videos, and I’ve said, “I don’t think that’s real. I don’t think they said that.” But we just accept it. And I’m wondering how viable, especially as it’s only ever improving, how viable is it really for us to teach people to identify what is by design indistinguishable from truth?
Geoff Webb: I think it’s just a question of experience. It’s exactly the same with scam emails and scam phone calls. When you’re not exposed to them, when you’re not educated about them, you very easily fall victim to them. But with experience and with education, you can learn that you need to be extremely cautious.
Susan Carland: Turn up the critical faculties in our minds about everything we get sent on WhatsApp.
Geoff Webb: Absolutely.
Susan Carland: And when it comes to attuning your critical faculties to see through different political realities to the truth, Stephanie and Ben have some thoughts.
What advice would you have, from both of your professional expertise, for the average person at home who is observing all this? We’ve got big elections coming up around the world, we’ve just had big elections around the world, and they want to know: How do they know what they can trust is true that’s coming at them through the media – social media, mainstream media? What lens would you recommend they try to bring to what they’re encountering to have some sort of confidence in what they’re seeing or reading?
Stephanie Collins: I would say diversity of sources would probably be my number one. And this really goes back – you don’t even need to introduce technology to make this point – it goes back to standpoint theory, the idea that humans occupy different standpoints, we have different experiences that give us different insights into different aspects of society and politics. We should always be seeking to be informed by a wide array of standpoints, and I think that goes as much, if not more so, for media consumption as for personal relations. That would probably be my number one.
And so, I think for me, what comes out of all this is really the real importance of epistemic humility. So, just because my favourite politician says something doesn’t mean I arrogantly go around asserting that thing, because that person could be wrong – maybe non-culpably, maybe they’ve just got the wrong sources, or they’ve made a mistake that day or whatever. But it becomes then really important to listen to a wide range of voices to try and verify yourself what you can. I know we can’t go around checking all the facts in manifestos, as has been suggested, but where you can, try and do that. And just any belief that you hold, don’t hold it with 100% conviction. Yeah. So for me, the response to a lot of this stuff is humility.
Ben Wellings: Sometimes it’s not very comfortable to hear what the other side, if we want to put it that way, is thinking, but it is useful in terms of then working out whose interests are being served by the news and our reality coming to us in particular forms, and just maybe keeping those kinds of questions in mind when filtering political information.
Susan Carland: I think also – what do you think about just the need to talk to people in real life? Make sure you don’t have a group of friends who all think exactly what you do.
Ben Wellings: Yeah, this is really important, but it’s also not always easy, and I suppose I draw back on people I’ve known for quite some time and whose opinions may have diverged from mine for whatever reasons, but I do think it is important to do that. And even just casual conversations can be useful to just get a little sense of why it matters to people and how important it is.
Stephanie Collins: I mean, I think as well with the people that you disagree with, sometimes trying to reach agreement at least on some emotional level about what matters. So yeah, community matters. I don’t know, family matters. There’s going to be a level at which you can reach agreement, trying to get there with people and then seeing how it is that based on that we both got to these really different political positions, even that can be helpful.
Susan Carland: Yeah, agreed. The sign to me that I need to talk to a human being is when I hear about a group who has a totally different view to me politically and my first thought is, “But how could anyone think that?” That’s my sign that I need to go and speak to someone from that group and go, “Why do you think that? Because I literally can’t imagine how you got to that point, it makes no sense to me.”
But then like you said, it’s hard to find those people. But to be fair, that’s actually when social media can be really useful, because that can be when you find those people and go, “Okay, I’m going to find the guy who votes like this and this is his hobby and I’m going to follow him online and go, ‘Okay, now I can understand why you’re into hunting and why that’s really important to you and that actually makes sense to me now.’”
Stephanie Collins: I think it’s also important that not all relationships have to be political. I have some friends who have very different political opinions from me, from childhood mostly. We don’t really talk about that stuff, but I know that the people who have those views are humans, who I care about and who I love. And sometimes even that can get you a long way into being like, “Okay, maybe there’s some overlap in our ontologies, even if it looks like there’s not.”
Susan Carland: Yes, we don’t reduce everyone else to being either evil or stupid if they don’t agree with us. Whether we like it or not, we’re on the brink of a new era, one that will change the way we learn, work, vote and even think about the world.
It can feel frightening, but there’s also room for optimism. Here’s Professor Joanna Batstone.
Joanna Batstone: So I’m very much a glass-half-full person when it comes to AI. I mean, I’ve been in the world of AI for quite a long time now and so what I think gives me hope at the moment is the realisation of this is the time of AI, this really is the generation of AI.
Susan Carland: Thank you to all our guests on this series. Professor Joanna Batstone, Professor Geoff Webb, Professor Chris Lawrence, Associate Professor Ben Wellings, Associate Professor Stephanie Collins, Dr Ben Hamer and Dr Sue Keay. You can find links to their work in our show notes.
Geoff mentioned Monash University’s AiLECS Lab in this episode. Among other projects, the lab uses machine learning to combat child exploitation and you can help. One easy way to ensure AI is used as a force for good is to support its My Pictures Matter campaign, which we’ll also link in the show notes.
For more information about the topics we’ve covered in this series, straight from the researchers and leaders in this area, visit Monash Lens at lens.monash.edu. Thanks for joining us and see you next week with an all-new topic.
Listen to more What Happens Next? podcast episodes