‘What Happens Next?’: How Do We Teach Machines to Think Responsibly?
If humans are programming artificial intelligence, are we stuck with the human biases that inadvertently work their way into AI systems?
The guests on this episode of What Happens Next? don’t think so.
Dr Susan Carland is again joined by communications and media studies expert Professor Mark Andrejevic, human-computer interaction scholar Yolande Strengers, Monash University Interim Dean of Information Technology Ann Nicholson, and Microsoft Australia’s former chief digital advisor, Rita Arrigo.
AI is undoubtedly playing a beneficial role in society, helping us respond more effectively to medical emergencies, improving surgical procedures, and even making our buildings smarter.
There are already methods in place ensuring society can hold AI and its designers accountable. Established ethics committees have been joined by new legislation and increasing awareness among policymakers.
Accessible courses and casual data science meetups enable any of us to join the conversation, and help shape emerging technologies, the data they use, and how they use it. As we work towards a more ethical society, we can ensure our AI is ethical, too.
“I think we could imagine cases where it's demonstrated over time that the machine actually makes a better decision with a better social outcome.”
Professor Mark Andrejevic
If you’re enjoying the show, don’t forget to subscribe on your favourite podcast app, and rate or review What Happens Next? to help listeners like yourself discover it.
Transcript
Dr Susan Carland:
Welcome to another episode of ‘What Happens Next?’ I'm Dr Susan Carland. In our first episode, we spoke about the potential problems with biases in artificial intelligence, or AI. What are the consequences in the future if there are no regulations in place to prevent them?
Today, we take a more positive approach and look at the potential AI has to address bias, and how it could ultimately play a role in helping us achieve positive outcomes.
Rita Arrigo was most recently Chief Digital Advisor at Microsoft Australia. As an AI ambassador, she has driven many AI-for-good initiatives, spanning smart, inclusive cities; diversity; sustainability; and humanitarian work. I asked Rita where she thinks AI is headed in the future. It turns out, she's quite excited. Rita, lovely to have you here today.
Rita Arrigo:
Thanks, Dr Susan Carland. It's wonderful to be here.
Dr Susan Carland:
Tell me what you see as the role of AI, or artificial intelligence, for societies going into the future.
Rita Arrigo:
Well, I think there's really exciting stuff happening in society at the moment. I think we're really moving from being consumers of stuff to entering this idea of societal innovation, where we want to solve complex problems, address some of these huge challenges like the environment, sustainability, accessibility, inclusion. I think this is a real vital role for AI and data science, because it will allow us to augment the world and see the world in a different way as we start to find these new ways that we want to innovate society.
Dr Susan Carland:
So you see AI as helping us solve some of those big problems.
Rita Arrigo:
Yeah, absolutely.
Dr Susan Carland:
Do you have a fear that AI might try to kill us in our sleep like Terminator?
Rita Arrigo:
No, but I love that Hollywood loves that, you know? Hollywood's there to make horror movies and stuff, and I love that whole dystopian model they create, but we haven't seen it. I've been in technology for like 40 years, pretty much, and I've read about the Luddites who tried to break all the machinery when the industrial age was coming. But did it really affect what was happening? Hasn't progress gone on regardless of those kinds of fears? Maybe they are just part of the drama of Hollywood, potentially. I don't know.
Dr Susan Carland:
Are any of the fears justified? Do you think we have enough checks and balances or policies or proper ring fencing around any of, perhaps, the ethical concerns people might have about AI?
Rita Arrigo:
I think we do. I think we're a really risk-averse culture in the way that we implement technology... you know, it takes 50 change managers, a cultural program, building the culture and the diversity in the organisation, and the training. I don't think we just switch on technology and go, "Wow, what happened then?" There's a lot of planning and journey creation around it, so yeah, I don't think that that's an issue.
Dr Susan Carland:
I want you to look into the crystal ball. If we continue down the same path that we're on with our approach to AI – imagine we don't change anything in our approach to AI – what does our future look like? What does our society look like? What's good and what's bad?
Rita Arrigo:
Well, I think there'll be some really great things. I think a lot of the frustrations that we have with customer service, for example, like how many times do you want to tell someone your name, address, date of birth? “Excuse me, I'm over this,” right? We want more intelligent customer service and experiences, and I think that there's going to be some real advantages in that.
I think healthcare is one of the areas that really needs AI, and we're already seeing things like quantum-inspired algorithms to improve the way we do scans. We're seeing much better real-time prediction in hospitals with algorithms to assist clinicians to know what's going on. You know, all those crazy beeping machines actually being turned into messages that people can understand. We're seeing it with things like holographic surgery, where we're moving into that spatial world and adding an AI layer to that. So I think in healthcare, there's some really exciting things.
And I think in transportation and smart cities, that's going to be a massive disruption. Roads are going to change. We're maybe not going to have as many accidents on roads because we're going to have a bit more AI computer vision and a bit more –
Dr Susan Carland:
Like AI in our cars, you mean?
Rita Arrigo:
Yeah, AI in our cars and on our roads, and lots of cameras. I don't know if you've driven a Tesla lately, but ...
Dr Susan Carland:
Sadly, no. I'm on an academic wage.
Rita Arrigo:
I only test drove one. But they're quite cost-effective, because all of a sudden you don't have to pay for petrol, et cetera, et cetera. But you know, it can actually see the road, and it can identify pedestrians, and it will stop instead of colliding with one. I think that side of it's going to be really amazing.
Then there's what's happening in smart cities. Even this building I've walked into is almost like a smart building: people knew I was arriving; there was an experience to it. But imagine all the other critical infrastructure we have, our bridges, our water, our energy, being turned into these cyber-physical digital twins for us to be able to predict the future and understand the past.
Dr Susan Carland:
Although Rita is excited about the future benefits of AI, she also concedes that at this stage, we still bring our own biases when developing new systems. Because after all, it's humans who are doing the programming.
Is there a way that you can see the future of AI better tackling the human biases that may have inadvertently been placed within them?
Rita Arrigo:
Yeah, I really do. I think that's so real. If you even look at the way that we run our finance systems and how you apply for a loan, a lot of those biases exist before you even put the AI around it. Or how we go about getting a new job, and how people judge us when we enter the room – that combination of body language, et cetera, et cetera.
I also think it's about building a lot of fairness. There's a lot of work going on in creating responsible AI guidelines so that we can have fairness, reliability, safety, privacy, security, inclusion, transparency, and accountability. It's these principles that we're starting to see organisations adopt: how to create these kinds of committees to discuss and think about the implications of what we take on when we start to mine our data, find examples in it, predict things, and also use AI to learn.
I think it's those learning parts of things, that deep learning, that will have to become much more focused around ensuring that we're using these responsible AI principles. So I actually think it's some kind of new science that's emerging. We're seeing a lot of the philosophers and ethicists coming in to have these discussions, because it is a real multi-disciplinary discussion, and probably an area that I'm really passionate about, because traditionally data science and data engineering and dealing with data can be quite a male-dominated area. But I also think that women are amazing at data analysis and understanding that side of things. So making it more multi-disciplinary, and including other streams of thought beyond the engineers you traditionally work with, bringing them into the equation will make a massive difference as well.
Dr Susan Carland:
In our previous episode, we spoke with Yolande Strengers about how female voices on our smartphones and networked home devices such as Google Home are perpetuating old-fashioned feminine stereotypes. I asked her what she thinks are some of the things needed to improve these issues.
Would it be having things like Siri or Alexa come with, say, five different voices you can choose from, gradually getting people used to hearing different voices in these devices? How else could we do it?
Yolande Strengers:
I think we also need to be experimenting with personality. Actually that's – there is also already a lot of experimentation with voice. There's already a lot of options away from the default, but not so much around what it is, or how we expect these devices to behave.
Dr Susan Carland:
So we might suddenly start having an AI who's a bit sassy?
Yolande Strengers:
Yeah.
Dr Susan Carland:
Or I say, "What's the weather," and they say, "Work it out yourself, idiot." Like, talks back?
Yolande Strengers:
Yeah, possibly. I mean, there are some great design precedents out there already, and they're quite quirky. They don't have to be threatening. They don't have to be rude or unlikeable. They can actually be just a really interesting kind of character in our lives.
Dr Susan Carland:
Tell us about some of those precedents that are available at the moment.
Yolande Strengers:
There's one that Jenny and I refer to in the book called Kai, which is actually a banking assistant that was developed by a feminist designer, Jacqueline Feldman. Her kind of design ethos was that she wanted this device to be assertive, and also, I guess, to have a kind of – a bot sense of humour. It's gender neutral, but it's not personality-less. It still tries to behave in the world like a bot. So if someone kind of puts a human question to it, like, “Do you want to go out with me?” Or, you know ...
Dr Susan Carland:
Why can we not help doing that? You said Kai was a banking bot?
Yolande Strengers:
Yeah, that's right.
Dr Susan Carland:
So I thought you were going to say the question would be, “How much money do I have in my account?” Or, “How do I move it?” Why can we not help asking AI these questions?
Yolande Strengers:
Exactly. Look, I think because they're designed to behave like people, right?
Dr Susan Carland:
So we want to see, what will you say?
Yolande Strengers:
That's right. Yeah, we want to engage with them like we would engage with another person.
Dr Susan Carland:
Right.
Yolande Strengers:
I think that's partly human nature.
Dr Susan Carland:
Okay. So I ask Kai, I type in Kai, “Kai, how about a date?” What would Kai say?
Yolande Strengers:
Well, I can't remember exactly what Kai would say, but it'd be something like, “Oh, I'm a bot, so such trivial things don't concern me. I'm much more interested in…” I don't know, whatever it's interested in, numbers and maths or something. That was a very poorly scripted answer on my part, but she makes him sound so... I just gendered it! She makes it sound much more quirky and kind of funny.
Another one that I really like is the Tamagotchi robot, which died if you didn't look after it. You know, it's not like devices that don't put up with things haven't existed around us in different forms for a while. I think there are other sources of inspiration out there, little pockets of resistance. Like the Tamagotchi, it's often not where you expect it to be. But it dies. If you don't look after it, that's it. I mean, I think it does eventually possibly come back to life, but I can imagine similar situations with a voice assistant kind of shutting down for a period of time if you treat it badly.
Some of the companies have started to do that on a very modest scale now, in response to widespread criticism of how the assistants respond to sexual harassment, abusive language, and loud, aggressive tones. But I think there's, again, a lot more that could be done around what they are prepared to tolerate, I guess, in terms of how people are going to treat them.
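As a toy sketch of the “shutting down” behaviour Strengers describes, here's a minimal Python assistant that mutes itself for a period after repeated abuse. The abusive-term list, strike limit, and cooldown length are invented placeholders, not any vendor's actual moderation logic.

```python
import time

# Invented keyword list standing in for a real abuse detector.
ABUSIVE_TERMS = {"idiot", "stupid", "shut up"}

class CooldownAssistant:
    def __init__(self, strikes_allowed=3, cooldown_seconds=60):
        self.strikes_allowed = strikes_allowed
        self.cooldown_seconds = cooldown_seconds
        self.strikes = 0
        self.muted_until = 0.0

    def respond(self, utterance: str) -> str:
        now = time.time()
        if now < self.muted_until:
            return ""  # shut down: no engagement while muted
        if any(term in utterance.lower() for term in ABUSIVE_TERMS):
            self.strikes += 1
            if self.strikes >= self.strikes_allowed:
                # Too many strikes: mute for the cooldown period.
                self.muted_until = now + self.cooldown_seconds
                self.strikes = 0
                return "I'm going to step away for a minute."
            return "I won't respond to that."
        return f"Here's what I found for: {utterance}"

assistant = CooldownAssistant(strikes_allowed=2)
print(assistant.respond("What's the weather?"))
print(assistant.respond("Work it out yourself, idiot"))
print(assistant.respond("You're so stupid"))   # triggers the cooldown
print(assistant.respond("What's the weather?"))  # silence while muted
```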
Dr Susan Carland:
Professor Ann Nicholson is the Interim Dean of IT at Monash University. What do you see as some of the greatest successes that artificial intelligence is giving us at the moment?
Ann Nicholson:
Well, I think we've had a huge improvement in AI systems whose fundamental technology has been around for about 30 years: deep learning, which is based on neural networks. It sounds really technical, but basically the idea is that if you have data, you can feed it into a model, and from those inputs it learns something about the outputs.
Now, that might be recognising features in a picture. So we know now that computer vision is really accurate and speech-to-text and speech recognition are doing really well. So they've really massively improved for two reasons: One is that we have got much greater computing power, and we've got much greater sources of data to train them up well. They can take lots of data and just do an amazing job on finding patterns, and learning things, and classifications, and so on. It's truly amazing, what's happening in that space.
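As a minimal illustration of the learning recipe Nicholson describes, feeding labelled data into a neural network so it learns the mapping from inputs to outputs, here is a short Python sketch using scikit-learn's small handwritten-digits dataset. It's a toy example, not the systems she refers to; with far more data and compute, the same recipe underpins the computer vision and speech systems she mentions.

```python
# Feed labelled examples into a model and it learns the mapping from
# inputs (pixel values) to outputs (digit labels).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # 8x8 images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A small neural network: one hidden layer of 64 units.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=0)
model.fit(X_train, y_train)  # learn the patterns from the data

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```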
Dr Susan Carland:
What advice would you give to people who want to ethically and mindfully engage with AI?
Ann Nicholson:
I think that if you've got the time, that there's some quite good online courses, like micro-credentials that actually try and explain a little bit about AI and actually can show you with some software online, that you can kind of run some little tests and create some sort of little AI model.
That might be something that would be too much for some of your listeners, but for other ones, I would encourage you to sort of expand your technological base. People are really amazing. My mum is on Facebook and downloading things and messaging, so she's upskilled. I do think that getting a bit more familiar with some of the guts of it, even if you're not going to go off and do it seriously, can help you get a better understanding of how it's built.
And then you can then assess better when you think there might be an AI on and that you're putting data in, how they might use it. I think that if you're working with an organisation who wants to collect your data, you should definitely be asking how they're using it. If you're typing it in to get an answer back from something, you should be aware that there's some AI and that it might be really dodgy or it might be really good. You can try and have a [inaudible] and ask the organisation. If they can't tell you, you should probably give it a miss.
Dr Susan Carland:
Do you have any examples of projects you've worked on where artificial intelligence is clearly doing a social good?
Ann Nicholson:
Well, I'm working on one right now where we're trying to prove that it will. We're collaborating with Ambulance Victoria on an AI system that will listen in on emergency calls and provide a kind of decision-support flag to detect cardiac arrest calls earlier. Cardiac arrest is where the heart’s stopped, so time is critical. The call-takers do a great job, but they have a script that they have to go through. If we can detect agonal breathing, a particular kind of breathing in the background, before the call-taker does, or some other indication while they're still going through their script, and we can flag it earlier, we could save precious seconds and get the ambulance out there earlier. Or we could hook into these new systems they've got where they ping someone nearby who knows how to do CPR and say, “Get to that person who's a hundred metres away from you and help with the CPR.” So we're not necessarily there yet, but we've got some very promising results that show, in particular cases, we can help get that detection right earlier.
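A hypothetical sketch of the kind of decision-support flag Nicholson describes: a trained classifier scores the call audio chunk by chunk, and the flag is raised the moment the evidence crosses a confidence threshold, rather than after the scripted questions finish. The scores and threshold below are invented stand-ins, not the Ambulance Victoria system.

```python
# Hypothetical early-warning flag: alert as soon as any audio chunk's
# classifier score for agonal breathing crosses a threshold.
ALERT_THRESHOLD = 0.9  # invented confidence cutoff

def monitor_call(chunk_scores):
    """Return the index of the first chunk that triggers the flag,
    or None if the call never crosses the threshold."""
    for i, p in enumerate(chunk_scores):
        if p >= ALERT_THRESHOLD:
            return i  # flag the call-taker at this point in the call
    return None

# Simulated per-chunk model outputs standing in for a real classifier.
scores = [0.05, 0.12, 0.40, 0.93, 0.97]
print("flag raised at chunk:", monitor_call(scores))  # -> 3
```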
Dr Susan Carland:
I asked Professor of Communications and Media Studies in the Faculty of Arts at Monash University, Mark Andrejevic, for some advice.
Dr Susan Carland:
I once typed into my Google search bar – I think it was something like, “Why are academics so...” or, “Why are professors so...” and do you know what it suggested?
Mark Andrejevic:
Oh no.
Dr Susan Carland:
“So badly dressed.” Deeply hurtful bias. What can the average person do to either be more aware of the biases we may be encountering in AI, or even do something about it?
Mark Andrejevic:
Well, clearly I need to dress better.
Dr Susan Carland:
I think, well, clearly we both do, because that's what humanity thinks.
Mark Andrejevic:
I think that in many cases, these responses are going to have to be collective political ones. The type of organising that we've been seeing in response to concerns about bias, for example political mobilising around bias in the use of facial recognition technologies, is where the politics is going to come to bear on the decisions to actually use those systems. Once the systems are in place, it's very difficult to find ways to reverse-engineer them, although I think it will be important to advocate for accountability as well.
On an individual level, I think it's very hard ... when it comes to political change, I think in most cases we're talking about collective action. So we have to, as a society, start to hold accountable those places where these decisions are being made and say, “This is what we want, and we don't want certain types of decisions to be made by automated systems in ways that are non-explicable to us.”
And when it comes to very significant decisions about medical care, employment, we're going to have to be able to come to a shared political consensus about when systems are appropriate to use, and why they're appropriate to use.
I think we could imagine cases where it's demonstrated over time that the machine actually makes a better decision with a better social outcome. That seems conceivable to me, but it's got to be demonstrable and explainable how we came to that decision and how we have the belief that that's going to continue to operate if we're going to continue to use that system.
It's been interesting to see, for example, that bias in facial recognition has gotten a lot of political attention. I think it's because we're very concerned about biometric monitoring of our faces: one way we think of our faces is that they're ours, our interface with the world, and ours to control. The idea that the face can be captured and used to make inferences about us, and to identify us, without our ability to know that that's happening or to see how it's happening, is alarming.
And so it's been a site of people expressing concern, and there've been bans of the use of facial recognition technology in some municipalities. That has received political traction, and there's political response to that. So I think it's possible, but I think it does require interrogating the systems, and looking at how they're being developed and pushing back against them. That takes political will, and it takes a lot of knowledge about what's going on, and it's going to be harder and harder to do because more and more of these systems are going to be operating. You're going to have to keep directing your attention. Wait, is it the prison parole system? Is it facial recognition technology? Is it medical triage? Where are we going to devote our effort?
Dr Susan Carland:
Mark, thank you so much for coming back. I think you're really well dressed. I think Google got it wrong.
Mark Andrejevic:
Thank you. Likewise.
Dr Susan Carland:
I asked Rita Arrigo, former Chief Digital Advisor at Microsoft Australia, for her advice.
What advice would you give to the average person who is interested in maybe accessing AI, but wants to make sure they're part of an ethical or fair society of AI use?
Rita Arrigo:
I'm really into that. I think there should be ways that you can start to be part of these ethics committees and be part of these kinds of decisions. I think there's already a bit of a start around ethics-in-AI committees, and people who are going to have opinions around that. You're seeing it in our public sector. You're also seeing it in our vendors. I was playing with this system called Fairlearn, which actually assesses systems and mitigates impacts related to race, gender, disability, or age, and kind of assesses your AI platform in the way that you're using it. So there's a lot of software-driven stuff as well to start to assess those kinds of things.
But I'm a real hands-on learner, so the way I've always learned about how things work, learned about IoT, or learned about mixed reality, is to go to hackathons or meetups. If you're a young person, you want to turn up to some of these things and have a listen to what people are saying about data science. My first data science meetup was all about trying to figure out who's going to win the soccer: people were studying soccer games, trying to figure it out, with servers under their desks. Now it's really evolved into being able to tap into services to do that kind of deep learning and machine learning, without having to create your own hardware. I think for young people, it's really exciting to see the applications, and the way we're using some of the newest technology to trial some of these things.
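The Fairlearn toolkit Arrigo mentions is an open-source Python library. Here is a minimal sketch of the kind of assessment it supports, disaggregating a model's accuracy by a sensitive feature and computing a fairness gap; the labels and groups below are made up for illustration.

```python
# Compare a model's predictions across groups defined by a
# sensitive feature, using Fairlearn's assessment tools.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # invented labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # invented predictions
group  = np.array(["a", "a", "a", "a",
                   "b", "b", "b", "b"])       # e.g. a gender attribute

frame = MetricFrame(metrics=accuracy_score, y_true=y_true,
                    y_pred=y_pred, sensitive_features=group)
print(frame.by_group)  # accuracy broken down per group
print("demographic parity difference:",
      demographic_parity_difference(y_true, y_pred,
                                    sensitive_features=group))
```

Fairlearn's reductions module also provides mitigation algorithms that retrain a model under fairness constraints, which is the “mitigates” side Arrigo refers to.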
Dr Susan Carland:
Rita, thank you so much for joining us.
Rita Arrigo:
It's a pleasure. Had a lot of fun.
Dr Susan Carland:
That's it for our look into the issues surrounding bias in artificial intelligence. We will be back next week with a brand new topic to unpack on ‘What Happens Next?’.
Listen to more What Happens Next? podcast episodes