‘What Happens Next?’: Will AI Change the Future of Soldiering?
Ask an artificial intelligence to play you in a game of chess, and chances are, you’ll lose.
Ask an artificial intelligence to make you a cup of coffee, however, and prepare to be disappointed.
While robotics, autonomous systems and artificial intelligence, collectively known as RAS-AI, represent the bleeding edge of innovation and are often the villains in science fiction, the technology has a long way to go before it will replace humans in day-to-day tasks – and that includes on the battlefield.
In last week’s episode of Monash University’s What Happens Next? podcast, expert guests warned of the dangers of removing humans from war and replacing them with robots – the consequences could quickly become catastrophic. The good news? That’s not likely to happen any time soon.
Listen: Will Tomorrow's Wars Be Fought by Robots?
Although it will never replace human soldiers, RAS-AI does have some surprising effects on them – including on their mental health, and in shaping society’s views on valour and what it means to be a member of the military.
This week on What Happens Next?, Dr Susan Carland is joined by expert guests who outline the state of RAS-AI today, its tremendous potential benefits to humanity, and how emerging technologies are changing the way we think about soldiers, and the way soldiers think about their jobs.
Today’s guests are Monash University alumnus and veteran Dr Josh Roose; Dr Kate Devitt, Chief Scientist of Trusted Autonomous Systems Defence CRC; former SAS commanding officer Ben Pronk DSC; and Paul Scharre, a former US Army Ranger and the author of Army of None: Autonomous Weapons and the Future of War.
“If no one slept uneasy at night, if no one felt responsible, what would that say about us as a society? Some of the moral and ethical issues are actually some of the most interesting and really challenging issues where there are no easy answers.” – Paul Scharre
What Happens Next? will be back next week with a new topic.
If you’re enjoying the show, don’t forget to subscribe on your favourite podcast app, and rate or review What Happens Next? to help listeners like yourself discover it.
Transcript
Dr Susan Carland: Welcome back to What Happens Next?, the podcast that examines some of the biggest challenges facing our world and asks the experts, what will happen if we don't change? And what can we do to create a better future?
I'm Dr Susan Carland. Keep listening to find out what happens next.
[Radio chatter]
Dr Josh Roose: The real concern from an ethical perspective is that “out of the loop”, where an algorithm determines what a target is, what a target isn't.
Paul Scharre: There's a very tiny slice of the population that then bears that burden, not just physically, but psychologically. We ask young men and women to go do that on behalf of the whole country.
[Radio chatter]
Ben Pronk: I definitely think there are obviously applications here. I think there are obviously things that can help make a soldier's life better.
Kate Devitt: If you're a pacifist, then you are unsure that there's ever any kind of war that would be justified.
Dr Susan Carland: This is the second part of our series on the future of soldiering.
Last week, we learned about some of the emerging technologies that are affecting the way we fight wars. Our expert guests pointed out some of the moral quandaries associated with using machines and artificial intelligence on the battlefield, as well as the effect that it can have on soldiers' mental health. It can be pretty terrifying stuff.
This week, we're looking at the benefit of those technologies, and society's growing understanding of trauma and moral injury among active military and veterans. Keep listening to find out what happens next.
Few people are as qualified as Paul Scharre to discuss RAS-AI – that's robotics, autonomous systems, and artificial intelligence – in a defence setting. A former US Army Ranger, Paul is the vice-president and director of studies at the Washington, D.C.-based think tank the Center for a New American Security, and the author of Army of None: Autonomous Weapons and the Future of War. He's been heavily involved in drafting actual policies on RAS-AI technology for the US Department of Defense.
Tell us something that you think the average person gets wrong when they think about artificial intelligence in warfare.
Paul Scharre: Well, I think the biggest sort of misnomer with artificial intelligence is the way that we use the term conjures up all of these images of humanlike intelligence, or human-level intelligence. And in some ways it would be easiest if we just weren't using the term AI, we used some other term.
Because one of the things that's both fascinating and has really important implications for warfare, and other areas, too, is that the types of intelligences that we're able to build using machines today in some ways can be very smart, very intelligent, very effective at very narrow problems. Playing chess, playing the game of Go, in some settings driving cars, in some types of military applications, they can be very effective. But they really function nothing like human intelligence.
Human intelligence is very general. It's very robust to a wide range of circumstances. You can give people very vague instructions and humans can figure things out, and machines are terrible at that. Just terrible.
There's this interesting phenomenon in the field of AI where the definition of what counts as “intelligence” keeps evolving over time. Early on, people looked to things that we would say require intelligence in humans, like playing chess. Then it turned out that we could build machines that could achieve superhuman performance in games like chess with really nothing resembling what we would think of as requiring intelligence – just brute-force calculation.
Then they switched to harder kinds of games, like Go, which is computationally more difficult, or poker, which is the kind of thing we would think requires a theory of mind – understanding what's going on with your opponent. Are they bluffing? How are they thinking about what bets to make, or what's in their hand? Turns out you don't need any of that, actually. You can achieve superhuman performance in poker with just large amounts of computational game theory and computing power.
But something simple that we take for granted as humans, like making a pot of coffee, is basically impossible for AI systems today. If you think about a challenge you could give a grown adult, walk into a house or an office you've never been in before and make a pot of coffee. If you're a coffee drinker, that's not hard. You can figure that out, right? Rummage through the cabinets, find the materials, get it done. We have no idea how to build a machine that could do that today.
That's a real challenge when we think about how we use AI in a variety of industries, and it's a huge problem in warfare, because warfare is so unpredictable. No wars are ever the same. There are common themes, but the actual way of fighting is constantly evolving. Militaries preparing for war don't know where they're going to fight or who they're going to fight against. They don't really know what technologies are going to be used, or what tactics the enemy's going to use. They try to find out, they try to use intelligence ahead of time, but the enemy's trying to conceal all that from them.
There are nearly no rules about how militaries are going to fight. There are laws of war, which sometimes militaries adhere to, sometimes they don't. And war is, fortunately, very rare. So militaries are in this kind of strange position of… Imagine a sports team trying to prepare to play a game that you play once every 30 years, where the rules are constantly changing, against a team you don't know, and the consequences are life and death.
That's a real challenge for militaries, and militaries deal with that by giving people training that's then a foundation for what they're going to do in combat. They tell people things like “no plan survives first contact with the enemy”. They tell people to be flexible. Machines can't do any of that.
This brittleness, which is a really significant characteristic of machine intelligence today, is a huge limitation in warfare. There are some ways in which AI can be useful in war, but there are other areas where it's going to fall dramatically short, and we've got to find ways to account for those vulnerabilities.
Dr Susan Carland: What would you see as some of the areas where AI could be useful in warfare – and safe, I guess – that won't require that agility and moral reasoning to make a good decision?
Paul Scharre: Anywhere where you can have a clear metric for better performance. You can say, “This is what good performance looks like, this is how I can measure it. This is where… Quantitatively, this is what better means”. Anywhere you have large amounts of data that you can use to feed into machine learning systems, they are going to be valuable.
I’ll give a couple of examples. One would be landing aeroplanes. We should not have people landing aeroplanes. Machines can do it better, they can do it safer, and we could reduce aeroplane accidents by having machines do that.
One early application that the Defence Department in the United States built was flood mapping for natural disasters. There's a flood from, say, a hurricane or some other kind of natural disaster, and there's a bunch of satellite imagery that they then get. Now huge chunks of this area are flooded, and it's an analytical problem trying to figure out which roads are passable and where there might be damage that people need to go take a look at. It turns out that AI systems can help do that, and they can help process that satellite imagery much, much faster, which is incredibly valuable.
Dr Susan Carland: Ben Pronk is a former commanding officer of one of the Australian Army's Special Forces units, the SAS. He's the co-author of The Resilience Shield.
When we think about the future of soldiering, you mentioned how useful it would be if we could help build the resilience of soldiers. I wonder if another area that might benefit soldiers and their mental wellbeing would be greater use of AI, or artificial intelligence, in warfare? Because that could potentially help protect soldiers from some of the more psychologically traumatising things that they have to do. Do you think that could be a useful intervention to protect the mental wellbeing of soldiers? Or do you think that changes the nature of warfare too much?
Ben Pronk: These are like really wonderful, philosophical questions at the moment, because ... So I definitely think there are clearly benefits to the incorporation of AI technology into the military environment.
I was lucky enough to spend a year in the UK. I studied there under a guy called Dr Kenneth Payne, who's just released a book called I, Warbot, which explores a lot of the applications of AI in the military context. And his sort of summary is that it's wonderful for the tactical – interpreting tonnes of data very quickly and giving responses in situations that have set parameters – but not so good at the kind of creativity and the strategic side of things.
I definitely think there are obviously applications here. I think there are obviously things that can help make a soldier's life better. One of the most exciting ones – a guy called Dave Snowden talks about sort of enhanced cognition.
If you are driving a vehicle convoy in Afghanistan, a really experienced crew commander might just get that sixth sense – that intuitive, Kahneman thinking-fast-type response – “Hang on, something's not right”, and make a decision. And yet someone with less experience may not have that.
What Snowden offers is that maybe AI can start to augment that. It can start to interpret the data that's coming in – that maybe there's some disturbed ground, and maybe this is an area that historically has had high rates of IED (roadside bomb) strikes. Or maybe this is exactly the kind of place that the adversaries tend to lay ambushes, or something like that. So it can sharpen that human decision-making.
Dr Susan Carland: Dr Kate Devitt, chief scientist of Trusted Autonomous Systems Defence CRC, is excited about the possibilities of RAS-AI, even beyond a military setting.
Kate Devitt: Take the Australian bushfires of a year ago. If we had had pseudo-satellite balloons far above where the smoke was, they could have provided mobile phone access to people who were stranded on mountains, surveilled the territory, and given disaster relief workers a better understanding of where the fire front might be moving, providing advice to those humans who were struggling on the ground. You could have a coordinated set of these balloons – hundreds of them, all working together – floating on the wind and then using their intelligence to change what they do, what they perceive, and how they record information.
The potential of RAS-AI is that we have a huge country and we have so many tasks that we need to accomplish, perceptively, logistically, in thinking and knowing. If we know more about our environment, and then we're able to act on that knowledge, this is the advantage. It could provide speed. It could provide knowledge, it could provide greater heft, so logistical support.
If we can supply those that are stranded in flood waters at a faster pace with autonomous drones, that would be tremendous for Australia and for Australians. Those are some of the great advantages that they might provide.
Dr Susan Carland: Do you think they'd have any advantage or positive role in warfare?
Kate Devitt: Across the scope of warfare, they could bring advantage. A lot of Australians don't, however, understand how constrained Australia is when it comes to warfare.
So the first point of rejection of RAS-AI in warfare would be from a committed pacifist, because if you are a pacifist, then you are unsure that there's ever any kind of war that would be justified, right? “War, what is it good for? Absolutely nothing”, right? If you start from that place, that's a very respectable place to begin, then no, RAS-AI probably aren't good for warfare. Why? Because warfare altogether is a bad idea.
Dr Susan Carland: Yeah. Nothing is good for warfare. Humans aren't good for it.
Kate Devitt: Nothing's good for warfare! I don't want humans involved, I don't want the robots there, I don't want the dogs, I don't want elephants – none of it!
Okay. When I started in this position, the first place I had to start, from bare basics, was: do I think that war is ever justified? I'm not going to give a big lecture on just war theory, but there is a very long tradition across Eastern and Western cultures that looks at whether war is ever justified, and under what conditions you would be justified in the acts of war.
If you think that there are some just wars… Let's suppose Australia was physically invaded. You might say, “Well, that's a condition where I would feel like Australia would be justified in using force against an invading country that had not been invited and did not have our interests front of mind. I think that you could use robotics and autonomous systems to defend Australia”, right? It would be a great way to defend Australia against, for example, missile attacks and other sorts of things. There's a lot of ways that they would be incredibly useful.
Dr Susan Carland: One of the really fascinating consequences of new technologies in battlefields is that they can shift our ideas of valour and courage.
Dr Josh Roose, a senior research fellow in politics and religion at Deakin University, served in the Australian Army Reserve for over a decade. He's done some thinking about some of these concepts. How would you define bravery?
Dr Josh Roose: Ooo. [Laughter] Well, there's obviously different dimensions. I think the key here… There's obviously physical bravery, there's the willingness to put your life on the line for your friends and your mates and your team. And I'd say to that, it doesn't necessarily have to be that. It could be bravery getting under a 10-tonne truck and repairing it out in the bush, or there's all sorts of dimensions of that physical bravery.
But then I'd say the most important bravery in defence that I ever was not only taught, but also witnessed, is moral courage. And it's that willingness to take tough decisions and make the right decision, even if it's the one that hurts the most. And the willingness to own up and admit where you've stuffed up, or in terms of you see a wrong, you don't... It's an old adage, but “the standard you walk past is the standard you accept”. And I'd say that defence and the army is defined by that value of moral courage.
Dr Susan Carland: I wonder if, as we see perhaps greater reliance on artificial intelligence, or autonomous systems, or robotics within the defence forces, and thus perhaps less physical risk for our soldiers – male or female – they may just not need to be out in the field as much, will our idea of bravery change within the armed forces? Will there just be this idea of moral courage and less need for the physical courage?
Dr Josh Roose: Another great question. I think we're a long way off seeing the need for reduced reliance on boots on the ground, because ultimately these technologies are not going to build rapport with local populations, build the capacity to make adequate plans, or take adequately into account the needs of local populations in conflict zones. So there's always going to be a need, I think, for boots on the ground.
But that said, for a long time... I mean, there's some really good, interesting work on killing and distance and so on, which is at the crux of the blunt… the sharp end of the stick, so to speak. And really humans have sought to remove themselves from that face-to-face killing for a long time now, to the extent that we now have drone operators operating out of shipping containers in the desert, in the US and other places, effectively dropping bombs around the world, and identifying targets, and so on. There's a significant removal from that individual.
When it comes to artificial intelligence, there's “in the loop” and there's sort of “out of the loop”. The “in the loop” dimension to this is where an individual who's got training – and, you would have to argue, a strong level of moral courage – is inserted in the actions of that machine. They may say, “Well, this is a target” or, “This is not a target”.
But then the real concern from an ethical perspective is that “out of the loop”, where an algorithm determines what a target is, what a target isn't. And a clever enemy will seek to deceive that algorithm, effectively by identifying itself in a different manner or not meeting that standard, but really that's at sort of the coalface of the serious questions we're facing about the ethical use of artificial intelligence.
Dr Susan Carland: Here's Paul Scharre again.
I wonder how the increased and increasing use of AI in warfare might change the way we perceive the work that our soldiers do. Will we see them as less brave in the work they do if we feel that some of their difficult work is being outsourced to a machine?
Or alternatively, will the people who are perhaps manning the drones – it could even be a pregnant woman who can man a drone in a way that she could never be on the front line in war – will it actually give someone like her, or anyone else that does that role, greater capacity to contribute to armed forces? I guess what I'm asking is, how will the technologies of AI and their implementation in warfare change the way we view who our soldiers are and what they can do, for better or worse?
Paul Scharre: Oh, it's such a great question. And we're already seeing some early stages of that, of exactly what you're describing unfolding with things like drone warfare, where people are piloting drones from the other side of the globe, in bases inside the continental United States, and drones are thousands of miles away.
The long arc of technology and warfare has been towards greater distance between combatants. You can think about the first time someone picked up a rock and threw it at someone else in anger, to then the bow and arrow, and a slingshot, and rifles and cannons, up to intercontinental ballistic missiles. Technology has been making it possible for combatants to harm someone else from further and further distances away. And I suspect that at each of these evolutions, oftentimes people who've been exploiting kind of this technology for greater distance have been perceived as maybe less valorous, less brave.
The crossbow in mediaeval Europe was reviled in part for this very reason, because it gave a relatively untrained person the ability to shoot a knight at a distance, piercing their armour. It levelled the playing field and was seen as wicked, a terrible weapon.
I think that one of the things we've seen is that how people are perceived, how bravery is perceived evolves over time. If you look at modern warfare today, it's fought at a distance using rifles. I suspect that to ancient combatants who fought hand-to-hand using axes and swords, a lot of what happens today would be viewed as perhaps cowardly.
Dr Susan Carland: Yeah.
Paul Scharre: But warfare evolves, and the notions of what is brave or valorous evolve as well.
Dr Susan Carland: Do you think a time will come when we will see the bravery of the person who remotely operates the drone that takes out a soldier or an enemy combatant in terms of the psychological impact of what they had to go through? They maybe weren't putting themselves physically in danger, but we'll look more at the psychological danger that they were in?
Paul Scharre: I think that's very well said, in fact, because that's exactly right. We're seeing this shift from physical bravery, or physically in harm's way, to this issue of, perhaps the right term is psychological bravery, or maybe we'll see a different term emerge over time to describe it.
But it's really about decision-making. And that's where, even as we use more and more robotic systems to get greater distances between soldiers on the battlefield, you still want humans involved in the decision-making process.
And I think it mirrors an increased attention towards issues like moral injury, towards the psychological toll of war. Certainly ideas like PTSD – post-traumatic stress disorder – have been around for a while; soldiers might experience that from warfare. But in the past, I think, it's often been characterised as related to trauma to the soldiers themselves, perhaps experiencing fear of harm. That's really kind of a limited way of viewing it.
In reality, it's much wider than that. There's more attention in recent years to issues surrounding moral injury, where service members experience psychological suffering afterwards and trauma, not because of themselves being in harm's way, but because of what they've had to do, or see, or experience from a moral standpoint.
Dr Susan Carland: Here's resilience expert Ben Pronk.
Do you feel that building resilience in soldiers could be an area of great importance?
Ben Pronk: Yeah, I do. And particularly that mental layer. We talk about the mental layer of the resilience shield, being all the things that are psychological or spiritual that can help build your resilience. And our research suggests that's the most important layer. It's absolutely the bedrock of everything else. And it also suggests that there are a number of aspects that can really improve it. And some of these are not currently, I guess, culturally consistent with a lot of organisations like militaries and first responders.
So things like meditation. This is a superpower. And yet I reckon five years ago, if you told me meditation, I would've gone to hippies, and mung beans, and orange flowing robes, and it was almost anathema to my idea of what I was doing.
I've come to think of it more as gym training for your brain. There are neuroplastic effects from meditation that help you make better decisions, that help you override your amygdala response, that help you stay cool in the face of pressure. All of these things that we really want in high pressure environments, or in fact in any lifestyle environment.
So things like meditation, things like gratitude, I was really surprised from our results at how much gratitude correlated with overall resilience. And I guess that makes sense. If you’re seeing all the amazing, wonderful things that exist in the world instead of all the sad aspects, then you’re probably going to have a better, more bulletproof mind as you go through life.
I think those things, I think my personal, I guess, quest… I would love to see militaries, first responders, incorporate mental training, things like meditation and mindfulness in the same way that they currently do physical training. A nice, graduated programme that builds you up to prepare you for the kind of job stressors that you're going to encounter.
Dr Susan Carland: We can do more to prepare soldiers for what they'll face in combat, but we'll never be able to protect them completely. Here's Paul Scharre.
Paul Scharre: It's really deeply unfair that as a society, when democratic societies make the decision to go to war, there's a very tiny slice of the population that then bears that burden, not just physically, but psychologically. We ask young men and women to go do that on behalf of the whole country.
But it's also worth asking, if no one slept uneasy at night, if no one felt responsible, what would that say about us as a society? Some of the moral and ethical issues are actually some of the most interesting and really challenging issues where there are no easy answers.
Dr Susan Carland: There really aren't any easy answers when it comes to war, whatever weapons it's being fought with. Although this is the official end of our series on the future of soldiering, if you'd like to explore these issues further, I really encourage you to visit our show notes. We've also linked to the Resilience Shield website, where you can take the screening test to learn where your resilience may need a little bolstering. See if you can beat my 86 per cent.
Thanks to all our guests on this series: Dr Rob Sparrow, Paul Scharre, Dr Kate Devitt, Dr Josh Roose, and Ben Pronk.
Thank you also to the Monash University Performing Arts Centre's David Li Sound Gallery, where a portion of this season was recorded.
If you are enjoying What Happens Next?, don't forget to give us a five-star rating on Apple Podcasts or Spotify, and share the show with your friends. Thanks for joining us. See you next week with an all-new topic.