‘What Happens Next?’: Will Tomorrow’s Wars Be Fought by Robots?
It’s an unfortunate truth that many of our technological advancements are driven by conflict. In war, the party with the most advanced technology has a significant advantage, and a vast military-industrial complex is hard at work developing the latest and greatest to give its side a leg up. From the internet to satellites to robotics, one thing war is certainly good for is advancing technology.
Listen: Cyberwar: Keeping track of the battle to keep Ukraine online
Artificial intelligence and autonomous systems are poised to change the battlefield, and with it, soldiers themselves.
Today, the human cost of war is high. Will that be true of tomorrow’s wars, or will humans be removed from the front lines, left to orchestrate surgical drone strikes and autonomous technology behind the scenes?
And what about the soldiers themselves? Today, pilots are operating drones from the other side of the world, but they’re still reporting high levels of trauma. Will reducing the humans involved in combat also reduce the humanity that should govern it? Can modern soldiers be both ethical and effective? And how will emerging technologies such as AI and robotics affect human soldiering?
Read: The battle back home
Monash University’s podcast, What Happens Next?, examines a new topic this week: the future of soldiering. Join host Dr Susan Carland as she talks to veterans and experts in ethics; robotics, autonomous systems and artificial intelligence; resilience; and military conduct to discover what future we face if we fail to consider the moral and ethical quandaries presented by new technologies on the battlefield.
This week’s guests are philosophy professor Rob Sparrow; alumnus and veteran Dr Josh Roose; Dr Kate Devitt, Chief Scientist of Trusted Autonomous Systems Defence CRC; former SAS commanding officer Ben Pronk DSC; and Paul Scharre, a former US Army Ranger and the author of Army of None: Autonomous Weapons and the Future of War.
“When you have humans with other humans, it's a human endeavour. And when you take humans away from other humans and then put robots between them, then there's a loss of a connection from a human perspective.” – Dr Kate Devitt
What Happens Next? will be back next week with part two of this series, “Will AI Change the Future of Soldiering?”.
If you’re enjoying the show, don’t forget to subscribe on your favourite podcast app, and rate or review What Happens Next? to help listeners like yourself discover it.
Transcript
Dr Susan Carland: Welcome back to What Happens Next?, the podcast that examines some of the biggest challenges facing our world and asks the experts, what will happen if we don't change? And what can we do to create a better future?
I'm Dr Susan Carland. Keep listening to find out what happens next.
Joe Biden: This is a premeditated attack. Vladimir Putin has been planning this for months, as we've been saying all along.
Newsreader: As we go to air tonight, Ukraine is under full-scale Russian assault. Explosions and air raid sirens have been heard in several...
Dr Susan Carland: There's a popular quote, often shared during times of international political tension, that says: “I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”
The global military-industrial complex is constantly at work developing new technologies and weapons, all designed to give one nation or another an edge in the next conflict, and the one after that, and the one after that.
And it isn't just weaponry that's evolving and adapting. Soldiers themselves are changing, learning new skills and tactics to succeed on emerging fronts. As the nature of war and of soldiering changes, our own ideas about them are changing, too.
On the next two episodes of What Happens Next?, we'll examine the ways new technologies such as AI and robotics affect human soldiering. And we'll ask the experts, can modern soldiers be both ethical and effective? Keep listening to find out what happens next.
Prof Rob Sparrow: Hi, I'm Professor Rob Sparrow of the philosophy department at Monash University, where I work on ethical issues related to new technologies.
Dr Susan Carland: Tell us about some of the new technological advancements in warfare at the moment.
Prof Rob Sparrow: So war is a very technological activity, and a lot of our technologies, sadly, are developed for the purposes of war.
Technologies that I've studied include new materials sciences and their implications on the battlefield. So, for instance, American soldiers now have very sophisticated combat armour, and wound treatment systems that mean fewer people are being killed immediately, but more people are coming home with multiple amputations – they can be cared for, prevented from dying, but they don't necessarily come home healthy.
I study the ethics of drone warfare, so these are a whole range of systems that essentially allow people to see from a distance and occasionally fire weapons from a distance. So people might be familiar with the video that shows up now in computer games or films of pilots in America seeing a battlefield in Iraq, or Syria, or Afghanistan, and being able to drop bombs or fire weapons from drones. But there are other drones that are much more like the toys you see at the local electronic store, and they get used in battle as well.
There's a big impact of AI in warfare at the moment in terms of making it possible for weapons to carry out, or systems to carry out, more and more operations autonomously.
And I guess the other thing that people are very interested in at the moment is various sorts of performance enhancements, from better amphetamines to keep soldiers awake and combat-alert for longer, to drugs to prevent post-traumatic stress disorder. Perhaps even brain-machine interfaces to allow people to pilot aircraft just by thinking about it.
So there's lots of stuff happening in the new technology space, and actually lots of philosophers writing about it because it tends to raise so many issues.
Dr Susan Carland: And I imagine warfare, of course, has always had some of these overarching ethical or moral quandaries. I imagine you'd probably find any battle in history has had an army that was more developed, or more technologically savvy, than its competitor. The first tribe to have arrows probably destroyed the group that they came against who'd never seen anything like that.
So I imagine these have always been things that we've been struggling with, but this introduction, particularly of AI, artificial intelligence, that you refer to, where I wonder if it means… Could we get to the point where humans aren't needing to make the decisions at all? Will the drone make the decision about whether this is a legitimate target to bomb?
Prof Rob Sparrow: Increasingly, that looks likely. There are powerful dynamics pushing towards the development of what we call fully autonomous weapon systems, or autonomous weapon systems. Some people call them lethal autonomous weapon systems, or just killer robots. And the reasons why lots of critics think that those systems will be developed, indeed they are being developed, are twofold.
One, the drones are remotely controlled via satellite link or radio, and those systems are vulnerable to enemy activity in combat between near-peers. Essentially, you wouldn't be able to rely upon your satellites, you wouldn't be able to rely upon your radio communications not being blocked. So if you want your expensive weapon systems to keep operating in that environment, you need to be able to detach, cut the strings, and let the plane fly itself.
It also looks as though, what's called the “tempo of battle”, essentially the speed of combat, the time in which people have to make decisions, is now becoming so small that human beings can't do it as well as machines. So, for instance, there are radar- and computer-controlled cannon used on ships, including Australian ships, the purpose of which is to shoot down incoming cruise missiles. And human beings simply can't do that. So in order to operate those systems, you have to put them on full auto mode and then hope they can distinguish between your planes coming home, or a civilian aircraft, and an incoming cruise missile.
I think that's very dangerous. I think there's a real risk that in the future, wars will be started by computers that have been granted the authority to fire in certain circumstances, detect something, or maybe just make a mistake, open fire, and drag the rest of us into a war.
Dr Susan Carland: Paul Scharre is the Vice President and Director of Studies at the Washington, D.C.-based think tank the Center for a New American Security, author of Army of None: Autonomous Weapons and the Future of War, and a former US Army Ranger. He led the US Department of Defense working group that drafted the directive establishing the department's policies on autonomous weapon systems.
Do you think there are things that we should never outsource to machines in warfare?
Paul Scharre: Well, that's the big question where there's been a lot of debate about where this technology's headed. One of the things that's unique about warfare and the military environment is it involves causing harm and it involves killing, ultimately. When militaries are operating lawfully, they're killing lawful combatants of the enemy, but it does involve causing harm. And so that is a really profound question that people have been asking, debating internationally, countries coming together to discuss at the United Nations.
Where's this technology taking us? And what is it, you know… Where do we draw the line in terms of how much autonomy or AI is acceptable in military systems? What does it mean when a Predator drone has as much autonomy as a self-driving car? And are we comfortable allowing machines to make the ultimate decision of life and death when it comes to human beings?
Dr Susan Carland: Machines affect our political decision-making, too. Dr Kate Devitt is Chief Scientist of Trusted Autonomous Systems Defence CRC. She's paid close attention to the way governments around the world treat robotics, autonomous systems and artificial intelligence, or RAS-AI. RAS-AI, or robotics, autonomous systems and artificial intelligence… it is a mouthful. What is that?
Dr Kate Devitt: Yeah, what is that? What we've seen in the last 10 years in particular has been the rise of artificial intelligence as a useful way to help robots categorise the world, to perceive the world, to locomote around the world.
And we can see that buzz in the development of autonomous vehicles like Tesla's, and in the reduction in cost of things like lidar and cameras, so that car manufacturers feel they can afford to experiment with some of the perceptive systems, the systems that can see the world and understand the environment, and then bring them together on platforms like a car to enable them to manoeuvre more safely or more intelligently than in the past.
So the term is used to bring together the fact that we've got more and more systems in the world that can behave in a smarter way. The question of whether we trust those systems is, of course, the interesting one.
So there was a really interesting war game that was done by the RAND Corporation, and what they found was that when you have a global battlefield and you've got autonomous assets, like undersea submarines and ships and things that are running around by themselves, they might have been weaponised or not. But as a targeting value proposition, nobody really cares if a machine is destroyed, which is good, and people do care when human lives are lost. So the acts of war that are about robots destroying other robots don't seem to have political impact.
Dr Susan Carland: Here's Rob Sparrow again.
It's interesting. I was told that the way Parliament House in Canberra was set up was so that the government sitting there could look out the window, straight down to the War Memorial, to always remind them – it would always be in their eyesight – that if they were ever to go to war, there is a human cost to the people they're sending there. I wonder what that means, then, if this human cost is removed, at least for their own people.
Prof Rob Sparrow: Yes. I think that's the danger. I think it's very easy now for governments to delude themselves that they can solve political problems by killing the bad people.
And I mean, it really is striking how infantile the moral language that people use is when they start talking about drone strikes. They fall into this dichotomous “We're good, they're evil, each and every one evil by nature”. And this idea that you might solve a political conflict, or change a government's mind, or end a civil war just by killing the bad people is hopelessly naive.
And indeed it's quite sobering to think about the nations in which drones have been used, because now drones have been used by the US for about 20 years. And to look around the world where drone strikes have taken place, almost none of those places are better off today, or more friendly to the United States, or pose less of a risk to US interests, than they were at the start of those drone strikes.
So the weapons just don't work at a strategic level. They're tactically quite effective, but in terms of achieving political goals, this idea that you don't put boots on the ground, that you can fight a war without risk, doesn't seem to be borne out. You don't win through that kind of campaign.
And yet it's so tempting for governments: “We've got to do something, because bad things are happening overseas. What can we do?” Cruise missile strikes, drone strikes: that ticks the box, no political risk at home, but it just kicks the can further down the road in terms of the actual political issues.
Dr Kate Devitt: War is a very human activity, and the analysis of drone strikes by the United States over the past 20 years has revealed that some of the uses of robotic systems may have taken humanity too far away from the acts of war itself.
Obama utilised drones more than almost any US president. And most people I know think, “Well, Barack Obama, one of the smartest US presidents we've ever seen. Incredibly thoughtful, so smart, so considered. Whatever he has chosen to do when it comes to war is probably very well considered.” But unfortunately for Obama, those drone strikes took humans so far away from the battlefield – and from the people, the communities…
I mean, a huge issue in some of these Middle Eastern places is that even the media wasn't on the ground. There was no media. There was no conversation with the victims of illegal strikes in which innocent citizens were targeted and killed. And there was no good process in the United States to review those civilian deaths, and there were no consequences for any of the targeteers and the others in the decision loop when civilians were harmed in breach of international humanitarian law.
If you had more of what we call boots on the ground, so physical human beings amongst other physical human beings, then you get the full gamut of human interaction. You get the humanitarian component: the soldiers who help women save babies, the soldiers who try to find food for people in villages and in towns, who might help repair water supplies. They may also have weapons, and perhaps they also use them.
But when you have humans with other humans, it's a human endeavour. And when you take humans away from other humans and then put robots between them, then there's a loss of a connection from a human perspective.
So those long-range drone strikes may not be morally defensible, both from the perspective of those who are targeted and from the perspective of the operators, who have been shown to suffer terrible moral injury from being, for example, in little sort of caves in the United States where they sit down and do targeting, long targeting, and then they are asked to fire, and then they go home to their family at the end of the day, and they have an incredible amount of trauma from that process.
Dr Susan Carland: Ben Pronk, co-author of The Resilience Shield, is a former commanding officer of the Special Air Service Regiment, a special forces unit of the Australian Army. I asked him if he thought the introduction of artificial intelligence would change the nature of warfare, and therefore its effect on soldiers.

I wonder if the introduction of artificial intelligence in warfare will negate that need for physical courage, and will it all be about, perhaps, moral courage for our soldiers going forward?
Ben Pronk: It's interesting to look back at history. With every technological advance, there have been these thoughts that it'll fundamentally change warfare, and air power is a really big one. There was this idea that now we've got these long-range bombers and aircraft, we'll never need to send young men into harm's way again in the way that we had. A hundred years later, we're still sending people to crawl around in the mud.
I definitely think it will change the character, but I think the nature is going to be immutable. I mean, as military officers, you get taught Clausewitz – Carl von Clausewitz, a Prussian military theorist. He said, “War is an extension of politics by another means.” And that's my mandatory Clausewitz reference, by the way. We've got to say that.
Dr Susan Carland: [Laughter] At least one.
Ben Pronk: [Laughter] Yeah. But governments have for centuries had this tool: if we exhaust all other options, then this is a tool of national power. And I think that will remain the same. I still think at some point you're going to need to seize and hold ground, if that's the national aim. And so I think it will be augmented by AI, but I don't know if it'll be replaced.
Dr Susan Carland: Do you think we have unfair expectations of our soldiers where we expect and even train them to be able to do things that in normal civilian life are unacceptable, such as killing other people?
Ben Pronk: Yeah.
Dr Susan Carland: But then we expect them just to be able to come back to civilian life and flick a switch and behave like everyone else.
Ben Pronk: Yeah, to an extent, I do. And I think if we accept that the vast majority of humans don't like inflicting violence on one another… There's a guy called Dave Grossman who wrote a book, On Killing, which talks a lot about this, the psychological impacts of warfare, and he offers that the vast majority of people can't kill without remorse. It's only the sociopaths, really, that are able to do that. So if we accept that most of us can't do that, and if we want people to be able to kill on behalf of the state – sanctioned violence – then we need to train them to be able to do that.
But I also think, and my observation of the Afghanistan experience was, you are parking an element of your humanity when you kill, when you inflict legitimate violence, or sanctioned violence. And I kind of think if you do that long enough, it becomes harder and harder to touch back into that.
And I do think that some of the things we've seen in contemporary history are the result of chronic exposure to a really extraordinary environment where violence is pretty commonplace. And I do think that results in something of a humanity slip.
And so I think to an extent we expect that of our soldiers. I think we need to look at how we employ our soldiers to try and minimise that wherever possible. But I do also think that demands an element of transition and a recognition that we do need to come back from that spot.
Dr Susan Carland: Long-distance killing doesn't make killing any less difficult, and it's clear that introducing new technology on the battlefield also introduces new moral questions.
As Kate says, “War is a human activity”, and we need to ensure we hold onto our humanity as we navigate future conflicts. Can we do it? Find out next week as our experts discuss the changing face of war, the nature of bravery, and the vital role of soldiers in peacekeeping.
Thanks to all our guests today: Prof Rob Sparrow, Paul Scharre, Dr Kate Devitt and Ben Pronk. For more information about their work, visit our show notes.
Thank you also to the Monash University Performing Arts Centre's David Li Sound Gallery, where a portion of this season was recorded.
If you are enjoying What Happens Next?, don't forget to give us a five-star rating on Apple Podcasts or Spotify, and share the show with your friends. Thank you for joining us. See you next week.