Susan Carland: Hello, I'm Susan Carland, and welcome to What Happens Next, where we take a closer look at some of the really challenging issues facing the world. This is our final episode on right-wing extremism and the final episode for this series of What Happens Next. I know, devastating. Today we'll talk to a behaviour change expert about what to do when you see extremism in action. Do we call it out? Do we starve it of oxygen? Something else? Nick Faulkner, from BehaviourWorks at the Monash Sustainable Development Institute, talks us through some strategies.
Nick Faulkner: I’m Nick Faulkner, I’m a research fellow at BehaviourWorks, which is
a behaviour change institute at the Monash Sustainable Development Institute. So I work in a bunch of different areas, but I'm most passionate about reducing racism and reducing prejudice.
Susan: Dr. Nick Faulkner, welcome.
Susan: I want to start by asking you about cyber hate. Exactly what is it?
Nick: Yes, I mean, so I've done a bit of research on cyber racism, and essentially, it's just racism or hate speech in an online environment.
Susan: In terms of content, it's no different to what you'd find offline?
Nick: Well, for the most part it’s pretty similar. Some of the racist groups are a bit strategic about how they present themselves online just to avoid censorship and being banned and that sort of thing. But for the most part, it's similar.
Susan: Do you think the online environment has helped facilitate the flourishing of racism?
Nick: Definitely racists have been very good at using the Internet. I mean, even going back to the early nineties, there have been racist forums online that have been quite popular. One of the longest running ones is Stormfront. So that's been around since, I think, 92, and there are hundreds of thousands of members across the world. So I mean, it's definitely helped spread that racist ideology, um, and it's also helped racists find networks and people who share similar views to themselves. So in that sense it has.
Susan: Right. All right, practicalities. Pretty much, I think anyone who's used the Internet ever has either witnessed or experienced probably some form of cyber racism. And there's all these different ideas about what we should do. Ignore them, don't feed the trolls, sort of quote-unquote. Or argue back, clap back, call out, public shaming, retweet.
What actually does the evidence say is the best way to deal with this?
Nick: Initially I have to give you some bad news - I don't have all the answers here.
Susan: Well, we’ll just call this podcast off!
Nick: The research is still emerging, so we’re still trying to figure out how to reduce racism offline...
Susan: Right, that’s what I was going to say…
Nick: … let alone in online environments. So there's a little bit in the offline world.
But about 10 years ago, there was a review which found that basically no rigorous experiments had been done to test what worked in an offline context. That's changed a bit in the last 10 years, but that just gives you a bit of a sense as to where things are at online. About two years ago, my colleagues and I did a review of all the research on cyber racism. So we did these huge searches. We searched through thousands of different
records just to try and see where we're at, what's actually known about cyber racism. And
most of the research has been looking at kind of what it is, how the racists are using it, using the Internet, what sorts of materials they’re producing, you know, that sort of thing rather than saying okay, well, how do we stop it?
Susan: And also, what impact is that having on the victims of the racism as well? There's not a lot on that?
Nick: Yeah. Yeah, so I don't have all the answers. I have heard a couple of common sort of suggestions about how to reduce it.
Nick: So one of them is, like you're saying, call it out.
Nick: Another one is for companies to ban the racists and ban the racist groups. But actually, the evidence that's emerging is sort of questioning both of those strategies.
Susan: So by calling out you mean … so, say someone sends me a horrible tweet.
Susan: So I retweet it to my followers and go ‘Look at this bigot, everyone. How dare you?’ - that kind of thing?
Nick: Yeah, I think, so I'm talking more lower level, I guess, so not as offensive. So if I, for example, saw my Aunty or Uncle post something that was mildly racist, you know, it's been suggested that I should say ‘Hey, don't do that. You're being racist’.
Susan: Right? Because otherwise we're bystanders.
Nick: Yeah, Yeah. Um, however, you’ve got to be really careful about this, because if I jump in and say, hey Auntie Whoever, you are being a racist, um, that might start an argument. And there's a lot of research in psychology showing that when you start getting in an argument, people tend to think about arguments that support their own preconceived ideas.
So you might actually end up strengthening that racist view by starting a fight with them.
Susan: I also wondered - just to jump in there - I always wondered about doing that as well, because often what happens is that person gets embarrassed or humiliated. And surely from a psychological point of view, when you're humiliated or embarrassed, you're actually gonna retreat into yourself and double down, as opposed to saying, 'Well, tell me more about these new ideas'. Is that true, or is that just an assumption?
Nick: There is some evidence suggesting that, um, and definitely I would suggest that if you are challenging, um, someone who's expressing some of the lower levels of racism, not the really aggressive ones that you're mentioning before, um, I would recommend trying to give them an opportunity to sort of take back what they said without feeling too bad about saying it. So, for example, you know, you might say something like, ‘Oh, I don't think you meant it this way. But that could be offensive, what you said, or it might hurt these people’.
There's also quite a bit of research showing that if you can get people to empathise, so understand what it's like to be somebody else or to cognitively sort of, imagine what it might be like to be someone else, that reduces prejudice and increases pro social behavior that sort of thing. So, you might also do stuff like suggesting ‘ok, imagine what it would be like if you heard these things all the time’ or…
Susan: Or if you were a refugee on a boat or …
Nick: Yeah, exactly. Exactly.
Susan: One thing I've often wondered about with the Internet is I hope that perhaps it could actually help, perhaps foolishly, reduce discrimination or prejudice or bigotry. Because there's this old theory in psychology. I think it's from, you know, the fifties, 1954, something called Contact Theory, where if you just get to know someone of a different background to you, perhaps one you're prejudiced towards, that could really help, because suddenly you go, 'Ah, all those Muslim terrorists, they're crazy. But this Muhammed guy that I've met, who is my colleague, actually he's a nice guy, and that changes things.' Does contact theory work at all in an online setting? Do we actually not feel like we've really met the person?
Nick: Yes. I mean, so I actually did some research on this couple of years ago, and the question was exactly that - like, can contact work in an online environment because we know that that is one of the most effective techniques offline? So what we did was we got a bunch of high school kids. Some of them were from Islamic high schools, others were from Christian high schools, and we organised a project essentially where they worked together online. So they were chatting. They never met face to face, but they just chatted over this online platform.
And then we measured their levels of prejudice and racism a couple of weeks later, three months later, and even a year later. And what we found was that even a year later, there were still significant reductions in prejudice. So just this six week program where they were chatting online, it reduced their prejudice. Even a year later that was still there.
Susan: Okay. So to clarify, though, did there have to be sort of an interaction?
Susan: So me just being on Twitter as a Muslim woman merrily tweeting out news articles and what I had for breakfast and whatever, in and of itself might not change anyone's opinion because we're not actually engaging. I'm not talking. They tweet me, I tweet back. There's no backwards and forwards. Is that the case?
Nick: Yeah, so with the contact theory that you mentioned, so there are a few sort of criteria which you need to make it effective. And among them are, essentially, you should be working towards a shared goal, you should have equal status, you should have some sort of authority supporting you.
Susan: What do you mean by that?
Nick: So, like, for example, in the teaching, the school environment, the teacher would be saying, ‘look, you should be talking to people’. So it's not like you've got a leader saying ‘oh you shouldn’t talk to Muslims or you shouldn’t talk to Christians’
Susan: So we need Scomo to come out on Twitter and say … everyone, chat like friends.
Nick: It would help, yeah, it would help if you had some sort of authority saying, 'look, we should be, there should be more interactions between people of different races'. So in the experiment I mentioned before, we did try and have all of these conditions met. In a Twitter environment, a lot of those aren't going to be met. You're not working towards a common cause - in fact, a lot of the time you're trying to fight against each other on Twitter, right?
Susan: Right. So what about the technique, another technique that people often recommend is ignore. Don't feed the trolls. Someone says something abusive to you or you just see someone say something abusive to someone else - ignore ignore ignore.
Nick: Yeah, this is an interesting one. I don't think ignoring in itself is the best thing. I mean, at a minimum, I think you could be reporting it. So if you hit the report button and report it to Twitter or Facebook or TikTok, whatever, whatever platform you're on, just because then that gives them data to help with their efforts to address this issue.
So there's a new paper that came out a couple months ago in Nature, which was actually looking at what these media platforms could do to reduce racism on their platforms. And one of the - I'm sort of taking this off on a slightly different path from where we started -
Susan: I love a tangent!
Nick: But one of, one of the things that they were looking at is how should these platforms go about banning users and racist groups, and what do the racist groups do in response when they are banned? And what they found was that racist groups are really good at re-forming. So if you ban them on one platform they’ll just move to another.
Susan: Hydra head - chop one off, another one pops up.
Nick: Exactly. Exactly. And so they tested a few - they developed this computational model - and they tested a few different strategies and were able to measure in sort of this hypothetical computer model, um, what happened to the numbers of racists in this model. And what they found is, rather than targeting the big groups, so rather than banning the big racist groups, if you instead target all the smaller ones, that stops the bigger ones from emerging in the first place.
Susan: So the smaller ones like feeder groups for the big ones?
Nick: Exactly, yeah, exactly. So this paper was suggesting we should target the smaller ones. Just leave the big ones. But by targeting the small ones, you’re basically
preventing the bigger ones from getting bigger. Another strategy they looked at was whether you should go about banning all racist users or just a subsection of them. And what they found was that if you randomly banned 10% of racist users, that actually reduces racism on the platform by up to 50% later on.
Susan: Why's that?
Nick: Because it stops them kind of interacting together. It just sort of prevents, prevents all these hubs from forming. So, it's a bit counterintuitive, but by reporting those racists online, you're giving the data to these media platforms that, you know, this could be a racist user, which
ideally, they could use in a smart way to stop racists online.
Susan: From my own experience, I had someone recently send me a horrible message on Instagram - ‘I'm gonna kill you, kill your husband’, all of that stuff. I reported it to Instagram and they wrote back and said, ‘this person does not violate our community standards’, which is, I'm sure it's just, you know, it's a machine doing it. It's not a, there's no human, they couldn't possibly have a human to handle all the complaints. And in that situation, you have no recourse. You reported it, they've said no, that's the end. What should they be doing? Is it reasonable to even expect them to be able to do more when you've got millions, possibly billions of users?
Nick: Yeah, I mean it's a good question, and one that I don't think the research has addressed very thoroughly. I mean, this paper that I mentioned that came out a couple of months ago is one of the first papers that is investigating what these platforms should do. So I mean, I
I think it's obviously a difficult job for the platforms to get across all of the huge volumes of content.
I don't know what the answer is, but this research is suggesting that it would be good if they were able to identify who all the racists were and then use that to be strategic about who they banned and why. The challenging issue there is that it means there are going to be some people, like the one who sent you that horrible message, who probably get to stay on the platform for a little bit longer. But hopefully the strategies will prevent it in the longer run.
Susan: Let me ask you a bigger philosophical question. Is it problematic to ban users in the first place? Are these the kind of things where you ban it, it just goes underground, their comments or beliefs can't be addressed, so it just sort of simmers and festers below the surface. We know, for example, ah, just this week, Germany, a town in Germany, has declared a Nazi emergency like a climate emergency. They said right wing extremism is so bad in our town, Dresden, we’re declaring a Nazi emergency. So if these people don't have an opportunity to say what they think, and then people have an opportunity to respond or refute that, could it actually make things worse?
Nick: My inclination is to say that I think we need to be strategic about who we're banning. Um,
because if you do allow them to just say what they want, congregate how they want online, then you're going to get much closer networks of these people. And what we've found - so we did a study a couple of years ago with some computer scientists here where we were looking at the networks of, in this case it was Stormfront, that racist website I mentioned. And those networks became stronger before some really problematic real world events. So, for example, the Cronulla riots about 10, 15 years ago - you could see evidence that the strength of the networks on Stormfront was actually going up before the Cronulla riots happened. Then they exploded once the Cronulla riots happened. So, I mean, that's a long way of saying I think we need to sort of be strategic to break these networks of racists.
Susan: Right-wing extremists are obviously not the only kind of extremists online. I've done research on Islamic State, and the way they used online platforms to recruit and spread their message. Is the way that your research suggests we should be dealing with right-wing extremists different or the same to how we should be dealing with other sorts of extremism?
Nick: Yeah, that's a good question. Admittedly I haven't done as much work on the, like, Islamist extremism that you mentioned. Um, however, there was a paper a couple of years ago looking at how you break the networks of groups like ISIS and similar groups with similar ideologies. And it did suggest that same sort of approach, where you target the smaller ones and leave the bigger ones be, so there's probably some similarities there.
Susan: Let's finish on a positive note. Give us the practical takeaway tips that any person can use when they're online. Let's start with the case where you see something being said to somebody else - so it's not being directed towards you, or you see someone making, you know, an unpleasant racist post - what do we do?
Nick: How aggressive is this racist person?
Susan: Hmm, good question. Uh, okay, let me, I will be the person, and the post says something like, 'I'm sick of all these refugees coming into this country stealing our jobs and trying to implement Sharia. Full stop.'
Nick: Yeah. So I mean, I would recommend, in that case, in general, reporting it - reporting the racist posts that you see, just so then the data is being collected that can be acted upon later on. Um, if it's somebody who you know, try and give them an opportunity to correct what they've said.
Susan: And do we do that publicly or should I send them a private message?
Nick: Well, probably privately would be better, just because you don't want people getting or feeling that shame, that public shame, which they will then probably be motivated to defend. So, um, people are very good at coming up with, we call them moral disengagement strategies in psychology, where basically, as soon as they've done something wrong, they'll reframe the issue, reframe their behaviour to make it seem okay. And we see that, actually - I did a study on responses to that Adam Goodes incident a few years ago online, and about 90% of the posts that supported the booing of Adam Goodes also used one or more of these moral disengagement strategies.
Susan: Adam Goodes, of course, being the Indigenous player and highly celebrated Australian of the Year, who was the victim of a pretty sustained booing campaign.
Nick: Exactly. Yeah. And also some racist slurs from the audience. So, yes. I'm sorry, that just went down a bit of a tangent there, but to take it back...
Susan: So I should privately message them?
Nick: Yeah, so I would privately message them and say something like, 'Hey, I know you probably didn't mean this, um, but imagine what it would be like if you were in this situation'.
Susan: So empathy?
Nick: Yes. So try and avoid getting into an argument. Try and get them to empathise and imagine what it would be like to be that person. It's hard to do that. But those are probably the two key tips that are based in the research at the moment.
Susan: Okay, and now what about if the situation where I write a post ‘what a happy sunny day, I love eating cereal’ and someone immediately writes back, ‘why don't you eff off out of this country?’
Nick: Just report them.
Susan: Just report, okay. Is there any point trying to engage?
Nick: Well, when, when they are so radicalised that they’re not likely to change their opinion,
it's probably less beneficial, there’s probably not a lot of good that can come from engaging.
Susan: So report and ignore.
Nick: That would be my approach, yeah.
Susan: What do you think about the block button?
Nick: Yeah, well, I mean, like I said before, there's not a lot of research looking at what the impact of this online racism is on the victims of online racism. But what we have seen is the simple thing that you'd expect: when people do see more racism directed towards them online, they tend to have lower levels of wellbeing.
Susan: Right, sort of the death of a thousand cuts.
Nick: Yeah, exactly. So the more of those sort of message that you're exposed to, the lower your wellbeing is probably going to be. So, yeah, block them. Don't expose yourself to those messages if you can avoid it. Obviously, you can't avoid it all the time, but where you can,
stop it from happening.
Susan: Right. Is there anything else we can do? I had this idea a couple of years ago - I was getting so much abuse on Twitter and I tried everything. Ignoring, blocking, muting, trying to engage, trying to be friendly. None of it seemed to work, so I decided that instead, for every hate message I got on Twitter, I would donate $1 to UNICEF, to try and restore the cosmic equilibrium, I suppose - for every horrible thing that came my way I would try to do something positive, and it didn't involve engaging them in any way. I sent thousands to UNICEF! [Laughing]
Nick: Well, I mean, look, there's, there's plenty of research showing that people's wellbeing tends to go up when they volunteer or do something nice for others. So if your well being is being harmed by being exposed to these, sort of, racist messages, then, yeah, doing something positive like that could be a nice way to sort of neutralise that, that negative impact.
Susan: Yeah. Okay, so we've got a few suggestions now. It sounds like, though, it's a really messy environment in which the research is just running to try to catch up. But, like you said, when we haven't solved it offline, why on earth do we think we would be able to solve it online?
Nick: Yeah, exactly. So we're still really struggling offline. At the moment I've got two PhD students working on this, plus other projects: one's looking at Islamophobia online, another's looking at LGBTI inclusion in sport, and another's looking at immigrant prejudice, all in offline contexts. And yeah, then there's still so much to be done online as well.
Susan: So they need to work faster.
Nick: Everyone needs to work faster, I'm telling them, I'm telling my students that all the time!
Susan: Nick, thanks so much for your time.
Nick: Thank you.
Susan: Some great ideas there from our expert, Nick Faulkner. Special thanks to Nick for coming on the show, and that is the last of our episodes on right-wing extremism and the very last episode for this series of What Happens Next? We really hope you've enjoyed the podcast so far. We'll be back soon with a new series of sliding door moments, discussing the big challenges of the world around us, what the future could look like and how we can all help to change it. So if you have some topics you'd like us to discuss, shoot us an email - podcasts@monash dot edu.
This podcast was produced by Monash University. It would not have been possible without the support of Fabian Marrone, Belinda Hayes, the Monash University Law Faculty and all our experts, including academics, students and alumni.
Production and editing by the wonderful Sunny Leunig, editorial research and direction by the amazing Jo Crompton and our executive producer is the wondrous Angela Patrick.
What should we do when confronted by right-wing extremism in our daily lives? Our behaviour-change expert shares the best strategies.