Published Jul 05 2021

Going dark: Holding platforms to account over targeted online advertising

A recent exposé by the investigative journalists at The Markup revealed how Facebook uses detailed information about what people do online – the websites they visit and the search terms they use – to let pharmaceutical companies target people based on the medical conditions they’ve shown an interest in.

This marketing strategy builds on the fact that the internet is one of the first places people turn when they learn, or suspect, that they or their loved ones might be sick.

The fact that platforms may know more about us than our doctors reflects an apparent paradox – even as public schemes like My Health Record face widespread scepticism, online platforms largely escape scrutiny for amassing detailed portraits of all the health conditions with which we might be associated.

This information feeds a hugely important structural shift in the dominant model of advertising – one that is becoming both more pervasive and less accountable.

Proxy mechanisms in the mass media

The era of mass media was defined by a number of familiar communications channels – terrestrial television and radio, newspaper ads and billboards, and an array of other mass-audience media.

Since the mass media had few technological mechanisms for targeting specific groups of people, advertisers developed very rough proxies – concentrating ads for household products, for example, during daytime hours to reach homemakers (hence the term “soap opera”), or placing toy ads alongside Saturday morning cartoons.

The ads followed the content and, in some cases, its timing and geography. They were visible to large groups of people, and thus open to public scrutiny – and they often became the subject of concerns about stereotyping and predatory marketing tactics.

The ads, although privately controlled and administered (in many cases), remained – in an important sense – public.


We know of the historical struggles over racist and sexist forms of advertising – struggles that highlighted the role the advertising system plays in reinforcing particular sets of values and cultural assumptions.

As the media historian Michael Schudson puts it:

“Advertising, whether or not it sells cars or chocolate, surrounds us and enters into us, so that when we speak, we may speak in, or with reference to, the language of advertising, and when we see, we may see through schemata that advertising has made salient for us.”

Advertising, in other words, is not just the filler between the content – it is a form of content that plays an important role in reproducing social and cultural values.

The rise of consumer society – a dramatic social shift – would have been impossible without it.

It’s therefore crucially important that advertising be subject to public examination and discussion as part of our ongoing reflection on the society we live in, and how to build a better one.

Multiple reasons for accountability

This is perhaps the overarching reason for attending to advertising, although there are other important reasons for holding it accountable.

Advertising isn’t just about selling household products and services. It’s also used to rent or sell housing, to promote political candidates, and to recruit employees – and, in some cases, to discriminate in these areas by age, ethnicity, or gender.

The broadcast model of advertising and its associated problems haven’t disappeared. Just as broadcast TV and newspapers remain, so do their core approaches to advertising, albeit working in more sophisticated ways, with more detailed audience data.

Yet in digital contexts, the degree of consumer tracking has led to new advertising methods that upend the “publicness” of historical forms of advertising.

The rise of online advertising represents an epochal shift in advertising that invokes the spectre of new and powerful forms of discrimination that can be difficult to detect.

During the 2016 US presidential campaign, for example, Donald Trump’s digital strategy advisor, Brad Parscale, boasted about targeting African-American voters in swing states with ads claiming that Hillary Clinton had once described young black men as “super-predators”. (She was referring to gang members in a formulation that, while not explicitly racially coded, nonetheless led to a subsequent public apology on her part.)

The goal of this ad buy was not so much to gain voters for Trump, who had very low support among black voters, but to stop them from turning out to vote at all.

Because targeted advertising follows individual users rather than particular forms of content, and is viewed on a personal device, it was impossible for those who received the ads to know they were being singled out as part of a voter suppression campaign.

The same is true of other forms of online advertising. There’s no way to know whether, for example, a job ad one encounters while browsing the internet is being shown only to people of a certain age, ethnicity, or gender.

Indeed, Facebook had to pay a US$5 million fine after it was revealed that its ad buying system made it possible to discriminate in ads for housing, employment, and credit.


A range of measures is being developed to provide accountability for “dark ads”, so called because they’re ephemeral and targeted. Facebook has made many of the ads it serves available through its ad library, although this is of limited use, because it provides only general information about how ads are targeted.

The NYU Ad Observatory tracks political advertising using volunteers who install a browser extension that captures ads served on Facebook. ProPublica developed a similar tool, which we have adapted to provide visibility into how individuals are targeted online.

Our tool, which anyone interested in contributing to the project can install in a Chrome browser, collects some basic demographic information so we can see how people are targeted by variables including age, gender, and location.

Anyone who installs it can also use it as a personal ad tracker to see how Facebook is targeting them over time.
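
For readers curious about the mechanics, here’s a minimal sketch of how a browser extension of this kind might capture sponsored posts and pair them with a volunteer’s self-reported demographics. The selectors, field names and collection endpoint are hypothetical placeholders for illustration, not a description of our actual tool.

```typescript
// A minimal, hypothetical content script for a Chrome extension (Manifest V3).
// The selectors, field names and collection endpoint below are illustrative
// placeholders, not the actual implementation of our tool.

interface CapturedAd {
  advertiser: string;  // name displayed on the sponsored post
  body: string;        // visible ad text
  capturedAt: string;  // ISO timestamp of when the ad was seen
}

// Self-reported demographics a volunteer supplies once, at install time.
interface VolunteerProfile {
  ageBracket: string;  // e.g. "35-44"
  gender: string;
  postcode: string;
}

const COLLECTION_ENDPOINT = "https://example.org/ad-observatory/submit"; // placeholder

// Load the volunteer's stored profile once when the script starts.
let profile: VolunteerProfile | undefined;
chrome.storage.local.get("profile").then((stored) => {
  profile = stored.profile as VolunteerProfile | undefined;
});

// The "Sponsored" label and DOM selectors are assumptions; real markup changes often.
function extractAd(node: Element): CapturedAd | null {
  if (!node.querySelector("[aria-label='Sponsored']")) return null;
  return {
    advertiser: node.querySelector("a[role='link']")?.textContent ?? "unknown",
    body: node.textContent ?? "",
    capturedAt: new Date().toISOString(),
  };
}

// Pair each captured ad with the volunteer's demographics so researchers can
// later aggregate targeting patterns by age, gender and location.
async function submit(ad: CapturedAd, who: VolunteerProfile): Promise<void> {
  await fetch(COLLECTION_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ad, profile: who }),
  });
}

// Watch the feed for newly rendered posts and capture any sponsored ones.
const observer = new MutationObserver((mutations) => {
  for (const m of mutations) {
    for (const added of Array.from(m.addedNodes)) {
      if (!(added instanceof Element)) continue;
      const ad = extractAd(added);
      if (ad && profile) void submit(ad, profile);
    }
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```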

We pilot-tested this with 136 people to show how a tool like this might work, and even with that relatively small sample, we were able to demonstrate to participants how their online behaviour shaped their ad environment.

One volunteer, for example, was targeted based on information she had been searching for online about her child’s medical condition. In the abstract, we know this is how online advertising works, but it can be confronting to see how detailed and comprehensive the monitoring and tracking is – and how readily behaviour we might not disclose publicly, around drinking and gambling, for instance, serves as raw material for advertisers.

Making invisible patterns visible

Equally importantly, the tool allows us to see overall patterns that are invisible to individual users – how men might be targeted differently from women, or older people from younger people.

The more data we’re able to capture with this tool, the clearer a picture we’ll have of the new and old forms of stereotyping enabled by dark ads, and the way they shape our information environments.
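
As a rough illustration of what this aggregation involves, the sketch below reuses the hypothetical field names from the earlier example and counts how often each advertiser reaches each demographic group; the names and structure are assumptions for illustration only.

```typescript
// A hypothetical aggregation step over the collected submissions. Counting how
// often each advertiser reaches each demographic group makes differential
// targeting visible in a way no individual volunteer could see on their own.

interface Submission {
  ad: { advertiser: string };
  profile: { gender: string; ageBracket: string };
}

// Build a table: demographic group -> (advertiser -> number of impressions seen).
function targetingByGroup(
  submissions: Submission[],
  groupOf: (s: Submission) => string,
): Map<string, Map<string, number>> {
  const table = new Map<string, Map<string, number>>();
  for (const s of submissions) {
    const group = groupOf(s);
    const row = table.get(group) ?? new Map<string, number>();
    row.set(s.ad.advertiser, (row.get(s.ad.advertiser) ?? 0) + 1);
    table.set(group, row);
  }
  return table;
}

// Example: compare which advertisers reach men versus women, or one age
// bracket versus another.
// const byGender = targetingByGroup(submissions, (s) => s.profile.gender);
// const byAge = targetingByGroup(submissions, (s) => s.profile.ageBracket);
```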

We know we’ll never have as clear a picture as Facebook does, but it’s crucial we find ways to hold it accountable for the potential and actual abuses that take place in the online advertising world.

About the Authors

  • Mark Andrejevic

    Professor, Communications and Media Studies, Faculty of Arts

    Mark contributes expertise on the social and cultural implications of data mining and online monitoring. He writes about monitoring and data mining from a socio-cultural perspective, and is the author of three monographs and more than 60 academic articles and book chapters. His research interests encompass digital media, surveillance and data mining in the digital era. He is particularly interested in social forms of sorting and automated decision-making associated with the online economy. He believes the regulation of commercial and state access to, and use of, personal information is becoming an increasingly important topic.

  • Robbie Fordyce

    Lecturer, Media and Communications Studies, Faculty of Arts

    Robbie's thesis focused on activist and disruptive technologies, and his research continues and advances these interests, expanding into areas such as digital ethics, smart cities, infrastructure, automation, surveillance, and fabrication.

  • Verity Trott

    Lecturer, Communications and Media Studies, School of Media, Film and Journalism

    Verity is a lecturer in digital media research. Her published research explores feminist connective actions, Indigenous women’s use of social media, everyday political talk in third spaces, and analyses of rape culture and feminism in popular media. Her current research projects focus on the intersectional issues of the #MeToo movement, and cultures of toxic masculinity online. Her teaching and research practices involve developing and implementing a range of computational tools and methods for analysing the political, cultural and social dimensions of digital media technologies.

  • Luzhou Li

    Lecturer, Media and Communications Studies, Faculty of Arts

    Dr Luzhou Li's research focuses on digital and global media policy, particularly that of China.
