Published Jan 23 2020

Facial recognition technology and the end of privacy for good

Facial recognition technology has recently been making headlines in a variety of guises. Advances in video technology and AI techniques mean it’s now relatively cheap to install ‘smart cameras’ that boast the capacity to instantaneously identify a person by matching their facial features against a photographic database.
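For readers curious about the mechanics: a typical pipeline reduces each face image to a vector of numbers (an ‘embedding’), and identification then amounts to a nearest-neighbour search over a database of such vectors. The sketch below is a minimal illustration of that matching step; the names, dimensions and threshold are invented assumptions, and no real face model or vendor system is involved.

```python
import numpy as np

# Toy "database" of face embeddings. Real systems derive these vectors
# from a neural network; random vectors stand in here for illustration.
rng = np.random.default_rng(seed=42)
database = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

def cosine_similarity(a, b):
    # Angle-based similarity between two embedding vectors, in [-1, 1].
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, db, threshold=0.6):
    """Return the closest stored identity, or None if no match is close enough."""
    name, score = max(
        ((name, cosine_similarity(probe, emb)) for name, emb in db.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

# A slightly noisy version of a stored face still matches its owner.
probe = database["person_a"] + 0.05 * rng.normal(size=128)
print(identify(probe, database))  # -> person_a
```

The ‘smart camera’ part is simply this lookup run continuously against live video frames; the larger the database, the more people the system can put a name to.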

There’s growing interest in how this technology might be applied across society. Recent developments include using facial recognition to deter marathon runners from cheating, relieving teachers of the burden of taking the class roll, and saving home-owners the worry of losing their door keys.

Applications such as these might seem innocuous enough. After all, if we’re beginning to unlock our smartphones with facial ID, it’s not much of a stretch to consent to using the same technology to pay for a coffee, or pass quickly through airport security.

However, it’s important not to lose sight of the wider significance of these developments. The main purpose of facial recognition technology is not to increase personal convenience, but to facilitate mass surveillance. Alongside applications that recognise a classroom of 30 students or a few thousand runners, this is technology that’s also being deployed to watch over entire cities, states and countries.

The bigger picture of mass surveillance

It’s crucial that our conversations around these various cases of facial recognition pay attention to the broader logics of mass surveillance and control that are being put into place, quite literally, right in front of our faces.

These issues have been pushed to the fore recently by the revelation that a relatively unknown startup has developed a commercial app that can potentially pick out any face from a crowd using publicly available online images. Led by Melbourne-born entrepreneur Hoan Ton-That, Clearview AI is reported to have developed the app to match a photo of someone against a proprietary database of "more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites". The app is already being used by 600 law enforcement agencies to identify suspects from video clips and mobile phone photos.

The apparent success of Clearview AI represents an end run around publicly regulated databases such as those holding mug shots, driver’s licence photos, and passport photos. While Australian legislators debate the appropriate uses of such databases for automated facial recognition, the commercial sector has turned to publicly available images online to amass a far larger database. Clearview, for example, can already claim that its database is six times larger than the FBI’s.

It might well be that the Clearview story will fade quickly from sight. The company has already attracted vociferous criticism from a diverse range of groups and organisations. As Jathan Sadowski notes, the company seems alarmingly naive about the social consequences of its actions, and it seems unlikely that the startup will attract the high levels of funding it was hoping for.

Regardless of what happens to Hoan Ton-That and Clearview, this case raises serious issues that will not disappear. In fact, it’s surprising it’s taken this long for companies to begin pitching such developments. The idea of appropriating the personal profiles that internet users provide so readily to online services and social media has been around for some time now.

At the beginning of the 2010s, the defence contractor Raytheon was touting its RIOT (Rapid Information Overlay Technology) software – promising to give users the capacity to track (and even predict) other people's movements by analysing social media data. Soon after, a tech startup developed a dating app for Google Glass that offered to match snapshots of potential romantic partners with an online database culled from social media profiles – and sex offender registries. Elsewhere, a Russian artist used the FindFace software to trawl the then-popular VKontakte social networking site to track down details of passers-by in St Petersburg underground stations.


Read more: The future of EdTech in schools? Just look at what they're doing in China


It’s important to retain a balanced perspective on these developments. In one sense, it’s easy to buy into the touted benefits of this technology. Who doesn’t want to live in a society where it’s easier for law enforcement to identify and track down legitimate suspects? The New York Times provides several police accounts of the ease with which Clearview’s app has identified suspects who might otherwise have evaded justice indefinitely.

However, the tool is a powerful one with a range of potential commercial uses. This is technology that doesn’t just strip its targets of anonymity; it also allows for new forms of tracking, making it possible to collect information about people’s movements throughout the course of the day, including, for example, which shops, clinics, or homes they visit, and when, along with all the personal information that might be inferred from this activity.
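To see how identification becomes tracking, consider what happens once many cameras feed their matches into one system. The sketch below uses entirely invented data to show how easily isolated ‘sightings’ aggregate into a per-person movement log:

```python
from collections import defaultdict

# Hypothetical stream of match events from networked smart cameras:
# (timestamp, camera location, matched identity). All values are invented.
sightings = [
    ("08:02", "train station", "person_17"),
    ("08:41", "pharmacy",      "person_17"),
    ("12:15", "food court",    "person_17"),
    ("08:05", "train station", "person_23"),
]

# Grouping isolated camera hits by identity yields a movement profile,
# i.e. the kind of day-long tracking described above.
timelines = defaultdict(list)
for timestamp, place, person in sightings:
    timelines[person].append((timestamp, place))

print(timelines["person_17"])
# [('08:02', 'train station'), ('08:41', 'pharmacy'), ('12:15', 'food court')]
```

Nothing in this step is technically sophisticated; once faces can be matched reliably, the tracking layer is a few lines of bookkeeping.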

So, as with any emerging technology, it’s crucial that we talk about the unintended consequences, the overblown expectations, and the possible ‘function creep’ as people extend these tools and techniques in currently unforeseen ways.

From this perspective, there’s plenty to be concerned about. First and foremost, this is technology that’s proven to be fallible in a number of alarming ways. Various tests of facial recognition systems continue to report concerning levels of misrecognition, non-recognition, and so-called ‘false positives’. These failures are notably pronounced when it comes to successfully identifying women, people of colour and other already marginalised groups.
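A back-of-the-envelope calculation shows why even apparently small error rates are alarming at population scale. The figures below are assumptions chosen for illustration, not measured results from any real system:

```python
# Assumed figures, for illustration only.
faces_scanned = 1_000_000     # e.g. a city's commuters checked against a watchlist
false_positive_rate = 0.001   # 0.1 per cent, optimistic for many deployed systems
genuine_matches = 50          # people on the watchlist actually present

false_alarms = faces_scanned * false_positive_rate
print(f"~{false_alarms:.0f} innocent people flagged vs ~{genuine_matches} genuine matches")
# ~1000 innocent people flagged vs ~50 genuine matches
```

Under these assumptions, the overwhelming majority of ‘hits’ would be wrong, and if error rates are higher for some demographic groups, those groups bear a correspondingly larger share of the false accusations.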

Critics of the technology, such as Luke Stark, do not mince their words, arguing that facial recognition amplifies and exacerbates racial oppression. Similarly, Northeastern University professors Woodrow Hartzog and Evan Selinger have described automated facial recognition as “the most uniquely dangerous surveillance mechanism ever invented”.


Read more: Why regulating facial recognition technology is so problematic – and necessary


Second, in terms of function creep, this is technology with a host of potentially lucrative markets – from shopping centres wanting to deliver personally targeted advertisements to shoppers, through to employers wanting to keep tabs on the whereabouts of their workforce.

There’s also a large domestic demand for this technology. Indeed, police officers and Clearview’s investors predict that its app will eventually be available to the public. Smart-home technologies such as Amazon’s Alexa and Ring are already raising concerns over neighbourhood vigilantism and ‘digital domestic abuse’ – adding facial recognition to the mix can only exacerbate such issues.

If the development of this technology continues, we’re likely to find ourselves living in a society in which individual control over personal identity in public is dramatically reconfigured. In theory, we don’t have any expectation of privacy in public, where anyone can see us. In practice, however, we retain a sense of anonymity because we know that most people don’t know who we are, and their memories of seeing us are, for the most part, uncertain and ephemeral.

The development of smart cameras equipped with AI marks a dramatic change – a growing range of our activities can be tracked, stored and linked to our personal information. This can be done at a distance, without our knowledge, as our face becomes a form of personal identification to anyone with access to the app.

The result is the displacement of targeted surveillance by generalised, population-level monitoring that draws on the insights gleaned from combining data about individuals to discern patterns that enable new forms of social sorting and discrimination.

What about regulation?

While debates continue in Australia over proposed legislation allowing law enforcement to tap into a national facial recognition database, we need to be thinking about these issues on a much broader scale.

On an (inter)national level, it’s worth considering which institutions (if any) might be capable of regulating such technology. The initial success of the EU’s General Data Protection Regulation (GDPR) might offer one pathway for other regions to follow, alongside recent bans on the use of facial recognition by public agencies in cities such as San Francisco.

It might also be that those working in the IT industry can push back against the more clearly problematic ‘innovations’ being pursued by the organisations they work for. The success of Google employees in curtailing the company’s military business dealings offers a glimmer of hope for the future (although similar protests by Microsoft and Amazon employees have so far been less successful).

Then again, on a personal level, instances such as the Clearview controversy should make us think much more carefully about the personal data we give away to the platforms and online services that we eagerly consume.

Australia must pay attention

Amid growing calls in the US and elsewhere for outright bans on all forms of facial recognition, it’s time for Australia to begin to pay closer attention. At present, most uses of the technology remain speculative or in the early stages of development. There’s a brief window for us all to have an influence on what happens next. It will be important to regulate not just the use of public databases, but also the creation of large-scale private databases, and the uses to which these can be put.

These are questions that touch not just on law enforcement and commerce, but also on democracy itself. In a total surveillance society, political and cultural freedoms are under just as much threat as personal privacy.

About the Authors

  • Mark Andrejevic

    Professor, Communications and Media Studies, Faculty of Arts

    Mark contributes expertise on the social and cultural implications of data mining and online monitoring. He writes about monitoring and data mining from a socio-cultural perspective, and is the author of three monographs and more than 60 academic articles and book chapters. His research interests encompass digital media, surveillance and data mining in the digital era. He is particularly interested in social forms of sorting and automated decision-making associated with the online economy. He believes the regulation of commercial and state access to, and use of, personal information is becoming an increasingly important topic.

  • Neil Selwyn

    Professor, School of Education Culture and Society, Faculty of Education

    Neil's research and teaching focuses on the place of digital media in everyday life, and the sociology of technology (non) use in educational settings. He's written on issues including digital exclusion, education technology policymaking and the student experience of technology-based learning.
