Published Jun 21 2022

How should we regulate the use of facial recognition in Australia?

The trouble with facial recognition technology in Australia is that it’s being implemented piecemeal, without dedicated regulations and guidelines governing its use.

This is a powerful technology that promises to fundamentally reconfigure our experience of privacy and anonymity. It will also amplify the asymmetry between the large amount of information being collected about us every day, and our own knowledge about how this information is being put to use.

Thanks to an investigation by the consumer advocates at CHOICE, we recently learned that major retailers in Australia are using facial recognition technology to create databases of people who have been involved in theft or disruptions in their stores. These databases are then used to screen incoming customers by matching images of their faces against the stored images.
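How might such screening work in practice? The technical details of the retailers' systems haven't been disclosed, but watchlist matching of this kind generally works by converting each stored face into a numerical "embedding" and comparing incoming faces against those embeddings. The sketch below is a minimal, hypothetical Python illustration of that general pattern (the function names, threshold and random "embeddings" are invented for the example); it is not a description of any retailer's actual software.

# Purely illustrative sketch -- not any retailer's actual system, whose details
# are not public. Faces on a "watchlist" are stored as numeric embedding
# vectors; an incoming face is flagged when its embedding is close enough to a
# stored one.
from typing import Dict, Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_face(incoming: np.ndarray,
                watchlist: Dict[str, np.ndarray],
                threshold: float = 0.8) -> Optional[str]:
    """Return the ID of the closest watchlist entry above the threshold, else None."""
    best_id, best_score = None, threshold
    for person_id, stored in watchlist.items():
        score = cosine_similarity(incoming, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical data: in a real deployment the embeddings would come from a
# face-recognition model applied to CCTV frames; random vectors stand in here.
rng = np.random.default_rng(0)
watchlist = {"person_001": rng.normal(size=128), "person_002": rng.normal(size=128)}
print(screen_face(rng.normal(size=128), watchlist))  # usually None for random vectors

The point of the sketch is simply that every customer who walks in is compared against the database, whether or not they are on it; that is what makes the notification question so important.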

As CHOICE discovered, the notification to customers is inadequate – small, hard-to-notice signs in some cases, and, in others, online notifications that are rarely read by customers. The companies clearly don’t want to draw attention to their use of the technology.


Read more: Digital identity and biometrics: When your face reveals your vaccination status … and more


Something similar is happening with Australian police, who have been using both state and national facial recognition databases, even though the enabling legislation for nationwide databases remains stalled.

Recently, police use of the commercial facial recognition app developed by Clearview AI was found by the Australian Information Commissioner to interfere with Australians’ privacy.

The response by authorities to public criticism of the increasing use of the technology has been to suggest, as one cybersecurity expert put it, that, “if people heard more about these tangible benefits, they might have a different attitude”.

Suspicion on the public’s part

The reluctance of retail outlets and police alike to publicise their use of the technology, which has been criticised for being invasive and, in some cases, biased and inaccurate, suggests they’re not so sure the public response would be supportive.

Indeed, there’s been widespread public opposition to the use of the technology in some cities and states in the US, which have gone so far as to impose bans on its use.

Public opinion in Australia reveals both concern about the invasiveness of the technology, and support for its potential use in public safety and security.

The fact of the matter is that facial recognition technology isn’t going away, and is likely to become less expensive and more accurate and powerful in the near future. Instead of implementing it piecemeal, under the radar, we need to confront both the potential harms and benefits of the technology directly, and provide clear guidelines for its use.

Forum to discuss issues in Adelaide

Former human rights commissioner Ed Santow, who called for a moratorium on the use of facial recognition technology when he headed the commission, is developing model legislation for regulating the use of facial recognition in Australia. Santow, who recently joined the University of Technology Sydney as an industry professor, will present the outlines of his model legislation during a public forum next week in Adelaide.

The forum, which will also include South Australian Greens MP Tammy Franks and the president of the Law Society of South Australia, Justin Stewart-Rattray, will take place in Adelaide because of the city’s recent decision to upgrade its CCTV system to include facial recognition.


Read more: The future of EdTech in schools? Just look at what they're doing in China


The Adelaide City Council has asked the South Australian Police (SAPOL), which operates Adelaide’s CCTV system, not to use the facial recognition technology until legal guidelines are in place, but the police have yet to respond to the council’s request.

However, in a statement to the ABC, SA Police have indicated their interest in using the technology: “There is no legislative restriction on the use of facial recognition technology in South Australia for investigations. Should a facial recognition capability be available, investigators will consider the seriousness of the matter and evidentiary value when determining if it is appropriate to use the technology.”

Whether there should be restrictions on police use of the technology, and what those restrictions should be, is a matter of clear public interest.

Taking the wrong approach

As of now, Adelaide seems poised to implement the new CCTV capability without widespread notification or clear guidelines for its use. Such an outcome would be in keeping with the trend of putting the technology to use first, and only addressing the issues that arise later.

This is exactly the wrong way to introduce the use of a powerful and transformative technology that raises important issues for the future of our control over our personal information.


Read more: Facial recognition survey reveals concerns about accuracy and bias


The rise of the online surveillance economy provides an alarming example of how quickly invasive forms of monitoring and tracking can be normalised when they’re implemented without clear regulation or public deliberation.

In the absence of such guidelines, we face the prospect of the physical world becoming as closely monitored and tracked as the online one in ways that are incompatible with democratic values and civil rights.

The time for public discussion of the risks of facial recognition technology is now – before it becomes baked into the security and marketing systems of our increasingly surveillance-based society.

We hope the Adelaide event might serve as one starting point for this discussion.

This article was co-authored with Australian National University’s Associate Professor Gavin Smith.

 

About the Authors

  • Mark Andrejevic

    Professor, Communications and Media Studies, Faculty of Arts

    Mark contributes expertise on the social and cultural implications of data mining and online monitoring. He writes about monitoring and data mining from a socio-cultural perspective, and is the author of three monographs and more than 60 academic articles and book chapters. His research interests encompass digital media, surveillance and data mining in the digital era. He is particularly interested in social forms of sorting and automated decision-making associated with the online economy. He believes the regulation of commercial and state access to, and use of, personal information is becoming an increasingly important topic.

  • Neil Selwyn

    Professor, School of Education, Culture and Society, Faculty of Education

    Neil's research and teaching focuses on the place of digital media in everyday life, and the sociology of technology (non) use in educational settings. He's written on issues including digital exclusion, education technology policymaking and the student experience of technology-based learning.

  • Christopher O'Neill

    Research Fellow, Communications and Media Studies, Faculty of Arts

    Chris is a postdoctoral research fellow in the Monash node of the ARC Centre of Excellence for Automated Decision-Making and Society. He completed his PhD at the University of Melbourne in 2020; his doctoral research analysed body-sensing technologies such as heart rate monitors and productivity sensors. He has a particular interest in the development of biometric technologies such as facial recognition cameras, and in the implications such technologies might have for conceptions of identity and governance.

  • Xin Gu

    Lecturer, Communications and Media Studies, Faculty of Arts

    Xin’s research concerns the digital creative economy, looking at the democratisation of creativity through vast transformative digital media ecosystems. Her work focuses on the transformation of creative cities and the creative economy under different social, economic and political conditions. She was appointed by UNESCO as an expert under the 2005 Convention on the Protection and Promotion of the Diversity of Cultural Expressions (2019-2022), and heads the Master of Cultural and Creative Industries (MCCI) at Monash. She has published widely on urban creative clusters and agglomerations, cultural work, creative entrepreneurship, cultural and creative industries policy, media cities, maker culture and cyberculture. Xin has worked with policy initiatives in the UK, China and Indonesia to support small-scale local creative industries development services.
