Facial recognition systems and the underlying artificial intelligence (AI) technologies connected to them are advancing rapidly – much faster than we can fully understand the risks and dangers these technologies may pose.
There are growing calls for greater consideration of the ethical implications of their use, and the inherent biases that continue to be exposed, such as discrimination based on ethnicity or gender.
The potential applications of large-scale and automated decision-making afforded by AI become particularly concerning when we closely inspect the underlying theories and datasets that determine their predictions. That being said, there are alternative use cases for this technology, such as interactive artworks that use AI-based emotion recognition technologies in a more constructive and positive way.
Behind basic emotions
In the case of emotion recognition, one of the most popular approaches involves using facial expression to classify the subject into one of the six basic emotion categories: happiness, sadness, anger, fear, surprise, and disgust. The theory underpinning this approach proposes that emotion categories are innate and universal, with each having a unique facial expression that makes it distinguishable from the others.
For example, when we feel happy, we smile; when we feel angry, we scowl. Irrespective of its scientific validity, the basic emotion model is ubiquitous in affective computing research, largely because it's computationally easy to implement: the system only has to detect a face and classify its expression into one of six possibilities.
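To make the simplicity of this model concrete, here's a minimal sketch of the classification step it implies: a fixed set of six categories, and a single winner-takes-all choice among them. The feature scores and function name are invented for illustration; real systems derive such scores from a trained neural network rather than a hand-supplied dictionary.

```python
# The six categories of the basic emotion model, as described above.
BASIC_EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

def classify_expression(scores: dict[str, float]) -> str:
    """Pick the single best-scoring category. Note what the model
    cannot express: blends, context, or any emotion outside the six."""
    return max(BASIC_EMOTIONS, key=lambda e: scores.get(e, 0.0))

# A smiling face might produce a high "happiness" activation:
print(classify_expression({"happiness": 0.91, "surprise": 0.05, "anger": 0.01}))
# → happiness
```

The hard constraint to one of six labels, regardless of how weak or ambiguous the input signals are, is exactly the property the constructed-emotion critique below takes issue with.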
More recent approaches to automated emotion recognition incorporate additional modes of input – including speech, body language, and biofeedback – to infer someone's emotional state.
But these approaches still operate under the assumption that emotions are hard-wired; a system of innate responses that are triggered by external events, a phenomenon that lends itself to direct measurement.
Constructed emotions and emotion recognition technology
The theory of constructed emotion, pioneered by neuroscientist Lisa Feldman Barrett, proposes that basic emotions aren't hard-wired in the brain, but instead are constructed from a combination of more basic psychological processes. This means there are no dedicated "circuits" or areas in the brain that directly correspond to the basic emotion categories we observe in society, and that are currently being used by facial recognition technology.
Constructed emotion raises a number of concerns for AI-based emotion recognition technologies. We're no longer able to attribute a set of physical signifiers (for example, facial expression, heart rate) to a particular emotional state in a context-independent or culture-independent manner.
At best, humans can make predictions about our own emotional states and those of others, and only when considering a situation in its full context. We can't expect to develop emotion recognition systems that are transferable across different groups of people with any degree of accuracy. In fact, the idea of "accuracy" in emotion recognition is rendered meaningless.
In human interaction, the benchmark for emotion perception is no longer accurate recognition, as there's no objective criterion to refer to – instead, we simply strive for agreement in our perception of emotion.
Alternatives to prescriptive emotion AI
Historically, emotion recognition adopts a prescriptive approach, labelling you with a specific emotional category, typically at a distance and without your involvement. Under constructed emotion, systems such as these fail to capture human emotion in its full complexity. If such technologies were to become pervasive in society, they threaten to narrow the range of emotion concepts available to us by limiting the options to only a handful of pre-defined norms.
However, this doesn’t necessarily mean we need to scrap AI-based emotion recognition systems altogether. The concept of emotional synchrony between two people can just as well be applied in the context of a human interacting with a machine.
Real-time emotional interfaces offer the possibility of emotional engagement with a machine – not in any attempt to measure your “true” emotional state; rather, to engage you in the iterative conceptualisation of your feelings and experiences. This process of reflection and reconceptualisation of emotion is said to promote emotional regulation, which ultimately allows you to manage and stay in control of your emotional experiences.
Putting these ideas into practice, Monash researchers have spent the past 12 months developing a new type of human-computer interface that promotes emotional reflection and regulation.
Mirror Ritual is an interactive artwork designed and developed at Monash’s SensiLab, a multidisciplinary creative technologies research facility. The work uses AI-based emotion recognition technologies in a way that is more constructive than prescriptive.
At first glance, Mirror Ritual appears to be a normal mirror – until you look into it.
Directly behind the mirror glass, a machine-vision camera searches for human faces. When it finds one, the face is analysed and the viewer's emotion is categorised by a neural network. The machine-perceived emotion is then used by a second AI as the basis for generating a poem.
As the viewer stares into the mirror, the poem’s text appears on the mirror’s surface.
Through this AI-generated poetry, the mirror “speaks” to the viewer; each poem is unique and tailored to their machine-perceived emotional state.
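The interaction loop described above can be sketched in a few lines. Everything here is illustrative: the function names are invented, and each stage is passed in as a callable so the sketch stays self-contained rather than guessing at SensiLab's actual implementation.

```python
# Hypothetical sketch of one pass of the Mirror Ritual loop:
# camera frame → face detection → emotion classification → poem
# generation → display on the mirror surface.

def run_mirror_ritual(frame, detect_face, classify_emotion, generate_poem, display):
    """Run one pass; each stage is injected as a callable."""
    face = detect_face(frame)          # machine-vision search for a face
    if face is None:
        return None                    # no viewer — the mirror stays blank
    emotion = classify_emotion(face)   # neural-network categorisation
    poem = generate_poem(emotion)      # second AI conditions on the emotion
    display(poem)                      # text appears on the mirror glass
    return emotion

# Usage with stand-in stages, to show the flow of data:
shown = []
result = run_mirror_ritual(
    frame="camera-frame",
    detect_face=lambda f: "face",
    classify_emotion=lambda face: "surprise",
    generate_poem=lambda e: f"a poem about {e}",
    display=shown.append,
)
```

The key design point the sketch makes visible is that the classified emotion is never shown to the viewer as a label; it only conditions the poem, which is what reaches the mirror's surface.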
The mirror is designed to facilitate emotional awareness and regulation in viewers by encouraging self-reflective thought, while aiming to educate and, ultimately, to liberate viewers from the idea that emotions are fixed, well-defined categories that “happen to you”.
Instead, emotions are reframed and moulded through poetic language, allowing the viewer to reflect on their emotional life and personal situation.
While presenting a critique on surveillance-style technologies that are becoming increasingly invasive and potentially disturbing, Mirror Ritual offers an alternative use of AI technology – one that offers a deeper and more personal relationship, where the viewer is empowered to become the co-author of their own narrative.
Researchers hope that one day, such devices may become significant objects in people’s lives. Currently, we mostly think of personal technology as something that offers a service (“Hey Siri, what time is it?”), or alleviates a common chore (robotic vacuum cleaners). But what if technology could help us find more meaning in our lives, becoming something that broadens our emotional and creative landscapes?