Cleaning Emotional Data is a three-channel video installation that addresses new forms of precarious labor emerging within artificial intelligence economies. Specifically, it focuses on the global infrastructure of microworkers who “clean” data to train emotion-recognition algorithms. These workers label, categorize, annotate, and validate large amounts of data, thereby enabling AI to function.
In the winter of 2019, while living in Palermo and researching affective computing systems, the artist ended up working remotely for several North American “human-in-the-loop” companies that provide “clean” datasets to train AI algorithms to detect emotions. Among the tasks she performed were the taxonomization of emotions, the annotation of facial expressions, and the recording of her own image to animate three-dimensional figures. Cleaning Emotional Data documents these microtasks while simultaneously tracing a history of emotions that questions the methods and psychological theories underpinning facial expression mapping.
A number of AI systems, which supposedly recognize and simulate human affects, base their algorithms on flawed understandings of emotions as universal, authentic, and transparent. Increasingly, tech companies and government agencies are leveraging this prescribed transparency to develop software that identifies, on the one hand, consumers’ moods and, on the other hand, potentially dangerous citizens who pose a threat to the state.
The implications of this demand for emotional legibility are further explored in the embroideries of the textiles of the installation. The embroidery juxtaposes the abstract lines of facial micro-expressions detected by the algorithms with untranslatable emotional vernacular from the Sicilian dialect. This joint “fabrication” of computational and human language demonstrates how emotional sensibilities exceed reductive categorization.
Cleaning Emotional Data is the third installment of a trilogy of works exploring how labor, care, and affection are reframed by digital economies and artificial intelligence; it follows Technologies of Care (2016) and Labor of Sleep (2017).
Elisa Giardina Papa (IT)
Aksioma, Institute of Contemporary Art, Ljubljana, and Kunsthalle Mulhouse
Elisa Giardina Papa (IT) is an Italian artist whose work investigates gender, sexuality, and labor in relation to neoliberal capitalism and the borders of the Global South. Her work has been exhibited at the 59th Venice Biennale (The Milk of Dreams), MoMA (Modern Mondays), the Whitney Museum (Sunrise/Sunset Commission), and the Seoul Mediacity Biennale 2018, among others. Giardina Papa received an MFA from RISD, and she is currently pursuing a PhD in film, media, and gender studies at the University of California, Berkeley. She lives and works in New York and Sant’Ignazio, Sicily.
Elisa Giardina Papa worked remotely for several North American “human-in-the-loop” companies that provide “clean” data sets to train AI algorithms to detect emotions. Among the tasks she performed were the taxonomization of emotions, the annotation of facial expressions, and the recording of her own image to animate three-dimensional characters. Cleaning Emotional Data documents these microtasks while simultaneously tracing a history of emotions that questions the methods and psychological theories underpinning facial expression mapping. The tech industry rarely opens up about its philosophical foundations: What is human? Its modeling of emotion and behavior stays opaque. Through three documentary film clips, Elisa gives us insight into the model from the bottom up. The AI model treats single nouns as sufficient to describe complex emotions such as joy or disgust. The process of applying nouns to emotional states is tellingly called “labeling.” These nouns are then translated directly into languages and cultures as different as Arabic, Spanish, and Malay.
As labels, these nouns are then attached to pixelated portraits in a working process called “click work”: somebody sitting in front of a screen has to decide, as fast as humanly possible, which label to click, thereby attaching it to a never-ending flow of portraits. Time for introspection is scarce, since the pay is low. With such training, algorithmic bias is systematically built into any AI application. To reduce algorithmic bias, work has to start with the creation of meaningful models, which should then be trained with quality data. For STARTS, Elisa’s work serves as a beacon to direct the development of a European AI, grounded in our naturally given diversity of cultures and languages.