AI and Robotics: “Imperfect Versions of Our Own Minds”


What can we human beings learn about ourselves from artificial intelligence (AI) and robotics? What are the social and religious consequences of technological progress? And how human are robots, actually?

“As human as we make them,” is Dr. Beth Singler’s answer. The anthropologist and expert in digital ethnography at the University of Cambridge’s Faraday Institute for Science and Religion does research on the implications of AI and robotics. She’ll talk about her specialty field on September 7, 2018 at the Ars Electronica Festival’s Theme Symposium. In this interview, she gives us a few preliminary insights.


Your talk at the Ars Electronica Festival is entitled “The Fractured Mirror: Reflecting on Artificial Intelligence and Us”. Figuratively speaking: Why is the mirror fractured?

Beth Singler: The mirror is fractured, imperfect, because as we consider the future of artificial intelligence and what we want it to be like, we work from our existing conceptions of what we are now and what we want to be. But such conceptions are formed through our own fallibilities, biases, assumptions and history of ideas. The first AI technologists defined human intelligence in terms of being good at maths and at playing games of strategy. Because those technologists were good at maths and at playing games of strategy! Human intelligence is much broader, richer and much more social than that. But we can consider the mirror that AI presents and find out where these imperfections exist. What are the moments of the 'uncanny' that give us pause and make us think? When do we find ourselves disturbed by the machine, and what does that tell us about what we think the human is? Of course, our conceptions of what the human is change over time, but in this era, with this advancing technology, there is a potential space for a conversation about moving forward with a more intentional conception of the human being fit for the future.

In your work, you reflect on how advances in AI and robotics affect us as human beings, in social as well as religious terms. What do you consider the biggest challenges we are faced with right now in this respect?

Beth Singler: A lot of my work is about looking at the formational narratives that underpin assumptions about AI. And often the most engaging narratives for the public are those that are very clearly dystopic or even apocalyptic. Hence the abundance of Terminators in articles about AI! However, I am less concerned by this kind of future than by the 'now' that we are living in, where there are already 'invisible killer robots' – the algorithms that we increasingly rely on to make decisions about our individual futures and opportunities. Society's reliance on the decision-making abilities of AI will have broad socio-economic impacts, and I am very wary of anyone who draws overly simple historical analogies with the previous Industrial Revolutions, or who is too idealistic about humanity's ability to survive and thrive without 'work', that which has defined our purpose for so long. Religions have long attempted to provide alternative explanations of what the human is for, and to answer questions around human dignity. They also enable communities of support for the vulnerable. Whether you are religious or not, and I am not, there will be a need for both these resources, and contrary to assumptions about secularization we might well see an increase in religiosity in an automated age.

In your documentary series “Rise of the Machines”, you examine very human concepts like pain or good in machines. This inevitably raises the question: How human are machines?

Beth Singler: They will be as human as we make them, I suppose, with the caveat that each person working towards that end will be working with their own understanding of what the human 'is'. And even when the technology is not very human-like, our tendency to anthropomorphise will still fill in the gaps. Take Sophia, the Hanson Robotics robot, for example. Her AI tech is not that advanced, but the presentation of her in very human situations – being interviewed on morning chat shows, doing a fashion shoot, chatting with Will Smith – as well as her android form only helps that anthropomorphisation. Is her presentation in this way disingenuous? Well, the creators have sometimes talked about her being a scripted artwork or a social experiment in human/machine interactions, but not that often or that loudly. And even when an AI is completely disembodied and absolutely no claims about human-likeness are made beyond its skill in a specific traditionally human intellectual domain, as in the case of Google DeepMind's AlphaGo, humans will still infer personhood onto the programme. Fanart of AlphaGo shows it exultant in victory in a humanoid form, for example! We may actually never be able to fully judge the human-likeness of a machine because of our inability to pass the Turing Test as humans! That is, we fail to spot the 'passing' machine and constantly draw ourselves into relations with the artificial.

What do AI and robotics reveal about us as humans?

Beth Singler: Primarily I think that our focus on AI and robotics tells us much about our collective desire to create. There is a wonderful novella by Catherynne M. Valente – Silently and Very Fast – where she writes: “Humanity lived many years and ruled the earth, sometimes wisely, sometimes well, but mostly neither. After all this time on the throne, humanity longed for a child.” The pursuit of Artificial Intelligence is the pursuit of an other – like us but perhaps better in some way and one that can help us to fulfil our hopes for the future. Much like a child. However, our fears about the future of AI and robotics also show us how we fear our creations, how we know that we ourselves are very fallible and how we suspect that our creations will betray us because they are like us. Frankenstein is an obvious working out of this narrative, and this year being the anniversary of the novel’s publication has encouraged a lot of public discussion about AI and Frankenstein. I’ve spoken about it in a variety of places including the Edinburgh Science Festival and there has been agreement from those audiences that we should be worried about the hubris of the ‘mad scientist’ with regards to AI. But it’s deeper than just the caricature of the mad scientist. The urge to create and our fears about other minds being imperfect versions of our own minds and being rebellious are very revealing about how we understand ourselves as human beings.

This year’s festival theme, “Error – the Art of Imperfection” is a follow-up to last year’s “AI – the Other I”, as a way of adding a call for social intelligence to the current enthusiasm for the digital world and AI. What is your opinion on the matter?

Beth Singler: Social intelligence is absolutely necessary, and not just for AI, but also for our own understanding of AI and for how we plan the uses of this potentially world-changing technology. Of course, in some ways this conversation is not new: we've imagined creating other beings for centuries, and we've used technology to change the world for millennia. The means by which we've chosen to bring some technology into society and implement it are often the result of chance, expediency or profit! Careful thinking about impact can come too late. However, would the world be as it is now, with all its good aspects as well as the bad, if we hadn't wholeheartedly embraced the technology of fire, writing, electricity etc.? In going forward into the future, one presumably with more and more examples of AI interacting with humans, we need to reflect upon where they are similar and where they are different to us in order, through those imperfections, to develop a greater understanding of who we are and who we want to be.

Dr Beth Singler is an anthropologist and digital ethnographer exploring the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics. As part of her work she is producing a series of short documentaries. The first, Pain in the Machine, won the 2017 AHRC Best Research Film of the Year Award. Beth is also an Associate Research Fellow at the Leverhulme Centre for the Future of Intelligence, collaborating on a CFI/Royal Society project on AI Narratives. Beth has appeared on Radio 4's Today, Sunday and Start the Week programmes discussing AI, robots, and pain. In 2017 she spoke at the Hay Festival as one of the 'Hay 30', the 30 best speakers to watch. Also in 2017 she was one of the Evening Standard's Progress 1000 – the list of the 1000 most influential people in London. She has also spoken on AI and human identity at the London Science Museum, the Cheltenham Science Festival, the London Barbican, the Being Human Festival, the Cambridge Festival of Ideas, the Edinburgh Science Festival, and New Scientist Live.

Beth Singler will give a talk entitled “The Fractured Mirror: Reflecting on Artificial Intelligence and Us” at the Ars Electronica Theme Symposium in POSTCITY Linz’s Conference Hall on Friday, September 7, 2018.

To learn more about Ars Electronica, follow us on Facebook, Twitter, Instagram et al., subscribe to our newsletter, and check us out online at https://ars.electronica.art/news/en/.
