Q & A with Ellen Broad – Author of Made by Humans
Don’t treat AI as though it’s something godlike, “better than” human, says Broad.
In this Q&A, Ellen Broad explains the literary influences in Made by Humans and dispels pervasive myths and stereotypes about the processes of designing artificial intelligence.
Made by Humans explores our role and responsibilities in automation. Roaming from Australia to the UK and the US, elite data expert Ellen Broad talks to world leaders in AI about what we need to do next. It is a personal, thought-provoking examination of humans as data and humans as the designers of systems that are meant to help us.
Q–Your book opens with an epigraph from Ursula K. Le Guin. Why did you want to lead with that?
Ursula K. Le Guin died while I was on an overseas trip researching the book, in January this year. I knew I wanted to recognise her influence in some way. I found an old Guardian interview from 2005 where she talked about the importance of fantasy as a genre to help us understand power and what it does to people. That became the epigraph.
A friend gave me my first Ursula K. Le Guin novel for my eighteenth birthday, in French (I was moving to Paris), and for a long time I assumed she was a French writer. It wasn’t until my mid-twenties that I started reading her novels in earnest. The Lathe of Heaven stayed with me for a long time. She wrote about ordinary human beings, human problems, in fantastical scenarios. She made me think about the choices we make about technology — how we shape it, the agency we have, the consequences of our actions — rather than technology being something that is always acting upon us.
Q–What are the biggest challenges involved in writing about artificial intelligence?
What “artificial intelligence” means is always changing. Something is only called “AI” while it’s new and exciting, and then it gets its own ordinary-sounding name, like “virtual assistant”. My book is going to be out of date in no time.
Q–Are the benefits of artificial intelligence limited to those who can either create it or buy it? How can we ensure that everyone profits from this technology?
I think there are lots of ways in which, as a society, we can benefit from well-designed AI systems – whether they’re helping us predict the weather, manage climate change, manage disease or make smarter planning decisions. I do worry about the economic benefits of AI accruing to a very small number of companies, though, because they control enormous amounts of data and have acquired lots of AI startups. I think discussions around access to data – who should have it, how should it be used, what are the consequences of data monopolies – will take on increasing importance.
Q–Here’s the AI version of the ‘Passenger and Train’ moral conundrum. A self-driving car is crossing a narrow bridge, when a child suddenly darts out in front of it. It’s too late to stop; all the car can do is go forward, striking the child, or swerve, sending itself and its passenger into the rushing river below. What should it do?
There is no “right” answer here. This is why it’s a conundrum. The trolley problem has preoccupied a lot of the discussion around the ethics of AI, because a driverless car has to make *a* decision in this scenario. We’ll come up with something, but we won’t have solved an ethical dilemma. There’s no ‘solving’ the trolley problem.
I am always a bit bemused when we talk about teaching machines ethics using the illustration of driverless cars. A driverless car that has absorbed the trolley problem would hopefully be as flummoxed and uncertain as the rest of us when presented with that scenario.
Q–The most obvious examples of day-to-day interaction with AIs are inside our phones – Siri and Alexa. How does one go about developing artificial emotional intelligence?
I don’t know. I suppose for artificial intelligence to develop true emotional intelligence, it would need to have emotions, given emotional intelligence is about recognising feelings in yourself and others. And given we have no single way of locating, defining and identifying emotion – lots of disciplines seek to demonstrate they can, in lots of different ways – I don’t know how we would satisfy ourselves that a machine feels emotion, in order to say emotional intelligence has been developed. But that’s a rich subject for science fiction films, literature and television.
Q–Is it “appropriate” for humans to design the iterations of artificial intelligence, to program its machine learning, when those machines could inherit our prejudices and notions of fairness?
It’s all we have. Who else is going to design AI if not us? All we can do is be aware of our prejudices, our biases, our differences. And not treat AI as though it’s something godlike, “better than” human.
Q–Why is language so important in discussing tech, policy and decision-making? You include a note on language in the book, and an anecdote from Hemingway that you relate to building ‘stories about data’. And there’s a poetry to the way you write about technology and the ineffable power of data, especially in ethics and tech.
We hide in language, and we exclude people using language, when we talk about technology. Sometimes we hide in technical terms because we don’t quite understand them well enough ourselves to communicate them in different ways. Sometimes we hide because we want to avoid difficult questions. Sometimes we just don’t realise how deeply we’ve absorbed words that a lot of people don’t use.
When technology is affecting so much of our lives, and driving so many new inventions and interventions, this language barrier is inexcusable. This is a conversation everyone needs to be part of – the price of entry shouldn’t be a computer science degree or an IT career.