
Artificial Intelligence: My life among the algorithms

by Jory Fleming
26 July 2019

We asked several writers to reflect on what it means to be human in the light of Artificial Intelligence. To start, Jory Fleming describes his work in AI, with the help of a college chapel

PA: A child holds the hand of a robot at the RoboCup 2019 event in Sydney, Australia, in July

A RAY of light sneaks through the airplane window, hitting me square in the face. I struggle awake to discover that my service dog, Daisy, has appropriated the limited floor space almost entirely for herself. I look out to see a glimmer on the horizon: I have arrived in England.

This is my first overseas journey, and a long way from my home in the south of the United States. For months, I’ve been dreaming about what it will be like to study at Oxford University, after being contacted by the Rhodes Artificial Intelligence Lab (RAIL), a student group exploring the power of Artificial Intelligence (AI) to solve problems and drive social change.

The last stars fade from view out the window, and I remember the group’s email sign-off: Ad astra (“To the stars”). I knew I was interested the moment they got in touch, because I’d seen the change that AI can bring: change so sudden that it shocks, but subtle, as a meteor streaking across the night sky changes the starry landscape in a flash.

 

AS A geographer, I have always paid attention to changing maps, especially noticeable in the incessant alterations to Google Maps. First, it was buildings, magically appearing all across the world, thousands at a time. Now, they are added so fast that you can find new neighbourhoods being built before roads are added.

In 2016, it was “areas of interest”, to help you to locate the fun places to shop and eat. Last year, it was personal maps that knew which chemist I used, where I had booked my hotel for a holiday, and where I needed to be if I accepted an email invitation. “Algorithmically created”, Google says in its press releases.

With each update, a sinking feeling emerges: you imagine you can hear the faint cry of a skill you once learned that will never be in demand again. After all, why bother if a machine can do it? It’s more efficient that way.

Yet I take heart. For now, only I can create a story with a map that will make someone feel sad, or hopeful, or capture their interest. For now, I am better than the algorithm. But for how long?

 

A FEW weeks after I arrive, I’m at the first RAIL meeting. Time flies as I try to keep up, furiously writing in my notebook. “AI is any intelligent, autonomous system, but the current research push is in machine learning,” the speaker says.

JORY FLEMING: Jory Fleming and his service dog, in Oxford

It is also at about this time, observing the sun start its ritual descent into the never-ending British winter, that I hear an enchanting sound flowing in on the breeze across the Worcester College quad. It’s coming from the choir, practising an anthem in the old chapel. I see a sign: “Evensong — 6.15 p.m.”

An hour later, I return to the towering wooden door, ancient and imposing. Tess, the college chaplain, is waiting outside, and the light of her smile draws me in. As I take my seat, I am struck by the wild pink columns, and the canticle resounding against walls adorned with paintings of various creatures offering praise to God.

One line, “All ye Fowls of the Air”, is written straight above a painting of the flightless dodo, who looks terribly surprised to be hearing this particular commandment. I laugh, and before I’m transported away by the choir I get a feeling that this is the right place for me. It reminds me of home, which was also full of pink, birds, and churches, although not usually all together.

 

I BEGIN to work with my team in RAIL, analysing data for a music-based education project. Together we watch musical interpretations of mathematics, and collaborate on code. I’m learning by tackling large problems and celebrating small victories. Within our tiny team, we are united, pursuing a shared goal of chipping away at social problems with our snippets of code.

Yet beyond our team, there is an entire network of actors involved with AI who have different goals. AI has created an entirely new ecosystem that is full of uncertainty. Those on the outside are unsure how to respond. Economists argue about whether automation will save our economies or destroy them. Data is the new oil, they say, before squabbling about whether it will power the future or set it aflame.

Politicians and governments make ever grander statements about AI. I wonder, are our leaders plotting out the future of our society responsibly?

I ask Jade, one of RAIL’s co-directors, who attended an AI policy workshop at the World Government Summit in Dubai: “Are we in good hands?”

“[Most] people have no idea what they’re talking about,” she says. She goes on to explain that AI is being hailed as a magic potion, a cure for all of society’s ills. There are not enough who understand the blurry and ever-shifting line between what is possible and what is not, never mind what the best course is for society.

 

AI IS not magic; it is mathematics. Today’s most common machine-learning algorithms hoover up data and spit out a useful prediction. In an image-classification programme, for example, thousands of labelled pictures are fed in so that the algorithm can make a useful judgement about a new, unknown input — perhaps your latest and greatest cat picture.

“Kitten — 97 per cent,” the algorithm might reply. The capabilities of AI are still limited in fundamental ways, but its powers of prediction, classification, and clustering are sophisticated enough that futuristic innovations are emerging: driverless cars, speech and text translation, the mapping of buildings, the scanning of medical images, and more.
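
For the curious, here is a minimal sketch in Python of that last step: turning a trained network’s raw scores into a labelled prediction with a confidence. The labels and the example scores are hypothetical stand-ins, not the output of any real model.

```python
# A minimal sketch: turning a classifier's raw scores (logits) into a
# labelled prediction with a confidence. The labels and example scores
# are hypothetical stand-ins for a real network's output.
import numpy as np

labels = ["kitten", "fire truck", "dodo"]

def classify(logits):
    # Softmax converts raw scores into probabilities that sum to one.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = int(probs.argmax())
    return f"{labels[best].capitalize()} — {probs[best]:.0%}"

# Hypothetical scores for a new, unknown cat picture.
print(classify(np.array([4.5, 0.3, 0.5])))  # prints "Kitten — 97%"
```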

In 2016, many were surprised when DeepMind’s AlphaGo beat a human Go master at the game once thought to be beyond the capability of machines to learn. Today, algorithms that beat us at our own games are beginning to surprise us less. These futuristic agents have no agency, however; they are at the mercy of the intentions of their human operators.

I attend a RAIL discussion on cyber-security, where a fellow student demonstrates how to fool a machine-learning algorithm. He puts up two photos of a kitten, the second of which has had a few hundred pixels maliciously altered. “Kitten,” the algorithm replies to the first; “fire truck” to the second. I chuckle, but the mirth quickly fades as we discuss the implications for the road signs read by driverless cars.
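
The talk does not go into the method, but one well-known way of crafting such images is the fast gradient sign method (FGSM): nudge every pixel a tiny step in the direction that most increases the model’s error. Below is a hedged sketch using PyTorch; the tiny untrained model and the random stand-in “photo” are purely illustrative assumptions, not the student’s actual demonstration.

```python
# A sketch of the fast gradient sign method (FGSM), one well-known way
# to craft adversarial images. The tiny untrained model and the random
# stand-in "photo" here are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # stand-in photo
label = torch.tensor([0])                                        # suppose 0 = "kitten"

# Find the direction in pixel space that most increases the model's error.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Step each pixel slightly in that direction: far too small for a human to see.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# Against a real, trained network, a perturbation this small is often
# enough to flip the prediction from "kitten" to something absurd.
print("before:", model(image).argmax().item())
print("after: ", model(adversarial).argmax().item())
```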

 

THE ticker “fake news” is unreeling across the bottom of the television screen as I read a blog post by the Chinese AI company Baidu on my laptop. They’ve created an algorithm which can replicate someone’s voice from 30 minutes of speech audio. I listen to their examples. For the first pair, I can tell which is the machine. My brief moment of relief dies away, because, for the second, I am less sure. I look at the news and wonder about the implications.

Shortly afterwards, the Future of Life Institute releases a video on autonomous weapons which features a dystopian society, a future where machines have the authority to decide to kill. Why would anyone want to create a weapon with capabilities even remotely approaching that?

Later, I read that Google’s senior engineers are protesting against a partnership on drones with the United States Department of Defense, arguing that it comes too close to crossing the ethical line of “Don’t be evil” which Google bandies about.

 

IN HIS book Superintelligence (OUP, 2014), the Swedish philosopher Nick Bostrom envisions potential futures for AI. He looks beyond current technology to a hypothetical day when a machine becomes truly intelligent, able to apply knowledge across domains and think in unique and creative ways.

Co-opting evolution itself, it’s possible that, one day, an AI could be smart enough to know how to make itself even more intelligent, and continually rebuild itself until some indeterminable ceiling is reached. This scenario is full of catastrophic risk, with a potential end to humanity. There is no guarantee that such a creation would respect the wishes of its creators.

Bostrom writes that the shepherding of a successful development of AI is “quite possibly the most important and most daunting challenge humanity has ever faced”. Yet it is also full of opportunity: a potential future where every human problem that can be solved is whisked away into the never-ending void of zeros and ones.

Even if AI never approaches super-intelligence, advances could have outsize effects on society. A survey of the field’s top researchers produced an average estimate of a 50-per-cent chance that AI will outperform humans in all tasks within 45 years.

ALAMY: DeepMind’s CEO, Demis Hassabis (left), and the Go world champion Lee Sedol, of South Korea, after the first game of the Google DeepMind Challenge Match against Google’s AlphaGo programme. AlphaGo won all but the fourth game

I begin to wonder: even if AI turns out to be benign, can people cope with such a transition? I ask Jade about her research into AI policy and governance, and how she thinks we will respond to a rapidly changing future. “There’s a plausible scenario where people feel out of control. . . After all, what’s the essence of humans above intelligence, above what we control?”

“What determines whether you’ll respond positively or destructively?” I ask. There’s a pause before she answers. “It’s about what you ground yourself in.”

 

REFLECTING on our conversation transports me to the floodplains of home, and the mighty bald cypress trees found there. These gentle giants, once blanketing the entire region, are now found only in isolated pockets. They ground themselves in unstable soil, their roots locked in a delicate dance with an environment that has wide seasonal swings. Their roots poke regularly above the ground, in structures called “knees”.

Their function remains a mystery, but my favourite theory is that they are nodes in a network, and that the trees interweave their roots to provide stability during the flood season. My daydream makes me wonder if the ancient trees could be a model, whispering through the forest that, to grow towards the stars, you have to be rooted in solid ground.

While I feel uprooted by my whirlwind journey to Oxford, I find myself grounded by my deepening connection to the college chapel community. It becomes my habit to begin the day in the quiet stillness of morning prayer, and end it with the sounds of evensong. As I begin to send out roots in this new community, I start to form connections with the people in the pews around me.

In morning prayer, I often find myself sitting with Nicola, who, I discover, is a doctoral student researching applications of AI in medical imaging. She is developing an algorithm to differentiate regions of the brain for diagnosing Alzheimer’s disease. Coming from a background in engineering and machine vision, she loves her current work because it could, one day, be used in clinical settings. She tells me about engineers who are “more interested in trying to outperform each other in identifying cat pictures than solving real-world problems”.

Algorithms are difficult to integrate into health care, she says, because “patients and doctors don’t like a black box. You can trust a person, but it’s harder to trust a computer you don’t understand.”

We talk about trust and community, and how to build a better path forward for AI. I confide to her my concerns for the future, and she shares a story from her past. The north of England, where she is from, still feels forgotten and reels from the effects of a technology-driven downturn. New jobs were created somewhere else.

She believes that technology is neutral, but its effects on people are real. She remains hopeful: “I really love what I do. It has an obvious end goal that is good for people. The gains we could have are really important. AI can’t solve everything, but it can solve this amazing array of problems. We can embrace AI if we also have a plan for people.”

For both Nicola and me, the AI communities we are a part of have been wholesome; but we are on the inside. It is easy to see the faces of those around us solving problems and see nothing amiss; but I realise that this is not always the case.

People talk about AI as if it’s something happening to them instead of something they control. From the outside, it is easy to feel powerless, adrift in a churning sea of technological change that seems to sweep us along towards a future landscape which we cannot see. Within the sanctum of tech, it often feels as if there are no doors to enter the conversation, imposing or otherwise.

 

WHILE some doors are closed, others open in surprising places. I learn that the Bishop of Oxford, Dr Steven Croft, is coming to give a dinner speech at my college, and he agrees to sit down for an interview with me.

Dr Croft is one of the members of the House of Lords select committee on Artificial Intelligence, which recently released its recommendations to the UK Government (Comment, 19 January 2018). When I ask why he decided to join the committee, he says that he has always had an interest in technology, in part inspired by his son’s work in the digital industry, and a friend who challenged him to take science and machine learning seriously.

He has written that to make the best future for AI, “the only way is ethics”, something that leaders in industry largely agree on.

In his book, Bostrom expresses concern that no ethical theory commands majority support among philosophers. He suggests that, in the future, we may need to settle ethical debates into a single value-set for humanity if we want machines properly to inherit our values.

ALAMY: An Android 5.0 Lollipop mobile-operating-system statue at Google’s headquarters in Mountain View, California

I ask Dr Croft whether ethics can be solved or optimised in this way. He counters that it is more important that “the ones watching AI development are ethically sourced and have access to the wider traditions in philosophy and religion”. He continues: “Things have already been developed with inadequate ethics,” something he believes comes, in part, from a dominant culture in Silicon Valley of “extreme libertarianism”.

If the goal of the AI community so far has been to distil humanity into a single set of coherent values, Dr Croft takes a different view, arguing that the Christian response incorporates the diversity and complexity of the human condition.

“Being human is a composite of physical, intellectual, and spiritual components that can’t be reduced to any of its parts; a whole that includes our infirmity and our mortality,” he explains.

I ask him about the future challenges that society faces as a result of AI development, and whether there will be an identity crisis in how people view each other and view God. “The unifying question, for me, is the question of human identity,” he replies. “The question of what it means to be human is a bigger question than what it means to be divine. Human identity is grounded in knowing you’re loved by God.”

I ask about his assertion that “It’s time for faith to once again ask hard questions of science.” If he had the chance, he says, there are questions he would ask the tech elite. He lists several categories: “Questions of humanity, questions of ethics, questions of humility, and questions of justice.” AI technology, he says, should be for everyone, but we won’t benefit if we don’t ask the right questions.

 

IN HIS talk, Dr Croft urges us to dare to use our voices to enter into a democratic conversation while we still can. If even the glimmer of utopia is possible, we should reach above ourselves and not allow ourselves to be trapped in a cycle of “technological determinism” in which we cede our collective ability to shape the future to a small group of elites.

Afterwards come questions from students, flickering candles casting a warm glow over wide-ranging enquiries about avoiding plutocracy, the future state of society, and the ethics of power, relationships, and the human condition.

Nicola asks one of the last questions, after describing how algorithmic bias is leading to decisions that are sexist and racist: How should we think about AI making decisions when it learns these things from us?

Dr Croft suggests that AI is like a mirror, showing us only what we already know: the fallibility and complexity of the human condition. It reminds me of 1 Corinthians 13.12: “For now we see in a mirror dimly, but then face to face. Now I know in part; then I shall know fully, even as I have been fully known.”

I wonder: what will AI show us of ourselves? And, perhaps even more importantly, if we don’t like the image it shows us, what can we dream of instead?

 

THE technology elite has carefully sculpted an image of innovation and societal good that has captivated the public, striding into our halls of power to proclaim the gospel of data-driven decisions. Give us the keys to the kingdom and watch us create, they seem to say.

But technological innovation has recently seemed much more sinister, breaching our collective trust in service of profit and power. The research company OpenAI halted publication of its new text-generation model because it feared that the model would be misused. Explaining this decision in an interview with The Guardian, the company’s head of policy used the term “the escalator from hell” to describe nefarious actors’ keenly eyeing the fruits of their research, seeking to turn it to their own ends.

Election meddling is stirring up the worst of our hate, fear, and anger, and threatens to rip communities apart, while the algorithms of shady companies undermine our collective ideals of justice and fairness.

This conflagration threatens to destroy the current networks which bind us all together, and, as these links between us and the technology we create erode, we are in danger of losing our collective way. If data is the cause of the fire, then it cannot also be the solution.

 

AI HAS taught me a great deal about data, but also about people. In a strange sense, it has even brought me closer to home, and the stories I already knew from my own religion.

At a midnight watch at Christ Church Cathedral, I see the flickering flames of dozens of candles arrayed in a small corner chapel, whispering into the void that every individual’s story is a new light which is not so easily extinguished. As the clock strikes midnight, I realise that I do not know how our story ends; but I know that I will be watching: watching the birth of AI, and the changes which will come ever faster.

PA: The installation of the first robot on the Fiat 500 BEV fitting line at Fiat’s Mirafiori factory in Turin

Bostrom ends his book by saying: “The challenge we face is, in part, to hold on to our humanity.” I recall my conversation with Dr Croft, and his observation that “the Christian tradition dared to believe that God became human, but we still struggle with the nature of human distinctiveness.” How can we hold on to something we are still discovering?

My college chaplain, Tess, turned this question on its head. “We shouldn’t ask what is distinct about humans, but, instead, what is lacking in AI.”

Tess and I have been talking frequently as I prepare for my confirmation. Diversity, creativity, relationship, love, and “the hope found in the stories of the everyday” are what she thinks is missing from the AI conversation.

There is a dominant story about AI — one where our economic, political, and societal future is predetermined by data from a new tech elite. This story, Tess tells me, “is terrifying. It makes me feel helpless.” But this vision of fear and control is someone else’s dream. My confirmation ceremony is a mark of my new path, as Tess sends me out “rooted and grounded in love [to] bring forth the fruits of the Spirit”.

The secret of stories is that they can be retold and reinvented. We are never locked into a path we didn’t create. We are free to choose our own path, but we must know where we are going.

Stories offer something that data never can; but their ending is only as bright as we can dream up. The fuel of dreams is diversity, and AI needs more people to join together to watch, listen, and speak out. We shouldn’t leave it to an algorithm to predict the future: we should create the future we want.

I dare to craft my own dream, one of a future where AI does not reinvent us, but, instead, we use it to reinvent ourselves. What is your dream?

Jory Fleming is reading for an M.Phil. in environmental change and management at Oxford University, on a Rhodes scholarship. He formerly co-directed the Rhodes Artificial Intelligence Lab.
