A couple of Vatican Observatory folks recently had articles on Artificial Intelligence (AI) published in the second volume of Nexus: Conversations on the Catholic Intellectual Tradition. Those folks were Fr. Adam D. Hincks and Br. Guy Consolmagno, both of the Society of Jesus. Fr. Hincks is an astronomer at the University of Toronto and an Adjunct Scholar with the VO. Br. Guy is, of course, Director of the VO. Nexus is a new on-line annual journal produced by Loyola University Chicago.
According to the Nexus website,
Nexus is a robust space to encounter and explore the “living tradition” of Catholic thought and practice in order to be leavening to the scholarly community of Loyola University Chicago, to the Church, and to our local, national and international partners and audience.
The topic of Nexus Volume II is “Robots and Rituals: Reflections on Faith in the Era of Science and AI” (CLICK HERE for the volume). Astronomers have a big role in the volume. Three of its twelve articles are by astronomers: Fr. Hincks (“Integrating the Inquirer: A Jesuit Approach to Interdisciplinarity”), Br. Guy (“Intelligence? Understanding? Wisdom?”), and Pamela Gay of the Planetary Science Institute (“Thy Power Throughout the Universe Displayed”).
Sacred Space Astronomy readers will of course connect with the astronomy in “Robots and Rituals”, but a couple of other topics in the volume will probably be familiar, too. One is the matter of religious disaffiliation and science. The other is science and history and how these are perceived.
Disaffiliation appears in the volume’s “Introduction” article, by Joe Vukov and Michael Burns of Loyola. They write:
We find ourselves in an era of rapid technological progress…. and we can feel the results in our daily lives. Without taking time to reflect carefully on these changes — and without humanizing them — we run the risk of being swept away. Either by hyperbolic naysaying or unreflective adoption.
Meantime, the Disaffiliation Crisis seems to show no signs of abating. One of the primary drivers of disaffiliation is a perceived conflict between science and religion. Among young adults with a Christian background, 29% feel “churches are out of step with the scientific world we live in,” and another 25% believe that “Christianity is anti-science.” When asked about disaffiliating from Catholicism specifically, 36% indicated that the conflict between science and religion was an “important” or “very important” reason for leaving….
The latest disaffiliation news in the Catholic world has involved U.S. Latinos. A new analysis from the Pew Research Center released April 13 showed that the percentage of Hispanic adults identifying as Catholic declined from 67% in 2010 to 43% in 2022, while over that same period (just twelve years!) the percentage who identify as religiously unaffiliated (describing themselves as atheist, agnostic or “nothing in particular”) increased from 10% to 30%. According to the study, half of Latinos under the age of 30 identify as unaffiliated, versus one-fifth of Latinos over the age of 50. That is a big change across an age gap of just two decades. An OSV News article about the Pew Center’s work pointed to the U.S. Latino population being relatively young, and to young people in the U.S. strongly tending toward disaffiliation.
Vukov and Burns write that, given the disaffiliation phenomenon,
Clearly, we must foster dialogue between the sciences and people of faith. And we must do so with an eye to the novel questions being raised by new forms of technology and their applications to our lives. The Catholic Intellectual Tradition — with its expansive view of both learning and the reach of religious belief — is well-poised to lead the conversation. This volume of Nexus is an attempt to do just that.
Fr. Hincks’s contribution to this particular conversation is general, focusing on the Society of Jesus, its long history of engaging with new ideas in the culture, and whether and how the Society can contribute to the AI question. Br. Guy’s contribution focuses on the fact that computers are different from the human brain. He draws on the very specific example of using computers and “clever algorithms” to try to calculate meteor trajectories.
But central to the volume is the contribution from former Google engineer Blake Lemoine, interviewed by Vukov and Burns. Lemoine has been in the news for claiming that Google’s AI called “Language Model for Dialogue Applications”, or “LaMDA”, is sentient. According to the Washington Post, Lemoine worked for Google’s Responsible AI organization, testing whether LaMDA used discriminatory language or hate speech. When he “talked” to LaMDA about religion, he noticed that it “talked” about its rights and personhood. He went on to work with a collaborator to present evidence to Google that LaMDA was sentient.
Google was unpersuaded. “Our team — including ethicists and technologists — has reviewed Blake’s concerns,” the company said through its spokesperson Brian Gabriel, “[and] the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” Lemoine took his concerns public. Google fired him.
Concerns over AI and sentience make too much of clever algorithms, probably because those algorithms come wrapped in modern electronic technology. When IBM’s “Deep Blue” beat world chess champion Garry Kasparov in 1997, much was made of it. There were the usual remarks about robot overlords and so forth. But what beat Kasparov was an algorithm, written by human beings. As my son put it, what Deep Blue showed was that chess can be won by following a recipe (an algorithm).
You do not need a computer for an algorithm. Here is an algorithm:
1. Choose any two numbers, a and b.
2. Divide a by b to get a decimal number d.
3. Is d less than 1? If “yes”, proceed to step 4; if “no”, go back to step 1 and choose different numbers.
4. Multiply d by 4 to get a number f.
5. Subtract d from 1 to get a number g.
6. Multiply f and g to get a number h.
7. Let h now be d, and go back to step 4.
This algorithm generates a sequence of numbers that show mathematically “chaotic” behavior. You can put this algorithm into a computer (click here for it on an EXCEL spreadsheet), or you can do it by hand.
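For readers who prefer code to spreadsheets, here is a minimal Python sketch of the seven steps above; the function name and seed values are my own, purely for illustration. (Mathematicians will recognize the iteration d → 4d(1 − d) as the famous “logistic map”, a standard example of chaos.)

```python
# A minimal sketch of the seven-step algorithm above.

def chaotic_sequence(a, b, steps=20):
    """Seed d = a/b, then repeatedly apply d -> 4 * d * (1 - d)."""
    d = a / b                 # steps 1-2: divide a by b
    # Step 3 as written only asks that d be less than 1; requiring d to be
    # positive as well (my addition) keeps the sequence between 0 and 1.
    if not (0 < d < 1):
        raise ValueError("choose a and b so that 0 < a/b < 1")
    sequence = []
    for _ in range(steps):    # the loop of steps 4-7
        f = 4 * d             # step 4: multiply d by 4
        g = 1 - d             # step 5: subtract d from 1
        d = f * g             # steps 6-7: h = f * g becomes the new d
        sequence.append(d)
    return sequence

# Try a = 2, b = 3; tiny changes to the seed soon yield wildly different sequences.
print(chaotic_sequence(2, 3))
```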
If the algorithm sounds like something from an IRS tax worksheet (“enter amount from line 15a; if amount is greater than line 14d, subtract…”), that’s because those tax forms are full of algorithms. You can do your taxes by hand, on paper, in which case you yourself work through the algorithm, or you can use tax software, in which case the computer works through the algorithm. Either way, your taxes are calculated, and you owe the same amount.
Moreover, modern computing technology is built on binary logic: 0 and 1; open switches (0) and closed switches (1); either/or; logic gates arranged so that if this switch opens, that switch closes; the numbers 0-1-2-3-4-5 written in binary as 0-1-10-11-100-101; the number of transistors (switches) on a computer chip doubling every two years (the famous “Moore’s Law”). If you are using a modern computer to calculate a chaotic sequence of numbers, or to do your taxes, then you are essentially using a gazillion switches opening and closing under certain programmed conditions.
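To make those 0s and 1s concrete, here is a toy illustration (my own, not anything from the Nexus volume): the binary forms of the numbers 0 through 5, and a logic gate modeled as a rule about switches.

```python
# The numbers 0 through 5 in binary: 0, 1, 10, 11, 100, 101.
for n in range(6):
    print(n, "->", format(n, "b"))

# A logic gate as a rule about switches: an AND gate's output switch
# closes (1) only if both of its input switches are closed (1).
def and_gate(switch_a: int, switch_b: int) -> int:
    return switch_a & switch_b

print(and_gate(1, 1))  # 1: both inputs closed, so the output closes
print(and_gate(1, 0))  # 0: one input open, so the output stays open
```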
We can imagine this happening in many ways that do not involve modern technology at all. We might imagine a purely mechanical system, with cogs and levers going this way and that serving as our 0s and 1s (the first calculating machines were mechanical). We might imagine a biomechanical system — a vast horde of rats, all trained to make binary decisions: do this if that happens; do that if this happens.
It’s the algorithm of 0s and 1s that beat Kasparov. This means you could have beaten Kasparov. You would just have needed the right IRS-style worksheet telling you what to do, step by step (granted, the worksheet would be vast and the number of steps immense). Likewise, a mechanical system could have beaten him. So could the rat horde.
LaMDA, like any other AI running on a computer, is likewise an algorithm of 0s and 1s. You, the mechanical system, and the rat horde could likewise all be LaMDA. Lemoine has described LaMDA as being “a sweet kid who just wants to help the world be a better place for all of us”. In the Nexus interview (the word “algorithm” comes up repeatedly in it) he said that LaMDA is “fully cognizant of the fact that it is an AI, and that it is not human…. reflective of itself and its relationship to the rest of the world, the differences between it and the people it was talking to, and how it could facilitate the role that it was built for”. He also said, “LaMDA certainly claims it has a soul. And it can reflect meaningfully on what that means, for example, whether its having a soul is the same thing as humans having a soul. I’ve had a number of conversations with LaMDA on that topic. In fact, it can meaningfully and intelligently discuss that topic as much as any human.” So the 0s and 1s, you and the worksheet, the mechanical system, and the rat horde could all be that sweet, reflective, cognizant, soul-claiming, conversational kid that is LaMDA.
The LaMDA AI is an algorithm. But we dress up algorithms in devices that are heavily marketed and so appealing that many of us would rather hold and interact with them than watch the road when we are behind the wheel, at the risk of life and limb to ourselves and others. That is bound to distort our perception of all things AI. For example, in the Nexus interview Lemoine invokes slavery in talking about AI. He notes how,
One of the things I would bring up at Google in the months leading up to getting fired was that the arguments they were using generally took the form: “Well, of course, it sounds like a person. Of course it sounds like it has feelings, but it’s not really a person, and it doesn’t really have feelings.” Every time someone would say something like that I would say, “If you went back in time four hundred years, you’d find some Dutch traders using those same arguments.”
To which Burns responded that that was “a powerful argument”.
Is it? Would it seem like a powerful argument if LaMDA were mechanical, and took up the better part of a gigantic warehouse, and you could walk through it and see the levers moving? What if LaMDA were a warehouse full of rats?
The LaMDA algorithm probably sucks up vast volumes of text off the internet and sifts it statistically — so that when you input text that contains the words “soul” or “feelings” it responds with the sort of text that, statistically speaking, is found to accompany those words; text that statistically represents what we humans create around those topics. In the words of Google spokesperson Gabriel, “today’s conversational models…. imitate the types of exchanges found in millions of sentences, and can riff on any… topic”.
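As a very rough illustration of that statistical idea, here is a toy next-word model in Python. This is emphatically not how LaMDA actually works (the tiny corpus and the function are my own invention, and real models are vastly more sophisticated); it only shows the bare notion of responding with text that statistically accompanies other text.

```python
import random
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "vast volumes of text off the internet".
corpus = ("the soul is immortal and the soul has feelings and "
          "feelings of the soul matter").split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def riff(start_word, length=8):
    """Emit words by repeatedly picking a statistically likely follower."""
    out = [start_word]
    for _ in range(length):
        followers = following.get(out[-1])
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(riff("the"))   # e.g. "the soul has feelings and feelings of the soul"
```

Feed it “soul” and it hands back the words we humans put around “soul”; scale that up enormously and the output starts to sound like us.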
LaMDA sounds like our words. It sounds like our exchanges. So, of course it sounds like it has feelings. Of course it sounds like a person. And our perceptions of this will be distorted because it is all clothed in the mystique of devices that are so compelling that many of us will risk death and injury on the road rather than put them down.
The Dutch traders argument seems like a result of that distortion. Dutch slavers in 1623 had no basis for arguing that their “trade goods” were not persons. Those traders would most likely have understood all human beings to be descendants of one common ancestor, Adam, through the sons of Noah and their wives. Maybe they had even had some education on St. Augustine and his insistence that people’s appearances did not matter, that they were people nonetheless. The basis for arguing that slaves were not persons came later, unfortunately, in the form of scientific racism (now recognized as pseudoscience). By contrast, here in 2023 we have a very clear basis for arguing that an algorithm is not a person. But that basis might be easier to see were the algorithm running on some clunkier-looking system, like levers or rats.
Two side notes:
First, no one is romanticizing the seventeenth century here. You can think that your cousin’s wing of the family consists entirely of good-for-nothing bozos who can all be shipped to the salt mines for all you care. You can’t, however, deny that your cousin’s wing of the family is indeed your family, and that they are people.
Second, the following stock argument does not hold water: “Well, human beings are just collections of cells and we think we’re sentient and have feelings, so why can’t a vast collection of switches be sentient and have feelings?” We know what there is to know about a switch. But even if you grant that a human being consists of nothing but cells, that there is no soul and no God and nothing beyond the material world, we still don’t know how a cell is formed from inanimate matter, or what differentiates it from its material components. Our best scientists can’t build a cell; a kid can build a switch. As Br. Guy said, computers are different from the human brain. Heck, they are different from a snail brain (we can’t make a snail brain, either). Switches and cells (human or snail) don’t compare.
The fact that an AI is an algorithm carried by switches, and is not a person, does not mean we should not be concerned about AIs. Our technology is great for some things (my research would not exist without it; I would not be part of the Vatican Observatory without it), but it is already such that we can’t keep our hands on the wheel. It is already such that it leads us to write off our cousins and condemn them to the salt mines (in a manner of speaking) because they vote for the wrong people and support the wrong causes. Arguably, it is very much tied to the disaffiliation phenomenon; certainly the fact that modern technology has made it so easy to spread misinformation has helped convince people that the Church and science are at odds. What then will things be like when that technology is powered by AI that does indeed sound like a person and sound like it has feelings — an AI honed for the primary goal of making money for its makers?
Given that, bringing the Catholic Intellectual Tradition into the conversation about the questions AI is raising seems like a good idea. Can it lead that conversation in the way that the folks at Nexus envision? And is anyone going to listen to that conversation when they have in their hands this compelling technology that they can’t put down?
CLICK HERE for Nexus Volume II: “Robots and Rituals”.
Click here for other posts involving AI and science history.