On May 8, 2025, a few minutes before the name of the newly elected pope was to be announced from the loggia of St. Peter’s Basilica, ChatGPT declared that Robert Francis Prevost had become pope, the first American ever to do so. Until that moment, no one other than the cardinals eligible to vote in the top-secret conclave could have known his name. A Spanish journalist had asked the AI for a prediction during a live television program, and it had promptly named Prevost as the future pontiff. The presenter and viewers reacted with astonishment, as this seemed to be further proof that AI is not only penetrating more and more areas of life, but also delivers astonishingly accurate results in highly complex situations involving many unpredictable factors.
This is nothing more than an anecdote. It should not be forgotten that, in the days before the conclave ended, ChatGPT had predicted a different candidate as the future pope. Nor can it be ruled out that other AI systems were making all kinds of incorrect predictions at the same time; those simply went unreported afterwards. Still, the anecdote makes us sit up and take notice, for it demonstrates that AI systems are arriving at ever more precise statements simply by plowing through ever more data by means of certain computing operations. The fact that something takes place behind closed doors and under the greatest secrecy does not appear to be a significant obstacle.
At the beginning of his pontificate, Pope Leo XIV, whose election ChatGPT announced even before the solemn proclamation, made programmatic reference to the issues associated with artificial intelligence. In his first public speech to cardinals, the Pope described artificial intelligence as one of the greatest challenges of our time and one of the most important issues facing humanity in the coming years. According to Leo, this is about “the defense of human dignity, justice and work”. It is unmistakably clear that the Pope sees the Church as being challenged with regard to this central social issue of our time. This is not an entirely new perspective, as there have already been statements on this topic from Rome for some time.
One of the most recent magisterial documents on the subject (Antiqua et nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence), published in January 2025, addresses the special role of ethics in the use and development of AI. On the one hand, it recognizes the beneficial side of AI systems. According to the document, referring to Pope Francis, the “key criterion” for this life-enhancing dimension is whether the respective technology contributes to preserving human dignity and helping to express it at all levels of life.
Above all, however, the document also trains a critical eye on the darker side of artificial intelligence and on the potential dangers and risks that can accompany its use. Here too, as with every new technology, the ethically relevant question arises of how to deal responsibly with the opportunities that AI systems open up. In the context of AI, however, the question of responsibility becomes particularly pressing: if algorithmic systems make more and more decisions, and if the trend toward increasingly automated decision-making continues at its current rapid pace, it will become ever less clear who bears responsibility for those decisions at all.
Responsibility is beginning to diffuse. Does it (still) lie with human beings, for example the programmers of the algorithms, the companies behind them, or the users? Or does it lie with the machine? But can a machine be responsible, can it be held responsible? What consequences would that have? These questions indicate what is at stake in times of digitalization: the focus is on the question of how we can deal responsibly with responsibility itself.
The document Antiqua et nova emphasizes that attention must also be drawn to the importance of people’s moral responsibility in the context of AI. That responsibility is crucial because it is people who develop the systems and determine how they are used. “Between a machine and a human being, only the latter is truly a moral actor, that is, a morally responsible subject who exercises their freedom in their own decisions and accepts the consequences” (AN 39).
Theological ethics, which explicitly understands itself as an ethics of responsibility, will have to pay particular attention to the fundamental questions raised here and, in view of these new challenges, reflect more deeply on the concept of responsibility. It is clear that ethical reflection must not merely bring up the rear; it must accompany technological developments in real time. The point is that humans must remain in control, and that this idea must be the decisive condition for the further development of any AI. Human actors must remain identifiable and must take responsibility, even if the automation of AI systems holds out the prospect of increasing efficiency.
In his speech at the G7 summit in June 2024, Pope Francis said:
Faced with the marvels of machines that seem able to choose independently, we must be clear that the decision must always be left to human beings, even in the dramatic and urgent situations that sometimes arise in our lives. We would be condemning humanity to a hopeless future if we were to take away people’s ability to make decisions about themselves and their lives and condemn them to be dependent on the choices of machines.
We should certainly not allow ourselves to become dependent in this way, even if the idea that a machine could relieve us of the arduous task of weighing options and making decisions may seem tempting for a brief moment. A human being who abdicates responsibility would become a puppet of the machines. We must not want that.