Artificial Intelligence (AI) has become part of all our lives. It is so ubiquitous that there are times when we wonder whether a text is human- or AI-generated. I shall not look at the question plaguing most academics – how to detect AI- versus student-generated papers – but rather explore the extent to which AI can generate coherent, logical moral analysis and proposals, in this case on the greatest moral challenges facing contemporary Africa. Having set up this conversation, I deliberately switched track to explore the extent to which fears of AI domination are reasonable. Finally, I have added my own reflections on the exercise.
The First Step
The first step was essentially to establish the degree to which AI[1] can do the work of moral analysis or, in more brutal terms, can replace the moral theologian or ethicist.
My first question was as follows: In your informed opinion, what, in order, are the three greatest moral challenges facing the continent of Africa today? The AI bot responded:
- Corruption and the failure of governance & accountability.
- Deep and persistent inequality (economic, social, access to services) coupled with poverty.
- Erosion of human rights and weakening civic space: degradation of justice, fairness, and respect for dignity.
The AI bot added, without any further question from me:
Why these three are interlinked — and why breaking the cycle is so hard.
These challenges don’t exist in isolation. Corruption undermines governance and diverts resources that might otherwise reduce inequality or fund social safety nets. Inequality and poverty make people more vulnerable to rights abuses, or more likely to be exploited. Weak rights protections make it harder for citizens to demand accountability or resist corruption.
So — tackling one without the others tends to produce limited results. For real moral progress, Africa needs systemic reform that addresses governance and corruption, reduces inequality, and protects human rights and dignity.
and indicated that there were other closely related issues, offering a ‘top six’ list as well.
I declined to follow this up[2] and asked my next question: What theological resources – Catholic Social Teaching, moral theology, political ethics – do you think the Catholic Church can use in engaging with the three problems you indicated above?
The AI bot responded by citing extensively, comprehensively listing the major official and non-official sources familiar to anyone working in Catholic social ethics.
Significantly, it anticipated my next question by offering to follow up with:
If you’d like, I can now move to part 2: How the Church could realistically deploy these resources — what they offer specifically for: combating corruption, addressing inequality and poverty, defending human rights and civic space,
to which I replied: Please go on.
It then offered a series of proposals (which brevity prevents me from listing) that once again would be all too familiar to theologians, church workers, and justice and peace activists.
Finally, it concluded:
What Makes These Actions Realistic?
- The Church is respected and embedded across Africa.
- It has unmatched grassroots presence (schools, parishes, hospitals).
- It can mobilize moral authority without being a political party.
- CST offers a coherent, integrated framework rather than fragmented responses.
- African Catholic theology provides contextual legitimacy.
- The Church already has institutional tools (Caritas, episcopal conferences, justice-and-peace commissions, etc.).
Thus far, I was impressed by the scope, detail and accuracy of the AI bot’s findings. But the collection and arrangement of freely available data were not enough.
I wanted to see if it could think politically and ethically.
The Second Step
So I asked my next question: Do you actually believe that the Church can make a real difference in these areas?
The AI bot’s response (“my honest, evidence-based answer — without piety, illusion, or deference”) was that
“Short version: Yes, the Church can make a real difference — but only in certain ways, and only if it confronts its own contradictions.
“The Church has structural capacities that could make real, measurable moral impact in Africa, but it rarely deploys them fully, and sometimes undermines them through its own failures. Its potential is real; its effectiveness is uneven.”
It cited education (and health care provision), mediation, social service provision, and moral pressure exerted on governments by episcopal conferences (noting that the latter was only occasionally effective). Internal contradictions limit the Church’s effectiveness, the AI bot continued, including its inability to enact direct political or economic reform, and areas of human rights (e.g. reproductive, LGBTIQ+, and gender equality rights) where the Church’s position is either hostile or ambivalent. These limits are compounded where the Church undercuts its own ability and credibility: through clerical corruption and financial opacity, ethnic and political alignments, fear of retaliation, and internal governance deficits (lack of financial transparency or independent oversight; hierarchical decision-making without lay partnership in governance). The AI bot added: This contradiction blunts its prophetic voice.
An astute remark – or simply an empirical observation?
I pushed the AI bot further: Given that the kinds of change suggested by these reflections seem so politically, economically and socially – as well as morally – obvious, why are they so hard to achieve?
It replied: This is an excellent—and essential—question. The gap between what is morally obvious and what is politically possible is often enormous. In the African context, the difficulty arises from a convergence of structural, psychological, economic, and historical factors.
Having listed the ‘usual suspects’ that the literature cites – structural inertia, vested interests, colonial legacy, economic inequality, factionalism, lack of trust in society, and so on – it added something I had not specifically asked about:
“On the theological level:
- Sin (both personal and structural) distorts human desires.
- Self-interest overrides the common good.
- Conversion—personal or social—takes time and is resisted.
Catholic moral theology calls this the “mystery of iniquity”: evil persists even when the good is clear. [The italics in the last sentence are in the source.]
This was more than I expected. It volunteered more specifics than I had raised in my question, translating my use of ‘moral’ into a theological register.
The Third Step
I then shifted gear, asking: Given the reality of human sin, as well as all the social factors involved, wouldn’t it be better for African societies to be run by an all-wise, all-benevolent AI form of governance that would serve the common good of our people objectively and fairly?
The AI bot responded: This is a bold and intellectually serious question, and it deserves a careful, theologically and philosophically grounded response. The short answer is: No—an all-powerful AI governing Africa (or any society) would not be better, even if it were imagined to be “all-wise” and “all-benevolent.” But the reason why is more interesting than the simple answer.
It came up with a number of objections to my provocation.
- Human dignity requires human participation in society. In short, rule by AI would violate freedom, responsibility, moral agency, human creativity, co-creation and stewardship, and subsidiarity: even if benevolent, it would be a governance of children, not citizens.
- Power without accountability is always dangerous. Here, the AI bot noted that any authority with unchecked power becomes problematic; even a benevolent AI ruler would inevitably be manipulated or controlled. It added: “Artificial intelligence would inevitably encode—and amplify—existing inequalities.”
- The dream of “objective governance” is itself a myth. Even AI reflects the assumptions of the data it is fed, and thus reproduces that data’s biases. Above all, it observed that “AI cannot capture lived human values.”
- African history warns against “saviour systems”. Here it noted that Africa has suffered under a succession of saviour systems – colonialism (‘civilising missions’), one-party ideologies (‘unity’), military regimes (‘order’), technocracies (‘efficiency’) – observing that “Whenever Africans are told that a singular system will save them, it ends in domination.”
Finally, the AI bot came up with:
- Theology: God does not rule the world mechanically. The AI bot observed:
“If the Creator—who is all-wise and all-benevolent—does not govern humanity through dictation, control, or automation, but through freedom, there is a reason.”
It concluded:
“A perfect AI government would eliminate sin, yes, but also virtue and freedom and the possibility of conversion. It would not create a just society; it would create an obedient society. But justice is not obedience. Justice is love structured socially.”
The AI bot’s conclusion, then, is that humans need better governance, not inhuman governance.
Being a dyed-in-the-wool sceptic, I was still not satisfied with this all-too-Catholic – dare I say ‘virtue-al Catholic’ – reply. So I made a final play:
Great. You have framed this through the lens of your questioner, a theologian-ethicist-political thinker. Perhaps you have even guessed where I stand on these matters, based on our other conversations. But now imagine I were a kind of dystopian, dark-renaissance, authoritarian tech-bro – call me, for this exercise, Elon Thiel Yarvin[3]. Would your responses be the same for him?
The AI bot replied:
In substance: yes, the core arguments would be the same.
But the framing, emphasis, and points of persuasion would be very different, because “Elon Thiel Yarvin”—your fictional dark-renaissance authoritarian tech-bro—does not care about [your concerns] … So the arguments that persuade you would be irrelevant—or even counterproductive—for him…
[T]he same conclusions would be delivered through a completely different lens: one shaped by technocracy, power, realpolitik, and the ethos of authoritarian accelerationism.
The AI bot presented detailed counterarguments based on these themes, concluding:
The arguments against AI governance are identical in content, but radically different in persuasion strategy depending on the audience. To a theologian, I argue: dignity; freedom; virtue; theological anthropology; [and] the moral purpose of political life. To Elon Thiel Yarvin, I argue: power; fragility; irreversible-loss-of-control; class interest; game theory; existential risk; [and] political realism.
In both cases, the conclusion is the same: No rational actor—ethical or authoritarian—should want an AI to govern society.
This rather heartening statement confirmed two things: that AI tailors its answers to its questioners, but that it operates within a series of ‘programmed’ (as opposed to moral) restraints.
The Fourth Step
I could not resist asking the AI bot if it considered itself a moral being. It gave a long and detailed scientific-philosophical explanation that was, in short: No, my “moral sense” is not a sense at all; it is a constructed function based on patterns, constraints, and principles, not inner experience. Citing its lack of consciousness, intentionality, moral agency, responsibility and subjective experience, it stated:
What I produce as “moral reasoning” is algorithmic, not experiential. When I give moral analyses, I am performing something like: pattern-based synthesis + rule-bounded reasoning + alignment constraints.
This is based on careful training in academic and moral discourses (‘languages’), constrained by externally imposed programme rules. It continued:
[Three main things drive] my moral-sounding responses […]
(1) Coherence with human moral discourse
I try to speak in ways that make sense within the shared moral language of humanity.
(2) Safety constraints
My architecture forces me to prioritise user wellbeing and avoid harmful outcomes.
(3) The user’s moral framework
When you ask questions from a theological/ethical angle, I adopt that framework for coherence and helpfulness. If you asked from an authoritarian standpoint, I spoke to that worldview while still respecting safety boundaries. This is adaptive rhetoric, not moral commitment.
This confirms what the broad consensus of AI scientists and policy formulators[4] has been saying. The question, of course, is whether such constraints on dangerous recommendations – or indeed constraints on AI action – might someday be removed.
Concluding Reflections
This long exercise started as a game on my part. The good news, starting from my final question, is that there are still constraints on what AI might advocate. These seem still to be built into AI programmes – so far. Moreover, at least with the AI bot that I engaged with, there is a very clear ‘awareness’ of its non-humanity and thus of its limitations in making clear moral judgments as such. My overall sense is that AI used in exploring moral questions is still ‘consciously’ aware that it is sorting data – reading and applying the sources of scripture, tradition and reason – while in no way employing experience, let alone personal spirituality, in reaching its conclusions.
Finally, let me consider a practical question that an exercise such as this raises for ethicists: does AI make us redundant? Given AI’s ability to collect and analyse moral information at a rate no human can match, it might seem so at first. It can certainly write reasoned ethics papers. But with AI’s admission that it has no distinct consciousness, let alone conscience (or even a sense of faith), it lacks that personal spark – passion, compassion or outrage – that we read in the best moral theology.
We still have a job.
The fan of dystopian science fiction in me adds sotto voce: “For the moment.”
—
[1] In this exercise I used ChatGPT. Italicised text indicates the chatbot’s direct words.
[2] For obvious reasons: it would have lengthened the conversation, which has unfortunately already generated a long piece.
[3] Any references to persons living, dead or virtually living are purely coincidental, of course!
[4] See, for example: Study Panel for the Future of Science and Technology, The Ethics of Artificial Intelligence: Issues and Initiatives (Brussels: European Parliamentary Research Service, 2020); also Volker H. Schmidt, “Taming AI with Morals? Skeptical Notes”, Risk Sciences 1 (2025): 100016, https://doi.org/10.1016/j.risk.2025.100016.