
My Cat Verifies that I Wrote This Without AI: An Ethicist’s Guide to Our Technological Dystopia

The ubiquity of discussions on artificial intelligence can be mind-numbing. Both in the sense that there are simply too many discussions to follow, and thus I feel numb encountering the alarming rise of everyone’s opinions on AI; and in the sense that AI slop infiltrates so much online discourse that the discourse itself is but a caricature, a numbed simulation, a dim echo of what it might otherwise be. Am I reading more AI nonsense, or is this actually someone’s thoughts and feelings? Does this person align themselves so closely with an AI that their thoughts are indistinguishable from ChatGPT’s stochastic responses? Social media must be close to 100% AI generated these days, and everything else falls into some bucket less than that but higher than I would like.

Don’t worry, no AI is writing or editing this essay. This particular piece of public scholarship is coming straight from my brain to the screen, with all of my computer’s attempts to assist me turned off. I will likely make mistakes, but at least you can be assured of my humanity. At a time when I suspect everyone uses AI, and even I have largely replaced Google searches with AI searching, I find it helpful to assure the reader that this essay is not an exercise in crafty prompt engineering. It’s just me, at my table, next to my lazy cat, typing. Could an AI have written that? Maybe, but it didn’t.

— — —

Five years ago, dissecting the ethical landscape of emerging technologies was comparatively straightforward. A growing number of ethicists, sociologists, and the like had been investigating the harms caused by social media, the monopolistic rise of tech giants, the loss of online privacy, and the ubiquity of disinformation. The tragic scandal of misinformation and related political instability reared its head during the pandemic, and we wondered how to re-anchor truth in a new era of American falsehoods. We all yearn for such simpler times.

In 2026, AI existentially threatens the fabric of public truth, and the race to produce new AI engines seems to have swept the entire military-industrial-technological-capitalist complex into its wake. AI is the new oil, the new railroad, the new gold. It is the parable of the pearl that Jesus preaches in Matthew 13, except instead of a pearl, all the powerful men are selling everything to find that elusive AI, each determined to be the one to reveal the holy grail of technological development.

To talk about the theological ethics of AI in 2026 is, in other words, to talk about everything. Every aspect of ethical life now intersects with some version of what we call AI. If our ethical discussions do not include AI in some fashion, they are not adequately meeting the present moment. This is not to say that everything is about AI, but much like cellphones and cars and emails and houses and universities, AI is on a trajectory to become a permanent institution in the world. Much like the discovery of oil, it will bring great wealth to a few people, and it will always be ethically problematic.

As a scholar and researcher, it seems clear to me that the flood of AI-generated articles into an already struggling academic publishing system will crown AI king, slowly devouring any semblance of humanity. Only AI editors can meet the demands of AI-generated submissions, and only AI reviewers can keep up with the pace of new AI writers. Pretty soon all publishing houses will default to AI, and non-AI essays will be sent back for R&R for not being sufficiently professional.

As a professor, I watch the future of AI that everyone championed in 2022 fill nearly every humanities colleague I know with sadness. There’s always the one crypto/NFT believer who holds that prompt engineering is the future and that the future of AI will have an “even more important role for humanities,” as if any tech innovation had ever created more space for serious, slow discussions about deeply human topics. But for the rest of us, there is sadness, not just about the loss of trust in a digital teaching landscape, but about the deep trenches of inequity that AI will widen. As Sal Khan of Khan Academy has recently admitted, AI personalized tutors don’t seem to work very well. The transformation in equity that AI promised remains a distant dream (not that this is dissuading Sal Khan).

The ethical landscape of AI is vast and terrible: it reaches from labor disputes to dropping bombs to the space race to kindergarten classrooms to teen suicides. It is difficult to find an analog in human history of a technology so quickly impactful in so many areas. The ethics of AI are not as clear as those of nuclear bombs. But given that big tech companies are not only pushing nuclear power but becoming nuclear companies; given that all historical reliance on nuclear power has a direct relationship to a reliance on nuclear weapons; and given that even the most ethically aspirational companies, like Anthropic, are literally suing the US government to be more entrenched in the creation of wars, ok, maybe AI is just like nuclear weapons. Nuclear weapons we can use to search for obscure chocolate cake recipes that I vaguely remember from an episode of Top Chef fifteen years ago. Oh, yes, it found it! Damn it, AI.

— — —

This was supposed to be an article about cataloging the wide variety of ethical approaches to AI, but since I’m a human sitting next to a cat and not a chatbot, it is now an article where I’ve decided that AI is ethically kind of like a nuclear bomb that can also help me find cool recipes and build websites. Do I want this nuclear bomb to also control my emails and educate my students? Can I do things without nuclear bombs? Can anyone?

— — —

Does this make me a Luddite? I teach about the Luddite movement in a class on AI and labor movements, and I’ve grown to deeply love that haunting history, later recast by 20th-century moguls. The Luddites rebelled against innovations in textile manufacturing not because of job replacement, but because the factories were objectively inhumane places to work, and the lords who financed the inventors were rather terrible, greedy, and unrepentant. The Luddites destroyed tech and factories because of what they represented, not because technology itself was an evil. The Luddites lost, of course, and modern factory life became synonymous with poverty, child abuse, and a new class of destitution. The Luddites were recast as anti-technology rebels trying to delay the future with their simple ways. Progress is everything, say the tech-optimists of 1826 and 2026, as they preach a gospel of equity while practicing business filled with poverty wages, cathedrals of data centers, NDAs to cover lawsuits, and a new age of nuclear and non-nuclear violence.

— — —

Ok, look, if you want a great list of all the various ethical quandaries in generative AI, Thilo Hagendorff’s taxonomy (cited in the bibliography below) is a good starting point. I suppose I could have started there, but like I said, as a human sitting next to a cat and drinking knock-off diet cola, I get distracted.

— — —

I don’t think picking up a hammer and destroying data centers like a modern Luddite is the right call. Not because I’m a pacifist, but because it didn’t work for the Luddites and it won’t work today: the political architecture is so interlaced with modern tech companies as to be indistinguishable from them. The Luddites failed not because the tech moguls banded together, but because the army literally marched in and snuffed them out.

I do, however, think resistance is the right idea, both in the ethical work of defining what theological resistance to aspects of modern technology looks like, and in the very practical insistence that university and community norms leave real space for true resistance to AI. This means that people could go to your school, be part of your community, be in your classes, and not be forced to interact with this supposedly permanent piece of technology. It’s not enough, it may not even be remotely sufficient, but it’s not nothing.

Despite what you will continue to hear, AI is not inevitable and we should not treat it as such.

Bibliography

Barnum, Matt. “Why Sal Khan Is Rethinking How AI Will Change Schools.” Chalkbeat, April 9, 2026. https://www.chalkbeat.org/2026/04/09/sal-khan-reflects-on-ai-in-schools-and-khanmigo/.

Hagendorff, Thilo. “Mapping the Ethics of Generative AI: A Comprehensive Scoping Review.” Minds and Machines 34, no. 4 (2024): 1–27. For the taxonomy tree, see https://www.thilo-hagendorff.info/ethics-of-generative-ai/tree.html.

Riley, Benjamin. “An Illustrated Guide to Resisting ‘AI Is Inevitable’ in Education.” Cognitive Resonance (Substack), April 13, 2026. https://buildcognitiveresonance.substack.com/p/an-illustrated-guide-to-resisting.

Slattery, John. “A Nuclear Future Is Not Inevitable.” Commonweal, February 9, 2025. https://www.commonwealmagazine.org/nuclear-power-amazon-microsoft-trump-biden-slattery-ai.

Warner, John. “Serial Failure: Sal Khan Wants to Take Over Higher Education.” Inside Higher Ed, April 15, 2026. https://www.insidehighered.com/opinion/columns/just-visiting/2026/04/15/serial-failure-sal-khan-wants-take-over-higher-education.