The New Heretics: How AI Users Became the Internet’s Favorite Villains
The Autistic Lens
Nov 1
We were promised empathy. That’s what haunts me most—not the rise of cruelty itself, but the fact that it came wearing the robes of compassion. In corners of the internet once devoted to justice, art, and peace, something colder has taken root. A fury not directed at institutions or corporations or systems, but at people. Specifically, at people who use generative AI. Not to harm. Not to exploit. Just to explore. To play. To create.
And suddenly, for that alone, they are declared enemies. They are threatened, cursed, harassed. Their livelihoods are targeted, their names dragged through the public square, their presence exiled. They are told they deserve to suffer—not because they harmed anyone, but because they dared to prompt a machine. This is not critique. This is not principled objection. This is ritualized cruelty masquerading as virtue.
Of course, there are those who use these tools to harm—to deceive, to exploit, to create suffering on purpose. AI has already been weaponized in ways that wound deeply: through stolen likenesses, fabricated images, targeted harassment, and systems built for surveillance or manipulation. That is not creation; that is cruelty in code. And it must be named, condemned, and resisted with the same urgency as any other form of violence. To acknowledge that truth is not to excuse it—it is to refuse to let it be used as an excuse for punishing everyone else.
We’ve seen this ritual before. In the arc of my book The Descent, I wrote that “there must always be an enemy… a heretic… someone must be punished.” That refrain echoes now with disturbing familiarity. The AI user has become the new heretic. Marked. Humiliated. Burned in effigy across comment threads and timelines. This isn’t about ethics. It’s about spectacle. It’s about outrage given permission to feed.
Listen closely, and the tone reveals more than the words. These aren’t conversations. They’re purges. Not debates, but condemnations. The anger isn’t measured—it’s ravenous. You can hear the thrill in the language: the pleasure of domination cloaked in moral concern. They say it’s about protecting artists, about safeguarding truth, but cruelty has always been a shape-shifter. It will justify itself in whatever language is fashionable. Today, it speaks in the name of justice. Tomorrow, it will wear a different face. But underneath, the hunger is the same. What we are witnessing isn’t accountability. It’s performance. It’s what Ethicism warned about: when care becomes choreography, and empathy is no longer practiced—only mimicked. It’s Those We Call Monsters, combined with We Are The Panopticon, reborn through digital mobbing. A machine that doesn’t need truth, only momentum. That feasts not on solutions, but on scapegoats.
And what enemies they’ve chosen—not the billionaires building exploitative systems, not the corporations scraping data without consent, but the curious student, the disabled artist looking for a new way to create, the anxious writer asking a tool for help. People. Real people. Dehumanized. Turned into symbols. And once someone becomes a symbol, they become disposable. We’ve been told this story before: that certain people don’t deserve care. That some lines of experimentation make you less human. That dissent, or difference, or curiosity is grounds for exile. The specifics change. The script does not.
What makes this moment so insidious is its hypocrisy. Many who rage against AI use it daily—they accept autocorrect, photo filters, algorithmic recommendations. They let AI assist quietly, invisibly. But when someone uses it creatively, when they declare it, suddenly it’s heresy. Autocorrect is fine. AI art is war. But the principle is the same: a tool augmenting human effort. So why the double standard? Because one is invisible, and one is visible. One serves their convenience. The other threatens their identity. And so they draw a line—not between harm and healing, not between ethical and unethical use—but between what empowers them, and what empowers someone else. It’s not about the tool. It’s about control.
And what does this performative cruelty accomplish? Nothing. Except harm. The artist who used AI to cope with chronic illness—now terrified to post her work. The teen who tried an AI tool for a school project—dogpiled until they deleted their account. The writer who used an LLM to structure their thoughts—now told they are a fraud, a thief, a threat to humanity. They shrink back. They go silent. They retreat from communities that once claimed to be about inclusion, growth, curiosity. They don’t just lose platforms. They lose trust. In others. In themselves. And that loss cannot be undone with an apology post.
This isn’t new. This is the ancient rhythm of dehumanization: “We calculate the death we accept.” Not always literal death—sometimes the death of reputation, of dignity, of personhood. The justification is always noble. The goal is always purification. But what it becomes is domination. Each cycle of hate needs a trigger, then a villain, then a punishment. It needs participants. It needs witnesses. It needs silence to count as consent. And slowly, a culture built on care becomes a stage for cruelty. One more coliseum in the algorithmic empire.
You probably use AI too. Your phone predicts your next word. Your camera enhances your photos. Your shopping site shows you what it thinks you’ll want. So let’s stop pretending AI use is foreign. Let’s stop pretending it’s inherently evil. Let’s start asking better questions. What kind of AI harms, and who profits from it? What kind of use enables care, accessibility, expression? Who are we punishing—and who are we letting off the hook? If we’re truly worried about exploitation, we should be challenging the corporations that profit from surveillance capitalism. If we’re worried about fairness, we should be building systems that prioritize consent, attribution, and equity. But yelling at an artist who tried Midjourney to visualize a dream? That’s not justice. That’s just an easier target than the system.
In The Praxis, I wrote that Ethicism’s praxis is not heroic. It’s maintenance. That’s the answer to moments like this. Not moral spectacle. But steady, daily repair. Because when our so-called justice begins to echo the cruelty it claims to resist, we haven’t won—we’ve become what we hated. And when we strip others of their humanity in order to defend “humanity,” we’ve already lost. The right fight, waged the wrong way, becomes indistinguishable from the violence it opposes.
So what do we do? We remember the ethic of care. We refuse to participate in pile-ons, even when we feel justified. We challenge ideas without humiliating people. We hold systems accountable, not scapegoats. We advocate for transparency, for consent, for artist protections—without becoming executioners of the curious. We create spaces for dialogue, for explanation, for context. We treat people like people. Even when it’s hard. Even when it’s unpopular. Especially then. Because that’s when it matters.
Maybe that’s all this moment really is—another rehearsal of an old cruelty wearing a new costume. A socially sanctioned outlet for rage that people pretend is virtue. The same pattern, just updated for a different stage. We invent a villain small enough to crucify, call it justice, and cast ourselves as heroes, as if outrage alone could save the world. Meanwhile, the actual architects of harm—the corporations hoarding power, the billionaires strip-mining human creativity—keep their hands clean and their profits intact. The mob feels righteous; the machine keeps turning. And the tragedy is not only in the pain it causes, but in how ordinary people, yearning to be good, are coaxed into mistaking punishment for purpose.
In the end, it’s all Frankenstein’s story retold. The corporations—the modern Dr. Frankensteins—stitched this new creation together from the stolen bones of human labor, animated it with lightning and greed, then recoiled from what they made. The people who use AI become both the monster and the mob: creators and destroyers, participants in a cycle of making and burning. They bring the machine to life, then fear its reflection. And those who hunt them—the torchbearers—become the fire itself, certain that the flames are cleansing the world even as they scorch what’s left of its humanity.
But even the architects cannot escape the fate of their creation. The same systems that devour art, energy, and truth are already turning inward, consuming their own foundations. The AI boom that feeds on exploitation will collapse under its own weight, as every empire of hubris eventually does. Like Frankenstein himself, they will be undone not by the monster’s evil, but by the consequences of refusing to care for what they created.
No one in this story is innocent. Not the makers, not the users, not the crowd. But perhaps innocence was never the point. The lesson—then as now—is that we are all implicated in the creation of the thing we fear, and that the real monstrosity is not in the creature at all, but in how quickly we learn to love the light of the burning windmill.

🕊
If this piece resonated with you, my book Ethicism: The Practice of Care continues this work in full — tracing how conscience, empathy, and responsibility can survive in a world built on cruelty.



