
“In the midst of winter, I found there was, within me, an invincible summer.”

— Albert Camus, “Return to Tipasa”

AI, Art, and the Problem With Wanting a Simple Villain


AI didn’t make me an artist. I was already an artist from childhood.


It helped me stay alive long enough to remain one.


I need to say something about AI and art, and I need to say it in a way that doesn’t pretend this is simple.


Because it isn’t.


And because I’m tired—so tired—of watching people treat every complex situation like it’s a courtroom drama with a clear-cut monster, a clear-cut hero, and a clean little ending where everyone claps and justice magically happens in the comments section.


That isn’t how reality behaves, it isn’t how capitalism operates, and it sure as hell isn’t how humans actually function.


So here’s where I’m at.



Yes, AI has an environmental cost. No, I’m not pretending it doesn’t.


I’m not here to pretend AI is “clean.” It isn’t.


Training and running large models costs energy. Data centers use water for cooling. Hardware production has supply chain baggage. The whole thing is wrapped in industrial systems that are not designed for ethics—they’re designed for throughput and profit.


I recognize that. I’m not dodging it.


But I also need to say the part that people call “whataboutism,” even though it’s not always whataboutism—it’s context.


We live in a world where so much of what we do is built on resource extraction and waste, and most of it is invisible until someone decides they want a moral outrage buffet and picks the “new scary thing” to scream about.


I’m not saying “AI is fine because other things are worse.”

I’m saying: If you want to talk about harm, talk about harm honestly.


There are industries we treat as normal that chew through water and energy like it’s nothing. There are products we don’t need that get a free pass because they’re old, familiar, and profitable. (And yes—every time I try to remember whether it’s almonds or pistachios that have the notorious water footprint, my brain turns into a printer jam. Either way, you get my point.)


So yes: AI isn’t good for the environment.


But neither is a huge chunk of “normal life” under capitalism.


And that’s the first reason this conversation feels fake to me sometimes—because people selectively care when it’s convenient.



Yes, training on copyrighted work is a problem. And it matters.


The second issue is the one that actually does keep me up at night: training data.


A lot of AI systems were built by scraping the internet, ingesting copyrighted material, and treating human labor like it was just “available.” Like it was public property. Like it was free raw fuel for a corporate engine.


That isn’t trivial, and it damn sure isn’t a footnote.


Artists deserve consent.

Artists deserve compensation.

Artists deserve control.


And I don’t think it’s “anti-tech” to say that. I think it’s the bare minimum standard of ethical use.


So yes, this is real. This is a genuine problem.


And if you’re an artist who feels violated by it, I understand why. I’m not here to tell you you’re wrong.



My “cell phone camera” analogy — and why it only goes so far


This is the part where I try to explain how I mentally organize the chaos.


When cell phone cameras became “good enough,” something massive happened to photography. The barrier to entry collapsed. Professional photography didn’t vanish, but the market shifted hard. A lot of working photographers got crushed unless they were already established, already networked, already positioned as “premium.”


Suddenly, “Joe and Mary with an iPhone” was competition.


And yes—people will argue about quality. They’ll say professional work is still better. They’ll say clients still pay for skill.


All true.

Also irrelevant.


Because the point is: the public’s tolerance for “good enough” changes markets. It changes what people pay for. It changes what survives.


AI feels like that. A tool that makes certain outputs easier and faster, and therefore changes the ecosystem whether we like it or not.


But there’s a huge difference.


A cell phone camera didn’t become good by secretly absorbing millions of copyrighted photos and reconstituting that labor without permission. It evolved through engineering and optics and computation, yes—but the training-data question is different.


So the analogy is useful… and incomplete.


AI isn’t just “a new tool.” It’s a new tool wrapped in a new set of ethical problems.



I don’t think we can “put the horse back in the stable”—and I don’t think pretending helps


Here’s the hard truth: I don’t think this is going away.


Even if lawsuits succeed. Even if regulations tighten. Even if certain companies get forced to change.


The idea is out there now, the methods are out there now, open-source versions are out there now, and the internet is what it is—so you can’t un-invent this; you can only fight over how it’s governed, how compensation works, and how it’s used.


And that is where my frustration comes in, because so much of the discourse is built around fantasy solutions:


“Ban it.”

“Shame anyone who uses it.”

“Destroy the tools.”

“Make individuals the scapegoat.”


That doesn’t fix the underlying machine.

It just gives people someone to throw stones at.



Now the part people will judge me for: AI has helped save my life.


I’m going to say this plainly: AI has helped save my life.


Not in a cute, “oh it’s a helpful app” way. In a real way.


There was a point where I was closer to the edge than I want to describe in detail. Not “stressed,” not “having a bad week”—I mean that edge. The one people recognize without needing it spelled out, and the one that can be triggering if you do. When I say AI helped save my life, I mean it in that way: it interrupted the spiral before it could turn into an ending. It gave me a place to put what was in my head so it stopped echoing in the abyss, stopped fermenting into rumination, and started becoming something I could hold.


And maybe even more than that—it proved to me I could still create. That there was still a “me” in here with a voice, with language, with art in my bloodstream. It didn’t replace my expression; it handed it back to me, piece by piece, when I genuinely thought it was gone forever.


My disabilities and brain injury didn’t just affect my energy or my schedule—they affected my ability to create. Skills I had. Abilities I relied on to feel like myself. Writing. Art. Structure. The ability to take what’s in my head and put it somewhere outside my skull where it stops clawing at me.


And the recovery path people love to recommend—“just practice,” “just take classes,” “just rebuild your skill over time”—requires two things I don’t have:


Money and functional bandwidth.


I’m broke. I’m in debt. I live paycheck to paycheck. I can’t take time off. I can’t buy years of training. I can’t magically turn my nervous system into something that cooperates on command.

So what happens when your main outlet—the thing that keeps you alive—gets ripped away?


You improvise.


And for me, AI didn’t just give me an output—it gave me a bridge back to myself. It’s helping me heal and relearn how to create: showing me structure when my thoughts are scattered, helping me translate raw emotion into something coherent, and letting me study the “why” behind what works so I can do more of it on my own next time.


The way I use it keeps shifting, because I’m not trying to outsource my voice—I’m trying to rebuild it. I’m relying on it less over time, or using it differently, because it’s actually teaching me as I go, the way months of guided support and thousands of dollars of professional help would—except this is the version I can afford, access, and use in the moment I need it.



What I actually use AI for — and what I don’t


Let me be specific, because vagueness invites assumptions.


For writing: I use AI for grammar, structure, and clarity—especially when I’m doing what I used to call “word vomit,” which I now understand as stream-of-consciousness writing. Sometimes I need help turning raw emotion into readable language.


For mental processing: when I’m spiraling, AI can help me slow down, name what I’m feeling, and find the thread again.


For music: the lyrics are me—like, overwhelmingly me. Yes, I’ll use help with rhyme here and there, or tightening a line, or making sure something reads correctly. But the content, the story, the pain, the imagery—that comes from my bloodstream.


The vocals and instrumentals? Yes, those are AI-generated for right now.


And here’s the part people miss: I’m not doing this because I’m lazy. I’m doing it because I’m disabled and broke and still trying to make art instead of disappearing.


Also: AI has inspired me to try to learn how to sing, even though I can’t right now. It gave me a “maybe.” It gave me a future target instead of a closed door.


That matters more than internet purity tests.



I’m not excusing it. I’m explaining it.


I’m not asking for a free moral pass.


I’m explaining why I’m here.


I understand the critiques. I understand why artists are angry. I understand why people feel threatened.


But here’s what I’m not going to accept:

Being turned into the designated villain because I used a tool to survive.


Because if we’re going to be serious, we need to be serious about where power actually sits.



Before I go further: I already wrote about the social side of this—the ritual of finding a “heretic,” the comment-thread gallows, the way outrage becomes entertainment. If that’s the part you’re here for, it’s in “The New Heretics: How AI users became the internet’s favorite villain.” This post is the personal reality underneath it.


If you want to be angry, be angry at the people who made it this way


If AI had been built ethically from the start—with artists compensated for training data, real consent systems in place, meaningful opt-in and opt-out control for creators, and corporations not using “innovation” as a cover to race toward replacing human labor—then this entire conversation would look radically different.


But billionaires and companies don’t want that.


They aren’t building this for ethics or fairness—they’re building it for profit and control: maximum scale, minimum friction, and a future they can own outright, without consent, without revenue sharing, and without anything that slows the machine down.


So when people funnel all their rage into attacking individual users—especially disabled users, broke users, people using AI as a brace just to stand upright—it feels like a diversion.


It feels like the torches are pointed at the wrong castle.



The “nuance” I’m allowed to claim—and the nuance I’m not


I’ll say it clearly: this is nuanced for me.


For others? Not always.


Some people use AI for scams, harassment, misinformation, deepfakes, theft, and cruelty. Some people use it to churn out low-effort content and flood platforms until human creators can’t be seen.


That’s real. That’s happening.


But that’s the point: the same tool can be used to harm or to heal.


And the modern internet hates that sentence, because it requires thought.


It requires staying present with discomfort, holding competing truths at the same time, and admitting that under capitalism “being ethical” isn’t a neat checklist—it’s a messy, imperfect negotiation with systems none of us fully control.


And a lot of people don’t want that.


They want a villain they can point to, a spectacle they can participate in, and a fast, clean sense of “justice” that feels good in the moment—something that turns anger into entertainment and complexity into a target, because a mob is easier than a conversation and dopamine is easier than accountability.


Justice doesn’t exist in comment threads.

What exists is performance.



What I want, moving forward


What I want is ethical AI: models trained on consenting data, artists paid for what’s used, real controls and meaningful restrictions, and laws that protect creators without turning survival tools into contraband for disabled people.

And I want a culture that can hold complexity without exploding into purity policing.


Because the truth is: most of us are trapped in systems we didn’t choose.

And we’re all trying to survive with the tools available to us.


You can be angry at the harm. I am too.

You can demand accountability. I do.

But if your solution is “find a person to hate,” don’t confuse that with ethics.


That’s just the internet doing what it does.



I don’t think it’s going to change anytime soon.

But I’m still going to try to be human anyway.


🕯️


~Morgan


This is a digitally altered (yes, with AI and some photo editing software) drawing I did in, I believe, 2013, that finally shows what I had in my mind.


© M. Bennett Photography

