The AI spell has worn off, and an ethical dilemma replaces it

Image generation using words is magical, so engaging that when I first started using it, I didn't give a second thought to the ethical baggage.

That spell has worn off.

I was always aware of both the ethically questionable issue of stolen training data and the equally troubling environmental impact of massive data centers. But the magic of the tools themselves distracted me from reckoning with those actual costs.

Then reality hit. The Trump administration dumped statutory guardrails and basically said "screw the planet" so American AI can dominate. Suddenly, the environmental costs felt visceral and urgent.

Data centers are a water and heat disaster. Training a single large AI model can produce over 626,000 pounds of carbon dioxide equivalent, more than five times the lifetime emissions of an average car.

The Devil's Bargain I Can't Resolve

I refuse to kid myself about this. I use the tools and carry the knowledge of the costs.

I believe this is important technology to explore. For now, I embrace the unresolved nature of the devil's bargain.

When I create something using these tools, knowing they're built on stolen data and environmental damage, I can't resolve the moral tension. At the very least, I should use them for useful purposes that serve others.

But is that enough? I don't know. Probably not.

My personal approach is to embrace uncertainty without an irritable reaching after facts or conclusions. Keats called this capacity "negative capability." It's easier said than done.

Fools Grasp At Certainties

Most people either avoid AI entirely or pretend the ethical issues don't exist. Both responses miss something crucial.

Fools grasp at certainties. The entire world is far more interesting when you let matters remain uncertain.

When I see other creators confidently proclaiming their AI use is "ethical" or "responsible," I suspect an intellectual dishonesty. The comfortable narratives they construct avoid the real complexity.

The tech companies aren't fools, though. They know exactly what they're doing. The ethical price is worth the result for them.

More than 150 copyright infringement lawsuits are currently pending against AI companies. Yet they continue training on unauthorized content because their business model depends on it.

Scale Is The Only Difference

What separates me from those companies in terms of moral culpability? Nothing except a matter of scale.

My impact is bad, but smaller. That's the best I can say.

This happens in a very human context. We are inherently curious and AI is inherently fascinating. Perhaps we can't blame individual users driven by that powerful impulse.

But we can demand honesty about what we're participating in.

We could talk about what would make it better: ocean-floor data centers to manage heat, or paying copyright holders for training data. These are just two possible fixes that might ease these tensions.

But focusing on distant fixes allows us to avoid dealing with the uncomfortable reality of what we're doing right now, today.

The Value Of Conscious Complicity

Trump won't last forever. The hope is that a true reckoning with AI's costs will emerge once we get past this Wild West phase.

Until then, I choose conscious complicity and involvement over dishonest purity.

There's something valuable in sitting with uncertainty rather than rushing to claim a comfortable moral position. It keeps us intellectually honest about the complexity we're navigating.

The spell of AI magic has worn off. What remains is the harder work of engaging with powerful technology while acknowledging its costs.

I can't resolve the moral tension. But I can refuse to pretend it doesn't exist.
