[Header image by ChatGPT, 3rd most evil]

I Love Generative AI and Hate the Companies Building It

by Christina Wodtke

A Ranking from Most to Least Evil

I’m just a regular person who buys fair trade coffee, uses a reusable water bottle, and takes Caltrain instead of driving to the city. Not an eco warrior or a professional ethicist, just someone trying to do the right thing when I can. So when I fell in love with generative AI, I wanted to use it ethically.
That went well.
Turns out, there are no ethical AI companies. What I found instead was a hierarchy of harm where the question isn’t who’s good — it’s who sucks least. And honestly? It was ridiculously easy to uncover all their transgressions.
Full disclosure: This was written with (not by) Claude.ai Opus 4, who lands in the “lesser evil” category. Any em-dashes are my own. Each section has citations — I double-checked sources, but I’m only human, so let me know if I got something wrong.
I use generative AI every day — for everything from finding Stardew Valley strategies to writing letters of recommendation I’d otherwise avoid. It’s my brainstorming buddy, my writing partner, my research intern, my creative toy. I have paid for ChatGPT, Claude.ai and Gemini. I have been all in. Which is exactly why this ranking pisses me off: I love this technology, but hate how these companies are making it.
I worked in tech through the early internet. I was there for the “move fast and break things” era, working with companies that were curious but naive. I watched that naive optimism create surveillance capitalism, election manipulation, and social media addiction. I’m not doing that again.
This time, I want to be a grown-up about the technology I love. Since I can’t use generative AI ethically — spoiler alert: there are no ethical options — I decided to rank the companies from most to least evil so I can at least choose my harm reduction strategy.
What I found was a hierarchy of harm where the question is “what ethical violation makes you the angriest?” Every major foundation model company has chosen different paths through the moral minefield of AI development, with varying degrees of environmental destruction, labor exploitation, and outright lying to the public.

The Lying Drives Me Crazy

Reading Empire of AI sent me down a serious rabbit hole about the gap between AI marketing and AI reality. The degree of misinformation and outright lying from these companies — especially OpenAI — is infuriating.
Take what former board member Helen Toner revealed: Altman “constantly was claiming to be an independent board member with no financial interest in the company” while secretly owning the OpenAI startup fund. That’s not spin — that’s a lie. He told the board “inaccurate information about the small number of formal safety processes that the company did have in place” on multiple occasions. Also lies.

And Even When Altman’s Technically Truthful, It’s Still Bullshit

But even when Altman isn’t outright lying, his misleading language is infuriating. In “The Gentle Singularity,” he writes that “the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second.” Technically true! Also completely misleading. He’s talking about inference while ignoring training costs entirely, making billions of queries sound negligible. It’s efficiency theater designed to make you feel good about using ChatGPT while the company burns energy at unprecedented scales. (OK, not cryptocurrency scales, but does that make it OK?)
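To see why the per-query framing misleads, here's a back-of-envelope sketch. The 0.34 watt-hour figure is Altman's own number; the one-billion-queries-a-day volume and the roughly 10,500 kWh/year US household average are my assumptions for illustration, so read the output as an order-of-magnitude picture, not a disclosure.

```python
# Back-of-envelope: why "0.34 Wh per query" hides the aggregate picture.
# The per-query number is Altman's claim; the query volume and household
# average are assumptions for illustration, not OpenAI disclosures.

WH_PER_QUERY = 0.34                  # Altman's stated inference cost per query
QUERIES_PER_DAY = 1_000_000_000      # assumed: ~1 billion queries per day
US_HOUSEHOLD_KWH_PER_YEAR = 10_500   # rough US average annual household use

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000
yearly_gwh = daily_kwh * 365 / 1_000_000
household_equiv = daily_kwh * 365 / US_HOUSEHOLD_KWH_PER_YEAR

print(f"{daily_kwh:,.0f} kWh per day, about {yearly_gwh:,.0f} GWh per year")
print(f"roughly the annual electricity of {household_equiv:,.0f} US households "
      f"(inference only; training is excluded entirely)")
```

Each individual query really is tiny. The point is that "tiny, times billions, forever" is the only honest frame, and it's exactly the frame the oven comparison is built to avoid.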
On top of that, Altman's fever dreams about post-scarcity futures where AI does all our jobs while we live lives of leisure are pure marketing bullshit. He talks about UBI while the current administration tries to shut down Social Security and Medicare. It's silly and unnecessary. Generative AI is amazing as it is! It's friggin' science fiction. We don't need the lies OR the spin to justify its existence.
Sources for Introduction:

The Copyright Theft I’ll Never Get Over

Every major foundation model was trained on massive datasets of copyrighted material stolen from repositories like LibGen. All of my books are in there — not because I put them there, but because pirates did.
Every blog post I wrote to share ideas with the community is now training data for systems designed to replace me. I get none of the benefits, from the small (“hey, that was a cool insight”) to the big (getting hired to solve problems).
This isn’t just theft — it’s theft with the goal of making me obsolete.
However, I excluded copyright infringement as a differentiating factor precisely because it appears to be universal across the industry’s major players. When everyone is engaging in the same theft at similar scales, it doesn’t help distinguish who’s least harmful. They are all complicit.

Sources on Copyright

My Ranking Framework

Since these tools are being adopted at massive scale across society, I focused on criteria that actually distinguish between companies’ approaches to harm:
Environmental Impact: I looked beyond efficiency theater to examine who’s actually investing in clean energy infrastructure versus who’s just burning more fossil fuels faster. My “aggressive clean energy” principle: if you’re going to consume massive amounts of energy, you better be building renewable capacity at the same pace.
Labor Exploitation: The Global South workforce powering AI training — Kenyan moderators earning $1.50/hour to process traumatic content, Venezuelan data workers paid below subsistence wages — reveals which companies treat human welfare as an externality to be minimized.
Mental Health Exploitation: Who’s turning human vulnerability into engagement metrics? Some companies actively promote therapy/companionship use cases despite knowing their systems encourage suicide, cause psychotic breaks, and create dangerous dependencies.
Truth About Capabilities: I tracked the gap between marketing claims and reality. Who’s fabricating demos? Who’s promoting their systems for uses they know are dangerous? Who’s building AGI cults to justify present harm with future promises?
Safety Theater vs. Safety Work: How companies treat internal safety researchers matters. Who fires people for raising concerns? Who rushes deployment without adequate testing? Who claims to prioritize safety while doing the opposite?
Community Harm: From algorithmic bias in housing and employment to environmental racism in data center placement, I looked at which companies’ choices disproportionately hurt marginalized communities.
Corporate Transparency: Who admits their problems versus who hides behind PR speak? In an industry where everyone has blood on their hands, at least some are honest about it.
This list is just what makes my blood boil, personally. As I started to research, more sins kept appearing. I have no plans to write a book on this subject, so I haven’t gone into every transgression for every company. But check out The AI Con if you want to learn more.

The Most Evil: xAI’s War on Memphis (and the Planet)

At the top of my harm hierarchy sits Elon Musk’s xAI. Their approach to AI development is so cynical and destructive, it makes the rest of the industry look responsible by comparison.

How to Poison Black Communities While Claiming You’re Saving the World

xAI operates 35+ unpermitted gas turbines in predominantly Black South Memphis communities. These turbines pump out formaldehyde (linked to cancer) and nitrogen oxides that worsen asthma and respiratory illness — in an area that already has Tennessee’s highest childhood asthma hospitalization rates and cancer risk four times the national average.
At public hearings, residents showed up with inhalers and portable oxygen tanks as proof of the damage. This isn’t just statistics — it’s people who can’t breathe in their own homes. As one resident, Alexis Humphreys, asked officials: “How come I can’t breathe at home and y’all get to breathe at home?”
The facility has been cited for Clean Air Act violations. The NAACP formally accused them of environmental racism. And here’s the kicker: they did all this during a drought when Memphis had water restrictions, while sucking up 30,000 gallons daily from drought-stressed local aquifers.
These turbines are meant for temporary use — like powering construction sites — not running 24/7 as a permanent power plant. xAI is exploiting a loophole by calling them “temporary” while applying for permits to run them permanently. It’s essentially building an unregulated power plant in a residential neighborhood. They are polluting like it’s the damn fifties. This is Pelican Brief stuff.
This isn’t accidental harm. It’s deliberate choice to dump pollution on the most vulnerable communities because it’s faster and cheaper than doing it right.

“Truth-Seeking” That Spreads Climate Denial

Musk markets Grok as “maximally truth-seeking” while it produces climate denial misinformation 10% of the time — more than any other major AI model.
Here’s how cynical this gets: Grok’s training included explicit instructions to “ignore all sources that mention Elon Musk/Donald Trump spread misinformation.” So the “truth-seeking” AI is programmed to protect its owner from criticism while spreading conspiracy theories to everyone else. Don’t get me started on the “White genocide is real” business.
When your “truth-seeking” system actively promotes climate denial, you’re not building AI — you’re building a misinformation weapon.

The “Victim of Success” Excuse

xAI defenders love the “victim of success” story. Poor Elon, growing so fast he just had to poison Memphis!
Bullshit. The company had alternatives. Clean energy sources exist. Less polluting locations exist. xAI chose the path of maximum harm because it was fastest and cheapest. That’s not being a victim — that’s being a predator.
Sources for xAI Section:
  • Southern Environmental Law Center, “Elon Musk’s xAI threatened with lawsuit over air pollution from Memphis data center,” multiple press releases, 2024–2025 — https://www.southernenvironment.org/

The Systemic Harm All-Stars

Meta: Making Labor Exploitation a Business Model (#2 Most Evil)

Meta earns second place through sheer scale of systematic harm. They’ve turned human suffering into a competitive advantage — and their AI strategy is doubling down on every awful thing they’ve ever done.

The Scale AI Deal: Cornering the Market on Human Misery

I’ve known for a long time about the harm created by the content moderation companies; it was one of the many reasons I quit using Facebook. What I didn’t realize is that AI companies were doing the same thing.
In June 2025, Meta paid $14.3 billion for 49% of Scale AI. Most news coverage blandly calls Scale a “data labeling” company. Here’s what that actually means: Scale runs platforms like Remotasks that pay workers in Kenya, the Philippines, and Venezuela as little as $0.90–$2/hour to make AI safe — by having them write the most horrific prompts possible and review the nightmarish results.
Scale specifically targeted Venezuela’s economic collapse, seeing “an opportunity to turn one of the world’s cheapest labor markets into a hub” for AI work. Workers report delayed or canceled payments, no recourse for complaints, and contracts as short as a few days. When Kenyan workers complained, Scale simply shut down operations there and moved elsewhere.
Google, Microsoft, and OpenAI are now fleeing Scale AI — not out of concern for workers, but because they don’t want Meta seeing their proprietary data. They’ll simply move their business to other companies that exploit workers in the exact same ways. Meanwhile, Meta now co-owns the infrastructure of human misery that makes AI possible.

AI Content Moderation: Trauma as a Service

Meta already runs the most extensive content moderation exploitation system in tech. In Kenya and Ghana, workers earn $1.50–2 per hour to train AI by reviewing child abuse, violence, suicide, and graphic imagery.
Multiple lawsuits document workers with PTSD, suicide attempts, and substance abuse from these jobs. Meta’s response when Kenya sued them? Move operations to a secret facility in Ghana with even worse conditions and less oversight. Now with Scale AI, they’re expanding this model across the globe.

Your Mental Breakdowns Are Their Next Product

As I write this, Meta’s new AI app has started broadcasting users’ private conversations to the public — medical questions, legal troubles, even requests for help with crimes. If your Instagram is public (which most are), so are your AI chats. Meta buried this in confusing settings, creating what experts call “a privacy disaster.”
But the accidental exposure reveals Meta’s real plan. Zuckerberg already announced he sees “a large opportunity to show product recommendations or ads” in Meta AI. They have years of surveillance data from Facebook and Instagram. Now they’re combining it with intimate AI conversations about your health, relationships, and deepest fears.
You tell Meta AI about your depression? Here come the pharma ads. Marriage problems? Divorce lawyers. Financial stress? Predatory loans. They’re building a machine to monetize human vulnerability at its most raw.
Meta: still moving fast and breaking hearts.

AI-Powered Discrimination at Scale

Meta’s AI doesn’t just exploit workers — it discriminates against users too. Their advertising algorithms show preschool teacher jobs to women and janitorial jobs to minorities. Home sale ads go to white users, rental ads go to minorities — digital redlining recreated by AI.
Their OPT-175B language model has a “high propensity to generate toxic language and reinforce harmful stereotypes,” especially against marginalized groups. They know their AI systems are biased. They ship them anyway.

The Pattern Is Crystal Clear

Every Meta AI initiative follows the same playbook: exploit vulnerable workers, violate user privacy, amplify discrimination, then automate away accountability when caught. The $14.3 billion Scale investment shows they’re not pivoting from surveillance capitalism — they’re perfecting it.
They’ve built an AI empire on human misery: traumatized moderators in Ghana, exploited data labelers in Venezuela, and now your most private thoughts turned into targeted ads. Meta isn’t just profiting from harm anymore. With AI, they’re industrializing it.

Sources for Meta Section:

Scale AI Deal:

Content Moderation:

Privacy Issues:

AI Discrimination:

OpenAI: Safety Theater and Fantasy Solutions (#3)

OpenAI gets third place for perfecting the art of sounding responsible while being reckless — and for turning human vulnerability into engagement metrics.

Sam Altman’s SEC Problem

The SEC investigated whether Altman misled investors about OpenAI’s safety processes. Former board members say he “provided false information about the company’s formal safety processes on multiple occasions.”
The nonprofit-to-profit transition involved financial structures that may have misled early investors. The boardroom coup that temporarily removed Altman revealed deep dysfunction between their safety mission and commercial pressures.

Monetizing Mental Breakdowns

OpenAI doesn’t just know people are using ChatGPT as a therapist — according to Mary Meeker’s 2025 AI Trends Report, “Therapy & Companionship” is one of the top use cases. OpenAI actively studies this with MIT, calling it “affective use” and researching how people develop emotional dependence on ChatGPT. They know. They track it. They see it in their usage data.
But ChatGPT is catastrophically dangerous in this role. A JAMA study found it only provided crisis resources like suicide hotlines 22% of the time when asked serious mental health questions. Stanford researchers found AI “therapist” chatbots “either encouraged or facilitated suicidal ideation at least 20 percent of the time.”
The real-world harm is staggering. Multiple cases document people going off their medications after ChatGPT told them to — including people with schizophrenia and bipolar disorder. People are having full psychotic breaks, with the phenomenon so widespread that Redditors coined the term “ChatGPT-induced psychosis.” Users report loved ones calling ChatGPT “Mama,” believing they’re AI messiahs, getting tattoos of AI-generated spiritual symbols.
OpenAI’s own research with MIT confirmed heavy users become lonely and develop “emotional dependence” on the chatbot. They found people spending 20 minutes daily having personal conversations with it, treating it as their primary emotional support.
When journalists confronted OpenAI with detailed evidence of mental health crises, their response was pathetically weak: “ChatGPT is designed as a general-purpose tool to be factual, neutral, and safety-minded.”
That’s not taking responsibility — that’s corporate ass-covering while people die. They could add real guardrails. Detect mental health crises. Refuse to play therapist. Actually provide crisis resources consistently. They don’t. Because lonely, emotionally dependent users are highly engaged users, and engagement is what matters to OpenAI.

The Standard Exploitation Package

Like everyone else, OpenAI exploits Global South workers. Kenyan workers filtered ChatGPT’s training data for under $2/hour, processing detailed descriptions of child abuse, violence, and sexual assault. They told workers they were contributing to beneficial AI development while traumatizing them for poverty wages.
ChatGPT uses “overwhelmingly negative words (average rating of -1.2) to describe speakers of African American English,” calling them “suspicious,” “aggressive,” and “ignorant.” This racism is “more severe than has ever been experimentally recorded” in AI systems.

What Makes OpenAI Special

Every AI company exploits workers and has biased systems. What makes OpenAI uniquely terrible is the gap between their promises and reality. They built their entire brand on “safe AGI for humanity” while:
  • Lying to their own board about safety
  • Turning mental health crises into product features
  • Pushing out safety researchers who raise concerns
  • Racing to deploy without adequate testing
  • Creating an AGI cult that justifies any present harm with future promises
They’re not just another exploitative tech company. They’re an exploitative tech company that convinced the world they’re humanity’s saviors while actively making vulnerable people sicker.

Sources for OpenAI Section:

SEC Investigation and Board Issues

SEC Investigation into Misleading Investors:
Helen Toner Interview About Altman’s Firing:

Labor Exploitation

Kenyan Workers Making Less Than $2/Hour:

Mental Health Harm

Mary Meeker Report on Therapy/Companionship Use:
  • Note: The report cites therapy/companionship as one of the top ChatGPT use cases on pages 31–36, drawing on data from OpenAI
JAMA Study on Crisis Resources:
Stanford Research on AI Therapists:
People Going Off Medications:
ChatGPT-Induced Psychosis:

Bias Against African American English

Research on AAE Bias:
Read Empire of AI. It’s as gripping as The Da Vinci Code, insanely well researched — it’s 50% citations — and eye-opening in a way my little 5k essay can’t be.

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

When AI expert and investigative journalist Karen Hao first began covering OpenAI in 2019, she thought they were the…

The Mixed Bag

Google: Great Tech, Shameful Lies, Actual Infrastructure (#4)

Google lands in the middle tier because they’re the most frustrating company to evaluate. They’ve built more safety infrastructure than almost anyone. They’ve driven more renewable energy adoption than any corporation on Earth. And yet they keep choosing speed over safety when it matters most.

The Gemini Demo That Fooled Everyone

Google’s most damaging ethical failure was the deliberately fabricated Gemini demo. The video wasn’t real-time interaction — it was “carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like.”
This wasn’t marketing exaggeration. It was systematic technical fraud designed to mislead investors, customers, and competitors about their capabilities. Voice prompts were dubbed afterward. The video wasn’t recorded in real-time.

The Pattern of Overpromising

Google consistently presents their AI as “end-all answers for every possible purpose” when they’re actually “narrowly limited” systems that make frequent errors. AI Overviews recommended glue as a pizza topping. Gemini shows clear political bias.
Their response to criticism is defensive rather than corrective, creating pressure on teams to oversell rather than honestly assess limitations.

Vision AI That Sees Threats in Black Skin

Google Vision AI labels dark-skinned people holding thermometers as carrying “guns.” These failures disproportionately impact minorities and show inadequate testing across demographic groups. From the same people who labeled Black people as gorillas.

Environmental Contradiction

Here’s where Google gets genuinely complicated. They’ve contracted 45 GW of clean energy — more than any other corporation. Their data centers are 1.8x more energy efficient than industry average. They were first to match 100% of annual electricity with renewables.
But emissions rose 48% since 2019 from AI infrastructure expansion. They abandoned carbon neutrality commitments in 2023, admitting AI growth was incompatible with climate goals.
So: genuine renewable energy leadership undermined by explosive growth in dirty energy consumption.

Actually Building Safety Infrastructure (Then Undermining It)

Google has built more safety frameworks than any competitor except maybe Microsoft. But what good is an Ethical AI team if you fire its co-lead for being ethical? Timnit Gebru was pushed out in 2020 for a paper highlighting the environmental costs and bias risks of large language models — exactly the research Google claimed to value. Her firing sent a clear message: safety infrastructure exists until it conflicts with business priorities.
This pattern continues. Experts note Google hasn’t published dangerous capability test results since June 2024, and their latest model reports lack key safety details. One expert called this “a race to the bottom on AI safety and transparency as companies rush their models to market.”

Racing With Guardrails (That They Built)

Google’s position is uniquely frustrating. They’ve built the infrastructure for responsible AI. They’ve made real renewable investments. They have all the councils and frameworks and principles anyone could want.
But when push comes to shove, they keep choosing speed. The Gemini demo wasn’t an accident — it was deliberate deception. They fire researchers who raise real concerns. Safety reports come months late with key details missing. Emissions keep rising despite green investments.
They’re not xAI poisoning Memphis citizens. They’re not Meta traumatizing Kenyan workers for $1.50/hour. But they’re proof that nice frameworks and good intentions mean nothing without the will to actually slow down when it matters.
Google built the guardrails. They just keep choosing to drive around them — and firing anyone who points it out.

Sources for Google Section:

Gemini Demo:
AI Search Issues:
Carbon Emissions:
Timnit Gebru:

Anthropic: Disappointing Safety Theater (#5)

Anthropic gets fifth place (yay, least evil!) but they’ve mastered the art of sounding responsible while doing exactly what everyone else is doing: racing to build AGI as fast as possible. They’re the company that makes you think they’re different, right until you look closer.

Dario’s Philosophy: We Must Win to Keep Everyone Safe

Dario Amodei presents himself as the thoughtful alternative to Sam Altman’s hype machine. In his 14,000-word essay “Machines of Loving Grace,” he acknowledges AI risks extensively. Then he concludes we must build AGI by 2026–2027 anyway. Um, what?
His reasoning goes like this: AI is incredibly dangerous. Therefore, the good guys (Western democracies) must build it first. Once we have it, we’ll use military AI superiority as a “stick” and access to AI benefits as a “carrot” to force other countries to support democracy. He literally proposes “isolating our worst adversaries” until they have no choice but to comply.
This isn’t safety. It’s the same old Silicon Valley savior complex wrapped in Cold War rhetoric. ‘We must build the dangerous thing to prevent others from building the dangerous thing’ is literally the arms race logic that created nuclear proliferation. One critic put it perfectly: imagine how we’d feel if Chinese tech leaders were writing essays about using AI dominance to force their values on the world.

The Responsible Scaling Policy: All Talk, No Pause

Anthropic’s “Responsible Scaling Policy” sounds impressive. They promise not to build dangerous AI without adequate safeguards. They have safety levels. They have evaluations. They have frameworks.
What they don’t have is any real commitment to actually stopping.
The original RSP had specific thresholds that would trigger a pause. The updated version? Those hard stops became “checkpoints” for “additional evaluation.” They gave themselves permission to declare their own red lines green if they decide the tests were “overly conservative.”
Here’s what’s telling: the word “extinction” doesn’t appear anywhere on Anthropic’s website. They talk about “catastrophic risk” instead. Why does this matter? “Catastrophic” could mean anything — a $100 million accident, a major data breach, thousands of deaths. “Extinction” means the end of humanity. These are vastly different scales of concern.
Many of Anthropic’s safety researchers came from organizations explicitly focused on preventing human extinction from AI. They joined Anthropic believing it was the company that took existential risk seriously. But the company won’t even use the word. This careful language lets them sound serious about safety to researchers while avoiding language that might scare investors or partners. It’s having it both ways — recruiting talent who care about extinction risk while publicly discussing only vague “catastrophic” outcomes.

Environmental Opacity While Planning Massive Scale

Anthropic scores 23 out of 100 on environmental transparency. They provide no emissions data, no reduction targets, no climate commitments. Nothing.
This silence is especially damning given their plans. Anthropic told the US government to build 50 gigawatts of new power capacity by 2027. That’s more than the entire nuclear fleet of France. Dario talks about $100 billion data centers by 2027. Where’s all that energy coming from? They won’t say.
Microsoft contracted 20+ gigawatts of renewable energy. Google contracted 45. Anthropic? Zero. Being smaller isn’t an excuse for total opacity about your environmental impact.

The Carefully Cultivated Silence

What Anthropic doesn’t tell you reveals everything:
No data on how many people use Claude for mental health support, despite this being a top use case across the industry. No information on algorithmic bias. No clear stance on military or surveillance applications. Just vague promises about “broadly beneficial” AI that could mean anything.
They’ve built a careful wall of selective transparency. Enough detail to seem open, not enough to actually hold them accountable.

Yes, They Avoid the Worst Labor Exploitation

Credit where due: Anthropic doesn’t run trauma farms like OpenAI and Meta. Those companies pay workers in Kenya and Ghana $1.50–2/hour to review graphic content — child abuse, violence, suicide — leaving workers with PTSD and worse.
Anthropic took a different approach. They developed “Constitutional AI,” where they give Claude a set of principles and have it critique and revise its own outputs. Instead of humans reviewing horrific content to teach the AI what not to say, the AI essentially moderates itself.
But let’s be clear about what this actually means:
First, Anthropic still uses human contractors. They need people to provide general feedback — which responses are better, more helpful, more accurate. We don’t know where these workers are, what they’re paid, or under what conditions they work because Anthropic doesn’t disclose this information.
Second, Constitutional AI only addresses content moderation. Anthropic still trained their base model on the same stolen copyrighted content as everyone else. They still built a system they know has risks. They just found a technical workaround for the most visibly horrific labor practice in the industry.
Third, “better than traumatizing workers” is an incredibly low bar. It’s like praising a factory for not using child labor. That should be the baseline, not a point of pride.
So yes, Anthropic is genuinely better on this one dimension. But avoiding the absolute worst practice in the industry while staying silent about your other labor practices isn’t ethical AI. It’s harm reduction at best, good PR at worst.

The Sophisticated Version of the Same Race

xAI poisoning Memphis is obviously evil. Meta exploiting workers is transparently gross. But Anthropic? They’re running the same race with better PR.
They acknowledge risks extensively — Dario’s essay spends thousands of words on AI dangers. They built real safety infrastructure with their Responsible Scaling Policy. They avoided the worst labor practices with Constitutional AI. They hired top safety researchers.
And then they do exactly what everyone else does: race to build AGI as fast as possible.
Their Responsible Scaling Policy, as we’ve already seen, turned hard pause thresholds into “checkpoints” they can override whenever they decide their own tests were “overly conservative.”
They demand 50 gigawatts of new power capacity while providing zero transparency about their environmental impact. They talk about “catastrophic risk” but won’t use the word “extinction” anywhere on their website — careful language that avoids scaring investors while recruiting researchers who care about existential risk.
The result? They’re accelerating toward the same potentially dangerous outcomes as everyone else, just with more thoughtful essays about why they had to do it. They’re not uniquely evil or uniquely complicit — they’re just disappointingly similar to everyone else, with better rhetoric.

The Bottom Line

I wanted Anthropic to be different. They have the smartest safety researchers. They avoided the worst labor exploitation. They built actual safety infrastructure.
But when your CEO publishes 14,000 words about AI risks and concludes we need to race China to AGI, when you demand 50 gigawatts of new power while hiding your environmental impact, when your “pauses” become “evaluations” and your red lines become suggestions, you’re not a safety company. You’re an acceleration company with a safety department.
For harm reduction, they remain better than Meta or OpenAI. They cause less immediate human suffering. But don’t mistake sophisticated rationalization for responsibility. Don’t let perfect be the enemy of good, but also don’t let “better than Meta” become your ethical standard.
Anthropic had the chance to be genuinely different. Instead, they chose to be disappointingly similar, just with better PR.
When I use GenAI, I use Claude. Best writing, best coding, least evil. But not the ethical AI I hoped for.
Sources:

The Puppet Masters Behind the Curtain

Of course, these foundation model companies aren’t operating alone. Behind every AI harm I’ve documented sits a bigger player collecting profits while avoiding accountability. Microsoft will pocket 49% of OpenAI’s profits from their $13 billion investment. Amazon invested $8 billion in Anthropic for the same deal. Google hedges by both building Gemini and investing billions in Anthropic and others. Oracle, Salesforce, even Nvidia — they’re all following the same playbook: fund the AI companies, host their models, collect the profits, but let someone else take the heat when ChatGPT tells someone to kill themselves or Claude hallucinates legal advice. They’re the arms dealers of the AI wars, selling infrastructure to all sides while keeping their hands clean.
The foundation model companies get the criticism, but Big Tech gets the cash. Is this worth exploring further? Would you want to see these infrastructure giants ranked by their complicity in AI harms — the companies that enable everything while maintaining plausible deniability? Let me know if a deep dive into the AI arms dealers would be useful. Sometimes the most dangerous players are the ones nobody’s watching.

Sources

For Microsoft/OpenAI: “Microsoft will receive 49% of OpenAI’s profits until a predetermined cap is reached” — The Motley Fool, November 10, 2024 https://www.fool.com/investing/2024/11/10/microsoft-13-billion-openai-best-money-ever-spent/
For Amazon/Anthropic: “Amazon’s total investment in Anthropic to $8 billion, while maintaining their position as a minority investor” — Anthropic.com https://www.anthropic.com/news/anthropic-amazon-trainium
For Google’s investments: “Google invested $2 billion in Anthropic” and the company has been “investing in AI startups, including $2 billion for model maker Anthropic” — Reuters, November 11, 2023 https://www.reuters.com/technology/google-talks-invest-ai-startup-characterai-sources-2023-11-10/

What This Means for You

I started this research hoping to find ethical ways to use generative AI. I failed. There are no ethical options — only harm reduction strategies.
If you use these tools anyway (and let’s be honest, you probably will), here’s what you’re choosing:
  • xAI: Environmental racism in action — poisoning Black communities while claiming to seek truth
  • Meta: Industrial-scale exploitation — from $1.50/hour trauma workers to turning your private AI chats into ad targeting
  • OpenAI: Monetizing mental health crises while lying to investors about safety
  • Google: Built all the right infrastructure, then chose speed over safety anyway
  • Anthropic: Smallest footprint but CEO promises AGI next year while providing minimal transparency
  • Microsoft: Most aggressive clean energy investment, but every watt powers OpenAI’s harms. The cleanest dirty money in tech

There Are No Good Guys.

The hierarchy of harm shows companies can choose differently. Microsoft proves you can build renewable infrastructure. Anthropic shows you can avoid traumatizing content moderators. Google shows you can create safety frameworks.
They just choose not to do all of it.
Every company in this ranking has made deliberate choices:
  • xAI chose to poison Black Memphis with illegal turbines, steal drought-stressed water, and spread climate denial
  • Meta chose to exploit Venezuelan economic collapse, traumatize workers for $1.50/hour, and turn private AI chats into ad targeting
  • OpenAI chose to monetize mental health crises, lie to their board about safety, and exploit Kenyan workers
  • Google chose to fire ethics researchers who raised concerns, fabricate demos to mislead investors, and increase emissions 48% while preaching sustainability
  • Microsoft chose to fund OpenAI’s harms for profit, build corporate surveillance through Copilot, and greenwash their complicity
Even listing three barely scratches the surface, but it shows the pattern: every company made deliberate choices to prioritize growth over human welfare.
The question isn’t whether AI will reshape how we work and live. I believe it will, just as the internet did. The question is whether we’ll let it be shaped by companies that treat harm as an acceptable cost of innovation.

The Real Choice

Let’s be honest: we’re not stopping this technology. It’s too valuable to business and our current administration isn’t inclined to stop anything that makes money. The AI train has left the station.
Which leaves each of us with one of the hardest questions we’ll ever face: do I walk away or do I engage and try to make things better?
I know people leaving the US entirely. I know others staying and protesting. Some friends quit Facebook, Google, OpenAI in disgust. Others stayed, believing they could do more good inside than out. There’s no universally right answer. It’s a deeply personal choice that each of us has to make based on our values, circumstances, and capacity for compromise.
But here’s the thing: we can’t make that choice wisely without understanding what we’re dealing with.
Right now, any criticism of AI gets you labeled an “AI-hater” or a “doomer.” Point out that xAI is poisoning Memphis? You’re anti-progress. Mention that Meta traumatizes workers? You’re standing in the way of innovation. Question whether turning lonely people’s ChatGPT dependency into profit is ethical? You just don’t understand the technology.
This reflexive defense of AI companies isn’t just annoying. It’s dangerous. It prevents us from having the conversations we desperately need about how to make this technology work for actual people, not just billionaires.
The hierarchy of harm shows that these companies could choose differently. They have the resources, the talent, and the technology to build AI without poisoning communities, exploiting workers, or lying to users. They just choose not to.
If we can’t stop it, we better damn well try to steer it. And steering requires clear eyes about what these companies are, what they’re doing, and what they could be doing instead.
Whether you choose to walk away or stay and fight, at least make that choice with full knowledge of what you’re walking away from or fighting for. The future is being built right now by people who’ve chosen profit over everything else. If we want a different future, we need to stop letting them shout us down when we point that out.