AI is corporations' new idol. It's time to smash its statue.

Mythology

Tech giants are trying to convince us that AI is the new god. They don’t mention that their language models guess patterns based on massive training datasets (usually scraped without creators’ consent), but instead claim that the model “generates code.” In their view, a single deity can replace an entire team of mere mortals toiling away at coding, marketing, and writing. And Sam Altman claims we might already be seeing the seeds of artificial general intelligence. The giants deliberately blur the line between statistical data processing and actual understanding.

Imagine humanity tainted by original sin—protein-based engineers and artists grow weary, need breaks, or (horrors!) want to spend time with their families. Human needs become a flaw requiring redemption. The savior? AI—emotionless, tireless, and free of salary demands. In Christianity, the sacrifice was Jesus’ death on the cross. Here, we sacrifice our data to corporations, and corporations sacrifice the environment—OpenAI’s servers reportedly consume more electricity annually than Iceland, and hardly any country’s energy is truly green. Corporations don’t mention that in their sermons.

Of course, these tools were built so their creators could profit and shareholders could reap returns. There’s nothing wrong with that—just remember whose interests the corporation will defend. Meta won’t stand up for creators whose texts it illegally scraped from Library Genesis. OpenAI won’t, out of the goodness of its heart, pay Studio Ghibli a single cent for using their films to train its models.

If corporations are building us a golden calf, we’ll need a furious Moses to smash it to dust.

Language in Service of the Giants

It doesn’t matter whether corporate CEOs truly believe what they preach. Sundar Pichai, CEO of Google, said: “AI is one of the most profound innovations humanity is working on. It’s more profound than fire or electricity.” Really? AI is so brilliant, yet it depends on that very electricity which is supposedly “less profound.”

Satya Nadella, Microsoft’s CEO, claims that “AI is the defining technology of our time. It amplifies human ingenuity and helps us tackle some of society’s most pressing challenges.” If this is such a monumental achievement, they should specify exactly which “pressing challenges” AI has solved. Maybe AI could finally advise corporations on whom they should pay for stolen data?

Sam Altman, it seems, has a deep sense of mission: “Our mission is to ensure that artificial general intelligence benefits all of humanity.” As long as people in Africa still have to walk hours every day for water, this is just hot air.

The real impact lies in how their words shape the AI discourse. Let’s turn to Edward Bernays’ book Propaganda, where he describes the relationship between human psychology, democracy, and corporations.

He writes: “The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of.”

Should we poke holes in this narrative and shout, “The emperor has no clothes!”?

Yes, because I’ve already shown how our thinking is being shaped. Why should entities we don’t know—whose sole goal is selling their product—dictate what we should think and feel?

Sam Altman claims AI will make programmers 10x more efficient. Who benefits? The programmers themselves? Unlikely—salaried or pseudo-freelance (B2B for a single company) coders won’t see a tenfold pay raise for their increased productivity. OpenAI’s shareholders, however, will profit handsomely when another company buys ChatGPT’s premium version. And the gain is twofold: programmers accustomed to daily ChatGPT use will be more inclined to take the path of least resistance than to write code from scratch, juggling documentation on one screen and the app’s code on the other. Halting AI use at that point would feel like stagnation: they would have to relearn how to work from raw documentation.

Active Resistance

So, can we use AI tools to protect our own interests? First, if climate concerns are a dealbreaker for you, then no—unless you’re certain the service runs mostly on renewables or nuclear power. Alternatively, if you live in a country with green energy, you can run models locally. The local approach is the more advantageous one: you lose some speed, but you retain control over your data.
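What does running a model locally look like in practice? A minimal sketch in Python, assuming a local runtime such as Ollama is already serving an open-weight model on its default port; the model name and the prompt are placeholders, not recommendations:

```python
import json
import urllib.request

# Assumption: a local runtime (e.g. Ollama) listens on its default port and
# already has some open-weight model pulled; "llama3" is just a placeholder name.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the locally hosted model; nothing leaves your machine."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_local_model("Summarize this email thread in three bullet points: ..."))
```

It is slower than a hosted service, but the prompt and the answer never leave your own hardware.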

From a consumer’s perspective, the model should serve you, not Sam Altman, DeepSeek’s creators, Meta, or Google.

Treat large language models as dangerous tools, not magical helpers. You heard that right—repeat it until it sticks. They’re dangerous tools. You can get things done with them, but remember: if you ask them to make decisions, you bear the consequences.

A screwdriver is dangerous—you could stab your finger or stick it in a socket—but it’s meant for dull, repetitive tasks. And the user can still mess up: under-tighten a screw, skip one, or miss the mark. If a factory burns down because you didn’t tighten a bolt, the judge will laugh if you blame the screwdriver for not “hitting the screws right.” So why let a screwdriver carve woodcuts just because it can chisel something out?

The problem with comparing AI to hand tools? A hammer won’t generate a statistically convincing answer when asked. But framing AI as a dangerous tool helps us think about responsibility and creative agency. And the subtext is deliberate: reducing AI to “just a tool” is exactly the point.

There’s nothing wrong with generating email templates, but letting a model decide who to do business with is unacceptable. If we believe we’re the architects of our fate, yet let a tool made by some rando in a suit dictate our actions, that’s a contradiction.

What if we grow dependent on AI tools and corporations suddenly hike prices? Or GPU makers inflate costs until local models become unaffordable? We can’t rule out losing access to language models altogether.

The best approach? Use AI for tedious tasks sometimes, but also do them manually to stay sharp. Write an email from scratch. Draft a sales pitch yourself.

Another safeguard is preventing market monopolization and diversifying tools. To avoid being at one giant’s mercy, use multiple models, weighing their pros and cons. If one link breaks, another can keep your workflow alive. Relying solely on OpenAI is unacceptable—why stick to models by American or Chinese giants when France’s Mistral or Poland’s Bielik exist?

Preventing monopolies is the state’s job, not mine. All I can do is spread my bets across different tools.
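In code, spreading the bets can be as simple as a thin wrapper that tries several interchangeable backends in order and moves on when one fails. The backend functions below are hypothetical placeholders (in practice they would call Mistral, Bielik, a local model, and so on); the point is that the workflow depends on the wrapper, not on any single vendor:

```python
from typing import Callable, List

# Hypothetical placeholders: each backend takes a prompt and returns text.
def ask_mistral(prompt: str) -> str:
    raise NotImplementedError("call Mistral's API here")

def ask_bielik(prompt: str) -> str:
    raise NotImplementedError("call a Bielik deployment here")

def ask_local(prompt: str) -> str:
    raise NotImplementedError("call a locally hosted model here")

def ask_any(prompt: str, backends: List[Callable[[str], str]]) -> str:
    """Try each backend in order; skip any that is down, rate-limited, or paywalled."""
    errors = []
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as exc:  # a real wrapper would catch narrower error types
            errors.append(f"{backend.__name__}: {exc}")
    raise RuntimeError("All backends failed:\n" + "\n".join(errors))

# The order of the list encodes your own priorities: privacy, price, jurisdiction.
# answer = ask_any("Draft a polite follow-up email.", [ask_local, ask_bielik, ask_mistral])
```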

As you can see, revolution doesn’t require throwing stones at OpenAI’s HQ. The user’s internal shift is key to winning this David vs. Goliath battle. Start asking uncomfortable questions. Challenge corporate narratives. And for the world I live in? I wish for more heretics.

Filip Cichowski