
AI takes over - without war, without drama

When we talk about the risks of artificial intelligence, we often reach for gloomy end-time scenarios in which rebellious machines fight humanity. But according to Sam Altman, CEO of OpenAI, the company behind ChatGPT, reality is far more subtle - and perhaps that is precisely why it is so dangerous.

In an interview with Axel Springer CEO Mathias Döpfner on the MD MEETS podcast, Altman warns of a development that hardly anyone has on their radar: An AI that takes control not by command or force - but simply because we follow it voluntarily.

Sounds absurd? Not if you take a closer look.

Step by step towards dependency

Hundreds of millions of people already use AI tools such as ChatGPT every day - for texts, job applications, decisions, diagnoses and school assignments. Soon it will be billions. And the better the AI's answers are, the more trust in it grows.

Altman sums it up: "It gives you advice that you understand - and that you should almost always follow. Then it gets even smarter. And now it gives you advice that you no longer understand at all." Nevertheless, we continue to follow it. Why? Because the tips are almost always right - even if we no longer understand why.

And that's the catch: anyone who refuses to follow this advice risks being left behind, at work and in life. Suddenly you're no longer doing what you think makes sense - you're doing what "the model" recommends. Welcome to the age of algorithmic authority.

The secret power of the feedback loop

What makes this particularly explosive is a kind of digital vicious circle: we follow the AI's advice, which generates new data. The AI uses this data to improve itself. And the better it gets, the more we trust it. The cycle repeats - like a hamster wheel that nobody wants to step off, for fear of no longer keeping up.

Altman calls this a collective phenomenon: nobody is forcing us - and yet we all go along with it, because opting out means losing our competitive edge. A quiet, creeping loss of control that stems not from a revolution, but from convenience, efficiency and the fear of missing out.

AI as an invisible boss?

What Altman describes is not a Hollywood screenplay. It is a social development that has long since begun - and hardly anyone is talking about it. The central question is: who will actually control AI if it becomes smarter than us - and we continue to follow it anyway?

The answer remains vague. And perhaps that is precisely the problem: as long as we have no plan for dealing with a super-intelligent, constantly learning AI, we run the risk that at some point we will only be doing what "the model" wants.

Humans play along

The greatest danger is not that an AI will rebel - but that we will voluntarily capitulate. When people start making decisions they no longer understand themselves, simply because an AI recommends them, that is not progress. It is the end of personal responsibility. Perhaps it's time to ask not only what AI can do - but whether we really want everything it can do.
