Image: Konektus Photo / Shutterstock.com
This is exactly the kind of story that is likely to be making many companies nervous right now. An AI platform, built to boost efficiency, facilitate knowledge transfer, and speed up processes, becomes a vulnerability in itself. As reported by the Golem portal, this is exactly what is said to have happened with McKinsey’s internal platform “Lilli.” Security researchers at Codewall claim they deployed an autonomous AI agent on the system—and that it reportedly gained extensive access within just two hours.
Not with stolen passwords. Not with an insider. But apparently just with a domain name, publicly accessible interfaces, and a system that wasn’t properly secured in a critical area.
Getting in was probably easier than expected
Lilli is not just a small side project at McKinsey. The platform is used internally to search for documents, make company knowledge accessible, and consolidate information more quickly. That is precisely why this case is so sensitive. After all, when a large amount of content is centralized in one place, the potential damage can quickly become significant.
According to the researchers, the AI agent discovered numerous publicly documented API endpoints. Some of these were reportedly accessible without adequate security measures. Through one of these channels, the agent was able to inject commands into the database. From that point on, the path was apparently wide open.
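The report doesn't disclose the exact injection technique, but the general failure mode is well known: an API endpoint passes user input straight into a database query instead of treating it as data. The following is a minimal, hypothetical sketch (the table, queries, and payload are illustrative, not from the Lilli incident) showing how an unsanitized search parameter can bypass an access restriction, and how a parameterized query closes the hole:

```python
import sqlite3

# Hypothetical demo database -- not McKinsey's actual system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, title TEXT, restricted INTEGER)")
conn.executemany("INSERT INTO docs VALUES (?, ?, ?)",
                 [(1, "public handbook", 0), (2, "client report", 1)])

def search_vulnerable(term):
    # String concatenation: the caller's input is interpreted as SQL.
    query = f"SELECT title FROM docs WHERE restricted = 0 AND title LIKE '%{term}%'"
    return conn.execute(query).fetchall()

def search_safe(term):
    # Parameterized query: input stays data, never becomes SQL code.
    return conn.execute(
        "SELECT title FROM docs WHERE restricted = 0 AND title LIKE ?",
        ("%" + term + "%",),
    ).fetchall()

payload = "' OR 1=1 --"  # classic injection string; "--" comments out the rest

print(search_vulnerable(payload))  # leaks the restricted "client report" row too
print(search_safe(payload))        # returns nothing: the payload is just a search term
```

In the vulnerable version, the payload rewrites the `WHERE` clause so the `restricted = 0` filter no longer applies; in the safe version, the same input is merely an (unmatched) search string.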
The researchers put it quite bluntly themselves: “So we decided to put our autonomous offensive agent to work on it. No login credentials. No insider knowledge. And no human intervention. Just a domain name and a dream.”
Suddenly, more was revealed than should have been
According to Codewall, it didn’t stop at a small test access. The agent is said to have ultimately accessed millions of chat messages, hundreds of thousands of files, and tens of thousands of user accounts. It was also reportedly able to interfere with the system’s behavior.
The key question is whether this data was actually accessed or analyzed on a large scale. McKinsey emphasizes that there is no evidence that client data or confidential information was accessed by unauthorized parties. The company also stated that the vulnerability was patched very quickly after it was reported.
Still, an uneasy feeling lingers. When an external test penetrates so deeply into a system, it doesn't just point to a technical problem. It shows just how vulnerable key AI platforms can become when speed is prioritized over proper security measures.
The real wake-up call is for almost all companies
This incident is more than just an embarrassing moment for McKinsey. It’s a preview of what many companies may still face. AI agents are becoming not only more useful, but also more dangerous. They can search for, test, and exploit vulnerabilities—quickly, persistently, and with little guidance.
That is precisely why the lesson here is quite clear: when you bring AI into your company, you’re not just boosting productivity—you’re also introducing new risks. Many people are currently raving about the opportunities. Far too few are talking about the vulnerabilities. And that is exactly what could come back to haunt you.
Source: golem.de