Speaking of Mythos: When AI increases the pace of cyber threats

Anthropic recently unveiled a new AI model, Claude Mythos Preview. Unusually, Anthropic chose not to release the model publicly, on the grounds that it has acquired such strong cybersecurity capabilities that a public release would be irresponsible.
It’s a claim that deserves some skepticism, since Anthropic is speaking in its own interest. At the same time, it’s worth asking a more practical question: even if we ignore the hype, what is actually changing here, and why should IT management care?
The answer is less dramatic than it sounds, but all the more important: it's not about new types of vulnerabilities, but about how quickly they can be translated into working attacks.
AI models have long been able to detect vulnerabilities in code. Pointing out “there’s a buffer overflow risk here” or “this looks like an insecure SQL query” is nothing new. For that matter, rule- and pattern-based SAST tools have done this for years. AI has made these tools better and more flexible – but it hasn’t fundamentally changed the playing field.
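As a minimal illustration of the kind of finding both SAST tools and AI reviewers flag, consider SQL built by string concatenation. The function and table names here are invented for the example:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The classic finding: user input concatenated straight into
    # the SQL string, so input can rewrite the query (injection).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The standard fix: a parameterized query, where the driver
    # treats the input as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# A textbook injection payload: the unsafe version leaks every row,
# the parameterized version matches nothing.
payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2
print(len(find_user_safe(conn, payload)))    # 0
```

Spotting this pattern is exactly the step that has been automated for years; the hard part has always come after.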
The real bottleneck has been elsewhere.
Finding a vulnerability is one thing; going from that understanding to actual attack code is much harder. Historically, this has required time, expertise and manual work. That’s why the time between a published vulnerability and a working exploit has often been relatively long.
It is precisely here that something now seems to be happening.
In Anthropic’s technical descriptions of Mythos, as well as in independent tests from the British AI Security Institute, a picture emerges of a model that has become noticeably better at holding multiple threads of reasoning at once and combining them into a coherent solution. That sounds abstract, but in practice it is exactly the ability required to translate an identified vulnerability into a working attack. Not because the model has been trained specifically for this, but as a side effect of better general reasoning.
However, this does not mean that AI suddenly finds completely new classes of vulnerabilities. It is the same familiar ones: buffer overflows, injections, logic errors. The difference is that the timeline is shrinking: the step from known vulnerability to active exploitation is getting faster.
For organizations, this has a rather concrete consequence. Counting how many vulnerabilities you have in production is an increasingly poor measure. A much more relevant one is time to patch: how long it takes from when a vulnerability is published to when it is fixed in production. Letting serious known vulnerabilities sit month after month should today be considered negligent rather than a necessary compromise.
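A sketch of what tracking that measure could look like, with a record format and data invented purely for illustration:

```python
from datetime import date
from statistics import median

# Hypothetical findings: (id, date vulnerability was published,
# date the fix reached production). Not from any real tracker.
findings = [
    ("VULN-A", date(2025, 1, 10), date(2025, 1, 17)),
    ("VULN-B", date(2025, 2, 1),  date(2025, 3, 15)),
    ("VULN-C", date(2025, 2, 20), date(2025, 2, 24)),
]

def days_to_patch(published, fixed):
    # The metric the article argues for: elapsed days from
    # publication to a fix in production.
    return (fixed - published).days

lags = [days_to_patch(published, fixed) for _, published, fixed in findings]
print("median days from publication to fix:", median(lags))  # 7
```

A single median hides outliers like the 42-day fix above, so in practice you would also watch the worst case, but the point is the same: measure elapsed time, not open counts.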
The encouraging thing is that the task is not fundamentally new. We know what needs to be done. What has changed is that the margins have become smaller – and the clock is ticking faster.
If you want to know more about how Cybersafe AI can help make security a business-driving part of your organization, you are welcome to read more about what we do.