Security in the age of AI: Mythos and going back to basics

AI models like Anthropic's Mythos are finding and exploiting vulnerabilities faster than humans can keep up. Learn why the answer to AI-driven cyber threats starts with getting the basics right.

Ivan Ristic·Chief Scientist
Published: April 20, 2026·5 min read

Since the beginning of computing, our approach to security has been to kick the proverbial can down the road. For decades, the mantra has been "deploy now, secure it later". Over the years this approach became deeply ingrained in how we do things, with an elaborate security industry growing up around it to keep the wheels of business turning while never addressing the root causes.

But now, AI is coming for cybersecurity and there is nowhere to hide.

AI security skills have significantly improved

We're talking about this now because what we thought would happen at some point in the future is happening in front of our very eyes. A few days ago, Anthropic, one of the leading AI vendors, announced their frontier AI model, Mythos, which is apparently very, very good at finding security problems and exploiting them. They also announced Project Glasswing, their effort to responsibly disclose thousands of newly discovered vulnerabilities.

There is a palpable sense of panic among security practitioners. Governments are issuing warnings to businesses (Canada, UK). This is AI doing its thing: human knowledge is packaged, replicated, and replayed at low cost, for better and worse.

To understand why this is hitting security so hard, we first need to understand how computer security works today. In a nutshell, and simplifying somewhat: nothing is secure. The security industry is not designed to fix the underlying problems but to maintain an equilibrium so that business can carry on. In truth, human endeavors are messy, computer networks even more so, and none of this stuff is easy to fix.

To take just one example, the latest release of Chrome, one of the most sophisticated and cared-for pieces of software, includes a total of 60 security fixes. Of those, 2 are rated critical and 14 are rated high.

The crucial change is that AI is now finding problems, either in software or computer networks, at a rate at which we can no longer keep up. Doing more of the same doesn't work.

Evidence that AI is already affecting security

It would be easy to dismiss Mythos as a well-executed PR stunt, but all other indicators and trends are pointing in the same direction. In the UK, the AI Security Institute tests AI models against simulated "cyber ranges" that mimic real environments. They announced that Mythos is the first model that successfully completed one of their ranges. Unattended.

In the open source space, which we can use as a canary of sorts, AI was initially a negative force that resulted in a deluge of useless problem reports. No longer. The reports arriving today are good, and virtually all generated with the help of AI. Other projects are reporting similar experiences and complaining that it's difficult to keep up.

At the same time, HackerOne is reporting a significant increase in bug submissions, attributed to AI amplifying what security researchers can do. Companies are trying their best to keep up, with increased remediation rates, but they have been unable to match the growth in submissions.

If you want to get more technical, read this testimonial from an actual exploit writer who gave AI a chance. The conclusion? It's clumsy still, but it works... and it's only going to get better. Unlike human operators, who have an inherent limit enforced by our biology, AI systems have the potential for self-learning at a scale that can vastly surpass ours. Again and again, we have to remind ourselves that this is the worst version of technology we'll ever see.

The short answer: Start with the basics

What to do now? One direction, possibly the easier option in the short term, is to do what we've always been doing, just do it better. That's easier said than done. AI agents could help with defense, but there's always going to be an asymmetry: attackers can let their offensive agents loose without worry, whilst defenders still have to maintain oversight of what their defensive agents are doing. Besides, AI agents themselves have a way to go before they are secure... a problem that the attacking side doesn't have.

But the emergent long-term advice is to go back to the basics. It may seem strange at first because we're so used to the current state of affairs, but insecurity is not inevitable. Complex environments will no doubt always lead to exploitable weaknesses, but there is so much low-hanging fruit that we could be picking right now.

What are these basics? First of all, every organization needs an accurate asset inventory, continuously updated, at machine speed. That's the same speed at which the attackers are operating. Second, ruthlessly remove from the internet anything that doesn't need to be there. Obscurity has worked for security for a long time, but only because it was expensive to break through. If the cost of discovery and exploitation approaches zero, security through obscurity will no longer be viable. Third, what remains online has to be hardened and well configured. Fourth, activity should be monitored and detection improved.
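The first two basics combine into a simple, automatable check: diff what is actually reachable from the internet against the approved asset inventory, and flag the difference. Here is a minimal sketch of that idea in Python; the hostnames, ports, and allowlist are invented for illustration, and a real pipeline would feed it live scan data rather than hard-coded sets.

```python
# Illustrative sketch: compare an observed internet-facing footprint
# against an approved asset inventory and flag anything that is
# exposed but not supposed to be. All host/port data is hypothetical.

def unexpected_exposure(observed, approved):
    """Return sorted (host, port) pairs that are reachable but not approved."""
    return sorted(set(observed) - set(approved))

# Hypothetical scan results (what the internet can actually see).
observed = {
    ("app.example.com", 443),
    ("app.example.com", 22),       # SSH left open to the world
    ("legacy.example.com", 8080),  # forgotten staging server
}

# Hypothetical inventory of services approved to be online.
approved = {
    ("app.example.com", 443),
}

for host, port in unexpected_exposure(observed, approved):
    print(f"REVIEW: {host}:{port} is exposed but not in the inventory")
```

Run continuously, a check like this turns "remove what doesn't need to be there" from an annual audit into an alert that fires within minutes of something new appearing, which is the machine speed the inventory needs to operate at.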

In the long term, we just have to stop adding new insecure systems to our networks. Are we witnessing the moment when deploying insecure software stops being economically viable? Our only option going forward is to actually figure out how to write and deploy secure software. Back to the basics indeed.

Ivan Ristic
Chief Scientist

Ivan Ristic is the Chief Scientist for Red Sift and former founder of Hardenize. Learn more about how Red Sift helps organizations with their Certificate Monitoring.