From Mythos to Logos and the coming vulnerability wave
AI may increase the number of discovered vulnerabilities in the short term. But the same capabilities will also help developers, vendors, and security teams prevent and remediate weaknesses earlier.

TL;DR
In this blog post, we explore how AI will likely accelerate vulnerability discovery, creating a short-term surge in findings, advisories, and remediation pressure. But the same AI capabilities that help attackers identify weaknesses faster will also help defenders prevent, test, prioritise, explain, and remediate them.
The real challenge will not be patching everything faster, but understanding which exposures represent real operational risk for your organisation, and addressing what matters first.
This is the practical meaning of Logos in vulnerability management: not reacting to every new signal with equal urgency, but applying evidence, context, and prioritisation to decide what requires action. Organisations that do that will be better positioned to navigate the coming vulnerability wave.
What is Mythos?
Claude Mythos Preview is a general-purpose, unreleased frontier model from Anthropic with strong cybersecurity capabilities, particularly in finding and exploiting software vulnerabilities.
According to Anthropic, Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities become widely available, potentially beyond actors who are committed to deploying them safely.
If frontier AI models can accelerate vulnerability discovery, Boards of Directors will naturally ask how security leaders are preparing for a world where attackers may find and exploit weaknesses faster than before.
Mythos is not only a story about attackers becoming faster. It is also a story about defenders gaining access to better tools.
A new phase in vulnerability and exposure management
Vulnerability management has always been a race between discovery, exploitation, detection, and remediation.
Vulnerabilities are disclosed, scanners detect affected systems, security teams prioritise based on severity, and IT or development teams work through remediation according to available capacity and business constraints.
That model is now under pressure. Organisations are already facing more disclosed vulnerabilities, faster exploitation timelines, and security teams struggling to prioritise findings across fragmented tools.
In mnemonic’s Threat Exposure Management recommendations for 2026 we highlighted that traditional proactive security models based on scheduled scans and compliance-driven patching cannot keep pace with today’s threat landscape. The challenge comes down to two critical areas: identifying exposures and vulnerabilities, and validating and prioritising findings to focus on what truly matters.
For security teams, this creates an uncomfortable question:
What happens when the speed of vulnerability discovery increases faster than the speed of remediation?
The expected vulnerability tsunami
We should expect a period where AI contributes to an increase in vulnerability discovery.
This may come from several sources:
- researchers using AI to review open-source projects and commercial software more efficiently
- attackers using AI to identify weak patterns in exposed services and public code repositories
- vendors using AI-assisted security testing to find more issues before customers do
- developers using AI coding assistants that may introduce vulnerabilities
- automated tooling generating more findings across infrastructure, applications, APIs, cloud, identity, and supply-chain environments
Once a vulnerability becomes known, especially if exploitation is feasible, organisations must quickly understand whether they are affected, whether the issue is exploitable in their environment, and what action is required.
The immediate effect may feel like a vulnerability tsunami. More findings. More advisories. More vendor notifications. More internal questions. More patching pressure.
For many organisations, this will amplify an existing problem: vulnerability fatigue. Security teams are already overwhelmed by alerts and reports from scanners, cloud platforms, endpoint tools, application testing, penetration tests, threat intelligence feeds, and external attack surface monitoring. Adding AI-generated discovery without better prioritisation will only increase noise.
The answer cannot be to patch everything immediately. The answer is to understand what matters first.
This is the practical meaning of Logos in vulnerability management: not reacting to every new signal with equal urgency, but applying evidence, context, and prioritisation to decide what requires action.
Severity is not the same as risk
Vulnerability management often starts with severity. This is understandable. Severity scores such as the Common Vulnerability Scoring System (CVSS) provide a standardised way to describe technical characteristics and potential impact.
But severity is not the same as operational risk. A critical vulnerability on an isolated test system may represent less real-world risk than a medium-severity vulnerability on an internet-facing system with weak authentication and access to sensitive data. Similarly, a vulnerability with high theoretical impact may never be exploited in practice, while a less severe vulnerability may be widely exploited because it is easy to weaponise and commonly exposed.
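To make the distinction concrete, here is a minimal sketch of context-aware ranking. The CVE identifiers, fields, and weightings are illustrative assumptions, not a production scoring model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float               # CVSS base score, 0.0-10.0
    internet_facing: bool     # is the affected service reachable from the internet?
    asset_criticality: float  # 0.0-1.0, how much the asset matters to the business

def contextual_risk(f: Finding) -> float:
    # Severity alone ignores context: weight it by exposure and asset value.
    exposure = 1.0 if f.internet_facing else 0.2
    return f.cvss * exposure * f.asset_criticality

findings = [
    Finding("CVE-A", cvss=9.8, internet_facing=False, asset_criticality=0.1),  # critical, isolated test system
    Finding("CVE-B", cvss=6.5, internet_facing=True,  asset_criticality=0.9),  # medium, exposed and business-critical
]
ranked = sorted(findings, key=contextual_risk, reverse=True)
print([f.cve for f in ranked])  # → ['CVE-B', 'CVE-A']
```

Even with these crude weights, the medium-severity finding on the exposed, business-critical system outranks the critical finding on the isolated test system.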
However, one important implication of Mythos and other frontier AIs is that vulnerabilities previously considered mostly theoretical may become more practically exploitable. Weaknesses that have been known for years, but rarely weaponised because exploitation required specialist knowledge, time, or manual analysis, may become easier to understand, chain, and operationalise.
This is why organisations need to distinguish between theoretical risk and actual exploitation risk.
The Exploit Prediction Scoring System (EPSS) has improved the industry’s ability to estimate exploitation probability by using global datasets and statistical modelling. It provides a probability estimate for whether exploitation activity is likely to be observed against a vulnerability within the next 30 days. However, EPSS does not answer every question an organisation faces.
It does not know whether a vulnerable asset is business-critical. It does not understand internal compensating controls. It does not know whether the affected service is exposed to the internet, reachable through an attack path, or connected to sensitive processes. It also does not provide a fully transparent model that every organisation can adapt to its own operational reality.
For example, Tenable addresses part of this challenge by combining vulnerability context with asset context in an Asset Exposure Score. This score combines the severity and exploitability of a given vulnerability with the criticality of the affected asset to the organisation.
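A blend of vulnerability and asset context along these lines can be sketched as follows; the formula and weights are illustrative assumptions and do not reproduce Tenable's actual calculation:

```python
def exposure_score(cvss: float, epss: float, asset_criticality: float) -> int:
    """Illustrative blend (not any vendor's real formula): technical severity,
    scaled by estimated exploitation probability, scaled by asset criticality."""
    severity = cvss / 10.0  # normalise CVSS (0-10) to 0-1
    return round(1000 * severity * epss * asset_criticality)

# A medium-severity flaw that is likely to be exploited on a critical asset
# outranks a critical flaw with negligible exploitation probability:
print(exposure_score(cvss=9.8, epss=0.01, asset_criticality=0.3))  # → 3
print(exposure_score(cvss=6.5, epss=0.92, asset_criticality=0.9))  # → 538
```

The point is that the ordering flips once exploitation probability and asset value enter the calculation, which is exactly what a severity-only view misses.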
As part of mnemonic’s research activities, we are developing a predictive scoring system. This work is driven by the need for transparency and explainability, which are essential for maintaining trust in automated decision support within our Managed Detection and Response (MDR) services. We have named this system the Vulnerability Exploitation Prediction System (VEPS).
VEPS is not positioned as a replacement for existing global initiatives. It is a strategic capability that allows mnemonic to train the model on our own data at any time, run it in our own infrastructure, and understand exactly how it functions. That, however, is a story for an upcoming post.
AI also improves the defender side
The Mythos debate can easily become dramatic. It forces organisations to confront weaknesses they already know exist:
- incomplete asset visibility
- slow patching cycles
- fragmented ownership of remediation
Mythos does not remove the need for security fundamentals. It makes them harder to postpone.
The current AI conversation often focuses on attackers becoming faster. That is a valid concern, but it is only half the story. Developers, security teams, and vendors will use many of the same tools to reduce vulnerabilities earlier in the lifecycle. AI-assisted code review, secure coding support, dependency analysis, test generation, configuration validation, and remediation guidance can all help identify weaknesses before software reaches production.
Anthropic’s Project Glasswing is an example of an attempt to put these capabilities to work for defensive purposes. This initiative provides access to Mythos Preview to Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks and 40 additional organisations that build or maintain critical software infrastructure, so they can use the model to scan and secure both first-party and open-source systems.
Over time, this may help the vulnerability curve normalise and the waters settle.
Final remarks
AI will likely create a short-term boom in vulnerability discovery. This should not be ignored, but it should also not be viewed only as a threat. The same technologies that help researchers and attackers find weaknesses faster will also help developers, vendors, and security teams prevent and remediate them faster.
The organisations that succeed will not be those that try to patch everything at the same speed.
They will be the ones that understand exposure, prioritise based on real-world risk, collaborate closely with vendors, integrate security earlier in development, and use AI to strengthen human decision-making.
At mnemonic, our approach is to combine continuous exposure management, predictive risk modelling, vendor collaboration, and experienced security expertise to help customers focus on what truly matters: reducing the exposures most likely to become incidents.
Get in touch!
