Attackers are industrializing at scale: Where defenders stand in 2026

Published: February 25, 2026
8 min read

Executive summary: In Red Sift's latest webinar, co-founder and CEO Rahul Powar, joined by VP Customer Engineering Billy McDiarmid, walked through what the threat landscape actually looks like heading into 2026, how AI changes the equation for both attackers and defenders, and what Red Sift is building in response. The session also included a developer preview of Radar Lite's new ChatGPT app integration and a live demo of Red Sift's MCP capabilities.

Key takeaways:

  • Lookalike domain attacks have become cheap to launch at scale, and most organizations are underestimating how much of this activity is targeting them right now
  • The real AI opportunity in 2026 isn't a new model release, it's integration. The components exist. The value comes from putting them together properly.
  • Radar Lite gives any organization a free, real-time snapshot of its domain security posture, benchmarked against thousands of industry peers
  • Red Sift's MCP capability is in testing across OnDMARC, Brand Trust, Certificates, and ASM, with full customer rollout planned for 2026

The uncomfortable starting point for 2026

The current state of play for AI protection is concerning. As Rahul notes: "I worry that there's a trend towards overpromising and underdelivering, basically making teams reluctant to adopt AI. If your vendor is telling you that this magic thing is going to do everything that you need to do and it doesn't, that creates more problems than it solves."

That reluctance is a real risk. AI fatigue, where security teams stop engaging with new tooling because they've been burned too many times, is genuinely damaging to security outcomes. It means threats that capable tools could catch go unaddressed.

The flip side is equally real. "Threats and complexity are multiplying while the resources to combat them are at best flat." Flat security team budgets and headcount don't scale with expanding attack surfaces. AI isn't a nice-to-have in that environment. It's a structural necessity.

Red Sift's position is that 2026 is the year the market transitions from AI as a promise to AI as practical reality. Not because models are suddenly better, but because integration has finally caught up.


Integration is the real product

Rahul gave an example of the recent ClaudeBot project, which took an existing Claude Code capability and packaged it with integrations to Telegram, WhatsApp, and Slack, giving users a way to triage emails, run scripts, and take actions on their computers from messaging interfaces they already use.

"There's literally no magic here. It's basically an integration play. This is an LLM model that's been available for nine months. All someone has done is fairly cleverly package that up with the right integrations, and suddenly the utility has grown exponentially."

This example highlights the opportunity available to security teams. "We have the data, we have the platforms, we have the underlying APIs. We've got to open that up to capable models, direct them appropriately, and actually deploy them against business problems we all care about."

Red Sift has been doing this since 2021. The question for 2026 is how far that can extend.

What Red Sift actually uses AI for

Red Sift applies AI in two main areas. The first is protocol adoption and maintenance. Red Sift OnDMARC helps organizations implement DMARC (Domain-based Message Authentication, Reporting and Conformance) correctly and keep it configured as their sending infrastructure changes, backed by Red Sift Radar, the first-to-market LLM tool designed to find and fix security issues faster. Similar tooling covers PKI and certificate monitoring. These are complex, layered protocols that interact in ways that are hard to reason about without AI assistance, especially at the scale of a large government estate or a multinational enterprise.
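To make "keeping DMARC correctly configured" concrete at the record level: a DMARC policy is published as a DNS TXT record of semicolon-separated tag=value pairs, and a weak or missing tag is a common misconfiguration. The minimal parser below is purely illustrative (the domain in the example is a placeholder); tools like OnDMARC do far more, including interpreting the aggregate reports that DMARC generates.

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

# A typical enforcing record; the reporting address is a placeholder.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
tags = parse_dmarc(record)
print(tags["p"])  # -> reject  (p=none would mean monitoring only, no enforcement)
```

A real checker would also validate tag syntax and flag `p=none`, which monitors spoofing without blocking it.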

The second is brand and impersonation monitoring. DMARC protects your sending domain from being spoofed. But it does nothing to stop an attacker from registering a domain that looks like yours and using that for phishing. "The standards are silent about this," Rahul said, "because you can't build internet-scale standards to solve these problems."

Red Sift Brand Trust fills that gap. It uses agentic AI models for logo detection, face recognition for executive impersonation, and pattern analysis across web and social media to find brand abuse in progress. Rahul was explicit that these tasks don't use generative AI. "If we wanted to do logo detection at internet scale through a generative AI workflow, we would have to build nuclear power plants ourselves."

That's not a throwaway line. It's the sustainability argument made concrete.

AI and sustainability

Billy raised the environmental impact question, with Rahul adding "I like to think of generative AI as a tool that simplistically converts energy into intelligence. The more energy you put into it, the more intelligence you get out of it. The game everyone's playing is trying to find the right balance of how much energy you put in and how much intelligence you get out."

The practical consequence is that using a heavyweight generative model for a task that a purpose-built specialist model handles better is wasteful on every dimension: cost, speed, environmental impact, and often accuracy.

Red Sift has three levers in this space. First, not everything uses generative AI. Face recognition and logo detection run on purpose-built models. Second, when generative AI is used, they select a model appropriate to the problem size. Third, they control the inputs going into the model, which directly controls energy consumption.
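The three levers above can be sketched as a simple routing decision: send specialist tasks to purpose-built models, size the generative model to the problem, and bound the input. The model and task names below are hypothetical, assumptions for illustration only; they are not Red Sift's actual routing logic.

```python
# Hypothetical task/model names; real deployments would use specific models.
SPECIALIST_TASKS = {"logo_detection", "face_recognition"}

def route_task(task: str, payload: str, max_input_chars: int = 4000) -> tuple[str, str]:
    """Pick the cheapest adequate model tier and bound the input size,
    the levers that most directly control energy spent per request."""
    if task in SPECIALIST_TASKS:
        return "purpose-built-vision-model", payload            # lever 1: no generative AI
    tier = "small-llm" if len(payload) < 1000 else "large-llm"  # lever 2: right-size the model
    return tier, payload[:max_input_chars]                      # lever 3: control the inputs

model, trimmed = route_task("summarise_report", "short finding text")
print(model)  # -> small-llm
```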

On carbon reporting for customers, Rahul said Red Sift is actively building the capability internally but it's not yet customer-facing. "We hope to be able to expose that directly to customers in the future, especially for those with tight carbon reporting requirements. It doesn't seem to be a burning ask from anyone today, but it is something we hope to report on in the near future."

If your organization has sustainability reporting requirements, this is worth flagging to your Red Sift customer success manager.

Lookalike domains: the threat most organizations are underestimating

From the live Q&A came the question: which AI-driven attack vectors are security teams still failing to understand, and what should they prioritize?

Rahul's answer was specific: the industrialization of lookalike domains and websites.

"We've seen the cost of actually setting up and launching these attacks drop so dramatically that this is happening at scale for pretty much every single one of our customers in ways they are not even aware of today."

The attack chain is straightforward. A lookalike domain gets registered. A convincing phishing site goes up, often using AI-generated content and scraped brand assets. Phishing emails, SMS fraud, or fake social profiles follow. Users who land there think they're dealing with the real company.
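To see why registering lookalikes is so cheap to automate, here is a toy sketch of three common generation techniques: character omission, adjacent transposition, and homoglyph substitution. Real campaigns also use combosquatting, alternative TLDs, and AI-generated site content; this is illustrative only, not how any particular detection product works.

```python
# Common single-character transforms attackers automate at registration time.
HOMOGLYPHS = {"o": "0", "l": "1", "i": "1", "e": "3", "a": "4"}

def lookalike_variants(domain: str) -> set:
    """Generate three classes of lookalike candidates for one domain label."""
    label, _, tld = domain.partition(".")
    variants = set()
    # 1. Character omission: "example" -> "examle"
    for i in range(len(label)):
        variants.add(label[:i] + label[i + 1:] + "." + tld)
    # 2. Adjacent transposition: "example" -> "xeample"
    for i in range(len(label) - 1):
        swapped = label[:i] + label[i + 1] + label[i] + label[i + 2:]
        variants.add(swapped + "." + tld)
    # 3. Homoglyph substitution: "example" -> "examp1e"
    for i, ch in enumerate(label):
        if ch in HOMOGLYPHS:
            variants.add(label[:i] + HOMOGLYPHS[ch] + label[i + 1:] + "." + tld)
    variants.discard(domain)
    return variants

print(len(lookalike_variants("example.com")))
```

Even this crude sketch yields over a dozen plausible candidates for a seven-letter label; combining techniques multiplies the space, which is why monitoring has to be continuous rather than occasional.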

This isn't an emerging risk. It's active right now, at scale, against organizations of every size. The underestimation isn't about severity, it's about volume. Most organizations think of brand impersonation as occasional. For any recognizable brand, it's continuous.

ChatGPT and MCP: what's coming

Billy demoed two capabilities that are in development.

The ChatGPT app version of Radar Lite is a developer preview. It's not yet publicly released. Once live, it will let anyone run domain security assessments and get industry benchmarking results directly inside ChatGPT. The advantage over the web version is conversation context. If the assessment flags that your CAA records aren't configured correctly, you can ask ChatGPT in the same thread what CAA records are, why they matter, and how to fix them. The assessment becomes the starting point for a working session rather than a static report.
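For background on the CAA example: a CAA record lists which certificate authorities are allowed to issue certificates for a domain (RFC 8659), so a misconfigured or missing record is exactly the kind of finding the assessment surfaces. The simplified check below ignores the critical flag, `issuewild` tags, and parent-domain lookups; the CA names are placeholders.

```python
def caa_allows(caa_records: list, ca: str) -> bool:
    """Check whether a set of CAA (tag, value) pairs authorizes a CA.
    Per RFC 8659, publishing no 'issue' tags means any CA may issue."""
    issuers = [v.split(";")[0].strip() for tag, v in caa_records if tag == "issue"]
    if not issuers:
        return True  # no CAA restriction published
    return ca in issuers

records = [("issue", "letsencrypt.org"), ("iodef", "mailto:security@example.com")]
print(caa_allows(records, "letsencrypt.org"))     # -> True
print(caa_allows(records, "some-other-ca.example"))  # -> False
```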

Rahul explained the reasoning for going where users already are: "We're not precious about where our tools live. If someone wants to do a domain security assessment inside ChatGPT, why not give them the best tools and data that's out there?"

The MCP (Model Context Protocol) capability is also in active testing. Billy demonstrated it live, connected to Red Sift demonstration accounts across OnDMARC, Brand Trust, Certificates, and ASM. From a command-line interface, you can ask it to pull domain configurations, surface issues, and initiate remediation actions across all four products.
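For readers unfamiliar with the protocol: MCP exposes tools to a model over JSON-RPC 2.0, and a client invokes one with a `tools/call` request. The sketch below builds such a request; the tool name and argument schema are hypothetical, since Red Sift has not published its MCP tool definitions.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, the message shape
    MCP clients use to invoke a tool exposed by a server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool name and domain, for illustration only.
payload = mcp_tool_call(1, "get_dmarc_configuration", {"domain": "example.com"})
print(payload)
```

In practice the model chooses which tool to call from the server's advertised list (a `tools/list` response), which is what makes "pull configuration, surface issues, initiate remediation" possible from a single conversational interface.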

"We're moving into a world where everything we do and all the engagement we do with applications is through command lines. We're going back to 1982 in that regard, but getting access to this information in this way is so powerful, not only for speed of access but for being able to make configuration changes and look at what's actually going on within the estate."

The goal is to let customers make correct configuration changes faster, get education about why those changes matter, and do all of it from familiar environments like Claude or ChatGPT, rather than switching between product UIs.

Learn more about upcoming developments from Red Sift and get a free domain health check with Radar Lite

Try it for free now