Security teams are being told that AI has fundamentally changed the threat landscape. Some of that is true. Much of it is vendor positioning. The challenge for practitioners is separating genuine shifts in attacker capability from noise — and then deciding what to do about it.
This article is a practitioner’s take. Here is what has actually changed, what the real-world implications are, and where defenses are worth the investment.
What Is Actually Different
Phishing at scale, with quality
For most of the past decade, phishing success correlated with targeting. Mass phishing campaigns were cheap but obvious. Spear-phishing required time and skill. AI collapses that tradeoff.
Language models can generate highly personalized phishing content at volume. Given a name, employer, LinkedIn profile, and a few public data points, a model can draft a contextually relevant email that does not read like a template. The visual and linguistic tells that trained users have learned to spot — the generic greeting, the awkward phrasing, the inconsistent formatting — are largely absent from AI-generated content.
This is not theoretical. Phishing kits available on criminal forums now include LLM integration as a standard feature. The barrier to producing convincing, targeted phishing content has dropped significantly.
What has not changed: A well-crafted phishing email still needs a delivery mechanism (a real or spoofed domain that bypasses filters), and still needs the recipient to act. The human and infrastructure elements of phishing defense remain valid. What AI changes is the quality ceiling of mass campaigns, not the fundamental attack chain.
Deepfakes as a social engineering vector
Voice and video synthesis has reached a level where short-duration audio deepfakes are accessible to non-specialist attackers. This has specific implications for business email compromise (BEC) escalation and multi-factor bypass via social engineering.
The attack pattern: attacker sends a BEC email requesting a wire transfer or credential reset. Target is skeptical. Attacker follows up with a brief phone call using a voice clone of the CEO or finance director. The target, now with “audio confirmation,” complies.
This attack has been documented against real targets with real financial losses. The UK energy company case (2019) was an early example. The capability has become significantly more accessible since.
What this means for defenders: Phone-based confirmation of financial or access requests is no longer a reliable second factor if the attacker knows who they are impersonating. Organizations that rely on “call the manager to confirm” as a safeguard for sensitive transactions need to rethink that control.
Automated vulnerability discovery
AI-assisted code analysis and fuzzing are improving the efficiency of vulnerability research. Tools that combine static analysis, semantic understanding of code, and pattern-matching against known vulnerability classes can surface issues faster than manual review.
The honest assessment: this raises the efficiency of both offensive security researchers and attackers, but it does not change the fundamental attack-defense dynamic. Vulnerabilities still need to be patched. Defense still requires secure development practices and timely remediation.
The net effect is modest compression of the window between vulnerability introduction and exploitation. The organizations most at risk are those that were already slow to patch.
AI-enhanced credential stuffing and password spraying
Credential stuffing at scale is not new. What AI adds is smarter pattern generation and filtering. Models trained on leaked credential data can generate more plausible password variations and prioritize targets more effectively based on observed patterns.
More meaningfully, AI is being used to process and cross-correlate breach databases at scale — identifying accounts across services that share identifiers, enriching attacker lists with public data, and prioritizing high-value targets. The industrialization of credential-based attacks continues.
What Is Mostly Hype
“AI will enable autonomous nation-state-level attacks against anyone.” Advanced persistent threat (APT) capabilities are still resource-constrained by factors beyond tooling: operational security, human intelligence, physical access, and organizational coordination. AI lowers some barriers but does not eliminate the structural advantages of well-resourced state actors or meaningfully replicate their full capability set for commodity attackers.
“AI-generated malware is undetectable.” AI can generate malware variants that evade signature-based detection. This is real but not novel — polymorphic malware has existed for decades. Behavior-based detection, sandboxing, and endpoint telemetry remain effective against AI-generated payloads.
“AI will make zero-days trivially discoverable.” Zero-day discovery is still hard. AI tools help. They do not make novel vulnerability classes trivially discoverable at scale. The economics of zero-day markets suggest supply has not dramatically increased.
Where AI Genuinely Changes the Calculus
The most significant shift is compression of attacker ramp-up time. A lower-skilled attacker with access to AI tooling can now operate with a sophistication that previously required more expertise and time. This expands the threat population against mid-market organizations.
Mid-market companies have historically been under-targeted by sophisticated attackers because the effort-to-reward ratio was unfavorable. AI-assisted attacks change that math for certain attack classes — specifically phishing, credential attacks, and social engineering — because AI reduces the marginal effort of targeting.
The implication is not that mid-market organizations now face APT-level threats. It is that the quality floor of attacks against them is rising, while defenses built around detecting low-quality attacks may not hold.
What to Actually Do
1. Harden against AI-enhanced phishing specifically.
The response to AI-enhanced phishing is not teaching users to spot AI-written emails — that is a losing game. The response is reducing the value of credential phishing and the impact of a successful phish.
- Enforce phishing-resistant MFA (FIDO2/passkeys) on every externally facing service where it is feasible. SMS OTP and TOTP are not sufficient defenses against real-time phishing proxies.
- Implement email authentication (DMARC, DKIM, SPF) to reduce spoofing. Most mid-market organizations still have enforcement gaps here.
- Reduce blast radius: if credentials are stolen, what can an attacker do? Privileged access management and just-in-time access limit the answer.
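On the email authentication point, the most common enforcement gap is a DMARC record published in monitoring-only mode. A minimal sketch of the check, assuming the TXT record string has already been fetched separately (e.g. with `dig +short TXT _dmarc.example.com`):

```python
# Sketch: flag weak DMARC policies from an already-fetched TXT record.
# A record with p=none only reports on spoofed mail; it does not block it.

def dmarc_policy(record: str) -> str:
    """Extract the p= policy tag from a DMARC TXT record."""
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )
    return tags.get("p", "none").lower()

def is_enforcing(record: str) -> bool:
    """True only if failing mail is quarantined or rejected."""
    return dmarc_policy(record) in ("quarantine", "reject")

record = "v=DMARC1; p=none; rua=mailto:dmarc@example.com"
print(is_enforcing(record))  # False: monitoring-only, spoofing still lands
```

In practice the progression is p=none (collect reports), then p=quarantine, then p=reject once legitimate mail flows are verified.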
2. Rethink phone-based confirmation for high-value transactions.
If your control for approving wire transfers or privileged access grants involves calling the requester, add a second channel or code-based authentication that a voice clone cannot fake. Out-of-band verification via a separate authenticated system (not just a separate phone call) is the appropriate control.
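One way to implement code-based confirmation is an approval code derived from the transaction details themselves, delivered through a separately authenticated system. This is a sketch, not a production design: the function name, fields, and HOTP-style truncation (per RFC 4226) are illustrative.

```python
# Sketch: a transaction-bound approval code that a voice clone cannot
# supply. The code depends on a shared secret held by the approval
# system AND the exact transaction fields, so it cannot be replayed
# against a modified request.

import hmac, hashlib, secrets

def approval_code(secret: bytes, tx_id: str, amount: str, payee: str) -> str:
    """Derive a 6-digit code bound to this exact transaction."""
    msg = f"{tx_id}|{amount}|{payee}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    # Dynamic truncation in the style of HOTP (RFC 4226).
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = secrets.token_bytes(32)  # provisioned per approver, out of band
code = approval_code(secret, "TX-1042", "250000.00", "Acme Ltd")

# Verification recomputes from the same fields; changing the payee or
# amount after approval invalidates the code.
assert code == approval_code(secret, "TX-1042", "250000.00", "Acme Ltd")
```

The point of binding the code to the transaction fields is that an attacker who intercepts a code for one transfer cannot reuse it for a different amount or payee.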
3. Patch faster, especially for external-facing systems.
The compression of vulnerability-to-exploitation windows from AI-assisted tooling means slow patching cycles are riskier than they used to be. Prioritize attack surface reduction on external-facing assets. If you cannot patch in days, can you temporarily restrict access or add compensating controls?
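The prioritization logic can be made explicit. The following is a deliberately crude sketch with illustrative field names and weights; a real program would also fold in exploit intelligence such as the CISA KEV catalog.

```python
# Sketch: a remediation queue that weights external exposure above raw
# severity. An internet-facing CVSS 7.5 usually outranks an internal 9.1.

findings = [
    {"asset": "vpn-gw",   "external": True,  "cvss": 9.8, "days_open": 3},
    {"asset": "intranet", "external": False, "cvss": 9.1, "days_open": 30},
    {"asset": "www",      "external": True,  "cvss": 7.5, "days_open": 14},
]

def priority(f: dict) -> float:
    exposure = 2.0 if f["external"] else 1.0  # exposure dominates
    aging = min(f["days_open"] / 30, 1.0)     # cap the age bonus
    return f["cvss"] * exposure + aging

for f in sorted(findings, key=priority, reverse=True):
    print(f["asset"], round(priority(f), 2))
```

The specific multiplier matters less than the principle: the queue should surface externally reachable assets first, because that is where the compressed exploitation window bites.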
4. Monitor your AI-accessible attack surface.
If you have deployed AI tools and integrations, those integrations have credentials, API keys, and access scopes. Attackers targeting those integrations can exfiltrate data or pivot to connected systems without triggering traditional endpoint detections. Ensure your AI integrations are in scope for your security monitoring.
5. Evaluate your credential exposure posture.
Run your corporate email domains and credentials against breach intelligence services (Have I Been Pwned’s organizational API, commercial breach feeds). Understand your exposure. Enforce password manager adoption and prohibit password reuse — not as aspirational policy, but as an auditable technical control where possible.
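For password exposure specifically, the separate Pwned Passwords range API uses a k-anonymity model worth understanding before you integrate it: only the first five hex characters of the SHA-1 hash ever leave your network. A sketch of the client-side half (the HTTP call to `https://api.pwnedpasswords.com/range/<prefix>` is omitted):

```python
# Sketch: k-anonymity lookup preparation for the Pwned Passwords range
# API. The service receives the 5-character prefix, returns all known
# suffixes under it, and the comparison happens locally — the full
# password or hash is never transmitted.

import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = sha1_prefix_suffix("password")
print(prefix)  # 5BAA6 — the only data sent to the service
# Then: fetch /range/5BAA6 and check whether `suffix` appears in the
# returned list; a match means the password is in known breach corpora.
```

This design means you can check employee-chosen passwords against breach data without ever shipping credentials to a third party, which makes the control much easier to get past security review.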
The Honest Bottom Line
AI has made some attack classes more accessible and more effective. The changes are real and specific — not apocalyptic, but not dismissible.
The practical response is not a new security program. It is an intensification of existing fundamentals: phishing-resistant authentication, reduced attack surface, faster patching, and privileged access management. These controls are effective against AI-enhanced attacks for the same reasons they are effective against traditional attacks — they reduce the value of what attackers can achieve, even after initial access.
The organizations most at risk from AI-enhanced attacks are not the ones without AI-specific defenses. They are the ones with weak fundamentals. Fix those first.