Key Takeaways
• Traditional firewalls like WAFs are ineffective against generative AI threats such as prompt injection and model exploitation
• Akamai’s Firewall for AI flagged 6% of real enterprise AI requests as risky, revealing threats companies were unaware of
• Experts argue that AI-specific security tools are urgently needed to supplement or evolve existing defenses
• Regulatory pressure and enterprise AI adoption are accelerating the demand for context-aware AI protection
Firewalls Are Failing AI: Why Traditional Tools Can’t Defend Against Modern AI Threats
As artificial intelligence becomes deeply embedded into enterprise systems, security experts are raising alarms: legacy security tools can’t defend against AI-driven threats.
From prompt injection to toxic content generation, generative AI opens new, largely unmonitored attack surfaces, and traditional firewalls are simply not equipped to handle them.
AI Interactions Are Invisible to Legacy Firewalls
Firewalls have long been the backbone of application security, built to block code-based threats like SQL injections or cross-site scripting.
But with large language models (LLMs) now interacting through natural language and generating autonomous responses, threats are no longer strictly code-based—they’re contextual.
“Traditional security tools like WAFs and API gateways are largely insufficient for protecting generative AI systems mainly because they are not pointing to, reading, and intersecting with the AI interactions and do not know how to interpret them.”— Avivah Litan, Distinguished VP Analyst, Gartner
This gap has led to the development of a new category of tools purpose-built for AI security, capable of scanning input and output and the intent and context behind user prompts.
Akamai’s Firewall for AI Reveals Hidden Risks
Akamai Technologies recently introduced its Firewall for AI at RSA 2025, designed to sit between users and AI models, analyzing real-time interactions.
Akamai’s firewall examined over 100,000 AI requests in a proof-of-concept deployment with an early customer. Results showed that:
• 6% of requests were flagged as risky
• Threats included prompt injection, toxic output, and sensitive data leakage
• Companies using the firewall discovered unauthorized AI usage internally
“We believe all global businesses will need security tools specific to LLMs and AI interactions. WAFs remain foundational, but AI introduces a new class of threats that require specialized protection.”— Rupesh Choksi, Senior Vice President and GM, Application Security, Akamai
The findings also exposed a critical reality: even organizations unaware of using AI discovered unauthorized interactions occurring within their environments.
Experts Confirm: AI Threats Are Already Real
While some threats like model extraction have existed for years, newer threats like prompt injection are becoming more common. Unlike traditional exploits, these attacks often bypass filters by manipulating behavior through prompt design.
“We’ve seen examples of these types of attacks on public GenAI apps like ChatGPT/OpenAI, so they’re not hypothetical. Many enterprises are moving forward quickly with internally built apps leveraging AI.”— John Grady, Principal Analyst, Enterprise Strategy Group.
Both Litan and Grady agree: existing security tools are fundamentally not designed to interpret natural language or enforce behavioral guardrails, making dedicated AI protections essential.
Will AI Security Become Its Category?
There’s an ongoing debate about whether AI firewalls will emerge as a standalone market or merge into existing security stacks. According to industry experts:
• Some vendors are integrating AI protection into current tools (e.g., WAFs)
• Others are launching standalone AI-specific products or acquiring startups
• The future likely holds deeper integration across applications, identity, and data security
“I see it as incremental and important functionality that existing security vendors must build or acquire to remain relevant and competitive.”— Avivah Litan, Distinguished VP Analyst, Gartner
Recent acquisitions underscore this trend:
-
Palo Alto Networks acquired Protect-AI
-
Cisco acquired Robust Intelligence
-
ZScaler and Securiti added native AI-security features
Regulatory Momentum Will Accelerate Adoption
Beyond technology shifts, regulation is also playing a significant role. Laws such as the EU AI Act and existing sector-specific frameworks (e.g., CPRA, PCI DSS) are pushing companies, especially in finance and healthcare, toward adopting AI-focused security controls.
“Strict data privacy laws already govern regulated sectors like finance and healthcare, and we will see accelerated adoption in those industries to avoid legal and reputational risks.”— Avivah Litan, Distinguished VP Analyst, Gartner
The rapid adoption of AI isn’t just changing how businesses operate—it’s reshaping their threat landscape.
While still useful, traditional firewalls are insufficient to protect against threats rooted in language, behavior, and intent. Without purpose-built tools, organizations risk allowing unseen AI misuse within their walls.
As enterprises continue integrating AI into core systems, adapting security strategy is no longer optional, it’s urgent.
For more news and insights, visit AI News on our website.