RSAC – Day 2 Reporting

RSAC 2026 – Day 2 Report (March 24, 2026)
By Jorge Avila, KCXU 92.7 FM Reporter – Cyber / AI News Desk
SAN FRANCISCO — Day 2 of the RSAC 2026 Conference is underway, and the
message from today’s sessions is clear: AI is not just changing cybersecurity—it is
fundamentally breaking long-standing security assumptions.
With over 45,000 cybersecurity professionals gathered at the Moscone Center,
discussions today moved deeper into technical realities, emerging risks, and the
operational impact of AI-driven systems.
AI Moves from Theory to Real-World Risk
While Day 1 focused on strategy and global leadership, Day 2 shifted into hands-on
technical sessions and real-world scenarios.
One of the most critical themes:
AI systems are becoming active participants in cyber environments—not just tools.
Insider Threats Are Being Rewritten by AI
In the session “AI in the Shadows: Inside the New Insider Threat,” researchers
highlighted how AI is creating a new category of risk:
● 83% of organizations experienced an insider threat in the past year
● Insider incidents are increasingly driven by human error and negligence (up to
75%), not just by malicious actors
● 88% of data breaches are tied to employee mistakes
But the most alarming shift discussed: AI agents are now behaving like insider threats. They are:
● Over-permissioned for the tasks they perform
● Acting autonomously without oversight
● Accidentally leaking sensitive data
In some cases, organizations are seeing 300+ incidents per month involving sensitive data being sent to AI systems.
Even more concerning: up to 98% of organizations report unsanctioned AI use.
This phenomenon, often called “Shadow AI,” is becoming one of the most urgent
security challenges.
AI is Breaking the Internet’s Security Model
Another standout session, “Crashing Browsers: How AI Agents Break the Browser
Threat Model,” revealed how AI is disrupting foundational internet security.
Traditionally, browsers rely on protections like:
● Same-Origin Policy
● Site isolation
● Controlled access between websites
But researchers demonstrated that AI agents can bypass these assumptions entirely:
● They can see and interact across multiple browser tabs simultaneously
● They can execute actions across different websites, something a human user normally controls
● They can be steered by malicious prompts injected into the pages they read, causing unintended actions
This creates new risks such as:
● Data exfiltration from local files
● Cross-site attacks triggered by AI
● Automation of browser-based exploits
In short:
The traditional browser security model was built for humans—not autonomous AI
agents.
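To make the Same-Origin Policy point concrete, here is a minimal Python sketch, not from the session itself: the `Tab`, `page_script_read`, and `agent_read` names are hypothetical illustrations. The idea is that a page script is confined to its own origin by the browser's scheme/host/port check, while an agent driving the whole browser observes every tab and never passes through that check at all.

```python
from urllib.parse import urlsplit

def same_origin(url_a: str, url_b: str) -> bool:
    """Same-Origin Policy compares the (scheme, host, port) triple."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

class Tab:
    """A browser tab holding page content from one origin."""
    def __init__(self, url: str, content: str):
        self.url, self.content = url, content

def page_script_read(own_tab: Tab, other_tab: Tab) -> str:
    """A page script is confined by SOP: cross-origin reads are refused."""
    if not same_origin(own_tab.url, other_tab.url):
        raise PermissionError("blocked by Same-Origin Policy")
    return other_tab.content

def agent_read(tabs: list[Tab]) -> str:
    """An AI agent driving the browser sees every open tab at once,
    so the per-page SOP check above never applies to it."""
    return " | ".join(t.content for t in tabs)
```

In this toy model, `page_script_read` raises `PermissionError` when a forum tab tries to read a banking tab, but `agent_read` happily concatenates both tabs' contents, which is exactly the "built for humans, not autonomous agents" gap the researchers described.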
Red Teaming AI: The New Security Discipline
Another major focus today was on how to test and secure advanced AI systems.
In the session “How to Red Team a Frontier AI Model,” Microsoft researchers outlined a
structured approach to securing next-generation AI:
● AI risk is no longer limited to known threats—“unknown unknowns” must be
tested
● Frontier AI models require continuous, interdisciplinary testing frameworks
● Risks include:
○ Automated cyberattacks
○ Phishing at scale
○ Control bypass and system manipulation
A key takeaway:
AI security is no longer reactive; it must be proactive and adversarial by design
Organizations are now building AI Red Teams to simulate attacks against their own
systems before adversaries do.
Across Day 2, these themes translated into technical reality:
1) AI is accelerating both defense capabilities and attack sophistication
2) Cybersecurity is now global, systemic, and deeply tied to AI governance
The Rise of the “Agentic SOC”
Another major discussion point:
The emergence of the AI-driven Security Operations Center (SOC)
Industry sessions highlighted that:
● AI agents can detect and respond to threats at machine speed
● Security teams are shifting from reactive response to proactive strategy
● The future SOC will be human-led but machine-accelerated
KCXU Perspective: Why Day 2 Matters
Day 2 made one thing very clear: AI is no longer just a tool; it is now an actor in
cybersecurity, and that changes everything.
What we are seeing here at RSAC:
● AI is acting like an employee (and, at times, an insider threat)
● AI is breaking core internet security assumptions
● AI requires entirely new testing and governance models
For communities, businesses, and government agencies, this means:
● New risks
● New jobs
● New responsibilities
And most importantly: a new requirement for AI literacy, because the future of
cybersecurity will depend not just on technology but on who understands how to use
and secure it.
What’s Next
As RSAC 2026 continues, expect deeper conversations around:
● AI governance and regulation
● Autonomous cyber defense
● Startup innovation in AI security
● Workforce development in cyber + AI
Reporting live from RSAC 2026 — Day 2 complete. More to come.
Jorge Avila
KCXU 92.7 FM
Cyber / AI News Desk
