The platform that powers much of the modern web, supporting over six million developers and giants like Walmart, OpenAI, and Nike, is facing its most significant security challenge to date.
Vercel’s internal database is currently listed for sale for $2 million on BreachForums.
While Vercel has moved quickly to contain the incident, the anatomy of this breach reveals a terrifying new reality: your security is only as strong as the OAuth permissions granted to your AI productivity tools.
The Siemba team has been sounding the alarm on the AI Agent Blast Radius: in a recent newsletter series, we warned that the rapid adoption of AI tools is creating "invisible bridges" into corporate environments.
This Vercel incident is the textbook case.
The recent breach wasn't caused by a flaw in Vercel's code. It was caused by an "unrelated" AI productivity tool, one Vercel didn't even use, that still became their front door. A Context AI employee downloaded what they thought was a game cheat.
This is the AI Agent Blast Radius in action. One "Allow All" OAuth grant created an invisible bridge that an attacker used to pivot into internal systems.
That single download installed Lumma Stealer, a malware whose entire job is to silently collect credentials and send them back to an attacker.
One infected machine was all it took to expose credentials that cascaded all the way into Vercel's internal systems.
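Infostealers like Lumma sweep developer machines for exactly this kind of plaintext credential, npm publish tokens included. As a hedged illustration (the file contents below are invented, though the `.npmrc` `_authToken` format is real), a minimal sketch of scanning for tokens that would need rotating after an infection:

```python
import re

# Pattern for npm auth tokens as they appear in an .npmrc file, e.g.
#   //registry.npmjs.org/:_authToken=npm_abc123
# The file format is real; the sample content below is invented.
AUTH_TOKEN_RE = re.compile(r"^\s*(//.+/:)?_authToken\s*=\s*(\S+)", re.MULTILINE)

def find_npm_tokens(npmrc_text: str) -> list[str]:
    """Return any auth tokens found in the given .npmrc content."""
    return [m.group(2) for m in AUTH_TOKEN_RE.finditer(npmrc_text)]

sample = """\
registry=https://registry.npmjs.org/
//registry.npmjs.org/:_authToken=npm_EXAMPLETOKEN123
"""

print(find_npm_tokens(sample))  # ['npm_EXAMPLETOKEN123']
```

Any token a scan like this finds on a compromised machine should be treated as stolen and revoked immediately, regardless of what the attacker is believed to have taken.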
The dwell time problem:
Context AI detected the breach internally in March 2026. Vercel did not find out until April 19: approximately 30 days during which the attacker may have had access while nobody outside Context AI's walls knew a thing.
No alert. No vendor disclosure. No warning.
This is one of the most alarming details of the entire incident.
“The attacker used that access to take over the employee's Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as 'sensitive.'
Environment variables marked as "sensitive" in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed.
We assess the attacker as highly sophisticated based on their operational velocity and detailed understanding of Vercel's systems. We are working with Mandiant, additional cybersecurity firms, industry peers, and law enforcement. We have also engaged Context.ai directly to understand the full scope of the underlying compromise.
In collaboration with GitHub, Microsoft, npm, and Socket, our security team has confirmed that no npm packages published by Vercel have been compromised. There is no evidence of tampering, and we believe the supply chain remains safe.” - Vercel Security Bulletin, April 2026
Vercel CEO Guillermo Rauch's claim that the attackers were "significantly accelerated by AI" is worth unpacking carefully. It came around the same time as the launch of powerful new AI models, and some have speculated a link to releases like Claude Mythos, but there is zero evidence that any specific model was used in this breach; Rauch never named one.

What the observation does capture is something real: AI is shrinking attacker dwell time (the gap between initial infection and full exfiltration). Attackers are using LLMs to write enumeration scripts, pivot through OAuth connections, and map internal environments faster than human defenders can react. The velocity Rauch described, surprising speed and an in-depth understanding of Vercel's systems, is consistent with AI-assisted reconnaissance, not necessarily any one product.

Vercel shipped two dashboard updates as part of the response: environment variables now default to sensitive, and a new sensitive-variable management UI was released.
If you've visited the web today, there's a good chance Vercel had something to do with it.
It's the platform behind Next.js, the React framework used by Walmart.com, OpenAI, Anthropic, PayPal, Nike, and TikTok's web experience.
- 6M+ developers on the platform
- 30B requests processed weekly
- $9.3B valuation (Sep 2025)
That scale matters for one very specific reason: Vercel controls the publish pipeline for Next.js, which has over six million weekly downloads on npm.
If an attacker were to push a malicious update to Next.js using stolen npm tokens, it wouldn't just affect Vercel customers. It would hit every developer who installs or updates the package, regardless of whether they've ever heard of Vercel.
That's not a hypothetical analysis. That's the threat the attacker put on the table.
The initial access began when a Vercel employee installed the Context AI browser extension, a legitimate productivity tool whose OAuth access was later exploited by the attacker. From that Context.ai access, the attacker pivoted into the employee's Vercel Google Workspace account, ultimately leading to the compromise.
After gaining initial access, the attacker could read environment variables that were not marked as sensitive and were therefore not encrypted at rest.
Although these variables were not intended to contain sensitive information, the attacker was able to enumerate them and use the data to gain further access.
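One reason unencrypted variables are so dangerous is that secrets routinely end up in them despite the "non-sensitive" label. A minimal sketch of flagging variables that probably should have been marked sensitive, using heuristics that are purely illustrative (not Vercel's actual classification logic):

```python
import re

# Name-based heuristic: these keywords are an assumption for illustration.
SECRET_NAME_RE = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def flag_secret_like(env: dict) -> list[str]:
    """Return names of env vars whose name or value looks like a secret."""
    flagged = []
    for name, value in env.items():
        # Flag on a suspicious name, or a long opaque value (32+ token chars).
        if SECRET_NAME_RE.search(name) or re.fullmatch(r"[A-Za-z0-9_\-]{32,}", value or ""):
            flagged.append(name)
    return flagged

env = {
    "NEXT_PUBLIC_APP_NAME": "demo",
    "STRIPE_API_KEY": "sk_live_example",
    "DB_PASSWORD": "hunter2",
}
print(flag_secret_like(env))  # ['STRIPE_API_KEY', 'DB_PASSWORD']
```

Running a sweep like this over existing projects is a cheap way to find credentials hiding in "non-sensitive" variables before an attacker enumerates them for you.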
Source: Hudson Rock / Infostealers.com — full report linked in references
The attacker also shared a text file of Vercel employee information: 580 records containing names, email addresses, account status, and activity timestamps.
This is the question that matters most. The attacker didn't just claim to have Vercel's data. They specifically claimed that if they pushed a malicious update to Next.js using stolen npm tokens, it could reach every developer who installs or updates the package. That framing is deliberate: it's designed to maximise the perceived value of what they're selling. But the threat vector is real.
A poisoned update wouldn't need anyone to be a Vercel customer. It would propagate silently through package managers the same way it happened before:
| Incident | Year | Method | Impact |
| --- | --- | --- | --- |
| SolarWinds | 2020 | Malicious update pushed to Orion software | 18,000+ organisations received the compromised update, including US government agencies |
| 3CX | 2023 | Trojanised desktop app via official update pipeline | Millions of users exposed before detection |
| Vercel / Next.js | 2026 | Stolen npm tokens via AI-tool OAuth pivot | Supply chain verified safe as of April 21, but the vector was real |
Vercel collaborated with GitHub, Microsoft, npm, and Socket to verify that no published npm packages, including Next.js, Turborepo, SWR, and the AI SDK, were tampered with. The supply chain is intact. That said, pin your Next.js version explicitly until the investigation fully closes.
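Pinning means removing semver range operators from `package.json` so that an install cannot silently pull a newer, potentially poisoned release. A minimal sketch of the transformation (the version numbers are examples, not a recommendation):

```python
import json
import re

def pin_versions(pkg: dict) -> dict:
    """Return a copy of package.json data with ^ / ~ ranges pinned exactly."""
    pinned = json.loads(json.dumps(pkg))  # cheap deep copy via JSON round-trip
    for section in ("dependencies", "devDependencies"):
        for name, spec in pinned.get(section, {}).items():
            # Strip a leading caret or tilde, leaving the exact version.
            pinned[section][name] = re.sub(r"^[\^~]", "", spec)
    return pinned

pkg = {"dependencies": {"next": "^14.2.3", "react": "~18.3.1"}}
print(pin_versions(pkg))
# {'dependencies': {'next': '14.2.3', 'react': '18.3.1'}}
```

Pinning trades automatic patch updates for supply-chain safety; pair it with a committed lockfile and `npm ci` in CI so builds resolve exactly what was reviewed.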
Vercel's official statement is cautious and deliberately vague, which is normal in the first 24–48 hours of an active investigation. They say a limited subset of customers was affected and are contacting them directly.
They have not confirmed what was taken or denied the BreachForums claims.
Reading between the lines: they likely know more than they are sharing right now.
But the broader answer to the question "who is exposed?" is anyone who ever connected a Google account to the Context AI Office Suite.
That extension had OAuth tokens from hundreds of organisations across many teams, not just Vercel.
If that includes your team, you have direct exposure independent of anything to do with Vercel.
And Context AI had been compromised for nearly a month before anyone outside their walls knew about it.
The supply chain here is not your code or your npm packages. It's the SaaS apps your employees log into with Google.
The Axios breach. Trivy. Shai-Hulud. Vercel.
The same playbook, run again. Find the AI tool, find the trust relationship, find the OAuth token, move fast.
What's changed isn't the technique; it's that AI tools are being adopted by engineering teams at a pace that completely outstrips the security reviews that should surround them.
Broad OAuth permissions get granted during a two-minute onboarding flow. Nobody audits them. Nobody alerts on them. And when one vendor in that chain gets hit, every downstream organisation feels it.
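Auditing those grants can start with something as simple as diffing each app's granted scopes against a list of broad, write-capable ones. The scope strings below are real Google OAuth scopes, but the risk model and app names are illustrative assumptions:

```python
# Broad, write-capable Google OAuth scopes (real scope URIs; the choice of
# which scopes count as "broad" is an illustrative heuristic, not policy).
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",                 # full Drive read/write
    "https://mail.google.com/",                              # full Gmail access
    "https://www.googleapis.com/auth/admin.directory.user",  # manage Workspace users
}

def risky_grants(grants: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each third-party app to the broad scopes it has been granted."""
    return {app: scopes & BROAD_SCOPES
            for app, scopes in grants.items()
            if scopes & BROAD_SCOPES}

# Hypothetical grant inventory, e.g. exported from a Workspace token audit.
grants = {
    "context-ai-extension": {"https://www.googleapis.com/auth/drive",
                             "https://www.googleapis.com/auth/userinfo.email"},
    "calendar-widget": {"https://www.googleapis.com/auth/calendar.readonly"},
}
print(risky_grants(grants))
# {'context-ai-extension': {'https://www.googleapis.com/auth/drive'}}
```

In a real Workspace environment, the grant inventory would come from an admin-side token audit; the point is that flagging "full Drive" versus "read-only calendar" is a one-line set intersection once you have the data.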
The Vercel breach didn't need a zero-day. It just needed an overlooked OAuth grant. Siemba helps you find these invisible bridges before attackers do.