Vercel breach traced back to one employee signing into Context.ai with an "Allow All" Google Workspace grant, data listed on BreachForums for $2 million

Posted by juliarmg@reddit | sysadmin | 44 comments

Putting this here because it is a very Monday-morning story and the OAuth angle has not gotten enough attention yet.

Vercel disclosed a breach on April 19-20. ShinyHunters listed the data on BreachForums for $2 million. The headline finding: a Vercel employee signed up for Context.ai using their enterprise Google Workspace account, granted "Allow All" permissions during the OAuth consent dance, and moved on with their day. An attacker who had previously compromised Context.ai's AWS environment pulled that OAuth token out of the vendor, reused it, and walked into Vercel systems to pull environment variables.

The timeline is worth tracing because every step is something a normal-sized team could miss.

In February 2026, a Context.ai employee downloaded a Roblox auto-farm script on a work device. The script carried Lumma Stealer. In March, the attacker pivoted from the resulting credentials into Context.ai's AWS environment and found stockpiled OAuth tokens, including the one belonging to the Vercel employee. On March 27, Google removed Context.ai's Chrome extension after discovering a second embedded grant for Drive files. In April, the attacker used the Vercel token to access Vercel infrastructure and exfiltrate environment variables. The data hit BreachForums a couple of weeks later.

Vercel described the exposed env vars as "non-sensitive." If you have shipped anything in the last five years, you know how much weight that word is carrying. Non-sensitive generally means "not the obvious secret-store entries," and yet env vars routinely carry API keys, DB creds, signing keys, and third-party tokens. Vercel sits upstream of a lot of production traffic. If the attacker had weaponized GitHub or npm tokens inside that haul, this goes from disclosure post to supply-chain event.
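If you want to sanity-check your own "non-sensitive" env vars, a dumb heuristic scan catches most of the obvious offenders. This is a sketch under my own assumptions: the variable names, regexes, and what counts as "secret-shaped" are my choices, not any vendor's classification.

```python
import re

# Name-based hint: anything that sounds like it holds a credential.
NAME_HINTS = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.I)

# Value-based hints for well-known token formats (illustrative, not exhaustive).
VALUE_HINTS = [
    re.compile(r"^ghp_[A-Za-z0-9]{36}$"),                 # GitHub personal access token
    re.compile(r"^AKIA[0-9A-Z]{16}$"),                    # AWS access key ID
    re.compile(r"^eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\."),  # JWT-shaped value
]

def flag_suspect_vars(env: dict) -> list:
    """Return names of env vars that look secret-bearing by name or value."""
    flagged = []
    for name, value in env.items():
        if NAME_HINTS.search(name) or any(p.match(value) for p in VALUE_HINTS):
            flagged.append(name)
    return sorted(flagged)

if __name__ == "__main__":
    sample = {
        "NODE_ENV": "production",
        "GITHUB_TOKEN": "ghp_" + "a" * 36,
        "ANALYTICS_URL": "https://example.com",
    }
    print(flag_suspect_vars(sample))  # ['GITHUB_TOKEN']
```

Run it against a dump of whatever your platform calls "non-sensitive" and see how long the list actually is.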

Guillermo Rauch blamed AI-assisted tooling for the attacker's operational speed. Take that for what it is worth (CEOs have motive), but the broader pattern matches what I am seeing elsewhere. AI-mediated analytics tools sit at the center of a hub of OAuth grants with wide scopes, usually at companies that are two years old and do not have mature security. They are the richest pivot surface in the stack right now.

The operational lesson I am walking into this week:

Go look at your Google Workspace or Microsoft 365 third-party app list. Filter by grants with Drive, Gmail, or Admin scopes. Every one of those is a Vercel-shaped incident waiting for the vendor to get popped. Revoke anything nobody has used in 60 days. Downgrade "Allow All" to least-privilege where the vendor supports it. Turn on workspace-wide restrictions on which OAuth scopes end users can consent to without admin approval; Google lets you configure this, but most orgs never turn it on.
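The triage step above can be scripted. This sketch assumes the item shape returned by the Google Admin SDK Directory API's tokens.list (`displayText`, `scopes`); the risky-scope substrings are my own picks, and the commented wiring at the bottom is hypothetical, requiring a delegated admin service account.

```python
# Scope substrings I am treating as high-risk; tune for your org.
RISKY_SCOPE_HINTS = ("auth/drive", "auth/gmail", "auth/admin")

def risky_grants(tokens: list) -> list:
    """Filter tokens.list items down to grants carrying Drive/Gmail/Admin scopes."""
    out = []
    for t in tokens:
        scopes = t.get("scopes", [])
        if any(hint in s for s in scopes for hint in RISKY_SCOPE_HINTS):
            out.append({"app": t.get("displayText"), "scopes": scopes})
    return out

# Fetching the real data per user looks roughly like this (hypothetical wiring,
# via google-api-python-client with domain-wide delegated admin credentials):
#   service = googleapiclient.discovery.build("admin", "directory_v1", credentials=creds)
#   items = service.tokens().list(userKey="user@example.com").execute().get("items", [])
#   for grant in risky_grants(items): ...

if __name__ == "__main__":
    sample = [
        {"displayText": "Context.ai",
         "scopes": ["https://www.googleapis.com/auth/drive"]},
        {"displayText": "Calendar widget",
         "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
    ]
    print([g["app"] for g in risky_grants(sample)])  # ['Context.ai']
```

Pair the output with last-used timestamps from the same API response and the 60-day revoke rule becomes a cron job instead of a quarterly chore.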

Assume Context.ai is the first one we know about, not the last. If your own org runs an AI analytics or AI-assistant SaaS with a Workspace integration, treat its AWS posture as your AWS posture.

Curious what noise anyone else is finding inside their OAuth grant review this week, and what policy is being used to decide what gets revoked.

https://elephas.app/resources/vercel-got-hacked-context-ai-2026