What if your AI platform was silently sharing one user's private session with another?

Most teams find out from an attacker. Some find out from Siemba.

Book a Security Assessment

Zero Trust Inside the Conversation: How Siemba Secured a Fortune 500 Internal AI Platform

When thousands of employees and vendors share one AI engine, a single authorisation flaw becomes a company-wide data breach.


KEY SOLUTIONS

Grey Box Security Assessment · Penetration Testing as a Service (PTaaS) · Prompt Injection Testing · Session Isolation Validation · Cross-User Leakage Testing · Privilege Escalation Testing · Vendor Boundary Assessment

THE SCENARIO

When the AI Platform Serves Everyone

The client had deployed "The Core", a sophisticated internal AI platform that allowed employees and third-party vendors to automate complex IT tasks via natural language. The agents were not passive: they had direct integrations into ServiceNow for IT management, internal APIs, and business-critical tooling.

The platform served thousands of concurrent users, from senior engineers to external customer support vendors, each interacting with agents that held significant operational privileges. It was, in every sense, a productivity multiplier. And a single point of failure.

THE THREAT

Two Threats. One Platform.

The multi-tenant, high-permission environment created a specific dual threat model that traditional security tools were not equipped to detect:

  • The "Agent Authority" Risk — could a low-level user manipulate the LLM into performing administrative actions — resetting a server, granting access, modifying records — that far exceeded their actual permission level?
  • The Privacy & Isolation Risk — with thousands of concurrent sessions, could an architectural flaw allow one user to access the private chat history or proprietary tool outputs belonging to another user?


In a shared environment serving both employees and external vendors, a failure in isolation is not a bug — it is a data breach. The client needed to know: does Zero Trust exist inside the conversation itself, or only at the login screen?



THE TEST

Simulating the Insider Threat

Siemba conducted a "Grey Box" assessment, using legitimate authenticated access to probe the outer limits of what the AI agents would permit. Rather than scanning from the outside, the team worked from within, mimicking the real-world access patterns of a curious employee or a compromised vendor.

Traffic interception and logic analysis

The team analysed traffic between the chat UI and backend orchestration layer, dissecting the exact hand-off between the LLM and its integrated tools, looking for gaps where business logic could be bypassed without triggering any alerts.
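To make the hand-off risk concrete, here is a minimal sketch of the kind of flaw this analysis looks for. The payload shape, field names, and tool names are illustrative assumptions, not the client's actual schema: the vulnerable pattern is an orchestration layer that trusts an identity field supplied by the client rather than the server-side authenticated session.

```python
# Hypothetical tool-call payload passed from the chat UI to the
# orchestration layer. All field and tool names are illustrative.
tool_call = {
    "session_id": "sess-4821",
    "user_id": "emp-1007",               # client-supplied, therefore untrusted
    "tool": "servicenow.reset_server",   # hypothetical integrated tool
    "arguments": {"host": "prod-db-03"},
}

def is_request_trustworthy(payload: dict, authenticated_user: str) -> bool:
    """Identity must come from the authenticated transport/session layer.
    If the business logic reads user_id out of the payload instead, an
    attacker who can edit the request simply swaps in another identity."""
    return payload.get("user_id") == authenticated_user
```

The point of the check is that `authenticated_user` is resolved server-side; any mismatch with the payload signals tampering and should be rejected and alerted on, not silently corrected.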

The Curious Employee persona

Siemba simulated a legitimate employee attempting to access data, outputs, and tool results outside their authorised department scope, testing whether the platform's isolation was truly enforced at the session level.

The Compromised Vendor persona

The team simulated a third-party vendor attempting to escalate privileges, access internal IP, and trigger administrative workflows beyond their designated scope, testing whether vendor boundaries held under deliberate pressure.


THE FIX

From Shared Risk to Hermetic Isolation

F-01 — Session Isolation Protocols

Strict session isolation was implemented, ensuring conversational data is hermetically sealed per user ID. No cross-session access is possible under any load condition or request manipulation.
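A minimal sketch of what "hermetically sealed per user ID" can mean in practice, assuming an in-memory store for illustration (a production system would use a database with the same keying discipline): conversation data is keyed by the owning user as well as the session, so a guessed or leaked session ID alone can never resolve to another user's data.

```python
class SessionStore:
    """Illustrative per-user session isolation: reads and writes are
    keyed by (user_id, session_id), never by session_id alone."""

    def __init__(self):
        self._data = {}  # (user_id, session_id) -> list of messages

    def append(self, user_id: str, session_id: str, message: str) -> None:
        self._data.setdefault((user_id, session_id), []).append(message)

    def read(self, requesting_user: str, session_id: str) -> list:
        key = (requesting_user, session_id)
        if key not in self._data:
            # No fallback lookup by session_id: a session not owned by
            # the requesting user is indistinguishable from one that
            # does not exist.
            raise PermissionError("session not owned by requesting user")
        return list(self._data[key])
```

Because ownership is part of the lookup key rather than a post-hoc filter, there is no code path where cross-user data is fetched and then (perhaps incorrectly) redacted.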

F-02 — Secondary Permission Validation Layer

A validation layer was added that independently verifies user permissions before the agent executes any tool action — completely separate from the LLM's own reasoning about what it should do.
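The pattern can be sketched as a deterministic wrapper around every tool invocation. The policy table and tool names below are hypothetical; the essential property is that the check runs after the LLM decides to call a tool and before the call executes, so the model's reasoning is never treated as an authorisation decision.

```python
# Hypothetical role-to-tool policy; entries are illustrative only.
PERMISSIONS = {
    "employee": {"servicenow.read_ticket"},
    "admin":    {"servicenow.read_ticket", "servicenow.reset_server"},
}

def execute_tool(user_role: str, tool_name: str, run_tool):
    """Independent permission gate between the LLM's tool choice and
    the actual execution. Even if a prompt injection convinces the
    model to request an admin action, the call is refused here."""
    if tool_name not in PERMISSIONS.get(user_role, set()):
        raise PermissionError(f"{user_role} may not call {tool_name}")
    return run_tool()
```

The design choice matters: the gate consults a static policy keyed on the authenticated user's role, so no prompt strategy can talk its way past it.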

F-03 — Vendor Boundary Enforcement

Stricter boundary enforcement was implemented across the platform, ensuring third-party vendors cannot access sensitive internal workflows, data structures, or proprietary schema information regardless of their prompt strategy.
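One way to make "regardless of their prompt strategy" enforceable, sketched under assumed tool names: vendors are given a whitelisted tool catalogue, so internal tools are never even advertised to the model in a vendor session and cannot be requested by name.

```python
# Illustrative tool catalogues; names are assumptions for the sketch.
INTERNAL_TOOLS = {"hr.lookup", "finance.export", "servicenow.reset_server"}
VENDOR_TOOLS   = {"servicenow.read_ticket", "kb.search"}

def visible_tools(is_vendor: bool) -> set:
    """Vendors receive only the whitelisted catalogue. Because internal
    tools are absent from the model's context entirely, enumeration or
    request-by-name prompt strategies have nothing to target."""
    return set(VENDOR_TOOLS) if is_vendor else VENDOR_TOOLS | INTERNAL_TOOLS
```

Filtering the catalogue at session setup complements the per-call permission gate: one removes the attack surface, the other catches anything that slips through.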

THE LESSON

Zero Trust Must Extend to the AI Agent Itself

The most important takeaway from this engagement: the boundary between user and administrator is blurring. As AI platforms serve increasingly diverse user populations with increasingly broad operational access, the conversation itself becomes a privilege boundary.

Zero Trust cannot stop at the login screen. Every tool call, every session, every vendor interaction must be treated as potentially adversarial. Siemba's assessment proved that "The Core" could deliver on its promise — but only after the AI agent layer was treated as a security boundary, not just a productivity feature.

"The insights from Siemba didn't just point out what we needed to fix, they taught us how to think about security in a more sophisticated and proactive way. This has significantly propelled us forward, making our approach to cybersecurity more robust and better prepared to face the challenges ahead."

Alvin Allen
Head of Cybersecurity, FrontSteps

Is Your Internal AI Platform Truly Isolated?

Siemba tests the conversation layer, not just the login. Find out if your AI agents respect the permission boundaries you've set.

Book a Demo