
March 17, 2026 · Krunal Sabnis

Your AI Agent Has Access to 47 Tools. Who Approved That?

MCP gives AI agents a standard way to discover and call tools. But discovery without governance means every agent has access to everything. The tool boundary is the next compliance gap — and it needs the same layered thinking we applied to data.

Tags: MCP · AI Governance · Tool Boundary · Enterprise AI · Agent Architecture · RBAC

Three Boundaries, One Solved

In Part 1 and Part 2, we solved the data boundary — what leaves your perimeter, in what state, with what auditability. PII gets detected and masked before it reaches a cloud model. Deterministic layers handle 95% of decisions. The architecture works.

But data is one boundary. Enterprise AI systems have at least three:

  • Data boundary — what leaves your perimeter, and in what state?
  • Tool boundary — which systems can an AI agent access, with what permissions, under whose authority?
  • Model boundary — which model handles which task, at what cost, with what auditability?

The data boundary has known solutions — PII detection, redaction, routing. The model boundary is a cost optimisation problem. The tool boundary is where most organisations have no governance at all.

What Changed: Agents Use Tools Now

Twelve months ago, most enterprise AI deployments were prompt-in, text-out. The LLM received a question and returned an answer. The security perimeter was the API call — control the prompt, control the output, control the billing.

That model is disappearing. AI agents now call tools: databases, APIs, file systems, internal services. The agent doesn’t just answer questions — it takes actions. It queries your CRM, updates a ticket, reads a document, triggers a workflow.

The Model Context Protocol (MCP) is emerging as the standard interface for this. MCP gives agents a consistent way to discover available tools, understand their input schemas, and execute them. It does for agent-to-tool communication what REST did for service-to-service communication: standardise the interface so both sides can evolve independently.

But the REST ecosystem grew API gateways — authentication, rate limiting, access control, audit logging. MCP has no equivalent layer yet. The protocol handles discovery and execution. It doesn’t handle governance.

The Gap: Discovery Without Policy

Here’s what MCP tool discovery looks like today in most deployments:

An agent connects to an MCP server. The server returns every tool it exposes. The agent can call any of them.
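Under the hood, MCP is JSON-RPC 2.0, and discovery is the `tools/list` method. A minimal sketch of the exchange — tool names here are illustrative, not from any real server:

```python
# The agent asks the MCP server what it can do.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server answers with *every* tool it exposes. Nothing in the
# protocol narrows this list per caller (tool names are illustrative).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "crm_query", "description": "Query CRM records"},
            {"name": "ticket_update", "description": "Update a support ticket"},
            {"name": "db_admin", "description": "Run arbitrary SQL"},
        ]
    },
}

# Every agent holding a connection sees the same list -- including db_admin.
visible = [t["name"] for t in response["result"]["tools"]]
print(visible)  # ['crm_query', 'ticket_update', 'db_admin']
```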

There’s no concept of: “this agent is authorised for these 5 tools but not those 12.” There’s no policy layer between discovery and execution. There’s no audit trail of which agent called which tool with what arguments. There’s no kill switch to revoke access when something goes wrong.

This is the equivalent of giving every microservice in your infrastructure a root database credential because “it’s easier than managing permissions.”

It works in development. It’s a compliance incident in production.

What Governance at the Tool Boundary Looks Like

The data boundary taught us something: governance works when it’s layered, not monolithic. The same principle applies to tools.

Layer 1: Policy definition — which tools are available to which credentials, under what conditions. This is RBAC, but scoped to tools instead of endpoints. A policy grants access to specific tools from specific connections. Everything else is denied by default.
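A minimal sketch of what such a policy object could look like — the shape and names are ours for illustration, not part of MCP or any particular gateway:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ToolPolicy:
    """Grants a credential access to an explicit set of tools.

    Anything not listed is denied by default (default-deny).
    """
    name: str
    allowed_tools: frozenset[str] = field(default_factory=frozenset)

    def allows(self, tool_name: str) -> bool:
        # Default-deny: only explicitly granted tools pass.
        return tool_name in self.allowed_tools


# Illustrative policy: a support agent gets exactly two tools.
support_agent = ToolPolicy(
    name="support-agent",
    allowed_tools=frozenset({"crm_query", "ticket_update"}),
)

print(support_agent.allows("ticket_update"))  # True
print(support_agent.allows("db_admin"))       # False
```

The important property is the absence of a wildcard: there is no way to express "everything", only explicit grants.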

Layer 2: Discovery filtering — when an agent asks “what tools can I use?”, it only sees what its policy allows. Tools outside the policy don’t appear in the list. The agent doesn’t know they exist. This is fail-closed — not fail-open with logging.
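Mechanically, discovery filtering is an intersection computed at the gateway before the tool list reaches the agent. A sketch, with illustrative tool names:

```python
def filter_discovery(all_tools: list[dict], allowed: set[str]) -> list[dict]:
    """Return only the tools the caller's policy grants.

    Tools outside the policy are omitted entirely (fail-closed):
    the agent never learns they exist.
    """
    return [t for t in all_tools if t["name"] in allowed]


# Everything the MCP server exposes.
server_tools = [
    {"name": "crm_query"},
    {"name": "ticket_update"},
    {"name": "db_admin"},
]

# What this credential's policy allows.
allowed = {"crm_query", "ticket_update"}

visible = filter_discovery(server_tools, allowed)
print([t["name"] for t in visible])  # ['crm_query', 'ticket_update']
```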

Layer 3: Execution gating — even if an agent somehow constructs a call to a tool outside its policy, the execution layer rejects it. Defence in depth. The discovery filter is the first boundary; execution gating is the second.
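The execution-side check repeats the policy test independently of discovery, so a hand-crafted call still fails. A sketch (names are illustrative):

```python
class ToolAccessDenied(Exception):
    """Raised when a call names a tool outside the caller's policy."""


def execute_tool(tool_name: str, arguments: dict, allowed: set[str], backend):
    # Defence in depth: discovery filtering already hid this tool, but the
    # execution layer re-checks the policy before anything runs.
    if tool_name not in allowed:
        raise ToolAccessDenied(f"policy does not grant '{tool_name}'")
    return backend(tool_name, arguments)


# Illustrative backend and policy.
def backend(tool, args):
    return f"{tool} executed"


allowed = {"crm_query", "ticket_update"}

print(execute_tool("crm_query", {}, allowed, backend))  # crm_query executed

try:
    # Even a forged call to an unlisted tool is rejected here.
    execute_tool("db_admin", {"sql": "DROP TABLE users"}, allowed, backend)
except ToolAccessDenied as err:
    print(err)  # policy does not grant 'db_admin'
```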

Layer 4: Audit trail — every tool call is logged with: who (which credential), what (which tool, which arguments), when (timestamp), and the result. This isn’t optional. It’s what your compliance team reviews. It’s what your security team queries during incident response. Without it, you have access control. With it, you have governance.
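As a sketch, one structured record per call is enough to answer the who/what/when questions — the field names below are ours, not a standard:

```python
import json
import time


def audit_record(credential_id: str, tool: str, arguments: dict, status: str) -> str:
    """Emit one structured log line per tool call: who, what, when, outcome."""
    return json.dumps({
        "ts": time.time(),            # when
        "credential": credential_id,  # who: which credential called
        "tool": tool,                 # what: which tool
        "arguments": arguments,       # what: with which arguments
        "status": status,             # outcome: allowed / denied / error
    })


line = audit_record(
    "agent-key-123", "ticket_update",
    {"ticket_id": 42, "state": "closed"}, "allowed",
)
print(line)
```

Because denied calls are logged too, the trail captures attempted overreach, not just successful calls — which is exactly what incident response needs.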

Why This Can’t Be Solved Inside the Agent

The instinct is to add tool governance at the agent level. “Configure the agent to only use approved tools.” This is the same mistake as letting an SLM make PII classification decisions.

Agents are optimised for helpfulness, not compliance. An agent given access to a tool will use it if it thinks that’s the most helpful response. It doesn’t evaluate organisational policy. It doesn’t check whether the user behind the request is authorised for that specific tool. It doesn’t know about regulatory constraints on data flowing through that tool.

Policy decisions need a layer that is independent of the agent — the same way PII detection needs a layer that is independent of the language model. The agent’s job is reasoning. The governance layer’s job is enforcement. Mixing them creates the same failure mode we saw in Part 2: the SLM classified credit cards as “not sensitive” because it was optimising for helpfulness, not compliance.

The Credential Problem

API gateways solved a specific credential problem for REST: one API key per consumer, scoped to specific endpoints, with rate limits and expiration. The consumer doesn’t hold the backend credentials — the gateway does.

MCP deployments today have the inverse problem. The agent often holds credentials for every tool server it connects to. If the agent is compromised, every tool is compromised. If you want to revoke access to one tool, you need to reconfigure the agent — not flip a switch in a policy layer.

The governance architecture separates these concerns:

  • The agent holds one credential — an API key scoped to a policy.
  • The policy defines which tools that key can access.
  • The gateway holds the connection credentials for external tool servers.
  • Revocation means deactivating a key or removing a tool from a policy — not reconfiguring every agent that might use it.

This is the same separation of concerns that made API gateways viable. The consumer doesn’t need to know how the backend authenticates. The backend doesn’t need to know who the consumer is. The gateway handles both.
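A toy in-memory sketch of that separation — every name and table here is illustrative, standing in for a real gateway's credential store:

```python
# The only secret the agent ever holds is its own API key.
api_keys = {"agent-key-123": "support-agent"}                  # key -> policy
policies = {"support-agent": {"crm_query", "ticket_update"}}   # policy -> tools
backend_secrets = {"crm_query": "crm-token",                   # gateway-held;
                   "ticket_update": "tickets-token"}           # never the agent's


def call_tool(api_key: str, tool: str) -> str:
    """Resolve key -> policy -> tool, using the gateway-held backend secret."""
    policy = api_keys.get(api_key)
    if policy is None or tool not in policies.get(policy, set()):
        raise PermissionError(f"'{tool}' is not granted to this key")
    secret = backend_secrets[tool]  # used here; never returned to the agent
    return f"{tool} called with gateway-held credential"


print(call_tool("agent-key-123", "crm_query"))

# Revocation is one operation at the gateway -- no agent redeploy.
del api_keys["agent-key-123"]
try:
    call_tool("agent-key-123", "crm_query")
except PermissionError:
    print("revoked")
```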

What MCP Gets Right — And What’s Missing

MCP is the right abstraction. A standard protocol for tool discovery and execution means agents and tool servers can evolve independently. You can swap an agent framework without rewriting tool integrations. You can add a new tool server without modifying agent code. This is the same value REST provided for web services.

What MCP doesn’t provide — and shouldn’t, because it’s a protocol, not a platform:

| Concern | REST ecosystem | MCP ecosystem today |
| --- | --- | --- |
| Authentication | API keys, OAuth, JWT | Left to implementation |
| Access control | Gateway policies, scopes | None |
| Rate limiting | Per-key quotas | None |
| Audit logging | Request/response logs | None |
| Revocation | Key deactivation | Reconfigure agent |
| Discovery filtering | Scope-based visibility | All tools visible |

REST didn’t solve these problems either. Gateways did. MCP needs the same layer.

What This Means for Enterprises Deploying Agents

1. Govern tools the way you govern APIs

If you wouldn’t give a microservice unrestricted database access, don’t give an AI agent unrestricted tool access. Same principle, different interface. The tooling is different — MCP instead of REST — but the governance model is the same: identity, policy, enforcement, audit.

2. Fail closed, not open

If a tool isn’t explicitly granted to a credential, it shouldn’t be discoverable or executable. This is the opposite of most current MCP deployments, where every connected tool is available to every agent. Default-deny with explicit grants is the only model that scales past a proof of concept.

3. Separate the agent from the policy

The agent decides what’s helpful. The policy layer decides what’s allowed. These are different concerns with different owners — the AI team owns the agent; the security team owns the policy. If they’re coupled, every policy change requires an agent redeployment. If they’re separated, policy changes are immediate and auditable.

4. The audit trail is the product

Access control without logging is security theatre. The audit trail is what proves to your CISO that governance is working. It’s what your compliance team exports for regulators. It’s what your platform team uses to understand agent behaviour. Build the trail first, not last.

The Arc

We started with a specific problem: PII in prompts reaching cloud models. We solved it with layered architecture — deterministic rules, statistical NER, SLM for judgment. Each layer does what it’s good at.

The tool boundary is the same pattern at a different layer. Deterministic policy enforcement. Structured access control. Audit at every decision point. The agent provides judgment — the governance layer provides boundaries.

The question for any enterprise deploying AI agents isn’t “which framework should we use?” It’s: who approved the tools your agent is calling, and can you prove it?


This is Part 3 of a series on building governed AI architecture for the enterprise. Part 1: PII recall from 76% to 98%. Part 2: three-mode benchmark proving layered architecture. Part 3: the tool boundary.