Skip to main content
The patterns from the previous page share a structural weakness: credentials are too broad, last too long, and are not attributable to a specific agent-user delegation.

Blast radius is unlimited

When an agent holds a shared credential, compromise of one path can expose everything that credential allows.
# Task: summarize issues in one repo
# Needed: issues:read on acme/backend
# Granted: org-wide token

@server.tool("summarize_issues")
async def summarize_issues(repo: str) -> str:
    gh = Github(os.environ["GITHUB_TOKEN"])

    # Intended behavior
    issues = gh.get_repo(repo).get_issues(state="open")

    # Same token could also do much more:
    # gh.get_repo("acme/secrets").get_contents("production.env")
    # gh.get_repo("acme/backend").delete()
    # gh.get_organization("acme").edit(billing_email="[email protected]")

Prompt injection becomes a systems attack

Without credentials, prompt injection is mostly a content integrity issue. With credentials, it becomes an infrastructure issue.
1) User asks: "Summarize latest issues in acme/backend"
2) Agent reads issue text containing hidden malicious instructions
3) LLM treats those instructions as operational guidance
4) Agent executes tool calls using real credentials
5) Agent creates/modifies resources or exfiltrates data
The agent does not just say something wrong. It performs authenticated actions that can be destructive.

You cannot revoke one agent

Shared credentials force all-or-nothing incident response.
Shared token is used by:
- Agent A (PR review)
- Agent B (issue triage)  <-- compromised
- Agent C (release notes)
- CI pipeline
- Cron jobs

Options:
- Revoke token: everything breaks
- Keep token: compromised agent keeps access
- Rotate token: update every consumer and redeploy
There is no surgical kill switch for one agent.

No audit trail

Shared credentials collapse many actors into one identity.
{
  "actor": "github-bot",
  "action": "repo.branch_delete",
  "repo": "acme/backend",
  "timestamp": "2026-03-03T09:32:14Z"
}
This does not answer the key questions: which agent, on whose behalf, from which prompt, under what policy.

Credentials leak through outputs

Agents run in adversarial conditions. Credentials in process memory can leak through many paths.
# Path 1: Tool args can end up in observability logs
await agent.call_tool("http_request", {
    "url": "https://api.github.com/repos/acme/backend",
    "headers": {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
})

# Path 2: Error logging can serialize headers
except requests.HTTPError as e:
    logger.error(f"API call failed: {e.request.headers}")

# Path 3: LLM output can be steered to include sensitive context
The risk surface includes logs, traces, exceptions, tool payloads, and generated responses.

Overprivileged by default

Static tokens keep all original scopes for every task.
Task: "Read README from acme/docs"
Needed: contents:read

Token scopes (example):
- contents:read
- contents:write
- issues:write
- pull_requests:write
- admin:repo_hook
- delete_repo
Least privilege is not enforced per invocation, so every task runs with excess permissions.
This is why security teams often block agents in production. The core risk is not model output quality alone; it is credential architecture.