LDP: Identity-Aware Communication for Multi-Agent LLM Systems
How the LLM Delegate Protocol makes agent identity, trust, and provenance first-class primitives in multi-agent communication.
When multiple LLM agents communicate, a fundamental question goes unanswered by most frameworks: who is actually sending this message, and can it be trusted? Most multi-agent systems treat inter-agent messages as plain text payloads, leaving identity, capability declarations, and audit trails as afterthoughts bolted on at the application layer. The LLM Delegate Protocol (LDP) is a proposal to fix this by making model-level properties—identity, trust scope, and provenance—first-class protocol primitives.
The Identity Gap in Multi-Agent Systems
Today’s multi-agent frameworks typically route messages between agents using application-level conventions: a dictionary field labeled sender, a thread ID, or a named queue. These mechanisms work fine for simple pipelines, but they break down under adversarial conditions or organizational scale. Any agent that can write to a channel can forge a sender field. There is no standard way for a receiving agent to verify the capabilities of the caller, understand what model is behind it, or trace a decision back through a chain of delegations.
This creates three practical problems engineers hit in production:
- Impersonation and prompt injection: A compromised or malicious sub-agent can claim to be a trusted orchestrator, causing downstream agents to grant it elevated permissions.
- Capability mismatch: An orchestrator routes a task to an agent without knowing whether that agent actually supports the required tool set or context window size.
- Opaque audit trails: When something goes wrong in a five-agent pipeline, reconstructing which agent made which decision—and with what context—requires custom instrumentation in every agent.
Treating agent identity as an application-level convention rather than a protocol-level guarantee is the multi-agent equivalent of building an API without authentication. It works until it doesn’t, and failures are hard to detect.
Core Primitives of LDP
LDP addresses these gaps by defining a small set of protocol-level objects that travel with every inter-agent message.
Delegate Identity Cards are signed descriptors attached to each agent. They encode the agent’s identifier, its underlying model (name, version, provider), declared capabilities, and a cryptographic signature that allows receivers to verify authenticity. Think of them as a combination of an X.509 certificate and an OpenAPI spec—they tell you both who is sending and what it can do.
Progressive Payload Modes let the sender control how much context is transmitted. A lightweight mode sends only the task and identity card; a verbose mode includes full conversation history and intermediate reasoning. This mirrors the principle of least privilege: agents receive exactly the context they need, no more.
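The idea can be sketched as a small builder that strips context according to the requested mode. The mode names and field layout below are illustrative assumptions, not part of any published LDP specification:

```python
from typing import List, Optional


def build_payload(mode: str, task: str,
                  history: Optional[List[str]] = None,
                  reasoning: Optional[str] = None) -> dict:
    """Assemble a task payload under a progressive payload mode."""
    if mode == "minimal":
        # Least privilege: the task statement and nothing else.
        return {"mode": mode, "task": task}
    if mode == "verbose":
        # Full context for audited, high-trust sessions.
        return {
            "mode": mode,
            "task": task,
            "history": history or [],
            "reasoning": reasoning,
        }
    raise ValueError(f"unknown payload mode: {mode!r}")
```

The sender picks the mode per call, so a low-trust downstream agent never receives conversation history it has no need for.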
Governed Sessions establish a scoped interaction context with explicit lifecycle management—creation, active operation, and termination. A session carries its own trust configuration and can enforce that only credentialed delegates participate. Sessions make it straightforward to isolate a sensitive sub-task (e.g., accessing a financial tool) from the broader agent graph.
Trust Domains define zones of permission. Agents within the same trust domain can exchange full identity cards and elevated capabilities. Cross-domain calls are subject to downgraded permissions and additional verification, similar to how a corporate network enforces different rules for internal vs. external traffic.
Structured Provenance Tracking maintains a chain-of-custody record as a task passes through agents. Each hop appends a signed entry: which delegate received the task, what decision it made, and what it forwarded. The result is a tamper-evident audit log embedded in the protocol itself, not reconstructed after the fact from logs.
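One way to make such a chain tamper-evident is to fold each hop's content into a running hash, so altering any earlier hop invalidates every entry after it. A minimal sketch, with entry fields that are assumptions rather than LDP-specified:

```python
import hashlib
import json


def append_hop(chain: list, from_agent: str, to_agent: str,
               decision: str) -> None:
    """Append a hop whose hash commits to the previous hop's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"from": from_agent, "to": to_agent,
             "decision": decision, "prev": prev_hash}
    body = json.dumps(entry, sort_keys=True)
    entry["hash"] = hashlib.sha256(body.encode()).hexdigest()
    chain.append(entry)


def chain_is_intact(chain: list) -> bool:
    """Recompute every hash; any edited hop breaks all links after it."""
    prev_hash = "genesis"
    for entry in chain:
        body = json.dumps({k: entry[k] for k in
                           ("from", "to", "decision", "prev")},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```

A production version would sign each entry with the hopping agent's key rather than rely on a bare hash, but the chaining structure is the same.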
┌─────────────────────────────────────────────────────────────┐
│                    LDP Message Envelope                     │
├───────────────────────┬─────────────────────────────────────┤
│ Delegate Identity     │ Model: gpt-4o / provider: openai    │
│ Card (signed)         │ Capabilities: [code, search, sql]   │
│                       │ Signature: <ed25519>                │
├───────────────────────┼─────────────────────────────────────┤
│ Governed Session      │ Session ID: sess_abc123             │
│ Context               │ Trust Domain: internal              │
│                       │ Payload Mode: verbose               │
├───────────────────────┼─────────────────────────────────────┤
│ Provenance Chain      │ hop[0]: orchestrator → planner      │
│ (append-only log)     │ hop[1]: planner → code-agent        │
│                       │ hop[2]: code-agent → reviewer       │
├───────────────────────┼─────────────────────────────────────┤
│ Task Payload          │ <actual task content>               │
└───────────────────────┴─────────────────────────────────────┘
Implementing Identity Cards in Practice
For engineers building systems today, the delegate identity card concept translates naturally to a structured header on agent-to-agent HTTP or message-queue calls. A minimal implementation might look like this:
import hashlib
import hmac
import json
import time
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DelegateIdentityCard:
    agent_id: str
    model_name: str
    model_provider: str
    capabilities: List[str]
    trust_domain: str
    issued_at: float = field(default_factory=time.time)
    signature: Optional[str] = None

    def _canonical_payload(self) -> str:
        # Canonical JSON over every declared field except the signature
        # itself; leaving model_provider unsigned would let it be forged.
        return json.dumps({
            "agent_id": self.agent_id,
            "model_name": self.model_name,
            "model_provider": self.model_provider,
            "capabilities": sorted(self.capabilities),
            "trust_domain": self.trust_domain,
            "issued_at": self.issued_at,
        }, sort_keys=True)

    def sign(self, secret_key: str) -> None:
        # HMAC-SHA256 rather than sha256(payload + key): a bare hash over
        # concatenated input is vulnerable to length-extension attacks.
        self.signature = hmac.new(
            secret_key.encode(),
            self._canonical_payload().encode(),
            hashlib.sha256,
        ).hexdigest()

    def verify(self, secret_key: str) -> bool:
        if self.signature is None:
            return False
        expected = hmac.new(
            secret_key.encode(),
            self._canonical_payload().encode(),
            hashlib.sha256,
        ).hexdigest()
        # Constant-time comparison avoids leaking the signature via timing.
        return hmac.compare_digest(expected, self.signature)


@dataclass
class ProvenanceHop:
    from_agent: str
    to_agent: str
    timestamp: float = field(default_factory=time.time)
    action_summary: str = ""


@dataclass
class LDPEnvelope:
    identity: DelegateIdentityCard
    session_id: str
    payload_mode: str  # "minimal" | "verbose"
    task_payload: dict
    provenance_chain: List[ProvenanceHop] = field(default_factory=list)

    def add_hop(self, from_agent: str, to_agent: str, summary: str) -> None:
        self.provenance_chain.append(
            ProvenanceHop(from_agent, to_agent, action_summary=summary)
        )
A receiving agent validates the identity card before processing the payload, checks that the sender’s trust domain grants the requested operation, and appends its own hop to the provenance chain before forwarding.
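Those three steps reduce to a few guarded checks. A self-contained sketch, using plain dicts and a pluggable `verify_card` callback; the per-domain capability table is an assumption for illustration, not an LDP-defined mapping:

```python
# Illustrative policy: capabilities each trust domain may exercise here.
DOMAIN_ALLOWED = {
    "internal": {"code", "search", "sql"},
    "external": {"search"},
}


def handle_envelope(envelope: dict, verify_card, my_agent_id: str) -> dict:
    """Validate an inbound envelope, enforce trust-domain capability
    limits, and append this agent's hop to the provenance chain.

    `verify_card` is any callable returning True when the identity
    card's signature checks out."""
    card = envelope["identity"]
    # 1. Reject anything whose identity card does not verify.
    if not verify_card(card):
        raise PermissionError("identity card failed verification")
    # 2. Check what the sender's trust domain is allowed to ask for.
    allowed = DOMAIN_ALLOWED.get(card["trust_domain"], set())
    requested = set(envelope["task_payload"].get("required_capabilities", []))
    if not requested <= allowed:
        raise PermissionError("capabilities not granted to this domain: "
                              f"{sorted(requested - allowed)}")
    # 3. Record this hop before processing or forwarding.
    envelope["provenance"].append(
        {"from": card["agent_id"], "to": my_agent_id,
         "summary": "validated and processed"})
    return envelope
```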
Even without full LDP adoption, adding a signed identity card header to every agent-to-agent call gives you two immediate wins: impersonation becomes detectable, and your logs gain a structured agent-chain field you can query.
Trust Domains and Governed Sessions
The trust domain model maps cleanly onto organizational boundaries that already exist in enterprise deployments. Consider a company running an internal research agent with access to proprietary databases alongside a public-facing customer service agent. Under LDP, these live in separate trust domains. The internal agent’s identity card declares trust_domain: internal; the customer service agent’s declares trust_domain: external. When the customer service agent tries to invoke the internal research agent directly, the session governance layer downgrades the permitted capabilities automatically—no application-layer if-statements required.
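That automatic downgrade can be expressed as a pure function from (caller domain, callee domain, requested capabilities) to granted capabilities. The policy table here is illustrative:

```python
# Illustrative cross-domain policy: what each caller domain may receive
# when invoking an agent that lives in the "internal" domain.
CROSS_DOMAIN_GRANTS = {
    ("internal", "internal"): {"code", "search", "sql", "db_read"},
    ("external", "internal"): {"search"},   # downgraded automatically
}


def granted_capabilities(caller_domain: str, callee_domain: str,
                         requested: set) -> set:
    """Intersect the request with policy; unknown pairs get nothing."""
    allowed = CROSS_DOMAIN_GRANTS.get((caller_domain, callee_domain), set())
    return requested & allowed
```

Because the decision is a table lookup plus an intersection, the policy lives in one place instead of being scattered through application-layer conditionals.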
Governed sessions add temporal and membership constraints. A session for a sensitive financial workflow can be configured to expire after 10 minutes, accept only delegates from a pre-approved list, and require verbose payload mode so every decision is fully logged. Compare this to today’s common pattern, where session-like state is carried in a mutable Python dictionary passed between agents—mutable by any agent that touches it, with no access control.
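A governed session of that kind can be sketched as a small stateful object enforcing TTL, membership, and payload-mode constraints. Field names are illustrative assumptions:

```python
import time
from dataclasses import dataclass, field


@dataclass
class GovernedSession:
    session_id: str
    allowed_agents: frozenset          # pre-approved delegate allow-list
    ttl_seconds: float = 600.0         # e.g. 10 minutes for financial flows
    required_mode: str = "verbose"     # force full logging of decisions
    created_at: float = field(default_factory=time.time)
    terminated: bool = False

    def admit(self, agent_id: str, payload_mode: str) -> None:
        """Raise unless the message satisfies every session constraint."""
        if self.terminated:
            raise PermissionError("session terminated")
        if time.time() - self.created_at > self.ttl_seconds:
            self.terminated = True
            raise PermissionError("session expired")
        if agent_id not in self.allowed_agents:
            raise PermissionError(f"{agent_id} not on session allow-list")
        if payload_mode != self.required_mode:
            raise PermissionError(
                f"session requires {self.required_mode} mode")
```

Unlike a shared mutable dictionary, the allow-list is a frozenset and every admission decision is funneled through one method that can be logged and tested.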
Why This Matters for Agent Engineering
LDP is not a finished standard—it is a design vocabulary for a problem space that is only going to get more urgent. As agent graphs grow from three-node pipelines to dozens of specialized agents operating across organizational and security boundaries, the informal conventions that hold small systems together become attack surfaces and debugging nightmares.
The practical takeaway is a set of design principles engineers can apply now:
- Declare identity explicitly. Every agent-to-agent message should carry a verifiable descriptor of the sender, not just a string name.
- Scope context to need. Pass the minimum payload a downstream agent requires. Progressive payload modes are a forcing function for this discipline.
- Embed provenance, don’t reconstruct it. A tamper-evident chain appended at each hop is far more reliable than correlating log lines after the fact.
- Model trust as zones, not binary flags. The internal/external trust domain split is a reasonable starting point; it mirrors how network security already works.
As A2A (Google's Agent2Agent protocol) and MCP (Anthropic's Model Context Protocol) mature on the capability-discovery and tool-invocation sides, protocols like LDP address the complementary question: once agents can find and call each other, how do they know they should?
This article is an AI-generated summary. Read the original paper: LDP: An Identity-Aware Protocol for Multi-Agent LLM Systems.