MCP, A2A, and ORD: The Three Protocols Shaping AI Agent Architecture
If you’ve been following the AI agent space, you’ve probably seen three protocols come up repeatedly: MCP (Model Context Protocol), A2A (Agent-to-Agent), and ORD (Open Resource Discovery). They’re often mentioned together, but they solve fundamentally different problems.
This post breaks down what each protocol does, how it works under the hood, and how the three fit together — especially in the context of enterprise landscapes like SAP BTP.
The Problem They Solve
Today’s AI agents need three things:
- Tools and data — an agent that can’t read your database or call your APIs is just a chatbot
- Collaboration — complex tasks require multiple specialized agents working together
- Discovery — in a landscape with hundreds of services, agents need to know what’s available
Each protocol addresses one of these needs:
| Need | Protocol | By |
|---|---|---|
| Tools and data | MCP (Model Context Protocol) | Anthropic |
| Agent collaboration | A2A (Agent-to-Agent) | Google / Linux Foundation |
| Resource discovery | ORD (Open Resource Discovery) | SAP / Linux Foundation |
None of them are “just REST APIs.” They all build structured semantics on top of HTTP and JSON.
MCP: Giving Agents Their Tools
What It Is
MCP is an open standard for connecting AI applications to external tools, data sources, and prompts. Think of it as a USB-C port for AI — one standardized interface that lets any AI app connect to any data source.
Architecture
MCP follows a client-server model built on JSON-RPC 2.0:
AI Host (Claude, VS Code, ChatGPT)
├── MCP Client 1 ──→ MCP Server A (filesystem) [stdio]
├── MCP Client 2 ──→ MCP Server B (database) [stdio]
└── MCP Client 3 ──→ MCP Server C (Sentry, remote) [HTTP+SSE]
The host (your AI app) creates one client per server. Each client maintains a dedicated connection.
Transport Layer
Two mechanisms:
| Transport | How | When |
|---|---|---|
| stdio | Standard input/output between local processes | Local servers on the same machine |
| Streamable HTTP | HTTP POST + Server-Sent Events for streaming | Remote servers over the network |
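For local servers, the stdio transport is simple enough to sketch with the standard library: newline-delimited JSON-RPC messages written to the server process's stdin, with responses read back from its stdout. The server below is a toy stand-in that answers every request with an empty result, not a real MCP server binary.

```python
import json
import subprocess
import sys

# Toy stand-in for an MCP server binary: reads newline-delimited JSON-RPC
# requests from stdin and answers each one with an empty result.
SERVER = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": {}}
    sys.stdout.write(json.dumps(resp) + "\n")
    sys.stdout.flush()
"""

def stdio_request(proc, method, params=None, msg_id=1):
    """Send one JSON-RPC 2.0 request over stdio and read one response."""
    req = {"jsonrpc": "2.0", "id": msg_id, "method": method}
    if params is not None:
        req["params"] = params
    proc.stdin.write(json.dumps(req) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

proc = subprocess.Popen(
    [sys.executable, "-c", SERVER],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
resp = stdio_request(proc, "tools/list")
print(resp)
proc.stdin.close()
proc.wait()
```

The same request/response framing carries every MCP method; only the transport (pipes vs. HTTP) changes.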
What Servers Expose
MCP defines three server primitives:
- Tools — executable functions the AI can invoke (query a database, create a file, call an API)
- Resources — read-only data sources for context (file contents, database schemas)
- Prompts — reusable templates for structuring LLM interactions
And two client primitives that let the server ask things of the client:
- Sampling — server asks the client’s LLM to generate a completion
- Elicitation — server asks the user for input or confirmation
The Lifecycle
This is what separates MCP from a REST API. MCP is stateful:
- Initialize — client and server exchange capabilities in a handshake
- Discover — client calls `tools/list` and `resources/list` to learn what's available
- Execute — client calls `tools/call` with arguments, gets structured results
- Notify — server pushes real-time notifications when tools change (no polling)
A REST API has no initialization, no capability negotiation, and no push notifications. You read the docs, hardcode the endpoint, and hope nothing changes.
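A toy in-process sketch (plain Python, not the real MCP SDK) makes that statefulness concrete: the server below refuses discovery and execution until the initialize handshake has run. The error code and capability fields are illustrative assumptions, not exact spec values.

```python
# Toy illustration of MCP's stateful lifecycle, not the real SDK.
class ToyMcpServer:
    def __init__(self, tools):
        self.tools = tools          # name -> callable
        self.initialized = False

    def handle(self, method, params=None):
        if method == "initialize":
            # capability negotiation: advertise what this server supports
            self.initialized = True
            return {"capabilities": {"tools": {"listChanged": True}}}
        if not self.initialized:
            # a plain REST API has no equivalent of this gate
            return {"error": {"code": -32002, "message": "not initialized"}}
        if method == "tools/list":
            return {"tools": [{"name": n} for n in self.tools]}
        if method == "tools/call":
            fn = self.tools[params["name"]]
            return {"content": [{"type": "text", "text": fn(**params["arguments"])}]}
        return {"error": {"code": -32601, "message": "method not found"}}

server = ToyMcpServer({"weather_current": lambda location: f"{location}: 12°C, partly cloudy"})
server.handle("initialize")
out = server.handle("tools/call",
                    {"name": "weather_current", "arguments": {"location": "Berlin"}})
print(out["content"][0]["text"])
```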
Example Exchange
// Client discovers tools
{ "method": "tools/list" }
// Server responds
{
"result": {
"tools": [{
"name": "weather_current",
"description": "Get current weather for any location",
"inputSchema": {
"type": "object",
"properties": {
"location": { "type": "string" }
},
"required": ["location"]
}
}]
}
}
// Client invokes a tool
{ "method": "tools/call", "params": { "name": "weather_current", "arguments": { "location": "Berlin" } } }
// Server returns result
{ "result": { "content": [{ "type": "text", "text": "Berlin: 12°C, partly cloudy" }] } }
Adoption
MCP is supported by Claude, ChatGPT, VS Code (Copilot), Cursor, and many other AI clients. The ecosystem already has hundreds of community-built servers for databases, APIs, cloud services, and developer tools.
A2A: Agents Talking to Agents
What It Is
A2A is an open protocol for agent-to-agent communication. While MCP connects an AI to its tools, A2A lets two independent AI agents discover each other and collaborate on tasks — without exposing their internal logic.
Architecture
A2A is peer-to-peer. Agents are opaque — they publish what they can do, but never reveal how they do it.
Agent A (sales optimizer) Agent B (inventory manager)
│ │
├─ publishes Agent Card ├─ publishes Agent Card
│ │
└─ sends Task to B ──────────────→ │
├─ works on Task
← streams progress updates ────┤
← returns Artifacts ───────────┘
Transport
Also built on JSON-RPC 2.0 over HTTPS, with three communication patterns:
| Pattern | Mechanism | Use Case |
|---|---|---|
| Synchronous | Standard HTTP request/response | Quick tasks |
| Streaming | Server-Sent Events (SSE) | Real-time progress |
| Push Notifications | Webhook callbacks | Long-running async work |
A2A also supports gRPC and plain HTTP+JSON as alternative bindings. Agents declare which protocols they support.
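In the synchronous pattern, a request is just a JSON-RPC 2.0 envelope. The method name `message/send` and the message/parts layout below follow my reading of the A2A spec and should be treated as assumptions:

```python
import json

def build_send_request(task_text, request_id=1):
    """Build a JSON-RPC 2.0 envelope asking a remote agent to start work."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "message/send",   # assumed A2A method name; verify against the spec
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": task_text}],
            }
        },
    }

req = build_send_request("Can you fulfill 500 units of material X from plant 1710?")
print(json.dumps(req, indent=2))
```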
Core Concepts
Agent Card — a metadata document every agent publishes, describing:
- Identity and endpoint URL
- Skills and capabilities
- Supported input/output modalities (text, forms, files, media)
- Authentication requirements (OAuth2, API keys, mTLS)
This is how agents discover each other. Think of it as a business card.
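As a sketch, an Agent Card for the inventory agent from the earlier diagram might look like this. The field names follow the general shape of the A2A schema but are assumptions here, and the endpoint URL is hypothetical:

```python
import json

# Hypothetical Agent Card; field names approximate the A2A schema.
agent_card = {
    "name": "Inventory Manager",
    "description": "Checks stock levels and reserves inventory",
    "url": "https://agents.example.com/inventory",   # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": True},
    "defaultInputModes": ["text/plain"],
    "defaultOutputModes": ["text/plain", "application/json"],
    "skills": [{
        "id": "stock-check",
        "name": "Stock availability check",
        "description": "Answers availability questions for a material and plant",
        "tags": ["inventory"],
    }],
}

print(json.dumps(agent_card, indent=2))
```

A client fetches this card, checks that the modalities and auth requirements match its own, and only then sends a Task.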
Task — the fundamental unit of work. Has a full lifecycle:
submitted → pending → working → completed
               ↓         ↓
        input-required  failed
               ↓         ↓
           canceled   rejected
Tasks support multi-turn interactions. An agent can pause a task to ask for clarification (input-required), then resume when it gets an answer.
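The lifecycle can be modeled as a small state machine. The transition table below is one reading of the diagram above, with `input-required` allowed to resume into `working` for multi-turn exchanges; it is illustrative, not the normative spec:

```python
# Illustrative only; real A2A SDKs track task state for you.
TRANSITIONS = {
    "submitted": {"pending", "rejected"},
    "pending": {"working", "canceled"},
    "working": {"completed", "input-required", "failed"},
    "input-required": {"working", "canceled"},  # resume after clarification
}
TERMINAL = {"completed", "failed", "canceled", "rejected"}

class Task:
    def __init__(self):
        self.history = ["submitted"]   # full state history is persisted

    @property
    def state(self):
        return self.history[-1]

    def advance(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.history.append(new_state)
        return self

# A multi-turn run: the agent pauses for clarification, then finishes.
task = Task()
task.advance("pending").advance("working").advance("input-required")
task.advance("working").advance("completed")
print(task.history)
```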
Messages — communication exchanged during task execution. Persisted in task history.
Artifacts — the deliverables. What the agent produces as output.
Key Properties
- Opaque agents — no internal tools, memory, or reasoning exposed
- Async-first — designed for long-running operations
- Framework-agnostic — agents built on LangChain, CrewAI, AutoGen, or custom code can all interoperate
- Capability negotiation — agents agree on modalities before starting work
ORD: The Catalog of Everything
What It Is
ORD is a protocol for metadata publishing and discovery. It doesn’t move data or execute tasks — it answers the question: “What exists in my landscape, and what can it do?”
ORD lets applications self-describe their APIs, events, data products, agents, and capabilities through machine-readable metadata. An aggregator then crawls these descriptions to build a unified catalog.
Architecture
ORD uses a simple pull-based model:
ORD Aggregator (e.g., SAP BTP Unified Customer Landscape)
│
├─ GET /.well-known/open-resource-discovery ──→ App A
│ ↓
│ ORD Configuration (document URLs, access strategies)
│ ↓
│ GET /ord/documents/1 ──→ ORD Document (JSON)
│ ↓
│ GET /api-specs/sales-order.oas3.json ──→ OpenAPI spec
│
├─ Same flow for App B, App C, ...
│
└─ Consolidated catalog of all resources
The Discovery Flow
- Aggregator hits the well-known URI: `GET https://app.example.com/.well-known/open-resource-discovery`
- Gets ORD Configuration: supported versions, document URLs, access strategies
- Fetches ORD Documents: JSON files (max 2MB) containing resource descriptions
- Follows links to detailed specs (OpenAPI, AsyncAPI, OData CSDL)
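The same flow can be sketched against in-memory fixtures instead of live HTTP calls. The configuration and document field names (`openResourceDiscoveryV1`, `documents`, `apiResources`, `ordId`) follow my reading of the ORD spec and should be verified against it:

```python
# Fixture standing in for GET /.well-known/open-resource-discovery
WELL_KNOWN = {
    "openResourceDiscoveryV1": {
        "documents": [
            {"url": "/ord/documents/1", "accessStrategies": [{"type": "open"}]}
        ]
    }
}

# Fixture standing in for GET /ord/documents/1
DOCUMENTS = {
    "/ord/documents/1": {
        "apiResources": [
            {"ordId": "sap.s4:apiResource:SalesOrder:v2",
             "resourceDefinitions": [{"url": "/api-specs/sales-order.oas3.json"}]}
        ]
    }
}

def crawl(well_known, documents):
    """Return every ordId reachable from the ORD configuration."""
    ord_ids = []
    for doc_ref in well_known["openResourceDiscoveryV1"]["documents"]:
        doc = documents[doc_ref["url"]]
        for resource in doc.get("apiResources", []):
            ord_ids.append(resource["ordId"])
    return ord_ids

print(crawl(WELL_KNOWN, DOCUMENTS))
```

A production aggregator would replace the dictionary lookups with HTTP GETs and honor the declared access strategies.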
What Gets Described
ORD Documents contain these resource types:
| Resource Type | What It Describes |
|---|---|
| API Resources | REST, OData, SOAP, GraphQL APIs |
| Event Resources | Async events (links to AsyncAPI specs) |
| Entity Types | Business objects / domain models |
| Data Products | Consumable datasets |
| Packages | Logical grouping of resources |
| Consumption Bundles | Resources consumed together with shared auth |
| Integration Dependencies | What the system needs from other systems |
| Capabilities | What the system can do |
| Agents | AI agents with discoverable metadata (beta) |
ORD IDs
Every resource gets a globally unique, stable identifier:
<namespace>:<conceptName>:<resourceName>:<version>
Real examples:
sap.s4:apiResource:CE_APS_COM_CS_A4C_ODATA_0001:v1
sap.foo:eventResource:BillingDocumentEvents:v1
sap.foo:package:ord-reference-app:v0
sap.foo:entityType:Constellation:v1
mycompany.erp:apiResource:sales-order-api:v1
Rules:
- Namespace: lowercase, dot-separated (`sap.s4`, `mycompany.erp`)
- Concept name: fixed set (`apiResource`, `eventResource`, `package`, etc.)
- Resource name: human-readable, ASCII, stable across versions
- Version: `v1`, `v2`, etc. (empty for `product` and `vendor`)
- Immutable once published, max 255 characters
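These rules are mechanical enough to check with a regular expression. A minimal sketch, assuming a partial concept-name set and a simplified grammar (the spec's is more detailed):

```python
import re

# Partial concept-name set, for illustration; the ORD spec defines the full set.
CONCEPTS = {
    "apiResource", "eventResource", "entityType", "dataProduct",
    "package", "consumptionBundle", "capability", "product", "vendor",
}

ORD_ID = re.compile(
    r"^(?P<namespace>[a-z][a-z0-9]*(?:\.[a-z][a-z0-9]*)+)"   # e.g. sap.s4
    r":(?P<concept>[a-zA-Z]+)"                               # e.g. apiResource
    r":(?P<resource>[A-Za-z0-9_.\-]+)"                       # stable resource name
    r":(?P<version>v\d+)?$"                                  # v1, v2, ... (may be empty)
)

def parse_ord_id(ord_id: str) -> dict:
    """Split an ORD ID into its four parts, or raise ValueError."""
    if len(ord_id) > 255:
        raise ValueError("ORD IDs are capped at 255 characters")
    m = ORD_ID.match(ord_id)
    if not m or m.group("concept") not in CONCEPTS:
        raise ValueError(f"not a valid ORD ID: {ord_id}")
    return m.groupdict()

print(parse_ord_id("sap.s4:apiResource:CE_APS_COM_CS_A4C_ODATA_0001:v1"))
```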
Key Properties
- Decentralized — each app is the authoritative source of truth about itself (no central registry drift)
- Pull-based — aggregators crawl providers via standard HTTP GET
- Extensible — custom labels, types, and spec extensions supported
- References existing standards — doesn’t reinvent OpenAPI or AsyncAPI, just links to them
- Static + Runtime — supports both documentation and live tenant-specific discovery
- Open governance — Linux Foundation project under NeoNephos Foundation
How They Fit Together
These three protocols aren’t competing — they’re complementary layers:
┌──────────────────────────────────────────────────────┐
│ ORD (Discovery Layer) │
│ "What exists? What can it do? Where is it?" │
│ │
│ Publishes metadata catalogs: APIs, events, agents, │
│ data products, capabilities │
└───────────────┬──────────────────────┬────────────────┘
│ │
┌──────────▼──────────┐ ┌────────▼─────────────┐
│ MCP (Tool Layer) │ │ A2A (Agent Layer) │
│ │ │ │
│ AI app connects to │ │ Agent collaborates │
│ databases, files, │ │ with other agents │
│ APIs as tools │ │ as opaque peers │
└─────────────────────┘ └───────────────────────┘
A practical example:
- ORD describes the landscape: "There's a Sales Order API at `sap.s4:apiResource:SalesOrder:v2`, an Inventory Agent at `warehouse.co:agent:stock-checker:v1`, and a Shipping Event at `logistics:eventResource:ShipmentCreated:v1`"
- MCP gives your agent tools: An MCP server wraps the Sales Order API so the agent can query orders, create deliveries, and check pricing — all through structured tool calls with discovery and lifecycle management
- A2A enables collaboration: Your sales agent sends a Task to the warehouse agent asking "Can you fulfill 500 units of material X from plant 1710?" The warehouse agent checks stock, responds with an Artifact containing availability data, and your agent proceeds
The SAP BTP Context
In SAP’s vision:
- ORD powers the Unified Customer Landscape — the metadata backbone that knows what services, APIs, and agents exist across your BTP subaccounts and connected systems
- MCP enables Joule and custom AI agents to ground themselves in real SAP data — connecting to S/4HANA, SuccessFactors, or Ariba through tool interfaces
- A2A will enable cross-system agent orchestration — a procurement agent in Ariba collaborating with a finance agent in S/4HANA, each opaque, each autonomous
Comparison Table
| | MCP | A2A | ORD |
|---|---|---|---|
| Purpose | Connect AI to tools/data | Agent-to-agent collaboration | Metadata discovery |
| By | Anthropic | Google / Linux Foundation | SAP / Linux Foundation |
| Wire format | JSON-RPC 2.0 | JSON-RPC 2.0 | JSON over HTTP GET |
| Transport | stdio, Streamable HTTP | HTTP, SSE, webhooks, gRPC | HTTP pull (well-known URI) |
| Stateful | Yes (lifecycle, sessions) | Yes (task lifecycle) | No (read-only catalog) |
| Discovery | tools/list, resources/list | Agent Cards | /.well-known/open-resource-discovery |
| Bidirectional | Yes (sampling, elicitation) | Yes (multi-turn tasks) | No (one-way pull) |
| Real-time | Notifications on change | SSE streaming, push | No (aggregator polls) |
| Auth | Bearer tokens, API keys | OAuth2, API keys, mTLS | Per access strategy |
| Analogy | USB-C port for AI | Business meeting between agents | Yellow Pages / service catalog |
When to Use What
Use MCP when your AI application needs to interact with external systems — read files, query databases, call APIs, execute code. If you’re building an AI-powered SAP tool that needs to call OData services or read ABAP source code, you’d wrap those as MCP servers.
Use A2A when you have multiple autonomous agents that need to collaborate without sharing internals. If you have a sales planning agent and a demand forecasting agent that need to exchange results, A2A gives them a standard way to do it.
Use ORD when you need to catalog and discover what’s available in your landscape. If you’re managing a BTP environment with 50+ services and want agents to programmatically find which APIs are available, ORD provides the machine-readable directory.
Conclusion
MCP, A2A, and ORD aren’t competing standards — they’re the plumbing, meeting rooms, and building directory of the AI agent world. MCP gives agents their hands (tools). A2A gives agents their voice (peer communication). ORD gives agents their map (what exists and where).
As enterprise AI moves from single-agent demos to multi-agent production systems, having standardized protocols at each layer becomes critical. The good news is that all three are open source, industry-backed, and designed to work together. The architecture is coming into focus — now it’s time to build on it.