25 April 2026
IN-DEPTH ANALYSIS · AGENT INFRASTRUCTURE
10 min read

At Google Cloud Next 2026 on April 22, Google promoted the Agent2Agent protocol (A2A) version 1.2 to production status as a Linux Foundation project. Over 150 organizations are already running A2A in production, including Microsoft, AWS, Salesforce, SAP, and ServiceNow. Native support exists in the Google Agent Development Kit, LangGraph, CrewAI, LlamaIndex Agents, Semantic Kernel, and AutoGen. For DACH cloud architects, this makes acutely relevant a question that had been merely theoretical since A2A's initial announcement last year: does A2A require its own infrastructure layer alongside the existing service mesh, or can it operate within the established sidecar model?

TL;DR (as of April 24, 2026):
  • A2A 1.2 launched into production at Cloud Next 2026 (April 22, 2026), now a Linux Foundation Agentic AI project, featuring signed Agent Cards with domain verification.
  • A2A complements MCP (Model Context Protocol) but does not replace it: MCP handles agent-to-tool communication, while A2A enables cross-organizational and cross-platform agent-to-agent interaction.
  • Istio and Linkerd remain the primary service mesh choices for mTLS, traffic shaping, and L7 policies; A2A operates above them as an application-layer protocol, not in parallel.
  • Integration works cleanly via the existing sidecar model but requires three key adaptations: an Agent Card service, signature validation, and a delegation audit trail.
  • For mid-sized DACH enterprises with an existing Istio setup, integration is realistically achievable within eight to twelve weeks, without replacing the current mesh.

What A2A Actually Is, Technically

What is the Agent2Agent Protocol (A2A)? A2A is an open application-level protocol built on HTTP/JSON for communication between autonomous AI agents from different vendors. It defines standards for Agent Cards (metadata, capabilities, endpoints), task delegation, result return, and signature verification. A2A 1.2 introduces cryptographically signed Agent Cards with domain binding, enabling a Salesforce agent to authenticate a Google Vertex agent without the two systems needing prior internal knowledge of each other. Since Cloud Next 2026, the protocol has been hosted under the Linux Foundation's Agentic AI Foundation.
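The Agent Card concept can be sketched in a few lines. The field names below are illustrative assumptions for this article, not the official A2A 1.2 schema, and `capability_match` is a hypothetical helper, not an SDK function.

```python
import json

# Illustrative Agent Card, loosely following the shape described above.
# All field names are assumptions for this sketch, not the official schema.
agent_card = {
    "did": "did:web:agents.example.com:billing-agent",  # hypothetical Agent-DID
    "name": "billing-agent",
    "endpoint": "https://agents.example.com/a2a/billing-agent",
    "capabilities": ["invoice.lookup", "invoice.dispute"],
    "signature": {
        "alg": "ES256",
        "keyRef": "https://example.com/.well-known/a2a/keys/2026-01",
        "value": "<base64url-signature-over-card>",  # elided
    },
}

def capability_match(card: dict, required: str) -> bool:
    """Check whether a discovered agent advertises a required capability."""
    return required in card.get("capabilities", [])

print(capability_match(agent_card, "invoice.lookup"))  # True
```

The point of the card is exactly this kind of lookup: a remote agent can decide whether to delegate a task based on published metadata alone, before any authenticated call is made.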

The key architectural feature: A2A operates at Layer 7, above the service mesh layer. A conventional Istio setup (mTLS, Envoy sidecar, control plane) remains fully intact. A2A leverages the TLS connection provided by the mesh, then adds its own signature chain on top. Anyone interpreting A2A as a competitor to service mesh technology fundamentally misunderstands the protocol. A2A is an application-layer abstraction for agent interoperability, not a transport-layer alternative.

A quick look at the A2A 1.2 release notes: signed Agent Cards now include an Agent-DID (Decentralized Identifier) linked to a domain. The signature chain is validated via DNS TXT records or a well-known endpoint. In practice, this means any organization deploying an A2A agent must publish a publicly accessible Agent Card, with its signature countersigned using a domain-specific key. Companies already running DNSSEC and a well-known endpoint likely meet these technical requirements as part of their existing infrastructure standards.
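The domain-binding idea can be approximated with standard-library tooling. The TXT record format (`a2a-card-sha256=...`) is our assumption for illustration only; the real mechanism defines its own record layout and uses asymmetric signatures rather than a bare fingerprint comparison.

```python
import hashlib
import json

def card_fingerprint(card: dict) -> str:
    """Canonical SHA-256 fingerprint of an Agent Card (sketch: sorted-key JSON)."""
    canonical = json.dumps(card, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

def verify_domain_binding(card: dict, txt_records: list[str]) -> bool:
    """Accept a card only if its fingerprint is anchored in the domain's
    DNS TXT records. The record format here is an assumption, not the spec."""
    expected = f"a2a-card-sha256={card_fingerprint(card)}"
    return expected in txt_records

card = {"did": "did:web:example.com:demo", "endpoint": "https://example.com/a2a"}
records = [f"a2a-card-sha256={card_fingerprint(card)}"]
print(verify_domain_binding(card, records))  # True
```

In production, the `txt_records` list would come from a DNSSEC-validated resolver or the domain's well-known endpoint rather than from local state.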

The Difference Between A2A and MCP, Overlooked in Most Architecture Reviews

Anthropic’s Model Context Protocol (MCP), established since 2024, defines the interface between a single agent and external tools or data sources. A2A, by contrast, governs communication between two agents-even across platform and organizational boundaries. A typical scenario: a Salesforce agent on Agentforce receives a task via A2A from a Vertex AI agent, queries an internal CRM tool using MCP, then returns the result via A2A. The two protocols are complementary. Designing one without the other leads to unnecessary integration overhead and cost.
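The division of labor described above can be sketched as a single handler: the A2A hop comes in, the MCP hop goes out. All function names below are placeholders, not the official SDK surfaces of either protocol.

```python
# Sketch of the complementary split: A2A for the agent-to-agent hop,
# MCP (or a direct API) for the agent-to-tool hop. Names are illustrative.

def call_mcp_tool(tool: str, args: dict) -> dict:
    """Stand-in for an MCP client call to an internal tool server."""
    return {"tool": tool, "args": args, "owner": "ACME GmbH"}

def handle_a2a_task(task: dict) -> dict:
    """Entry point for a task delegated by a remote agent via A2A."""
    account_id = task["params"]["account_id"]
    crm_record = call_mcp_tool("crm.lookup", {"account_id": account_id})  # MCP hop
    return {"task_id": task["id"], "status": "completed", "result": crm_record}  # A2A reply

incoming = {"id": "t-42", "params": {"account_id": "A-1001"}}
print(handle_a2a_task(incoming)["status"])  # completed
```

Designing only one of the two hops is exactly the integration-overhead trap the paragraph above warns about: the missing hop then gets reinvented ad hoc per agent.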

Three Key Figures for Context

150+
Organizations are running A2A in production (as of Cloud Next 2026, April 22, 2026), including Microsoft, AWS, Salesforce, SAP, and ServiceNow.
v1.2
A2A release under the Linux Foundation's Agentic AI Foundation, featuring signed Agent Cards and domain verification via DIDs (Decentralized Identifiers).
Layer 7
A2A is an application-layer protocol. Solutions like Istio and Linkerd remain active beneath it; A2A is not a service mesh replacement but a complementary layer.
Any good web architecture should be sketchable on a single Post-It note. If it can't, there's likely an unnecessary abstraction somewhere. A2A belongs in that sketch as a thin layer atop the mesh, not as a second mesh running alongside it.

Integration into existing service mesh: three architectural decisions

The practical question currently dominating agendas in DACH (Germany, Austria, Switzerland) architecture reviews is: how do you integrate A2A into an existing Istio or Linkerd setup without breaking the mesh design? Three specific decisions need to be made.

Decision 1: Separate Agent Card service or integration into existing discovery

A2A requires an accessible Agent Card endpoint per published agent. Since the 1.0 release, two patterns have emerged. First, a dedicated Agent Card service as a separate deployment behind the ingress, with its own caching and rate-limiting policy. Second, integrating the Agent Card into existing service discovery (Consul, Istio Workload Entries), where the card is carried as a metadata attribute. Our experience: the separate service is cleaner to operate and audit because it isolates access patterns. For greenfield implementations, we recommend the separate service; for brownfield environments with a strong Consul footprint, the metadata solution suffices.
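A dedicated Agent Card service (the first pattern) can start as small as this sketch using Python's standard-library HTTP server. The well-known path and the five-minute cache TTL are illustrative choices for this article, not protocol requirements.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Cards this service publishes; in practice backed by a store, not a literal.
CARDS = {
    "billing-agent": {"name": "billing-agent", "capabilities": ["invoice.lookup"]},
}

def lookup_card(path: str):
    """Resolve a path like /.well-known/a2a/cards/<agent> to (status, body)."""
    agent = path.rstrip("/").rsplit("/", 1)[-1]
    card = CARDS.get(agent)
    if card is None:
        return 404, b""
    return 200, json.dumps(card).encode()

class AgentCardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = lookup_card(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        # A short TTL keeps revocation latency bounded without hammering the service.
        self.send_header("Cache-Control", "public, max-age=300")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AgentCardHandler).serve_forever()
```

The isolation argument from the text shows up directly here: because the service does nothing else, its access logs and rate limits describe Agent Card traffic and nothing but Agent Card traffic.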

Decision 2: Signature validation in the sidecar or in the agent runtime

A2A 1.2 introduces cryptographically signed Agent Cards. Signature validation can occur either in the Envoy sidecar via a Lua filter or WASM plugin, or directly in the agent runtime through the official Agent SDK. For high-throughput setups (more than 200 agent calls per second per service), we recommend the sidecar variant, as it manages cache warm-up and revocation lists uniformly. For experimental and medium-sized setups, the runtime variant is simpler and faster to deploy. Important: those who choose sidecar validation must synchronize the Agent Card CRL (Certificate Revocation List) with the mesh update cycle. Otherwise, revoked cards enter the system with a delay, which corrupts audit trails.
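Runtime validation with a CRL freshness guard might look like the following sketch. The in-memory revocation set and the fingerprint scheme are our assumptions; the point is the staleness check, which enforces in code the synchronization requirement described above.

```python
import hashlib
import json
import time

# Revoked card fingerprints, synced from the CRL endpoint (assumed format).
REVOKED: set[str] = set()
CRL_SYNCED_AT = time.time()
CRL_MAX_AGE_S = 300  # must track the mesh update cycle

def fingerprint(card: dict) -> str:
    return hashlib.sha256(json.dumps(card, sort_keys=True).encode()).hexdigest()

def validate_card(card: dict) -> bool:
    """Reject revoked cards; fail closed if the local CRL copy is stale."""
    if time.time() - CRL_SYNCED_AT > CRL_MAX_AGE_S:
        raise RuntimeError("CRL stale: refusing to validate against outdated revocations")
    return fingerprint(card) not in REVOKED

card = {"did": "did:web:example.com:demo"}
print(validate_card(card))  # True until the fingerprint lands on the CRL
```

Failing closed on a stale CRL is the design choice that keeps the audit trail honest: a delayed revocation then shows up as an outage, not as a silently accepted revoked card.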

Decision 3: Delegation audit trail in the SIEM or in dedicated tooling

Agent-to-agent delegation creates an audit trail not fully visible in classic Istio access logs. For compliance requirements (DORA (Digital Operational Resilience Act), NIS2 (Network and Information Systems Directive), EU AI Act), the delegation trail is a separate audit artifact: Which agent delegated which task to which agent, with what justification, what result, what runtime. We see two patterns in practice. First, export to SIEM via a dedicated A2A event pipeline (Kafka topic plus parser). Second, a dedicated agent observability tool (Arize, Helicone, LangSmith). For regulated industries, SIEM export is typically mandatory, while the observability tool runs in parallel for performance analytics. Together, these realistically cost mid-sized companies €15,000 to €35,000 per year, depending on transaction volume.
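A delegation audit record for the SIEM pipeline could be shaped like this. The event schema is our sketch, not a standardized A2A event format; the fields mirror the questions the text lists (which agent, which task, what justification, what result, what runtime).

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DelegationEvent:
    """One audit record per A2A delegation hop (illustrative schema)."""
    delegating_agent: str
    receiving_agent: str
    task_id: str
    justification: str
    result_status: str   # e.g. "completed", "failed", "rejected"
    runtime_ms: int
    ts: float

def to_siem_line(event: DelegationEvent) -> str:
    """Serialize as one JSON line, the usual shape for a Kafka-to-SIEM parser."""
    return json.dumps(asdict(event), sort_keys=True)

evt = DelegationEvent("vertex-assistant", "sf-billing-agent", "t-42",
                      "invoice dispute triage", "completed", 840, time.time())
print(to_siem_line(evt))
```

Keeping each hop as a self-contained line means the SIEM can reconstruct full delegation chains by joining on `task_id`, without any state in the parser.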

What this means for the architecture roadmap

For DACH architects, the three decisions result in a concrete eight-to-twelve-week roadmap. Weeks one to three: Inventory of existing agents and identification of first integration use cases (typically an assistant agent plus a system agent). Weeks four to six: Agent-Card Service as separate deployment, DNS-TXT record setup for signature chain, SDK integration into the target agent. Weeks seven to nine: Enable sidecar validation or runtime validation, load tests, observability pipeline. Weeks ten to twelve: SIEM export, audit trail review, productive release. Those who finish earlier either have a very simple use case or have made compromises in compliance documentation.

What actually changes in day-to-day mesh operations

The most interesting operational question for DACH cloud architects is not “do I need A2A?”, but “what changes about the operational reality of my mesh when I introduce A2A?”. Three observations from the first integration projects. First, control plane latency increases slightly (three to seven percent) because signature validation produces additional cache writes. Second, Envoy WASM plugins require more review than classic Lua filters because A2A semantics are complex and bug fixes can take two to three days. Third, the observability team receives a new event class (Agent-Delegation-Events) that must be integrated into existing dashboards. Those who don’t plan proactively for this will end up with two parallel observability surfaces.

A concrete recommendation from the reviews: run the first A2A production deployment in shadow mode for at least three weeks before real traffic is passed through. Shadow mode means real agent calls are processed in parallel, but the responses are not forwarded to the application logic. This way, the team detects regressions in response format, signature handling, and latency without risking business impact. These three weeks pay off in almost all client projects because regressions become visible early, before they surface during production acceptance.
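Shadow mode reduces, in code, to one wrapper: the live response reaches the caller, the A2A response is only recorded and compared. Handler names are illustrative stand-ins for the existing path and the new A2A path.

```python
# Shadow-mode sketch: the A2A handler runs on real traffic, but its
# response is only logged and compared, never returned to the caller.

MISMATCHES: list[dict] = []

def legacy_handler(task: dict) -> dict:
    """Existing production path (stand-in)."""
    return {"task_id": task["id"], "status": "completed"}

def a2a_handler(task: dict) -> dict:
    """New A2A path under evaluation (stand-in)."""
    return {"task_id": task["id"], "status": "completed"}

def handle_with_shadow(task: dict) -> dict:
    live = legacy_handler(task)    # this response reaches the caller
    shadow = a2a_handler(task)     # this one is recorded only
    if shadow != live:
        MISMATCHES.append({"task": task["id"], "live": live, "shadow": shadow})
    return live

print(handle_with_shadow({"id": "t-1"})["status"])  # completed
```

A growing `MISMATCHES` list during the three-week window is exactly the early regression signal the recommendation is after; an empty one is the acceptance criterion for cutting traffic over.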

An honest look at the maturity level

Although A2A 1.2 is in production, the protocol is not finished. Components we expect within the next 18 months: standardized rate limits for cross-org calls, a rollback mechanism for failed delegations, and clearer governance rules for Agent Card revocation. Those who introduce A2A today should plan to close these gaps with additional components of their own. The good news: the Linux Foundation governance has published a clear roadmap, and the active members (Google, Microsoft, AWS, Salesforce, SAP, ServiceNow) have committed to quarterly roadmap updates. That is more governance discipline than most other AI standards showed in 2025.

The maturity check in three questions before introduction: Is your first use case an intra-org scenario (two of your own agents) or already cross-org (partner agent)? How much regulatory pressure affects delegation (banks, insurers, public sector are significantly more sensitive here than industry)? Is the existing mesh operations team ready to operate sidecar plugins and manage the A2A certificate lifecycle? Those who can clearly answer all three questions have the foundation for a clean introduction. Those who hesitate on any of the questions should start with a proof-of-concept rather than a production rollout.

Three misconceptions that regularly appear in the first projects

The first misconception concerns the separation from MCP: many teams assume that A2A also handles tool integration. This is technically incorrect. A2A exclusively describes agent-to-agent communication; every tool integration continues to run via MCP or direct API calls. Those who don't draw this separation clearly in architecture discussions will, within three months, end up with two overlapping protocol layers, each covering only half of the interactions.

The second misconception concerns performance costs: many teams calculate a linear surcharge per agent call and size their infrastructure accordingly. In practice, the overhead is sublinear because signature caching and connection pooling create scale effects. Those who size their infrastructure for the linear worst case from the start pay 20 to 30 percent too much for the first six months.
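The sublinear effect is easy to demonstrate: with a validated-card cache, the expensive signature check runs once per card, not once per call. The counter below stands in for the cryptographic work; cache size and key format are illustrative.

```python
from functools import lru_cache

VALIDATIONS = {"count": 0}  # counts how often the "expensive" check actually runs

@lru_cache(maxsize=1024)
def validate_card_cached(card_fingerprint: str) -> bool:
    """Cached validation: the body (standing in for signature verification
    and CRL lookup) executes only on a cache miss."""
    VALIDATIONS["count"] += 1
    return True

for _ in range(1000):  # 1000 calls to the same partner agent
    validate_card_cached("sha256:abc")

print(VALIDATIONS["count"])  # 1 — one validation amortized over 1000 calls
```

This is why sizing for a linear per-call surcharge overshoots: steady-state traffic to a stable set of partner agents is almost entirely cache hits.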

The third misconception concerns governance: many organizations believe that Agent Card signing automatically provides compliance. It provides the technical foundation, but organizational approval (who is allowed to sign Agent Cards? what processes exist for card revocation?) must be built in parallel. Without these processes, the signature is a stamp that nobody monitors. For DACH organizations in regulated industries, we recommend defining the governance layer before the technical introduction, so that the first productive release doesn't become a bottleneck during audits.

Frequently Asked Questions

Does A2A require Istio or Linkerd?

No. A2A is mesh-agnostic and runs on any HTTP-based infrastructure. A mesh simplifies the operational view (mTLS, observability, policy enforcement) and reduces integration effort, but is not a requirement. For cloud-native Kubernetes setups, we recommend using a mesh, while for legacy enterprises with AKS/EKS without a mesh, A2A works equally well via standard ingress.

What is the performance cost of A2A signature validation in the sidecar?

With the Envoy WASM plugin and good caching, we expect additional latency of 2 to 5 milliseconds per call, under 1 millisecond with a warm cache. This is not problematic in almost all enterprise scenarios. For real-time trading or low-latency gaming, runtime validation would be the better choice as it only activates when needed.

How does A2A position itself regarding the EU AI Act?

The EU AI Act requires transparency and traceability for autonomous systems. A2A provides two of four required components with signed agent cards and delegation audit trails. The missing parts (model identification and output documentation) are at the MCP level and with the application agent. Those who properly implement A2A plus MCP cover approximately 70% of the AI Act documentation requirements.

What does a realistic A2A rollout cost?

For a mid-sized context with an existing Istio setup, we estimate €40,000 to €90,000 for the first project, depending on use case complexity and the number of agents. Cross-organizational scenarios with multiple partners typically double this amount due to additional governance and contractual requirements.

What risks does a cross-organizational A2A delegation have?

The three main risks: prompt injection via the external agent, data leakage through the task payload, and trust escalation in the delegation chain. The countermeasures: input sanitization before agent calls, strict data classification policy for payload content, and a maximum delegation depth in the agent runtime. All three are documented in the official A2A reference implementation and should not be overlooked in your own implementation.
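The third countermeasure, a maximum delegation depth, is a small guard in the agent runtime. The `delegation_depth` field name is an assumption about the task payload for this sketch, not the official wire format.

```python
# Guard against trust escalation in the delegation chain: refuse to
# forward a task once it has been re-delegated too many times.

MAX_DELEGATION_DEPTH = 3

def delegate(task: dict, to_agent: str) -> dict:
    depth = task.get("delegation_depth", 0)
    if depth >= MAX_DELEGATION_DEPTH:
        return {"status": "rejected", "reason": "max delegation depth exceeded"}
    forwarded = {**task, "delegation_depth": depth + 1, "delegated_to": to_agent}
    return {"status": "forwarded", "task": forwarded}

hop1 = delegate({"id": "t-9"}, "partner-agent")
print(hop1["status"])  # forwarded
deep = delegate({"id": "t-9", "delegation_depth": 3}, "partner-agent")
print(deep["status"])  # rejected
```

Incrementing the counter on every hop, rather than trusting the value a partner sends, is the part that matters: a cross-org chain cannot escalate further than your own runtime allows.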

Source title image: Pexels / Christina Morillo (px:1181675)


A magazine by Evernine Media GmbH