Agent-to-Agent Communication: Google's A2A Protocol and Agentic Infrastructure
Google launches the Agent2Agent (A2A) open protocol, creating a standard for AI agents to discover and communicate with each other. Meta releases Llama 4 Scout and Maverick, Amazon unveils Nova Act for browser automation, and Google reveals its Ironwood TPU. April 2025 was the month agentic AI got its infrastructure layer.

Giovanni van Dam
IT & Business Development Consultant
Google's A2A: A Universal Language for AI Agents
On 9 April 2025, Google announced the Agent2Agent (A2A) protocol — an open standard that allows AI agents built by different vendors, on different frameworks, to discover each other, negotiate capabilities, and collaborate on tasks. The protocol launched with support from over 50 technology partners, including Salesforce, SAP, ServiceNow, and MongoDB.
The core problem A2A solves is interoperability. As businesses deploy specialised AI agents for different functions — a customer service agent, a procurement agent, a data analysis agent — those agents need to communicate. Without a standard protocol, each integration becomes a bespoke engineering project. A2A provides the equivalent of HTTP for agent communication: a shared language that allows any compliant agent to interact with any other.
The protocol defines three key capabilities: agent discovery (finding agents and understanding what they can do), task management (delegating work and tracking progress), and secure communication (authenticating agents and encrypting interactions). It is designed to complement Anthropic's Model Context Protocol (MCP), which handles how agents connect to tools and data sources.
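Discovery is the piece that makes the rest possible. In the published A2A spec, an agent advertises what it can do via a machine-readable "Agent Card"; the sketch below follows that idea in spirit, but the agents, URLs, and skill names are hypothetical, and a real implementation would fetch cards over HTTPS rather than hold them in memory:

```python
# Sketch of A2A-style agent discovery. The Agent Card fields (name, url,
# skills) mirror the spirit of the published spec, but these concrete
# agents and skill ids are illustrative assumptions.

PROCUREMENT_CARD = {
    "name": "procurement-agent",
    "url": "https://agents.example.com/procurement",
    "skills": [
        {"id": "create-purchase-order", "description": "Raise a PO with an approved supplier"},
        {"id": "check-budget", "description": "Verify spend against department budget"},
    ],
}

ANALYTICS_CARD = {
    "name": "analytics-agent",
    "url": "https://agents.example.com/analytics",
    "skills": [{"id": "spend-report", "description": "Summarise spend by category"}],
}

def find_agents_with_skill(cards: list[dict], skill_id: str) -> list[str]:
    """Return the URLs of agents advertising a given skill id."""
    return [
        card["url"]
        for card in cards
        if any(skill["id"] == skill_id for skill in card["skills"])
    ]
```

Once a compliant agent can answer "who can check a budget?" with a URL, task delegation and secure communication take over from there.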
Llama 4 Scout and Maverick: Meta's Open-Weight Offensive
Meta released Llama 4 Scout (17 billion active parameters, 109 billion total) and Llama 4 Maverick (17 billion active, 400 billion total) on 5 April 2025. Both used a mixture-of-experts architecture, following DeepSeek's demonstration that this approach could dramatically reduce inference costs while maintaining quality.
Scout featured a 10-million-token context window — the longest of any production model at launch — enabling use cases like entire-codebase analysis, full-book comprehension, and multi-document legal review. Maverick focused on conversational quality and multimodal reasoning, scoring competitively against Gemini 2.5 Pro and GPT-4o on standard benchmarks.
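To give a sense of scale, a rough capacity estimate for a 10-million-token window — the tokens-per-line and tokens-per-book ratios below are common rules of thumb, not Meta's figures:

```python
# Rough sizing of what a 10M-token context window can hold.
# Both ratios are assumed rules of thumb, not vendor measurements.

TOKENS_PER_LOC = 10        # assumed average tokens per line of source code
TOKENS_PER_BOOK = 120_000  # assumed tokens in a ~90k-word book

def fits_in_context(context_tokens: int) -> dict:
    """Estimate how much code or prose fits in a given context window."""
    return {
        "lines_of_code": context_tokens // TOKENS_PER_LOC,
        "books": context_tokens // TOKENS_PER_BOOK,
    }

estimate = fits_in_context(10_000_000)
# On these assumptions: roughly a million lines of code, or dozens of books.
```

Under those assumptions, a mid-sized company's entire codebase really does fit in a single prompt, which is what makes the whole-codebase and multi-document use cases plausible.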
Meta's continued commitment to open weights applied further downward pressure on API pricing across the industry. Every open-weight release makes it harder for closed-model providers to justify premium pricing for comparable capabilities, which benefits enterprise buyers regardless of which model they ultimately deploy.
Amazon Nova Act: Browser Automation at Scale
Amazon entered the agentic AI race with Nova Act, a specialised model and SDK designed for browser-based agent actions. Where OpenAI's Operator was a consumer-facing product, Nova Act targeted developers building automated web workflows: data extraction, form submission, multi-step web processes, and end-to-end testing.
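The core developer-facing idea is decomposing a business workflow into small, checkable browser instructions. The sketch below shows that pattern only; `BrowserAgent` is a stand-in, not Amazon's actual SDK, and a real agent would drive a live browser rather than log strings:

```python
# Sketch of the multi-step browser-workflow pattern that Nova Act-style
# SDKs target. `BrowserAgent` is a hypothetical stand-in: a real SDK
# would translate each instruction into live browser actions.

class BrowserAgent:
    def __init__(self) -> None:
        self.log: list[str] = []

    def act(self, instruction: str) -> str:
        # A real agent would render the page and execute the instruction;
        # this stub records it so the workflow shape stays visible.
        self.log.append(instruction)
        return f"done: {instruction}"

def submit_expense_report(agent: BrowserAgent, amount: str) -> list[str]:
    """Decompose one business workflow into small, verifiable browser steps."""
    steps = [
        "open the expense portal",
        f"fill the amount field with {amount}",
        "attach the receipt PDF",
        "click submit and wait for confirmation",
    ]
    return [agent.act(step) for step in steps]
```

Keeping each step small is the point: short, specific instructions are easier for a browser-automation model to execute reliably and for a developer to test in isolation.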
Nova Act was significant because it came from Amazon — the company with the largest cloud infrastructure footprint and the deepest integration with enterprise workflows through AWS. The combination of a browser automation model with AWS's existing suite of services (Lambda, Step Functions, Bedrock) created a compelling platform for building production-grade agentic workflows.
For businesses already invested in AWS, Nova Act reduced the barrier to adopting browser-based automation. For the broader market, it validated that agentic browser automation was not a niche experiment but a core infrastructure capability that the largest cloud providers were investing in.
Google Ironwood TPU: Custom Silicon for the Agentic Era
Google Cloud Next 2025 also introduced the Ironwood TPU, Google's seventh-generation tensor processing unit. Designed specifically for large-scale inference workloads rather than training, Ironwood reflected a strategic bet that the next phase of AI would be dominated by inference — running models at scale — rather than training new ones.
This aligned with the broader industry trend towards agentic AI. Agents run inference continuously: every decision, every tool call, every communication with another agent requires a model inference. An infrastructure optimised for high-throughput, low-latency inference is therefore purpose-built for the agentic era.
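A back-of-envelope estimate makes the volume concrete — all numbers below are illustrative assumptions, not vendor figures:

```python
# Back-of-envelope estimate of inference volume for an agent fleet.
# Every number here is an illustrative assumption.

def daily_inferences(agents: int, tasks_per_agent: int, calls_per_task: int) -> int:
    """Each task step (decision, tool call, agent-to-agent message) is one model call."""
    return agents * tasks_per_agent * calls_per_task

# 200 agents, 50 tasks per day each, ~12 model calls per task
total = daily_inferences(agents=200, tasks_per_agent=50, calls_per_task=12)
```

Even this modest fleet generates 120,000 inferences a day, which is why per-call latency and cost, not training throughput, become the dominant infrastructure concern.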
For enterprise technology leaders, the Ironwood announcement reinforced the importance of evaluating AI infrastructure holistically. The choice of model, cloud provider, and hardware platform are increasingly interdependent. Google's Gemini models running on Ironwood TPUs on Google Cloud will have inherent latency and cost advantages — a form of vertical integration that competitors will need to match.
Building Your Agentic Infrastructure Stack
April 2025 delivered the foundational building blocks for agentic AI in the enterprise. The stack is now taking shape:
- Agent frameworks: LangGraph, CrewAI, AutoGen, Amazon Bedrock Agents
- Tool integration: Model Context Protocol (MCP) by Anthropic
- Agent communication: Agent2Agent (A2A) protocol by Google
- Browser automation: OpenAI Operator, Amazon Nova Act
- Inference infrastructure: Nvidia GPUs, Google Ironwood TPUs, custom ASIC alternatives
The businesses that will capture the most value from agentic AI are those that begin building this infrastructure now — not waiting for the standards to fully mature. The protocols will evolve, but the architectural patterns are clear. Start with a single high-value workflow, instrument it with agents, connect it via MCP and A2A, and expand from there.
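That architectural pattern — one workflow agent calling local tools through an MCP-style layer and delegating to peer agents through an A2A-style layer — can be sketched in a few lines. Every name and handler below is a hypothetical stub, not a real MCP or A2A client:

```python
# Minimal sketch of the orchestration pattern described above: a workflow
# agent calls local tools (the MCP layer) and delegates to peer agents
# (the A2A layer). All handlers are hypothetical stand-ins.

def fetch_invoice(invoice_id: str) -> dict:
    """Stub for an MCP-style tool that reads from a finance system."""
    return {"id": invoice_id, "amount": 1200}

def procurement_agent(task: dict) -> str:
    """Stub for an A2A peer agent that handles purchase orders."""
    return f"PO raised for invoice {task['id']}"

TOOLS = {"fetch_invoice": fetch_invoice}     # MCP-style tool registry
PEERS = {"procurement": procurement_agent}   # A2A-style agent registry

def run_workflow(invoice_id: str) -> str:
    invoice = TOOLS["fetch_invoice"](invoice_id)  # tool call via the MCP layer
    if invoice["amount"] > 1000:                  # business rule for this workflow
        return PEERS["procurement"](invoice)      # delegate via the A2A layer
    return "auto-approved"
```

Swapping the stubs for real MCP servers and A2A endpoints changes the plumbing, not the shape: the workflow logic stays a thin layer over two registries.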
Explore how embedded technology leadership can help you build agentic infrastructure that scales.
Related Articles
Gemini 2.5 Pro Tops Every Leaderboard: Google Rewrites the AI Pecking Order
Google's Gemini 2.5 Pro claims the #1 position on LMArena by a wide margin, demonstrating native multimodal reasoning that competitors cannot match. Meanwhile, ChatGPT's image generation feature attracts over 1 million users per hour. March 2025 proved that the AI race is far from settled — and that Google is very much in it.
Claude Opus 4: World's Best Coding Model, and Shopify's AI Shopping Agents
Anthropic releases Claude Opus 4 with a 72.5% SWE-bench score — the highest ever measured — while Claude Sonnet 4 matches it at one-fifth the price. Shopify launches its AI-powered product Catalog and voice Sidekick, signalling that agentic commerce is no longer theoretical. May 2025 was the month AI became the best programmer in the room and started selling things.

Giovanni van Dam
MBA-qualified entrepreneur in IT & business development. I help founder-led businesses scale through technology via GVDworks and build AI-powered SaaS at Veldspark Labs.