November 15, 2025 · 10 min read · AI & Technology

Gemini 3 and the Agentic AI Foundation: The AI Stack Consolidates

Google releases Gemini 3 — its most powerful model yet — while the Linux Foundation launches the Agentic AI Infrastructure Foundation (AAIF) to standardise MCP, Goose, and AGENTS.md. Google's Antigravity semi-autonomous coding agent enters the market. November 2025 marked the moment when agentic AI infrastructure moved from fragmented experiments to coordinated industry standards.

Google, Gemini 3, Linux Foundation, AAIF, MCP, Agentic AI, AI Standards, Goose, Open Source
Giovanni van Dam

IT & Business Development Consultant

Gemini 3: Google's Most Powerful Model

Google released Gemini 3 in November 2025, and it represented a generational leap in capability. Building on the architectural innovations of Gemini 2.5 Pro, Gemini 3 extended the performance frontier across reasoning, coding, multimodal understanding, and — critically — agentic capability.

Gemini 3 was designed from the ground up for agent workflows: it maintained state across long interaction chains, called tools reliably, recovered gracefully from errors, and could orchestrate multi-step processes with minimal human guidance. Google positioned it not as a chat model that could also act as an agent, but as an agent model that could also chat — a subtle but important distinction in design philosophy.
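
The agent-first behaviours described above — persistent state, reliable tool calls, error recovery, multi-step orchestration — can be illustrated with a minimal control loop. This is an illustrative sketch only: the tool registry, the `call_model` stand-in, and the retry policy are assumptions for exposition, not Gemini 3's actual API.

```python
# Hypothetical tool registry: maps tool names to plain Python callables.
TOOLS = {
    "get_invoice_total": lambda invoice_id: {"invoice_id": invoice_id, "total": 1250.0},
}

def call_model(state):
    """Stand-in for a model call. A real agent would send `state` to an
    LLM API and parse the returned action; here we return a canned plan."""
    if not state["results"]:
        return {"action": "tool", "name": "get_invoice_total",
                "args": {"invoice_id": "INV-42"}}
    return {"action": "finish", "answer": state["results"][-1]}

def run_agent(goal, max_steps=5, max_retries=2):
    # State persists across the whole interaction chain.
    state = {"goal": goal, "results": []}
    for _ in range(max_steps):
        step = call_model(state)
        if step["action"] == "finish":
            return step["answer"]
        # Tool call with simple error recovery: retry, then record the failure.
        for attempt in range(max_retries + 1):
            try:
                state["results"].append(TOOLS[step["name"]](**step["args"]))
                break
            except Exception as exc:
                if attempt == max_retries:
                    state["results"].append({"error": str(exc)})
    return state["results"]

print(run_agent("Look up the total of invoice INV-42"))
```

The loop structure (plan, act, observe, recover) is what distinguishes an agent model from a chat model: the model is called repeatedly with accumulated state rather than once per user turn.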

The model launched with deep integration into Google's ecosystem: Workspace (Docs, Sheets, Gmail, Calendar), Cloud (BigQuery, Vertex AI, Cloud Functions), and Android. For organisations already invested in Google's platform, Gemini 3 offered a compelling path to agentic AI with minimal integration overhead.

The Agentic AI Infrastructure Foundation: MCP, Goose, and AGENTS.md

In November 2025, the Linux Foundation launched the Agentic AI Infrastructure Foundation (AAIF), bringing together the key open standards and tools for agentic AI under a neutral governance umbrella. The founding projects included:

  • MCP (Model Context Protocol): Originally created by Anthropic, now contributed to AAIF as a vendor-neutral standard for agent-tool communication.
  • Goose: An open-source framework for building and deploying autonomous AI agents, providing the runtime environment for agent execution.
  • AGENTS.md: A standardisation effort for describing agent capabilities, permissions, and interfaces in a machine-readable format — essentially a manifest file that tells other agents and systems what an agent can do.
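
As a concrete illustration, an AGENTS.md file is plain Markdown. The section names and contents below are purely illustrative — the format does not prescribe a fixed schema:

```markdown
# AGENTS.md — illustrative example (section names are not a fixed schema)

## Capabilities
- Read and summarise invoices from the billing database
- Draft (but not send) customer emails

## Permissions
- Read-only access to `billing-db` via the `invoice-lookup` MCP server
- No access to payment execution tools

## Interfaces
- Tool calls: MCP (stdio transport)
- Escalation: hand off to a human reviewer for amounts over EUR 10,000
```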

The move to Linux Foundation governance was significant. It transformed MCP from an Anthropic-led initiative into an industry standard with the same governance model as Linux, Kubernetes, and Node.js. This gave enterprise adopters confidence that the standard would evolve in a vendor-neutral direction and would not be subject to the strategic interests of any single company.
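
Under the hood, MCP frames agent-tool communication as JSON-RPC 2.0 messages. The sketch below constructs a `tools/call` request by hand to show the wire shape; real clients and servers would use an MCP SDK rather than raw dictionaries, and the tool name and arguments shown are made up.

```python
import json

def make_tools_call(request_id, tool_name, arguments):
    """Build an MCP `tools/call` request (MCP uses JSON-RPC 2.0 framing)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and arguments, for illustration only.
request = make_tools_call(1, "query_bigquery", {"sql": "SELECT 1"})
print(json.dumps(request, indent=2))
```

Because the framing is ordinary JSON-RPC, any language with a JSON library can speak the protocol — one reason a neutral governance home matters more than any single SDK.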

Google Antigravity: Semi-Autonomous Coding at Scale

Google entered the AI coding agent market with Antigravity, a semi-autonomous coding tool designed for large-scale enterprise codebases. Where Cursor and Claude Code focused on individual developer productivity, Antigravity targeted the organisational challenges of software engineering: migrating codebases, enforcing coding standards, resolving technical debt, and executing large-scale refactoring across thousands of files.

Antigravity integrated with Google Cloud's development infrastructure and used Gemini 3 as its intelligence layer. It could analyse an entire codebase, generate a migration plan, execute the changes, run the test suite, and present a comprehensive pull request for human review. For enterprises managing legacy codebases with millions of lines of code, this was a tool that could compress months of migration work into days.
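
The analyse-plan-apply-test-review flow described above can be sketched as a generic pipeline. This is not Antigravity's actual API — every function name below is a hypothetical stand-in for one stage of such a workflow:

```python
# Illustrative pipeline only — not Antigravity's actual API. It mirrors the
# described flow: analyse/plan -> apply changes -> run tests -> open a PR.
def run_migration(files, plan_fn, apply_fn, test_fn, open_pr_fn):
    plan = plan_fn(files)                  # model-generated migration plan
    changed = [apply_fn(f, plan) for f in files]
    if not test_fn(changed):               # gate on the test suite
        raise RuntimeError("test suite failed; no PR opened")
    return open_pr_fn(changed, plan)       # human review happens on the PR

# Toy stand-ins to show the flow end to end.
result = run_migration(
    files=["a.py", "b.py"],
    plan_fn=lambda fs: f"rename deprecated API in {len(fs)} files",
    apply_fn=lambda f, plan: f + " (migrated)",
    test_fn=lambda changed: len(changed) == 2,
    open_pr_fn=lambda changed, plan: {"pr": 1, "files": changed, "plan": plan},
)
print(result["plan"])
```

The key design point is the gate before the pull request: changes that fail the test suite never reach human review, which keeps the human in the loop at exactly one well-defined checkpoint.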

The entry of Google into the AI coding agent market alongside Anthropic (Claude Code), Cursor, and GitHub (Copilot) intensified competition in a segment that was becoming central to developer workflows. For engineering leaders, the choice of AI coding tool was becoming as strategically important as the choice of cloud provider.

The Agentic AI Stack Consolidates

November 2025 marked a turning point in the maturity of agentic AI infrastructure. The stack was consolidating around clear standards and components:

  • Intelligence layer: Gemini 3, Claude Opus/Sonnet 4.x, GPT models — the foundation models that power agent reasoning.
  • Tool integration: MCP (now under Linux Foundation governance) — the standard for connecting agents to data and tools.
  • Agent communication: A2A (Google) — the standard for inter-agent discovery and collaboration.
  • Agent runtime: Goose, LangGraph, Amazon Bedrock Agents — the frameworks that manage agent execution.
  • Agent description: AGENTS.md — the manifest format for declaring agent capabilities and permissions.
  • Platform integration: Salesforce Agentforce, Microsoft Copilot, Google Workspace — enterprise platforms with embedded agent capabilities.

This consolidation was healthy for the ecosystem. Fragmentation is the enemy of enterprise adoption; standards reduce integration risk and enable interoperability. The agentic AI infrastructure of November 2025 was still maturing, but it was no longer a collection of incompatible experiments — it was an emerging, standardised stack.

Assessing Your Organisation's Agentic AI Readiness

With the agentic AI stack consolidating, enterprise technology leaders face a practical question: how ready is your organisation to deploy agents? The readiness assessment spans five dimensions:

  • Data readiness: Are your data sources accessible via APIs or MCP servers? Agents need structured, reliable data access to function effectively.
  • Process maturity: Are your workflows documented and standardised enough for an agent to follow? Agents automate processes — they do not invent them.
  • Security and governance: Do you have authentication, authorisation, and audit infrastructure that can extend to non-human (agent) identities?
  • Change management: Is your organisation culturally prepared to delegate tasks to AI agents? The human element of agentic AI adoption is often the hardest.
  • Technical capability: Does your team have the skills to build, deploy, and maintain agent systems? If not, what upskilling or partnership is needed?

Most organisations score well on one or two dimensions and poorly on the rest. The organisations that deploy agents successfully are those that invest across all five — technology, data, process, governance, and people — simultaneously. Assess your agentic AI readiness with an embedded technology partner.
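
As a rough self-assessment aid, the five dimensions above can be scored on a simple 1-5 scale. The dimension names mirror the list; the scale and the idea of reporting the weakest dimension (rather than only the average) are illustrative choices, not an established framework.

```python
# Score each dimension 1 (absent) to 5 (mature). Scale is illustrative.
DIMENSIONS = ["data", "process", "security", "change_management", "technical"]

def assess_readiness(scores):
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    # Agentic deployments tend to fail at the weakest dimension,
    # so surface it alongside the average.
    weakest = min(scores, key=scores.get)
    return {"average": sum(scores.values()) / len(scores), "weakest": weakest}

print(assess_readiness({
    "data": 4, "process": 2, "security": 3,
    "change_management": 2, "technical": 4,
}))
```

An organisation averaging 3.0 but scoring 2 on process maturity should treat process documentation, not model selection, as its first investment.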


Giovanni van Dam

MBA-qualified entrepreneur in IT & business development. I help founder-led businesses scale through technology via GVDworks and build AI-powered SaaS at Veldspark Labs.