WebMCP: when your website becomes actionable by AI agents

Google, Microsoft, and the W3C are pushing WebMCP, a standard that lets websites expose their features as callable tools for AI agents. After citation comes action — here's why this protocol changes the game for GEO.

In February 2026, Google shipped WebMCP in a Chrome preview. Behind the acronym sits a W3C standard, co-led with Microsoft, that turns every website into an API callable by AI agents. After GEO (being cited), welcome to the era where you also need to be actionable.


From "answering humans" to "serving agents"

Since 2024, the GEO (Generative Engine Optimization) conversation has focused on one question: how do you get ChatGPT, Perplexity, or Gemini to cite your site in their answers? Statistics, citations, clear structure, earned media — the playbook is now well documented.

WebMCP shifts the problem one level up. When a user tells an AI agent "book me a flight from Paris to Lisbon tomorrow morning on site X" or "buy me those sneakers," citation is no longer the goal. Execution is. But today, agents trying to interact with a "normal" site have to simulate a human: parsing the DOM, clicking visual elements, guessing what each button does. It's slow, unreliable, and breaks at the slightest UI change.

WebMCP takes a different route: let the website declare itself to agents, as a set of typed tools.


What is WebMCP, concretely?

WebMCP (Web Model Context Protocol) is a W3C standard in progress that lets a web page expose its features — submitting a form, running a search, adding a product to a cart, filing a support ticket — as callable tools for an AI agent, directly from the browser.

The project started in summer 2025 as MCP-B ("MCP for the Browser"), created by a former Amazon developer. It is now led by Google (Chrome team) and Microsoft (Edge team), under the WebMCP banner, and shepherded at the W3C. A first draft specification was published in early 2026, and Chrome opened an Early Preview Program for developers.

The idea inherits from Anthropic's Model Context Protocol (2024), which standardizes how an LLM calls external tools. WebMCP is its natural extension to the browser: the web page itself becomes an MCP server, and tools run client-side, inside the user's tab.


Two APIs: declarative and imperative

The standard offers two ways to expose capabilities.

1. The declarative API — for HTML forms

Just add attributes to an existing <form>. No JavaScript required.

<form action="/search" toolname="search_flights"
      tooldescription="Search available flights between two cities">
  <input name="from" toolparamtitle="Departure city"
         toolparamdescription="IATA code or city name" />
  <input name="to" toolparamtitle="Arrival city"
         toolparamdescription="IATA code or city name" />
  <input type="date" name="date" toolparamtitle="Departure date" />
  <button type="submit">Search</button>
</form>

To an agent, this form is no longer a set of pixels to interpret: it's a search_flights tool with three documented parameters.

2. The imperative API — for everything else

For more dynamic interactions (actions triggered inside an SPA, filtering, cart updates…), developers use a new JavaScript interface:

navigator.modelContext.registerTool({
  name: "add_to_cart",
  description: "Add a product to the user's shopping cart",
  schema: {
    productId: { type: "string", description: "Product SKU" },
    quantity:  { type: "number", description: "Number of units" }
  },
  async execute({ productId, quantity }) {
    await cart.add(productId, quantity);
    return { ok: true, cartSize: cart.size };
  }
});

The W3C draft defines three main methods on window.navigator.modelContext:

  • provideContext(...) — full update (handy for SPAs)
  • registerTool(...) — add an individual tool
  • unregisterTool(...) — remove a tool

Functions can be asynchronous and delegated to web workers.
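For an SPA, provideContext(...) is the natural fit: on each client-side navigation, the page replaces its whole tool set so the agent never sees stale actions from a previous view. Here is a minimal sketch of that pattern — buildToolsForRoute, the tool names, and the schema shape are illustrative, and the method names follow the early draft, which may still change:

```javascript
// Returns the tool set relevant to the current SPA route.
// Tool shapes mirror the article's registerTool example; they are
// illustrative, not normative.
function buildToolsForRoute(route) {
  const tools = [{
    name: "site_search",
    description: "Search the whole site",
    schema: { query: { type: "string", description: "Search terms" } },
    async execute({ query }) { return { results: [] }; } // placeholder
  }];
  if (route === "/cart") {
    tools.push({
      name: "update_quantity",
      description: "Change the quantity of a cart line item",
      schema: {
        productId: { type: "string", description: "Product SKU" },
        quantity:  { type: "number", description: "New quantity" }
      },
      async execute({ productId, quantity }) { return { ok: true }; }
    });
  }
  return tools;
}

// On navigation, swap the full tool set. Guarded, because the API
// only exists in browsers shipping the preview.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.provideContext({
    tools: buildToolsForRoute(location.pathname)
  });
}
```

Calling provideContext with a fresh list on every route change is simpler and safer than trying to diff tools with registerTool/unregisterTool by hand.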


The three use cases Google highlights

In its Early Preview Program announcement, the Chrome team calls out three priority areas:

  1. Customer support — the agent fills in a ticket with technical context (browser, current URL, error logs) without the user typing it all over again.
  2. E-commerce — the agent finds a product, configures options (size, color, customization), adds to cart, and walks through checkout.
  3. Travel — the agent searches for flights, filters results, books, all while talking to the user instead of clicking for them.

These are not the only use cases — any interface built around forms or structured actions is a candidate.
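The customer-support scenario illustrates what client-side tools buy you: the page already knows the URL, the user agent, and recent errors, so the tool can attach them without the user typing anything. The sketch below is entirely illustrative — the tool name, the /api/tickets endpoint, and the payload shape are assumptions, not part of the spec:

```javascript
// Capture recent JS errors so the ticket tool can attach them.
const recentErrors = [];
if (typeof window !== "undefined") {
  window.addEventListener("error", (e) => recentErrors.push(e.message));
}

// Pure helper: build the enriched ticket payload from the summary
// and the environment snapshot.
function buildTicketPayload(summary, env) {
  return {
    summary,
    pageUrl: env.url,
    userAgent: env.userAgent,
    errors: env.errors.slice(-5) // keep only the last five errors
  };
}

if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.registerTool({
    name: "file_support_ticket", // illustrative name
    description: "File a support ticket enriched with technical context",
    schema: {
      summary: { type: "string", description: "User's problem in one sentence" }
    },
    async execute({ summary }) {
      const payload = buildTicketPayload(summary, {
        url: location.href,
        userAgent: navigator.userAgent,
        errors: recentErrors
      });
      // Hypothetical ticketing endpoint.
      const res = await fetch("/api/tickets", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload)
      });
      return { ok: res.ok };
    }
  });
}
```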


What WebMCP changes for GEO

Until now, a brand's presence in the AI ecosystem could be measured on two axes:

  • Discoverability — do AI engines mention your brand when a user asks a relevant question?
  • Accuracy — is what they say about you correct, up to date, positive?

WebMCP opens a third axis: actionability. When a user tells their agent "order me a pizza," three scenarios emerge:

  1. No site in the market implements WebMCP → the agent tries its best to navigate visually, often fails, and ends up recommending a competitor.
  2. A single vendor implements WebMCP → the agent consistently picks that site, because it's the only one it can operate reliably.
  3. Several sites implement WebMCP → competition kicks back in, but now on the quality of the exposed tools (granularity, speed, clear descriptions).

It's the exact dynamic we saw with responsive design around 2012–2014: first a UX bonus, then a Google signal, then a baseline requirement. Semrush, in its recent coverage, draws the parallel explicitly: sites that move early will capture the emerging agentic traffic; laggards will lose ground.

For the brands we support on Hlight, the implication is direct: a complete GEO strategy in 2026 is no longer limited to editorial content. It must include an actionability audit — are your key user journeys exposable via WebMCP? Are your forms clean enough that a declarative annotation is all it takes?


What it doesn't solve yet

The standard is young, and several questions remain open:

  • Security and privacy — the corresponding section of the W3C draft is currently empty. Risks around CSRF, XSS, or abuse combined with emerging features (Prompt API, Web AI) are not yet scoped.
  • User consent — when can the agent call a tool without re-asking for confirmation? Guardrails still depend heavily on the MCP client (Claude Desktop, Chrome, etc.), not the protocol.
  • Multi-browser support — Chrome leads the way in preview, Edge follows, Firefox and Safari have not yet committed. Broad availability is expected mid- to late 2026 at the earliest.
  • Discoverability of the tools themselves — nothing today says which sites expose WebMCP, nor how agents rank them. This looks suspiciously like a fresh ranking battlefield.

What to do now

Three concrete moves for teams building or running a website:

  1. Audit key user journeys. What are the 3 to 5 actions users actually come to your site to do (buy, book, sign up, contact)? Those are the first candidates for WebMCP exposure.
  2. Clean up your HTML forms. The declarative API costs almost nothing… provided you have clean <form> elements, explicit name attributes, and sensible labels. That's basic HTML hygiene that also benefits accessibility and SEO.
  3. Sign up for Chrome's Early Preview Program. The Chrome team ships docs, demos, and spec changes as they go. Being there early means not redoing the work six months later.
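Because support is preview-only, tool registration should degrade gracefully: a site that registers tools must keep working in browsers that have never heard of the draft API. One way to do that is a small wrapper (safeRegisterTool is our name, not the spec's) that no-ops when navigator.modelContext is absent and returns a cleanup function:

```javascript
// Progressive-enhancement sketch: registers a tool only when the draft
// navigator.modelContext API exists, and returns an unregister callback
// either way. Method names follow the early W3C draft and may change.
function safeRegisterTool(tool, ctx = (typeof navigator !== "undefined"
                                         ? navigator.modelContext
                                         : undefined)) {
  if (!ctx || typeof ctx.registerTool !== "function") {
    return () => {}; // no-op: the site simply stays non-agentic
  }
  ctx.registerTool(tool);
  return () => ctx.unregisterTool(tool.name);
}
```

Usage: `const off = safeRegisterTool(myTool); /* later */ off();` — the calling code never has to branch on browser support.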

And on the GEO side: keep investing in citation (statistics, sources, earned media), because an agent that doesn't know you will never call your tools — no matter how well exposed they are.



Put these strategies into practice

Launch your free GEO audit and discover your visibility in AI answers.