Developer Experience

Bridging Figma, VS Code, and production with a single API

March 21, 2026 · 7 min read


The silo problem

Design-to-development tooling has proliferated over the past few years. Figma plugins that audit component accessibility. VS Code extensions that lint design token usage. Browser extensions that check WCAG compliance on live sites. Each of these tools is genuinely useful in isolation. The problem is the word 'isolation.'

When these tools do not share a common data layer, the results do not accumulate. A Figma plugin flags a contrast violation. The developer fixes it. A browser extension later flags the same issue on the live site — because the fix was applied to the wrong token, or to a different variant of the component. There is no continuity between the audit at design time and the audit at runtime.

The audit trifecta

A different model: three audit surfaces, one contract. The same component specification that the Figma plugin checks at design time is the specification that the VS Code extension surfaces at development time, and the specification that the browser extension validates against in production.

When all three stages read from the same contract, the results are continuous. A violation flagged in Figma is the same violation that can be checked in code and confirmed as fixed in production. There is no ambiguity about which version of the specification is authoritative — there is only one version.
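A minimal sketch of what such a shared contract might look like as a data structure. The field names, status values, and token layout here are illustrative assumptions, not the actual Parlance schema; the point is that all three surfaces read one object rather than keeping private copies.

```typescript
// Hypothetical shape of a shared component contract. Field names are
// assumptions for illustration, not the real Parlance schema.
type WcagRequirement = {
  criterion: string;        // e.g. "1.4.11 Non-text Contrast (WCAG 2.2)"
  minContrastRatio: number; // e.g. 3.0 for non-text UI components
};

type ComponentContract = {
  component: string;                // "Button"
  version: number;                  // one authoritative version, never forked
  status: "draft" | "proposed" | "agreed";
  tokens: Record<string, string>;   // design tokens shared by all surfaces
  requirements: WcagRequirement[];
};

// The Figma plugin, the VS Code extension, and the browser extension all
// audit against this same object — continuity comes from the shared read.
const buttonContract: ComponentContract = {
  component: "Button",
  version: 4,
  status: "agreed",
  tokens: { "focus-ring-color": "#1a56db" },
  requirements: [{ criterion: "1.4.11 Non-text Contrast", minContrastRatio: 3.0 }],
};
```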

What this looks like in practice

Here is a concrete workflow. A designer updates a button's focus ring colour in Figma. The Figma plugin checks the new colour against the contract's WCAG 2.2 requirement and passes — the new colour has sufficient contrast. The plugin pushes the updated token to the Parlance platform.
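The contrast check itself is well defined: WCAG 2.x specifies a relative-luminance formula and a contrast ratio derived from it, and SC 1.4.11 requires at least 3:1 for non-text UI parts such as focus rings. The formulas below are the standard ones; the plugin wiring around them (and the example colour) is an assumption.

```typescript
// Standard WCAG 2.x relative luminance for an sRGB hex colour like "#1a56db".
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16) / 255);
  // Linearise each channel per the WCAG definition.
  const lin = (c: number) => (c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4);
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(fg: string, bg: string): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// SC 1.4.11 (Non-text Contrast) requires >= 3:1 for UI components.
const focusRingPasses = contrastRatio("#1a56db", "#ffffff") >= 3.0;
```

This is the kind of check a design-time plugin can run deterministically, which is what makes "passed in Figma" a meaningful claim later in the pipeline.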

The developer picks up the change. They are working in VS Code and using an AI coding assistant — Claude Code — via the Parlance MCP server. The assistant can query the contract directly: 'What does the Button contract specify for focus ring colour?' The MCP server returns the exact token value and the associated WCAG requirement. The developer implements it correctly on the first pass.
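The core of that exchange is a lookup: component plus property in, token value plus requirement out. Here is a simplified sketch of what the server side of that query could resolve to; the data, function name, and response shape are all assumptions, not the real Parlance MCP API.

```typescript
// Hypothetical answer shape for a contract query. Illustrative only.
type ContractAnswer = {
  token: string;       // glossary token name
  value: string;       // the agreed value
  requirement: string; // the WCAG requirement attached to it
};

// In-memory stand-in for the contract store a real MCP server would query.
const contractStore: Record<string, Record<string, ContractAnswer>> = {
  Button: {
    "focus ring colour": {
      token: "focus-ring-color",
      value: "#1a56db",
      requirement: "WCAG 2.2 SC 1.4.11: >= 3:1 contrast against adjacent colours",
    },
  },
};

// A real server would resolve the component/property from the assistant's
// natural-language question; here the resolution is pre-baked.
function queryContract(component: string, property: string): ContractAnswer | undefined {
  return contractStore[component]?.[property];
}
```

The useful property is that the assistant receives an exact value with its requirement attached, rather than reconstructing either from training data or from stale documentation.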

Before shipping, the browser extension validates the live staging environment against the contract. The focus ring passes. The component is shipped. The contract status is updated to 'agreed.'
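The production-side check reduces to comparing a computed style against the contract token. In a real extension the computed value would come from something like `getComputedStyle(element).outlineColor`, which browsers report as an `rgb(...)` string; this sketch takes that string as input and normalises it for comparison. The helper names are illustrative.

```typescript
// Convert a computed "rgb(r, g, b)" string to a lowercase hex token value.
function rgbToHex(rgb: string): string {
  const channels = rgb.match(/\d+/g) ?? [];
  return "#" + channels.slice(0, 3)
    .map((n) => Number(n).toString(16).padStart(2, "0"))
    .join("");
}

// Does the live focus ring match the contract's agreed token value?
// (A real check would also handle alpha channels and keyword colours.)
function focusRingMatchesContract(computedOutlineColor: string, tokenValue: string): boolean {
  return rgbToHex(computedOutlineColor) === tokenValue.toLowerCase();
}
```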

Source attribution

One underappreciated aspect of this model is source attribution: knowing which tool produced which audit result. When an audit finding reaches the platform, it carries its source — Figma, VS Code, MCP server, or browser extension. This matters more than it seems.

A violation flagged by the Figma plugin indicates a design-stage problem — the spec itself may need revision. A violation flagged only by the browser extension, after the Figma plugin passed, indicates an implementation divergence — the developer deviated from the agreed spec. These are different problems requiring different conversations.
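The triage logic above can be made explicit. This sketch encodes the two diagnoses as a function of where a finding surfaced and whether the design-time audit had already passed; the type names and rules are illustrative assumptions drawn from the distinction described in this section.

```typescript
// The four audit surfaces named in this post.
type AuditSource = "figma" | "vscode" | "mcp" | "browser";

type AuditFinding = {
  component: string;
  criterion: string;             // e.g. "1.4.11 Non-text Contrast"
  source: AuditSource;           // which tool produced this finding
  passedAtDesignTime: boolean;   // did the Figma plugin pass this criterion?
};

// Same violation, different diagnosis depending on its source.
function triage(finding: AuditFinding): "revise-spec" | "fix-implementation" {
  // Flagged at design time: the spec itself may need revision.
  if (finding.source === "figma") return "revise-spec";
  // Flagged downstream after design-time passed: implementation divergence.
  if (finding.passedAtDesignTime) return "fix-implementation";
  // Flagged downstream with no design-time pass on record: spec question first.
  return "revise-spec";
}
```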

AI coding assistants as a new audit surface

The MCP server introduces a fourth audit surface that is qualitatively different from the other three: the AI coding assistant. When a developer asks Claude Code to implement a component, the assistant can query the Parlance MCP server and receive the full contract specification as part of its context.

This is not a passive integration. The assistant can check its own implementation against the contract requirements before suggesting code. It can flag when a proposed implementation would violate a WCAG requirement in the contract. It can automatically use the correct design token from the glossary.
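A minimal sketch of what such a pre-suggestion check could look like: before emitting code, the assistant verifies a proposed token value against the contract glossary. The function and glossary names are hypothetical; only the behaviour — reject unknown tokens, reject values that diverge from the agreed one — reflects what this section describes.

```typescript
// Result of checking a proposed token value against the contract glossary.
type TokenCheck =
  | { ok: true }
  | { ok: false; reason: string };

// Hypothetical pre-suggestion check: is this token name known, and does the
// proposed value match what the contract agreed on?
function checkProposedToken(
  glossary: Record<string, string>, // agreed tokens from the contract
  name: string,
  proposedValue: string,
): TokenCheck {
  const agreed = glossary[name];
  if (agreed === undefined) {
    return { ok: false, reason: `unknown token "${name}" — not in the contract glossary` };
  }
  if (agreed.toLowerCase() !== proposedValue.toLowerCase()) {
    return { ok: false, reason: `contract specifies ${agreed} for "${name}"` };
  }
  return { ok: true };
}
```

Run before code is suggested rather than after it is merged, a check like this turns the contract from documentation into an active constraint.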

The contract becomes not just a specification for humans, but a constraint for AI systems. When both humans and AI assistants are working from the same verified specification, the surface area for divergence shrinks dramatically.

A single source of truth

The value of a unified audit platform is not any single integration — it is the continuity. Design, code, and production all point to the same contract. Audit results accumulate in one place. Divergences are caught at the earliest possible stage. When something is fixed, that fix is verifiable at every subsequent stage.

This is what Parlance is built to be: not another design tool or another developer tool, but the shared layer that connects them — the single source of agreement that travels through the entire workflow.
