Exploring AI-native Design Systems
MCP is a hack.
Hey, it’s Kushagra. Welcome to this week’s AtlasMoth drop.
For 10+ years, design systems have been the quiet infrastructure behind every product. They shaped how teams ship UI at scale, maintained consistent branding, and provided designers and developers with a shared language.
But the shift to AI is revealing something important:
Most design systems were built for humans to read, not for machines to reason with.
And that’s the gap we need to close.
Today, I want to break down what an AI-native design system actually looks like. Not theory. Not hand-wavy “AI for design.”
A practical, structured approach that connects documentation, Figma, and your coded components into one machine-readable surface using MCP (Model Context Protocol).
Because this shift isn’t about replacing humans.
It’s about making collaboration feel smoother than ever.

The 3 layers of every design system
Most design systems sit on top of three layers:
1. Guidelines & documentation
The principles, rules, naming conventions, accessibility criteria, voice & tone, writing guidelines, and dos/don’ts are all scattered across portals and wikis.
Great for humans.
Painful for machines.
2. Figma assets & tokens
The visual source of truth: tokens, variables, shared styles, components, icons, patterns.
But without semantic structure, AI can’t interpret intent or choose variants correctly.
3. The coded component library
Functional, production-ready components with props, states, interactions, theming, and accessibility.
Useful, but not discoverable unless you already know the component exists.
Each layer works.
But what’s missing is the connective tissue, the part that allows an AI to understand how layers relate and why decisions matter.

What’s your biggest challenge with design systems in the AI era?
💬 Building for people beyond borders? Book a call to explore more
Vibing While Designing
This track gave me a serious boost: check out ‘Backbone’ by Droeloe 🎵
From static documentation → to responsive systems
AI doesn’t struggle with complexity.
It struggles with ambiguity.
Current design systems assume a human will translate intent:
“Which variant should I use?”
“Which token is correct?”
“Does this pass accessibility?”
“What’s the canonical version?”
An AI-ready system needs to shift from readable to actionable:
Documentation that can be queried.
Tokens that carry meaning, not just values.
Components that reveal their rules, constraints, and metadata.
That’s when prompts become production-ready operations instead of guesses.
How to make a design system AI-readable
To enable machine reasoning, two things must exist:
Structured data
Intelligent retrieval
Let’s translate the three layers into something an agent can understand.
1. Turn documentation into a structured knowledge base
Most guidelines are prose. AI can read them, but it can’t apply them reliably.
Fix this by moving your documentation into a RAG-ready database, with every entry enriched with metadata like:
component_ref
platform
tags (accessibility, states, feedback, performance, content style)
version + updated_at
images + alt text
canonical URLs
This transforms guidelines from “nice to read” → “machine-queryable truth.”
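To make that concrete, here is a minimal sketch of what one enriched entry could look like. The GuidelineEntry shape, field values, and URL are illustrative assumptions rather than a fixed schema; the point is simply that every field from the list above becomes queryable data.

```typescript
// Illustrative sketch only: field names follow the list above,
// but the shape and values are assumptions, not a fixed schema.
interface GuidelineEntry {
  id: string;
  title: string;
  body: string;                           // the prose rule itself
  component_ref?: string;                 // e.g. "button"
  platform: ("web" | "ios" | "android")[];
  tags: string[];                         // accessibility, states, feedback, content style...
  version: string;
  updated_at: string;                     // ISO date
  images?: { url: string; alt: string }[];
  canonical_url: string;
}

const buttonFocusRule: GuidelineEntry = {
  id: "btn-focus-001",
  title: "Buttons must show a visible focus state",
  body: "Every button variant exposes a 2px focus ring bound to the focus token.",
  component_ref: "button",
  platform: ["web"],
  tags: ["accessibility", "states"],
  version: "3.2.0",
  updated_at: "2025-01-15",
  canonical_url: "https://design.example.com/components/button#focus",
};
```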
2. Make Figma assets computable
AI agents work best when naming and structure are predictable.
To make your design layer machine-friendly:
Keep variant names stable and descriptive
Minimize unnecessary nesting
Use Auto Layout rigorously
Write short, semantic descriptions
Use Variables as semantic tokens, not raw values
Organize modes by theme/platform
Apply tokens consistently
This gives the model clarity on intent, meaning, and allowed configurations.
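As a rough sketch (all token names and values here are made up), this is the difference between raw values and the semantic tokens an agent can actually reason about:

```typescript
// Raw primitives: meaningless to an agent on their own.
const primitives = {
  "blue.600": "#2450d6",
  "gray.900": "#16181d",
};

// Semantic tokens carry intent and point at primitives instead of hex codes.
// That intent is what lets an agent pick the right token for "primary action".
const semanticTokens = {
  "color.action.primary": {
    value: "{blue.600}",
    description: "Primary action surfaces: buttons, key links",
  },
  "color.text.default": {
    value: "{gray.900}",
    description: "Default body text",
  },
};
```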
3. Make coded components machine-readable
Human-readable API design ≠ machine-readable API design.
Expose:
props
states
token bindings
accessibility attributes
variants
constraints
version metadata
Use a consistent schema across the entire library.
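Here is one possible shape for such a schema, sketched in TypeScript. The ComponentManifest name and every value in it are illustrative assumptions, not your real API:

```typescript
// One possible manifest shape; every name and value here is illustrative.
interface ComponentManifest {
  name: string;
  version: string;
  props: Record<string, { type: string; required: boolean; default?: unknown }>;
  variants: string[];
  states: string[];
  tokenBindings: Record<string, string>;    // CSS property -> semantic token
  a11y: { role: string; notes: string[] };
  constraints: string[];                     // rules an agent must not break
}

const buttonManifest: ComponentManifest = {
  name: "Button",
  version: "3.2.0",
  props: {
    variant: { type: '"primary" | "secondary" | "ghost"', required: false, default: "primary" },
    disabled: { type: "boolean", required: false, default: false },
  },
  variants: ["primary", "secondary", "ghost"],
  states: ["hover", "focus", "active", "disabled"],
  tokenBindings: { background: "color.action.primary", color: "color.text.onPrimary" },
  a11y: { role: "button", notes: ["set aria-disabled when disabled"] },
  constraints: ["never override tokenBindings with raw hex values"],
};
```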
This lets an AI agent:
enumerate your components
select the correct variant
apply tokens correctly
generate an implementation aligned with your system
and never invent props, names, or colors

The glue: Model Context Protocol (MCP)
MCP is the bridge that lets your design system “speak” to AI tools.
Instead of custom integrations, MCP provides a predictable and structured interface.
A coding agent asks:
“Give me the accessibility rules for buttons.”
“Show me the token bindings for this component.”
“What is the Figma structure for this frame?”
And your system responds with clean, machine-readable data.
This is how documentation, design, and code behave like one unified system.
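In practice, that exchange can be as small as one tool call. Here is a rough sketch from the agent’s side using the official MCP TypeScript SDK; the server command, tool name, and arguments are assumptions for illustration:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn a (hypothetical) documentation MCP server and connect to it.
const transport = new StdioClientTransport({ command: "node", args: ["docs-mcp-server.js"] });
const client = new Client({ name: "design-agent", version: "0.1.0" });
await client.connect(transport);

// Ask the design system a question instead of digging through a wiki.
const result = await client.callTool({
  name: "get_guideline",
  arguments: { component_ref: "button", tags: ["accessibility"] },
});

console.log(result.content); // structured guideline entries, not prose to skim
```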

MCP across the 3 layers
Documentation MCP
Expose your RAG store via tools like:
get_guideline
search_guidelines
get_related_examples
Return metadata like id, version, tags, component_ref, and canonical_url.
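A minimal sketch of the server side, using the MCP TypeScript SDK. The findGuidelines lookup is a stand-in for your actual RAG store, and exact SDK method names may differ between versions:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "docs-mcp", version: "0.1.0" });

// Stand-in for a query against the RAG-ready guideline store sketched earlier.
async function findGuidelines(componentRef: string, tags: string[]) {
  return [
    {
      id: "btn-focus-001",
      component_ref: componentRef,
      tags,
      version: "3.2.0",
      canonical_url: "https://design.example.com/components/button#focus",
    },
  ];
}

server.tool(
  "get_guideline",
  { component_ref: z.string(), tags: z.array(z.string()).default([]) },
  async ({ component_ref, tags }) => ({
    content: [{ type: "text", text: JSON.stringify(await findGuidelines(component_ref, tags)) }],
  })
);

await server.connect(new StdioServerTransport());
```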
Figma Dev Mode MCP
Leverage Figma’s official MCP server for:
hierarchy (get_frame_structure)
variable bindings
Code Connect mappings
component references
Component MCP
Expose your coded components via:
list_components
get_component
suggest_variant
apply_tokens
Let the AI browse, choose, and assemble without touching your build pipeline.
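Same pattern, component flavor. A sketch of list_components and suggest_variant backed by hard-coded manifests; the data and the naive variant matching are placeholders, not a real implementation:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hard-coded stand-ins for real component manifests.
const manifests = [
  { name: "Button", variants: ["primary", "secondary", "ghost"] },
  { name: "Badge", variants: ["info", "success", "warning"] },
];

const server = new McpServer({ name: "component-mcp", version: "0.1.0" });

server.tool("list_components", async () => ({
  content: [{ type: "text", text: JSON.stringify(manifests.map((m) => m.name)) }],
}));

server.tool(
  "suggest_variant",
  { component: z.string(), intent: z.string() },
  async ({ component, intent }) => {
    const manifest = manifests.find((m) => m.name === component);
    // Naive matching: prefer a variant whose name appears in the stated intent.
    const variant =
      manifest?.variants.find((v) => intent.toLowerCase().includes(v)) ?? manifest?.variants[0];
    return { content: [{ type: "text", text: JSON.stringify({ component, variant }) }] };
  }
);

await server.connect(new StdioServerTransport());
```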
How to unify all MCP servers
You have three options based on maturity:
1. Connect servers directly (fastest start)
Documentation MCP → Figma Dev Mode MCP → Component MCP.
2. Add a gateway (for auth, logs, rate limits)
One surface. Three backends. Cleaner governance.
3. A single unified MCP (for strict control)
One MCP server exposing a curated, cross-layer toolset.
Start loose → evolve into governed → consolidate.
The orchestration lives in the IDE
Rules in your IDE or agent configuration decide:
Which server to trust first
How to validate design intent
How to verify accessibility
What components are allowed
When to open PRs
How to enforce tokens
Example rule behavior:
Pull the Figma truth first
Validate with Guidelines MCP
Match components via Component MCP
Never invent tokens
Always open a PR, never push
Return a full trace of guideline IDs, token mappings, component versions, and sources
This turns a single prompt into a traceable, standards-compliant workflow.
30 Minutes Can Save You
Great design doesn’t happen alone. One session can save you 10+ design iterations later.
What changes for designers
Prototype → code becomes instant
Accessibility enforced automatically
Variant selection is consistent
Content tone and terminology checked in context
Less handoff friction
Decisions become traceable
Updates propagate everywhere
Contributions back to the system get validated automatically
What changes for developers
Faster implementation
Stronger accessibility enforcement
Stable APIs and component discovery
PRs with full traces to guidelines and tokens
Auto-generated stories, tests, visuals
Safer migrations and deprecations
Cleaner governance
Normalized component use across teams
Where the industry is heading
Teams are already moving in this direction.
Knapsack launched an open-source MCP server to make design systems queryable.
Murphy Trueman argues the next design system user isn’t a developer, it’s an agent.
When the system starts thinking with you
This shift doesn’t remove people.
It removes friction.
Instead of searching, guessing, or explaining intent…
You talk to the system.
It responds with rules, visuals, decisions, and implementation, all sourced from your own design system.
Takeaway
An AI design system isn’t just a component library.
It’s:
structured
machine-readable
enriched with metadata
linked across Figma, docs, and code
queryable via MCP
enforced through IDE rules
traceable end-to-end
When done right, it becomes an active participant in the design and development workflow.
And once you see it in action, it becomes hard to imagine building any other way.



