AI and the Design Process
DESIGN.md becomes an open standard for agents
DESIGN.md was introduced through Stitch as a humble text file: colors, typography, rules, and the reasoning behind them, all in one place, so an agent could make on-brand design decisions. Its use has since grown beyond Stitch, showing up in IDEs, agent tools, and curated GitHub repos in ways Google says it did not anticipate. In response, Google has published a first-draft specification on GitHub, added a Tokens section, and shipped a Command Line Interface (CLI) that lets agents validate their own work against the spec.
Worth watching, and still explicitly a work in progress, is the new Components section. The idea is that a DESIGN.md file can carry not just a brand’s visual language, but also component-level decisions like button styles and hover variants. If this direction lands, design systems become more portable across the tools that generate and modify interface work, giving agents and AI-native tools a shared text-based layer to work from.
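To make the idea concrete, here's a rough sketch of what such a file might contain. The section names and token syntax below are my own illustration, not quoted from Google's draft spec:

```markdown
# DESIGN.md (hypothetical sketch)

## Tokens
- color.brand.primary: #1A73E8
- color.surface: #FFFFFF
- font.body: Inter, 16px / 1.5

## Rules
- Use color.brand.primary only for primary actions, never for body text.
  Rationale: keeps calls to action visually distinct.

## Components
- Button / primary: filled, color.brand.primary background, 8px corner radius
  - hover: darken background ~8%
```

The point is less the exact syntax than the layering: raw tokens, the rules that constrain them, and component-level decisions an agent can check its output against.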
David East | Stitch’s DESIGN.md format is now open-source so you can use it across platforms
Watch Time: 10 minutes
David East | What DESIGN.md is and isn't
⚡ Quick Read (1 minute)
AI and the Design Process
I spent $200 on Claude Design, so you don't have to
Claire Vo's recent hands-on testing of Claude Design focused on three use cases: importing a design system to spin up marketing landing pages, turning written content into branded slide decks, and “going feral” with no design system at all for reference-style exploration (i.e., “make something in the style of x”).
Her key takeaway: Claude Design treats your design system as a first-class citizen. You can import HTML, logos, fonts, and brand assets, and the tool builds a structured spec it can then design against. Google Labs has released a DESIGN.md standard with similar intent. It’s clear AI design tools are consolidating around a shared pattern: describe your design system in a format an AI can actually reason over.
Claire is candid about Claude Design’s current limitations. It’s slow: iteration cycles run minutes, not seconds. She hit her usage limit after only two or three things and had to top up to $200 to finish the demo. Figma’s enduring advantage, she argues, is the part of the workflow where there’s no model in the loop: drag, change, swap, see it instantly. That’s the speed of iteration we underestimate. Claude Design is good at marketing landing pages, content-to-deck conversions, and copywriting. She feels it’s less convincing for application UX and product-interface work.
Claire Vo | I spent $200 on Claude Design so you don't have to
Watch Time: 28 minutes
Building with AI
Anthropic's strategic bet: Code, Cowork, and now Design
From Nate Jones:
"Claude Design shipped April 17 alongside Opus 4.7, in research preview. The coverage split cleanly. One camp: Figma’s stock dropped 7% and commentators argued about what that means for the design software industry. Another camp: the tool has real rough edges and commentators argued about whether those break the story. Both frames miss the point. Sometimes a product ships with warts and the warts don’t matter. Anthropic has a pattern of shipping those. Claude Code was the first. Cowork was the second. Claude Design is the third, the one that makes the strategy finally clear."
What was missing before Design was the visual step. You could think in Chat, execute knowledge work in Cowork, and ship software in Code. But taking an idea and turning it into something you can show someone — the mockup, the rough screen, the deck — happened outside Anthropic's tools. With Design, that gap closes. Design produces the actual UI, deck, or prototype in the medium it will run in (HTML, CSS, JSX), ready for Code to harden into production.
Nate B. Jones | Claude Design just cut 60% of your designer's week ($)
Watch Time: 23 minutes
Frontier Models
OpenAI releases GPT-5.5
From Ethan Mollick:
"I had early access to GPT-5.5, and I think it is a big deal. It is a big deal because it indicates that we are not done with the rapid improvement in AI. It is also a big deal because it is just plain good. And it is a big deal because even with all of this, the frontier of AI ability remains jagged."
Ethan Mollick's framing of AI as three interlinked layers — models, apps, and harnesses — is useful for understanding the gains in AI capability. Models are the underlying intelligence (Opus, Gemini, GPT-5.5). Apps are how you actually talk to them (ChatGPT, Claude Cowork, Codex). Harnesses are the tools the model can reach for: writing code, controlling your computer, and generating images. Progress isn't a single curve. All three layers are advancing, and the compounding effect is what makes each release feel bigger than the last.
A key advance is OpenAI's new image model, which finally renders readable text inside images. Mollick demonstrates this with an art gallery scene in which every label below a painting is legible, something that was effectively impossible a few months ago. That single capability changes what OpenAI can plausibly produce: PowerPoint slides, product mockups, example websites, anything where words and images have to coexist. He also got Codex to generate a 101-page illustrated tabletop role-playing game and a near-PhD-quality academic paper from four prompts.
It's definitely worth clicking through to his post to see the gallery of what he was able to produce.
Ethan Mollick | Sign of the future: GPT-5.5
☕ Medium Read (6 minutes)
A final thought…
This week, I wrote an essay on how tools shape design practice more than we might admit (using Figma as a case study) — and why I'm more optimistic about AI's effect on design than many of the takes I'm reading.
Tools Do Shape Practice
⚡ Quick Read (5 minutes)
That’s it for this week.
Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights. 👋
All the best, Heidi

