AI and the Design Process

Google Stitch gets a major upgrade

Google Labs overhauled Stitch (its AI-powered UI design tool) this week, shipping five new features: an AI-native canvas, a smarter design agent, voice input, instant prototypes, and a design systems format called DESIGN.md.

With Stitch, you can describe a business objective, a desired user feeling, or a design inspiration (what Google calls "vibe design"), and it generates design directions accordingly. The ability to iterate on a design conversationally, through natural language, visual prompts, or voice, makes it especially well suited to early-stage design exploration. To see this in action, I recommend Maureen Josphine's post on how she used Google Stitch to improve her fashion app.

A notable feature of Stitch is DESIGN.md: a markdown file that captures your design rules (colors, typography, spacing, component patterns) in a format AI agents can read. It connects to a Stitch MCP server that lets designs flow into tools like AI Studio, Cursor, or Claude. Google is clearly positioning Stitch as a node in a larger AI-powered development pipeline, from concept to code.
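Stitch generates this file for you, but to make the idea concrete, here is a hypothetical sketch of the kind of rules such a file might capture. The structure and field names below are my own illustration, not Stitch's documented schema:

```markdown
# Design System

## Colors
- Primary: #1A73E8
- Surface: #FFFFFF
- Text: #202124

## Typography
- Headings: Google Sans, weight 600
- Body: Roboto, 16px, 1.5 line height

## Spacing
- Base unit: 8px; components use multiples (8 / 16 / 24 / 32)

## Components
- Buttons: 8px corner radius, primary fill, no drop shadows
```

Because the rules live in plain markdown, any agent that can read a file can apply them, which is what makes the format portable across tools like AI Studio, Cursor, or Claude.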

AI and the Design Process

With AI, design systems evolve and take center stage

In this Dive Club episode, Luis Ouriach, Designer Advocate at Figma, argues that AI-generated code has changed how teams think about the purpose of design systems. Systems and documentation were once created mainly for people; now they increasingly shape what AI tools generate, which raises the value of clean, consistent, well-documented design foundations. A weak design system no longer just leads to inconsistent UI. It can also create a growing pile of AI-assisted mess that someone eventually has to untangle.

The interview also explores dissolving role boundaries, designers moving closer to production, and what it means to work in a world where you can start anywhere, whether that’s a document, canvas, codebase, or even a Slack thread.

AI Agents

Perplexity's Computer puts AI agents to work for you

Perplexity has introduced Computer, a new cloud-based agentic tool that lets users describe a desired outcome, then breaks the work into subtasks handled by specialized sub-agents and multiple underlying AI models. It runs in Perplexity’s secure cloud sandbox and connects to external tools via authenticated app connectors, providing greater control and containment than agents that run directly on a user’s machine.

Note that Perplexity Computer works differently from OpenClaw, the open-source personal AI assistant that runs on users' own devices and can take local actions. Somewhat confusingly, Perplexity also offers a separate product, Personal Computer, designed for always-on local access to files, apps, and sessions, which Perplexity says has better safeguards than OpenClaw. (In a widely shared February incident, a Meta AI security researcher said an OpenClaw agent deleted emails after being asked only to suggest deletions.)

At this writing, Perplexity Computer requires a Pro subscription (it was initially rolled out only to Max plans). I found its suggested tasks, drawn from my history with Perplexity, quite helpful. For example, Perplexity knows I manage outreach supplies for my church, so it suggested a weekly cost-comparison agent based on a stock list I keep in a Google Sheet. Each week, it checks prices across Amazon, Costco, Target, and Walmart for the items on that list. It did a great job of walking me through the agent setup.

If you want to see Perplexity Computer in action, I can recommend two videos from Teacher's Tech: Perplexity Computer for Beginners and Perplexity Computer: 6 Advanced Features that Changed Everything.

AI and Product Development

Three questions nobody can answer yet

From Alex Kehr:
"This is the first time in my career where the future of software design feels genuinely unknown. Not 'changing fast'... unknown. I talk to designers, engineers, founders who've been doing this for decades. Everyone can feel the ground shifting. Nobody knows where it's going.
And I think that's actually more interesting than pretending we do. So instead of predictions, I want to think through some questions I can't stop turning over."

I love this piece for the questions it poses: no answers, just questions to be explored.

  • What happens to craft when iteration is instant?

  • What is a designer when there's nothing to hand off?

  • What does design even mean when the product isn't a fixed thing?

That’s it for this week.

Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights. 👋

All the best, Heidi
