Designing for AI

AI’s real bottleneck might be the chatbot

In a recent newsletter, Ethan Mollick argues that AI models are already far more capable than most people experience, and that the gap is often not about intelligence but about interface. He points to a new paper in which a small group of financial professionals used GPT-4o for a complex valuation task and saw productivity gains, but some of those gains were offset by the cognitive load of the chatbot itself: walls of text, unsolicited tangents, and sprawling conversations that were hard to rein in. The people hurt most were less experienced workers, exactly the group that stands to benefit the most.

Mollick then explores alternatives to the standard chatbot interface, including specialized tools like Google Stitch, Pomelli, and NotebookLM, as well as personal agents like OpenClaw and Anthropic’s Claude Cowork with Dispatch, which lets you message Claude from your phone while it works through files and apps on your desktop.

He also argues that we may be moving beyond fixed interfaces entirely, toward AI systems that generate the right interface for the moment: an agent on your desktop, an interactive chart in a conversation, or a custom app spun up on demand. In that framing, much of what people experience as “AI disappointment” may be more about bad interfaces than weak models.

Designing with AI

From generic to distinctive: Providing design direction with Google Stitch

Google’s Stitch team shared a video that doubles as a strong lesson in creative direction for AI design tools. David East walks through designing a community marathon site, and the before-and-after is striking. The version built from a bare prompt, “a road running race listing page,” feels generic. The version built with intentional creative direction, drawing on ideas like architectural limestone, ink on paper, and old track clay, has a much clearer point of view.

The broader lesson is to think less like a prompt engineer and more like a creative director. David starts with empathy: who is this for, and what should they feel? From there, he translates that into specific aesthetic language rather than vague descriptors like “sporty” or “high end,” which he argues tend to yield generic results. He also uses Gemini to help develop more concrete design vocabulary for the feeling he wants to evoke. The walkthrough then moves through color hierarchy, typography, layout concepts, and finally copywriting, which he says is what makes the experience stop feeling like a template and start feeling real.

Designing with AI

Figma Make kits and attachments bring more context to AI prototyping

Figma just shipped two features aimed at a persistent frustration with AI-generated design: the output often lacks your design system conventions and the real project context behind what you’re building.

  • Make Kits let design system authors package components from Figma libraries or code via npm (Node Package Manager), along with guidelines that teach Make not just what exists, but how those components and styles should be used. So instead of AI dropping in a generic button, it has a better shot at using the right button in the right context.

  • Make Attachments address the project-context side of the problem. Instead of cramming specs and constraints into a prompt, you can attach PDFs, CSVs, JSON, screenshots, brand guidelines, and other source files directly to a Make session. Make can then reference the source material itself rather than relying only on your summary.

Together, kits and attachments give Figma Make context at two levels: your design system and your actual project.

AI Agents

From retrieval pipelines to direct connectors

Ethan Mollick stirred up some controversy this week when he posted on X that “the RAG era was short-lived, but intense,” in response to Anthropic’s Microsoft 365 connector becoming available across all Claude plans. The connector gives Claude access to Outlook, OneDrive, SharePoint, and Teams via a prebuilt integration, removing much of the setup that used to be required to ground AI in company data.

As a review, RAG (retrieval-augmented generation) was the default pattern for getting AI to work with your actual documents and knowledge bases. In practice, that often meant chunking content, generating embeddings, storing them in a vector database, and retrieving relevant passages at query time. It worked, and in many enterprise contexts it still does. But it also required real engineering effort and introduced failure points that were often hard to see.
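The pipeline described above can be sketched in a few lines. This is a toy illustration, not production code: the bag-of-words "embedding," the in-memory list standing in for a vector database, and the sample documents are all stand-ins for the real components (an embedding model, a vector store with approximate nearest-neighbor search, and your actual corpus).

```python
import math
from collections import Counter

def chunk(text, size=40):
    # Naive fixed-size chunking by word count; real pipelines split on
    # document structure (headings, paragraphs) and overlap chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Toy bag-of-words "embedding"; real systems call an embedding
    # model that returns dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": a list of (chunk, vector) pairs. A real deployment
# would use a vector database here.
docs = [
    "The Q3 revenue report shows growth in the EMEA region.",
    "Employee onboarding requires a signed NDA and a laptop request.",
    "Valuation models discount projected cash flows to present value.",
]
store = [(c, embed(c)) for d in docs for c in chunk(d)]

def retrieve(query, k=2):
    # At query time, rank stored chunks by similarity to the query and
    # return the top k; these get prepended to the model prompt.
    q = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("how are cash flows discounted in valuation models", k=1)[0])
```

Every stage here is a potential failure point, which is the engineering burden the connector approach moves onto the vendor: chunk boundaries can split relevant context, embeddings can miss domain vocabulary, and the store can go stale as source documents change.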

Mollick’s post drew pushback, with several replies arguing that retrieval has not gone away at all — which is true. What’s changing is not the need for retrieval, but where the complexity lives. Retrieval is shifting away from custom-built RAG stacks and toward vendor-managed connectors, making enterprise retrieval feel more like a product feature than an engineering project.

That’s it for this week.

Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights. 👋

All the Best, Heidi
