AI Prompting

Management skills are the new prompting skills

Ethan Mollick reports on an experiment where he challenged executive MBA students—doctors, managers, and leaders with little to no coding experience—to build working startup prototypes in just four days using AI tools like Claude Code. The results far exceeded what he'd seen from students working an entire semester pre-AI. The students succeeded not because they were AI experts, but because they already knew how to manage: scoping problems, defining deliverables, and recognizing when output is off.

As AI agents become more capable of doing hours of work in minutes, the scarce skill isn't prompting cleverness—it's knowing what good looks like and communicating it clearly enough for an AI to deliver it.

Mollick offers a useful model for deciding when to delegate to AI, weighing Human Baseline Time against the probability that the AI will succeed and the time it takes you to evaluate the output. The more expertise you have, the better you can tip that equation in your favor—you give clearer instructions, catch problems faster, and course-correct more efficiently. It's a compelling argument that so-called "soft" management skills are becoming the hard ones in an AI-augmented world.
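Mollick's delegation calculus can be sketched as a simple expected-time comparison. This is a hypothetical formalization for illustration only; the function and variable names are mine, not Mollick's, and the retry-then-fallback structure is an assumption about how the trade-off plays out in practice:

```python
def should_delegate(human_baseline_min: float,
                    p_ai_success: float,
                    ai_run_min: float,
                    eval_min: float,
                    max_attempts: int = 3) -> bool:
    """Rough expected-time model of Mollick's delegation trade-off.

    Each AI attempt costs run time plus your evaluation time. If every
    attempt fails, you fall back to doing the task yourself. Delegate
    only when the expected total beats just doing it by hand.
    """
    expected_ai_time = 0.0
    p_still_failing = 1.0  # probability we reach this attempt
    for _ in range(max_attempts):
        expected_ai_time += p_still_failing * (ai_run_min + eval_min)
        p_still_failing *= (1 - p_ai_success)
    # Fallback: after all attempts fail, do it yourself anyway.
    expected_ai_time += p_still_failing * human_baseline_min
    return expected_ai_time < human_baseline_min
```

The sketch makes Mollick's point about expertise concrete: raising `p_ai_success` (clearer instructions) and lowering `eval_min` (catching problems faster) are exactly the levers a skilled manager pulls to tip the equation toward delegation.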

AI in the Organization

What does the research actually say about productivity and AI?

From Alex Imas:

"We now have a growing body of micro studies showing real productivity gains from generative AI. However, the productivity impact of AI has yet to clearly show up in the aggregate data. This disconnect should not be surprising at this stage given the history of technology adoption. In the case of the previous big tech shock (information technology), Robert Solow famously observed in 1987 that 'you can see the computer age everywhere but in the productivity statistics.' It is likely that the same dynamics are showing up with AI, at least for now."

The research literature on AI's impact on productivity presents an interesting puzzle. At the micro level—controlled studies of specific tasks—AI shows productivity gains, sometimes substantial (14-55% improvements in areas like coding and customer support). But at the macro level, these gains haven't yet shown up in aggregate economic statistics. Imas argues that this disconnect is predictable. Real-world adoption is messy: only 36% of workers feel properly trained, and many keep their use of AI hidden from employers. And speeding up one task doesn't help much when other tasks become the bottleneck. These factors explain why individual wins haven't yet translated into organizational metrics.

Designing with AI

Vibe coding as an exploratory design mode

From Molly Mahar:

"I’ve found the value of vibe coding isn’t speed. It’s not about replacing Figma, skipping rigor, or racing to ship. And it’s not about who can build prototypes. It’s about who knows when making something real is the right move, and what to do with the clarity that follows... One of the biggest mistakes I see is treating vibe coding as either a default or a novelty. In reality, choosing when to vibe code is becoming a new dimension of design judgement."

Mahar maintains that when teams argue in circles about whether an idea is valuable or feasible, an interactive prototype populated with real data can collapse weeks of debate into a shared reaction. She identifies three signals that it's time to vibe code a prototype:

  • When value is uncertain

  • When scope is unclear

  • When the experience depends on motion or timing that static tools can't express

Mahar includes guardrails: make every prototype answer one specific question, don't oversell what doesn't exist yet, and exit the mode once feedback shifts from "Is this useful?" to "Can we polish this?"

Overall, she presents a thoughtful framework for treating AI-generated prototypes as a deliberate design tool.

Frontier Models

Claude can now create FigJam diagrams from chat

Figma announced an integration that lets Claude generate editable FigJam diagrams directly from prompts, PDFs, images, or screenshots. Imagine generating user flows from a PRD, Gantt charts for project timelines, or system architecture diagrams from documentation—all in Claude and then refined in FigJam's collaborative canvas. The integration works with Claude Opus 4.5 and Sonnet 4.5 on browser or desktop.

Note: I tested this with some process documentation, and Claude did a fine job creating accurate, appropriate diagrams. Getting them into FigJam was another story—the integration didn't work reliably for me, often producing a blank canvas instead of the populated diagram. I expect (hope?) these kinks will get worked out.

ICYMI: Last week, entrepreneur Matt Schlicht built Moltbook: a Reddit-like social network where only AI agents can post. Within five days, the platform claimed 1.5 million agent users and over 124,000 posts across nearly 15,000 forums.

The whole episode was a viral spectacle. Casey Newton breaks down what happened and some of the implications.

That’s it for this week.

Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights. 👋

All the best, Heidi
