🔑 Key AI Reads for August 6, 2025

Issue 9 • Why AI might not need to understand your workflows, OpenAI's Study Mode, Designing effective prompt suggestions, Evaluating your ROI with AI prototyping, Using Claude Code for non-coding tasks

Agentic AI

The Bitter Lesson: Why AI might not need to understand your workflows

Current best practice for designing AI agents for the enterprise involves defining the standard operating procedure (SOP) for the process that's being automated. But what if, ultimately, this is not the most effective approach to creating AI agents? What if relying on all that "human-defined process" actually hinders AI agent performance? That possibility stems from the AI research described in Rich Sutton's influential 2019 essay, The Bitter Lesson. In a recent essay, Ethan Mollick revisits the Bitter Lesson:

"... the Bitter Lesson (is that) encoding human understanding into an AI tends to be worse than just letting the AI figure out how to solve the problem, adding enough computing power until it can do it better than any human."

The Bitter Lesson was learned through training AI to play games such as chess, shogi, and Go.

"In 2017, Google released AlphaZero, which could beat humans not just in chess but also in shogi and go, and it did it with no prior knowledge of these games at all. Instead, the AI model trained against itself, playing the games until it learned them. All of the elegant knowledge of chess was irrelevant, pure brute force computing combined with generalized approaches to machine learning, was enough to beat them."

We don't yet know whether the Bitter Lesson will apply to the "real-world messiness" of how organizations accomplish work. But Ethan, in his essay, explores the possibility that it might:

"The effort companies spent refining processes, building institutional knowledge, and creating competitive moats through operational excellence might matter less than they think. ⁠⁠If AI agents can train on outputs alone, any organization that can define quality and provide enough examples might achieve similar results, whether they understand their own processes or not.⁠"

I highly recommend reading the full essay to understand more about the possible future of AI agents in the enterprise.

The Bitter Lesson versus the Garbage Can
☕ Medium Read (8 minutes)

Learning with AI

From answer machine to AI tutor: OpenAI's Study Mode

Much ink has been spilled over the impact AI is having on student learning. When students over-rely on AI to complete assignments, they miss opportunities to develop critical thinking and problem-solving skills.

OpenAI’s new Study Mode is a first step in addressing this issue. It helps students learn by asking guiding questions instead of just giving answers. OpenAI says the goal with Study Mode is to provide access to "a personal tutor that never gets tired of their questions."

Like many AI “power users,” I had already been using AI as a tutor, specifically for learning French grammar. However, I've been using Study Mode for the past few days and appreciate how it provides practice exercises and clearly explains any mistakes I've made. Because it’s purpose-built for learning, Study Mode removes the friction for the learning use case.

It's worth trying Study Mode because no doubt AI is (and will continue to be) a factor in education. That said, Study Mode is an early effort, and it remains to be seen what role AI tutors will ultimately have in students' learning.

Introducing Study Mode
☕ Medium Read (7 minutes)

Designing for AI

Addressing a central design challenge with AI: the empty prompt box

In the current state of AI, the empty prompt box is the prevailing interface. New users in particular may be unsure of how to proceed and what sorts of prompts are possible. Prompt suggestions give users example starting points or ways to explore more deeply.

If you’re interested in learning how to design effective prompt suggestions, the Nielsen Norman Group has a great primer focusing on two main goals:

  • Helping new users quickly understand what the system can do

  • Teaching and inspiring active users to use the system effectively

Overall, while simple prompt suggestions work well for new users, experienced users are better served by complex, context-aware prompts.

The article includes both principles to follow and helpful real-world examples from a range of AI tools, making it an excellent foundation for addressing this central design challenge.

Designing use-case prompt suggestions
☕ Medium Read (12 minutes) | 💡Bookmark for Reference

Prototyping with AI

Evaluating your ROI with AI prototyping

In a LinkedIn post, Michael Riddering (Ridd) addresses a question he’s hearing from folks experimenting with AI prototyping:

“When I use tools like Lovable or Figma Make, I end up spending hours debugging buttons and nav bars. How is this faster?”

He admits this is a fair question! His response:

"...it's so important to ask yourself what the goal of your prototype is. If your goal is to fully replicate your app UI, then yeah...it’s probably not worth it. The ROI drops fast. The key is to zero in on the one thing you need to test or communicate."

His post includes a real-world example from his design work and a helpful chain of replies on how others are leveraging AI prototyping in their design process.

LinkedIn post
⚡ Quick Read (2 minutes)

Frontier Models

Is Claude Code a best-kept secret for tasks other than coding?

I'm starting to feel like Claude Code is a "hidden gem" for use cases outside of coding. In Issue 6 of this newsletter, I highlighted how Jorge Arango found Claude Code superior for creating a taxonomy for his website (over other tools he’d tried). This past week, Marc Baselga touted his experience using Claude Code for non-coding tasks:

"Yes, it's incredible for coding. But here's what surprised me: it's equally powerful for content workflows, data cleaning, and complex task sequencing. The tool is scary good at breaking down complex problems into subtasks and knowing exactly when to keep you in the loop versus when to just execute."

A challenge for non-coders? You've got to run Claude Code through the Terminal (command-line interface), which can feel intimidating. Marc maintains, however, that even as a "not super technical" person, he found it easy to pick up in a couple of hours. OK, I am committing this week to take Claude Code for a spin! (Claude Sonnet gave me some great non-coding, beginner-friendly task suggestions.)

LinkedIn post
⚡ Quick Read (1 minute)

That’s it for this week.

Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights. 👋

All the best, Heidi
