Key AI Reads for August 20, 2025
Issue 11 • ChatGPT-5 experience versus GPT-5 performance, vibe coding as legacy code generator, knowledge graphs going mainstream, redesigning AI beyond the chat box
Frontier Models
ChatGPT-5 experience versus GPT-5 performance
Something that sometimes gets lost in the discussion around the GPT-5 rollout is the difference between how everyday consumers experience the new model (through the chat interface, ChatGPT-5) and the performance of the model itself (where improvements are being reported by those who use it through the API). Box CEO Aaron Levie recently described very specific improvements they've observed with GPT-5 around contextualizing information, multimodal analysis, and data interpretation.
Ben Thompson, in his paid newsletter, tackles the difference between GPT-5 and ChatGPT-5:
"What I am more interested in, however, is not GPT-5, but ChatGPT 5, which to me is something distinct: itâs the AI user experience for the vast majority of people, as opposed to developers or programmers using the API.â
The key improvement for "the masses" with ChatGPT-5 is free access to a reasoning model, albeit mediated through the automatic model routing in the default/free ChatGPT-5 experience. Reasoning models were previously available only through paid accounts, and even those with paid accounts often didn't switch away from the non-reasoning ChatGPT-4o default.
The problem, of course, came with the initial ChatGPT-5 rollout: the automatic routing was not working properly, so the promised improvement for the masses actually resulted in degraded responses relative to ChatGPT-4o.
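To make the routing idea concrete, here is a minimal sketch of what a query router could look like. To be clear, this is a hypothetical illustration: the heuristics, model names, and `route` function are my own assumptions, not OpenAI's published implementation.

```python
from dataclasses import dataclass

@dataclass
class Query:
    text: str

# Hypothetical heuristics -- OpenAI has not published how ChatGPT-5's
# router actually decides which model handles a request.
REASONING_HINTS = ("prove", "step by step", "debug", "plan", "why")

def route(query: Query) -> str:
    """Pick a model tier: long or reasoning-flavored prompts go to the
    slower reasoning model, everything else to the fast default."""
    text = query.text.lower()
    if len(text) > 500 or any(hint in text for hint in REASONING_HINTS):
        return "reasoning-model"  # placeholder name for a "thinking" tier
    return "fast-model"           # placeholder name for the default tier

print(route(Query("What's the capital of France?")))     # fast-model
print(route(Query("Prove this algorithm terminates.")))  # reasoning-model
```

The point of the sketch is the failure mode: if a router like this misclassifies queries, users get the weaker model on hard questions, which is exactly the degraded experience described above.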
Another complicating factor is consumers becoming attached to "personality characteristics" of specific models. Because OpenAI initially removed access to the older models from the ChatGPT interface, some users felt a sense of loss.
Sam Altman on X: "If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology."
I'm linking here to the paywalled article by Ben Thompson, because I think his coverage of the complexities OpenAI faces (managing both a consumer experience and enterprise needs via the API) is so well done. I never hesitate to recommend subscribing to Ben's newsletter, Stratechery, for anyone who wants to understand the strategy aspects of technology.
ChatGPT 5, Product Trade-Offs, Personality and Model Upgrades ($)
Long Read (17 minutes)
Building with AI
Vibe code as legacy code
Steve Krouse's premise in a recent essay is simple: vibe-coded code that no one understands behaves like legacy code on day one. It's fast to create but slow to modify, debug, or extend, leaving you with code you can't confidently fix, and asking AI to fix it only compounds the debt. He maintains that the moment you stop understanding the code you ship, you're manufacturing technical debt.
That stance contrasts with a growing "English-as-code" narrative. Jensen Huang, CEO of Nvidia, has argued that human language itself is becoming a programming language, framing AI as a universal interface that makes "everyone a programmer." Andrej Karpathy's viral line, "the hottest new programming language is English," captures the same idea.
It remains to be seen how far English, as a human language, will replace programming languages. There is no doubt that, for now, vibe coding is enormously useful for prototypes and throwaways: work you don't intend to maintain. (Because it's only legacy code if you have to maintain it.) Krouse's bottom line mirrors team best practice for coding with AI: treat the AI like an intern, keep reviews rigorous, and remember that our human understanding of the problems to be solved remains central to creating software.
Vibe code is legacy code
⥠Quick Read (4 minutes)
AI in the Enterprise
With AI, knowledge graphs go mainstream
From Tony Seale on LinkedIn:
"Thereâs been a noticeable shift in the enterprise data world over the past year. You can feel it. SAP now has a knowledge graph. Netflix just published a fascinating piece on how theyâre using ontologies to âmodel once and represent everywhere.â ServiceNow acquired Data.World to make their customers' data âAI-readyâ. Samsung embedded a knowledge graph directly into their flagship smartphone."
If you're not familiar with knowledge graphs, they're a way to connect scattered data points into a unified map of relationships, providing a single, interconnected view where everything links together.
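To make that concrete, here is a toy sketch of a knowledge graph stored as subject-predicate-object triples; the entities and facts are invented for illustration.

```python
# A toy knowledge graph as subject-predicate-object triples.
# All entities and facts are invented for illustration.
triples = [
    ("Acme Corp", "acquired", "DataCo"),
    ("DataCo", "builds", "Catalog Product"),
    ("Catalog Product", "stores_data_in", "Postgres"),
    ("Acme Corp", "headquartered_in", "Berlin"),
]

def neighbors(entity: str):
    """Return every fact that mentions the entity, in either role."""
    return [t for t in triples if entity in (t[0], t[2])]

# One interconnected view: everything linked to "DataCo".
for subj, pred, obj in neighbors("DataCo"):
    print(f"{subj} --{pred}--> {obj}")
```

Real systems use graph databases and ontologies rather than a Python list, but the underlying shape is the same: entities as nodes, relationships as edges.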
For AI, a knowledge graph provides critical context:
"Everyoneâs experimenting with GenAI. But if you want answers that are accurate, explainable, and grounded in realityâyou need a structured, contextual foundation. A knowledge graph gives you that... Now that AI is pushing context and trust to the forefront, it seems that knowledge graphs are going mainstream."
Knowledge graphs are used for a specific implementation of RAG (Retrieval-Augmented Generation) called GraphRAG, which I covered back in Issue 6:
"GraphRAG uses a knowledge graph to give the LLM structured, semantic context it doesn't inherently possess. Think of traditional RAG systems as reading your documents like scattered sticky notesâeach fragment is disconnected from the others. GraphRAG is different: it builds a map of how ideas relate by using a knowledge graph."
Now is a great time to learn more about knowledge graphs! The video below by Emil Eifrem from Neo4j provides a fantastic overview of knowledge graphs, with resources to learn more.
GraphRAG: The marriage of knowledge graphs and RAG
Watch time: 19 minutes
Designing for AI
Redesigning AI beyond the chat box
Get a comfy chair, make some popcorn, and settle in to watch this amazing episode of Dive Club with Smashing Magazine founder Vitaly Friedman. He takes host Michael Riddering (Ridd) through a series of AI experiences that go beyond the standard chat box. Experiences highlighted include Perplexity, Consensus, Elicit, and Exa.ai.
Some highlights from the episode:
12:40: "We should not forget that searching, sorting, filtering -- all that stuff is great -- and we are just dismissing it for the sake of AI first."
23:00: "If the input is better, the output will be better. For me, it's about slowing down people when they are trying to articulate their intent."
29:10: "We can do so much better. This is not difficult stuff. [Established affordances] are something we've forgotten to use because we're getting a bit too excited about the AI hype."
If you're designing for AI, this episode is a must-watch.
Beyond chat: What's next for AI design patterns
Watch time: 57 minutes
That's it for this week.
Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights.
All the best, Heidi