🔑 Key AI Reads for October 15, 2025

Issue 19 • OpenAI's platform play, AgentKit and the challenge of visual workflow builders, the rise of vibe engineering, and why ontologies are having a moment

Frontier Models

OpenAI's push to make ChatGPT the new front door to the internet

At its DevDay this past week, OpenAI unveiled its most ambitious platform play yet: Apps inside ChatGPT. Using a new Apps SDK (built on MCP), developers can now create full-featured applications that run directly within ChatGPT conversations. Tag Zillow to browse homes, Canva to design posters, or Spotify to build playlists—all without leaving the chat interface. OpenAI is positioning ChatGPT as "the new front door to the internet," with over 800 million weekly users providing massive distribution for developers. Launch partners include Spotify, Canva, Figma, Coursera, and Expedia, with additional partners coming soon.

As Platformer's Casey Newton notes, the announcement draws parallels to Facebook's 2007 platform strategy and raises similar concerns about data privacy and monetization. ChatGPT stores users' most intimate conversations, making data leaks potentially more damaging than they were with Facebook. OpenAI executives promised to share only the "minimum necessary" information with developers, though details remain vague.

The revenue model also remains unclear; options include finder's fees, revenue sharing, and auctioned placements, any of which could compromise ChatGPT's helpfulness, much as SEO has degraded Google Search. Casey Newton concludes: "It's easy to put users first before the revenue comes in. Once you're operating a platform, though, the incentives can all start to look very different."

OpenAI is making a massive bet that ChatGPT can become the next dominant platform—a conversational layer sitting between users and the internet itself.

OpenAI’s platform play
☕ Medium Read (12 minutes)

Agentic AI

OpenAI introduces AgentKit, but is visual workflow the right approach?

Also at this past week's DevDay, OpenAI unveiled AgentKit—a visual canvas-based builder designed to help developers create AI agents faster. In the demo, OpenAI built a functional agent in under eight minutes by dragging and dropping nodes, connecting tools, and adding guardrails. The kit includes pre-built components, such as file search, MCP integrations, and an embeddable chat interface, all designed to reduce the complexity of moving agents from prototype to production.

The announcement sparked some debate around OpenAI's choice to use a visual canvas (a workflow builder). LangChain's Harrison Chase argued that visual workflow builders are getting "squeezed from both directions." For simple tasks, he suggests that no-code agents (which require only a prompt and tools) are becoming reliably good enough as models improve. For complex tasks, code remains the superior option as visual workflow builders become unmanageable after a certain level of complexity. As AI-assisted coding improves, the barrier to creating coded agents will continue to drop.

Meanwhile, Intercom's Emmet Connolly contrasted OpenAI's visual canvas approach with their own document-based agent builder, Procedures, which lets you start with natural language instructions and add deterministic logic only when needed. His key insight is that the fundamental design challenge of the AI era is figuring out how to blend deterministic (predictable, if-then logic) and probabilistic (fuzzy, LLM-powered) approaches. OpenAI's visual approach leans heavily into traditional deterministic patterns, while document-based builders embrace the probabilistic nature of LLMs from the start.
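To make the deterministic/probabilistic blend concrete, here is a minimal, hypothetical sketch in Python. The LLM call is stubbed out (a real builder would call a model API), and all names are illustrative rather than taken from OpenAI's or Intercom's products: predictable if-then rules run first, and a guardrail wraps the fuzzy model output.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a probabilistic LLM call (stubbed for illustration)."""
    return f"LLM answer to: {prompt}"

def handle_request(user_message: str) -> str:
    # Deterministic layer: fixed if-then policy handles cases that
    # must behave predictably, e.g. refund requests.
    if "refund" in user_message.lower():
        return "Refunds follow a fixed policy: please use the refund form."

    # Probabilistic layer: everything else goes to the model,
    # with a deterministic guardrail applied to its output.
    answer = fake_llm(user_message)
    if len(answer) > 500:  # guardrail: cap response length
        answer = answer[:500]
    return answer

print(handle_request("I want a refund"))          # deterministic path
print(handle_request("What plans do you offer?")) # probabilistic path
```

The design question both approaches wrestle with is simply where this boundary sits: a visual canvas draws the deterministic skeleton first, while a document-based builder starts from the model and adds rules like these only where needed.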

I highly recommend reading all three articles to get a solid "current state picture" of agent building approaches.

Introducing AgentKit
⚡ Quick Read (4 minutes)

Not another workflow builder
⚡ Quick Read (4 minutes)

A tale of two agent builders
☕ Medium Read (6 minutes)

Building with AI

The rise of vibe engineering

Simon Willison is proposing a new term that reclaims "vibes" from its association with AI-generated code. Unlike "vibe coding," where someone prompts an AI and accepts the output, "vibe engineering" requires seasoned professionals to operate at the top of their game. The practice rewards classic software engineering fundamentals: comprehensive test suites, strong documentation, effective code review habits, and the ability to manage what Willison calls "a growing army of weird digital interns who will absolutely cheat if you give them a chance."

What makes this approach noteworthy is the use of AI tools to amplify existing expertise rather than replace it. The better your foundational skills—research, planning, QA, version control—the more productive you can be with AI assistance. It's not about letting AI do your thinking, it's about using AI to expand what you can accomplish while staying firmly in the driver's seat.

Simon Willison on "vibe engineering"
⚡ Quick Read (5 minutes)

Knowledge Graphs

Why ontologies are having a moment (and what they actually are)

As organizations rush to implement AI, many are discovering a bottleneck they didn't expect: their data isn't structured in a way that AI systems can reliably understand. Enter ontologies—formal frameworks that define how concepts relate to each other within a domain. Think of them as the rigorous cousin of taxonomies, going beyond simple hierarchies to encode logical relationships, constraints, and rules about how things behave. While taxonomies tell you "this is a type of that," ontologies explain "this connects to that in these specific ways, under these conditions." This level of semantic precision is becoming increasingly essential because LLMs benefit from clean, well-structured, and semantically rich data to deliver accurate results. Without it, even the most sophisticated AI can misinterpret context or miss critical relationships.
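The taxonomy-versus-ontology distinction can be sketched in a few lines of plain Python. This is a toy illustration, not a real ontology standard (real work would use RDF/OWL); all names and constraints below are made up:

```python
# Taxonomy: bare "is-a" hierarchy — this is a type of that.
taxonomy = {
    "Condo": "Residence",
    "Residence": "Building",
}

# Ontology: typed relations plus conditions on when they hold.
# Tuples are (subject, relation, object, constraint) — illustrative only.
ontology = [
    ("Condo", "partOf", "Building", "exactly one containing building"),
    ("Residence", "hasOwner", "Person", "owner must be a legal entity"),
    ("Building", "locatedIn", "City", "required for every building"),
]

def is_a(term: str, ancestor: str, tax: dict) -> bool:
    """All a taxonomy can answer: walk the parent links."""
    while term in tax:
        term = tax[term]
        if term == ancestor:
            return True
    return False

print(is_a("Condo", "Building", taxonomy))            # True
# The ontology answers richer questions, e.g. which constrained
# relations apply to a given concept:
print([r for r in ontology if r[0] == "Condo"])
```

The taxonomy can only confirm that a condo is a kind of building; the ontology additionally encodes how it relates to buildings, owners, and cities, and under what conditions — the semantic precision LLMs and RAG systems benefit from.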

In a recent article, Jessica Talisman walks through "the ontology pipeline"—a process that progresses through controlled vocabularies, metadata standards, taxonomies, thesauri, and finally ontologies, culminating in knowledge graphs. The work is handled by ontologists, who function as specialized data architects designing how information is represented within an organization. The role requires fluency with semantic graph standards (RDF/RDFS, OWL, SHACL), the ability to write SPARQL queries, and a working understanding of how LLMs and retrieval-augmented generation (RAG) systems operate.

The bottom line: if your organization is struggling to get meaningful results from AI implementations, the problem might not be the AI—it could be that your knowledge systems aren't structured to support machine understanding.

The ontology pipeline
☕ Medium Read (10 minutes)

Becoming an ontologist
☕ Medium Read (7 minutes)

That’s it for this week.

Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights. 👋

All the best, Heidi
