🔑 Key AI Reads for September 24, 2025

Issue 16 • Chrome's AI-infused rebirth, converging on a definition for "AI agent," four key vibe-coding strategies from Google DeepMind's design team, and Figma's new "Prompt to Edit"

AI Browsers

Chrome gets a major AI overhaul with Gemini integration

"Now that it's looking like Chrome will remain in the Google fold, the browser is undergoing a Gemini-infused rebirth. Google claims the browser will see its most significant upgrade ever in the next few weeks as AI permeates every part of the experience. The most prominent change, and one that AI subscribers may have already seen, is the addition of a Gemini button on the desktop browser. This button opens a popup where you can ask questions about—and get summaries of—content in your open tabs."

Google is positioning this feature, dubbed Gemini in Chrome, as a personal AI assistant designed to help you understand web content and complete tasks more efficiently.

Another feature rolling out is AI Mode in the omnibox, which lets you run AI queries directly from Chrome's address bar.

Later this year, Google says it will add agent functionality to Chrome. From Google's official announcement:

"In the coming months, we’ll be introducing agentic capabilities to Gemini in Chrome. These will let Gemini in Chrome handle those tedious tasks that take up so much of your time, like booking a haircut or ordering your weekly groceries. You tell Gemini in Chrome what you want to get done, and it acts on web pages on your behalf, while you focus on other things. It can be stopped at any time so you’re in control."

With Chrome holding around 70% of the global browser market, these updates will very likely normalize AI as a way of interacting with the web. The timing is notable—just as the "AI Browser Wars" are heating up. (Atlassian recently acquired The Browser Company, maker of the AI browser Dia, and Perplexity has been heavily pushing its AI browser Comet.)

AI Agents

The definition of "AI agent" gains momentum and convergence

Simon Willison argues that "agent" now has a useful, shared meaning in AI engineering:

"I think 'agent' may finally have a widely enough agreed upon definition to be useful jargon now. I’ve noticed something interesting over the past few weeks: I’ve started using the term “agent” in conversations where I don’t feel the need to then define it, roll my eyes or wrap it in scare quotes. This is a big piece of personal character development for me! Moving forward, when I talk about agents I’m going to use this: An LLM agent runs tools in a loop to achieve a goal."

His post goes on to trace the usage of the term "agent" in AI and to break down his definition in more detail. The whole post is worth a read, particularly his caution against assuming agents can fully replace humans:

"Amusingly enough, humans also have agency. They can form their own goals and intentions and act autonomously to achieve them—while taking accountability for those decisions. Despite the name, AI agents can do nothing of the sort."

AI Product Development

Vibe coding: Four lessons from Google DeepMind's design team

In this excellent episode of the Dive Club podcast, Google DeepMind's Head of Design, Ammaar Reshi, shares practical examples and strategies for vibe coding, demonstrating the amazing work he and his team did on Google's new AI Studio.

His four key strategies for vibe-coding like a pro:

1. Export constraints directly from Figma: Lean into developer mode and treat Figma's CSS/specs as instructions you can pass directly to your AI coding assistant (see the sketch after this list). This bridges the fidelity gap many designers complain about.

2. Iterate your prompts: Ask AI to rewrite your prompts in four different ways—sometimes it finds better technical phrasing for what you're trying to achieve. You might not know how to ask for "concurrent threads," but AI can translate your intent into code-speak.

3. Build your inspiration library: Collect components from GitHub and open-source repositories as references. Copy-paste that React component code directly into your prompt for instant polish.

4. Assume everything is possible: "Go in with the assumption that nothing is impossible... with that mindset, you're going to get so so far." In the video, he shows a scroll bar with timeline navigation. He wouldn't have mocked it up in Figma because he wasn't sure the web could even do it—but vibe coding proved it could.
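
To make tips 1 and 3 concrete, here is a rough, hypothetical sketch of how the pieces could fit into a single prompt: CSS copied from Figma's Dev Mode plus a reference component collected from an open-source repo. The snippet names and values are illustrative, not taken from the talk.

```typescript
// Hypothetical sketch of tips 1 and 3: bundle Figma Dev Mode CSS and a
// reference component into one prompt. All values below are made up.
const figmaDevModeCss = `
.cta-button {
  padding: 12px 20px;
  border-radius: 8px;
  background: #1a73e8;
  font: 500 14px/20px "Google Sans", sans-serif;
}`;

const referenceComponent = `
export function PillButton({ label }: { label: string }) {
  return <button className="pill-button">{label}</button>;
}`;

const prompt = [
  "Build a React call-to-action button.",
  "Match these specs exported from Figma Dev Mode exactly:",
  figmaDevModeCss,
  "Use this open-source component as a structural reference:",
  referenceComponent,
].join("\n\n");

console.log(prompt); // paste the result into your AI coding assistant
```

The point is less the code than the habit: the design spec and the reference implementation travel together into the prompt, so the assistant has both the constraints and a working pattern to imitate.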

The video is really worth watching to see Ammaar in action.

AI and the Design Process

Figma introduces natural language editing to the design canvas

Figma is introducing an AI-powered "Prompt to Edit" feature that lets you edit and refine your designs simply by typing what you want. Instead of clicking through menus and panels, you’ll be able to type commands like "make this button blue" or "add 20px padding" directly to modify designs on the canvas. Prompt to Edit is currently in alpha.

The feature goes beyond simple edits. You can convert static layouts into interactive prototypes with a single prompt, point at specific elements for targeted changes, and start designs from scratch with prompts and visual references. The alpha is limited to 5,000 users on paid Figma plans (it excludes Starter and EDU accounts).

You can see it in action in this YouTube short.

Final quick thoughts…

AI-forward Notion 3.0 is here. From The Verge: "While Notion users previously constructed pages and databases manually, now the [Notion] agent builds both for them. Agents can also search for information beyond the Notion workspace and across connected tools such as Slack and the internet." You can see it in action on YouTube (1:30-minute demo). And read more about it in Notion’s announcement.

How in the world did boring old Oracle become the “it company” in AI? Nicholas Thompson, CEO of The Atlantic, has the best explanation I've found in this video short (2 minutes) on LinkedIn. This is seriously my favorite twist-of-fate tech story of the year.

That’s it for this week.

Thanks for reading, and see you next Wednesday with more curated AI/UX news and insights. 👋

All the best, Heidi
