Veo 3, Gemini, & the End of Web Designers?
On May 21, 2025, I watched Google I/O with a coffee in hand, fully expecting the usual parade of buzzwords.
What I got instead?
A total blitz of tools, agents, and announcements so overwhelming, I considered applying for early retirement.
One of my "AI Engineer" friends keeps saying "let's buy some cows and chickens, I'm becoming a farmer!"
I watched at 2x speed. Forty minutes later, my browser was auto-filling tax forms, my YouTube feed featured a Spielberg-grade trailer generated by "Flow," and my credit card was whimpering at a $249 "AI Ultra" pre-charge.
If you blinked, here’s what actually happened.
Let’s unpack 9 things you absolutely need to know.
TL;DR for Busy Friends
- Gemini 2.5 and its zippy sibling Flash jump to the top of nearly every reasoning leaderboard.
- Agent Mode turns Chrome into an intern that clicks, buys, and comments for you.
- Stitch & Jules threaten both front-end designers and junior devs.
- Flow, Veo 3, Imagen 4 bring sound-on, Hollywood-level video.
- Astra + XR Glasses + Beam = always-on AR.
- Gemma 3N gives open-source devs real teeth.
- Prepare for a $249/mo Ultra bill (30 TB of storage says hi).
1. Gemini 2.5 Crushes the Competition
Google dropped Gemini 2.5 a couple of weeks ago and it instantly became the most powerful reasoning model in the world.
Yes, even stronger than OpenAI's o3 and o4 (I'm not talking about GPT-4o 😵💫).
I've shared a long qualitative comparison in my previous newsletter issue.
Even the budget version — Gemini Flash — dethroned some of the best full-size models on those leaderboards.
That’s like your car’s cheaper cousin suddenly winning a Formula 1 race.
Gemini’s multi-modal power means it’s not just about text anymore. It reasons across images, video, code, and documents. Great for customer support, business intelligence, and generative design.
I'll admit it - I got stuck in the "ChatGPT or nothing" mindset for a while. Now I find myself using Gemini 2.5 more and more, on my phone and for daily tasks. It feels like driving a Rolls-Royce ^^
(BTW — if you haven't installed NotebookLM on your phone yet, YOU SHOULD. It's my new app for digesting books.)
2. Stitch: The End of Front-End?
Before you read this, visit this website and write any prompt.
What do you think?
Google introduced Stitch — a tool that automatically generates user interfaces.
Web designers, you might want to look away. This one hurts.
It doesn’t just throw together a layout. It actually designs UIs for AI agents to use, not humans.
Picture an AI-designed interface used by another AI agent… in an AI browser.
Example:
You want to build an onboarding flow. You describe it in text.
Stitch generates the flow, buttons, layout, styling, and connects it to backend logic.
No Figma. No devs. No pixels pushed.
This is just one prompt ↓

3. Jules: An Agent that Codes Your Agents
Yes, we’ve reached the “agent that codes other agents” phase.
Jules is asynchronous, agentic, and terrifyingly good at writing code autonomously.
Coders used to worry about AI replacing junior devs. Now they're wondering if they're next.
The promise:
You describe a feature, and Jules builds, tests, and integrates it into your app with no human intervention.
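Conceptually, an autonomous coding agent like this runs a generate-test-repair loop until the test suite passes. Here's a toy Python sketch of that control flow — every function name here is a hypothetical stand-in for illustration, not Jules's real interface (Jules runs as a hosted service, not a local library):

```python
# Toy generate-test-repair loop: the control flow behind autonomous
# coding agents. All names here are hypothetical stand-ins.

def generate_code(spec, feedback=None):
    """Pretend model call: returns a candidate implementation."""
    if feedback is None:
        return "def add(a, b): return a - b"   # first draft has a bug
    return "def add(a, b): return a + b"       # repaired after feedback

def run_tests(source):
    """Execute the candidate against a tiny test suite."""
    namespace = {}
    exec(source, namespace)
    try:
        assert namespace["add"](2, 3) == 5
        return True, None
    except AssertionError:
        return False, "add(2, 3) should equal 5"

def build_feature(spec, max_iters=3):
    """Loop until the candidate passes, feeding failures back to the model."""
    feedback = None
    for _ in range(max_iters):
        candidate = generate_code(spec, feedback)
        ok, feedback = run_tests(candidate)
        if ok:
            return candidate   # ready to integrate
    raise RuntimeError("agent gave up")

result = build_feature("an add(a, b) function")
print(result)   # def add(a, b): return a + b
```

The interesting part is the feedback edge: failing test output goes back into the next generation call, which is what lets the agent converge without a human in the loop.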

4. Flow: The Death of Low-Effort YouTube Channels
This one’s scary.
Flow generates entire cinematic experiences using AI.
Script. Voices. Music. Video. Even sound design.
All generated. No camera. No crew.
Let me point out something very important here:
Veo 3 isn't the first video generation model. But it's the first to generate high-quality video with synchronized, equally high-quality sound. For me, it's a pure tech masterpiece.
Check out this very nice compilation of Veo 3 generated videos (created by someone called Ari Kuschnir).
5. The AI Ultra Payment Plan (It Hurts)
To unlock full Gemini power, Google offers the AI Ultra Plan:
- $124.99/month for 3 months
- Then $249/month
- 1 free month + 30 TB storage (yes, terabytes)
If you feel like every week introduces another $100 AI bill, you’re not alone.
My AI expenses are now competing with rent.
6. Gemma 3N: Open-Source AI with Real Teeth
Gemma 3N is Google’s newest open-source model, approaching the level of Claude Sonnet 3.7.
(That's Google's claim; I haven't compared them myself, to be honest. Plus, Claude 4 is apparently coming soon 😵💫.)
Developer perks:
- No license drama
- Integrate into your products
- Monetize directly
- Extend it however you want
Use it to build, launch, and scale your side project without asking Google for permission.
You can try it out in Google AI Studio and Google AI Edge.
7. Agent Mode Is Official (Formerly Project Mariner)
Google’s browser agent can now click, fill, and interact with websites — automatically.
This is not search.
This is action.
What's Google's browser agent?
Google’s Browser Agent Mode is an AI-driven feature in Chrome (and the Gemini app) that autonomously interacts with web pages on your behalf.
Instead of just fetching links, it can understand page elements (text, images, forms, even raw pixels), perform multistep tasks like booking flights or filling out forms, and execute actions—from clicking buttons and scrolling to leaving comments—using natural-language commands.
What can it do?
- Fill out tax forms
- Sign you up for classes
- Buy groceries
- Argue with strangers on Reddit
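Under the hood, any browser agent like this boils down to a perceive-decide-act loop: parse the page, map a natural-language goal to concrete actions, execute them one by one. Here's a minimal, purely illustrative Python sketch — `FakePage` and the action vocabulary are my own inventions for this example, not Google's API:

```python
# Illustrative perceive-decide-act loop for a browser agent.
# FakePage and the ("fill"/"click") action vocabulary are invented
# for this sketch; Agent Mode exposes no such public API.

class FakePage:
    """A stand-in for a real web page the agent can inspect and mutate."""
    def __init__(self, fields):
        self.fields = fields          # form inputs: name -> value
        self.clicked = []             # buttons the agent has pressed

    def fill(self, name, value):
        self.fields[name] = value

    def click(self, button):
        self.clicked.append(button)


def run_agent(page, plan):
    """Execute a plan (derived from a natural-language goal) as page actions."""
    for action, target, value in plan:
        if action == "fill":
            page.fill(target, value)
        elif action == "click":
            page.click(target)
    return page


# A plan the model might derive from "sign me up for the Tuesday class"
plan = [
    ("fill", "name", "Ada Lovelace"),
    ("fill", "class", "Tuesday 18:00"),
    ("click", "submit", None),
]

page = run_agent(FakePage({"name": "", "class": ""}), plan)
print(page.fields["class"])   # Tuesday 18:00
print(page.clicked)           # ['submit']
```

The hard part in the real product isn't this loop — it's the "perceive" step, where the model has to read arbitrary page elements (or raw pixels) and translate your goal into a reliable plan.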

8. Project Astra: Live Vision with Low-Latency AI
Imagine walking through the forest with your camera on.
You ask, “Is that mushroom edible?”
Astra responds in milliseconds.
It’s real-time visual understanding using your camera feed.
Use it for:
- Identifying food
- Tracking calories
- Explaining street signs abroad
- Helping visually impaired users
- Playing a live game of “What’s That Thing?”

9. Android XR Glasses + Project Beam
Yes, Android XR glasses are real.
Google is combining this with Project Beam — which turns 2D video into 3D experiences.
Daily Zoom calls might soon feel like a Pixar short film.
Casual + business uses:
- Immersive education
- More engaging virtual meetings
- …and some NSFW stuff we can’t talk about
What’s Next?
Google has just set off its fireworks.
I'm pretty sure other companies will jump on the bandwagon soon (they always do).
AI grew legs, moved into your browser, challenged UX designers, made a movie, built a website for another AI to use… and then rewrote its own CSS.
If 2024 was about foundation models, 2025 is about AI agents that think and act.
What do you think? What cool stuff are you building?
See you in the comments (or the replies Gemini will write for you 😊).
Until the next one,
— Charafeddine