Sunday Rundown #91: Google-Flex & Elephant Sombrero
Sunday Bonus #51: Fun Midjourney style references, chapter 5.
Happy Sunday, friends!
Welcome back to the weekly look at generative AI that covers the following:
Sunday Rundown (free): this week’s AI news + a fun AI fail.
Sunday Bonus (paid): an exclusive segment for paid subscribers.
Let’s get to it.
🗞️ AI news
Here are this week’s AI developments.
👩‍💻 AI releases
New stuff you can try right now:
Adobe now has a Customize feature that lets Adobe Stock users directly tweak stock images by removing background, re-cropping to a new aspect ratio, etc.
Bolt launched “Figma to Bolt,” a feature that turns Figma designs into full-stack apps in one click.
Cohere launched Command A, a fast 111B enterprise model optimized for retrieval, agentic tool use, and multilingual tasks, with 150% higher throughput.
Convergence introduced DeepWork, an agent that coordinates many separate AI agents to handle complex multi-step workflows and tasks. (Paid users only.)
Freepik now lets you use Google’s top-tier Veo 2 model for image-to-video generation.
Google has been shipping like crazy this week:
The new tiny but extremely capable Gemma 3 family is out. The smallest model can run on a single CPU, and despite its much smaller size, the biggest sibling sits above reasoning models like o3-mini in the Chatbot Arena.
Gemini 2.0 Flash Experimental can now generate images from text prompts natively—not by using Imagen 3. (Try it for free on Google AI Studio.)
Google AI Studio now parses YouTube URLs, so you can paste them into your chats for Gemini to analyze. (This means you can, for instance, skip the “download and upload” steps of my guide to analyzing video meetings with Gemini, as long as the recordings are available on YouTube.)
NotebookLM got several upgrades: it’s now powered by Gemini 2.0 Flash Thinking, provides citations inside notes, and lets you pick custom sources for the “Audio Overview” podcasts.
Gemini can now (optionally) personalize its responses based on your search history and interactions with apps like YouTube and Google Photos.
Deep Research is now available for free to everyone. You no longer need a Gemini Advanced subscription to try it out.
Guohao Li dropped OWL, an open-source alternative to Manus AI that sits at #1 on the GAIA Benchmark among open-source projects.
Microsoft is rolling out AI-powered summaries to Notepad and a “draw & hold” feature for the Snipping Tool that automatically turns your squiggles into straight shapes.
OpenAI news:
A new toolkit for building AI agents includes the Responses API, which simplifies tool integration, and the Agents SDK for orchestrating agent workflows.
The Chat Playground is now the Prompts Playground, letting developers easily test, compare, and iterate on their prompts.
Perplexity AI now has a Windows app with voice input, keyboard shortcuts, and easy access to all Perplexity models.
Reka AI introduced Reka Flash, a compact, low-latency model competitive with o1-mini on multiple benchmarks.
Snapchat Platinum members can now use AI Video Lenses to insert AI-generated objects and characters into their Snaps.
Tencent released Hunyuan-TurboS, an ultra-large Hybrid-Transformer-Mamba Mixture of Experts (MoE) model that outperforms GPT-4o on many benchmarks.
🔬 AI research
Cool stuff you might get to try one day:
Meta’s upcoming Llama 4 family might consist of “omni” models with a special focus on improved voice capabilities.
📖 AI resources
Helpful AI tools and stuff that teaches you about AI:
“How to Write With AI” [VIDEO] - a great David Perell interview with Tyler Cowen, full of practical takes.
“OpenAI Model Comparison” [TOOL] - a handy tool by OpenAI that lets you compare its models side-by-side across dozens of dimensions.
🔀 AI random
Other notable AI stories of the week:
Google will upgrade Google Assistant on mobile devices over the coming months to be powered by the Gemini model family.
Sakana AI’s AI Scientist-v2 agent independently produced a research paper that passed peer review at an ICLR 2025 workshop.
🤦‍♂️ AI fail of the week
I just wanted a “cute robot trying to hide an elephant under a carpet.”
Where did we go so wrong, Sora?
💰 Sunday Bonus #51: Four fun Midjourney style references (Vol. 5)
Today I’ve got the fifth installment of funky --sref codes for you to use in Midjourney. The first four are right here:
If you want to learn more about --sref, you can:
Check out my “Midjourney Masterclass” workshop.
To use the style reference codes, append --sref [number] to the end of your prompt.
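For example, with today’s “Dragon” subject, a prompt would look something like this (the number shown is just a placeholder, not one of today’s actual --sref codes):

```
/imagine prompt: a dragon soaring over ancient ruins --sref 1234567890
```

Midjourney treats everything after a double dash as a parameter, so keep --sref and its number after the descriptive part of the prompt.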
Today’s showcase subjects are:
Dragon
Umbrella
Ancient ruins
I give styles “nicknames” for easy reference, but it’s the --sref number that counts.
Have fun!