Sunday Rundown #86: Deep Thinkers & Phantom Bovine
Sunday Bonus #46: Making and sharing apps with o3-mini
Happy Sunday, friends!
Welcome back to the weekly look at generative AI that covers the following:
Sunday Rundown (free): this week’s AI news + a fun AI fail.
Sunday Bonus (paid): a goodie for my paid subscribers.
Let’s get to it.
🗞️ AI news
Here are this week’s AI developments.
👩‍💻 AI releases
New stuff you can try right now:
Alibaba unleashed an array of models this week (try them for free at Qwen Chat):
Qwen2.5-Max, a large-scale MoE model that beats GPT-4o, Claude-3.5 Sonnet, and DeepSeek-V3 (but not R1) on multiple benchmarks.
Qwen2.5-VL, a vision-language model family that’s better at parsing and understanding image and video inputs.
Qwen2.5-1M, a long-context model that can handle up to one million tokens.
DeepSeek released Janus-Pro-7B, a model that combines image understanding with image generation. (Spoiler alert: It makes crappy images for now.)
Genspark now offers a Deep Research agent that turns your request into a research plan, processes hundreds of pages, and gives you a comprehensive report. (Get a free month of Genspark Plus with my invite link.)
Google now uses Gemini 2.0 Flash to power its Gemini chatbot, making it both faster and smarter.
Hailuo AI launched a T2V-01-Director feature that lets you control camera movement using natural language descriptions.
Hugging Face integrated four inference providers—fal, Replicate, Sambanova, and Together AI—into its Hub, giving developers more flexibility.
KREA AI introduced character consistency for the Hailuo model, so you can upload a reference image and preserve its appearance in new video generations.
Luma now lets users upscale videos to 4K resolution directly on the platform.
Microsoft made the “Think Deeper” feature in Copilot available for free. It uses OpenAI’s o1 model under the hood for complex searches and reasoning tasks.
Mistral released Small 3, an extremely fast yet capable model that outperforms larger ones like GPT-4o-mini and Llama 3.3 70B on multiple benchmarks.
OpenAI has a few updates:
Released o3-mini, the reasoning successor to o1-mini that is cheaper, smarter, and also available to free ChatGPT users.
Rolled out the updated custom instructions, screen sharing, and live video in Advanced Voice mode to most European users.
Perplexity integrated a US-hosted DeepSeek R1 model into its Pro Search feature. (Free users get 5 daily “Pro” searches.)
Pika Labs is cooking:
The upgraded Pika 2.1 video model is out, featuring 1080p resolution, sharp details, and accurate, lifelike motion. (Requires a paid plan.)
There’s now a “Turbo mode” that generates videos three times faster while using 7x fewer credits.
Riffusion is back, a year after my Suno vs. Riffusion showdown. Its public beta music model FUZZ generates full-length tracks in one go. (Try it for free.)
🔬 AI research
Cool stuff you might get to try one day:
KREA AI is rolling out Krea Chat, which incorporates the platform’s features in a unified chat interface (powered by DeepSeek).
Meta is working on making its Meta AI assistant more personalized and capable of remembering key details about you from your interactions.
📖 AI resources
Helpful AI tools and stuff that teaches you about AI:
“Copyright & Artificial Intelligence: Copyrightability” [REPORT] - the US Copyright Office’s guidelines on which AI-assisted works are copyrightable.
“How to Prepare for AGI” [VIDEO] - Reid Hoffman on Dan Shipper’s Every podcast.
“On DeepSeek and Export Controls” [ARTICLE] - a somewhat divisive take by Anthropic CEO Dario Amodei.
“OpenAI o3-mini System Card” [PDF] - OpenAI’s report on the safety testing and training details behind the model.
“o3-mini and the ‘AI War’” [VIDEO] - a deep-dive into o3-mini by AI Explained.
🔀 AI random
Other notable AI stories of the week:
OpenAI is making government-level moves:
ChatGPT Gov is a custom version of ChatGPT for US government agencies that grants them access to ChatGPT Enterprise features and capabilities.
The US National Laboratories will get access to OpenAI’s reasoning models to “supercharge their scientific research.”
🤦‍♂️ AI fail of the week
I asked Sora for a UFO landing next to a peasant milking a cow. See if you can spot the missing detail. (It’s subtle.)
💰 Sunday Bonus #46: Stupid simple way to make and share apps with o3-mini
The new o3-mini is a coding powerhouse (among other things).

It’s at least as good as the larger o1 model, which recently had me excited:
The o3-mini can often create entire apps in one take. No more endless back-and-forth loops with Bing and ChatGPT! It’s also really fast, so you can iterate and fix any errors quickly.
Better yet, o3-mini is available to free ChatGPT accounts (rate limits apply), which means just about everyone can access the current best coding model in the world.
But here’s the thing: Getting o3-mini to spit out polished code is only half the puzzle.
Then comes the part where you have to figure out how to:
Run the code and test the app.
Share the finished app with others.
For instance, even though I was pumped about my o1-generated tool above, it took me a solid hour to get it working properly. I had to learn how to set up and run Python, install the required libraries, tweak local files and folders to solve errors and conflicts, ask o1 for multiple code iterations, etc.
And when I was done? I had a tool that worked as expected…on my computer. It wasn’t something I could easily share with others.
That’s why I set out to find a super simple tool stack and workflow that lets even 100% code-illiterate people like me:
Request an app from o3-mini.
Run and test the app online without installing anything or setting up coding environments.
Share a link that lets everyone else use the app instantly.
Do all of the above for free.
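To make those goals concrete, here’s a minimal sketch of the kind of single-file app o3-mini can spit out, paired with one possible “no-install” combo: pasting the code into a free Google Colab notebook and letting Gradio print a temporary public link. Colab, Gradio, and the tip-calculator example are just my placeholders for illustration, not necessarily the exact stack I landed on.

```python
# Purely illustrative: one possible free, no-install, shareable setup
# (a free Google Colab notebook + Gradio), not necessarily the stack I settled on.
# In a Colab cell, install Gradio first with:  !pip install gradio
import gradio as gr


def tip_calculator(bill: float, tip_percent: float, people: int):
    """Split a restaurant bill (plus tip) between diners."""
    people = max(int(people), 1)
    total = bill * (1 + tip_percent / 100)
    return round(total, 2), round(total / people, 2)


demo = gr.Interface(
    fn=tip_calculator,
    inputs=[
        gr.Number(label="Bill amount"),
        gr.Slider(0, 30, value=15, label="Tip %"),
        gr.Number(label="Number of people", precision=0),
    ],
    outputs=[
        gr.Number(label="Total with tip"),
        gr.Number(label="Each person pays"),
    ],
    title="Bill Splitter",
)

# share=True prints a temporary public gradio.live URL anyone can open.
demo.launch(share=True)
```

Run the cell, open the gradio.live link it prints, and anyone with that link can use the app in their browser for as long as the notebook keeps running.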
After some trial and error and plenty of frustration, I’m happy to report that I’ve cracked the code.