5 Comments

Congrats on the Midjourney silliness! I know conquering a subreddit isn't one of your life goals (is it?), but it's still really cool to see your stuff recognized as creative and innovative.

Credit to Google: when they decide to catch up, they really get into it. I've seen Gemini's frontier models improve dramatically over the last 3 months. I think you turned me on to a model that was way better, then like a day later they launched another one. I just switched over to the updated new hotness, 1206, and am looking forward to trying it out.

OpenAI's o1 model is definitely powerful. I just noticed a cap on the number of questions you can ask per week. That caught my eye because I still haven't dived into the new business model there, but it makes sense that they want to sell this to crazy people. I'm just barely outside of that particular window, I think - sorry, OpenAI!


Yeah, it's a nice acknowledgment, even though I'm definitely not chasing any major league fame with my silly Midjourney creations.

Google's been stepping up pretty consistently for sure, not just with frontier models but also with all sorts of additional tools like NotebookLM, Learn About, Illuminate, MusicFX, and so on.

The o1 model is definitely not for everyday use by the average Joe. It's only marginally better (if at all) for run-of-the-mill tasks like drafting, creative idea generation, and the like. Its main advantage is the ability to spend significantly more time thinking through harder problems to get them right, so it's a niche use case. And with the current price tag, it really is only for a small subset of people. Most of us are perfectly fine with other frontier LLMs.


I want to push that fancy, expensive model, just to see what it can do, but I'm not sure of my own use case just yet. As you mentioned, it's only slightly better than other models for most of the tasks I need.

Maybe the next time I hit a wall with advanced voice mode, I'll hop over and try the uber-advanced reasoning thinky mode.


Thanks for putting this together, really well done!


Dario Amodei writes, "It is my hope, like some other people in the field — I think Demis Hassabis is also driven in this way too — to use AI to solve the problems of science and particularly biology, in order to make human life better."

One of the great ironies of our time is that AI developers are bad engineers. To be fair, this is true in many, most, or all other emerging technical fields beyond AI.

Good engineers look at a challenge holistically, bringing in all factors which can affect the desired outcome. They particularly look for single points of failure.

Bad engineers blindly race forward, ignoring inconvenient factors which they don't wish to be bothered with.

Bad engineers never ask, "How much knowledge and power can human beings successfully manage?" because they don't understand that this question is a potential single point of failure, and they don't wish to hear inconvenient answers that might present a threat to their work.

The greatest threat to our future may not be AI itself, but the new powers that will emerge from an AI accelerated knowledge explosion.

"Knowledge explosion, what's that?", asks the bad engineer.
