
I believe that until GPT-5 arrives, Claude 3.5 has the strongest reasoning ability. I personally use both GPT and Claude.

Current tool assistance:

For AI painting: GPT-4o/Midjourney

For writing assistance: Claude 3.5

For programming assistance: Claude 3.5

For AI market analysis: GPT-4o


I'm with you on most of these! DALL-E 3 and Midjourney are my go-tos for images, depending on the need (DALL-E 3 for cartoons, Midjourney for photographic stuff).

In your view, what makes GPT-4o better for market analysis?


Awesome article! Love me some free stuff, and I need to start using Claude because the consensus seems to be that it's the leading frontier model now, so...

But something has been bugging me. Why free? Nothing is free, especially on the internet, so what's the end goal for these companies? I can think of 3 possible reasons:

1. Normalize usage, so upcoming products/services are easier for consumers to adopt or to integrate into business products.

2. We are the training data, like with a lot of consumer products - we are the product. So in addition to us refining the results, ads may eventually come, or other nefarious ways to capture our eyeballs.

3. Traditional freemium upsell models. They all have this (I think), where for a fee you get better/more/faster results.

I might be missing one or more. Daniel, what do you think?


Glad you liked the article!

I think all of your guesses sound reasonable to a degree.

Here's my take:

1. That's part of it, but it doesn't explain why they'd make their *best* models free. You could normalize usage with a crappier free model, like the original GPT-3.5-powered ChatGPT.

2. Most AI companies definitely use customers' interactions with LLMs for future training, but:

a) This is true for the paid version as well - you don't automatically block training by going paid (unless it's something like a "Teams" plan in ChatGPT which is specifically marketed as being for data-sensitive business use cases and opts your data out of training by default.)

b) Users can opt out at will. (But how many are even aware of this and will do so is another question.)

Perhaps the free model casts a wider net and allows for more training data to come through. Still, this doesn't quite explain why they choose to make their *best* models free. They might as well collect human input for training with a lower-tier model.

3. This to me is the most likely explanation, because it aligns with how every top model's free tier is currently structured:

GPT-4o free doesn't have DALL-E 3 access and has much lower usage limits. You need to go paid for more.

Claude 3.5 Sonnet's free tier allows for maybe a dozen messages per day, which isn't nearly enough for serious power users. Also, it doesn't let you use the "Projects" feature to organize your work. You need to go paid to unlock it.

Gemini 1.5 Pro inside Google AI Studio also has rate limits (max 50 messages per day).

So I think they're using these as freemium "teasers" to get people to sample the best there is, nudging them to upgrade for higher usage limits and access to additional features.

Then there's the competitive factor. Not so long ago, the best models were paid-only offers. Then, Google made Gemini 1.5 Pro accessible for free. Soon after, OpenAI did the same with GPT-4o. And then finally Claude 3.5 Sonnet came out for free as well.

At this point, I guess the big players see having a "limited free" version of their best model as the way the game's played. If you're the one company with a frontier model not offering a free sample, you're behind the curve.

But all of that is pure speculation on my part.

Would've been fun to have been a fly on the wall when these discussions and decisions took place internally at OpenAI, Google, and Anthropic.


Agree, and it's super interesting. I think you're right that for short-term revenue/cost recovery, freemium is the game, and it's easy enough to play. I do think there's also some element of #1, because we're still really at the very beginning, and they want Pandora's box fully opened to offset regulatory impact and also build brand for future products.

I think we'll see a step change in the future that all these guys are working on, vs. the evolution of the current models, and it may come with very different ways to monetize.


It's interesting in this regard that, as of about a week ago, the option to choose a model has been removed from the free GPT plan. At least here in the Netherlands, we can now only access GPT-3.5. Maybe this was already the case earlier and I missed the news. @Daniel Nest: did you know this and have you already written about it? It seems you're right, Andrew, and I'm curious who will follow. (Also, the new Claude Haiku will only be available through the API, not even in the 'regular' paid plan. But that may change. There is certainly some experimenting with monetization going on already, Andrew!)


Hey Maurice,

It's true that you can no longer pick the model directly as a free ChatGPT user, but the default model is now GPT-4o, not GPT-3.5. You get a limited number of messages per hour, and when those are used up, you drop down to GPT-4o mini until the next reset.

Here's my screen for this: https://i.imgur.com/1zlEMsH.png

I believe OpenAI have completely discontinued GPT 3.5 inside the customer-facing interface. Here's an old Reddit post about it: https://www.reddit.com/r/ChatGPT/comments/1e6js32/gpt_35_is_gone_sad_moment_all_pay_your_respect_to/

They still offer fine-tuning on GPT-3.5 Turbo for developers, but the costs are significantly higher than for GPT-4o (https://openai.com/api/pricing/).

As such, I don't see any advantage for OpenAI to use the obsolete, more expensive model inside ChatGPT.

As for Claude 3.5 Haiku, as I understand it, it has a fairly expensive per-token price while being less capable than Claude 3.5 Sonnet, which I'm guessing is why they decided to only make it available via the API to developers who really need its coding abilities. Maybe if they bring the price down at some point, it'll become part of the standard claude.ai offering.
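For anyone curious what "API only" means in practice: developers call the model programmatically instead of chatting with it on claude.ai. Here's a rough sketch using Anthropic's Python SDK - the model ID and prompt are just illustrative assumptions on my part, so check Anthropic's docs for current details:

```python
# Rough sketch (not from the article) of calling Claude 3.5 Haiku via Anthropic's API.
# Assumes the official `anthropic` package and an ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-haiku-20241022",  # model ID at the time of writing; verify in the docs
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Refactor this function to avoid nested loops: ..."}
    ],
)

print(message.content[0].text)  # the model's reply as plain text
```

That extra setup is trivial for developers but a real barrier for casual users, which fits the "developers who need it" positioning.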

Having said all that, there's no doubt that AI labs will have to find a sustainable way forward to make sure they at least break even on inference costs, so I wouldn't be surprised if future, more powerful models aren't made available for free in the way we've gotten used to up to now.


The only 3 models I've used in the last month are Gemini, GPT-4o, and Perplexity. I can truly see the value each brings for a particular task.


What's your primary use case for each and where does it excel in your opinion? I guess Perplexity is for research, as intended?


Yeah, but Gemini is also very good for research. I think I can get somewhere quickly with Gemini, but it's probably not going to be as thorough as either Perplexity or GPT... but I also tend to prompt with a conversation, not just with one question. Still, sometimes it can be useful to have more information sooner, and that's where Perplexity can shine.

GPT-4o, or ChatGPT, or OpenAI, or whatever we're calling the flagship service this week, is the best all-around. It's clearly better at image generation and editing, and probably at brainstorming/being "creative". I don't use any of them to brainstorm much at this particular moment, but that might change... but from the limited brainstorming I've done, there is nothing better than GPT-4o today.


Makes a lot of sense. But do give Claude 3.5 Sonnet a spin, especially with its Artifacts window. It's really growing on me, and it's entirely free to try.


I will at some point! It is not a high priority since everything's working right now, but I'll get curious and excited soon enough. (and have free time)


It can't access the web so it's not useful for current research, but for any general brainstorming, feedback, or turning input into something interactive within Artifacts, it's great. So if you have some go-to “beta reader” prompt, just throw it into Claude and see how it compares to what you're used to.


Good point about current research. I guess I am very plugged in to news stories, so I don't tend to use any LLMs for news dives. That could also change.


Nice article. It's great to have so many solid tools available for free. Thanks for putting this list together.


Happy you found it useful!

And yeah, it's amazing what we're getting for free these days. Although I wonder if the sheer number of LLMs, text-to-image models, etc. is overwhelming for the casual user and might prevent some of them from jumping in.


You're right about it being overwhelming. We talked a little about that in our latest episode. Happy to post a link if you're interested.


Sure, feel free to share. Can't promise that I'll get to listen to it in full, but I'd be curious to hear the "AI overwhelm" section.


Here's the link: https://www.aigoestocollege.com/encouraging-ethical-use-ai-friction-and-why-you-might-be-the-problem/

We touch on one aspect of the overwhelm problem (the proliferation of models and the related investment) in the first five minutes. It's a little tangential, but I'm going to write an article about the overwhelm problem as it relates to higher ed in the next couple of weeks. Thanks!


Looks like you've linked to your front page - would you mind updating that with the exact podcast episode?


Sorry about that. My comment has been updated. Thanks
