21 Comments

Another point is that the additional friction of not having internet access is actually helpful for users, I think. It reduces the offloading of information gathering and synthesis, or at least makes it a bit more challenging, and that might be better for human users in the long term.

I don't want a future where I'm just validating AI outputs after it's done all the cognitive work for me; that's pretty dystopian in my opinion.

https://www.microsoft.com/en-us/research/uploads/prod/2025/01/lee_2025_ai_critical_thinking_survey.pdf


My early testing has me thinking that it might win me back. I've been kicking Claude to the side lately for some of the other new models and features like ChatGPT deep research and Grok.

I would like to see internet access at some point; it seems pretty standard by now. Regular users can work around this by using something like ChatGPT, Perplexity, or NotebookLM to build out briefs from online sources, then feeding those into Claude's project knowledge. That gives you full control of the sources it pulls from, which is often better than just letting it browse the web anyway.

Oh and the visuals/tables it creates within articles are pretty damn good.


I think I've gone through the same journey as you!

Claude 3.5 Sonnet briefly became my go-to model for a few months late last year because of its personality and creativity. But then we had so many new features launching elsewhere that I kind of put it on the back burner lately.

And I agree that you can usually work around live web browsing limitations, but it always requires additional effort to create that context, so I can absolutely see many people gravitating towards "natively online" models when given the choice. Let's see if Anthropic eventually gets around to unlocking Internet access for Claude.

Do you have examples of Claude's visuals other than tables that you were especially impressed with? I'd love to see those!


The lack of internet access seems to be a philosophical sticking point, not a technical one. Well before ChatGPT, Anthropic was worried about the dangers of giving LLMs access to the web as well as terminals (a la OpenAI’s code interpreter). We’ll see if the competitive pressures here change their stance.


Yeah, it definitely goes back to their focus on safety, reliability, and predictability rather than any technical challenges, as I also touched upon in a separate comment thread.

But regardless of the underlying reasons, the limitation isn't great as far as I'm concerned. Especially so when it hobbles a powerful reasoning model in certain tasks.

Also, the philosophical sticking point appears increasingly silly considering that it's relatively easy to grant Claude Internet access via alternative interfaces and tools.

So let's see if they finally cave!


This discussion led me down a rabbit hole in terms of MCP, which you can use to give the desktop version of Claude various tools like you mentioned. I might do a writeup on a DIY Claude desktop with internet access.
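For anyone curious what that DIY setup looks like: Claude Desktop reads MCP servers from a `claude_desktop_config.json` file. A minimal sketch, assuming the Brave Search reference server from the Model Context Protocol project (check the current MCP docs for the exact package name and required API key variable):

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

With an entry like this in place, restarting Claude Desktop exposes the server's search tools to the model, effectively giving it web access through the MCP tool-calling interface.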


That could be worthwhile, especially if more people find Claude + Internet access a useful proposition. (I linked to this tool in the post, which isn't fully free and seems to only be available for macOS for now: https://www.linkup.so/)


I tend to agree that how smart a model is is directly tied to how connected it is. Heck, I'm the same way! Ask me a question when I'm out in the middle of nowhere, and there's a non-zero chance I'll know the answer. Give me some time to "reason" and I might even up those odds a bit... but there's a cap.

Give me a connected laptop and ask me that same question? I'll get back to you in a few seconds with a cohesive answer. Give me more time, I'll write a Substack piece and try to draw some conclusions.

I also wanted to touch on the marketing considerations being in the driver's seat: I would expect this icky situation to persist from now on.


Yeah. I kind of understand that Anthropic doesn't want to "pollute" the model with crappy external output, but I think they can't get away with not plugging it into the Internet at some point. Nowadays, AI search is becoming an expectation.


It's also very frustrating if you're talking about, say, the state of NATO or the UN or whatever, and it's using data from any time prior to, like, today.


I need those real-time updates to Donald Trump's unfiltered chain-of-madness verbal vomit, damnit! Pull yourself together, Claude!


Ha! I'm talking about asking a question to understand (broadly) how geopolitics works, but if it's describing how the last 80 years have been, it's not describing 2025... so I'm getting a view on how things *used to be*, as crazy as that is.

I suspect fewer and fewer things will make sense if they're not connected, especially if you're trying to talk about nation-states or other things subject to change.


True, anything related to world affairs in general is tricky without web access.


Agree that lack of web search is a weakness. As are the limited context window and tight usage limits. It does make some banging data visuals, though!

I am actually glad Anthropic is doing their own thing and not making the fifth iteration of Deep Research, since all the other AI labs are already doing that.

There are some pretty glaring failure modes with deep research anyways.

https://www.ben-evans.com/benedictevans/2025/2/17/the-deep-research-problem


Thanks for sharing Benedict's article - a solid read!

And I agree, I mainly mentioned "Deep Research" as a throwaway joke. I don't think we need a million Deep Research clones out there.

I also agree that we have to be careful with outsourcing cognitive efforts to AI. In fact, that was the gist of the argument in my "Are we even ready for AI search?" article (https://www.whytryai.com/p/are-we-even-ready-for-ai-search)

But it's not a binary switch. There's a whole range of tasks between "Go out and do deep research, source verification, and thinking for me" and "Make a dozen quick, easily verifiable lookups for me faster than I could, directly in this interface."

I'd love for Claude to have the ability to bring non-critical but up-to-date information into queries without my having to feed it context, etc. So while web access isn't a panacea and we shouldn't outsource our brains to AI, I hope Claude will eventually get plugged into the web in order to make certain basic information gathering more friction-free!


Will check out your piece on AI search!

I agree that adding some basic internet features like reading URLs would be a great addition for Claude.


I couldn't use a model without internet access for the latest information! Early models I used like this felt like science experiments, time capsules. Is there a technical, legal, or other reason for the limitation?

Also, what's up with Claude's logo?


Agreed. It's funny how quickly we went from being amazed by the original ChatGPT, with no web access or other bells and whistles, to expecting those features.

From what I understood, the lack of web access is because Anthropic are generally more focused on safety and alignment so they don't want to risk polluting the model's answers with unverified or potentially harmful content. It's also a form of quality control, I guess, in that they want the way the model responds and the training data it uses to be predictable.

As for the logo: It's a classic Greek sign of an exploding orange, which symbolizes a burst of knowledge and the exponential power of cumulative experience.

I might have made that last part up completely.


It looks like a butthole to me. The two-year anniversary of ChatGPT is coming up in a couple of weeks. If that's like the AI iPhone moment, this next year will see more than just internet-connected LLMs!


"It looks like a butthole to me." - everyone on the Internet commenting on just about anything.

Also, the original ChatGPT (3.5) came out in October 2022, so we've already crossed the anniversary mark. Unless you're referring to the GPT-4 upgrade?


Oh, my bad, that makes more sense. I remembered the hullabaloo hitting fever pitch around Thanksgiving here. Some craziness about to go down this year!
