20 Comments

Okay. So, like 5% of us are about to get a whole lot smarter with these tools. We understand reasonably well that we are the easiest ones to fool (ourselves), and so we have to verify the things we're putting out there into the world. These tools are very much designed for people like us.

At the same time, these tools may well effectively make the rest of the population dumber, simply because they haven't practiced rigorous fact-checking every day as we have.

There's going to be a pretty big divergence in the next decade or two.

author

Yes, exactly.

That's a more nuanced dichotomy I had in my head that I could've made clearer.

With Perplexity, you pretty much had to be an AI enthusiast in the first place to know about and want to use the tool.

With ChatGPT's new search, many more casual users will be thrust into the "AI search" paradigm.

With Google now pushing "AI Overviews" to the rest of the world, almost everyone will be exposed to AI summaries without knowing about the caveats and limitations.


Good nuance. I like the added breakdown of different types of users, and since I use all three of those weekly (almost daily), I very much get it. We kind of live in this world, but it's basically a rabbit hole/echo chamber, and the rest of the world starting to use these tools (gradually, then suddenly) isn't all that intuitive for us to understand.

I'd say this reminds me of a recent election, but I don't like to make political observations too much here.

Here's a twist that could prove interesting: these agents that are beginning to crop up (the Jarvis leak is sort of the hot news story on them this week, if I'm not mistaken) may be employed as fact-checkers, going to all these websites to verify things.

Here's the thing about that: it's an arms race. Will AI search simply become the norm first, before any kind of rigorous fact checking could be put into place, leading to all sorts of updates to websites, new sites being built with information based on AI search, and so on?

author

Yeah, Jarvis is still a work-in-progress project (I mentioned it in my last roundup), but Claude's "computer use" is already a working, crude prototype.

The problem is that even if you let current LLMs use computers via screenshots/video and employ them as fact-checkers, you don't solve the reliability and hallucination issues. So to have AI that can verify information, we'd need a new architecture.

As for the period of AI search being adopted before people learn how to sanity-check it, that's exactly what I'm hinting at. We're in the equivalent of the early "email chain-letter hoax" days, where your friends forward you a giveaway from Bill Gates that you need to forward to 25 of your friends.

Before people internalize the "proper" way to use AI search, we'll first have many people trusting it by default simply because it's right there.


Ugh. It's social media all over again, isn't it?

author

But thankfully we figured that one out fully by now!


As a trainer who works with school librarians, I find this fascinating and scary at the same time. It's well known that teaching students information/media/digital literacy has always been difficult. Who has time for, or cares about, checking sources, especially when you're disengaged and just want to get your homework done... With AI literacy now on the agenda, this is going to become even harder. What are your thoughts on teaching this stuff to students? Is it the same old, or do we need to actually start doing something different in education? I would love to see engaged and empowered students who want to learn, but it feels that until we move away from teaching to the test, we are never going to solve this problem.

author
Nov 10·edited Nov 10Author

Hi Elizabeth, I hear what you're saying!

I don't have a background in education, but the optimistic part of me feels that the same things that make LLMs a risky proposition might also be flipped to make them especially good at teaching students AI literacy (among other things).

LLMs and AI chatbots have something that one-way traditional information search didn't: They can initiate a back-and-forth conversation.

What if AI search tools were pre-prompted to proactively inform the searcher (or student) that the information they're providing is prone to hallucinations, etc.? What if we used variations of the "Socratic tutor" prompt to nudge students to seek answers collaboratively with AI instead of using it as a one-off answer engine?

Young people and students are likely to engage more deeply with these emerging tools, so if we build the tutoring/teaching/information-providing aspect into them, students may well learn AI literacy as part of engaging with the tools themselves.

Ethan Mollick had some excellent points in the early days of AI emergence, and many of them still hold: https://www.oneusefulthing.org/p/the-future-of-education-in-a-world

An example of an interaction from this utopia:

Student: "Help me find more information about [topic]"

AI: "Great, in your own words, can you tell me what you already know about this topic?"

Student: "Uh, sure: [response]"

AI: "Sounds like you already know [part of topic]. What would you guess is the next thing you should learn?"

Student: "I guess I would need to know [another aspect]."

AI: "Sounds good, let's look for it together. Here's what I have [sources/summary]. Note that I'm a large language model, so this might be wrong. You should click on the links here and check that what I'm saying is correct [list of links]. What did you find? Did I get it right, or did I make up facts?"

Student: "You were mostly right, but [specific fact] was off."

AI: "Oh, sorry about that. What did you learn about [specific fact] on your own?"

And so on...
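The pre-prompting idea behind this dialogue can be sketched in a few lines. This is a hypothetical illustration: the prompt wording and the `build_messages` helper are made up for the example, and the message format just follows the common chat-API convention rather than any specific product.

```python
# A minimal sketch of the "Socratic tutor" pre-prompting idea:
# instead of answering directly, the system prompt instructs the model
# to ask what the student knows, disclose its own fallibility, and
# nudge the student to verify sources. The wording is illustrative.

SOCRATIC_TUTOR_PROMPT = (
    "You are a Socratic tutor. Before answering, ask the student what "
    "they already know about the topic. When you provide information, "
    "remind them that you can hallucinate, list your sources as links, "
    "and ask them to verify your summary against those links."
)

def build_messages(student_query: str) -> list[dict]:
    """Assemble a chat request that bakes the tutoring behavior in."""
    return [
        {"role": "system", "content": SOCRATIC_TUTOR_PROMPT},
        {"role": "user", "content": student_query},
    ]

messages = build_messages("Help me find more information about photosynthesis")
print(messages[0]["role"])  # the system prompt always comes first
```

The point of the sketch is that the safety behavior lives in the system prompt, so every student interaction inherits it without the student having to ask for it.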


Thanks for sharing your thoughts, Daniel. Yes, I think this kind of approach is where it's going... It will be interesting to see how education and school libraries change because of this... At some point, learning knowledge will have to be part of it, as how else will students know if what they've found or been told is right or not... Certainly going to be interesting to watch what happens next.

Nov 8Liked by Daniel Nest

Here's a plan that might be useful: how about creating a list of topics one should not use AI search for? Legal and medical advice immediately come to mind, and I'm sure there are other topics that are too important to trust AI with. Like what?

Another angle is to disclose the source of information we're sharing. I often preface Notes I post on Substack with "according to ChatGPT...."

Another angle that tends to get overlooked is this: just because a web page was made by hand by a human being in no way guarantees the information within is accurate. It's true that ChatGPT sometimes hallucinates. And it's also true that humans sometimes do too. The world was full of bullshit long before AI arrived on the scene.

Perhaps the best place for your article and others like it would be on general information sites for the benefit of those who aren't AI nerds. You know, People Magazine and the like.

Personally, I rarely use Google anymore. ChatGPT has become my go-to search tool.

author

Hey Phil,

Many good points there.

For your first point, I think a good starting point for where you shouldn't rely exclusively on AI is the set of so-called YMYL (Your Money or Your Life) categories:

ahrefs.com/seo/glossary…

Anything to do with health, finance, legal and civic stuff, etc. - where getting things wrong might have a serious adverse effect on a person’s life.
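As a rough illustration of how a tool (or a cautious user) might operationalize the YMYL idea, here's a hypothetical sketch that flags high-stakes queries with a simple keyword check. The category lists are made-up examples for the sketch, not Ahrefs' or Google's actual taxonomy.

```python
# A rough sketch of flagging "YMYL" (Your Money or Your Life) queries
# where an AI summary shouldn't be trusted without verification.
# The keyword lists below are illustrative assumptions only.

YMYL_KEYWORDS = {
    "health": ["symptom", "diagnosis", "medication", "dosage", "treatment"],
    "finance": ["invest", "mortgage", "tax", "retirement", "insurance"],
    "legal": ["lawsuit", "contract", "custody", "visa", "criminal"],
    "civic": ["voting", "ballot", "election deadline"],
}

def flag_ymyl(query: str) -> list[str]:
    """Return the YMYL categories a query touches (empty list = lower stakes)."""
    q = query.lower()
    return [cat for cat, words in YMYL_KEYWORDS.items()
            if any(word in q for word in words)]

print(flag_ymyl("What's the right ibuprofen dosage for a child?"))  # ['health']
print(flag_ymyl("Tips for taking great photos at night"))           # []
```

A real system would need something far smarter than substring matching, of course; the point is only that "verify before trusting" could be triggered automatically for these categories.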

As for the “humans write bullshit too” angle, my article didn’t overlook it - my entire premise is that with Google, we’re forced to visit the pages and verify their accuracy ourselves. With AI Search, the sources for much of the info are still those same human-written pages, many of which might be crap…but we don’t get the chance to review them, because they become an integrated part of a neat AI summary.

I think the problem isn’t AI Search itself. The problem is that it might be thrust upon people as the authoritative source of info when they aren’t aware of its limitations and caveats.

[Pasted here from the Notes for posterity]


I would use it only for something unimportant. For anything important, no matter how terrible Google is, it is still the best way to find information and learn through multiple websites. Perplexity has a place, but I still search Google and read through websites.

Overall, I see oversimplification of concepts/ideas, biases, errors, and my favorite, the illusion of knowledge (you mentioned the Dunning-Kruger effect), most of the time. Regardless of its confidence level, it will spew something and make you feel like you do not need to check any further, which leads to over-reliance on a tool whose output is so dependent on the quality and quantity of its data.

author

That's exactly it!

It's not that the sources that AI draws from are any worse (they're likely the same sources that Google would list in many cases). It's the fact that this little added layer of separation will make even fewer people go and visit the original source, believing that the AI summary is good enough (when it may very well not be).

Looks like you're already doing the right thing in terms of going to the source.

My take is that AI summaries can make a great jump-off point for further exploration if you treat them as a sort of topic "teaser" instead of the ultimate source of information. Then you can get the gist from AI and head out to do deep research on your own.

Nov 8Liked by Daniel Nest

I've come across some other perspectives, which suggest that directly summarizing content through AI might potentially dampen the enthusiasm of content creators, as they would find it difficult to earn revenue directly through clicks. However, there are also strategies like Perplexity's approach, which considers sharing revenue with the sources of information.

author

Oh yeah, while that wasn't what my article was about, it's very much another concern.

I actually work with SEO and content as my day job, and AI summaries are absolutely going to force a rethink, especially for so-called "informational" queries. Back in the day, a camera retailer, for example, could create thought-leadership pages about related topics (for instance, "Tips for taking great photos at night"). The goal was to have these helpful pages drive organic traffic from relevant searches.

With AI Overviews answering the "tips for taking great photos at night" directly, we can expect far less clickthrough to the original pages.

So yeah, we're going to have to rethink a lot of existing business models for sure.


I’ve tried AI search. It’s a nice way to start researching a topic or question. It’s not at all helpful when I want to go into depth on some point. I assume that’s because the deeper you drill into anything, the sparser the training data.

author

Spot on!

I'd also treat any AI search as a "broad strokes" intro to a topic and a jump-off point to further research. That way, you limit the damage from hallucinated facts and sources.

As for why it's not as great for drilling into a topic, that's likely because LLMs are typically pre-prompted to summarize and condense information for easy consumption, so they necessarily gloss over details when presenting an overview. With search access to the Internet, sparse pre-training data shouldn't theoretically be a limitation, since the chatbot can source new data by browsing. But sparse data might still play a role by leaving LLMs with fewer "anchors" in their pre-existing knowledge to connect new info to.


Full panic, bro, and thank you for the rickroll. When I was doing a big thing or buying a big thing, I used to revel in my ability to leverage the reach of the internet to find the TRUTH. Now it's convoluted and so tempting to just believe what the borg, err, AI spits back at me. When I do click through to cited links, there are some very questionable sources, and I have been fooled by subtle hallucinations. I could go to Wirecutter, for example, or another trusted site, but that doesn't work for non-mainstream items.

Surfing-the-web hyperlink style is gone daddy gone

author

Indeed.

And I thought we'd never give those hyperlinks up.

Never thought we'd let them down.

Didn't want to run around and desert them.

I still haven't quite transitioned to AI search myself. Old habits die hard. But it's coming, and we'll have to mature into developing the right way of working with these tools.


Beautiful hyperlinks, lovely blue

Take me through the internets, i must pursue

Beautiful hyperlinks, lovely blue

Where I am now I cannot undo

Cause it’s gone daddy gone the love is gone away

definitely a job for whytryai
