Dan, do you use any of these on a daily basis? I'm curious about how much of a deep dive they could be vs. just having a more stripped-down (but quick) LLM for faster conversations. Are any of them jumping out for the type of stuff you find yourself doing?
For me, it's all about finding a use case. If I can find a good use for a tool and use it every day, I'll get very good at that tool, or at least good at the way I'm using it. I'm not sure any of these are necessary for me at the moment, but it's intriguing to think through different frameworks for learning and thinking.
I'm reminded of Perplexity vs the other LLMs I've tried this year. It seems a lot more like Learn About than like, say, Gemini. Interesting to watch these things develop!
The only tool I personally use rather frequently is Learn About. It's quite good for quick topic understanding.
But I can see the other tools appealing to visual learners and mind mappers. (I've personally never been one, but I certainly appreciate the potential.)
And I definitely don't think any of these tools are strictly "necessary" - just good alternative ways to explore topics, depending on people's individual learning styles.
How frequently is rather frequently? My ears are gonna perk up when you start using something new with some kind of regularity, and it's interesting that Learn About is in this category.
I certainly use a very limited suite on a daily basis myself, and I also agree that these are in the nice-to-have category, not the must-have type.
It's a bit ad hoc, but I'd say a few times a week. It's not a dedicated tool I use for intentional topic research, but whenever I have an "I wonder about [insert thing]" moment and my laptop is nearby, I'll give Learn About a whirl.
If I can remember (that's a big if with my Swiss cheese brain, but I did open a tab), I'll give L. A. a shot tomorrow!
Super cool; AI mind maps. Can you imagine a future where AI facilitates a mind mapping exercise with a group?
Ha, cute - you think AI needs a "group" to do its mind mapping.
But jokes aside, I think this is such a nice shortcut to quickly getting a top-level overview of a topic. Sure, the usual caveats about e.g. hallucinations apply, but it's awesome to just input a few words about your topic and get a thorough visual exploring many of the interconnections.
Also - on hallucinations - I’ve noticed a new (?) behavior where the AI will fess up to it right away, like "oh sorry, my bad," and I’m hoping this feeds back into reinforcement learning (or whatever the appropriate name is) so it gets better here.
The thing is, LLMs will often say "Sorry, my bad" when you point a hallucination out. Even the original Bing did that, because they're trained through RLHF to be responsive to feedback.
But the key is to have LLMs catch their own mistakes BEFORE ever outputting them. The o1 model is slightly better here because it revisits its own assumptions during reasoning steps, but it's not perfect.
It's my understanding that the current approach to training LLMs means that hallucinations aren't possible to solve 100% - you can minimize them dramatically, but not eliminate them. We'd need new architectures and approaches for that.
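For anyone curious what "catching mistakes before output" can look like in practice, here's a minimal sketch of a self-check loop: draft, critique, revise. To be clear, this is just an illustration, not how o1 actually works internally; the ask_llm helper is hypothetical (wire it to whatever API you actually use), and a pass like this only reduces hallucinations rather than eliminating them.

```python
# Minimal sketch of a self-verification pass: draft an answer, have the
# model audit its own claims, then revise before the user ever sees it.

def ask_llm(prompt: str) -> str:
    """Hypothetical one-shot LLM call; plug in your chat API of choice."""
    raise NotImplementedError

def answer_with_self_check(question: str) -> str:
    draft = ask_llm(question)

    # Second pass: the model critiques its own draft.
    critique = ask_llm(
        "List any factual claims in the following answer that you are "
        f"not confident are true, or reply 'OK' if none.\n\n{draft}"
    )

    if critique.strip() == "OK":
        return draft

    # Third pass: revise the draft, correcting or hedging the shaky claims.
    return ask_llm(
        "Rewrite this answer, correcting or hedging these issues:\n"
        f"{critique}\n\nAnswer:\n{draft}"
    )
```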
Have you done a tool roll up like this for AI apps that write code?
I haven't, mainly because I'm not an authority on the topic and wouldn't quite be able to tell which tool is best. But maybe I'll try researching them one day.