Useful and practical article, as always.
Great work
Thanks, trying to demystify AI for the average Joe, one post at a time.
Love your work
I appreciate the kind words!
U da real MVP dawg 🫶 Totally with you, and tbh I'm amazed at how good results have gotten with simple prompts, with the AI adding creative touches I wouldn't have even thought to include in my prompt. Also ... you've highlighted a super annoying attribute in all things tech. The army of zombie complexifiers comes quick to make everyone think you need their special voodoo, and no, you don't.
For sure! We have AI that can literally understand normal language, yet we still find ways to make it complicated for ourselves.
And yes: AI models are getting increasingly better at delivering great results with minimal effort, which is why I'm focusing so much on just getting people to try them out.
Happy to hear that the post resonated with you!
Love your minimalist idea, as opposed to the AI anxiety bullshit. Most people should get most of what they want from models like GPT-4 or DALL-E 3 without fancy prompt engineering.
Agreed - for the vast majority of people, keeping it simple is the way to go, at least to begin with. Happy to hear this resonated!
Love this. Makes sense.
Happy it resonates, Nicole! Thanks for the comment.
So good mate. Tracks beautifully with some themes I've been kicking around lately too, as we've discussed on LI. You've articulated it perfectly here.
Thanks for the kind words Mark. It's good to know many others are on the same page here!
Love this!
Thanks, happy you found it useful!
Super useful. I'm actually going to steal the concept of the minimum viable prompt for a workshop I'm doing in the near future, to explain prompt engineering to a group of novices who haven't been exposed to generative AI much at all. The idea of first just asking for what you want, in the simplest way, can be a great starting point for anyone. No need to overthink things.
Exactly!
I sometimes hear pushback against this "less is more" approach. People who are further along the learning curve will bring up all the known, effective ways to steer models in the right direction (e.g. context, role, few-shot prompting, etc.).
But I think the two can co-exist peacefully. You learn the ropes by sticking to MVP and testing the waters. Then, when you're ready, you dive into the many helpful prompt engineering manuals to learn the more advanced stuff.
Let me know how your workshop goes and whether MVP is well-received!
Just... thanks. I mean, you said it all. It's all about empirical learning (sorry, my English isn't great). There is so much you can learn by yourself just browsing the web (like your Substack, Daniel). It could be just 10 minutes of your day!
Agreed! Some of the best learning you can do, especially in the early stages, is by simply using AI. Hell, you can even ask a chatbot directly to teach you the ropes.
Thanks for the kind words, Thomas!
I'm all for this sort of experimentation, and I've been running these experiments every day for like a year now! But of course, you have to watch out if you're limited by the number of prompts you can input (e.g., ChatGPT's current cap catches me every so often).
The iterative approach also yields amazing results, but you do need to be patient and have a little bit of time.
Finally, "Good Enough" is my middle name.
Yeah it looks like OpenAI recently lowered the GPT-4 query limit from 50 to 40 messages every three hours, which is a bummer. But luckily there are so many other free LLMs and chatbots out there that you'll never run out of opportunities to experiment!
Daniel "Just Kind Of Okay" Nest signing off.
I can live with 40! I just need to be mindful if I get really curious out there.
And yeah, it's amazing how many other tools there are out there. Bard alone makes life simpler for me, and I often go back and forth between GPT-4 and Bard when doing research. They offer slightly different "lenses", you might say. Ahem.
Dude. Someone should totally write a post about that!
Looks like a couple of clowns beat us to it:
https://goatfury.substack.com/p/lenses
Classic time-traveling plagiarism at its finest!