16 Comments

"Uh, I mean, it’s like…when you just…don’t use a prompt.

That’s kind of it.

See you in the next post, bye!"

I lol'd here, haha

Will try out this approach! The flood of expert AI prompting guides on Substack has been a bit much for me. The fact that the reasoning models can produce outputs without any context is impressive and slightly terrifying, haha.

This is a take I've had in recent months: the skill of AI prompt engineering should go away, since today's models are the dumbest they will ever be, and new models should understand our intent instead of making us format it in a very specific and detailed way.

I think some time should be spent exploring different prompts to help build a better mental model of how self-attention works within LLMs. Maybe I'll work on that with Claude via vibe coding. Something like this is what I had in mind:

https://ig.ft.com/generative-ai/


Happy you enjoyed the lolz.

And yes, I've always been allergic to self-proclaimed prompt experts. Sure, there's value in telling AI clearly what you want and knowing certain concepts, but I hate that a lot of it is presented as this secret knowledge that gurus have acquired by meditating atop a mountain for several weeks.

That's why I'm a fan of simpler pathways into using AI, such as: hey, how about starting with no prompt at all?

It's also the easiest way to get a "vanilla" experience in each model, so I find it fun to observe how differently two models can respond to the same context in the absence of a detailed prompt. Their vibe really comes through that way.

And yes, I remember reading that article last year and enjoying it a whole lot. It's fun to take a step back from using the tools and understand how they work under the hood every now and then.

Keep me posted about your vibe coding project: Would love to see how it turns out!


https://claude.site/artifacts/0d0e742f-c22e-4cc2-a9e2-f5f8db277eb6

Check out the analyzing data section. It's funny how the example you showed in your article of what DeepSeek did with that Titanic dataset with NO context seems comparable to what we would get with an 'advanced prompt' format suggested by Claude.


Ha, that's great. I think reasoning models are actually especially well-suited to a no-prompt approach, as long as the context of the task and the completeness of the data are in place!


I love it. Less is frequently more with AI prompting, and there are ways you can sort of paint yourself into a corner with a very detailed prompt. I've been experimenting with this idea on the mental back burner for a while now, and I agree there's a ton of use here.


If less is more, then zero is infinity!


Are negative numbers in the "infinity and beyond" category?


Are you trying to make the space-time continuum collapse onto itself, Andrew? Is that what you want?!


*looks around at 2025; thinks about it seriously for a few moments*


Love your approach! We're building attap.ai in that direction. You can use one of the ATTAP (All Things to All People) VIBES to handle a variety of entertaining and useful use cases, and build your own in literally a couple of minutes. Also check out attap.ai/vision to see where we're headed.


Thanks for sharing, Bruce!

Feels a bit like your users can build up a library of pre-prompted chats, a la "Custom GPTs" or "Gems" in Gemini or Poe's "Bots"? And then the community gets to pick and choose from that list?

That's a worthy idea, although I guess it likely requires at least some familiarity with models and prompts for users to build their own. Or maybe there's more to it behind the scenes?

My take with no-prompt prompting is that you can avoid thinking of prompts altogether, albeit only in certain niche cases.


At the moment you're correct that there is still 'friction' for non-technical users trying AI/LLMs, but it seems like your best use case(s) require uploading to the model(s)? Can you think of any no-prompt cases that could simply be pre-built a la ATTAP VIBES and then used/shared from that point? One of the features of ATTAP VIBES is that users can share any VIBE they make with others, who can then extrapolate it to suit their personal needs, wants, and desires. I'm most interested in your feedback (and in helping us develop ATTAP.ai, and us helping you develop no-code/no-prompt). I can be reached directly at bruce at attap dot ai.


I think both paradigms have reasons to exist.

My focus in this specific post (and in many others) is on helping people find ways to extract value out of "vanilla" foundational LLMs.

But there's absolutely room for pre-built tools and, in the case of ATTAP, databases of templates that are made for a specific purpose. That's why Poe has been quite successful with its user-made and shared bots. The same goes for Custom GPTs, etc.: people want the convenience of picking up something that's been fine-tuned for a specific use case.


Big fan! Sometimes because I'm just lazy, other times because I'm curious what the model will come back with. If I ignore the sycophantic tendencies, it's usually pretty solid.


There you go - great minds and all that jazz! Have you discovered any particular hidden skills in some models doing things this way?


Well, I'm both cheap and easy, so my main go-to is ChatGPT, and the 4o model is usually enough for my random ruminations. I find it uses context cues from prior interactions really well. After it dumps me back to 4, not so much.
