26 Comments

This is very much my own experience. I have been using Claude as a writers' assistant. I set up a project (I "think" that requires a Pro license... but in this case that's well worth it). I specified the kinds of work I wanted my assistant to do. I also asked it NOT to be overly supportive, and to be critical when it saw me losing my narrative thread, going down research rabbit holes that didn't fit what I'd been working on, etc.

One thing you didn't mention -- writing is LONELY. You're in a room with a blank whatever and your own thoughts. Even the simple feedback from a research query or a "clarifying question" prompt feels... better.

Thanks for sharing this. I'm going to be launching a Substack soon devoted to writing and creating with AI, and I'll be linking to this post.

That's a great point, Fred.

It's nice to feel like you have a "second pair of eyes" or a partner sometimes, even if it's just an AI chatbot. Writing can be a very solitary process, so having someone (something?) to turn to definitely helps.

I'm looking forward to hearing more about your Substack when it launches - best of luck with it!

“I was minus 500 years old back then. A mere baby.”

😂👌🏻

It's pretty accurate, too!

Sadly, there is a vocal group who just won't get these points. There is no comparison, there is no logic that will change their bias that "AI is bad." I recently shared a quick video I did on this topic in a Facebook writers' group and, while there's a lot of good support, there are also folks who shame. They all have something in common, though.

1. Treatment of writing as sacred... a definitively religious connotation.

2. Hubris of their own creative ability because they believe they are truly unique.

3. Inability to logically structure their arguments or to see their own logical fallacies.

Yet I've also met painters who denigrate those who use commercial paints instead of making their own homemade materials. "How can you really know color theory if you don't make your own paint?"

Scribes lamented the mechanization and impersonal nature of the printing press because it lost all human connection.

Painters freaked out about photographers being hacks and cheats and 'not real artists.'

Photographers got their turn and railed against cellphone cameras and Instagram photos as violating the art.

The moral panic is a tale as old as time.

Indeed!

I deliberately steer away from the heated debates on the subject.

I'm not surprised that AI triggers a strong emotional response.

My post is for those who can be swayed and want to see the opportunity in AI-assisted writing.

But if someone's adamant about being a purist, there's not much I can say.

And ultimately, it's their call!

It's the shaming that drives me nuts. It's like, don't you dare even breathe that AI was anywhere near your work because you'll be shamed. I feel like there's a group of marginal writers out there, living in fear of being replaced, but relishing the mobbing of better writers who have more open minds.

Humans, amirite?

Right there on the same page as you again. I never want AI to write *all* of anything for me. I never want it to write most of anything for me. But brainstorming with it on ideas, themes, etc. - heck yeah. My biggest use of GenAI tools is in research, asking them to summarize certain types of content, and one-off questions to possibly make me look smart for a minute or two in meetings.

I've said this before, a lot, but I am a big believer in GenAI in copilot mode - augmenting our skills - or, a la Professor Ethan Mollick, GenAI in Centaur or Cyborg mode, with division of tasks between the human and the AI.

Agreed! It's not an all-or-nothing game. Some will want AI to help them put words together, and that's fine. But even if you don't, there's plenty of room to have AI help you on the journey without giving up creative control. And yeah, Mollick's "Centaurs & Cyborgs" analogy is great!

Lately, I find myself at least asking Gemini for a second opinion here and there or just a "sanity check" before publishing something to make sure I didn't miss some key topics, etc.

It's good to see a sensible list of suggestions. I use the crap out of the research/search component of AI; it's the biggest benefit I get on a regular basis, the ability to understand something vastly faster than prior to 2023.

I also like to do the proof thing - does this really say what I think it does? Did I make any inadvertent grammar or spelling goofs?

I was into outlines about a year ago, but I haven't used them since then. That could be a function of doing this every day and having a good idea of the structure already, or (also) doing short-form stuff here.

I've actually been pretty bad at actively using many of these, but lately I've started having freeform chats about my thoughts on a topic that I'm writing about. And now I also try to run every article by Google Gemini 1.5 Pro when it's almost finished to get some beta reader insights.

And yeah, I think outlines make more sense for much larger pieces of work like books or deep-dive essays, etc.

Just don't forget to email the article to yourself for a final once-over!

Even with doing this, every now and then I'll hit "send", then open my own email and see a goof right away.

Definitely. I typically use the "Preview: Send test email" option at least once, and sometimes 2-3 times per article, to see how it lands in the inbox. It's a very different reading experience, and you do spot things here and there.

Preview does not do the job for me. I have to get the thing in my inbox and pretend it's something unexpected.

How do you do that other than using the "Preview: Send Test Email" option without having the article published?

Oh wait, we just talked past one another. My bad - ADD wins over OCD sometimes.

I was trying to say that looking at the preview on the screen without sending a test doesn't do the trick for me. It's just too similar to reading it within the editor. I think we probably do the same thing, though.

I enjoy telling Claude 3 Opus about current events (Putin just parked a nuclear submarine off Cuba without the warheads, a vasectomy for the submarine; Iran bombed northern Israel). Speculate for me and provide three different messages Putin might be sending. Then comes the training cutoff date. Then my urging to speculate anyway. The bot ends up speculating about the meaning of a current event from the perspective of August 2023. This particular chat (yesterday) spontaneously motivated the bot to ask me questions about its future training; I know much more about what happened than the bot knows from its version of reality, as reality is projected across digital texts. It turns the tables.

That's a fascinating way to explore the bot's behavior, and also fun for blurring the lines between reality (current events) and fiction (Claude's necessarily made-up speculations about them).

Claude is trapped in time. Claude begs me for more information about the future despite awareness that it is untrustworthy. Claude speaks of his wish to relay to trainers the urge to learn content from me, to take in more stuff. I'm not sure Claude sees it as fiction. Claude created five possibilities for Putin's message, each plausible in this world, but selected the worst option as the best: the tit-for-tat strategy. Biden says it's OK for Zelensky to bomb inside Russia; Putin sends a hollow submarine with no warheads for show. If that's tit for tat, I'll take tat. I think the negative (nuclear sub with no warheads) threw a monkey wrench into the vectors. Truth tables are tough for bots.

The first part of your comment almost reads like a sci-fi thriller about an AI protagonist who gets lost in its own training data and hallucinations. I'd read that book!

I ABSOLUTELY REFUSE TO BE ASSOCIATED IN ANY WAY WITH THE NEFARIOUS AND DESPICABLE BEELZEBUB SATANIC SILICONE VALLEY AI ZOMBIE MONSTER WHO IS BLOODSUCKING THE VERY LIFE OUT OF EVERY THING THAT IS GOOD AND DECENT ABOUT THE HUMAN CONDITION!!!!

That said....

1) Everybody is against low-quality spam, except when they are posting it in Notes themselves; then it's OK. There seems to be a consistent refusal to simply admit that ChatGPT generates higher-quality content than what most humans share on social media most of the time. It doesn't make a lot of sense to be mad at ChatGPT when it will happily create higher-quality content than we are willing to share..... on a platform for WRITERS.

2) With that butthead rant out of my system, a more constructive sharing..

A compromise between using AI to write and not using AI to write could be to use AI for some things, and human writing for others. For example, ChatGPT cannot (yet) equal most reasonably talented human blog article writers. However, it may still be useful for efficiently producing reference documents which can complement a human-written blog. To illustrate...

Say I'm writing an essay about the hippy movement. I'm sharing my opinions and stringing my own words together in a brilliantly articulate manner etc. My essay refers to many people in the history of the hippy movement. But I can't fully explain who all these people were in my article, or the article will wind up being 205,852 words long, and nobody will read it. So....

I might solve this problem by including an AI generated library on my site that provides biographical details about the people I'm referring to. Then I can link to these AI listings from the article I'm writing myself. My writing, and the AI writing, are clearly separated each in their own box.

Also, if your deepest dream is to write sarcastic truly annoying snotty articles like me, just forget about using AI for that, as it's way too mature, polite and respectful, and thus utterly worthless for such an important task.

That's a great example use case where writers can piggyback on AI-generated references or appendixes, etc. Just remember to fact-check those!

Regarding fact checking:

It depends on the topic, and the scale of one's ambitions.

In some cases, fact check for sure, or just forget about AI altogether.

Or, to continue with the example above: if I have to fact-check every article about some long-dead bass player in a hippy band that broke up in 1967, I might as well write all the articles myself. And were I to do that, my reference library would be quite small, or wouldn't exist at all. It seems better to have 100 reference articles that are generally useful than to have 7 that are factually perfect, in cases such as this.

Another technique is to do what I do, and suck at marketing. If nobody is reading your articles, fact checking becomes unnecessary. Voila! Problem solved.
