Let’s Make a Movie Teaser With AI
How to use free generative AI tools to make a teaser trailer.
Reminder: Upcoming Midjourney workshop
My first-ever live workshop for paid subscribers—hosted by Charlie Guo of Artificial Ignorance—is coming up on August 30 at 11 AM Pacific.
If you’re a paid subscriber to Why Try AI, see event details here.
If you’re a paid subscriber to Artificial Ignorance, see event details here.
Generative AI doesn’t need to be all about productivity and ROI.
You can use it for kid-friendly games and activities or to create fun image series.
But also, you now have enough free, off-the-shelf AI tools to put together a short movie or a movie trailer without ever leaving your seat.
So that’s exactly what we’ll be doing today.
We’ll try to make a teaser trailer for a dystopian sci-fi movie, consisting of several separate shots stitched together.
We’ll involve AI at every step, including brainstorming, music, sound effects, image creation, and animation.
The process
Here are the steps and the free tools we can use for each.
Brainstorm ideas & flesh out the concept.
Claude 3.5 Sonnet
Google Gemini 1.5 Pro
…or any other free LLM
Create starting frames for each scene.
FLUX.1 Pro
Ideogram
…or any other free text-to-image model
Bring the images to life.
Kling AI
Luma Dream Machine
Runway Gen-2
Generate the soundtrack.
Udio
Suno
Add sound effects.
ElevenLabs Sound Effects
ElevenLabs VideoToSoundEffects
Meta Audiobox
Put everything together.
Microsoft Clipchamp
…or any other free video editing tool.
Here we go.
Step #1: Brainstorm ideas
To begin with, we’ll ask Claude 3.5 Sonnet to give us suggestions for the trailer.
If you know me, you know that I advocate treating AI like a human expert and just using natural language, so our Minimum Viable Prompt might be something like this:
I want to use AI tools to create a dramatic teaser trailer for a dystopian sci-fi movie. The teaser will be up to 1 minute long and should have between 8-10 snapshots / scenes. Give me 5 ideas for potential directions, with 8-10 scenes for each.
Plugging that into Claude 3.5 Sonnet gets the ball rolling:
We want ideas that are easy to fit into a teaser, don’t reveal the whole plot, and are easy to visualize with AI tools. We’re not after Oscar nominations here: this is just a proof of concept.
So let’s ask for iterations:
We’ll continue this way until we have a concept that could work.
Because we’re trying to stick to free tools, we might need to turn to an alternative like Google Gemini 1.5 Pro if we run into Claude’s daily limit.1
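As an aside, if you’re comfortable with a bit of code, you can also run this brainstorming step through Anthropic’s official Python SDK instead of the chat UI. Here’s a minimal sketch (it assumes you have an API key set up, and note that API usage is pay-as-you-go, so it technically bends our “free tools” rule):

```python
import anthropic

# Assumes the ANTHROPIC_API_KEY environment variable is set.
client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "I want to use AI tools to create a dramatic teaser trailer "
            "for a dystopian sci-fi movie. The teaser will be up to 1 minute "
            "long and should have between 8-10 snapshots / scenes. Give me "
            "5 ideas for potential directions, with 8-10 scenes for each."
        ),
    }],
)

# The reply arrives as a list of content blocks; the first holds our text.
print(message.content[0].text)
```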
Going back and forth between Gemini and Claude, we eventually land on this Claude-written synopsis:
Wake: Movie Synopsis
In a world where humanity has achieved symbiosis with technology, organic life and machines have merged into a harmonious ecosystem. This fusion has created a society of augmented humans, cybernetic animals, and living architecture. However, beneath the surface of this seeming utopia lies a hidden truth: the loss of true human consciousness. When a mysterious pulse threatens to unravel this delicate balance, it may offer humanity its last chance to reclaim its essence and "wake" from its technological slumber.
Yeah, we’re not exactly making Hollywood history here.
But the point is to test the tools and have fun, so we’ll roll with it!
Claude’s scene-by-scene breakdown for our teaser trailer is as follows:
Scene 1: A tree with circuits running through its trunk and branches.
Scene 2: A person with neon veins and wires running beneath their skin.
Scene 3: Animals with mechanical limbs and glowing eyes.
Scene 4: Buildings with organic, pulsing veins.
Scene 5: Close-up of a heart, half-organic, half-machine, beating.
Scene 6: A field of flowers opening to reveal robotic elements.
Scene 7: Wide angle shot of a cityscape of merged plant and tech life. Blinding EMP explosion. Darkness. Silence.
Scene 8: Close-up: the heart's mechanics sputter and die. Organic muscle falls still. Flatline beep.
Scene 9: Close-up: a person's eyes suddenly open, clear and focused, gasping for air.
Scene 10: Black screen. Title card: "Wake." Fade to black.
You heard what Claude said. Now let’s make those images!
Step #2: Generate the images
Full disclosure: I accidentally ended up “cheating” by getting carried away with Midjourney. It’s my go-to image generator, and it only dawned on me that it doesn’t meet the “free” criterion once I already had my images.
But there are so many free alternatives to choose from.
I recommend the following for photographic images:
FLUX.1 Pro via glif.app. (I made this simple FLUX.1 Pro glif you can use).
Ideogram (not as great at realism but passable for many shots).
You can even try skipping this step altogether and using the text-to-video option in Step #3. But in my experience, both Luma and Kling do better with image-to-video, at least for now.
To get our images, we’ll ask Claude for more nuanced prompts.
As an example, here’s what it suggested for the beating heart scene:
Extreme close-up of a beating heart. Half is organic muscle, pulsing and contracting. Other half is futuristic wires and gears.
Plugging that into Ideogram and using the “Magic Prompt” option gives us this:
We repeat the process for each scene until we have all the images.
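By the way, if you’d rather script this step, one option is the open-weight FLUX.1 [schnell] model via Replicate’s Python client. This is a rough sketch rather than my exact glif.app setup, and it assumes you have a Replicate account with an API token (Replicate offers some free usage, but it’s metered):

```python
import replicate

# Assumes the REPLICATE_API_TOKEN environment variable is set.
# FLUX.1 [schnell] is the open-weight FLUX variant; FLUX.1 Pro is paid.
output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={
        "prompt": (
            "Extreme close-up of a beating heart. Half is organic muscle, "
            "pulsing and contracting. Other half is futuristic wires and gears."
        ),
    },
)

# Depending on your client version, you get URLs or file-like objects back.
print(output)
```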
Here’s my final Midjourney mix:
Now it’s time to—quite literally—set things in motion.
Step #3: Animate the scenes
This is the most fun part: watching our still images come to life!
In addition to the six AI video sites I tested in October 2023, we now also have Kling AI and Luma Dream Machine.2
Both of them do especially well when you use a starting image.
Aside from a few platform-specific features, the process looks pretty much the same for each tool:
Upload the starting image.
Add a prompt describing what should happen in the video.
Click “Generate.”
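Neither Kling nor Luma offers a public API as I write this, but if you want a scriptable image-to-video route, one option is the open Stable Video Diffusion model via Replicate. Fair warning: it’s a different (and generally weaker) model than Kling or Luma, it doesn’t take a text prompt, and the parameter name below follows Replicate’s model page at the time of writing, so double-check it. The filename is a placeholder:

```python
import replicate

# Assumes REPLICATE_API_TOKEN is set; "scene05_heart.png" is a placeholder.
# SVD animates the still on its own; there's no text prompt to steer it.
output = replicate.run(
    "stability-ai/stable-video-diffusion",
    input={"input_image": open("scene05_heart.png", "rb")},
)

print(output)  # a link to the resulting short video clip
```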
It also helps that both Kling and Luma have an optional “end frame” tool.
For instance, for the last shot of a girl waking up, I used Midjourney’s inpainting feature to create an additional frame of the girl with her eyes open.
This let me use the “eyes closed” image as the starting frame and the “eyes open” image as the end frame:
The result is this short video sequence:
We repeat the process for all of our images to get the animated clips:
The last thing our trailer’s missing is sound.
Let’s fix that!
Step #4: Generate the soundtrack
For this task, we’ll turn to one of our trusty music makers: Udio or Suno. (I compared them back in April.)
Our prompt could be as vague as:
Tense film score for a dystopian sci-fi movie
In the end, I got good results out of this somewhat more specific one:
Tense, dramatic, and atmospheric film score with a drum beat
After generating about two dozen tracks with both generators, I settled on this Suno one for the final teaser:
Step #5: Add some sound effects (optional)
This one isn’t strictly necessary for our project.
We could have gotten away with a music-only teaser trailer, but let’s explore the possibility.
Our two best free options are the two tools I compared in March:
“Sound Effects” by ElevenLabs
“Audiobox” by Meta AI
In both cases, we simply describe the sound we want to generate:
Deer footsteps running through a forest
ElevenLabs’ Sound Effects might give you something like this:
Not bad at all!
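Incidentally, Sound Effects is also available programmatically. Here’s a minimal sketch using the elevenlabs Python SDK; it assumes you’ve configured an ElevenLabs API key, and the output filename is a placeholder:

```python
from elevenlabs.client import ElevenLabs

# Pass your key explicitly or configure it via environment variable.
client = ElevenLabs(api_key="YOUR_API_KEY")

# The API streams the generated audio back as chunks of MP3 bytes.
audio = client.text_to_sound_effects.convert(
    text="Deer footsteps running through a forest",
    duration_seconds=5,    # optional; omit to let the model decide
    prompt_influence=0.5,  # 0-1: higher sticks closer to the prompt
)

with open("deer_footsteps.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```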
You can also try your luck with ElevenLabs’ VideoToSoundEffects.com, which automatically analyzes the content of a video and creates appropriate sounds.
The downside is that you can’t add any prompt to specify the sounds you want, but it does a great job in certain cases.
For our beating heart, it proposed a cool fusion of a heart rhythm with an electronic sound. The best part is that it lets you download your video clip with the audio already baked in:
Do this for any scene that could benefit from sound effects, and you’re all set!
There’s just one thing left to do: Put the teaser trailer together.
Step #6: Put it all together
This part requires the most manual work, and as you’ll soon see, I’m an absolute beginner when it comes to video editing.
My goal is to showcase the process, so you’ll just have to bear with the mediocre result.
There are many free video editing tools, so pick the one you’re comfortable with.
I’m sticking to Microsoft Clipchamp for no other reason than it came pre-installed on my PC.
Most video-editing tools have the same basic layout, where you drag image, video, and audio files into separate “tracks” to align their timing, add transitions, and so on:
Now it’s a matter of assembling our AI-generated assets into a coherent whole…which is something I’m clearly underqualified for.
But such is life.
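For what it’s worth, the assembly can also be scripted. Here’s a rough sketch using the moviepy Python library, assuming the animated clips and the Suno track are saved under these placeholder filenames:

```python
from moviepy.editor import (
    AudioFileClip,
    VideoFileClip,
    concatenate_videoclips,
)

# Load the ten animated clips in teaser order (placeholder filenames).
clips = [VideoFileClip(f"scene{i:02d}.mp4") for i in range(1, 11)]

# Stitch the scenes together with hard cuts.
teaser = concatenate_videoclips(clips, method="compose")

# Lay the soundtrack underneath, trimmed to the teaser's length.
music = AudioFileClip("soundtrack.mp3").subclip(0, teaser.duration)
teaser = teaser.set_audio(music)

teaser.write_videofile("wake_teaser.mp4", fps=24)
```

That gets you hard cuts only; fades and title cards are still easier in a GUI editor like Clipchamp.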
The result
Dishearteningly, my computer crashed earlier today when I was about 70% done, and I lost a lot of progress. So what you see below is even crappier than what I’d originally imagined.
Huge bummer, but the point stands: Everything in the trailer is fully AI-generated.
If you give these tools to someone who knows what the fuck they’re doing, you can truly unlock new possibilities. But here’s what it looks like when you give a complete noob access to powerful AI:
Ah well, I tried!
🫵 Over to you…
What do you think of the fact that almost everything you need for a movie can now be generated with AI? Are you familiar with any other ways that this process can be improved or streamlined by AI tools?
Leave a comment or shoot me an email at whytryai@gmail.com.
1 At some point, I fed Claude’s ideas to Google Gemini 1.5 Pro and asked it to come up with a twist/hook for the trailer.
2 There’s also Runway Gen-3, but that one is only available to paying customers at this time.