Can AI Make a Star Wars Film? (100% AI Experiment)

We recently gave a talk in Hollywood about the future of storytelling and were asked to put together a quick tech demo. The result is this very short film sequence, created in just a few hours.

Notable points:
- All imagery, movement, and voices were generated by AI.
- AI invented an alien language in about 10 seconds. We would feed it lines, and it would output the language with verb conjugations, tense, etc. (see the example prompt after this list).
- There's no reason you couldn't use the same tech to create a long-form piece.
- Lip syncing still needs to improve, but with recent innovations we're only a few months away from public-facing lip-sync tools that feel natural and can overlay on existing video. We used Wav2Lip (an example command is shown below); other tools like Lalamu are also very helpful.
- The music and sound effects came from a stock library, but that was purely a time constraint: we could easily have used a new tool like Stable Audio to generate the music. Sound effects, however, still have to come from a library for now; we haven't found a good AI sound-effects tool yet.
- Audio generations in ElevenLabs took about 2 iterations each (a sample API call is sketched below).
- Midjourney took about 20 renders per shot to get the right scene, with repeaters to save time (see the example prompt below).
- Pika took about 5 renders per shot. The camera controls made movement a lot easier to direct, and adding more motion than the default value seemed to work well (an example command follows the list).
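
For the alien language step: we aren't reproducing our exact prompt here, but a request along these lines to a chat model such as ChatGPT produces a usable conlang in seconds (the wording below is illustrative, not the prompt we used):

```
Invent an alien language with consistent grammar: verb conjugations for
past, present, and future tense, plural markers, and a small core
vocabulary. Then translate this line into it and show a word-by-word
gloss: "Hand over the artifact, merchant."
```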
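
On the lip sync: the open-source Wav2Lip repo is driven from the command line, and a typical inference call looks like the one below (the file names are placeholders; the flags are from the public repo, but check its README for your version):

```
python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth \
    --face scene_clip.mp4 --audio alien_line.wav
```

By default the synced video is written to the repo's results/ folder.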
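
If you'd rather script the ElevenLabs passes than click through the web UI, a minimal text-to-speech call looks roughly like this (the API key and voice ID are placeholders, and the endpoint and fields reflect the v1 API as we understand it; verify against the current docs):

```python
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"  # placeholder: from your profile settings
VOICE_ID = "YOUR_VOICE_ID"           # placeholder: any voice from your voice lab

# ElevenLabs v1 text-to-speech endpoint; the response body is raw MP3 audio.
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
response = requests.post(
    url,
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Hand over the artifact, merchant.",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
response.raise_for_status()

# Save the generated line for the edit.
with open("alien_line.mp3", "wb") as f:
    f.write(response.content)
```

Iterating is then just a matter of adjusting the text or the voice_settings values and re-running.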
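
The repeaters mentioned above refer to Midjourney's --repeat parameter, which queues several generations of one prompt in a single command (availability depends on your subscription tier). A shot prompt looked something like this; the wording is illustrative rather than one of our actual prompts:

```
/imagine prompt: hooded alien merchant at a desert market stall,
cinematic film still, dramatic lighting --ar 16:9 --repeat 4
```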
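
At the time, Pika ran through a Discord /create command with inline parameters for camera and motion; to the best of our recollection a render looked roughly like this (parameter names and ranges may have changed since, so check Pika's current docs):

```
/create prompt: slow push-in on a cluttered market stall, cinematic
lighting -camera zoom in -motion 2
```

Setting the motion strength above its default is what gave the shots noticeably more movement.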

Let us know if you'd like a more detailed video breakdown. While we do go into detail about all of these processes in our course, we'd love to share the steps with you here.
Comments

The most unrealistic part is that the merchant gives the artifact away for free. Great job!

xuttuh

This is like the most vivid dream I've ever had. Semi-coherent, but minor details still shift and alter themselves.

Ethan-

I love how the troopers have different armor in every scene 😂

charlesreed

The whole sequence has as much coherence as any recent Star Wars series.

darthmastah

It was only about a year ago that we were getting extremely rough, dreamlike/nightmarish sketches out of AI. The speed at which we arrived here is mind-blowing.

chadocracy

The fact that the AI thought it *absolutely necessary* to include the fear of Vader is impeccable 😂

FreelancerAlpha-

I can’t even imagine what AI will be able to do five years from now. This is just amazing.

FreshPickedLemon

What's unsettling to me is how AI movies move like my dreams...

joshdartist

I feel like this is at a level where it could probably be effectively used as a background for a film.

silenthawk

Just imagine how much smoother this'll look in 6 months.

austinmajeski

“If droids could think there’d be none of us here.” -Obi-Wan Kenobi

cstick

It's already better than most of Disney's releases.

rogerlimoseth

AI can mimic people's voices so well. If it can do action too, we could end up with all kinds of adaptations of popular stories from the books made into Star Wars movies without budgets getting in the way.

jacknoodles

The clunky dialogue certainly felt like it was written by Lucas.😂

ColinChick

Hey guys! Here's a breakdown of some of the things we learned putting this project together. The full notes are in the description above; two details not covered there:

- It took one person about 8 hours to create.
- We also made a tutorial about the Pika process on our channel.

curiousrefuge

I feel it would be better at making one of those weird horror movies where nobody knows what the hell is going on.

robbieh

In 10 years, anyone will be able to make a full fan film like this in a matter of hours. Just incredible to speculate what the future looks like with AI.

acbagel

I have imagined for a while now that in the near future people will be able to fully edit movies, possibly in real time at some point: if you want a different actor, a different ending, or a different overall plot, you'll just ask for the changes, choose from some rough sketches, and render... BAM! You have the film you wanted. Personalized films on demand. Not sure this will be a good thing, but it sure seems within the realm of the achievable in the next decade.

Fastlan

I can appreciate the simplicity, the details, and the amount of time that it can save. But I will also say that I sincerely believe synthetic behavior and artificial development can only go so far. It may be able to pump out great stories (which, despite my controversial position, I would undoubtedly watch and probably enjoy), but I just sincerely appreciate the craft, especially when creators put their heart and soul into creating content that can truly be felt by a human audience.

natrious

I would LOVE to see a "behind the scenes" look at the making of this video. That is really impressive!

BrianJurkowski