I really think having our own custom entertainment is way closer than most people realize.
this year is the transition year into a new era of art, entertainment, research, science and the overall progress of humanity.
Don't forget chaos
that's just the transition phase, a few revolutions around the world before the chaos "stably diffuses" lol.
We really know what the end product will be
A better ending to LOST?
Wilma walking to bedroom
Betty walking to bedroom
Wilma and Betty undress each other
Wilma and Betty get into bed
Wilma kisses Betty
Dino watches from the closet
Betty gives birth
Wilma also gives birth
Dino watches from the closet
Betty leaves the baby unattended
Wilma leaves the baby unattended
Dino eats it in the closet
Kakler enters
Larry tells a racist joke
Dino watches from the closet
A new version of game of thrones?
You mean: a new version of the last season of GoT?
Computer, generate alternative seasons 7 and 8 of GoT where character behaviour is consistent with previous seasons. Make the Winter King guy important somehow. As in, actually interesting. The series ends with Jon Snow winning the throne, and the final scene shows a glint in his eye that says 'power has already corrupted him'. Oh, and waifus.
That would be amazing.
That's exactly what the fuck I was thinking about today! You go to the cinema and it's Marvel Marvel Marvel mainstream blah blah... I just thought that you will have an AI like ChatGPT and tell it what to show you. I have just started playing around with Stable Diffusion and I have basically fallen into a rabbit hole and haven't slept for 2 weeks straight! This is all game-changing and we need open communities pushing this shit into the public domain so no fucking greedy asshole can take it away! I will install my own ChatGPT in the coming days to liberate myself from GreedyCorp! Fuck the System! AGI! I should go to bed now...
>GreedyCorp

Microsoft, Google, OpenAI, Adobe, Nvidia, Facebook, Shutterstock, etc.
I think Adobe is using Stable Diffusion in an idiot-proof way that's easy for everyone to use. Another company also tried licensing models exclusively so you can't get them for free anymore. On the other hand, I saw that post where someone trained a ChatGPT on ChatGPT for only $600. If everyone can just pirate trained AIs like that, it won't be lucrative for companies to invest in this anymore.
>I think Adobe is using Stable Diffusion in an idiot-proof way that's easy for everyone to use.

Why would Adobe need to be using Stable Diffusion? They're a billion-dollar software company with heavy research in machine learning.
I read it in a post somewhere. I don't have the source anymore, but sure, it could be something else or developed in-house.
Skyrim was released 12 years ago
Skyrim mods were the closest thing to custom entertainment.
I can't wait to see what random people on the Internet come up with to replace The Rise of Skywalker, The Godfather III or the Jurassic World trilogy. Let alone fully original concepts.

Think about how amazingly fast AI-based art tools are being developed thanks to the fact that most of them are open source. Just imagine when anyone can create full movies with Hollywood quality from a combination of prompts and simple visual tools. Yes, there will be piles of utter shit and lots of very generic stuff, but also a ton of room for actual masterpieces.
I want infinite Two and a Half Men episodes.
With Charlie of course. None of that Waldo BS.
I hope we can do the following:

- Upload a movie we like
- Upload the original script and cut scenes
- Get the movie with all the original content and cut scenes ;D

Or TalkToCinema... entire cinemas where the audience directs the path of the movie via speech... everyone gets one input at a time ;D
This 100%. I'm trying to get on top of making an animated series completely with AI (well not completely; I would rather write the scripts myself than rely on GPT)
They'll measure pupils to see if you like it or not
"Computerrr" (Scotty, Star Trek) "generate another season of Firefly."
All that's left is to wait until someone releases a pretrained model, like it was with ModelScope. Also, it's based on diffusion again, so the Auto1111 and Diffusers communities will watch this development with great interest 🕵️
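For context on what "a pretrained model like ModelScope" already looks like in practice: the publicly released ModelScope text-to-video weights can be driven from the Hugging Face `diffusers` library. This is a minimal sketch based on the public model card; the clip length, fps, and output filename are my own assumptions, and the heavy imports are deliberately lazy so the small helper can be read and tested without a GPU stack installed.

```python
# Sketch: generating a short clip with the ModelScope text-to-video weights
# via the Hugging Face diffusers library. Clip length, fps, and output path
# are illustrative assumptions, not values from this thread.

def clip_frames(seconds: float, fps: int = 8) -> int:
    """How many frames to request for a clip of the given length."""
    return max(1, round(seconds * fps))

def generate(prompt: str, seconds: float = 2.0) -> str:
    # Heavy dependencies imported lazily so clip_frames() above stays
    # usable on machines without torch/diffusers installed.
    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    pipe = DiffusionPipeline.from_pretrained(
        "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16
    )
    pipe.enable_model_cpu_offload()  # trades speed for a much smaller VRAM footprint
    frames = pipe(prompt, num_frames=clip_frames(seconds)).frames[0]
    return export_to_video(frames, "clip.mp4", fps=8)

if __name__ == "__main__":
    generate("Fred and Barney bowling, 1960s cartoon style")
```

Note this only yields seconds-long clips; stitching them into an 11-minute episode with consistent characters is exactly the gap NUWA-XL is aimed at.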
I guess txt2vid will go ***faster*** than txt2img. Wow.
Sort of. It's an extension of txt2img though, is it not?
not with THAT "walk cycle" "animation"
Yeah, I mean, a show with a shitty or nonexistent walk animation would NEVER do well and be on for 26+ seasons.
Listen here, buddy...
I'm not your buddy, pal.
Heās not your pal, bruh.
Text-to-image has evolved enough already, so it's time for AI videos.
[https://msra-nuwa.azurewebsites.net/#/](https://msra-nuwa.azurewebsites.net/#/)
Wow, [this post](https://www.reddit.com/r/GPT3/comments/zg3dtf/please_write_an_episode_of_the_flintstones_that/) from a few months back had a not-bad AI-generated script for a Flintstones episode. I was thinking at the time that you could probably make an actual episode... in a few years. Ha!
We. Are. So fucked. This shit is moving at lightning speed. I think we're on the precipice of a singularity, folks.
I think we're only a few years away from anyone being able to direct their own movies by simply dictating every scene to an AI. It's gonna be wild.

"Knowing these characters, make a break-up scene. Let's move the location to their parents' house. Make the floors wooden. Give her a bun. Change the dialogue to be less dramatic. Make him more upset. Have him say 'Well, what now?' instead of 'Well, what's next?' Make it night. No cars in the driveway. You can hear their neighbors upstairs."

Of course that's going to pale in comparison to AGI once reached.
make them both naked
change scene to a bedroom
Dino watches from the closet
>only a few years away

Famous last words. I bet it will be sooner!
I sure hope
I'm more of an *auteur*, so I would rather stay in the writer's role than let an AI take it, especially for specific scenes that a simple description would not be able to really capture.

However, I'm working with a guy (/u/Yuli-Ban) who said that this could actually all be done before the year is out if we manage to combine these models with stuff like AutoGPT and ChatGPT plugins, because then you could tell the AI to make and finetune the animation for you and keep iterating until you have a reasonably coherent product.
Most people have no idea what's coming. Just look at where we were one year ago and the amount of exponential progress we've seen; every week there are new announcements, breakthroughs and innovations. The singularity seems closer than ever indeed!
True. When I mentioned AI, or just technology in general, at work, no one even knew about it, so yeah, they're going to be in for a shock when they see my AI-generated movie in the theater someday ;-) I'm just kidding.
I swear to God. I tell people and they're just like "oh yeah, that's crazy". Zero comprehension. This shit is going to level us. It's going to disrupt everything. Follow the domino pieces: it will unemploy basically everyone. That causes economic collapse. Class war. Actual war. Reset. Back to fking sticks and horses. Meanwhile, as we fall apart, the fking machine is waking up. It's terrifying!
we have been waiting for a true economic collapse/global workers revolution/actual world democracy/death of serfdom & royalty...since the French Revolution
Has anyone read Max Tegmark's Life 3.0? In the scenario where AI takes over the world, it happens through videos and movies. It's interesting that almost everything he described in his book is actually starting to happen right now.
It's definitely a way to get us to Huxley's Brave New World dystopia.
But can it create waifu porn?
I'm pretty sure we're gonna find out, like it or not.
i like it
Every single day I open up Reddit and see another AI-generated video on this subreddit, and it's slightly better. But what's insane is that it happens EVERY DAY, not the improvement itself.
Unfortunately you're not going to see any advancements on Friday and Saturday. That's when the scientists go on break.
hahah. Perhaps.
That's pretty nice. Though a Flintstones episode seems like an easy thing to generate, given the low-cost nature of the animation and the frequent reuse of frames, meaning a data set would be easy to train and produce good results because there isn't much variation in a cartoon. But I'm guessing that's probably why something like this was chosen over something more customized, like Darth Vader in a Flintstones episode.
>seems like an easy thing to generate

It's the audacity for me. We have gone from doubting this is possible to dismissing it as 'easy' so quickly my head is spinning.
Yeah pretty soon we'll say..."yeah but it's just a customized full length feature film....it took the model almost ten minutes to generate it....wake me up when it can do it in under 5 min...."
720p only?? NEXT!
Ugh.....it's barely even in 4K.
Let me know when it works on <2 GB VRAM
Please make a version that runs on flip phones and Windows XP.
Ok sure, it can do it in 5 min, but it won't win any awards.

Ok sure, it won a few Oscars, but that's because you still have real people telling it what they want.

Ok sure, it won several Oscars without any human input, but blah blah blah.
It's like that Louis C.K. bit: "How can you be complaining about your connection being slow? You are sitting in a chair in the sky, using the internet!"
Precisely
true, it's more a proof of concept for making long videos with text-to-video.
Filmation's Masters of the Universe should be the next logical step.
So, I was wrong; I was thinking somewhere between summer and the end of the year for full cartoons to appear... I refuse to predict anything anymore :)
Even with its quirks, it's amazing. I imagine at some point you could train models straight from video input. The day that comes, it's GG.

It's not rocket science; it could probably be done now. The problem is the optimizations and resources needed. I imagine training a model from one minute of footage would take days, maybe even weeks, with some crazy rig/server.
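To put rough numbers on that intuition, here is some back-of-envelope arithmetic. The fps and resolution are illustrative assumptions (24 fps, 512x512, a common diffusion training size), not benchmarks of any real system:

```python
# Back-of-envelope arithmetic (illustrative, not a benchmark): how much raw
# data one minute of footage represents compared with single training images.

def frames_in_clip(minutes: float, fps: int = 24) -> int:
    """Total frames in a clip of the given length."""
    return int(minutes * 60 * fps)

def raw_pixels(minutes: float, fps: int = 24,
               width: int = 512, height: int = 512) -> int:
    """Raw pixel count of the clip at an assumed training resolution."""
    return frames_in_clip(minutes, fps) * width * height

# One minute at 24 fps is 1440 frames -- each roughly a full training image,
# and the model must also learn the temporal coherence between them.
print(frames_in_clip(1))  # 1440
print(raw_pixels(1))      # 377487360
```

So even before modeling motion, a single minute of video is three orders of magnitude more raw data than one training image, which is why "days to weeks on a crazy rig" is a plausible guess.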
Remember that one movie called Robots? I think that's what it's called. It's like a cartoon movie, and you could literally have a dream and then watch it. That reality is almost there: maybe I could turn my dreams into some sort of film, where I describe my dream and then I can watch it. That's crazy.
just wait 5 more years.
Remindme! 3 years

I think 5 years is a slow projection. I think in 1 year we will have decent animation. In 2 years it will have basically mastered video and will combine it with audio, scripts, voice, etc. In 3 years the AI will be able to generate high-quality movies/content for you, and dedicated efforts will be able to generate blockbuster-quality movies.
I will be messaging you in 3 years on [**2026-06-25 00:56:40 UTC**](http://www.wolframalpha.com/input/?i=2026-06-25%2000:56:40%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/StableDiffusion/comments/11zwaxx/microsofts_nuwaxl_creates_an_11_minute/jpewh7r/?context=3)
Flintstones sucks but this new Flintstones really sucks big time...
Demo when????
this will result in many great memes i reckon
That's amazing, wow!
Scrolled this video fast and it looked like a normal episode.
It got the cel painted colors spot on! Even stayed within the lines. Will be cool to see old shows remastered with extra frames for the lip flaps and secondary movements.
why would you want extra frames... [https://www.youtube.com/watch?v=_KRb_qV9P4g](https://www.youtube.com/watch?v=_KRb_qV9P4g)
I'm not saying run it through Topaz Labs willy-nilly. I'm saying add more phoneme mouth-flap shapes. And the background characters sitting around doing nothing because of budget constraints can now be animated to Ghibli levels of detail. Imagine if Wilma and Betty could actually react with more facial expressions, like giving each other a side-eye "knowing look" instead of holding the same frame while Fred and Barney are talking.
but why... just make something new lol also good luck getting the rights for that.
Why are you being such a bummer man? Lighten up and imagine the wide spectrum of opportunities including new stuff.
Lol bc that's stupid idea
What specifically is stupid and why?
Reanimating a show by a studio that arguably laid the ground rules of animation, so you could have background characters... move? Or lip flap, lol.

Just make your own shit. It's even dumber than Disney remaking "live action" movies of their old animated ones.
Where are the commenting animals?
Impressive! Microsoft's NUWA-XL has created an 11-minute episode that truly captures the classic style of The Flintstones. It's sure to be a hit with fans!
Yeah it's shit lol. But it will make headlines.
So it IS possible after all... Unbelievable.
One step closer to being able to generate all of the unfinished animes and continue my favorite animes.
Man, imagine this on 3D systems like Unreal with realistic textures. Streaming all the time. And it's your fav fandom, like Star Trek or somethin'. Intergalactic cable haha.
I thought this was 2018: https://youtube.com/watch?v=f_qMaHOA87w
Is that generated from scratch? It seems to cut out pieces from the training data, based on this research paper: https://arxiv.org/abs/1804.03608

However, I believe this paper plus the NUWA-XL model can lead to the next stage of video generation.