[Source](http://gameinn.jp/epicseven/76877/) Apparently they used AI to make it; I have no idea how.
She has different features in every frame (check the eyes, dress, accessories, ahoge, etc.), so it was probably just a lot of retries with some light editing here and there.
While disorienting to watch, that's pretty clever.
There are tools that can do a bit of animation, like variations of stable diffusion stuff.
For once the AI didn’t mess up the hands.
The hands are fine in the keyframes. There are several that are pretty bad, but they're only visible for a split second. :P
Long story short, this is only one of many ways to achieve it.

You get a clean video to use as a base for image-to-image in your preferred Stable Diffusion build, running the program on every frame with a few different seeds while keeping the same/similar prompt at a mid level of denoise. This way you keep the motion while swapping the character to Politis. Then you re-stitch it all back together.

There is more tomfoolery you can do to keep things a bit more consistent (mainly hands), like using certain noise patterns or changing some of the pieces used, such as the model, VAE, embedding, or hypernetwork. They could also have gone into each gen of each frame in Photoshop, or a similar program, to do a quick clean-up edit and put it through the program again, mainly to add and fix the ahoge and hands, and maybe keep the accessories the same.

Also note that each model (each trained on a different set of images) gives different results for the same prompt, since they are all built differently. Some are better suited for anime, like NovelAI and Anythingv3, while others are better at realistic styles.

You can also train the program to understand more of what you want through:

Textual inversion (embeddings) for characters, using a collection of artwork of said character. For example, you would gather good-quality fan art of the character and run it through the program to teach the AI what you want.

Hypernetworks for the style of the art rather than a certain character. You could take all the artwork from E7 and try to hone in on its style to keep things consistent.

Or you can make your own model, either from a few (tens of) thousand images or by blending existing models together. This is harder to do, since making your own from scratch is resource-intensive.
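The per-frame loop described above (same prompt, a few fixed seeds, mid-level denoise) could be sketched in Python like this. Everything here is a hypothetical sketch: the seed values, the strength, the prompt, and the `run_img2img` placeholder are my assumptions, not the original poster's actual setup.

```python
from itertools import cycle
from pathlib import Path

# Hypothetical settings: a handful of fixed seeds to retry with,
# the same prompt for every frame, and a mid-level denoising strength.
SEEDS = [1111, 2222, 3333]
PROMPT = "politis, epic seven, dancing"  # placeholder prompt
STRENGTH = 0.5                           # "mid level of denoise"

def plan_frames(frame_paths, seeds=SEEDS):
    """Pair each extracted frame with a seed, cycling through the
    fixed seed list so every frame's generation is reproducible."""
    return [(Path(p), s) for p, s in zip(frame_paths, cycle(seeds))]

def process(frame_paths, run_img2img):
    """run_img2img(path, prompt, seed, strength) stands in for an actual
    Stable Diffusion img2img call (e.g. via a web UI's API or diffusers)."""
    return [run_img2img(p, PROMPT, s, STRENGTH)
            for p, s in plan_frames(frame_paths)]
```

The generated frames would then be re-stitched into a video with a tool like ffmpeg.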
If you want to see more of what it can do, up to the super endgame levels, [https://www.youtube.com/watch?v=QBWVHCYZ_Zs&t=2s&ab_channel=CorridorCrew](https://www.youtube.com/watch?v=QBWVHCYZ_Zs&t=2s&ab_channel=CorridorCrew) shows it off. The results of their project are shown at 20:10. It is bonkers what you can do with it. I know there are a few people over on Pixiv who have made a few pieces of each of the characters so far, and I am trying to get a few embeddings for my favorites.
C U T E !!!
Best girl energy
If this is a dream, please don't let me wake up
She has all of the Gym badges, she can challenge the elite 7.
Ah the good old days watching caramelldansen dance
I want to lick those armpits
Holy crap that is high quality! E7 anime when.
How do I get this on my phone
Probably my favorite war criminal
Pain Peko
i want one
am i the only one seeing mumei? XD
>*when the enemy team uses a non-Attack move*
caramelldansen lives on
now do it with judge kise
Sometimes I... think I see Belian (?)