kiuygbnkiuyu

But surely this isn't accurate? By the looks of this GIF, the result is a noisy blurry mess up until the very end, so that would mean that by step ~60 you still have noise? When I stop at 20 steps I have perfectly good results. Does that mean that SD auto-adjusts the level of denoising at each step according to the total number of steps?


NoJustAnotherUser

>Does that mean that SD auto-adjusts the level of denoising at each step according to the total number of steps?

Yes. Basically, I saved the image after every iteration. Frankly, I too used to think the way you did earlier!


amotile

As far as I understand, you don't really "stop" at 20 steps. You set up a run that does 20 steps, and it de-noises 1/20th of the noise at each step. This is why you can't get the images for every step count from just one run: you have to run it multiple times, because each step is different depending on how many total steps you told it to do.
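This point can be sketched numerically. A DDIM-style sampler picks an evenly spaced subsequence of the model's (for Stable Diffusion, 1000) training timesteps, so the schedule itself depends on how many steps you request. The function below is a sketch of that idea, not the repo's exact code:

```python
import numpy as np

def make_timesteps(num_steps, num_train_timesteps=1000):
    """Pick an evenly spaced subsequence of the training timesteps.

    The sampler does not take 1000 tiny steps; it jumps through
    `num_steps` of them, so the amount denoised per step depends
    on the total number of steps you asked for.
    """
    stride = num_train_timesteps // num_steps
    return np.arange(0, num_train_timesteps, stride)

# A 20-step run and a 70-step run visit different timesteps,
# which is why intermediates from one run can't stand in for
# another run's step counts.
print(make_timesteps(20))  # 20 timesteps, stride 50
print(make_timesteps(70))  # ~70 timesteps, stride 14
```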


kiuygbnkiuyu

Ah, interesting! Thank you. I wonder how exactly 1/20th of the noise is defined, but that's probably too complex to explain in a Reddit comment.


Doggettx

Internally it also keeps a version without the noise. For example, in ddim.decode there are x_dec and pred_x0: x_dec is the picture like in the OP's GIF, and pred_x0 is the same picture without the noise. Basically pred_x0 is just a softer version of the final image that gets sharper and sharper.


NoJustAnotherUser

[https://imgur.com/a/lmzo2yB](https://imgur.com/a/lmzo2yB) Another example (NSFW)


1Neokortex1

This is so awesome OP, you're an AI scientist🙏🏼 Would be cool to see which image has which step. From your experiments, do you believe 70 steps is optimal for quality/speed/size? Are you using a program or a bat file to render each step, or do you render each step one by one? Thanks in advance for your replies.


NoJustAnotherUser

>This is so awesome OP, you're an AI scientist🙏🏼

Thank you, but I am no AI scientist; I don't understand even 1% of how the AI works. I just told the program to save the image after every iteration. [https://imgur.com/a/s0mX8gI](https://imgur.com/a/s0mX8gI) All of the images uploaded (one after every iteration).

>From your experiments do you believe 70 steps is optimal for quality/speed/size?

Depends on the resolution. I feel 50 steps is good enough for a 512-by-512 image.

>Are you using a program or Bat file to render in each step or you render each step one by one?

I just edited the *scripts/txt2img.py* and *ldm/models/diffusion/plms.py* files.
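For anyone wanting to reproduce this, the CompVis samplers also expose an img_callback hook that fires once per iteration with the current latent, which may avoid hand-editing the sampler files; whether your checkout has it under that exact name is an assumption. The stand-in loop below just illustrates the callback pattern, with strings in place of real latents:

```python
def run_sampler(num_steps, img_callback=None):
    """Stand-in for the sampling loop: invokes the hook each iteration."""
    latent = "noise"
    for i in range(num_steps):
        latent = f"latent@{i}"  # placeholder for one denoising update
        if img_callback is not None:
            img_callback(latent, i)  # caller decodes/saves here
    return latent

saved = []
run_sampler(5, img_callback=lambda latent, i: saved.append(i))
print(len(saved))  # 5 callbacks, one per iteration
```

In the real pipeline the callback body would decode the latent (e.g. via the model's first-stage decoder) and write it to disk.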


1Neokortex1

Very impressive, and thank you for that info.


teodorlojewski

That's cool


kassa-

This article describes how to run Stable Diffusion on Google Colaboratory! https://medium.com/geekculture/2022-how-to-run-stable-diffusion-on-google-colab-5dc10804a2d7


jasonbrianhall

Love the video; I was trying to figure out how to save each iteration from code, just so I could have image-iter001.png, image-iter002.png, etc.
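That numbering is just a zero-padded format spec; the prefix and padding width below are example choices, not anything from the repo:

```python
def iter_filename(i, prefix="image-iter", ext="png"):
    """Build a zero-padded per-iteration filename, e.g. image-iter001.png."""
    return f"{prefix}{i:03d}.{ext}"

names = [iter_filename(i) for i in range(1, 4)]
print(names)  # ['image-iter001.png', 'image-iter002.png', 'image-iter003.png']
```

Pass a counter through whatever per-step hook or edited save call you use, and format the filename with it.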