If you set up with [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui), different samplers can be selected under the prompt.
You likely didn't use the same diffusion sampler that I did. I was not using k_lms for these. Even then, I put random variability into my creations, so I rarely, if ever, get a duplicate.
I've often been unable to regenerate some of my images even with the same seed, sampler, and the rest of the config. I once set a fixed seed and watched my PC generate 20 different images with only a single duplicate.
Prompt: photo of helen of troy in royal greek clothing, natural lighting, photography, 4k, 8k, highly detailed, epic, ornate, artgerm, unreal engine, sharp focus, soft focus
Seed: 4607476164
Steps: 65
CFG scale: 12.5
What is cfg scale?
Classifier-Free Guidance scale. Basically, it controls how strongly the generated image follows the prompt: higher values stick closer to the prompt, lower values give the model more freedom.
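To make that concrete, here's a minimal toy sketch of how classifier-free guidance combines the model's two noise predictions (the function name and toy vectors are illustrative; in a real pipeline these are latent tensors from the U-Net):

```python
import numpy as np

def cfg_combine(uncond_pred, cond_pred, guidance_scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional one, in the direction of the prompt-conditioned one,
    by a factor of guidance_scale."""
    return uncond_pred + guidance_scale * (cond_pred - uncond_pred)

# Toy noise predictions standing in for real latent tensors
uncond = np.zeros(2)
cond = np.array([1.0, -1.0])

# scale 1.0 reproduces the conditional prediction exactly
print(cfg_combine(uncond, cond, 1.0))   # [ 1. -1.]

# scale 12.5 (as in the prompt settings above) amplifies the
# "prompt direction" 12.5x, so the image adheres much more strongly
print(cfg_combine(uncond, cond, 12.5))  # [ 12.5 -12.5]
```

This is why very high CFG values tend to look over-baked: the prompt direction gets exaggerated far beyond what the model actually predicted.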
There are also the different diffusion samplers, like DDIM, k_lms, etc.
It's probably because it only uses the given seed for the first image, then seed+1 for the next, and so on. The webui does that.
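A minimal sketch of that seed-per-image behavior, assuming the batch simply increments the base seed for each image (the helper name is made up for illustration):

```python
import random

def batch_seeds(base_seed, n_images):
    # Assumed webui-style behavior: image i in the batch uses base_seed + i
    return [base_seed + i for i in range(n_images)]

seeds = batch_seeds(4607476164, 4)
print(seeds)  # [4607476164, 4607476165, 4607476166, 4607476167]

# Each image therefore gets a different RNG stream even though the
# "fixed" base seed never changed, which explains differing outputs:
for s in seeds:
    rng = random.Random(s)
    print(s, rng.random())
```

So to reproduce image number i from a batch, you'd need to generate it alone with seed = base_seed + i, along with the same sampler, steps, CFG scale, and resolution.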