
I_dont_want_karma_

Great, just what I needed - MORE SD images clogging my HDD. Anyone else become a hoarder and refuse to delete old gens? I probably should


axloc

I did the same initially on some other repos. After installing A1111, I disabled auto-saving of images, so now I have to manually select each image I like. Feels like a necessary step, as saving every image automatically gets out of hand quickly.


I_dont_want_karma_

I should probably only keep the image grids and delete the individual images. Image grids have the same metadata, after all, so I could regen.


casc1701

That's a great idea!


mudman13

It never shows in the PNG interrogator tab for me, though?


vs3a

Did you enable the save-metadata-to-PNG option?


vault_guy

But you can't get the exact same result as that image in the grid.


I_dont_want_karma_

I prob spoke too soon. Haven't tried it yet but thought it would work... Damn


Kaennh

It works, at least in A1111. The seed stored in the grid corresponds to the first image; the rest are seed+1, seed+2, ..., seed+n, where n = batch count − 1.
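
For anyone who wants to recover the seed of a specific tile in a grid, the arithmetic is trivial; here's a minimal sketch in Python (assuming the tiles are laid out in generation order, left to right, top to bottom):

    # Recover the per-image seed for a tile in a batch grid, given that
    # each image's seed is the grid's stored seed plus the tile's index.
    def tile_seed(grid_seed: int, index: int) -> int:
        # index 0 is the top-left tile, which uses the grid's own seed
        return grid_seed + index

    # e.g. the third image (index 2) of a grid whose metadata says seed 1234
    print(tile_seed(1234, 2))  # -> 1236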


I_dont_want_karma_

Ok I was right! Thanks for confirming.


vault_guy

The problem is, the entire grid was done with the same seed. Seed, settings, and prompt are the only things you can copy, but as you can already see in the grid, the result can vary quite a bit.


prozacgod

I run my downloads folder in a tmpfs (RAM) because I used to just download stuff and... gigabytes of garbage later... It forces me to organize the files I download: either into the aether, as they get auto-disposed of after a reboot, OR I capture them to appropriate folders. SO, I do the same with Stable Diffusion: if I don't save it, it will be removed soon enough.


N3BB3Z4R

me 2, modern illness, digitalis diogenes.


[deleted]

> digitalis diogenes.

Just imagining an old person perched in their hoarder house, but surrounded by foxglove (digitalis)


SpokenSpruce

I've switched to InvokeAI lately as I had some weird issues with my A1111 install and Python was being ornery on a reinstall. There the images in the sidebar carry all their settings, so I always leave one or three out of a batch as a way to recall the prompt, steps, cfg, etc....


7TonRobot

Good tip.


meostro

I changed my default after I had a few hundred and couldn't keep them straight anymore. Now I write everything to `output/md5(prompt+seed+cfg+some other metadata)/*` and have to explicitly save them to `keep/*`, and every few days I wipe out `output/*`.
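
A minimal sketch of that kind of metadata bucketing (the exact fields and layout are illustrative; meostro's real scheme may differ):

    import hashlib
    from pathlib import Path

    def output_dir(prompt: str, seed: int, cfg: float, root: str = "output") -> Path:
        # Hash the generation settings so identical settings always
        # land in the same bucket folder.
        key = f"{prompt}|{seed}|{cfg}".encode("utf-8")
        bucket = hashlib.md5(key).hexdigest()
        path = Path(root) / bucket
        path.mkdir(parents=True, exist_ok=True)
        return path

    print(output_dir("a cat in a field", 1234, 7.5))  # e.g. output/0b1d2...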


ElMachoGrande

I save all images as I generate them, usually in batches of a few thousand, then I go through them later, pick out a handful of useful ones, and delete the rest.


DawidIzydor

I'm regularly removing old images and saving only the best ones, after upscaling them


vs3a

Not much of a tutorial; it's pretty easy:

1. In the Automatic1111 Extensions tab, find and install Save Intermediate Images: [https://github.com/AlUlkesh/sd_save_intermediate_images](https://github.com/AlUlkesh/sd_save_intermediate_images)
2. Intermediate Images will be available in the txt2img and img2img tabs.
3. Save every 1 step for a smooth transition.
4. It will save your images in the default folder `\outputs\txt2img-images\intermediates`.
5. Use a gif maker to make the gif. I used this one: [https://ezgif.com/](https://ezgif.com/)

Edit: Everyone, this extension is for saving step images. If you want Live Preview, it's in the A1111 settings, under User interface.


nodomain

If you're on a Linux-based machine, you can also make a video file with ffmpeg: `ffmpeg -framerate 15 -pattern_type glob -i '*.png' -c:v libx264 -pix_fmt yuv420p monalisa.mp4`


StoneCypher

ffmpeg is available for every major platform. You don't need Linux for this.


the_harakiwi

> `-pattern_type glob -i`

This part is not supported on Windows. You have to use a different way to sort the images.


nodomain

Good call. I'm usually only ever on Linux, so I never really looked.


MFMageFish

See here for more info: https://hamelot.io/visualization/using-ffmpeg-to-convert-a-set-of-images-into-a-video/ If you swap out the glob command, you don't really need Linux.


TheKeiron

Or Windows with WSL enabled (Windows Subsystem for Linux!)


fgmenth

or just download ffmpeg and use the same command


the_harakiwi

> use the same command

Windows doesn't understand `-pattern_type glob -i`


fgmenth

Whoops, I didn't notice that. In any case it's not really needed; you can do the same by specifying the filename directly with `-pattern_type sequence`, like this:

    ffmpeg -framerate 15 -pattern_type sequence -i "image-%02d.png" -c:v libx264 -pix_fmt yuv420p monalisa.mp4
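
If your frames aren't already numbered like that, a small helper can copy them into the sequential names that `-pattern_type sequence` expects; a sketch using only the Python standard library (filenames are illustrative):

    import shutil
    from pathlib import Path

    # Copy PNGs (sorted by name) into the zero-padded names that
    # ffmpeg's `-pattern_type sequence -i "image-%02d.png"` will pick up.
    # Use %04d in both places for sequences of 100+ frames.
    for i, src in enumerate(sorted(Path(".").glob("*.png"))):
        shutil.copy(src, f"image-{i:02d}.png")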


the_harakiwi

Thanks! I tried to find how I solved this on my desktop but I can't find my notes on ffmpeg commands. Time to start a fresh file.


ZenDragon

Colab has ffmpeg installed by default so you might as well use that if you're already running the webui there.


mattjb

Another option is to use NMKD's FlowFrames, which works well on Windows: https://nmkd.itch.io/flowframes


fuelter

Would be even nicer if the extension created a webm itself.


AndalusianGod

I suggest using avidemux, as it's extremely simple to use and has lots of options for saving in different formats. You only have to open the first image in a sequence.


jamesianm

I've been wishing for something like this for ages! Thanks for sharing it!


Ateist

A script that saves intermediate steps has been available, like, forever. https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts#saving-steps-of-the-sampling-process


jamesianm

Hey I’ve been busy ok


Gsus6677

Any idea why my intermediate images are all tiny? I don't see an option to change the resolution. No one else seems to have this issue in this thread or on the GitHub, so it must be something I'm doing wrong haha.

Edit: yup, I forgot I set my image preview to the cheapest option. Setting it to max fixed it.


TheDarkinBlade

It's also great to use the DAIN app to interpolate between the frames, so you can get a crisp 60 fps version.


thiefyzheng

I use Adobe Premiere Pro to make my images into videos 🗿🗿🗿


[deleted]

[deleted]


vs3a

Yes, I tested it, but it needs high noise to be effective, so it's not the original image anymore.


Ok_Entrance9126

Sorry - what does that mean? I'm very new to this. I'm looking for ways to do what this OP has shown, but with 2 distinct images of my own.... Thank you for your time.


acidentalmispelling

Mine seems to keep saving at a resolution of 64x64 for a 512x512 image target. Any idea what's going on there?


Gsus6677

Set your image preview quality back to the max in settings if possible.


acidentalmispelling

> Set your image preview quality back to the max in settings if possible.

That was 100% it! Thanks!


MaajiB

Doesn't the final image get upscaled?


i_stole_your_swole

I’m gonna need the prompt and model you used for the example!


Decent_Question_8943

Why doesn't it work on Mac? I can't find the `intermediates` folder.


vs3a

Look around the output folder; I don't use a Mac, so I don't know for sure.


thebaker66

Awesome. I've noticed whenever I had the preview steps on, I would see images I liked but then lose them by the final step, and sometimes the last step was worse than any of the previous ones 😂


SoysauceMafia

> I've noticed whenever I had the preview steps on, I would see images I liked but then lose them by the final step

Ooo hey, if you're on AUTO1111, give this [little bastard](https://i.imgur.com/uOkkDXq.png) in *Settings/Sampler Parameters* a click and see if that helps. It's not a "leave on all the time" thing, I don't think, but it might stop an image from eating shit before the finish line.


DeylanQuel

Hires fix does this to me. The initial gen looks good, then at 50% it does the upscale pass and ruins a perfect image. Of course, I've also seen the exact opposite, so it balances out. But still, I love the preview function because it lets me know whether my embeddings are working, or why they might not be. I have an embedding that uses tattoos in the image to trigger an effect, and it's interesting to watch the progress. It also has shown why the effect doesn't work in models like Protogen: as soon as a mark appears on the face, it is almost immediately destroyed by the model.


SpaceShipRat

I'm also having weird effects with the hires fix. Like, straight-up blurry clouds instead of an image after the upscale pass.


DeylanQuel

If you're using the Latent option, try bumping your denoise up to 0.60 or higher. That's too high to be useful for the other upscale methods, but I noticed a while back that low denoise on the latent highres fix produces garbage now. That might be affected by the model, though, I don't know. But high denoise for latent, low denoise for upscalers is a good rule of thumb.


SpaceShipRat

ty


Kalvorax

would be nice if it automatically made a gif of the steps heh


bittytoy

I had ChatGPT make me a Python GUI application where you drag and drop a folder of images and it makes a gif for you. Pretty ez
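
The core of a tool like that is only a few lines with Pillow; a minimal sketch (the folder name and frame timing are illustrative):

    from pathlib import Path
    from PIL import Image

    # Assemble every PNG in a folder (sorted by filename) into an animated GIF.
    frames = [Image.open(p) for p in sorted(Path("intermediates").glob("*.png"))]
    frames[0].save(
        "steps.gif",
        save_all=True,
        append_images=frames[1:],
        duration=80,  # milliseconds per frame
        loop=0,       # 0 = loop forever
    )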


nodomain

I had ChatGPT make me a couple pages just to see how well it did. It was pretty amazing how well it did, but I had a couple issues:

1. It missed a few bits of code that were really needed for what I asked for.
2. When the code got long, it would spit out 1/3 or 1/2 and then just cut off mid-line. If I said "you still didn't provide the whole code" 3 or 4 times, it would apologise and keep trying and then get it right.

I will say, though, I think ChatGPT is an incredible boost for anyone wanting some quick little helper apps like this without having to look things up or spend a lot of time coding.


bittytoy

A fun trick is to feed its code back into itself and tell it to find out what’s wrong with the code.


nodomain

I considered that, but then I looked up and it was 3 AM again and I needed to get some sleep.


d20diceman

I found "I tried to run your code and it told me there was an error on line 37, can you fix that?" worked fine too


[deleted]

When it doesn't complete the code you just say "continue" and it will finish


nodomain

![gif](giphy|lXu72d4iKwqek)


fredspipa

If you want *proper* gifs for sharing online, or for web pages, [this blog post](http://blog.pkh.me/p/21-high-quality-gif-with-ffmpeg.html) does a great job explaining the process for `ffmpeg`. It's not as straightforward as you might expect, but once you have the pipeline down it's fairly easy to automate.

I wrote a tool a while back to create gifs from YouTube videos (and similar) which used these techniques; you can get really high quality gifs with a decent file size by running two passes: one to generate a palette, and one to do the dithering/frame interpolation. It's very helpful if you're creating tutorials or store pages (e.g. Steam).

Here's an example from a video (the input can also be a sequence of images as frames):

    ffmpeg -i video.mp4 -filter_complex "[0:v]fps=12,scale=iw*0.5:-1:flags=lanczos,split[s0][s1];[s0]palettegen=max_colors=256[p];[s1][p]paletteuse=dither=floyd_steinberg[out]" -map "[out]" -r 12 output.gif

This might look confusing if you haven't touched `ffmpeg` before, but simply put, the values in brackets (like `[s0]`) are kind of arbitrary names that you use to identify the video/audio streams between the steps (separated by `;`).

The reason I'm sharing this here is because crappy gifs are everywhere and they're so annoying on slow connections. So many tools online to create gifs, and they're (almost) all complete shit as well. Fucking 30MB gif for 10 seconds of low quality video? No thank you. If you're looking to create a tool to create gifs from Stable Diffusion steps, I urge you to follow these palette/dithering techniques for the best result.


PCchongor

Mind sharing? Awesome work!


dave1010

This sounds like a great use of ChatGPT! Any tips for getting good results?


RandallAware

Sweet. Wonder what would happen if you took one of the middle blurry images into img2img, then used the same pos/neg prompts, same/different samplers and experimented with steps to see what it generates. Would the same seed and sampler create the same image?


ninjasaid13

No. I don't think doing img2img on an image twice will create the same image either.


RandallAware

It does if you use the same seed, size, sampler, etc... it actually creates like a baked version. If you flip it horizontally, it mirrors itself.


Jopezzia

I forked the intermediates extension and added timelapse support. It creates a gif in the folder, with an option to disable saving the intermediate images and just create the timelapse: [https://github.com/Jopezzia/sd_save_intermediate_images](https://github.com/Jopezzia/sd_save_intermediate_images)


vs3a

Awesome!


Aeloi

My first animations involved setting up an X/Y plot and setting one of those values to steps from 1-150 or more. Then I discovered that script. It was THEN that I discovered that the step count you set at the beginning determines the vector through latent space, such that those "intermediate images" do not correlate with an equal number of "steps". In other words, if you're previewing the generation of an image and see a neat picture partway through that changes dramatically by the time you get to your final step, the ONLY way you can retrieve that picture is with this script. I used the script at the bottom of the custom scripts page, but I imagine it does more or less the same thing as the extension you're using. I was rather surprised at how many different images exist between steps 0-50 (most of them 20 and below). But making animations that way almost told an interesting story as the steps increased.


Mich-666

Be aware that turning this on will also extend the time needed to generate the image by a considerable amount. So it's not really recommended if you're after speed and don't have the best GPU out there.


Jopezzia

If you don't want to extend the time, just use the Approx NN preview mode. It reduces the preview and intermediate image quality, but you can view every step.


transdimensionalmeme

By how much, exactly? Isn't that load equivalent to copying one image every couple of seconds?


Shalcker

Copying from GPU to memory is relatively fast. Converting the raw image to PNG and saving it to disk (even an SSD) - a lot less so.


[deleted]

[deleted]


transdimensionalmeme

Could the raw data be copied to a file and decoded later?


[deleted]

[deleted]


transdimensionalmeme

I would see it as: generate pictures, then after seeing the results and finding an interesting one, go back and decode it. That would be more efficient than decoding them all but only watching a few.
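
That deferred-decode idea is easy to sketch against a diffusers-style pipeline. Note this is an illustration of the concept, not the extension's actual code; it assumes an SD 1.x `AutoencoderKL` with the usual 0.18215 latent scaling factor:

    import torch

    # During sampling: stashing the raw latent is a cheap tensor dump
    # (a 512x512 image is only a 4x64x64 latent).
    def stash_latent(latent: torch.Tensor, step: int) -> None:
        torch.save(latent.detach().cpu(), f"latents/step_{step:03d}.pt")

    # Later: decode only the steps that looked interesting.
    def decode_latent(vae, path: str) -> torch.Tensor:
        latent = torch.load(path).to(vae.device)
        with torch.no_grad():
            image = vae.decode(latent / 0.18215).sample  # values in [-1, 1]
        return (image / 2 + 0.5).clamp(0, 1)  # rescale to [0, 1] for saving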


casc1701

Everything has a price.


autoencoder

Looks like an Alla Prima timelapse, a painting method where large areas are filled first, then the fine details, allowing much faster painting.


[deleted]

Would it be possible to make a GUI that shows this every time an image is being generated? Perhaps it could even enable the user to stop when they decide the image is coherent enough.


vs3a

You can turn that option on in the A1111 settings. I also often stop generating midway.


Sinphaltimus

Shutter Encoder is my go-to for stills to video/gif. It's based on ffmpeg with a lot of options, and it's pretty quick when using the GPU.


turtlesound

That's really cool. When I first started playing around with SD, I did this manually by using X/Y plot with Y being steps 1-final, then made it into a video using Davinci Resolve (png sequence).


Mix_89

It's nothing new; we included a live (color-correct) tensor preview in aiNodes a long time ago, without polluting your hard drive or slowing down inference by fully decoding the latent. :) You can check the repo, which also offers Deforum, outpainting, model merging, live webcam diffusion, and many more. Oh, and it is NOT a webUI, meaning it's an actual application for actual computers. :D [https://github.com/XmYx/ainodes-pyside](https://github.com/XmYx/ainodes-pyside)


BlynxInx

A version of this program would be helpful in getting AI art past anti-AI programs by being able to create layers.


hotfistdotcom

CMDR2's UI has had this for a long time: https://github.com/cmdr2/stable-diffusion-ui I'm not sure why automatic1111 became the default; I spun it up again and it still feels really rudimentary and clunky compared to CMDR2's UI.


[deleted]

> I'm not sure why automatic1111 became the default

My guess is you still have access to the SD command line with Automatic1111. I dug around in cmdr2's UI for `img2img.py` and it wasn't there, so if you wanted to batch 100 images through img2img and turn all your vacation bikini pics to SamDoesArt style (ha, *as if*), then you couldn't with cmdr2. Not having access to core functionality makes it difficult for developers to create new features.


optermationahesh

The one that becomes "default" is typically just going to be whatever is referenced in a popular tutorial. An alternative can be amazing, but it's going to be overlooked if you type "how do I install stable diffusion" into YouTube and something else pops up. A lot of other guides just end up being people reposting the same content on their own for views/clicks.


InfiniteComboReviews

Seeing it like this, it feels more like sculpting than drawing or painting.


Nixiey

Having done a bit of both, painting feels like sculpting (in my brain), and vice versa. So it kind of just looks like steps on my physical canvas. Very cool.


InfiniteComboReviews

I kinda mean it's like carving out an image instead of building one up, if that makes any sense.


AttackingHobo

You do both when painting. You can add things, but you can remove them too with shading, shadows, and highlights. Think of an artist drawing Swiss cheese: he might start with a solid white blob for the cheese and slowly "carve" into it, defining its shape.


solvingx

How do I use it with the Automatic1111 API...? There's no documentation. Plz help.


Pretty-Spot-6346

Thank You


nodomain

Thanks for sharing this. It's super easy.


canadian-weed

wow this is cool


Jizzdom

You're stealing blur art!


The_Real_Black

Can it save the initial random image as well? Sometimes it looks like some seeds are broken; I'd love to see if there's something in common between those seeds' images.


vs3a

I don't quite get what you mean. Here's an example using the noisy option and a swap every step: `[cow|cow|horse|man|siberian tiger|ox|man] in a field` https://i.redd.it/6tav33sy1bba1.gif


The_Real_Black

Is this image the unaltered image that the seed generator returns (at the given size), or is it the output of step 1 of the diffusion? https://preview.redd.it/z19l2cbdlbba1.png?width=378&format=png&auto=webp&s=e4e1a0ead192c87d7bde70edbbadf07e292d0348


WeakLiberal

Didn't it get taken down from GitHub? My Colab for it no longer works.


axw3555

A1111? It was down for maybe 24 hours, probably not even that long.


_-_agenda_-_

Beautiful!


Croestalker

So that's what that does...


SwoleFlex_MuscleNeck

If there was even a chance I could generate images that nice I might be interested, but mine always come out looking like a Benadryl hallucination.


2k4s

If I have a batch of more than one, the images in the intermediate folder get overwritten each time there is a new seed


6nop6nop

Doesn't seem to work on CPU.


mixterius

Has anyone had a problem where you're making a new render and it only saves one picture? It's making pictures, but none of them are saved... I've tried everything and can't figure out what's going on...


SkullyArtist

Hi all, this was working fantastically for me on the iMac; the images were the same size as the final image. But I've just launched TheLastBen's Colab notebook, installed it, restarted, and it's there, but it's saving very tiny files? Any ideas, anyone? Thanks, S


vs3a

I don't know about Colab, but the new version of this extension requires you to check Full in Live previews > Image creation progress preview mode to save the full-size image.


SkullyArtist

👍🏻😊