I'm doing this so I can use the image-to-image and inpainting features to tweak images I get from the image generator. However, I've had trouble getting it to work.
At first, I used the standard Stable Diffusion 1.4 checkpoint and got awful results when copying and pasting the prompt from the image generator website. I looked around and found that the website was apparently using FLUX.1.
I looked on Civitai for this model and found the one shown on the left-hand side of the image. However, after some errors, I found I needed a bunch of other files for interpretation or something; I'm not quite sure what they do yet (obviously, I'm very new to all this). These are on the right.
My question is: is this what the Perchance image generator uses? My results are still very mixed, and generation is now much slower than what I was getting from the initial Stable Diffusion model.