I'd also like to know. One thing is for sure: the base is Flux.1-schnell with LoRAs. Except it still accepts negative prompts to some degree (weakly), which Flux shouldn't.
Koto Nunchuks also has a negative prompt, but Chroma seems to deal with the fat, ugly lip contouring a lot better. My first impression.
The best ones are installed locally. If you have ComfyUI or a similar interface, you can install ForgeFlux Kontext, which is the best and latest model in the field of image-to-image, imho.
Same prompts lead to the same results, give or take. The OP said FACES and EVERY TIME in caps, suggesting he gets the same faces every time he uses the i2t, which is not true. I was just trying to point out that there are ways to describe many details and change that.
Check this, if that's what you were looking for: https://perchance.org/mytestgen You could do it via dynamic lists, but that would take 100 more words than the example. If it is indeed what you were looking for, let me know if you need any explanations.
Sure, the camera brings even an anime image to life. Just a front-on picture every time is boring. Try describing how the camera is looking at the characters or the scene, and you'll notice the difference in the dynamics.
A simple example,
This picture tells a story rather than just showing an anime character, and it's only three short sentences about the camera work, angle, and focus.
It is achieved by the sum of all the descriptions and the word choice; one phrase or sentence is usually not enough. For example, say you want a 2-inch fairy climbing, um, a bottle of water. The word "climb" already suggests that the fairy is small enough. If you have frame/camera-angle descriptions, you can then add phrases like "the subject takes up one-third of the frame" to give it perspective relative to the table surface. I suggest you use https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one. Find a picture in a search, then ask joy-caption to describe it in relative and superlative terms.
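To show how those pieces stack up, here is one way the fairy example could be phrased as a full prompt. This is just a sketch combining the ideas above, not a tested prompt:

```
A tiny 2-inch fairy climbing a bottle of water on a wooden table.
Low camera angle from the table surface, shallow depth of field,
the bottle in sharp focus. The subject takes up one-third of the frame.
```

Each sentence does one job: the first sets the subject and implies the scale, the second sets the camera, and the third pins down the relative size in the frame.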
Check your HTML panel. <h1>[title]</h1> references a title list that isn't defined anywhere, and update() is what gives you the [output] error. Both are part of the new-generator template. You can completely clear that and start with a [darkroom].
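If you'd rather keep the heading than clear it, a minimal fix is to define a title list in the lists panel (the name and text here are just placeholders):

```
title
  My Generator Name
```

With that list defined, the <h1>[title]</h1> in the HTML panel will resolve instead of erroring.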
You won't lose the consumable behavior on that newly created list, as you can call .consumableList on it again. Judging by your code, you figured it out!
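A minimal sketch of that idea (the list name myList is hypothetical; the `[x = ..., '']` pattern just suppresses the assignment's output):

```
myList
  apple
  pear
  plum

output
  [c = myList.consumableList, ''][c], [c] // selections from c don't repeat
  [c2 = c.consumableList, ''][c2] // .consumableList again gives a fresh consumable copy
```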
I just hit "randomize" several times in my generator and they all look like different people to me. Of course, you need to fill in a lot of details, which my generator does for you, so you don't have to type anything.
Yes, it's possible. The correct syntax would be [a = originalList.consumableList.selectMany(2)]. Then we'll need to make a list out of that slice by using createPerchanceTree() with the correct escape characters. The function works like this: [newList = createPerchanceTree("slicedList\n\ta\n\tb").slicedList] [newList]
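Putting the two steps together, here's a sketch of how the slice could be fed into createPerchanceTree(). This assumes selectMany's result supports joinItems, and originalList is whatever list you're slicing from:

```
[a = originalList.consumableList.selectMany(2), '']
[newList = createPerchanceTree("slicedList\n\t" + a.joinItems("\n\t")).slicedList, '']
[newList]
```

The joinItems("\n\t") call rebuilds the tab-indented list body that createPerchanceTree() expects, so newList then behaves like a normal two-item list.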
The AIs have come a long way, especially the recent versions that catch a lot of details, but they are just not there yet for the task you're describing when working from the prompt alone. If you really want a detailed scene of three or more people, you will need to dive into the world of LoRAs, ControlNet, and other tools in ComfyUI or a similar program. Another approach would be to make a very detailed image of each of the three characters here on Perchance, and then ask Flux.1 Kontext to put them all in one scene doing whatever you want without changing their appearance. I've had very good results with this approach, and it's much easier, too.