Generate your own Stable Diffusion Images
In celebration of our first member I have set up a space for ComfyUI to be used by our members. ComfyUI is an image generator. If you don't know how to use it, don't worry. It has a great feature that lets you load the state an image was generated in by extracting the data from the image and rebuilding the pipeline. Long story short, just drag and drop the image attached to this post into ComfyUI (make sure it gets dropped on the background) and the program will do the setup for you. Now just hit Queue and you're generating art.

Parameters to play with:

- Prompt: the text fields on the board. They are often inside the nodes classified as a "Text Encoder" and are what the model uses to interpret your request. Focus on the positives and break your concept into smaller chunks separated by commas. Ex. "Beautiful sunset, blue skies, sandy beach, ocean lapping into shore, award winning illustration, very detailed, bold linework, bright saturated colors"

- CFG: you can find this on the node called "Sampler," often a KSampler. This controls how closely the model tries to adhere to your request. A higher number means it will stick to your prompt; a lower number means it will be more creative. There are diminishing returns though: past a certain point the image will look "burnt" or "overcooked," basically overexposed and ugly. 7-9 is a good starting range.

- Denoise: this is also on the sampler. It controls how much noise (think of static on a television, but the static is the starting image) is taken away during the process. It is expressed as a fraction rather than a percent (just add two zeros to get the percent): 1 means no noise is left over, 0 means the noise is left untouched. I usually go for 0.8-0.9 on an image generated from text and 0.1-0.3 for image to image.

- Steps: the number of passes the model takes over the image. Each pass removes a bit more noise. The more passes, the clearer the image gets, but like CFG it has diminishing returns. If it passes over an image with no noise left it will start damaging the image, removing something that is not there. I like 15-25 with regular Stable Diffusion models and 35-50 with XL models.

- Sampler: think of these as kind of like brushes; they give different textures and other characteristics that are not easily definable. I like ddim, euler and dpmpp_3m_sde_gpu (the name rolls off the tongue, doesn't it?), but play around and see what works and what doesn't.

- Scheduler: these are the algorithms that decide when and how much noise to take out. I personally almost always stick to exponential, but you can get good results with any of them; it depends on your other settings. The best way to figure it out is to play with the knob and hit the button to see what comes out.
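If you ever drive ComfyUI from a script instead of the web UI, the same knobs show up as fields on the KSampler node in its API-format workflow JSON (the one you get from "Save (API Format)"). Here is a minimal sketch of a single KSampler node; the node id "3" and the node ids it links to ("4" through "7") are made-up placeholders for illustration:

```python
import json

# One KSampler node as it might appear in a ComfyUI API-format workflow.
# The numeric node ids are placeholders; a real exported workflow has
# its own numbering, plus the model/CLIP/latent nodes these link to.
ksampler = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 42,                  # fixed seed -> reproducible image
            "steps": 20,                 # 15-25 for regular SD, 35-50 for XL
            "cfg": 8.0,                  # 7-9 is a good starting range
            "sampler_name": "euler",     # or "ddim", "dpmpp_3m_sde_gpu"
            "scheduler": "exponential",
            "denoise": 1.0,              # 1.0 for text-to-image,
                                         # 0.1-0.3 for image-to-image
            "model": ["4", 0],           # links are [other_node_id, output_index]
            "positive": ["5", 0],
            "negative": ["6", 0],
            "latent_image": ["7", 0],
        },
    }
}

print(json.dumps(ksampler, indent=2))
```

Editing those numbers in the JSON and re-queueing it has the same effect as turning the knobs on the board.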
Stable Diffusion and ComfyUI
Learning to generate art can be daunting for the uninitiated. With so many settings and models, it's hard to know what does what. Well, this library will not make it any easier. It is a modular, flow-based user interface for connecting different parts of an art generation pipeline together. If you want to be thrown into the deep end: it is a bit of a puzzle to get going, but it is by far the most powerful and customizable web interface for Stable Diffusion. I will be doing a deep dive on its components, what plugs into what, and some baseline settings to create amazing works of art in seconds. You can see some examples of art pipelines I have created. https://github.com/ltdrdata/ComfyUI-Manager.git
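Because ComfyUI serves an HTTP API alongside the web UI, a saved API-format workflow can also be queued from code by POSTing it to the /prompt endpoint. A small sketch using only the standard library; the server address is an assumption (the default a local ComfyUI instance listens on), and the tiny workflow dict is a placeholder:

```python
import json
import urllib.request

# Assumed default address of a locally running ComfyUI instance.
SERVER = "http://127.0.0.1:8188"

def build_queue_request(workflow: dict) -> urllib.request.Request:
    """Wrap an API-format workflow in the JSON body that /prompt expects."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"{SERVER}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Building the request has no side effects; actually sending it needs a
# running ComfyUI instance:
#   urllib.request.urlopen(build_queue_request(my_workflow))
req = build_queue_request({"1": {"class_type": "Note", "inputs": {}}})
print(req.full_url)
```

This is the same queueing action as hitting the Queue button, just scriptable.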
Agent Artificial
Public group
A community for applied AI. Learn to design, build and deploy AI pipelines into your business workflows. Learn with useful projects.