ComfyUI Pony workflow (Reddit)

Welcome to the unofficial ComfyUI subreddit. A lot of people are just discovering this technology and want to show off what they created.

YMMV, but lower CFG with Pony has TREMENDOUSLY reduced my frustration with it. I use a lot of the merges on CivitAI, and one other key I've found is using a low CFG.

Anyone have a workflow to do the following?

Besides that, if you have a large workflow built out but want to add in a section from someone else's workflow: open the other workflow in another tab, hold Shift and click each node to select a bunch (or hold Ctrl and drag around the group of nodes you want to copy), press Ctrl+C, then in your own workflow press Ctrl+V.

Number 1: This will be the main control center.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for about a month and it's finally ready, so I also made a tutorial on how to use it. Mine do include workflows, for the most part, in the video description.

A sample workflow for running CosXL Edit models, such as my RobMix CosXL Edit checkpoint.

Specializes in adorable anime characters. Very proficient in furry, feet, and almost every kind of NSFW content.

It's become such a different model that most of the LoRAs don't work with it. I'm not sure if IP Adapter will.

What samplers should I use? How many steps? What am I doing wrong?

Some people there just post a lot of very similar workflows simply to show off the picture, which makes it a bit annoying when you want to find new and interesting ways to do things in ComfyUI.

Comfy Workflows. There are plenty of ways; it depends on your needs; too many to count. Nothing fancy.
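The low-CFG advice is easier to reason about once you see what the CFG scale actually does. Here is a minimal sketch of classifier-free guidance with toy numbers (not real model outputs; the helper name is mine): higher scales amplify the difference between the conditioned and unconditioned predictions, which is why Pony merges can look fried at default scales and calm down at low values.

```python
def cfg_combine(uncond, cond, scale):
    """Classifier-free guidance: push the prediction away from the
    unconditional result and toward the prompt-conditioned one.
    Higher `scale` amplifies the (cond - uncond) difference, which is
    why very high CFG tends to oversaturate and fry fine detail."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]

# Toy per-channel predictions (hypothetical numbers, not real model output):
uncond = [0.1, 0.2, 0.3]
cond   = [0.2, 0.4, 0.1]

low  = cfg_combine(uncond, cond, 2.5)   # gentle pull toward the prompt
high = cfg_combine(uncond, cond, 12.0)  # strong pull, larger overshoot

print([round(x, 3) for x in low])  # [0.35, 0.7, -0.2]
```

Note how even at scale 2.5 one channel already overshoots past zero; at 12.0 every channel lands far outside the original range, which is the numerical face of the "fried" look.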
Using the basic Comfy workflow from Hugging Face, the sd3_medium_incl_clips model, the latest version of Comfy, all default workflow settings, on an M3 Max MBP, all I can produce are these noise images.

I just released version 4.0 of my AP Workflow for ComfyUI.

In your workflow, HandsRefiner works as a detailer for properly generated hands; it is not a "fixer" for wrong anatomy. I mention it because I have the same workflow myself (unless you are trying to connect some depth ControlNet to that detailer node).

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed by the number of ways to do upscaling.

Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

It'll add nodes as needed if you enable LoRAs or ControlNet, or want the result refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to.

It was one of the earliest to add support for Turbo, for example. Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

2 - At least with Pony, Hyper seems better.

So, up until today, I figured the "default workflow" was still always the best thing to use.

It shines with LoRAs, but I personally haven't used Pony itself for months.

A higher clip skip in A1111's terms (lower, i.e. more negative, in ComfyUI's terms) equates to LESS detail from CLIP (not to be confused with detail in the image).

ComfyUI is usually on the cutting edge of new stuff. I really, really love how lightweight and flexible it is.

Take a LoRA of person A and a LoRA of person B and place them into the same photo (SD1.5, not XL). I know you can do this by generating an image of two people using one LoRA (it will make the same person twice) and then inpainting the face with a different LoRA and using OpenPose / regional prompter.
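On the clip-skip convention: the A1111-to-ComfyUI mapping is mechanical, and a tiny helper (the function name is mine; the correspondence is the commonly cited one) makes it explicit. A1111's positive "Clip skip" N corresponds to stop_at_clip_layer = -N on ComfyUI's CLIPSetLastLayer node:

```python
def a1111_clip_skip_to_comfy(clip_skip: int) -> int:
    """Convert an A1111-style 'Clip skip' value (1, 2, ...) into the
    stop_at_clip_layer value used by ComfyUI's CLIPSetLastLayer node.
    A1111 counts skipped layers from the end as a positive number;
    ComfyUI expresses the same thing as a negative layer index."""
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1")
    return -clip_skip

print(a1111_clip_skip_to_comfy(1))  # -1 (use the full CLIP stack)
print(a1111_clip_skip_to_comfy(2))  # -2 (what Pony-family checkpoints expect)
```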
Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes. I wanted a very simple but efficient and flexible workflow.

I've color-coded all related windows so you always know what's going on. Hopefully this will be useful to you.

A CosXL Edit model takes a source image as input alongside a prompt, and interprets the prompt as an instruction for how to alter the image, similar to InstructPix2Pix.

I don't have much time to type, but: the first option is to use a model upscaler, which works on your image output, and you can download those from a website that has dozens of models listed; a popular one is ESRGAN 4x.

It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units which are represented as nodes.

Just load your image and your prompt, and go.

Also, if this is new and exciting to you, feel free to post.

Hello good people! I need your advice, or some ready-to-go workflow, to recreate this one A1111 workflow in Comfy. Step 1: generating images while adding some (2-3) additional LoRAs.

Aug 2, 2024 · You can then load or drag the following image into ComfyUI to get the workflow: this image contains the workflow (https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_dev_example.png).

It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

Run any ComfyUI workflow w/ ZERO setup (free & open source). Try now.

Help me make it better!
AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder) - Tutorial | Guide. I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

And above all, BE NICE.

Flux Schnell is a distilled 4-step model.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

If you see any red nodes, I recommend using ComfyUI Manager's "install missing custom nodes" function. (I've also edited the post to include a link to the workflow.)

Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face / hand / manual area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only the tile ControlNet blurs the image).

Pony is weird.

Starting workflow.
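Drag-and-drop failing with incomplete metadata usually means the PNG's embedded text chunks were stripped when the image was re-encoded (many hosts and chat apps do this). ComfyUI stores the graph as JSON in the PNG's text chunks, under the keys "workflow" and "prompt". Below is a stdlib-only sketch for checking whether a file still carries them; it reads plain tEXt chunks only (compressed zTXt or international iTXt chunks would need extra handling), and the function name is mine.

```python
import json
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def comfy_metadata(png_bytes):
    """Scan a PNG's tEXt chunks for ComfyUI's 'workflow'/'prompt' keys.
    If neither is present, drag-and-drop into ComfyUI will load nothing."""
    assert png_bytes.startswith(PNG_SIG), "not a PNG"
    found, pos = {}, len(PNG_SIG)
    while pos < len(png_bytes):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            if key in (b"workflow", b"prompt"):
                found[key.decode()] = json.loads(value)
        pos += 8 + length + 4
    return found
```

Run it on a downloaded image: an empty dict back means the metadata was stripped, not that your ComfyUI is broken.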
Share, discover, & run thousands of ComfyUI workflows.

You can't change clip skip and get anything useful from some models (SD2.0 and Pony, for example; Pony, I think, always needs 2) because of how their CLIP is encoded.

Any suggestions? That's awesome! ComfyUI has been one of the two repos I keep installed, the SD-UX fork of Auto and this one. Nobody needs all that, LOL. Just my two cents.

It's becoming very overwhelming and counterproductive to my workflow.

Please keep posted images SFW. Please share your tips, tricks, and workflows for using this software to create your AI art. Belittling their efforts will get you banned.

Here goes the philosophical thought of the day: yesterday I blew up my ComfyUI (gazillions of custom nodes had wrecked it; half of the workflows did not work, because the dependency differences between the packages those workflows required were so huge that I basically had to do a full-blown reinstall).

For a dozen days, I've been working on a simple but efficient workflow for upscaling. I share many results, and many people ask me to share the workflow.

Its default workflow works out of the box, and I definitely appreciate all the examples for different workflows.

This workflow can use LoRAs and ControlNets, enabling negative prompting with KSampler, dynamic thresholding, inpainting, and more. So, I just made this workflow for ComfyUI.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

May 19, 2024 · Download the workflow and open it in ComfyUI.

3 - At least to my eyes, a 2-step LoRA @ 5 steps is better than a 4-step LoRA @ 5 steps.

Not a specialist, just a knowledgeable beginner.

The UI feels professional and directed. The graphic style, I think, was 3DS Max.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img2img and txt2img.
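On the pixel-space "model upscaler" family mentioned in these threads: conceptually it sits after VAE decoding, taking pixels in and producing more pixels out, unlike latent upscaling, which happens before the second sampling pass. A toy nearest-neighbor upscaler shows the plumbing; an ESRGAN-family model replaces the dumb pixel duplication below with learned detail, but the in/out contract is the same.

```python
def upscale_nearest(img, factor=2):
    """Nearest-neighbor upscale of a 2-D grid of pixel values: duplicate
    each pixel `factor` times horizontally and each row `factor` times
    vertically. No new information is created, only more samples."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

img = [[1, 2],
       [3, 4]]
print(upscale_nearest(img))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

That "no new information" limitation is why many workflows follow a model upscale with a low-denoise second KSampler pass, letting the diffusion model reinvent fine detail at the new resolution.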
I have a question about how to use Pony V6 XL in ComfyUI: SD generates blurry images for me.

It's not for beginners, but that's OK.

If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or simply "nodes". If it has been used that way for a long time, that's unfortunate, because by now it has become entrenched.

I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description because my account awaits verification.

What's New in 4.0?

Hey everyone, we've built a quick way to share ComfyUI workflows through an API and an interactive widget.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

The problem with relying on ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it.

So I'm happy to announce today: my tutorial and workflow are available. I've been especially digging the detail in the clothing more than anything else.

BTW, 1-step LoRAs are unusable on both. 2-step LoRAs @ 2 steps are also very bland; 4-step LoRAs @ 4 steps, same. This is gonna replace lightning LoRAs when used with Pony, at least for me. I hope that having a comparison was useful nevertheless.

Less is more approach.

What I'm thinking of is setting up a workflow that uses Pony, then running it back again for a second pass with IPAdapter img2img using the image from the Pony pipeline, and seeing how that goes.

But it's reasonably clean to be used as a learning tool, which is and will always remain the main goal of this workflow.

I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas they simply drop PNGs into an empty ComfyUI.

I call it 'The Ultimate ComfyUI Workflow': easily switch from txt2img to img2img, built-in refiner, LoRA selector, upscaler & sharpener.

Like 2.5-5 most of the time.

ComfyUI is a completely different conceptual approach to generative art.

You can just use someone else's 0.9 workflow (the one from Olivio Sarikas' video works just fine); just replace the models with the 1.0 ones.

I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area-composition ones. The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face.

ComfyUI needs a standalone node manager, IMO: something that can do the whole install process and make sure the correct install paths are being used for modules.

Allo! I am beginning to work with ComfyUI, moving from A1111. I know there are so, so many workflows published to Civitai and other sites; I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and I'm hoping someone can help me by pointing me toward a resource for finding some of the better-developed Comfy workflows.

I need an img2img Pony workflow.

Mar 23, 2024 · My review of Pony Diffusion XL: skilled in NSFW content. Offers various art styles.

After all, the default workflow still uses the general CLIP encoder, CLIPTextEncode.

Ending workflow.

Pony Diffusion and EpicRealism seem to be my "go to" options, but then I try something like Juggernaut or RealVis and I'm back to racking my brain.

Hi. For your all-in-one workflow, use the Generate tab.

Hi, is there a tutorial on how to build a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know where to go from there.

Hey Reddit! I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/ How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. It's simple and straight to the point.
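Sharing and queueing workflows programmatically is easy to try against a stock local install: ComfyUI exposes an HTTP endpoint, POST /prompt, that accepts a graph exported with "Save (API Format)". A sketch follows; the helper names are mine, and it assumes a server running at ComfyUI's default local address.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # ComfyUI's default local address

def build_payload(graph: dict, client_id: str) -> bytes:
    """Wrap an API-format graph in the JSON body that ComfyUI's
    POST /prompt endpoint expects."""
    return json.dumps({"prompt": graph, "client_id": client_id}).encode()

def queue_prompt(graph: dict, client_id: str = "example-client") -> bytes:
    """Queue the graph on a running local ComfyUI server and return
    the raw JSON response (contains the queued prompt_id)."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(graph, client_id),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # requires ComfyUI running
        return resp.read()
```

With this in place, a shared workflow JSON can be loaded with json.load and handed straight to queue_prompt, which is roughly what workflow-sharing widgets do behind the scenes.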
ComfyUI's inpainting and masking ain't perfect.

That way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

I'm finding it hard to stick with one, and I'm constantly trying different combinations of LoRAs with checkpoints.