ComfyUI workflow directory examples (Reddit)

Welcome to the unofficial ComfyUI subreddit. Please keep posted images SFW.

What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own. It is a completely different conceptual approach to generative art: it encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units, which are represented as nodes.

SDXL Pipeline: a basic SDXL image generation pipeline with two stages (a first pass and an upscale/refiner pass) and optional optimizations. For your all-in-one workflow, use the Generate tab; that way, the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. It has backwards compatibility with running existing workflows.

Add the SuperPrompter node to your ComfyUI workflow. Configure the input parameters according to your requirements, connect the SuperPrompter node to other nodes in your workflow as needed, and execute the workflow to generate text based on your prompts and parameters.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. https://youtu.be/ppE1W0-LJas - the tutorial.

All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

ControlNet and T2I-Adapter examples: note that in these examples the raw image is passed directly to the ControlNet/T2I adapter. Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format, like depth maps, canny maps, and so on, depending on the specific model, if you want good results.

It's nothing spectacular, but it gives good, consistent results. It looks freaking amazing! Anyhow, here is a screenshot and the .json of the file I just used; ignore the prompts and setup. Thanks.

Release: AP Workflow 9.0 for ComfyUI, now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand-new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, two types of automatic image selectors, and the capability to automatically generate captions for an image directory.

ComfyUI's inpainting and masking aren't perfect. I want a ComfyUI workflow that's compatible with SDXL, with the base model, refiner model, hi-res fix, and one LoRA all in one go.

Breakdown of workflow content: I tried to keep the noodles under control and organized so that extending the workflow isn't a pain. Is there a way to load the workflow from an image within ComfyUI? I am very interested in shifting from automatic1111 to working with ComfyUI. I have seen a couple of templates on GitHub and some more on civitAI; can anyone recommend the best source for ComfyUI templates? Is there a good set for doing standard tasks from automatic1111? Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

I just set up ComfyUI on my new PC this weekend; it was extremely easy. Just follow the instructions on GitHub for linking your models directory from A1111: it's literally as simple as pasting the directory into the extra_model_paths.yaml file. Under ./ComfyUI you will find the file extra_model_paths.yaml.example; edit the .example (text) file with your favorite editor, then rename and save it as .yaml instead of .yaml.example. In the standalone Windows build you can find this file in the ComfyUI directory. It should look like this:
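Roughly like this, anyway - a minimal sketch of the a111 section; the exact keys are documented in the comments of your own extra_model_paths.yaml.example, and the base path below is a placeholder for your own webui install:

```yaml
# extra_model_paths.yaml -- point ComfyUI at an existing A1111 model tree.
# base_path is a placeholder; set it to your own stable-diffusion-webui folder.
a111:
    base_path: /path/to/stable-diffusion-webui/

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```

Restart ComfyUI after saving the file so the extra search paths get picked up.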
I downloaded the example IPAdapter workflow from GitHub and rearranged it a little bit to make it easier to look at, so I can see what the heck is going on. Next, we need to advise ComfyUI about the above folder, and again that requires some basic Linux skills; otherwise, https://www.bing.com/ :) I'll also share the inpainting methods I use to correct any issues that might pop up.

I'm using ComfyUI portable and had to install it into the embedded Python install; going to python_embedded and using python -m pip install compel got the nodes working. ComfyUI needs a stand-alone node manager, IMO - something that can do the whole install process and make sure the correct install paths are being used for modules. The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it. I found that sometimes simply uninstalling and reinstalling will do it. It seems also that what order you install things in can make the difference.

2/ Run the step 1 workflow ONCE - all you need to change is to put in where the original frames are and the dimensions of the output that you wish to have (for 12 GB VRAM, the max is about 720p resolution). [If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.] I also had issues with this workflow with unusually sized images. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

To get started with AI image generation, check out my guide on Medium. How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow. So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

If anyone else is reading this and wanting the workflows, here's a few simple SDXL workflows using the new OneButtonPrompt nodes, saving the prompt to file (I don't guarantee tidiness). Well, I feel dumb - that's the one I'm referring to. If I understand correctly, the best (or maybe the only) way to do it is with the plugin using ComfyUI instead of A4.

Hi u/Critical_Design4187, it's definitely an active work in progress, but the goal of the project is to be able to support/run all types of workflows, with a ".json" export format that is designed to have 100% reproducibility. It uses the built-in ComfyUI API to send data back and forth between the ComfyUI instance and the interface.
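That round trip is easy to reproduce yourself - a minimal sketch, assuming a stock local instance on 127.0.0.1:8188 and a workflow exported through "Save (API Format)" (the file name is a placeholder):

```python
import json
import urllib.request

# Sketch: queue an API-format workflow on a local ComfyUI instance.
# Default address; adjust if your server runs elsewhere.
COMFY_URL = "http://127.0.0.1:8188"

def queue_prompt(workflow: dict) -> dict:
    """POST a workflow to ComfyUI's /prompt endpoint and return its reply."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # workflow_api.json is a placeholder file saved via "Save (API Format)".
    with open("workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)
    print(queue_prompt(workflow))
```

The reply includes a prompt_id, which you can use to poll the /history endpoint for the finished outputs.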
I looked into the code, and when you save your workflow you are actually "downloading" the .json file, so it goes to your default browser download folder. For some workflow examples, and to see what ComfyUI can do, you can check out the ComfyUI examples page.

The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. I am building this around the [Coherent Facial Expressions] (…) workflow.

They depend on complex pipelines and/or Mixture of Experts (MoE) that enrich the prompt in many different ways. My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention. AP Workflow 5.0 is the first step in that direction.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel. It upscales the second image up to 4096x4096 (4xUltraSharp) by default for simplicity, but that can be changed to whatever you like.

It's completely free and open source, but donations would be much appreciated; you can find the download as well as the source at https://github.com/ImDarkTom/ComfyUIMini.

I built a free website where you can share & discover thousands of ComfyUI workflows -- https://comfyworkflows.com :)

I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture). Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment; it would require many specific image-manipulation nodes to cut an image region, pass it through a model, and paste it back. I couldn't find the workflows to directly import into Comfy. I recently switched from A1111 to ComfyUI to mess around with AI-generated images. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance complexity with ease of use.

I stopped the process at 50GB, then deleted the custom node and the models directory. I have no idea why the OP didn't bother to mention that this would require the same amount of storage space as 17 SDXL checkpoints - mainly for a garbage-tier SD1.5 model I don't even want.

Thanks for the responses, though; I was unaware that the metadata of the generated files contains the entire workflow.
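That's also why some downloaded "workflow pictures" load nothing: if a site re-encodes the image, the text chunks are stripped. A quick sketch to check what an image actually carries (file name is a placeholder; ComfyUI PNGs normally embed "prompt" and "workflow" chunks):

```python
import json
from PIL import Image  # pip install pillow

# Sketch: inspect the workflow metadata embedded in a ComfyUI PNG.
# Re-saved or re-encoded images (and JPEGs) usually lose these chunks.
img = Image.open("ComfyUI_00001_.png")  # placeholder file name
for key in ("workflow", "prompt"):
    raw = img.info.get(key)
    if raw is None:
        print(f"no '{key}' chunk -- metadata was stripped somewhere")
    else:
        print(f"'{key}' chunk present with {len(json.loads(raw))} entries")
```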
Share, discover, & run thousands of ComfyUI workflows. This repo contains common workflows for generating AI images with ComfyUI, for example:

- Merge two images together with this ComfyUI workflow
- ControlNet Depth workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff
- ControlNet workflow: a great starting point for using ControlNet
- Inpainting workflow: a great starting point for inpainting

It provides a workflow for SDXL (base + refiner). But it separates the LoRA into another workflow (and it's not based on SDXL either). This is just a simple node build off what's given and some of the newer nodes that have come out.

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: introduction to Flux.1; Flux.1 ComfyUI install guidance, workflow and example; overview of the different versions of Flux.1. You can find the Flux Dev diffusion model weights here. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder. If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on: this link. You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32GB of RAM. You can then load or drag the following image in ComfyUI to get the workflow. In this guide I will try to help you with starting out using this, and give you some starting workflows to work with.

I was confused by the fact that I saw, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, that they simply drop PNGs into the empty ComfyUI. The example pictures do load a workflow, but they don't have a label or text that indicates if it's version 3.1 or not.

In the Custom ComfyUI Workflow drop-down of the plugin window, I chose the real_time_lcm_sketching_api.json workflow. With it (or any other "built-in" workflow located in the native_workflow directory), I always get this error. Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there. And with ComfyUI, a lot of errors occur that I can't seem to understand or figure out; only sometimes, if I try to place the models in the default location, does it work. And the IPAdapter models, I don't know - I just don't think they work, because I can transfer a few models to the regular location, run the workflow, and it works perfectly.

I have made a workflow to enhance my images, but right now I have to load the image I want to enhance, then upload the next one, and so on. How can I make my workflow grab images from a folder, so that for each queued gen it loads the 001 image from the folder, and for the next gen grabs the 002 image from the same folder? My goal is that I start the ComfyUI workflow and the workflow loads the latest image in a given directory and works with it.
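I haven't found a stock node that does this either, but outside ComfyUI the selection logic is tiny - a sketch (the directory is a placeholder) that picks the newest image by modification time instead of by name:

```python
from pathlib import Path

# Sketch: pick the newest image in a folder by modification time,
# rather than the name-based ordering the directory-loader nodes use.
def latest_image(folder: str) -> Path:
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    files = [p for p in Path(folder).iterdir() if p.suffix.lower() in exts]
    if not files:
        raise FileNotFoundError(f"no images found in {folder!r}")
    return max(files, key=lambda p: p.stat().st_mtime)

print(latest_image("/path/to/input_images"))  # placeholder directory
```

You could run something like this as a pre-step that copies or renames the newest file to a fixed path your workflow's loader points at.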
My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. But let me know if you need help replicating some of the concepts in my process. Not a specialist, just a knowledgeable beginner.

AnimateDiff in ComfyUI is an amazing way to generate AI videos. But mine do include workflows, for the most part, in the video description.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. Thank you u/AIrjen! Love the variant generator, super cool. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. But standard A1111 inpainting works mostly the same as the ComfyUI example you provided.

This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. But as a base to start from, it'll work. EDIT: For example, this workflow shows the use of the other prompt windows.

It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. It works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values. It is pretty amazing, but man, the documentation could use some TLC, especially on the example front.
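Even without that extension, you can do the "programmatic experiments" part by editing an API-format workflow JSON directly - a sketch, where the file name, the node id "6", and the seed field are placeholders that depend on your own graph:

```python
import copy
import json

# Sketch: generate parameter-sweep variants of an API-format workflow.
# Open your own export to find the real node id of your sampler and the
# input keys it exposes; "6" and "seed" below are hypothetical.
with open("workflow_api.json", "r", encoding="utf-8") as f:
    base = json.load(f)

for i, seed in enumerate([101, 202, 303]):
    variant = copy.deepcopy(base)
    variant["6"]["inputs"]["seed"] = seed  # hypothetical sampler node
    with open(f"experiment_{i}.json", "w", encoding="utf-8") as f:
        json.dump(variant, f, indent=2)
```

Each experiment_N.json can then be queued against a running instance, or fed to whatever deployment wrapper you are building.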