How to use ComfyUI workflows (Reddit / GitHub)
I've gotten used to it, but I don't use the UI. Then u/rikkar posts an SDXL artist study with accompanying git resources (like an artists.txt file, just right for a wildcard run) - SDXL 1.0 Artistic Studies : StableDiffusion (reddit.com).

Creator mode: Users (also creators) can convert the ComfyUI workflow into a web application, run the application locally, or publish it to comfyflow.app to share it with other users.

Mar 11, 2025 · run_nvidia_gpu.bat → Runs ComfyUI using the GPU. To launch ComfyUI, simply double-click run_nvidia_gpu.bat (for GPU) or run_cpu.bat (for CPU).

I'll make things more "official" this weekend: I'll ask for them to be integrated into the ComfyUI Manager list, and I'll start a GitHub page including all my work.

May 12, 2025 · Flux.1 ComfyUI Workflow. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. It covers the following topics: ComfyUI install guidance, workflow and example. Download it and run it in ComfyUI.

It runs locally and lets you import & run any workflow JSON file with ZERO setup: it automatically installs custom nodes, missing model files from Hugging Face & CivitAI, etc. No manual setup needed! Import any online workflow into your local ComfyUI, and we'll auto-setup all necessary custom nodes & model files.

A good place to start if you have no idea how any of this works is the: Grab the ComfyUI workflow JSON here. In this workflow I experiment with the cfg_scale, sigma_min and steps space randomly, and use the same prompt and the rest of the settings.

Going to python_embedded and using python -m pip install compel got the nodes working. I also had issues with this workflow with unusually-sized images.

This is a bit old, but the basics are still there: ComfyUI Hi-Res Fix Upscaling Workflow Explained in Detail | ComfyUI Tutorial | Hi-Res Fix ComfyUI - YouTube. Follow basic ComfyUI tutorials on the ComfyUI GitHub, like the basic SD1.5 workflow (don't download workflows from YouTube videos or advanced stuff on here!!).

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art, and please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created; belittling their efforts will get you banned. And above all, be nice.

But it is extremely light as we speak, so much so… Here is this basic workflow, along with some parts we will be going over next. Gets off high horse. Just saying. Word weighting, embeddings, timestepping, gligen - an image is worth a thousand words.

This is necessary for media formats that don't support metadata. The results are a bit different, but I would not say they are better, just a bit different. Again, no idea, never used composable LoRA. No, because it's not there yet.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. The workflow JSON info is saved with the .png: simply load / drag the png into ComfyUI and it will load the workflow.

All that should live in Krita is a 'send' button. I use ComfyUI running on my PC from my Z Fold 5; a number of things need to be done for smooth usage.

Yes, this is the way to go, though perhaps just links to existing git repos would be enough here.

Feb 21, 2025 · Load the AI upscaler workflow by dragging and dropping the image onto ComfyUI, or by using the Load button. Available on Windows, Linux, and macOS.

- Create an image in Krita.

Less is best. Right-click on an empty space. Something of an advantage ComfyUI has over other interfaces is that the user has full control over every step of the process, which allows you to load and unload models and images, and work entirely in latent space if you want.
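Since the workflow JSON rides along in the PNG's text chunks (the Load-button / drag-onto-the-window trick above relies on this), you can also pull it out of an image outside the UI. A minimal sketch with Pillow - the filename is just a placeholder, and it assumes the stock ComfyUI convention of "workflow" and "prompt" text chunks:

```python
import json
from PIL import Image  # pip install pillow

def embedded_workflow(png_path):
    """Return the ComfyUI graph stored in a PNG's text chunks, or None."""
    info = Image.open(png_path).info
    # "workflow" is the editable graph; "prompt" is the API-format version.
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

wf = embedded_workflow("ComfyUI_00001_.png")  # hypothetical filename
print("workflow found" if wf else "no workflow metadata in this image")
```

Images produced through the API route, or saved to formats without metadata support, won't carry these chunks - which is what the note above about such formats is getting at.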
- Ctrl+C.
- Click on an EMPTY SPACE in your ComfyUI workflow… and Ctrl+V.

It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something.

I am personally using it as a layer between a Telegram bot and ComfyUI, to run different workflows and get the results using the user's text and image input.

Explores various options to upscale (aka hires fix) a generated image. Advanced text-to-image techniques.

Ideally they should be using FOSS licenses compatible with the GPL. The different versions are usually just minor changes/improvements of the same flow. You could sync your workflows with your team by Git. Any future workflow will probably be based on one of these node layouts.

Here is the list of all prerequisites. Prerequisites in ComfyUI: Ultimate SD Upscale, ControlNet Auxiliary Preprocessors, Checkpoints, Use Everywhere.

I do use Krita sometimes - I still prefer Photoshop and tend to use that more than Krita - but yeah, I'm talking about taking a video that's already made, say it's a skilled dancer dancing or a girl showing her titties (lol), and I want to take that video and process it through a workflow that swaps …

ComfyUI follows a weekly release cycle every Friday, with three interconnected repositories: ComfyUI Core (serves as the foundation for the desktop release), ComfyUI Desktop (builds a new release using the latest stable core version), and ComfyUI Frontend (weekly frontend updates are merged into the core).

You have to search through Reddit posts.

Installing via the ComfyUI App (Easiest Method): The simplest way to install ComfyUI is by using its official app. ComfyUI now provides a dedicated application for easy installation.

Great news - Swarm integrates ComfyUI as its backend (endorsed by comfy himself!), with the ability to modify comfy workflows at will, and even take any generation from the main tab and hit "Import" to import the easy-mode params to a comfy workflow and see how it works inside.

Is there a way to load the workflow from an image within …? Just use an upscale node.

Bonus would be adding one for Video. It doesn't work. It's called "Image Refiner", you should look into it.

- Create a workflow in ComfyUI, and make sure the models are the exact same as in Krita (otherwise you will load new models every time you switch from one to the other, which is time-consuming).

EDIT: There is something already like this built in to WAS.

Get Started. The version numbers are linked to the different types of workflow, and the CleanUI was just the latest I'm using myself. Custom node: LoRA Caption in ComfyUI : comfyui (reddit.com).

Civitai has a ton of examples, including many ComfyUI workflows that you can download and explore. Newcomers should familiarize themselves with easier-to-understand workflows, as it can be somewhat complex to understand a workflow with so many nodes in detail, despite the attempt at a clear structure.

To get started, download our ComfyUI extension: https://github.com/thecooltechguy/ComfyUI-ComfyRun. Import any workflow from ComfyWorkflows with zero setup. No idea.

Step 3: Restart multiple times, because half the time I get a blank white screen.

ComfyUI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface.
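If you'd rather drive that graph from a script (the Telegram-bot setup mentioned earlier works this way), the running server accepts workflows over plain HTTP. A rough sketch, assuming a default local install on port 8188 and an API-format export of the workflow (the "Save (API Format)" option in the UI):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI address/port

def queue_workflow(path):
    """Queue an API-format workflow JSON on a running ComfyUI server."""
    with open(path, "r", encoding="utf-8") as f:
        workflow = json.load(f)  # must be the API export, not the UI save
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

# e.g. job_id = queue_workflow("workflow_api.json")  # hypothetical file name
```

The returned prompt_id identifies the job in the queue and is what you later use to look up its outputs.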
Step 4: Try to run the project. You can delete some of these, and the workflows will still work. Step 2: From Manager, tell it to install missing nodes.

The fancy multi-GPU part does less for you, but it still does quite a lot - ComfyUI workflows with an easy interface is pretty cool on its own. Eh, if you build the right workflow, it will pop out 2k and 8k images without the need for a lot of RAM.

Try inpaint. Try outpaint. Hmm, low quality - try latent upscale with 2 KSamplers.

Lock Workflow: Select the entire workflow with Ctrl+A, right-click any node, and choose "Lock." This prevents accidental movement of nodes while dragging or swiping on the mobile screen. Basic Touch Support: Use the ComfyUI-Custom-Scripts node.

A workflow can also be stored in a human-readable text file that follows the JSON data format.

May 12, 2025 · Wan2.1 ComfyUI Workflow. The Wan2.1 model, open-sourced by Alibaba in February 2025, is a benchmark model in the field of video generation. It is licensed under the Apache 2.0 license and offers two versions: 14B (14 billion parameters) and 1.3B (1.3 billion parameters), covering various tasks including text-to-video (T2V) and image-to-video (I2V).

CGPT Prompt: "Concept art of a malnourished trio consisting of mother, father, and child, each with gaunt faces and sharply etched wrinkles, garbed in tattered 1800s clothing, trudging wearily on a muddy footpath snaking through a dilapidated town of the 19th century, under a sun-bleached afternoon sky, with the broken-down, faded brick buildings casting long, melancholic shadows; style of …"

Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now anything that uses the ComfyUI API doesn't have that, though).

Select Add Node > image > upscaling > Ultimate SD Upscale.

I'm using ComfyUI portable and had to install it into the embedded Python install.

So for the past few weeks, I've been building this open-source tool called the ComfyUI Launcher: https://github.com/ComfyWorkflows/ComfyUI-Launcher. Welcome to use it and give me feedback.

Basic Workflow. First, download the workflow with the link from the TLDR. Here are the models that you will need to run this workflow: LooseControl Model, ControlNet_Checkpoint, v3_sd15_adapter.ckpt model, v3_sd15_mm.ckpt model. For ease, you can download these models from here. Install: Download the custom nodes, the relevant models, and just load that workflow into ComfyUI.

MASKING AND IMAGE PREP. Use a "Mask from Color" node and set it to your first frame color. In this example, it will be 255 0 0 - this will set our red frame as the mask.
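To make the "Mask from Color" step concrete: conceptually the node just selects every pixel that matches the chosen colour (255 0 0 here) and turns that selection into a mask. This is not the node's actual implementation, only an illustration of the idea in plain Python/NumPy, with hypothetical file names:

```python
import numpy as np
from PIL import Image

def mask_from_color(image_path, rgb=(255, 0, 0), tolerance=10):
    """White where the pixel matches `rgb` (within tolerance), black elsewhere."""
    arr = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.int16)
    distance = np.abs(arr - np.array(rgb, dtype=np.int16)).sum(axis=-1)
    mask = (distance <= tolerance).astype(np.uint8) * 255
    return Image.fromarray(mask, mode="L")

# mask_from_color("frame_guide.png").save("mask.png")  # hypothetical file names
```

A small tolerance helps when the solid frame colour has been slightly altered by compression.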
Next, load up the sketch and color panel images that we saved in the previous step.

The trick is adding these workflows without deep-diving into how to install the requisite add-ons. A ComfyUI Krita plugin could - should - be assumed to be operated by a user who has Krita on one screen and Comfy on another, or at least one willing to pull up the usual ComfyUI interface to interact with the workflow beyond requesting more generations.

I start the server and use it via the Python API, and run everything via a Python script. Very good. The readme shows you how to install, but not how to actually run the thing. Well, I feel dumb.

All four of these in one workflow, including the mentioned preview, changed, and final image displays.

From the img2video of Stable Video Diffusion: with this ComfyUI workflow you can create an image with the prompt, negative prompt and checkpoint (and VAE) that you want, and then a video will be created automatically from that image.

Just wanted to share that I have updated the comfy_api_simplified package, and now it can be used to send images, run workflows and receive images from the running ComfyUI server. Potential use cases include: streamlining the process for creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.

There is a ton of stuff here and it may be a bit overwhelming, but it's worth exploring. I made them and posted them last week ^^. For now you can download them from the link at the top of the post linked above. You may need to look externally, as most missing custom nodes that are out of date relative to the latest ComfyUI may not be detected or shown by the Manager.

Keep in mind that when using an acyclic graph-based UI like ComfyUI, usually one node is being executed at a time.

Basically, this lets you upload and version-control your workflows, and then you can use your local machine (or any server with ComfyUI installed) and use the endpoint just like any simple API to trigger your custom workflow; it will also handle uploading the generated output to S3-compatible storage.

My seconds_total is set to 8, and the BPM I ask for in the prompt is set to 120 BPM (two beats per second), meaning I get 16 beats. I don't know. I'm a bit OCD and like my shit on one screen.

People want to find workflows that are based on SDXL, SD1.5 (or maybe SD2.1, if people still use that?). People want to find workflows that use AnimateDiff (and AnimateDiff Evolved!) to make animation, do txt2vid, vid2vid, animated ControlNet, IP-Adapter, etc.

Most workflows you see on GitHub can also be downloaded. Like with yolov8m-seg… Thanks for the responses, though - I was unaware that the metadata of the generated files contains the entire workflow.

The reason is that we need more LLM-focused nodes. Studio mode: Users need to download and install the ComfyUI web application from comfyflow.app, and finally run ComfyFlowApp locally.

Many OpenArt AI workflows use unnecessary, fancy nodes and additional nodes that are not needed.

Once the container is running, all you need to do is expose port 80 to the outside world.
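Wrappers like comfy_api_simplified mostly sit on top of the server's stock HTTP endpoints. For the "receive images" half, a bare-bones sketch, assuming the default /history and /view endpoints on a local server and a prompt_id returned from an earlier queue call:

```python
import json
import urllib.parse
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"

def fetch_output_images(prompt_id):
    """Download every image a finished job wrote, as raw bytes."""
    with urllib.request.urlopen(f"{COMFY_URL}/history/{prompt_id}") as resp:
        entry = json.load(resp)[prompt_id]
    images = []
    for node_output in entry["outputs"].values():
        for img in node_output.get("images", []):
            query = urllib.parse.urlencode(
                {"filename": img["filename"],
                 "subfolder": img["subfolder"],
                 "type": img["type"]}
            )
            with urllib.request.urlopen(f"{COMFY_URL}/view?{query}") as r:
                images.append(r.read())
    return images
```

In a containerised setup you would point COMFY_URL at whatever host and port you exposed instead of localhost.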
png" in the file list on the top, and then you should click Download Raw File, but alas, in this case the workflow does not load. Note, this site has a lot of NSFW content. best external source willbe @comfyui-chat website which i believed is from comfyui official Welcome to the unofficial ComfyUI subreddit. Here is the list of all prerequisites. Not sure if there is a way to use an image color coded to map different areas of an image for different prompts, but you can achieve multi-area prompting with some kinda complicated workflows using native nodes, or just use the excellent plugin from Davemane42. I use this youtube video workflow, and he uses a basic one. This will allow you to access the Launcher and its workflow projects from a single port. Basic Touch Support: Use the ComfyUI-Custom-Scripts node. Backup your local private workflows to the cloud. Builds a new release using the latest stable core version; ComfyUI Frontend. We would like to show you a description here but the site won’t allow us. : comfyui (reddit. app to share it with other users. Also, if this is new and exciting to you, feel free to post I didnt know abouf "workflows" and dragging images to change things around. The workflow is very straightforward, but here is a detailed explanation: - Use Everywhere brings “WiFi” to the Users can also use them but if someone decides to package ComfyUI with some custom nodes that's when everything needs to be GPL compatible. 0 license and offers two versions: 14B (14 billion parameters) and 1. This will set our red frame as the mask. ckpt model v3_sd15_mm. The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated. May 12, 2025 路 Wan2. Try generating basic stuff with prompt, read about cfg, steps and noise. I kick off an overnight wildcard run pulling in artists from that text file in my random prompt, really excited by some of the images I see. Sure, it's not 2. You can also look into the custom node, "Ultimate SD Upscaler", and youtube tutorial for it. Clone the github repository into the custom_nodes folder in your ComfyUI directory You should have your desired SD v1 model in ComfyUI/models/diffusers in a format that works with diffusers (meaning not a safetensors or ckpt single file, but a folder having the different components of the model vae,text encoder, unet, etc) [ https://huggingface Tried the fooocus Ksampler using the same prompt, same number of steps, same seed and same samplers than with my usual workflow. com) Ready for the second part? Here is the EVOLVED EDITION! Much more intimidating in my opinion, but I will explain everything step by step. 1 ComfyUI install guidance, workflow and example. json files into an executable Python script that can run without launching the ComfyUI server. bat (for GPU) or run_cpu. Example: Welcome to the unofficial ComfyUI subreddit. But that’s what I’m looking to use. The goal is to enable easier sharing, batch processing, and use of workflows in apps/sites. Due to the complexity of the workflow, a basic understanding of ComfyUI and ComfyUI Manager is recommended. For more details on using the workflow, check out the full guide You must be mistaken, I will reiterate again, I am not the OG of this question. Try to install the reactor node directly via ComfyUI manager. 
In my AP Workflow 6.0, you'll find a function called Object Swapper, which uses the power of GroundingDINO to detect many more objects than the 79 categories defined in the YOLO model pretraining, and the Impact SEGS detailer to swap those objects with anything you want.

(1) THE LAB – A ComfyUI workflow to use with Photoshop.

In theory, nodes can be 'colorized' in levels, which would then enable parallelism, but the litegraph library doesn't colorize that way.

I looked into the code, and when you save your workflow you are actually "downloading" the JSON file, so it goes to your default browser download folder.

I long hoped people would start using ComfyUI to create pure LLM pipelines. And the reason for that is that, at some point, multi-modal AI models will force us to have LLM and T2I models cooperate within the same automation workflow.

Basically, you will use a different workflow for 1.5 than you would for XL.

The ComfyUI workflow is automatically saved in the metadata of any generated image, allowing users to open and use the graph that generated the image. Apparently the dev uploaded some version with trimmed data. But generally speaking, workflows seen on GitHub can also be used.

It is pretty amazing, but man, the documentation could use some TLC, especially on the example front. Like where you can filter by SAM (type in the words rather than just using the model, which can detect around 80 categories). Yeah, I've seen that if I use the hand/face ones they work fine; it's when I use the ones that have tags that it fails. But that's what I'm looking to use.

You must be mistaken - I will reiterate again, I am not the OG of this question. Please repost it to the OG question instead. Thanks for your comment, though I don't think we are talking about the same thing. I'm the one getting dissed, haha.

Try to install the ReActor node directly via ComfyUI Manager: go to the ComfyUI Manager, click "Install Custom Nodes", and search for "reactor". Once installed, download the required files and add them to the appropriate folders.

We've built a quick way to share ComfyUI workflows through an API and an interactive widget. Just upload the JSON file, and we'll automatically download the custom nodes and models for you, plus offer online editing if necessary. Upload any workflow to make it instantly runnable by anyone (locally or online).

Feb 21, 2025 · Step 1: Find a workflow online for I2V.

Apr 22, 2024 · Discover the top resources for finding and sharing ComfyUI workflows, from community-driven platforms to GitHub repositories, and unlock new creative possibilities for your Stable Diffusion…

Saving workflows. Hi guys, I wrote a ComfyUI extension to manage outputs and workflows.

I am very well aware of how to inpaint/outpaint in ComfyUI - I use Krita.

In this section you'll learn the basics of ComfyUI and Stable Diffusion. It's confusing to read with all the connections going everywhere, but you can just pull a node away if you NEED to see what's going on behind the scenes.

Also, you can actually still use multi-GPU, lol: you can, for example, boot up a Comfy instance on Google Colab and use that as a second GPU, alongside your local GPU as the first.

Additionally, many workflows are using outdated nodes, which is why some of the workflows didn't work anymore.
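When a shared workflow fails because of outdated or missing custom nodes, you can check what your install is actually missing before opening the graph. A small sketch against the server's /object_info endpoint (which lists every registered node class); it accepts both the UI-format and the API-format JSON, and the file name is only a placeholder:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"

def missing_node_types(workflow_path):
    """Compare a workflow's node types against what the server has registered."""
    with open(workflow_path, "r", encoding="utf-8") as f:
        wf = json.load(f)
    if "nodes" in wf:                       # UI-format save
        used = {node["type"] for node in wf["nodes"]}
    else:                                   # API-format export
        used = {node["class_type"] for node in wf.values()}
    with urllib.request.urlopen(f"{COMFY_URL}/object_info") as resp:
        registered = set(json.load(resp).keys())
    return sorted(used - registered)

# print(missing_node_types("some_downloaded_workflow.json"))  # hypothetical file
```

Anything the function reports is a node class the Manager (or a manual clone into custom_nodes) still needs to provide before the workflow will run.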