ComfyUI ADetailer: a Reddit discussion roundup
Will add other image metadata display of things like models and seeds soon; they're already loaded from the file, just not in the UI yet.

That was the reason why I preferred it over the ReActor extension in A1111.

"It will attempt to automatically detect hands in the generated image and try to inpaint them with the given prompt."

Regarding the integration of ADetailer with ComfyUI, there are known limitations that might affect this process. Specifically, "img2img inpainting with skip img2img is not supported" due to bugs, which could be a potential issue for ComfyUI integration.

I tend to like the mediapipe detectors because they're a bit less blunt than the square box selectors on the yolov ones.

It is pretty amazing, but man, the documentation could use some TLC, especially on the example front.

Any way to preserve the "lora effect" and still fix imperfect faces?

BTW, that pixelated image looks like it could be because the wrong VAE is being used.

Hello cool Comfy people, happy new year! Unfortunately, I couldn't find anything helpful or even an answer via Google / YouTube, nor here with the sub's search function.

The video was pretty interesting, beyond the A1111 vs. Comfy speed comparison.

This one took 35 seconds to generate in A1111 with a 3070 8GB, with a pass of ADetailer.

I observed that using ADetailer with SDXL models (both Turbo and non-Turbo variants) leads to an overly smooth skin texture in upscaled faces, devoid of the natural imperfections and pores. I've also seen a similar look when ADetailer is used with Turbo models and certain samplers. I also had issues with this workflow with unusually sized images.

It's no longer maintained. Do you have any recommendation for a custom node that can be used in ComfyUI (with the same functionality as ADetailer in A1111) besides FaceDetailer? Someone pointed me to ComfyUI-Impact-Pack, but it's too much for me; I can't quite get it right, especially for SDXL.

Forgot ComfyUI even exists. I am using adetailer (max 0.4 denoise) after roop and codeformer, and then SD Ultimate and a normal upscaler with UltraSharp.

ADetailer works OK for faces, but SD still doesn't know how to draw hands well, so don't expect any miracles. And the adetailer repo: sd-webui-adetailer.

Adetailer was the only real thing I was missing coming from SDNext, but thanks to mcmonkey and fiddling around a bit I got adetailer-like functionality running without too much trouble.

I just want to be able to select a model, a VAE if necessary, a LoRA, and that's it. I'm beginning to ask myself if that's even possible in ComfyUI.

This guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in StableDiffusion. If you want the ComfyUI workflow, let me know.

Adetailer and the others are just more automated extensions for inpainting; you don't really need a separate model to place a mask on a face (you can do it yourself). That's all that Adetailer and the other detailer extensions do.

The ADetailer model is for face/hand/person detection; the detection threshold controls how sensitive the detection is (higher = stricter = fewer faces detected, e.g. it will ignore a blurred face on a background character). It then masks that region.
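To make that detection step concrete, here is a minimal sketch of what such a detector pass looks like on its own, using the ultralytics package. The model file and the test image are placeholders for your own files, and the conf argument plays roughly the same role as ADetailer's detection threshold.

```python
# Minimal detection sketch: find face boxes the way an ADetailer-style pass does.
# "face_yolov8n.pt" and "render.png" are placeholders for your own files.
from ultralytics import YOLO

model = YOLO("face_yolov8n.pt")            # a YOLO detection model trained on faces
results = model("render.png", conf=0.3)    # conf ~ ADetailer's detection threshold

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # pixel coordinates of one detected face
    print(f"face ({float(box.conf):.2f}): ({x1:.0f}, {y1:.0f}) to ({x2:.0f}, {y2:.0f})")
```

Everything downstream (mask, crop, low-denoise img2img, paste) hangs off those boxes.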
currently my "fix" for poor facial details at 1024x1024 resolution (SDXL) is two-cycle ksampling - ending the first sampler at 8/24 steps and… Installation is complicated and annoying to setup, most people would have to watch YT tutorials just to get A1111 installed properly. And the new interface is also an improvement as it's cleaner and tighter. Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes. or want to add something similar to "adetailer" pluging from automatic1111 or a Hello guy, Sorry to ask, but i searched for hours, documentation internet, even the source code of Impact-Pack i found no way to add new bbox_detector. a few days ago installed it, speed is amazing but i cannot do anything almost. My main source is Civitai because it's honestl Apr 24, 2025 · Hello I've been using stable diffusion for a while now and recently I've been trying to migrate to comfyui but I'm struggling with getting good results on the adetailer process. and the adetailer repo: sd-webui-adetailer Welcome to the unofficial ComfyUI subreddit. A lot of people are just discovering this technology, and want to show off what they created. Belittling their efforts will get you banned. Currently I don't think ComfyUI lets you output outside the output folder but we could add options for choosing subfolders within that and template based file names. ) To clarify, there is a script in Automatic1111->scripts->x/y/z plot that promises to let you test each ADetailer model, same as you would a regular checkpoint, or CFG scale, or number of steps. If adetailer is not capable of doing it, what's your suggestion? 27 votes, 38 comments. While that's true, this is a different approach. Please share your tips, tricks, and workflows for using this software to create your AI art. used Eyes adetailer from civitai and sam_vit_l_0b3195. Please share your tips, tricks, and workflows for using this… Thanks for the reply - I’m familiar with ADetailer but I’m actually deliberately looking for something that does less. This wasn't the case before the updating to the newest version of A1111. Hello, I have been trying to find a solution to fix multiple faces in a single photo but I am unable to do so, a scene such as a bar full of people, if I use A1111 adetailer or ComfyUI Face detailer, every time there are more than 1 people in a photo, the face fixing just adds the same face for every single character. Update: I went ahead and reinstalled SD. It seems I may have made a mistake in my setup, as the results for the faces after Adetailer are not turning out well. And the clever tricks discovered from using ComfyUI will be ported to the Automatic1111-WebUI. I wanted to set up a chain of 2 facedetailer instances into my workflow. Man, you're damn right! I would never be able to do this in A1111; I would be stuck into A1111's predetermined flow order. Just make sure you update if it's already installed. It's amazing the quality of images that you can get with simple prompts, even panoramic images. But it's reasonably clean to be used as a learning However, I get subar results compared to adetailer from webui. Hopefully, some of the most important extensions such as Adetailer will be ported to ComfyUI. 
Hi there. It is what "only masked" inpainting does automatically.

Here's the repo with the install instructions (you'll have to uninstall the wildcards extension you already have): sd-webui-wildcards-ad.

ComfyUI only has ReActor, so I was hoping the dev would add it too. I even tried adetailer, but Roop always happens after adetailer, so it didn't help either.

I tried to upscale a low-res image in img2img with adetailer on; it still doesn't do much.

Use Ultralytics to get either a bbox or SEGS and feed that into one of the many Detailer nodes, and you can automate a step that works on the face up close. Adetailer doesn't require an inpainting checkpoint or ControlNet etc.; simpler is better.

That extension already had a tab with this feature, and it made a big difference in output.

You can use a SEGS detailer in ComfyUI: if you create a mask around the eye, it will upscale the eye to a higher resolution of your choice, like 512x512, and downscale it back.

Clicking and dragging to move around a large field of settings might make sense for large workflows or complicated setups, but the downside is, obviously, a loss of simple cohesion. But the problem I have with ComfyUI is unfortunately not how long it takes to figure out; I just find it clunky.

I just released version 4.0 of my AP Workflow for ComfyUI. Help me make it better!

I call it 'The Ultimate ComfyUI Workflow': easily switch from txt2img to img2img, built-in refiner, LoRA selector, upscaler and sharpener.

For something similar, I generate images with a low number of steps and no adetailer/upscaler/etc.; then, when I get one I like, I drag it back into the UI to recreate the exact workflow and up the step count and enable the extra quality features that were in groups set to bypass.

Under the "ADetailer model" menu select "hand_yolov8n.pt" and give it a prompt like "hand". I'm using face_yolov8n_v2, and that works fine. Or just throw the image into img2img and run adetailer alone (with "skip img2img" checked), then photoshop the results to get good hands and feet.

Also, take out all the "realistic eyes" stuff in your positive/negative prompt; that voodoo does nothing for better eyes. Good eyes come from good resolution, and to increase the face resolution during txt2img you use adetailer.

Going to python_embedded and using python -m pip install compel got the nodes working.

If there is only one face in the scene, there is no need for a node workflow.

I'm actually using aDetailer recognition models in auto1111, but they are limited and cannot be combined in the same pass.

I got a nice tutorial from here, and it seems to work. How to Install ComfyUI: https://youtu.be/ynfNJEtvUtQ · How to Install Manager: https://youtu.be/dyrhPVRsy9w · ComfyUI Impact Pack: https://github.com/ltdrdata/ComfyUI-Impact-Pack

(In the webui, adetailer runs after the animatediff generation, making the final video look unnatural.) In ComfyUI, put ImageBatchToImageList > FaceDetailer > ImageListToImageBatch > Video Combine. Also bypass the AnimateDiff Loader model and feed the original model loader into the To Basic Pipe node, or it will give you noise on the face (the AnimateDiff loader doesn't work on a single image, you need at least 4, and FaceDetailer can only handle 1).
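That batch-to-list-to-batch dance is just "unpack the frame batch, fix one frame at a time, re-stack". A conceptual sketch of the data flow (fix_face is a hypothetical stand-in; in the real graph that slot is the FaceDetailer node):

```python
# Conceptual sketch of ImageBatchToImageList -> FaceDetailer -> ImageListToImageBatch.
# fix_face is a hypothetical stand-in for whatever single-image detailer you use.
import torch

def detail_frames(frames: torch.Tensor, fix_face) -> torch.Tensor:
    # frames: (N, H, W, C) float tensor, one row per video frame
    fixed = [fix_face(frame.unsqueeze(0)) for frame in frames]  # detailer sees 1 image
    return torch.cat(fixed, dim=0)  # re-stacked batch for Video Combine

frames = torch.rand(4, 512, 512, 3)           # dummy 4-frame batch
out = detail_frames(frames, lambda img: img)  # identity "detailer" for the demo
print(out.shape)                              # torch.Size([4, 512, 512, 3])
```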
Hell, it probably works better with mcmonkey's implementation now that I understand the ins and outs.

There are some distortions, and faces look more proportional but uncanny. Despite a relatively low 0.2 noise value, it changed quite a bit of the face. It's losing a great amount of detail and also de-aging faces in a creepy way.

If you're using ComfyUI, you can right-click on a Load Image node and select "Open in MaskEditor" to draw an inpainting mask.

I will have to play with it more to be sure it's working properly, but it looks like that may have been the issue.

However, the latest update has a "YOLO World model", and I realised I don't know how to use the yolov8x and related models other than as the pre-defined models above.

The best use case is to just let it img2img on top of a generated image with an appropriate detection model; you can use the img2img tab and check "Skip img2img" for testing on a preexisting image like this one.

Now a world of possibilities has opened. Next, I will try to use a segment to separate the face, upscale it, add a LoRA or detailer to fine-tune the face details, rescale to the source image size, and paste it back.

Okay, so it's completely tested out, and the refiner is not used as img2img inside ComfyUI.

I tried with "detailed face, realistic eyes, etc.", but the results were basically the same.

Is there a way to have it only do the main (largest) face, or better yet an arbitrary number, like you can in Adetailer? Any time there's a crowd, it'll try to do them all, and it ends up giving them all the expression of the main subject.

A1111 is REALLY unstable compared to ComfyUI. I do a lot of plain generations; ComfyUI is…

It can help you do similar things to what the adetailer extension does in A1111: for example, the Adetailer extension automatically detects faces, masks them, creates new faces, and scales them to fit the masks.

If you want good hands without precise control of the pose, you add a LoRA, put "hands" in the negative, and use adetailer for the fine retouch if needed.

This is the setup for the eye detailer. Testing the same prompt keeps giving me the same result, except that this time the eye on the right is the one that came up good. Both of my images have the flow embedded, so you can simply drag and drop the image into ComfyUI and it should open up the flow, but I've also included the JSON in a zip file.

I managed to find a simple SDXL workflow, but nothing else.

Change max_size in the FaceDetailer node to 1024 whenever using SDXL models, 512 for SD 1.5. It's getting oversaturated because FaceDetailer essentially just detects where the face is, crops that region along with a mask matching only the face, resizes that region to max_size, runs an img2img at low denoise, and then resizes the regenerated face back to the original size and patches it into the image.
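That crop-and-patch loop is easy to picture outside any UI. Below is a toy, untested sketch of the same idea with diffusers and PIL; the checkpoint path, box coordinates, and sizes are all placeholders, and a real detailer also feathers the mask edge so the paste doesn't leave a visible seam.

```python
# Toy version of the detailer loop: crop the face, upscale it, img2img it at low
# denoise, shrink it back, and patch it in. All paths and numbers are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "./stable-diffusion-v1-5",  # point this at a local SD 1.5 diffusers checkpoint
    torch_dtype=torch.float16,
).to("cuda")

img = Image.open("render.png").convert("RGB")
x1, y1, x2, y2 = 200, 80, 360, 240            # face box from your detector
crop = img.crop((x1, y1, x2, y2))
work = crop.resize((512, 512))                 # the "max_size" upscale step

fixed = pipe(prompt="detailed face", image=work, strength=0.3).images[0]

img.paste(fixed.resize(crop.size), (x1, y1))   # back to original size, patched in
img.save("render_fixed.png")
```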
Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from…

I was waiting for this.

As the title suggests, I'm using ADetailer for Comfy (the Impact Pack) and it works well. The problem is that I'm using a LoRA to style the face after a specific person (or persons), and the FaceDetailer node makes it clearly "better" but kind of destroys the similarity and the facial traits.

tl;dr just check "enable Adetailer" and generate like usual; it'll work just fine with the default settings. Turn adetailer on and try the defaults. I guess with adetailer denoising at 0.3 it is not that important. I didn't use any adetailer prompt.

I am curious whether I can use AnimateDiff and adetailer simultaneously in ComfyUI without any issues. Any tips are greatly appreciated.

I just made the move from A1111 to ComfyUI a few days ago.

Just tried it again, and it worked with an image I generated in A1111 earlier today. It picked up the LoRAs, prompt, seed, etc. It did not pick up the ADetailer settings (expected, though there are nodes out there that can accomplish the same things).

Noticed that speed was almost the same with A1111 compared to my 3080. Then I bought a 4090 a couple of weeks ago (two, I think). Tried ComfyUI just to see; tweaked a bit and reduced the basic SDXL generation to 6-14 seconds. Continued with extensions and got adetailer, ControlNet, etc. with literally a click.

The thing that is insane is testing face fixing (used SD 1.5 just to see, to compare times): the initial image took 127.5 ms to generate, and 9 seconds total to refine it. My guess -- and it's purely a guess -- is that ComfyUI wasn't using the best cross-attention optimization. We know A1111 was using xformers, but we weren't told, as far as I noticed, what ComfyUI was using.

That said, I'm looking for a front-end face swap: something that will inject the face into the mix at the point of the ksampler, so if I prompt for something like freckles it won't get lost in the swap/upscale, but I've still got my likeness. Before switching to ComfyUI I used the FaceSwapLab extension in A1111. So when I tried it that way in ComfyUI, it comes out anywhere from a little weird (eyes too far apart, sharp lines, not consistent with the overall style) to really bad (extremely deformed; ears and eyes not where they are supposed to be).

The original author of adetailer was kind enough to merge my changes.

With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed.

OP, you can greatly improve your results by generating and then running aDetailer on your upscale, and instead of using a single aDetailer prompt, you can choose the option to prompt faces individually from left to right. That way you can address each one respectively. The first pic is without ADetailer and the second is with it.

Adetailer is actually doing something now, however minor.

I set up a workflow for a first pass and a highres pass.

The creator has recently opted into posting YouTube examples which have zero audio, captions, or anything to explain to the user what exactly is happening in the workflows being generated.

IOW, their detection maps conform better to faces, especially the mesh ones, so it often avoids making changes to hair and background (in that noticeable way you can sometimes see when not using an inpainting model).
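For a feel of how mesh detection gets that tight fit, here's a small sketch with MediaPipe Face Mesh: it fills the convex hull of the mesh landmarks into a mask, which hugs the face far more closely than a square YOLO box. File names are placeholders, and this is a simplification of what the mediapipe detector variants actually do.

```python
# Sketch: build a mesh-tight face mask with MediaPipe Face Mesh instead of a box.
import cv2
import numpy as np
import mediapipe as mp

img = cv2.imread("portrait.png")
h, w = img.shape[:2]

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
    res = mesh.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

mask = np.zeros((h, w), np.uint8)
if res.multi_face_landmarks:
    pts = np.array(
        [(int(p.x * w), int(p.y * h)) for p in res.multi_face_landmarks[0].landmark],
        dtype=np.int32,
    )
    cv2.fillConvexPoly(mask, cv2.convexHull(pts), 255)  # hull of the 468 mesh points
cv2.imwrite("face_mask.png", mask)
```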
" but the results were basically the same. Mar 23, 2024 · Hey this is my first ComfyUI workflow hope you enjoy it! I've never shared a flow before so if it has problems please let me know. Hi. I've never tried to generate whole video with denoising 1, maybe I will give it a try. Most "ADetailer" files i have found work when placed in Ultralytics BBox folder. Noticed that speed was almost the same with a1111 compared to my 3080. From chatgpt: Guide to Enhancing Illustration Details with Noise and Texture in StableDiffusion (Based on 御月望未's Tutorial) Overview. I'm new to all of this and I've have been looking online for BBox or Seg models that are not on the models list from the comfyui manager. /r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site. try default settings. 3 in order to get rid of jaggies, unfortunately it will diminish the likeness during the Ultimate Upscale. I have to push around 0. Next since that one is apparently kept more up to date and so far this has made a difference. 149 votes, 33 comments. Before switching to ComfyUI I used FaceSwapLab extension in A1111. So when I tried it that way in ComfyUI, it comes out a little weird (eyes too far apart or sharp lines, not consistent to the overall style) to really bad (extremely deformed, ears, eyes, and ears not where there is supposed to be any). Tried comfyui just to see. And above all, BE NICE. One for faces, the other for hands. Is it true that we will forever be limited by the smaller size model from the original author? Can someone shed some light on it please? Thanks a lot. Dec 15, 2024 · I come from Forge UI and the way it's done there is HiRes Fix -> ADetailer. the amount of control you can have is frigging amazing with comfy. 5 just to see to compare times) the initial image took 127. To encode the image you need to use the "VAE Encode (for inpainting)" node which is under latent->inpaint. See, this is another big problem with IP adapter (and me) is that it's totally unclear what all it's for and what it should be used for. Change max size in Facedailer node to 1024 whenever using sdxl models, 512 for sd1. Maybe I will fork the ADetailer code and add it as an option. I wish there was some way to force adetailer only to a specific region to look for its subjects, that could help alleviate some of this. . Anything wrong here with this workflow?. i'm looking for a way to inpaint everything except certain parts of the image. 0 of my AP Workflow for ComfyUI. pt" and give it a prompt like "hand. The default settings for ADetailer are making faces much worse. 27 votes, 38 comments. I'm using ComfyUI portable and had to install it into the embedded Python install. How exactly do you use it to fix hands? When I use default inpaint to fix hands, the result is also not so good, no matter the checkpoint and the denoise value. One thing about human faces is that they are all unique. This is the first time I see Face Hand adetailer in Comfyui workflow /r/StableDiffusion is back open after the protest of Reddit killing open API access, which Welcome to the unofficial ComfyUI subreddit. I want to install 'adetailer' and 'dddetailer', the installation instruction says it goes into the 'extensions' folder, but there is none in ComfyUI. We know A1111 was using xformers, but weren't told, as far as i noticed, what ComfyUI was using. 25K subscribers in the comfyui community. 
Hi guys, adetailer can easily fix and generate beautiful faces, but when I tried it on hands it only made them even worse. If adetailer is not capable of doing it, what's your suggestion?

I am using AnimateDiff + Adetailer + Highres, but when using animatediff + adetailer in the webui, the face appears unnatural.

Oct 29, 2023: Or is there a custom node that takes its place? FaceDetailer. ComfyUI has no "ADetailer"; instead, there is "FaceDetailer". (19) ADetailer for ComfyUI : StableDiffusion (reddit.com). You can get it from the extension manager.

Yeah, I've been reading and playing with it for a few days. When I do two passes, the end result is better, although it still falls short of what I got in the webui with adetailer, which is strange, as they work the same way from what I understand. Is Stable Diffusion's adetailer just better? Does it also upscale the mask? Sometimes in ComfyUI I even get worse results than the preview.

Following this, I use FaceDetailer to enhance faces (similar to Adetailer for A1111).