SDXL Refiner in ComfyUI. The refiner helps improve the quality of the generated image. If you don't want to use the refiner, disable it in the "Functions" section and set the "End at Step / Start at Step" switch to 1 in the "Parameters" section. You will need ComfyUI and a few custom nodes. Here's what I've found: when I pair the SDXL base with my LoRA in ComfyUI, things click and work pretty well. With pipeKSamplerSDXL v2 you can skip the Hires. fix sections altogether, since the SDXL base models already give pretty great results, or use the XL models of your choice without them. The SDXL Sampler (base and refiner in one) and Advanced CLIP Text Encode nodes come with an additional pipe output. Even now, a large community still favors Stable Diffusion 1.5. What I have done is recreate the parts for one specific area, and I also automated the split of the diffusion steps between the base and the refiner. Hi, I'm Jason, a programmer exploring latent space; today I'll walk through the SDXL workflow in depth and explain how SDXL differs from the older SD pipeline. In the official chatbot test on Discord, SDXL 1.0 was the most preferred model for text-to-image. An SDXL Turbo + SDXL Refiner workflow can produce more detailed images, and there is a ComfyUI hand/face refiner workflow as well. Try samplers such as DPM++ 2S a Karras, DPM++ SDE Karras, DPM++ 2M Karras, Euler a, and DPM adaptive. Merging two images together is also possible. When researching inpainting with SDXL 1.0, and again when adding the SDXL refiner into the mix, I ran into some issues. Searge's interface setup is only one option; there are others, such as RBR (Refiner -> Base -> Refiner) setups, and it is not terribly hard to customize a variant. HandRefiner is the official repository of the paper "HandRefiner: Refining Malformed Hands in Generated Images by Diffusion-based Conditional Inpainting". Copy the SDXL 1.0 Base and Refiner models into your ComfyUI models folder. The CLIPTextEncodeSDXL node has a lot of parameters.
What's happening is that the first steps, say 0-13 of 20, are done with the base model; the latent is then handed off before the schedule finishes, and the refiner completes the remaining steps, 13-20. The base therefore handles the low-frequency (big-shape) details, while the refiner handles the high-frequency (small-shape) details. During the learning phase, the model's U-Net "backbone" uses these scores as conditioning. Stability AI's SDXL 0.9 can produce results rivaling Midjourney. In ComfyUI this base/refiner handoff is possible natively. To enable higher-quality previews with TAESD, download the taesd_decoder.pth model (for SD1.x and SD2.x). If ComfyUI or the A1111 sd-webui can't read an image's metadata, open the last image in a text editor to read the details. In addition, the CLIPTextEncodeSDXL node comes with two text fields to send different texts to the two CLIP models. The setup process is straightforward and can be completed within minutes; then select the models in the Load Model nodes. A more complex two-pass workflow combines the SDXL base model, any SD 1.5 model, and the SDXL refiner. I upscaled one result to 10240x6144 px to examine the details, though I am unable to upload the full-sized image. For example, 896x1152 or 1536x640 are good resolutions. This runs on an RTX 3060 with 12 GB VRAM and 32 GB system RAM. Alternatively, you can use SDNext and set the diffusers backend to sequential CPU offloading: it loads only the part of the model it is using while it generates, so you end up using only around 1-2 GB of VRAM.
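The handoff described above can be sketched in plain Python. The function name and the 65% switch-over default are illustrative, not part of any ComfyUI API; in a real graph, the two ranges map onto the start/end step settings of two advanced sampler nodes:

```python
def split_steps(total_steps, handoff=0.65):
    """Split one sampling schedule between the SDXL base and refiner.

    The base runs steps [0, switch) and hands off a partially denoised
    latent; the refiner resumes at `switch` and finishes the schedule.
    """
    switch = round(total_steps * handoff)
    return (0, switch), (switch, total_steps)

base_range, refiner_range = split_steps(20, 0.65)
# base_range == (0, 13): low-frequency, large shapes
# refiner_range == (13, 20): high-frequency detail
```

Because both stages share one schedule, the refiner's start step must equal the base's end step, otherwise the refiner either re-denoises or skips part of the trajectory.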
Modded KSamplers add the ability to live-preview generations and/or VAE-decode images, and feature a special seed box that allows clearer management of seeds. This comprehensive guide offers a step-by-step walkthrough of image-to-image conversion using SDXL, emphasizing a streamlined approach without a refiner. Save a generated image and drop it into ComfyUI to load its workflow. Running SDXL 0.9 in ComfyUI with both the base and refiner models together achieves excellent image quality. The model learns by looking at thousands of existing paintings. Once the custom nodes are installed, restart ComfyUI to enable high-quality previews. The workflow tutorial focuses on face restoration using base SDXL and refiner, plus face enhancement. The Sytan SDXL workflow for ComfyUI is provided as a .json file that loads straight into the ComfyUI environment, and the sdxl_v0.9_comfyui_colab notebook (1024x1024 model) should be used with refiner_v0.9. All images can also be generated with just the SDXL base model, or with a fine-tuned SDXL model that requires no refiner. I wanted to share my ComfyUI configuration, since many of us use laptops most of the time. In the example layout, the Prompt Group at the top left holds the Prompt and Negative Prompt string nodes, which feed the base and refiner samplers respectively; the Image Size node on the middle left sets the image size, and 1024 x 1024 is right; the checkpoint loaders at the bottom left hold the SDXL base, SDXL refiner, and VAE. If you want an explanation of how to use the refiner in ComfyUI, you can simply start from someone else's 0.9 workflow. I trained a LoRA model of myself using the SDXL 1.0 base. That said, the refiner model does not improve my images much.
The Impact Pack doesn't seem to have these nodes. Comparing a single 1024 image at 25 base steps with no refiner against 20 base steps + 5 refiner steps: everything is better in the latter except the lapels. Image metadata is saved, but I'm running Vlad's SDNext. ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own; workflows ship as .json files that load straight into the environment. One workflow family adds an Ultimate SD Upscaler (using an SD 1.5 refined model) and a switchable face detailer, and comes in several versions, for example a "Face" variant (Base+Refiner+VAE, FaceFix, and 4K upscaling) and a "Simple" variant (Base+VAE with 4K upscaling). Traditionally, working with SDXL required two separate KSamplers: one for the base model and another for the refiner. And above all, be nice; belittling others' efforts will get you banned. Detailed installation steps include downloading, unzipping, and setting up the necessary files for ComfyUI. There are SDXL-specific negative prompts, and dedicated prompts for the refine, base, and general passes of the new SDXL model. Note that the SDXL refiner does not work with SD 1.5 models. In the full setup, all images are generated using both the SDXL base and refiner models, each automatically configured to perform a certain number of diffusion steps according to the "Base/Refiner Step Ratio" formula defined in the dedicated widget. Comfy's early support of the SDXL models pushed the most eager AI image creators to make the transition. Send results to VAE encode and decode nodes as needed. The only important constraint for optimal performance is that the resolution be set to 1024x1024, or another resolution with the same number of pixels but a different aspect ratio. Play around with different samplers and different numbers of base steps (30, 60, 90, maybe even higher). Does all this mean 8 GB of VRAM is too little for A1111?
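The "same amount of pixels, different aspect ratio" constraint can be checked mechanically. This helper is my own sketch (the 15% tolerance and the multiple-of-64 side constraint are assumptions that match common SDXL practice, not an official rule):

```python
def sdxl_friendly(width, height, target_pixels=1024 * 1024, tolerance=0.15):
    """Accept resolutions whose pixel count is close to 1024x1024
    and whose sides are multiples of 64 (a common latent constraint)."""
    if width % 64 or height % 64:
        return False
    return abs(width * height - target_pixels) / target_pixels <= tolerance

# The resolutions recommended earlier pass, while a TV-style 1080x720 does not:
print(sdxl_friendly(1024, 1024))  # True
print(sdxl_friendly(896, 1152))   # True
print(sdxl_friendly(1536, 640))   # True
print(sdxl_friendly(1080, 720))   # False (1080 is not a multiple of 64)
```

A filter like this is handy when exposing a width/height picker in a workflow, since it rejects sizes that drift too far from the model's training distribution.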
Is anybody able to run SDXL on an 8 GB VRAM GPU in A1111 at "normal" generation settings? As a test, generate the text-to-image prompt "Picture of a futuristic Shiba Inu" with the negative prompt "text, watermark" using SDXL base 0.9. For higher-quality previews, use taesd_decoder.pth for SD1.x/SD2.x and taesdxl_decoder.pth for SDXL. The comparison images are zoomed-in views created to examine how much detail the upscaling process preserves. As Figure 1 of the HandRefiner paper shows, Stable Diffusion and SDXL both generate malformed hands, e.g. an incorrect number of fingers or irregular shapes, which HandRefiner can effectively rectify. We import the official SDXL standard workflow, with annotations added next to each block explaining its purpose, and then demonstrate how to use the SDXL model in a ComfyUI workflow. I think this is the best balance I could find. Why so slow elsewhere? In ComfyUI the speed was approximately 2-3 it/s for a 1024x1024 image. The stable-diffusion-xl-base-0.9 model runs fine, but when I try to add stable-diffusion-xl-refiner-0.9, I run into issues. I'll try to get a background-fix workflow going; the blurry backgrounds are starting to bother me. In ComfyUI the handoff is accomplished by routing the output of one KSampler node (using the SDXL base) directly into the input of another KSampler node (using the refiner). Finally, crop_w/crop_h specify that the image should be diffused as if it had been cropped starting at those coordinates.
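A sketch of the size and crop micro-conditioning that CLIPTextEncodeSDXL packs alongside the text embedding may make those parameters clearer. The field names follow the node's widget names, but the dict structure and the helper itself are illustrative, not ComfyUI's internal format:

```python
def sdxl_conditioning(width, height, crop_w=0, crop_h=0,
                      target_width=None, target_height=None):
    """Collect SDXL's micro-conditioning values.

    width/height: the original size the image should appear to come from;
    crop_w/crop_h: the top-left corner of the crop within that image
        (non-zero values bias generation toward cropped-looking output);
    target_width/target_height: the size actually being generated.
    """
    return {
        "width": width,
        "height": height,
        "crop_w": crop_w,
        "crop_h": crop_h,
        "target_width": target_width or width,
        "target_height": target_height or height,
    }

cond = sdxl_conditioning(1024, 1024)  # the usual case: no crop, same target size
```

Leaving crop_w/crop_h at 0 and keeping target size equal to width/height is the normal setting; the other combinations mostly matter for reproducing the training-time conditioning described in the SDXL paper.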
Part 2 (this post): we will add the SDXL-specific conditioning implementation and test what impact that conditioning has on the generated images. As a prerequisite, using SDXL requires web UI v1.0.0 or later. Much appreciated if you can post the .json workflow, or a picture generated from it, so it is easier to set up. The sampler lets you use two different positive prompts. The Efficiency nodes include KSampler (Efficient), KSampler Adv. (Efficient), and KSampler SDXL (Eff.). In this video, we show how to use the SDXL base + refiner model; use the "Load" button on the menu to load a workflow. I would expect these parameters to be called "crop top left" rather than crop_w/crop_h. In researching inpainting with SDXL 1.0, three commonly used methods come up: the base model with a latent noise mask, the base model with InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face. A lot of people are just discovering this technology and want to show off what they have created. The SDXL refiner does not work with SD 1.5 models, and the upscalers do not give good results with SD 1.5 either. After gathering more knowledge about SDXL and ComfyUI and experimenting for a few days, I ended up with a basic (no upscaling) two-stage (base + refiner) workflow; it works pretty well for me, and I change dimensions, prompts, and sampler parameters while the flow itself stays as it is. SDXL can also be used easily on Google Colab with a pre-configured notebook and a ready-made ComfyUI workflow file that skips the difficult parts. ComfyUI is harder to learn, with its node-based interface, but generation is very fast, anywhere from 5-10x faster than AUTOMATIC1111. One pipeline starts at 1280x720 and generates 3840x2160 out the other end. This makes SDXL usable on some very low-end GPUs, at the expense of higher RAM requirements. Download the workflow's .json file and load it into ComfyUI to start generating with SDXL; as the comparison shows, the refiner model's images beat the base model's in quality and detail capture. To make full use of SDXL, load both models, run the base model starting from an empty latent image, and then run the refiner on the base model's output to improve detail. For ComfyUI, do not plug the Hires. fix script into the workflow.
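The base-then-refiner wiring just described can be sketched as a minimal graph in the style of ComfyUI's API ("prompt") JSON. The node IDs, checkpoint file names, and step values here are illustrative; compare against a real workflow export before relying on any of them:

```python
# Two-stage base+refiner graph: each ["id", slot] pair wires an input
# to another node's output, which is how ComfyUI's prompt JSON links nodes.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_refiner_1.0.safetensors"}},
    "3": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "4": {"class_type": "KSamplerAdvanced",   # base: steps 0-13, keep leftover noise
          "inputs": {"model": ["1", 0], "latent_image": ["3", 0],
                     "steps": 20, "start_at_step": 0, "end_at_step": 13,
                     "return_with_leftover_noise": "enable"}},
    "5": {"class_type": "KSamplerAdvanced",   # refiner: resume at step 13, add no new noise
          "inputs": {"model": ["2", 0], "latent_image": ["4", 0],
                     "steps": 20, "start_at_step": 13, "end_at_step": 20,
                     "add_noise": "disable"}},
}

# The refiner consumes the base's latent and resumes exactly where it stopped:
assert graph["5"]["inputs"]["latent_image"][0] == "4"
assert graph["4"]["inputs"]["end_at_step"] == graph["5"]["inputs"]["start_at_step"]
```

The two details that are easy to get wrong are shown in the comments: the base must return with leftover noise, and the refiner must not add fresh noise, or the handoff falls apart.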
This is an SDXL 1.0 ComfyUI workflow with nodes that use both the SDXL base and refiner models. At the time of writing, the Stable Diffusion web UI does not yet fully support the refiner model, but ComfyUI already supports SDXL and makes the refiner easy to use; ComfyUI is also fast overall. The Advanced Prompt Enhancer now supports Anthropic (Claude) and Groq connections; Groq is a free service that provides a remote inferencing platform for the latest high-quality open-source models, including the new Llama 3 models (llama3-70b and llama3-8b) and Mixtral-8x7b. The best balance I could find is an image size of 1024x720, with 10 base + 5 refiner steps and suitable samplers/schedulers, so we can use SDXL on our laptops without an expensive, bulky desktop GPU. Another setup combines ComfyUI with SDXL (base + refiner), ControlNet XL OpenPose, and FaceDefiner (2x); ComfyUI is hard, but after an entire weekend reviewing the material, the implementation came out right, with the ControlNet XL OpenPose and FaceDefiner models included. The default installation includes a fast latent preview method that is low-resolution. With a resolution of 1080x720 and specific samplers/schedulers, I managed a good balance and good image quality. I can run SDXL + refiner and more at much faster speeds in ComfyUI; it is not even a race, and that is assuming A1111 doesn't crash with an out-of-memory error. For high-quality previews, place the taesd_decoder.pth and taesdxl_decoder.pth models in the models/vae_approx folder. I've also had some success using the SDXL base as my initial image generator and then going entirely SD 1.5 after that.
Link to my workflows: a Google Drive folder. Stable Diffusion XL comes with a base and a refiner model. This guide shows how to use the SDXL 1.0 base and refiner models with AUTOMATIC1111's Stable Diffusion WebUI, which requires web UI v1.0.0 or later, so update if you haven't in a while. In a two-stage setup, the second stage is based on the SDXL refiner model: it uses the same prompts conditioned for the refiner, takes the output of the first stage, and runs 25 passes on the image starting from step 20. Typical pipe-building nodes include make-sdxl-refiner-basic_pipe, make-basic_pipe, make-sdxl-base-basic_pipe, ksample-dec, and sdxl-ksample; nodes that fail to load show as red on the graph. The refiner improves hands, but it does not remake bad hands: it detects hands and improves what is already there. An example workflow or method for using both the base and refiner in one workflow would be welcome. For me the refiner makes a huge difference: I only have a laptop with 4 GB of VRAM to run SDXL, so I get generations as fast as possible by using very few steps, 10 base + 5 refiner.
As the SDXL paper notes, the model takes the image's width and height as conditioning inputs, which is why nodes like these exist; inserting the refiner extends the graph accordingly. Thanks for reading; this covered the currently popular SDXL. ComfyUI is a node-based GUI for Stable Diffusion. Part 3: we will add an SDXL refiner for the full SDXL process, with SD 1.5 models for refining and upscaling. In the Discord comparison, SDXL 1.0 Base+Refiner was rated best by 26.2% of responses, the largest share, about 4% more than SDXL 1.0 Base only. The SDXL workflow includes wildcards, base+refiner stages, an Ultimate SD Upscaler (using an SD 1.5 refined model), and a face detailer. Sytan's SDXL ComfyUI workflow is a very nice example of how to connect the base model with the refiner and include an upscaler, and a two-pass SDXL Base + SD 1.5 variant exists as well. The sdxl-0.9-usage repo is a tutorial intended to help beginners use the newly released stable-diffusion-xl-0.9 model. Warning: that workflow does not save the image generated by the SDXL base model. There is also an Img2Img ComfyUI workflow. Here's the guide to running SDXL with ComfyUI: install various custom nodes such as Stability-ComfyUI-nodes, ComfyUI-post-processing, ComfyUI's ControlNet preprocessor auxiliary models (make sure you remove the previous comfyui_controlnet_preprocessors if you had it installed), and MTB Nodes. Honestly, the refiner is not a widely used feature, but depending on how you use it, it can produce generations that go beyond the model's usual characteristics; when using SDXL's refiner in ComfyUI, you need to understand at what point in the latent-space process it acts. Memory is manageable if you have less than 16 GB and are using ComfyUI, because it aggressively offloads from VRAM to RAM as you generate. ComfyUI seems to work with the stable-diffusion-xl-base-0.9 model.
Restart ComfyUI at this point. In the SDXL LoRA + Refiner workflow, the core of the composition is created by the base SDXL model and the refiner takes care of the minutiae. Each painting in the training data also comes with a numeric score from 0.0 to 10.0 given by a panel of expert art critics; this score indicates how aesthetically pleasing the painting is, so call it the "aesthetic score". The ltdrdata/ComfyUI-extension-tutorials repository on GitHub collects related tutorials; this was the base for my Hand-FaceRefiner workflow. Old node layout: pipeLoaderSDXL and pipeKSamplerSDXL, with pipeEDIT. SD XL Base 1.0 Alpha + SD XL Refiner 1.0 also work. Download the .json and add it to the ComfyUI/web folder. Think Diffusion's "Stable Diffusion ComfyUI Top 10 Cool Workflows" is a good starting point. Here is the best way to get amazing results with SDXL 0.9: construct an image generation workflow by chaining different blocks (called nodes) together; ComfyUI is a node-based UI for Stable Diffusion that runs a processing pipeline built from nodes. Please share your tips, tricks, and workflows for using this software to create your AI art. Basically, one such workflow just creates a 512x512 as usual, then upscales it, then feeds it to the refiner. You can also create animations with AnimateDiff. The sdxl_v1.0_comfyui_colab notebook (1024x1024 model) should be used with refiner_v1.0. Traditionally this took two samplers; however, with the new custom node, I've combined both stages into one. If A1111 goes out of memory, you may have to close the terminal and restart it to clear the effect. Step 1: download SDXL v1.0.
seed: 640271075062843. The creator of ComfyUI and I are working on releasing an officially endorsed SDXL workflow that uses far fewer steps and gives amazing results, such as the ones posted below. Also note that using the normal text encoders rather than the specialty text encoders for the base or the refiner can hinder results. This is a comprehensive tutorial on understanding the basics of ComfyUI for Stable Diffusion. When you post results like this, please add a link to the .json file; a workflow is better than a video for others to diagnose issues or borrow concepts. Yes, on an 8 GB card a ComfyUI workflow can load both the SDXL base and refiner models, a separate XL VAE, three XL LoRAs, plus a Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model, all fed from the same base SDXL model, and it all works together. There is also a default SDXL ComfyUI workflow. In this tutorial I show how to use SDXL Turbo combined with the SDXL refiner to generate more detailed images, and how to upscale them. This episode opens a new topic: another way of working with Stable Diffusion, the node-based ComfyUI; longtime viewers know I have always demonstrated with the webUI. Refiners should have at most half the steps that the generation has. Another option is to skip the SDXL refiner and Hires. fix entirely. Hands are finally fixed! This solution works about 90% of the time in ComfyUI and is easy to add to any workflow, regardless of the model or LoRA. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. In part 1 we implemented the simplest SDXL base workflow and generated our first images. Note: I used a 4x upscaling model, which produces 2048x2048; a 2x model should give better times, probably with the same effect. ComfyUI ControlNet aux is a plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. Step 4: download and use the SDXL workflow.
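The rule of thumb that the refiner should get at most half the steps of the generation can be made explicit with a tiny helper. The function name and its default 25% refiner share are my own illustration, chosen to reproduce the 10+5-style splits used elsewhere in these notes:

```python
def refiner_budget(total_steps, refiner_fraction=0.25):
    """Split a step budget between base and refiner.

    The refiner gets `refiner_fraction` of the total, capped at half,
    enforcing the rule of thumb that the refiner should never run
    more steps than half of the whole generation.
    """
    refiner_steps = min(round(total_steps * refiner_fraction), total_steps // 2)
    return total_steps - refiner_steps, refiner_steps

print(refiner_budget(15, 1/3))  # (10, 5), the laptop-friendly split
print(refiner_budget(20))       # (15, 5)
```

Plugging the two returned values into the base and refiner samplers' step settings keeps the split consistent when you change the total.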
SDXL support in the Stable Diffusion web UI still seems to be lacking, and I have seen several articles using ComfyUI instead, so I tried it. The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. At 0.0, the embedding only contains the CLIP model output. The speed of image generation is about 10 s/it (1024x1024, batch size 1), and the refiner works faster, up to 1+ s/it when refining at the same 1024x1024 resolution. This hands-on tutorial walks through integrating custom nodes and refining images with advanced tools: use the base and refiner in conjunction (first some steps with the base model, then some steps with the refiner) and pipe them into the Ultimate Upscaler. SDXL CLIP encodes matter more if you intend to do the whole process using SDXL specifically. In the two load-checkpoint modules on the far left, add the base model and the refiner model respectively. The installation process involves downloading the files from the GitHub page, unzipping, and setting up the appropriate checkpoints and models.
SDXL 1.0 Base + SDXL 1.0 Refiner: automatic calculation of the steps required for both the base and refiner models, quick selection of image width and height based on the SDXL training set, an XY Plot, and ControlNet with the XL OpenPose model (released by Thibaud Zamora). Designed to handle SDXL, this KSampler node has been crafted to provide an enhanced level of control over image details. Hello everyone, I've been experimenting with SDXL for the last two days, and as far as I know, the right way to make LoRAs work is to load them after the base model, since loading them for the refiner model does not seem to do anything. There is also an upscaling ComfyUI workflow, or so it seems. For hands to come out properly, the hands in the original image must be in good shape. This is my workflow (downloaded from GitHub and modified); it has the SDXL base and refiner sampling nodes along with image upscaling, now with ControlNet, Hires fix, and a switchable face detailer. HandRefiner comes with usable demo interfaces for ComfyUI, and after testing it is also useful on SDXL 1.0. Finally, we got the ComfyUI user interface up and running. Criticisms of SDXL range from greater hardware demands to the use of a more censored training dataset. Here is the wiki for using SDXL in SDNext. There are two text inputs because there are two text encoders. You can update or overwrite any of the eight original inputs in a pipe line with new information. But if I generate with the base model without activating the refiner extension, or simply forget to select the refiner model, and activate it later, it very likely goes out of memory when generating. Some success also comes from using the SDXL base for a 10-step KSampler pass, converting to an image, and then running it through an SD 1.5 model.
From what I can see in all the different examples, only one model or the other is used, since the Ultimate SD Upscale node only takes one model as input. The latent output from step 1 is also fed into img2img using the same prompt, but now using "SDXL_refiner_0.9" (not sure what this model is) to generate the image at the top right-hand corner. The SDXL Loader and Advanced CLIP Text Encode nodes come with an additional pipe output. With SDXL, I often get the most accurate results with ancestral samplers. This tutorial includes four ComfyUI workflows using Face Detailer, plus an XY Plot. In part 1 (this post), we will implement the simplest SDXL base workflow and generate our first images. Do not plug the Hires. fix script into the workflow. There is a high likelihood that I am misunderstanding how to use both models in conjunction within Comfy. I created this ComfyUI workflow to use the new SDXL refiner with old models (json here). Step 4: copy the SDXL 0.9 models into place. Please keep posted images SFW.