ComfyUI workflow PNG examples (a Reddit roundup)

This page is a roundup of community notes, mostly from the unofficial ComfyUI subreddit, about ComfyUI workflow PNGs: what they are, where to find examples, and how to load and share them. The subreddit's house rules apply throughout: a lot of people are just discovering this technology and want to show off what they created, belittling their efforts will get you banned, and above all, BE NICE. Please share your tips, tricks, and workflows, and keep posted images SFW.

What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks a workflow down into rearrangeable elements, so you can easily make your own. From what I see in the ControlNet and T2I-Adapter examples, this also lets you set both a character's pose and its position in the composition.

The official ComfyUI Examples repo shows what is achievable with ComfyUI. All the images in that repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. In this guide I will try to help you get started and give you some starting workflows to work with.

On documentation: A1111 has useful categories like Features and Extensions that show at a glance what the repo and its add-ons can do. ComfyUI could have workflow screenshots, the way the examples repo does, to demonstrate possible usage and the variety of extensions; ignore the prompts and setup, I think the perfect place for them is the wiki on GitHub. I totally agree — the documentation uses highly technical language with no examples, which makes it worse. As a programmer, I find the workflow logic relatively easy to understand, but the function of each node cannot be inferred by simply looking at its name. Plus there are a ton of extensions which provide plenty of ease-of-use features.

On sharing: here are approximately 150 workflow examples of things I created with ComfyUI and AI models from Civitai — I moved my workflow host to https://openart.ai/profile/neuralunk?sort=most_liked. Hope you like some of them :) My videos do include workflows too, for the most part in the video description. Another user keeps a Google Drive folder updated with both JSON and PNG versions of their workflows (example by @midjourney_man: img2vid, no refiner). There are also catalog sites like Comfy Workflows, where you can share, discover, and run thousands of ComfyUI workflows: download and drop any image from the site into ComfyUI, and ComfyUI will load that image's entire workflow. You can also easily upload and share your own workflows, so that others can build on top of them. Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

That embedded metadata is the key mechanism. If you asked how to put the workflow into the PNG: you just need to create the PNG in ComfyUI, and it will automatically contain the workflow. Dragging a generated PNG onto the page, or loading one, will give you the full workflow, including the seeds that were used to create it — if you have any of those generated images in their original PNG form, you can just drop them into ComfyUI and the workflow will load. The catch is that Reddit strips this information out of PNG files when you upload them; OP probably thinks the workflow is included with the PNG, and it is, but Reddit strips it away. I had to place the image into a zip, because people have told me that Reddit strips PNGs of metadata. So please upload the PNG to civitai.com or https://imgur.com and post a link back here if you are willing to share it — or I'll do you one better, and send you a PNG you can directly load into Comfy.
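As a quick illustration of that mechanism, here is a minimal sketch for checking whether a PNG still carries the embedded workflow. It assumes Pillow is installed and uses a hypothetical file name; ComfyUI stores the graph as PNG text chunks, typically under the "workflow" (editor graph) and "prompt" (executed nodes) keys:

```python
import json
from PIL import Image  # Pillow

# Hypothetical file name; use any PNG saved by ComfyUI.
img = Image.open("generation.png")

# PNG text chunks show up in img.info as plain strings.
raw = img.info.get("workflow")
if raw is None:
    print("No workflow chunk - the upload site probably stripped it.")
else:
    graph = json.loads(raw)
    print(f"Workflow found: {len(graph.get('nodes', []))} nodes")
```

If the first branch prints, the usual workaround is the one above: share the original file inside a zip, or host it somewhere that preserves metadata.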
Loading a workflow from an image is mostly drag-and-drop, but there are pitfalls. The image has to actually contain a workflow — one you've generated yourself, for example — and you need to drag it into an empty spot, not onto a Load Image node or something. In general:

- If the image was generated in ComfyUI and the metadata is intact (some users / websites remove the metadata), you can just drag the image into your ComfyUI window.
- If the image was generated in ComfyUI and posted on Civitai, the image page should have a "Workflow: xx Nodes" box. Click this and paste into Comfy.

Generally speaking, workflows you see on GitHub can also be downloaded and used, but the web viewer gets in the way: there is the "example_workflow.png" in the file list at the top, and you should click Download Raw File — but alas, in some cases the workflow still does not load. Also, the example pictures do load a workflow, but they don't have a label or text indicating whether it's version 3.1 or not.

Troubleshooting reports from the threads: "First of all, sorry if this has been covered before — I did search and nothing came back. I can load ComfyUI through 192.168.1:8188, but when I try to load a flow through one of the example images it just does nothing; through localhost:8188 it works fine, I just can't load workflows from the example images using a second computer." Another: "I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything — looks like the metadata is not complete; apparently the dev uploaded some version with trimmed data." Another: "If I drag and drop the image, it is supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load either." One user wonders whether the ability to read workflows embedded in images is connected to the workspace configuration, since dragging PNG/JPG files that include workflows into ComfyUI — example images or otherwise — does nothing for them. And one bug report: "About a week or so ago, I began to notice a weird bug — if I load my workflow by dragging the image into the site, it'll put in the wrong positive prompt." If your post's image doesn't carry a loadable workflow, please change the flair to "Workflow not included".

Finding workflows is its own problem. I always love seeing a cool image online and trying to reproduce it, but finding the original method or workflow is troublesome, since Google's image search just shows similar-looking images. So I added reverse image search that queries a workflow catalog to find workflows that produce similar-looking results. (I tried to find either of those two examples, but I have so many damn images I couldn't find them.) Where can one get ready-made, elaborate workflows — for example, ones that might do tile upscaling like we're used to in AUTOMATIC1111, to produce huge images? For my part, I just glance at my workflows, pick the one I want, drag and drop it into ComfyUI, and I'm ready to go.

On inpainting specifically: ComfyUI's inpainting and masking aren't perfect, and a somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with: I feather the mask (feather nodes usually don't work how I want, so I use mask-to-image, blur the image, then image-to-mask), and use 'only masked area' where it also applies to the ControlNet — getting it applied to the ControlNet was probably the worst part. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. I've got three tutorials that can teach you how to set up a decent ComfyUI inpaint workflow; hopefully they will be useful to you. There's also "ComfyUI - Ultimate Starter Workflow + Tutorial": I've been working on that workflow for about a month and it's finally ready, so I also made a tutorial on how to use it (https://youtu.be/ppE1W0-LJas). It's entirely put together by me, using the ComfyUI interface and various open-source nodes that people have added to it. It's not perfect, but as a base to start from it'll work. EDIT: for example, this workflow shows the use of the other prompt windows.

One batch-processing idea from the threads, with a sketch of the bookkeeping below: give it a folder of images of outfits (with, for example, outfit1.png plus outfit1.txt containing a prompt describing the outfit in outfit1.png), give it a folder of OpenPose poses to iterate over, and give it a list of emotion expressions — a text file with multiple lines in the format "emotionName|prompt for emotion". Any ideas on this?
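No ready-made node for this is named in the threads, but the bookkeeping half of the idea is easy to sketch. The folder layout and file names below are assumptions, not an existing project's; the snippet pairs each outfit image with its .txt prompt and crosses them with the emotion list:

```python
from pathlib import Path

# Assumed layout: outfits/outfit1.png with a matching outfits/outfit1.txt,
# plus emotions.txt containing lines like "happy|smiling, relaxed pose".
def load_emotions(path: Path) -> dict[str, str]:
    emotions = {}
    for line in path.read_text(encoding="utf-8").splitlines():
        if "|" in line:
            name, prompt = line.split("|", 1)  # split only on the first pipe
            emotions[name.strip()] = prompt.strip()
    return emotions

emotions = load_emotions(Path("emotions.txt"))
jobs = []
for image in sorted(Path("outfits").glob("*.png")):
    outfit_prompt = image.with_suffix(".txt").read_text(encoding="utf-8").strip()
    for name, emotion_prompt in emotions.items():
        # One render job per (outfit, emotion) combination to queue later.
        jobs.append({"outfit_image": str(image), "emotion": name,
                     "prompt": f"{outfit_prompt}, {emotion_prompt}"})

print(f"Prepared {len(jobs)} jobs")
```

Iterating the OpenPose folder would just add a third nested loop over the pose images before queueing each job.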
Some popular starting points from the workflow catalogs:

- Merge 2 images together with this ComfyUI workflow
- ControlNet Depth workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff (AnimateDiff in ComfyUI is an amazing way to generate AI videos)
- ControlNet workflow: a great starting point for using ControlNet
- Inpainting workflow: a great starting point for inpainting

I was confused at first by the fact that, in several YouTube videos by Sebastian Kamph and Olivio Sarikas, they simply drop PNGs into an empty ComfyUI — but that really is all there is to it. One well-documented share describes its prompting format like this:

=== How to prompt this workflow ===
Main Prompt: the subject of the image in natural language. Example: a cat with a hat in a grass field.
Secondary Prompt: a list of keywords derived from the main prompt, with references to artists at the end. Example: cat, hat, grass field, style of [artist name] and [artist name].
Style and References

That one is the most well-organised and easy-to-use ComfyUI workflow I've come across so far showing the difference between the preliminary, base, and refiner setups — I found it very helpful. This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. For a concrete prompt, here I just use: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, built by Tesla, Tesla factory in the background. I'm not using "breathtaking", "professional", "award winning", etc., because that's already handled by "sai-enhance". Another shared workflow is dead simple — model dreamshaper_7, positive prompt "sexy ginger heroine in leather armor, anime", negative prompt "ugly", sampler euler, 20 steps, CFG 8, seed 674367638536724 — that's it, and the sample prompt as a test shows a really great result.

Upscaling is where people get overwhelmed. I've been using SD / ComfyUI for a few weeks now, and I find myself overwhelmed by the number of ways to do it; my ComfyUI workflow was created to solve exactly that. From the ComfyUI_examples, there are two different 2-pass ("hires fix") methods — one latent scaling, one non-latent scaling — and now there's also a `PatchModelAddDownscale` node. Hello fellow ComfyUI users: this is my workflow for testing different methods to improve image resolution, upscaling SD 1.5 from 512x512 to 2048x2048. Instead of something elaborate, I created a simplified 2048x2048 workflow, kept very simple for this test: load image, upscale, save image. No attempts to fix JPEG artifacts, etc. It's a simple way to compare these methods — a bit messy, as I have no artistic cell in my body — and this was really a test of ComfyUI itself. PS: if someone has access to Magnific AI, please can you upscale and post the result for 256x384 (5 JPEG quality) and 256x384 (0 JPEG quality)?

Iterative loops bring their own problems. I have a workflow with a loop where the latest generated image is loaded, encoded to latent space, sampled with 0.5 noise, decoded, then saved. Each time I do a step, I can see the color being somehow changed, and the quality and color coherence of the newly generated pictures are hard to maintain.

Sampling steps matter too: I conducted an experiment on a single image using SDXL 1.0 and ComfyUI to explore how doubling the sample count affects the output, especially at higher sample counts, seeing where the image changes relative to the sampling steps. Increasing the sample count leads to more stable and consistent results.
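A doubling experiment like that is easy to script against a workflow exported with "Save (API Format)" (covered below). This is a minimal sketch assuming a file named workflow_api.json with a single KSampler node; the node-id/class_type/inputs layout is how the API format is structured:

```python
import copy
import json

with open("workflow_api.json", encoding="utf-8") as f:
    base = json.load(f)

# In API format the top level maps node ids to {"class_type": ..., "inputs": ...}.
sampler_id = next(k for k, v in base.items() if v.get("class_type") == "KSampler")

for steps in (10, 20, 40, 80, 160):  # double the step count each run
    variant = copy.deepcopy(base)
    variant[sampler_id]["inputs"]["steps"] = steps
    with open(f"workflow_{steps}_steps.json", "w", encoding="utf-8") as f:
        json.dump(variant, f, indent=2)
```

Each variant file can then be queued in turn and the outputs compared side by side.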
If you use SwarmUI on top of Comfy: for your all-in-one workflow, use the Generate tab. It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want that. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes. I just started with ComfyUI and really love the drag-and-drop workflow feature.

Flux: Flux Schnell is a distilled 4-step model. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. You can then load or drag the following image in ComfyUI to get the workflow — the image itself contains it (https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_schnell_example.png). Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. There is also an all-in-one FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img; it can use LoRAs and ControlNets, and enables negative prompting with KSampler, dynamic thresholding, inpainting, and more. Another useful share is a node group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

Assorted tips: using just the base model in AUTOMATIC with no VAE produces this same result — that's because the base 1.0 version of the SDXL model already has that VAE embedded in it. For multi-character OpenPose work, remove 3 of the 4 stick figures in the pose image, generate one character at a time, and remove the background with the Rembg Background Removal Node for ComfyUI. Once the final image is produced, I begin working with it in A1111 — refining, photobashing in some features I wanted, re-rendering with a second model, and so on. Just my two cents.

Driving ComfyUI programmatically starts with a gotcha: ComfyUI is only able to load workflows saved with the "Save" button, not with the "Save API Format" button — the API workflows are not the same format as an image workflow. You'll create the workflow in ComfyUI, then use the "Save (API Format)" button under the Save button you've probably used before; if you can't see that button, you need to check 'enable dev mode options'. I'm personally using this as a layer between a Telegram bot and ComfyUI, to run different workflows and get the results using the user's text and image input: I have updated the comfy_api_simplified package, and now it can be used to send images, run workflows, and receive images from the running ComfyUI server. (I'm using the ComfyUI notebook from their repo, using it remotely in Paperspace.)
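I can't vouch for comfy_api_simplified's exact call signatures, so here is the same round trip sketched against ComfyUI's plain HTTP endpoint instead: a stock server listens on port 8188 and queues an API-format workflow via POST /prompt. Only the standard library is used; the file name is an assumption:

```python
import json
import urllib.request

with open("workflow_api.json", encoding="utf-8") as f:
    workflow = json.load(f)

# ComfyUI expects {"prompt": <api-format graph>} and replies with a
# prompt_id, which you can use to poll /history for the finished images.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # e.g. {"prompt_id": "...", ...}
```

A wrapper package like comfy_api_simplified is essentially a convenience layer over requests like this one.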
Going further in that direction, there is a tool that works by converting your workflow.json files into an executable Python script that can run without launching the ComfyUI server. Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values. That kind of packaging matters for client work: I have a client who has asked me to produce a ComfyUI workflow as the backend for a front-end mobile app (which someone else is developing using React); he wants a basic faceswap workflow. The TL;DR version of one such pipeline: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA. Upcoming tutorial: SDXL LoRA, plus using a 1.5 LoRA with SDXL, and upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

A note on terminology: if the term "workflow" had only ever been used to describe ComfyUI's node graphs, I would suggest just calling them "node graphs" or "nodes"; but the term has described node graphs for a long time, so, unfortunate or not, it has become entrenched.

Finally, tuning. u/wolowhatever — we set 5 as the default, but it really depends on the image and the image style, tbh; I tend to find that most images work well around Freedom of 3, while really chaotic images, or images that actually benefit from added details from the prompt, can look exceptionally good at ~8. And within prompts themselves, you can use parentheses to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8).
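To make that syntax concrete, here is a small sketch that pulls weighted spans out of a prompt string. The regex is illustrative only, not ComfyUI's actual tokenizer (the real parser also handles nesting and escaped parentheses):

```python
import re

# Matches "(text:1.2)"-style emphasis; unweighted words default to 1.0.
WEIGHTED = re.compile(r"\(([^():]+):([0-9]*\.?[0-9]+)\)")

def emphasis_spans(prompt: str) -> list[tuple[str, float]]:
    return [(m.group(1), float(m.group(2))) for m in WEIGHTED.finditer(prompt)]

print(emphasis_spans("(good code:1.2), plain words, (bad code:0.8)"))
# -> [('good code', 1.2), ('bad code', 0.8)]
```

Weights far above ~1.5 tend to distort results, so small adjustments usually go a long way.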