Alright, everyone knows how it is with AI image generation, right? You get hooked on creating these amazing characters, these worlds, these visions. But then you want to bring that same character back in a different scene, a new outfit, maybe just chilling in a coffee shop instead of battling dragons. And that’s where things usually get… tricky. Enter LoRAs, right? They’re cool, they can help. But let’s be honest, training them? It’s a whole thing. Gathering images, tweaking settings, waiting… Sometimes you just want results now, you know? Well, hold onto your hats, folks, because there’s a new kid on the block, and it’s kind of blowing minds. It’s called ACE++, and get this: it promises insane character consistency from just one single image, and it works in ComfyUI.
Yeah, you read that right. One. No lengthy training sessions, no datasets, no fuss. Just bam, consistent characters popping out of your image generator like magic. And the best part? It’s playing nice with ComfyUI, that node-based interface we all know and… well, some of us love to tinker with. So, if you’re already in the ComfyUI universe, you’re in for a treat.


Table of contents
- ACE++? Sounds Fancy. What Is It?
- ACE++ in the Real World: Seeing is Believing
- Getting Your Hands Dirty: Installation and a Quick Peek
- Using with ComfyUI (Easier Workflow – Recommended!)
- Using ACE++ to Keep Your Characters Consistent Using ComfyUI
- The Future is Consistent (and Maybe Without LoRA?)
- Wrapping Up: Go Forth and Be Consistent!
ACE++? Sounds Fancy. What Is It?
Okay, so “ACE++” might sound like some top-secret government project, but it’s actually a research project hailing from Alibaba in China. Pretty cool, huh? They’re calling it “Instruction-Based Image Creation and Editing via Context-Aware Content Filling.” Whoa, that’s a mouthful. Basically, it’s a smarty-pants way of saying it figures out what’s in your image and lets you mess with it or build on it using simple instructions.
Think of it like this: you give it a picture of your character, maybe it’s a cool elf warrior you dreamed up. Then you tell ACE++, “Hey, put this elf in a cyberpunk city, make them wear neon armor,” and poof, it (hopefully!) does it while keeping that elf looking like, well, that elf.
Model List:
They’ve actually got a few different “flavors” of ACE++, designed for different jobs. From what we can tell, there are three main models kicking around right now:
- ACE++ Portrait: This one’s all about keeping faces consistent. Perfect for when you want to see your character in different movie posters, or maybe you want to see them try on different hats (very important stuff, obviously).
- ACE++ Subject: Need to slap your logo onto a coffee mug? Or maybe you want to see your favorite toy in a jungle setting? This is your go-to. It’s about keeping objects consistent across different scenes.
- ACE++ Local Editing: This one’s a bit more hands-on. Imagine you’ve got a picture, and you want to redraw a specific part of it, maybe you want to change the background or fix a wonky hand (AI hands, am I right?). Local Editing lets you do that while trying to keep the rest of the image looking… well, not totally messed up.
And apparently, there’s an “ACE++ Fully” model coming soon. Sounds like the ultimate all-in-one package, but for now, we’ll have to wait and see what that brings to the table.
So, How Does This No-Training Magic Actually Work?
Alright, let’s get a little bit techy, but I promise to keep it breezy. From what I can gather from the YouTube guides and the documentation, ACE++ is working hand-in-hand with something called Flux Fill. Flux Fill, in a nutshell, is another AI model that’s really good at filling in missing parts of images or, in this case, expanding on existing images.
The workflow in ComfyUI seems to go something like this:
- You feed it your image. The one image of your character that you want to become the star of your AI art universe.
- ComfyUI magic. You load up a special ComfyUI workflow that includes the ACE++ nodes and the Flux Fill model. There’s a bit of downloading and node-installing involved, but hey, that’s the fun of ComfyUI, right?
- Prompts are your friend. You write a prompt describing what you want to do with your character. “Elf warrior drinking coffee,” “Elf warrior in space,” “Elf warrior doing the dishes” – whatever your heart desires. You can even throw in style prompts or LoRAs to nudge the style in a certain direction.
- Flux Guidance is key. Apparently, cranking up the “flux guidance” (a setting in the workflow) helps ACE++ really stick to your character. Think of it like telling the AI, “No, seriously, really focus on keeping this person the same!”
- Generate, generate, generate. Now, here’s the thing – and this is crucial. The results can be… variable. Some generations might be mind-blowingly good, like “wow, did it actually read my mind?” good. Others? Well, let’s just say not every AI creation is a masterpiece right out of the gate. So, the advice is to hit that “generate” button a few times. Reroll until you get something that makes you go, “YES! That’s the one!”
No Training Required!
Now, about the “no training” part. ACE++ itself is trained, of course. But you don’t have to do any training yourself. You’re leveraging pre-trained models and clever workflows. It’s kind of like using pre-made LEGO bricks instead of having to smelt your own plastic and mold each brick individually. Much faster, much easier, and you still get to build cool stuff.
Oh, and a little heads-up: running this stuff can be a bit VRAM-hungry. If your graphics card is feeling a bit… underpowered, you might run into some slowdowns or even crashes. But don’t despair! Services like Think Diffusion are out there, offering cloud-based machines with beefy GPUs. You can basically rent some serious processing power to run these demanding AI tools. Plus, they often come with ComfyUI and all the necessary bits pre-installed. Handy, right?

ACE++ in the Real World: Seeing is Believing
Okay, theory is cool and all, but let’s talk about what ACE++ can actually do. The demos and the project pages are packed with examples, and they are honestly pretty impressive.
Remember that Albert Einstein example? They took a photo of Einstein and then, just by prompting, dressed him up as a wizard, complete with elf ears and a robe. And it still looked like Einstein! Same with Stephen Curry being turned into Superman. It’s a bit surreal, but also… kind of awesome.
And it’s not just famous faces. The examples show it working with regular folks too. You can take a photo of yourself and then prompt ACE++ to put you in different outfits, different locations, different… situations. Want to see yourself as a cyberpunk samurai? A Victorian detective? A cartoon character? ACE++ seems to be able to pull it off, often with surprising accuracy.
The style thing is interesting too. Apparently, ACE++ tends to stick to the style of your input image. So, if you give it a photo that’s very realistic, it’ll likely generate more realistic-looking outputs. If your input is more painterly or illustrative, the outputs will probably lean that way too. You can try to force a different style with prompts or LoRAs, but it might be a bit of a wrestling match. Sometimes embracing the input style is the way to go for the best results.
Getting Your Hands Dirty: Installation and a Quick Peek
Alright, feeling the urge to try this out yourself? Good! Here’s the super-condensed version of how to get ACE++ up and running. (Definitely check the official documentation and video guides for the full scoop, though.)
Basically, you’ll need to:
- Grab the code. Head over to the ACE++ GitHub repo and clone it. Good old git clone command will do the trick.
- Install the bits and bobs. Navigate into the ACE++ folder in your terminal and run pip install -r requirements.txt. This will install all the Python packages it needs to work its magic.
- Download the models. ACE++ relies on the FLUX.1-Fill-dev base model, which you can snag from Hugging Face. You’ll also need to download the ACE++ portrait, subject, and local editing models themselves. The documentation and ComfyUI workflows will point you to the right download links.
- Set up environment variables. This sounds scarier than it is. Basically, you need to tell your system where you’ve put all those downloaded models. You’ll be setting environment variables like FLUX_FILL_PATH, PORTRAIT_MODEL_PATH, etc. The documentation will give you the exact commands to use, and you can usually just copy and paste them into your terminal.
- Run the inference script. Once everything’s set up, you can run python infer.py to start generating images using ACE++. There are example commands in the documentation to get you started.
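To tie the setup steps together, here’s a small Python sketch that builds those environment-variable exports for you. Only FLUX_FILL_PATH and PORTRAIT_MODEL_PATH are named above; the other two variable names and all the file paths here are hypothetical placeholders, so check the official ACE++ README for the real ones.

```python
import os

# Hypothetical download location -- point this wherever you actually saved the models.
MODEL_DIR = os.path.expanduser("~/models/ace_plus")

# FLUX_FILL_PATH and PORTRAIT_MODEL_PATH are mentioned in the docs; the other
# two names below are guesses -- check the ACE++ README for the exact set.
env_vars = {
    "FLUX_FILL_PATH": os.path.join(MODEL_DIR, "FLUX.1-Fill-dev"),
    "PORTRAIT_MODEL_PATH": os.path.join(MODEL_DIR, "ace_plus_portrait.safetensors"),
    "SUBJECT_MODEL_PATH": os.path.join(MODEL_DIR, "ace_plus_subject.safetensors"),
    "LOCAL_MODEL_PATH": os.path.join(MODEL_DIR, "ace_plus_local_editing.safetensors"),
}

# Print shell export lines you can paste into your terminal
# before running `python infer.py`.
for name, path in env_vars.items():
    print(f'export {name}="{path}"')
```

Run it once, paste the printed lines into your terminal, and then the inference script should be able to find everything.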
Using with ComfyUI (Easier Workflow – Recommended!)
While the infer.py script lets you run ACE++ directly, it’s generally much easier to use with ComfyUI.
- Search for ComfyUI ACE++ Workflows: Look online (platforms like GitHub, CivitAI, or YouTube) for ComfyUI workflows specifically designed for ACE++, or use this workflow. These workflows usually pre-configure the nodes and model loading for you.
- Install Custom Nodes (if needed): If the workflow uses custom ComfyUI nodes, ComfyUI will usually prompt you to install them when you load the workflow. Red nodes in ComfyUI usually just mean you’re missing a custom node or two, and the ComfyUI Manager makes installing those pretty painless.

- Load Models in ComfyUI: Place the downloaded FLUX.1-Fill-dev model and any downloaded ACE++ LoRA models in your ComfyUI models directories (e.g., ComfyUI/models/diffusion_models for the base model, ComfyUI/models/loras for the LoRAs).

- Load the Workflow in ComfyUI: Drag the workflow file (.json or similar) into your ComfyUI interface.
- Configure and Generate: Adjust the nodes in the workflow (input image, prompts, settings) and click “Queue Prompt” to generate images using ACE++ in ComfyUI.
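If you want to sanity-check the model placement from step 3 before loading the workflow, a tiny script like this can help. The ComfyUI root and the exact filenames are assumptions here; match them to your own install and whatever names your downloads actually have.

```python
from pathlib import Path

# Assumed ComfyUI checkout location -- change this to your own install.
COMFYUI_ROOT = Path("ComfyUI")

# Base model goes in diffusion_models, the ACE++ LoRAs in loras.
# These filenames are illustrative, not official.
expected_files = {
    COMFYUI_ROOT / "models" / "diffusion_models" / "flux1-fill-dev.safetensors",
    COMFYUI_ROOT / "models" / "loras" / "ace_plus_portrait.safetensors",
}

for path in sorted(expected_files):
    print(("found   " if path.exists() else "missing ") + str(path))
```

Anything reported as missing will show up as a model-loader error (or a red node) once you load the workflow.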
Using ACE++ to Keep Your Characters Consistent Using ComfyUI
Alright, the setup is done! Now for the fun part: actually using ACE++ to create some consistent characters. Here’s how it works:
1. Load Up Your Character Image
The workflow starts with loading an image. This is going to be your “reference” image – the picture of the character you want to keep consistent. Make sure it’s an image where the face is clear. The workflow is set to automatically resize your image to 1024×1024. Why 1024×1024? Well, it’s a good size for Flux Fill to work its magic, and it also helps make sure you’re not accidentally loading in some giant image that’ll crash your system. You can try other sizes if you want to experiment, but 1024×1024 is a safe bet, especially if your computer isn’t a super-powered beast.
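As a rough Python illustration of what that resize step does (using Pillow), here’s a naive version that just stretches the image to a 1024×1024 square. The actual ComfyUI resize node may crop or letterbox instead, so treat this as a sketch of the idea, not the node’s exact behavior.

```python
from PIL import Image

def prepare_reference(path, size=1024):
    # Load the reference image and force it to the workflow's expected
    # 1024x1024. This naive version stretches to a square; the workflow's
    # resize node may preserve aspect ratio instead.
    img = Image.open(path).convert("RGB")
    return img.resize((size, size), Image.LANCZOS)
```

If your reference isn’t already square, cropping it to a square around the face before resizing will avoid stretching the features.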
2. Mask Preview – What’s This About?
You’ll notice a preview of a mask in the workflow. Okay, this is a bit technical, but here’s the gist: because of how Flux Fill works, the original image you loaded needs to be part of the generation process. Yeah, it’s a little weird, and honestly, the creator of the workflow mentioned they couldn’t find a way around it. If you figure out a clever workaround, definitely let them know! To deal with this, the workflow “pads” your original image, and that padding is part of why you see the mask preview. Don’t worry too much about the technical details here; just know it’s part of how ACE++ and Flux Fill play together.
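To make the padding idea concrete, here’s a hedged sketch of the general inpainting technique: the reference image sits on one half of a wider canvas, and a mask marks the other half as the region Flux Fill is allowed to repaint. This is an illustration of the concept only, not the workflow’s exact node graph.

```python
from PIL import Image

def pad_with_mask(reference, size=1024):
    # Place the (resized) reference on the left half of a double-width
    # canvas; the right half is blank and will be generated.
    canvas = Image.new("RGB", (size * 2, size), "black")
    canvas.paste(reference.resize((size, size)), (0, 0))

    # Inpainting mask: white (255) = "generate here", black (0) = "keep".
    mask = Image.new("L", (size * 2, size), 0)
    mask.paste(255, (size, 0, size * 2, size))
    return canvas, mask
```

The mask preview you see in the workflow corresponds to something like this: the kept region (your reference) stays black, and the area where your new scene gets painted is white.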
3. Prompt Time! Tell it What You Want
Now for the creative part! Just like with any other image generation in ComfyUI, you need to write a prompt. This is where you tell the AI what you want to create with your consistent character. Think about things like:
- Scene: Where is your character? A coffee shop? A futuristic city? A fantasy forest?
- Action: What are they doing? Drinking coffee? Fighting dragons? Relaxing on a beach?
- Style: Do you want a realistic photo? A painterly style? Something cartoony?
Remember those example prompts we saw earlier? Something like: “Maintain the face and hat. A man in a white shirt is having a coffee in the middle of a busy road.” You can use prompts like that as inspiration.
4. Settings Tweaks (But Keep it Simple to Start)
Okay, settings. You’ll see a few settings in the workflow. The most important one to pay attention to for ACE++ is the “Flux guidance.” This controls how strongly Flux Fill tries to stick to your reference image. The default in the workflow is set to 50, which is a pretty good starting point. You can experiment with this later to see how it affects your results.
The workflow also uses Euler sampler with 20 steps and a CFG of 1. Again, these are solid starting settings. Feel free to play around with these later too, but for your first few tries, it’s probably best to keep things as they are.
5. Generate and See the Magic!
Hit that “Generate” button and watch ComfyUI work its magic! Hopefully, you’ll get an image with your character looking consistently like the person in your reference image, but in the new scene and situation you described in your prompt.
The Future is Consistent (and Maybe Without LoRA?)
ACE++ is still pretty new on the scene, but it’s already making waves in the AI image generation community. The promise of high-quality character consistency without the hassle of training is a huge deal. Could this be the start of a trend away from LoRAs for character work? Maybe, maybe not. LoRAs still have their place, especially for fine-tuning specific styles and details. But ACE++ offers a different approach: a more plug-and-play, instruction-based way to achieve consistency.
And the fact that it’s open-source and community-driven is fantastic. People are already experimenting with workflows, sharing tips, and probably figuring out even cooler ways to use ACE++. Who knows what amazing workflows and techniques the community will come up with next?
Plus, with that “ACE++ Fully” model on the horizon, it feels like this is just the beginning. Imagine a future where creating consistent characters and worlds in AI art is as simple as typing a few sentences and uploading a single image. That future might be closer than we think, thanks to projects like ACE++.
Wrapping Up: Go Forth and Be Consistent!
So, there you have it – ACE++, the no-training AI marvel that’s shaking up the character consistency game. It’s not perfect, it can be a bit demanding on your hardware, and you might have to reroll a few times to get those killer results. But the potential is undeniable. The ability to generate consistent characters from a single image, without diving into the complexities of training, is a real game-changer for artists, creators, and anyone who just wants to have some fun with AI image generation.
If you’re even remotely curious, I highly recommend giving ACE++ a try. Download the ComfyUI workflow, wrestle with the installation for a bit (it’s a rite of passage, right?), and start experimenting. Share your creations, share your tips, and let’s see what amazing things we can all build with this new tool.
Who knows, maybe we’ll even see a world with fewer LoRAs and more… well, consistently awesome AI art. One can dream, right?
Now, go get generating! And let me know in the comments what you create with ACE++. I’m genuinely excited to see what you all come up with!