MODEL :// SEE-THROUGH

See-through Is One of the Coolest Anime Workflow Projects I’ve Seen in a While

Most people looking at AI image tools are still thinking in a pretty flat way.

Generate image. Maybe edit image. Maybe cut out a background. Maybe inpaint something.

See-through goes after a much more interesting problem. Instead of treating an anime illustration as one finished picture, it tries to break a single character image into fully separated, inpainted, ordered layers that can actually be used for motion and 2.5D animation work. The project is described by its authors as “Single-image Layer Decomposition for Anime Characters,” and the repo says it has been conditionally accepted to appear in the ACM SIGGRAPH 2026 conference proceedings.

That is what makes it interesting right away.

This is not just another “anime image in, anime image out” model. The whole point is to take a static drawing and turn it into something more usable. According to the project abstract, the system automates the transformation of a single anime illustration into a manipulatable 2.5D model by decomposing it into semantically distinct layers, filling in hidden regions, and inferring drawing order. In other words, it is trying to recover the structure that would normally be buried inside the final artwork. 
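To make that concrete: you can think of the output as a stack of named, hole-free layers plus an ordering, something like this hypothetical Python sketch (the project defines its own internal representation; these names are purely for illustration):

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class CharacterLayer:
    # Hypothetical structure, purely for illustration.
    name: str          # e.g. "front_hair", "left_eye", "jacket"
    rgba: np.ndarray   # H x W x 4 image, fully inpainted (no holes)
    draw_order: int    # inferred stacking order, back (0) to front

def composite(layers: list[CharacterLayer]) -> np.ndarray:
    """Rebuild the flat illustration by alpha-compositing back to front."""
    ordered = sorted(layers, key=lambda l: l.draw_order)
    h, w, _ = ordered[0].rgba.shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for layer in ordered:
        rgb = layer.rgba[..., :3].astype(np.float32)
        alpha = layer.rgba[..., 3:4].astype(np.float32) / 255.0
        out = rgb * alpha + out * (1.0 - alpha)
    return out.astype(np.uint8)
```

The point is that once the hidden regions are filled in, every layer is complete on its own, so moving one part no longer tears a hole in the picture.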

And honestly, that is a much bigger deal than it first sounds.

A lot of actual production work around anime characters, Live2D-style rigs, motion assets, and layered editing is still really manual. Someone has to separate parts, guess what is hiding behind the visible shapes, rebuild covered regions, and make the whole thing usable for movement. See-through is trying to remove a huge chunk of that pain by doing the decomposition automatically and exporting something artists can actually work with. 

The GitHub page says the framework can decompose a single character image into up to 23 fully inpainted semantic layers, including things like hair, face, eyes, clothing, and accessories. The main pipeline also exports the result as a layered PSD, along with intermediate depth maps and segmentation masks. That output format matters a lot, because PSD is not some abstract research format. It is something artists and production pipelines can actually pick up and use. 
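And because it is a plain layered PSD, you can inspect the result with any standard tool. A quick sanity check with the psd-tools Python package, for instance (the file name here is just a placeholder):

```python
from psd_tools import PSDImage  # pip install psd-tools

# "character.psd" is a placeholder for whatever the pipeline exports.
psd = PSDImage.open("character.psd")
print(f"{psd.width}x{psd.height}, {len(list(psd))} top-level layers")

for layer in psd:
    # Each decomposed part (hair, face, eyes, clothing, ...) should
    # show up as its own named, fully inpainted layer.
    print(layer.name, layer.bbox, "visible" if layer.visible else "hidden")
```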

That is probably the smartest part of the whole project.

A lot of research projects stop at “look, the result is plausible.” See-through feels more grounded in real workflow thinking. It is very clearly aimed at turning a finished illustration into something usable for 2.5D manipulation and animation, not just generating another pretty image. 

It also runs in ComfyUI now

This is where it gets even more interesting for actual workflow people.

There is already a separate wrapper called ComfyUI-See-through, and it turns the project into a proper ComfyUI plugin instead of something you only run as a standalone research setup. The plugin wraps See-through directly and is built around the same goal: decomposing a single anime illustration into manipulatable 2.5D layer-separated models with depth ordering for Live2D-style workflows. 

The ComfyUI version exposes this as a small node set under a SeeThrough category. According to the repo, the main nodes are SeeThrough Load LayerDiff Model, SeeThrough Load Depth Model, SeeThrough Decompose, and SeeThrough Save PSD. That means you can load the layer model, load the depth model, run the decomposition inside a graph, preview the reconstruction as a normal ComfyUI image output, and then export the layers into PSD from inside the workflow. 
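If you flatten that graph into plain Python, the dataflow looks roughly like this. To be clear, every identifier below is a hypothetical stand-in for the node it is commented with; in practice you wire these up in the ComfyUI graph editor, not in code:

```python
# Hypothetical stand-ins for the four nodes, to show the dataflow only.

def load_layerdiff_model():             # SeeThrough Load LayerDiff Model
    return "layerdiff-model-handle"     # placeholder

def load_depth_model():                 # SeeThrough Load Depth Model
    return "depth-model-handle"         # placeholder

def decompose(image, layer_model, depth_model):
    # SeeThrough Decompose: one character image in, separated layers
    # plus a flattened preview you can route to a normal image output.
    layers, preview = [image], image    # placeholder result
    return layers, preview

def save_psd(layers, filename_prefix):  # SeeThrough Save PSD
    print(f"would write {filename_prefix}.psd with {len(layers)} layers")

layer_model = load_layerdiff_model()
depth_model = load_depth_model()
layers, preview = decompose("character.png", layer_model, depth_model)
save_psd(layers, filename_prefix="character")
```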

That is a really big deal for usability.

Because now this is not just a paper tool or a one-off desktop utility. It can slot into an actual ComfyUI pipeline, which means people can start combining it with the rest of their node-based workflow. You can imagine using it after character generation, before animation prep, as part of a PSD-building chain, or together with other anime-specific tools in ComfyUI. That makes it feel way more practical.

The plugin repo also says it supports PSD export directly in the browser, depth PSD export for parallax or 3D-style workflows, preview output, automatic Hugging Face model download on first use, and several VRAM-saving options for smaller GPUs.

The feature list is actually pretty nice too. The plugin says it can output up to 24 semantic transparent layers, generate per-layer depth maps, split symmetric parts like eyes and ears into left and right, split hair into front and back through depth clustering, and export layered PSD files without needing a separate Python PSD dependency because the frontend handles that part with ag-psd. 
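The hair split is a good example of how much you can get out of a simple idea. The repo does not spell out its exact method beyond "depth clustering," but the general technique looks something like this sketch: cluster the depth values inside the hair mask into two groups and treat the nearer cluster as front hair.

```python
import numpy as np
from sklearn.cluster import KMeans  # generic sketch, not the plugin's code

def split_hair_by_depth(depth: np.ndarray, hair_mask: np.ndarray):
    """Split a hair mask into front/back via 2-way depth clustering.

    depth:     H x W float map (assumed smaller = closer to the viewer)
    hair_mask: H x W bool mask of hair pixels
    """
    values = depth[hair_mask].reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10).fit(values)

    # Whichever cluster centroid sits nearer the camera is "front hair".
    front_cluster = int(np.argmin(km.cluster_centers_.ravel()))
    labels = km.labels_

    front = np.zeros_like(hair_mask)
    back = np.zeros_like(hair_mask)
    front[hair_mask] = labels == front_cluster
    back[hair_mask] = labels != front_cluster
    return front, back
```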

Installation looks straightforward by ComfyUI standards. The repo says to clone it into ComfyUI/custom_nodes, install the requirements, restart ComfyUI, and then the nodes show up under the SeeThrough category. It also says the required models can auto-download from Hugging Face on first use, with a manual placement option under ComfyUI/models/SeeThrough/ if needed. 
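The auto-download part usually comes down to a few lines of huggingface_hub. A minimal sketch of that pattern, assuming a placeholder repo id (the wrapper's real repo ids and folder layout are documented in its README):

```python
import os
from huggingface_hub import snapshot_download  # pip install huggingface_hub

MODEL_DIR = "ComfyUI/models/SeeThrough"
REPO_ID = "example/see-through-weights"  # hypothetical placeholder repo id

# Fetch the weights once on first use; later runs reuse the local copy.
if not os.path.isdir(MODEL_DIR) or not os.listdir(MODEL_DIR):
    snapshot_download(repo_id=REPO_ID, local_dir=MODEL_DIR)
```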

And importantly, the ComfyUI wrapper includes some actual VRAM thinking instead of just assuming everybody has absurd hardware. The repo documents tag embedding caching, text encoder unloading, optional group offload, configurable depth resolution, and a suggested ladder of settings for people in the 8 to 12 GB VRAM range. 
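Most of those options map onto familiar PyTorch patterns. Text encoder unloading, for example, usually means encoding once, caching the embeddings, and moving the encoder off the GPU; here is a generic sketch of that pattern, not the plugin's actual code:

```python
import torch

def encode_then_unload(text_encoder, tokens, device="cuda"):
    """Generic pattern: run the text encoder once, keep the embeddings,
    then free its VRAM. Not the plugin's actual implementation."""
    text_encoder.to(device)
    with torch.no_grad():
        embeddings = text_encoder(tokens.to(device))
    text_encoder.to("cpu")        # unload the encoder from the GPU
    torch.cuda.empty_cache()      # hand the freed memory back to CUDA
    return embeddings             # cache and reuse across runs (tag caching)
```

The other options are the usual trades: offloading swaps VRAM for transfer time, and a lower depth resolution swaps detail for memory.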

Why this matters

What I like most here is that the project is not stopping at the paper.

A lot of cool research never crosses over into real creative tooling. But once something shows up as a ComfyUI node pack, it starts becoming part of how people actually work. And for something like See-through, that matters even more, because this is exactly the sort of tool that benefits from being one stage in a bigger chain instead of a totally isolated app. 

So the bigger story is not just that See-through exists.

It is that See-through is already starting to become usable in real pipelines.
