
# CorridorKey Engine

Neural network green screen keying for professional VFX pipelines. Fork of nikopueringer/CorridorKey with async multi-GPU inference, optimization profiles, a JSON-RPC engine API, and a Textual TUI.


## Install

Requires uv.

```sh
git clone https://github.com/99oblivius/CorridorKey-Engine.git && cd CorridorKey-Engine
uv sync
```

Windows: run `tools/Install_CorridorKey_Windows.bat` instead.

## Models

```sh
# CorridorKey (required, ~300 MB)
uv run hf download nikopueringer/CorridorKey_v1.0 --local-dir CorridorKeyModule/checkpoints

# BiRefNet — downloaded automatically via torch.hub

# GVM (optional, ~80 GB VRAM)
uv run hf download geyongtao/gvm --local-dir ck_engine/generators/gvm/weights

# VideoMaMa (optional, ~80 GB VRAM)
uv run hf download SammyLim/VideoMaMa --local-dir ck_engine/generators/videomama/checkpoints/VideoMaMa
uv run hf download stabilityai/stable-video-diffusion-img2vid-xt \
  --local-dir ck_engine/generators/videomama/checkpoints/stable-video-diffusion-img2vid-xt \
  --include "feature_extractor/*" "image_encoder/*" "vae/*" "model_index.json"
```

## Quick Start

```sh
# Linux / macOS
./launch.sh                                                          # TUI
./launch.sh inference /path/to/clips --srgb --despill 5 --refiner 1  # headless
./launch.sh generate-alphas /path/to/clips --model birefnet          # alpha hints
./launch.sh serve --listen :9400                                     # TCP daemon

# Windows
launch.bat
launch.bat inference C:\path\to\clips --srgb --despill 5 --refiner 1
```

The launch scripts handle uv, the virtualenv, and OpenEXR setup automatically. You can also run directly with `uv run corridorkey-engine [...]`, or install the package (`uv pip install -e .`) and use `corridorkey-engine` as a command.

## Engine API

CorridorKey runs as a standalone process speaking JSON-RPC 2.0. Any language can connect — spawn as a subprocess (stdio) or connect to a daemon (TCP).

```python
from ck_engine.client import EngineClient
from ck_engine.api.types import InferenceParams, InferenceSettings

with EngineClient.spawn() as engine:
    job_id = engine.submit_inference(InferenceParams(
        path="/path/to/clips",
        settings=InferenceSettings(despill_strength=0.5),
    ))
    for event in engine.iter_events():
        print(event)
        if type(event).__name__ in ("JobCompleted", "JobFailed"):
            break
```

See the Engine Protocol Reference for the full spec, and `examples/` for complete stdio and TCP client scripts.
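Because the wire format is plain JSON-RPC 2.0, a client in any language only needs to build the standard request envelope. A minimal sketch, assuming newline-delimited JSON framing and using a hypothetical method name (`inference.submit`) purely for illustration — the real method names live in the Engine Protocol Reference:

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int) -> bytes:
    """Build one JSON-RPC 2.0 request, framed as a single newline-delimited
    JSON line (the framing here is an assumption, not the documented spec)."""
    envelope = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    return (json.dumps(envelope) + "\n").encode("utf-8")

# Hypothetical method and parameter names, for illustration only:
msg = jsonrpc_request("inference.submit", {"path": "/path/to/clips"}, req_id=1)
```

The resulting bytes would be written to the daemon's TCP socket or the subprocess's stdin; responses come back as JSON objects carrying the same `id`.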

## Outputs

| Folder | Format | Contents |
| --- | --- | --- |
| `Matte/` | EXR | Linear alpha |
| `FG/` | EXR | Straight foreground (sRGB gamut) |
| `Processed/` | EXR | Premultiplied linear RGBA |
| `Comp/` | EXR/PNG | Composite preview (transparent RGBA or checkerboard) |
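The `FG/`, `Matte/`, and `Processed/` outputs are related by standard premultiplication: premultiplied RGB is the straight foreground scaled by the alpha. A minimal NumPy sketch of that relationship (EXR I/O omitted; array shapes are assumptions):

```python
import numpy as np

def premultiply(fg_rgb: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Combine a straight foreground (H, W, 3) and a linear matte (H, W)
    into premultiplied RGBA (H, W, 4), as found in Processed/."""
    rgb = fg_rgb * alpha[..., None]  # scale each channel by coverage
    return np.concatenate([rgb, alpha[..., None]], axis=-1)

fg = np.array([[[1.0, 0.5, 0.0]]])   # one orange straight-alpha pixel
a = np.array([[0.5]])                # 50% coverage
out = premultiply(fg, a)             # [[[0.5, 0.25, 0.0, 0.5]]]
```

Compositing apps generally expect one or the other convention, which is why both the straight (`FG/` + `Matte/`) and premultiplied (`Processed/`) forms are written.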

## VRAM at a Glance

| Profile | Precision | VRAM | Warmup | Key features |
| --- | --- | --- | --- | --- |
| `optimized` (default) | fp16 | ~2-3 GB | ~10-15 s | Flash attention, tiled refiner, cache clearing |
| `original` | fp32 | ~9-10 GB | ~5 s | No tiling, no cache clearing |
| `performance` | fp16 | ~8-12 GB | ~5-10 min | Full refiner, cuDNN benchmark, max-autotune |

Warmup is first-frame compilation time; compiled kernels are cached after the first run (`~/.cache/torch/inductor/`).

| Add-on | VRAM |
| --- | --- |
| GPU postprocessing | +~1.5 GB |
| cuDNN auto-tune | +2-5 GB |
| BiRefNet alpha hints | ~4 GB |
| GVM / VideoMaMa alpha hints | ~80 GB |

An 8 GB GPU is sufficient for the default profile. See the VRAM & Optimization Guide for benchmarks and tuning.
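The tables above suggest a simple rule of thumb for picking a profile from available VRAM. A sketch with illustrative thresholds (not an official heuristic from this project; the add-on costs above still have to be budgeted on top):

```python
def pick_profile(free_vram_gb: float) -> str:
    """Rule-of-thumb profile choice derived from the VRAM table above.
    Thresholds are illustrative assumptions, not engine defaults."""
    if free_vram_gb >= 12:
        return "performance"  # fp16, full refiner, max-autotune (~8-12 GB)
    if free_vram_gb >= 10:
        return "original"     # fp32, no tiling (~9-10 GB)
    return "optimized"        # fp16 default, tiled refiner (~2-3 GB)

print(pick_profile(8.0))  # an 8 GB card lands on the default profile
```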

## Documentation

| Doc | Covers |
| --- | --- |
| CLI Reference | All flags, commands, profiles, multi-GPU, MLX |
| Engine Protocol | JSON-RPC spec for plugin/integration developers |
| Architecture | Package structure, model hierarchy, pipeline design |
| VRAM & Optimization | Benchmarks, optimization profiles, VRAM breakdown |
| Async Pipeline | Threading model, DMA pipeline, GIL analysis |
| Python Examples | Complete stdio and TCP client scripts |

## Tests

```sh
uv sync --group dev
uv run pytest                 # all tests (no GPU or weights needed)
uv run pytest -m "not gpu"    # skip CUDA tests
```

## License

CC-BY-NC-SA-4.0 with additional terms by Corridor Digital. Commercial use of the tool is permitted. Repackaging, paid APIs, or integration into commercial software requires agreement. Forks must retain the "Corridor Key" name.

## Acknowledgements

Corridor Creates Discord
