Deplodock compiles PyTorch into CUDA
OPEN_SOURCE · TUTORIAL
REDDIT · 3h ago

CloudRift’s Deplodock is a pure-Python, raw-CUDA LLM compiler stack that lowers PyTorch graphs through six IRs into emitted kernels. The article walks an RMSNorm lowering end to end and argues the stack can reach roughly 50-90% of production-stack performance on selected workloads while staying small enough to inspect and hack.
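
For orientation, RMSNorm itself is only a few lines of PyTorch, and torch.fx can capture it as the kind of op graph a frontend lowers from. A minimal sketch for reference (illustrative code, not Deplodock’s actual frontend):

    import torch

    class RMSNorm(torch.nn.Module):
        """Root-mean-square norm: x * w / sqrt(mean(x^2) + eps)."""
        def __init__(self, dim: int, eps: float = 1e-6):
            super().__init__()
            self.eps = eps
            self.weight = torch.nn.Parameter(torch.ones(dim))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # One reduction over the last axis, then an elementwise rescale.
            rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
            return x * rms * self.weight

    # torch.fx records the forward pass as a graph of ops (pow, mean,
    # add, rsqrt, mul, mul) -- the starting artifact for any lowering.
    traced = torch.fx.symbolic_trace(RMSNorm(4096))
    print(traced.graph)

From here, the compiler’s job is deciding which of those graph nodes fuse into a single kernel.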

// ANALYSIS

This reads less like a “beat Triton” pitch and more like a strong compiler teaching tool that happens to run real kernels. That is the right ambition here: make the stack legible first, then optimize the parts that matter.

  • The six-layer IR split is the main win; it isolates frontend capture, tensor semantics, loop fusion, scheduling, and codegen cleanly enough that each stage can be debugged on its own.
  • Tensor IR is the key design choice because it gives the compiler a frontend-agnostic core, which is what you want if PyTorch, ONNX, and JAX all need to converge later.
  • The performance claim is credible but bounded: reaching 50-90% of the production stack is a strong showing for a handmade compiler, yet the hardest matmul and mixed-precision cases are still where industrial stacks earn their keep.
  • The article is strongest when it shows concrete lowering artifacts, especially the RMSNorm path from FX graph to fused loops to emitted CUDA (sketched after this list), because that makes the compiler’s behavior inspectable instead of mystical.
  • For developers, the big value is reproducibility and pedagogy: a 5,000-line codebase is small enough to modify, which matters more than raw throughput if the goal is understanding compiler design.
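
To make that lowering path concrete, here is a toy sketch of the tensor-IR → loop-IR → codegen hops for RMSNorm. Every class, function, and name below is invented for illustration; Deplodock’s real IRs, passes, and emitted kernels will differ, and a real kernel would parallelize the inner loops across threads with a tree reduction:

    from dataclasses import dataclass

    @dataclass
    class TensorOp:
        """Frontend-agnostic tensor-level op (the tensor-IR role)."""
        name: str        # "reduce_meansq" or "scale"
        axis: int = -1

    def lower_to_loops(ops, cols):
        """Loop-IR role: fuse the reduction and the rescale into one
        pass over each row instead of two separate kernels."""
        assert [op.name for op in ops] == ["reduce_meansq", "scale"]
        return "\n".join([
            f"  int i = blockIdx.x;  // one block per row",
            f"  float acc = 0.0f;",
            f"  for (int j = 0; j < {cols}; ++j)  // fused mean-square",
            f"    acc += x[i*{cols}+j] * x[i*{cols}+j];",
            f"  float rms = rsqrtf(acc / {cols} + eps);",
            f"  for (int j = 0; j < {cols}; ++j)  // elementwise rescale",
            f"    out[i*{cols}+j] = x[i*{cols}+j] * rms * w[j];",
        ])

    def emit_cuda(body):
        """Codegen role: wrap the fused loop nest in a kernel signature."""
        return ("__global__ void rmsnorm(const float* x, const float* w,\n"
                "                        float* out, float eps) {\n"
                + body + "\n}")

    ops = [TensorOp("reduce_meansq"), TensorOp("scale")]
    print(emit_cuda(lower_to_loops(ops, cols=4096)))

The point, per the article, is that each hop is a small printable artifact you can diff when a fusion pass misbehaves.
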
// TAGS
deplodock · llm · gpu · inference · open-source · devtool · cli

DISCOVERED

2026-04-29 (3h ago)

PUBLISHED

2026-04-29 (3h ago)

RELEVANCE

8/10

AUTHOR

NoVibeCoding