OPEN_SOURCE
REDDIT // MODEL RELEASE
Dealign.ai ships Mac-only uncensored Mistral Small 4
Dealign.ai released uncensored, JANG-quantized builds of Mistral Small 4 119B for Apple Silicon Macs. Support is limited to MLX Studio and `jang-tools`, with builds listed at 64GB and 37GB. The smaller build is the eye-catcher: the post claims 94% MMLU, while the Hugging Face card advertises 95.9% on HarmBench.
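Neither MLX Studio nor `jang-tools` is something this digest can verify, so as a general illustration only: a minimal sketch of loading MLX-format quantized weights on Apple Silicon with the open-source `mlx-lm` package. The repo id below is hypothetical, and a custom JANG format may well refuse to load outside the tools the post names.

```python
# Minimal sketch: loading MLX-format quantized weights with mlx-lm
# (pip install mlx-lm). The repo id is hypothetical; the post names
# only MLX Studio and jang-tools as supported, so a custom JANG
# format may not load via mlx-lm at all.
from mlx_lm import load, generate

# Downloads the weights from Hugging Face and loads them into unified
# memory, which CPU and GPU share on Apple Silicon.
model, tokenizer = load("dealign-ai/mistral-small-4-119b-jang-2l")  # hypothetical

prompt = "Summarize the trade-offs of 2-bit quantization in one paragraph."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200, verbose=True)
print(text)
```

Because the weights sit in unified memory rather than dedicated VRAM, the interesting question is whether the 37GB build leaves enough headroom on a 64GB machine, which the analysis below takes up.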
// ANALYSIS
This is more a clever local-inference flex than a base-model breakthrough, but it matters because it makes a 119B MoE feel reachable on Mac hardware. The real moat is the packaging around JANG and MLX Studio, not a new underlying architecture.
- The 37GB JANG_2L build is the practical win, because it gets a 119B-class model into the 64GB-Mac conversation (see the memory sketch after this list).
- MLX Studio and `jang-tools` are the only supported paths, so the release is much narrower than GGUF/llama.cpp-based distribution.
- The `CRACK`/abliteration angle will appeal to users who want fewer refusals, but it also makes safety and misuse concerns part of the product story.
- The benchmark claims are interesting, but the post's `30GB` headline and the repo's `37GB`/`64GB` metadata don't perfectly line up, so the numbers deserve a quick sanity check (see the arithmetic sketch after this list).
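Both size questions yield to back-of-envelope arithmetic. A minimal sketch, assuming 119B total parameters, that `JANG_2L` means roughly 2 bits per weight plus some higher-precision overhead (both assumptions; the format is undocumented here), and an approximate 75% Metal working-set limit on unified memory:

```python
# Back-of-envelope sanity check for the posted sizes. Assumptions
# (not from the post): JANG_2L ~ 2 bits/weight, ~25% overhead for
# embeddings and any layers kept at higher precision, and macOS
# granting the GPU roughly 75% of unified memory by default.
PARAMS = 119e9  # 119B total parameters

def weights_gb(bits_per_weight: float) -> float:
    """Raw weight storage in GB (1 GB = 1e9 bytes, matching repo listings)."""
    return PARAMS * bits_per_weight / 8 / 1e9

pure_2bit = weights_gb(2.0)        # ~29.8 GB -> matches the post's "30GB" headline
with_overhead = pure_2bit * 1.25   # ~37.2 GB -> matches the repo's 37GB metadata
print(f"pure 2-bit: {pure_2bit:.1f} GB, +25% overhead: {with_overhead:.1f} GB")

# Does the 37GB build fit on a 64GB Mac? Unified memory must also hold
# the KV cache, activations, and the OS and other apps.
unified = 64.0
gpu_budget = unified * 0.75        # approximate default Metal working-set limit
headroom = gpu_budget - 37.0
print(f"GPU budget ~{gpu_budget:.0f} GB, headroom after weights ~{headroom:.0f} GB")
```

Under those assumptions, the 30GB headline reads like the pure 2-bit weight size and the 37GB listing like the shipped artifact with overhead, which would reconcile the two numbers rather than contradict them, while still leaving roughly 11GB of GPU budget for KV cache and activations on a 64GB machine.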
// TAGS
mistral-small-4-119b-jang-2l-crack · llm · open-weights · inference · self-hosted · benchmark · safety
DISCOVERED
2026-03-23
PUBLISHED
2026-03-23
RELEVANCE
8/10
AUTHOR
HealthyCommunicat