OPEN_SOURCE
REDDIT · 21d ago · MODEL RELEASE
HauhauCS drops uncensored Qwen3.5-122B GGUF quants
HauhauCS released an uncensored GGUF build of Qwen3.5-122B-A10B, a 122B MoE with ~10B active parameters, plus a new K_P quant family and mmproj vision support. The goal is a fully unlocked local model that, according to HauhauCS, keeps the original Qwen behavior while stripping refusals.
// ANALYSIS
This is less a frontier-model debut than a packaging and quantization flex: the uncensoring will get the headlines, but the practical win is making a 122B-class MoE easier to use locally. If the K_P claims hold up, this is the kind of release that keeps a model relevant long after the original checkpoint is old news.
- K_P quants are the real differentiator: the author says they use model-specific profiles and imatrix to preserve quality where it matters, delivering roughly 1-2 quant levels better quality for only 5-15% more file size.
- The release is clearly aimed at local inference stacks, with llama.cpp, LM Studio, Jan, and koboldcpp called out, plus `--jinja`, `enable_thinking=false`, and mmproj guidance for the vision path.
- "Aggressive" is an explicit trade-off: fewer refusals and no personality edits, but also fewer guardrails than the base Qwen release.
- Dropping BF16 keeps the repo focused on distributable quants instead of a 250GB dense artifact, which is a sane trade at 122B scale.
- The author's claims line up across the [announcement](https://www.reddit.com/r/LocalLLaMA/comments/1s0aa1y/qwen35122ba10b_uncensored_aggressive_gguf_release/) and [model card](https://huggingface.co/HauhauCS/Qwen3.5-122B-A10B-Uncensored-HauhauCS-Aggressive), including 0/465 refusals, the K_P lineup, and the local-runtime notes.
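For the local-runtime notes above, a minimal llama.cpp launch sketch. The filenames here are hypothetical placeholders (the repo's actual shard names will differ); `--jinja` and the mmproj vision path are what the release calls out, and passing `enable_thinking=false` via `--chat-template-kwargs` assumes a recent llama.cpp build that supports that flag:

```shell
# Hypothetical filenames -- substitute the actual quant and mmproj files
# from the Hugging Face repo.
llama-server \
  -m Qwen3.5-122B-A10B-Uncensored.Q4_K_P.gguf \
  --mmproj mmproj-Qwen3.5-122B-A10B.gguf \
  --jinja \
  --chat-template-kwargs '{"enable_thinking": false}' \
  -c 8192 --port 8080
```

The same flags apply to `llama-cli` for one-off prompts; LM Studio and koboldcpp expose equivalent toggles in their UIs.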
// TAGS
qwen3.5-122b-a10b-uncensored-aggressive · qwen3.5-122b-a10b · llm · multimodal · open-weights · self-hosted · inference · safety
DISCOVERED
2026-03-22
PUBLISHED
2026-03-22
RELEVANCE
9/10
AUTHOR
hauhau901