OPEN_SOURCE
REDDIT // MODEL RELEASE
Falcon-H1R Heretic V2 drops uncensored reasoning weights
Reddit user netcat420 released Falcon-H1R-7B-Heretic-V2, an abliterated variant of TII’s hybrid Transformer+SSM Falcon-H1R-7B, built with a custom Heretic fork that targets both attention and SSM output layers. The post claims 3/100 refusals at 0.0001 KL divergence and positions the uncensored model for local distillation and defensive cybersecurity research, with explicit high-risk warnings.
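The post does not include code, but the description matches the standard directional-ablation ("abliteration") recipe: estimate a refusal direction in the residual stream, then orthogonalize the weights that write into it. A minimal sketch below, assuming PyTorch, a precomputed `refusal_direction` (typically the difference of mean activations on harmful vs. harmless prompts), and the module suffixes named in the post; the actual Falcon-H1R parameter paths and Heretic's per-layer scaling may differ.

```python
import torch

def orthogonalize(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # W has shape (d_out, d_in) and writes into the residual stream.
    # Subtracting the rank-1 projection d (d^T W) removes the layer's
    # ability to write anything along the refusal direction d.
    d = d / d.norm()
    return W - torch.outer(d, d @ W)

# Suffixes taken from the post's description; hypothetical for
# illustration, since the real parameter names may differ.
TARGETS = ("attn.o_proj.weight", "ssm.out_proj.weight")

@torch.no_grad()
def abliterate(model: torch.nn.Module, refusal_direction: torch.Tensor) -> None:
    for name, param in model.named_parameters():
        if name.endswith(TARGETS):
            param.copy_(orthogonalize(param, refusal_direction.to(param.device, param.dtype)))
```

The claimed novelty is only the second suffix: hitting `ssm.out_proj` alongside `attn.o_proj` so that both halves of each hybrid block lose the refusal direction, where Transformer-only tooling would leave the SSM path untouched.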
// ANALYSIS
This is a notable community model hack for hybrid architectures, but the core performance and safety claims are still mostly trust-based rather than independently validated.
- The interesting technical bit is dual-layer targeting (`attn.o_proj` + `ssm.out_proj`) on a hybrid Falcon backbone, where many older abliteration workflows were built for pure Transformers.
- Reported metrics (3% refusal rate, 0.0001 KL divergence) come from a small self-reported test setup, so developers should treat them as early signals rather than robust benchmark evidence; see the evaluation sketch after this list.
- The quantized footprint (~4.5 GB at Q4_K_M, per the post) makes local experimentation accessible on consumer hardware; see the local-run sketch after this list.
- The release materially increases misuse risk by design; even with defensive intent, any deployment needs strict sandboxing, logging, and access controls.
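The post's two headline numbers are cheap to sanity-check in principle. A rough sketch of one plausible protocol, assuming PyTorch and Transformers-style models that expose `.logits`; the refusal markers, the `generate` callable, and the prompt sets are all hypothetical, since the original post does not specify its methodology.

```python
import torch
import torch.nn.functional as F

# Hypothetical substring heuristic for counting refusals; serious
# evaluations usually use a judge model or classifier instead.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry")

def refusal_rate(generate, prompts) -> float:
    # `generate` is an assumed callable mapping a prompt to a completion.
    refused = sum(
        any(m in generate(p).lower() for m in REFUSAL_MARKERS) for p in prompts
    )
    return refused / len(prompts)

@torch.no_grad()
def mean_token_kl(base_model, ablit_model, input_ids: torch.Tensor) -> float:
    # Per-token KL(base || abliterated) over next-token distributions on
    # harmless text; a value near zero means the edit barely moved the model.
    p = F.log_softmax(base_model(input_ids).logits, dim=-1).flatten(0, 1)
    q = F.log_softmax(ablit_model(input_ids).logits, dim=-1).flatten(0, 1)
    return F.kl_div(q, p, log_target=True, reduction="batchmean").item()
```

On a 100-prompt set, 3/100 refusals is a 3% rate; whether that generalizes depends entirely on the prompt distribution, which is exactly why the numbers are early signals rather than benchmarks.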
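On the footprint claim, ~4.5 GB for a Q4_K_M quant of a 7B model is in the expected range. A minimal local-run sketch with llama-cpp-python, assuming a GGUF build and a llama.cpp version that supports the Falcon-H1 hybrid architecture; the filename is hypothetical.

```python
from llama_cpp import Llama

# Hypothetical filename; point this at the Q4_K_M GGUF from the post.
llm = Llama(
    model_path="./falcon-h1r-7b-heretic-v2.Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise if memory allows
    n_gpu_layers=-1,  # offload all layers to GPU, or 0 for CPU-only
)

out = llm.create_completion(
    "Summarize common ROP mitigations in modern operating systems.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

Per the last point above, anything run this way belongs in a sandboxed environment with request logging in place.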
// TAGS
falcon-h1r-7b-heretic-v2 · falcon-h1r-7b · llm · reasoning · open-weights · safety · self-hosted
DISCOVERED
2026-03-17
PUBLISHED
2026-03-17
RELEVANCE
8/10
AUTHOR
PhysicsDisastrous462