OBLITERATUS toolkit surgically removes AI refusal guardrails
OBLITERATUS is a mechanistic interpretability toolkit that identifies and mathematically neutralizes "refusal vectors" within Large Language Models. By orthogonalizing model weights against safety directions, it permanently removes corporate guardrails without the performance degradation or "IQ loss" typically associated with uncensored fine-tuning.
Abliteration recasts alignment from a training problem into a geometric subtraction problem, exposing how thin these safety layers really are. The toolkit uses SVD and PCA to isolate refusal directions and project them out of the weight matrices, along the way surfacing training artifacts such as Qwen models identifying themselves as Anthropic creations. It ships 13 intervention methods that preserve core model capability while bypassing the safety layers.
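The core operation described above, projecting a refusal direction out of a weight matrix, can be sketched in a few lines. This is a minimal NumPy illustration of the general directional-ablation idea, not OBLITERATUS code: the difference-of-means direction estimate, the function names, and the synthetic data are all assumptions for demonstration.

```python
import numpy as np

def orthogonalize(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove the component of W's outputs along direction r: W' = (I - r r^T) W."""
    r = r / np.linalg.norm(r)        # normalize the refusal direction
    return W - np.outer(r, r) @ W    # subtract the rank-1 projection onto r

# Estimate a "refusal direction" as the difference of mean activations on
# refused vs. answered prompts (difference-of-means; SVD/PCA variants exist).
# Synthetic stand-in data: the "refused" cluster is shifted along axis 0.
rng = np.random.default_rng(0)
d = 64
refused  = rng.normal(size=(100, d)) + 2.0 * np.eye(d)[0]
answered = rng.normal(size=(100, d))
r = refused.mean(axis=0) - answered.mean(axis=0)
r_hat = r / np.linalg.norm(r)

W = rng.normal(size=(d, d))          # stand-in for a model weight matrix
W_abl = orthogonalize(W, r)

# After orthogonalization the layer can no longer write along r:
print(np.allclose(r_hat @ W_abl, 0.0))  # True
```

Because the edit is a one-shot projection applied directly to the weights rather than a fine-tuning pass, it leaves every direction orthogonal to r untouched, which is the claimed reason the method avoids broad capability loss.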
DISCOVERED
2026-04-07
PUBLISHED
2026-04-06
AUTHOR
Dear-Relationship-39