OPEN_SOURCE
REDDIT // 3d ago · TUTORIAL
Developer uses AI digital twin to tune self
Developer ttkciar shares a unique personal workflow using two custom bash scripts, `actlikettk` and `critique`, to act as a "human reward model." By prompting high-end local LLMs with a 38,000-token corpus of their best writing and analyzing recent Reddit history for logical fallacies, the author uses AI as an objective mirror to align their actual behavior with their peak intellectual performance and style.
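The actual scripts were not published, so the following is only a hedged sketch of what a `critique`-style script could look like; every path, file name, and the runner command are assumptions. The core move is simple: concatenate the best-writing corpus with recent Reddit comments and ask a local model to critique the gap.

```shell
#!/usr/bin/env bash
# Hypothetical sketch (assumed paths and names, not the author's code):
# build a critique prompt from a "best self" corpus plus recent comments,
# then pipe it to a local model runner.

CORPUS="${CORPUS:-$HOME/corpus/best-writing.txt}"     # ~38K-token best-writing sample (assumed path)
RECENT="${RECENT:-$HOME/corpus/recent-comments.txt}"  # recent Reddit history (assumed path)
RUNNER="${RUNNER:-cat}"  # stand-in; swap in a local model CLI such as llama.cpp's

build_prompt() {
  printf 'The corpus below represents my writing at its best:\n\n'
  cat "$CORPUS"
  printf '\n\nThese are my recent Reddit comments:\n\n'
  cat "$RECENT"
  printf '\n\nCritique the recent comments against the corpus: point out logical\n'
  printf 'fallacies, lapses in rigor, and drift in tone. Be blunt; do not flatter.\n'
}

# Only run when both inputs exist, so the script is a no-op otherwise.
if [[ -r "$CORPUS" && -r "$RECENT" ]]; then
  build_prompt | $RUNNER
fi
```

With `RUNNER` left as `cat`, the script just prints the assembled prompt, which makes the prompt itself easy to inspect before pointing it at a real model.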
// ANALYSIS
This is a pioneering example of using LLMs for "AI-mediated self-actualization," leveraging the technology not just for productivity but for personal character and style alignment.
- **A "Digital Twin" for Style:** By providing a model with a massive 38K-token dataset of their best writing, the author creates a simulation of their "idealized self" to use as a benchmark.
- **Anti-Sycophancy Strategy:** Using models like Big-Tiger-Gemma-27B-v3 (known as an anti-sycophancy fine-tune) ensures the AI delivers harsh, objective criticism of the author's real-world actions rather than simple agreement.
- **Human-in-the-Loop RLHF:** The author is effectively performing "Reinforcement Learning from Human Feedback" on themselves, with the AI acting as the reward model that highlights the gap between their actual and ideal behavior.
- **Local LLM Utility:** Demonstrates a high-value use case for local models (Gemma-2, GLM-4), allowing sensitive personal writing history to be processed without privacy concerns.
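The "digital twin" half of the loop can be sketched the same way: a hypothetical `actlikettk`-style wrapper that conditions the model on the corpus and asks it to answer as the idealized self. All names and paths here are assumptions, not the author's published code.

```shell
#!/usr/bin/env bash
# Hypothetical sketch (assumed names/paths): condition a local model on the
# best-writing corpus, then have it respond to a prompt as the "idealized
# self" would.

CORPUS="${CORPUS:-$HOME/corpus/best-writing.txt}"  # assumed corpus path
RUNNER="${RUNNER:-cat}"  # stand-in; swap in a local model CLI
QUESTION="$*"            # prompt passed as command-line arguments

persona_prompt() {
  printf 'The corpus below defines a voice and reasoning style:\n\n'
  cat "$CORPUS"
  printf '\n\nRespond to the following exactly as the author of that corpus\n'
  printf 'would at their best:\n\n%s\n' "$QUESTION"
}

# No-op unless the corpus is readable and a question was given.
if [[ -r "$CORPUS" && -n "$QUESTION" ]]; then
  persona_prompt | $RUNNER
fi
```

Comparing this idealized output against what one actually wrote is what turns the model into the "reward signal" the analysis above describes.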
// TAGS
llm · prompt-engineering · reasoning · automation · cli · actlikettk · critique · gemma
DISCOVERED
2026-04-09
PUBLISHED
2026-04-08
RELEVANCE
6/10
AUTHOR
ttkciar