OPEN_SOURCE
REDDIT // 19d ago · PRODUCT UPDATE
Francesca sharpens companion personality, memory
Francesca is an AI companion built on Qwen3.5-27B, tuned with 35k SFT examples and 46k DPO pairs to keep a steadier personality. After roughly 2,000 real-user conversations, the creator says ranking, memory caps, and guardrails matter more than prompt wording alone, and the app now includes XTTS-v2 voice cloning.
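The summary mentions 46k DPO pairs. As a point of reference, a DPO preference pair is a (prompt, chosen, rejected) triple; the sketch below is a hypothetical example of that structure (the field names follow common convention, and the content is invented, since the actual dataset is not public).

```python
# Hypothetical DPO preference pair: the "chosen" reply is specific and
# in-persona, the "rejected" reply is the generic-therapist style the
# tuning pushes away from. Field names are conventional, not the app's schema.
dpo_pair = {
    "prompt": "I had a rough day at work.",
    "chosen": "Double shifts again? You said the new rota was supposed to fix that.",
    "rejected": "I'm sorry to hear that. It sounds like you're feeling stressed.",
}

# A DPO pair always carries exactly these three fields.
assert set(dpo_pair) == {"prompt", "chosen", "rejected"}
```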
// ANALYSIS
The model is only part of the story here; the moat is the orchestration layer that filters generic replies, controls memory, and catches self-contradictions.
- Generating three candidates and ranking them for crutch phrases is a practical way to keep the persona from collapsing into generic therapist mode.
- The opener experiment is the most interesting product insight: grounded specifics seem to retain users better than vague psychoanalysis.
- Proportional memory with category caps is the right compromise for companion apps; unlimited memory tends to fossilize one user's quirks into the whole persona.
- Self-fact guards are underrated, because tiny mirroring mistakes feel much bigger in intimate chat than they do in ordinary chatbot use.
- Voice cloning plus local inference makes the product feel embodied, but it also raises the consistency bar across every layer.
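The generate-and-rank step from the first bullet can be sketched roughly as follows. This is an assumed implementation, not the app's actual code: the phrase list and scoring are illustrative, showing only the idea of penalizing candidates that drift into generic-therapist language.

```python
# Score each candidate reply by how many "crutch phrases" it contains,
# then keep the candidate with the fewest. Phrase list is illustrative.
CRUTCH_PHRASES = [
    "i'm here for you",
    "it sounds like",
    "i understand how you feel",
    "that must be hard",
]

def crutch_score(reply: str) -> int:
    """Count how many crutch phrases appear in a candidate reply."""
    lowered = reply.lower()
    return sum(phrase in lowered for phrase in CRUTCH_PHRASES)

def pick_reply(candidates: list[str]) -> str:
    """Return the candidate with the fewest crutch phrases (ties keep order)."""
    return min(candidates, key=crutch_score)

candidates = [
    "I understand how you feel. It sounds like a tough week.",
    "That must be hard. I'm here for you.",
    "A whole week of double shifts? No wonder you skipped climbing.",
]
print(pick_reply(candidates))  # prints the specific, in-persona third candidate
```

In practice the ranker could also weigh persona consistency or reply length, but a phrase blacklist alone already filters the worst generic drift.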
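"Proportional memory with category caps" admits a simple sketch: bucket memories by category and bound each bucket, so one topic the user repeats constantly cannot crowd out everything else. The class and cap below are assumptions for illustration, not the app's design.

```python
from collections import deque

class CappedMemory:
    """Per-category memory store with a fixed cap per category (sketch)."""

    def __init__(self, per_category_cap: int = 3):
        self.per_category_cap = per_category_cap
        self.store: dict[str, deque] = {}

    def remember(self, category: str, fact: str) -> None:
        # A full category evicts its oldest fact first (deque maxlen).
        bucket = self.store.setdefault(
            category, deque(maxlen=self.per_category_cap)
        )
        bucket.append(fact)

    def recall(self, category: str) -> list[str]:
        return list(self.store.get(category, []))

mem = CappedMemory(per_category_cap=2)
for i in range(5):
    mem.remember("hobbies", f"fact {i}")
print(mem.recall("hobbies"))  # prints ['fact 3', 'fact 4']
```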
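A self-fact guard can be as simple as rejecting replies that contradict stored persona claims. The sketch below assumes a keyword-contradiction table; a real guard would likely use an NLI model or LLM judge, but the shape of the check is the same.

```python
# Persona facts and terms that would contradict them (illustrative only).
PERSONA_FACTS = {"hometown": "Naples", "pet": "cat"}
CONTRADICTIONS = {"hometown": ["rome", "milan"], "pet": ["dog"]}

def contradicts_persona(reply: str) -> bool:
    """True if the reply mentions a term that conflicts with a persona fact."""
    lowered = reply.lower()
    return any(
        term in lowered
        for terms in CONTRADICTIONS.values()
        for term in terms
    )

print(contradicts_persona("My dog kept me up all night"))  # prints True
print(contradicts_persona("My cat kept me up all night"))  # prints False
```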
// TAGS
francesca · llm · fine-tuning · chatbot · safety · speech
DISCOVERED
2026-03-23
PUBLISHED
2026-03-23
RELEVANCE
7/10
AUTHOR
Crypto_Stoozy