REDDIT // 35d ago · MODEL RELEASE

Cicikuş v2-3B targets 4.5GB inference

Prometech has released Cicikuş v2-3B, a Llama 3.2 3B fine-tune published on Hugging Face and pitched for local inference in roughly 4.5GB of VRAM, with Turkish/English support and a 26.8k-row reasoning dataset. The release leans heavily on proprietary “Behavioral Consciousness Engine” branding, but the concrete developer takeaway is a small reasoning-focused model with published baseline evals and downloadable weights.
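The 4.5GB figure is plausible on the back of an envelope: a quantized 3B model's weights fit well under that budget, with the remainder going to KV cache and activations. A minimal sketch of the arithmetic (the fixed 1.5GB overhead is an assumption for illustration, not a number from the release):

```python
# Back-of-envelope VRAM estimate for an N-billion-parameter model.
# overhead_gb (KV cache, activations, runtime buffers) is an assumed
# constant here; real overhead varies with context length and runtime.
def vram_gb(params_b: float, bits: int, overhead_gb: float = 1.5) -> float:
    weights_gb = params_b * 1e9 * bits / 8 / 1024**3
    return round(weights_gb + overhead_gb, 2)

print(vram_gb(3.0, 4))   # 4-bit weights: 2.9 GB, comfortably under 4.5GB
print(vram_gb(3.0, 16))  # fp16 weights: 7.09 GB, over the stated target
```

The comparison suggests the 4.5GB target implies quantized (or partially offloaded) inference rather than full-precision weights.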

// ANALYSIS

This is the kind of niche local-model drop that gets attention because the hardware target is realistic, even if the marketing copy is doing far more work than the benchmarks.

  • The practical hook is simple: a 3B model that aims to stay usable on modest local hardware instead of demanding 24GB+ GPUs.
  • Under the hood, this looks more like a tuned Llama 3.2 3B stack than a fundamentally new architecture, with Unsloth, QLoRA, and SFT-style training doing the heavy lifting.
  • Prometech publishes baseline scores such as MMLU 58.7%, BBH 48.5%, GSM8K 40%, and MBPP 50%, which gives developers something concrete to inspect even if the “consciousness” framing is unverifiable.
  • The bilingual Turkish/English angle is more interesting than the sci-fi branding because that can matter for regional and edge deployments.
  • The biggest catch is licensing: the model is publicly downloadable, but the “license: other” and commercial-use restrictions make it harder to treat as a default open model for production work.
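The QLoRA point above is worth quantifying: low-rank adapters train only a tiny fraction of a 3B model's weights, which is what makes this kind of fine-tune feasible on modest hardware. A rough count, assuming rank-16 adapters on two square attention projections per layer (layer count, hidden size, and target modules are illustrative assumptions, not details from the release):

```python
# Approximate trainable-parameter count for rank-r LoRA adapters.
# Assumes square d x d target matrices for simplicity; each adapted
# matrix adds two low-rank factors of d*r parameters apiece.
def lora_params(layers: int = 28, hidden: int = 3072,
                rank: int = 16, targets_per_layer: int = 2) -> int:
    per_matrix = 2 * hidden * rank          # A (r x d) + B (d x r)
    return layers * targets_per_layer * per_matrix

trainable = lora_params()
print(f"{trainable / 1e6:.1f}M trainable vs ~3000M base "
      f"({trainable / 3e9:.2%} of weights)")
```

Under these assumptions the adapters come to a few million parameters, well under 1% of the base model, which is consistent with the bullet's claim that tooling like Unsloth and QLoRA, rather than a new architecture, is doing the heavy lifting.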
// TAGS
cicikus-v2-3b · llm · reasoning · edge-ai · fine-tuning · open-weights

DISCOVERED
2026-03-07 (35d ago)

PUBLISHED
2026-03-07 (36d ago)

RELEVANCE
7/10

AUTHOR
Connect-Bid9700