REDDIT // 5h ago · OPEN-SOURCE RELEASE

Mortar ships local FIM autocomplete

Mortar is a minimal VS Code extension that provides ghost-text code autocomplete against llama.cpp or OpenAI-compatible servers. It prefers the native llama.cpp `/infill` endpoint, falls back to `/v1/completions`, and deliberately omits chat, embeddings, and agent features.
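The two request shapes differ mainly in where the cursor context goes. A minimal sketch of a native `/infill` body, assuming the field names from the llama.cpp server API (`input_prefix`, `input_suffix`, `n_predict`); the parameter values are illustrative, not Mortar's actual defaults:

```typescript
// Sketch: constructing a llama.cpp /infill request body for ghost-text
// completion. The server fills in the text between prefix and suffix.
interface InfillRequest {
  input_prefix: string; // code before the cursor
  input_suffix: string; // code after the cursor
  n_predict: number;    // cap generated tokens to keep latency low
  temperature: number;  // low temperature favors deterministic completions
  stream: boolean;
}

function buildInfillBody(prefix: string, suffix: string): InfillRequest {
  return {
    input_prefix: prefix,
    input_suffix: suffix,
    n_predict: 64,
    temperature: 0.2,
    stream: false,
  };
}

// Example: completing the body of a function around the cursor.
const body = buildInfillBody("function add(a, b) {\n  return ", ";\n}");
console.log(JSON.stringify(body));
```

The extension would POST this JSON to the server's `/infill` route; because prefix and suffix are separate fields, no model-specific prompt template is needed on the client side.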

// ANALYSIS

Mortar is small by design, which is exactly the point: local-code-completion users often need less product surface, not another full AI IDE.

  • The setup flow is built around endpoint and model selection, making it a cleaner companion for `llama-server` and `llama-swap`.
  • Native `/infill` support matters because FIM-capable coding models can use prefix and suffix context without prompt-template guesswork.
  • The OpenAI-compatible fallback broadens backend support while preserving the same autocomplete-only UX.
  • This is early and tiny, with a v0.1.0 release and minimal community signal, but it targets a real pain point in local AI coding workflows.
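The `/v1/completions` fallback illustrates the "prompt-template guesswork" the native route avoids: without separate prefix/suffix fields, the client must flatten both into one prompt using the model's own FIM sentinel tokens. A hedged sketch, assuming the Qwen-style sentinels (`<|fim_prefix|>`, `<|fim_suffix|>`, `<|fim_middle|>`); other model families use different tokens, and this is not necessarily how Mortar formats its fallback:

```typescript
// Sketch: flattening fill-in-the-middle context into a single prompt
// for an OpenAI-compatible /v1/completions endpoint. Sentinel tokens
// are model-specific; the ones below follow the Qwen convention.
function buildFimPrompt(prefix: string, suffix: string): string {
  return `<|fim_prefix|>${prefix}<|fim_suffix|>${suffix}<|fim_middle|>`;
}

// The resulting string would be sent as the `prompt` field, with the
// completion expected to be the "middle" text at the cursor.
const prompt = buildFimPrompt("def add(a, b):\n    return ", "\n");
console.log(prompt);
```

Because the sentinel vocabulary varies per model, a client taking this path has to know which template the loaded model expects, which is exactly the coupling the native `/infill` route removes.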
// TAGS
mortar · ide · ai-coding · inference · open-source · self-hosted · llm

DISCOVERED

2026-04-22 (5h ago)

PUBLISHED

2026-04-21 (6h ago)

RELEVANCE

7 / 10

AUTHOR

xhimaros