OPEN_SOURCE
REDDIT // 22d ago · PRODUCT LAUNCH
MacinAI Local runs TinyLlama on PowerBook G4
MacinAI Local is a custom C89 LLM inference platform for classic Macintosh hardware, running on a 2002 PowerBook G4 with Mac OS 9. It supports GPT-2, TinyLlama, Qwen, SmolLM, and a custom 100M model, with disk paging and AppleScript-based system control.
// ANALYSIS
This is less a nostalgia stunt than a real systems demo: the impressive part is not just getting TinyLlama to boot on a G4 but stitching together a full local AI stack for classic Mac OS. The tradeoff is obvious, though: the architecture is clever enough to work, yet slow enough that the result feels like a technical proof rather than an everyday assistant.
- The from-scratch C89 engine and export pipeline make this model-agnostic, which is far more interesting than the usual one-off retro LLM port.
- Disk paging is the enabling trick for 1.1B parameters on 1GB of RAM, but the 9.9 seconds-per-token figure shows how quickly practicality falls off.
- The AltiVec work is the real engineering flex here, especially since the project uncovered a CodeWarrior compiler bug along the way.
- AppleScript generation plus confirmation prompts gives the project a genuine agentic angle, not just text completion on old iron.
- Compared with earlier retro-AI demos, this reads as a platform with a UI, tokenizer, memory model, and automation layer, not a novelty benchmark.
// TAGS
llm · inference · self-hosted · automation · speech · agent · macinai-local
DISCOVERED
2026-03-20
PUBLISHED
2026-03-20
RELEVANCE
9/10
AUTHOR
SDogAlex