Google launches Gemma 4 open multimodal models
OPEN_SOURCE
REDDIT // 9d ago // MODEL RELEASE


Google has released the Gemma 4 family of open-weight models under the Apache 2.0 license, in sizes from 2B to 31B parameters. The models offer native multimodality, advanced reasoning capabilities, and large context windows, with variants tailored for both high-end hardware and efficient edge execution.

// ANALYSIS

Google is pushing hard into the local AI space, providing powerful tools that bypass cloud dependency and latency.

  • Native multimodality (including video, image, and native audio on edge models) combined with agentic workflow support makes these ideal for building autonomous, on-device agents.
  • The 26B Mixture of Experts (MoE) model activates only 3.8 billion parameters per token, keeping inference fast while retaining workstation-class quality.
  • Delivering 128K context windows on the smaller E2B and E4B edge models is a massive leap for processing large documents on mobile devices and IoT hardware.
  • The permissive Apache 2.0 license signals a continuing commitment to the open-source developer community to counter closed-source alternatives.
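
The headline MoE and context-window numbers above can be sanity-checked with simple arithmetic. In the sketch below, the 26B/3.8B figures come from the announcement, but the KV-cache shape (layer count, KV heads, head dimension) is a hypothetical configuration chosen for illustration, not a published spec:

```python
# Back-of-envelope numbers for the figures quoted above.
# The MoE sizes are from the release notes; the cache shape is assumed.

def active_fraction(total_params_b: float, active_params_b: float) -> float:
    """Share of weights a Mixture-of-Experts model uses per token."""
    return active_params_b / total_params_b

# The 26B MoE model reportedly activates ~3.8B parameters per token:
frac = active_fraction(26.0, 3.8)
print(f"Active per token: {frac:.1%}")  # → Active per token: 14.6%

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 ctx_len: int, bytes_per_value: int = 2) -> float:
    """Memory for the K and V caches at full context, in GiB (fp16)."""
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_value / 2**30

# Hypothetical edge-model shape: 30 layers, 4 KV heads, head dim 128,
# showing why a 128K window is a real engineering feat on mobile hardware:
print(f"KV cache at 128K: {kv_cache_gib(30, 4, 128, 128 * 1024):.1f} GiB")
# → KV cache at 128K: 7.5 GiB
```

Even with grouped-query attention shrinking the KV heads, a full 128K cache runs to several GiB, which is why long context on E2B/E4B-class devices is notable.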
// TAGS
gemma-4 · open-weights · multimodal · edge-ai · agent · inference · llm

DISCOVERED

9d ago

2026-04-02

PUBLISHED

9d ago

2026-04-02

RELEVANCE

10 / 10

AUTHOR

jferments