Google DeepMind solicits feedback on next Gemma models
Following the launch of Gemma 4, Google DeepMind's Omar Sanseviero is polling the developer community to define the next milestone for the open-weight model family.
The "Gemma 4 era" marks a strategic pivot toward agent-first open models, but the community is already pushing for both larger and more efficient variants. Demand remains high for a 124B MoE model to rival top-tier proprietary models in reasoning and complex multi-step task handling. Developers are calling for more reliable native tool calling and function calling to support autonomous, proactive local agents. 1-bit quantization and context windows beyond 256K are also top priorities for memory-constrained edge environments, while the shift to the Apache 2.0 license has accelerated enterprise adoption.
DISCOVERED
2026-04-21
PUBLISHED
2026-04-21
AUTHOR
jacek2023