Google launches Gemma 4 open models with 140 languages and 400 million downloads

Google DeepMind launched Gemma 4 on Wednesday, April 1.

Google describes it as its most intelligent open model yet, designed for advanced reasoning and agentic workflows and released under the permissive Apache 2.0 license.

Google introduced four sizes: effective 2B (E2B), effective 4B (E4B), a 26B mixture-of-experts (MoE) model, and a 31B dense model.

At launch, the 31B model ranks as the third-best open model globally on the LMArena text leaderboard.

Additionally, Google reports that the 26B model ranks sixth, beating out models 20 times its size.

In the official blog post, Google DeepMind’s VP of Research wrote: “Gemma 4 delivers an unprecedented level of intelligence per parameter.”

Since the first Gemma model was released, the family has been downloaded over 400 million times, spawning a “Gemmaverse” of more than 100,000 variants.

The new models support native function calling, structured JSON output, and system instructions, enabling developers to build autonomous agents that interact with tools and APIs.
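For developers who want to try this locally, here is a minimal sketch of structured JSON output through Ollama’s REST API. The /api/chat endpoint and its format option are standard Ollama features; the model tag gemma4 is an assumption, since the announcement does not state the official tag.

```python
# Minimal sketch: structured JSON output from a locally running Ollama server.
# Assumption: the model is published under the tag "gemma4" (check `ollama list`
# for the real tag before running).
import json
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gemma4",   # hypothetical tag for the new release
        "format": "json",    # ask Ollama to constrain the reply to valid JSON
        "stream": False,
        "messages": [
            {"role": "system", "content": "Reply only with JSON."},
            {
                "role": "user",
                "content": "Extract the city and date from: "
                           "'The launch event is in Paris on June 3.'",
            },
        ],
    },
    timeout=120,
)
reply = response.json()["message"]["content"]
print(json.loads(reply))  # e.g. {"city": "Paris", "date": "June 3"}
```

Constraining the output to JSON this way makes the reply safe to parse programmatically, which is the basic building block for the agentic workflows described above.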

All four models natively process video, images, and text; the E2B and E4B models additionally accept native audio input for speech recognition.

The models support over 140 languages and provide context windows of up to 256,000 tokens on the larger sizes, allowing developers to process entire code repositories or long documents in a single prompt.
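As a rough illustration of what that long context window enables, the sketch below sends an entire source file to the model in one prompt, again assuming a local Ollama server and the hypothetical gemma4 tag; the usable context length on any given machine depends on available memory.

```python
# Sketch: feeding a whole file into a single prompt to exploit the long
# context window. Assumptions: local Ollama server, hypothetical "gemma4" tag.
from pathlib import Path
import requests

source = Path("src/main.py").read_text()  # any long document or code file

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma4",
        "stream": False,
        "prompt": f"Summarize what this module does:\n\n{source}",
        # Request a large context; the window actually honored depends on
        # the model build and available RAM/VRAM.
        "options": {"num_ctx": 256000},
    },
    timeout=600,
)
print(response.json()["response"])
```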

The edge-centric E2B and E4B models are optimized for mobile and IoT devices and run completely offline on phones, Raspberry Pi boards, and NVIDIA’s Jetson Orin Nano with near-zero latency. Google has worked with Qualcomm and MediaTek on mobile optimizations in collaboration with the Pixel team.

Users can access the models on Hugging Face, Kaggle, Ollama, and Google AI Studio.
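For those going the Hugging Face route, loading the weights with the transformers library could look roughly like this; the repository id google/gemma-4-e2b-it is a guess modeled on Google’s earlier Gemma naming and should be checked against the actual Hub listing.

```python
# Sketch: loading the weights from Hugging Face with transformers.
# Assumption: the repo id "google/gemma-4-e2b-it" is hypothetical.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-4-e2b-it",  # hypothetical repo id
    device_map="auto",              # spread the model across available devices
)
out = generator(
    [{"role": "user", "content": "Give one sentence about open models."}],
    max_new_tokens=64,
)
# Chat-style input returns the full conversation; the last turn is the reply.
print(out[0]["generated_text"][-1]["content"])
```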
