Google
2026-03-04

Google Launches Efficient Inference Model Gemini 3.1 Flash-Lite

Summary

Google released Gemini 3.1 Flash-Lite, optimized for high-frequency workloads with 2.5x faster first-token response and 45% higher output speed. Available via AI Studio and Vertex AI, it offers adjustable thinking depth for scalable AI applications such as translation and content moderation.

Key Takeaways

Google launched Gemini 3.1 Flash-Lite as the fastest and most cost-effective model in the Gemini 3 series, priced at $0.25 per million input tokens and $1.50 per million output tokens.

Benchmarks show a 2.5x improvement in time to first token and 45% higher output speed, with the model outperforming its predecessors on GPQA Diamond (86.9%) and MMMU Pro (76.8%).

The model supports adjustable thinking depth and is already used by early testers such as Latitude and Cartwheel on large-scale, complex problems.
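At the published rates of $0.25 per million input tokens and $1.50 per million output tokens, per-request cost is simple arithmetic. A minimal sketch (the helper name and example token counts are illustrative, not from the source):

```python
# List prices for Gemini 3.1 Flash-Lite, per the launch announcement.
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request at list prices."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a content-moderation call with 2,000 input tokens
# and 500 output tokens costs a fraction of a cent.
print(f"${request_cost(2_000, 500):.6f}")  # $0.001250
```

At these prices even a million such calls per day stays in the low thousands of dollars, which is the economics behind positioning the model for high-frequency workloads like translation and moderation.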

Why It Matters

The launch promotes large-scale deployment of AI applications...

Source: Google Blog