Google
2026-03-11

Google Releases Native Multimodal Embedding Model Gemini Embedding 2

Summary

Google DeepMind has launched its first native multimodal embedding model, built on the Gemini architecture, which maps text, images, video, audio, and documents into a unified embedding space. It incorporates Matryoshka Representation Learning (MRL) for dynamic dimension scaling, letting users trade storage against performance while improving cross-modal semantic understanding.

Key Takeaways

Google has released Gemini Embedding 2, its first native multimodal embedding model built on the Gemini architecture, now in public preview. It maps text, images, video, audio, and documents into a unified embedding space, capturing semantic intent across 100+ languages.

Technical specs include up to 8192 input tokens for text; up to 6 images (PNG/JPEG) or 120 seconds of video (MP4/MOV) per request; native audio embedding without transcription; and direct handling of PDFs up to 6 pages. A key innovation is native support for interleaved inputs that mix modalities (e.g., image+text), capturing complex cross-media relationships.
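The per-request limits above can be captured in a small client-side check. This is a hypothetical helper, not part of any official SDK; the names are invented here, the numeric limits are taken from the figures quoted above, and the actual API may enforce different or additional constraints:

```python
# Hypothetical client-side validation of the per-request limits
# quoted above; the real API may enforce different constraints.
from dataclasses import dataclass

MAX_TEXT_TOKENS = 8192    # text input limit
MAX_IMAGES = 6            # PNG/JPEG images per request
MAX_VIDEO_SECONDS = 120   # MP4/MOV duration limit
MAX_PDF_PAGES = 6         # pages per PDF

@dataclass
class EmbedRequest:
    text_tokens: int = 0
    images: int = 0
    video_seconds: float = 0.0
    pdf_pages: int = 0

def validate(req: EmbedRequest) -> list[str]:
    """Return a list of limit violations (empty if the request is OK)."""
    errors = []
    if req.text_tokens > MAX_TEXT_TOKENS:
        errors.append(f"text exceeds {MAX_TEXT_TOKENS} tokens")
    if req.images > MAX_IMAGES:
        errors.append(f"more than {MAX_IMAGES} images")
    if req.video_seconds > MAX_VIDEO_SECONDS:
        errors.append(f"video longer than {MAX_VIDEO_SECONDS}s")
    if req.pdf_pages > MAX_PDF_PAGES:
        errors.append(f"PDF longer than {MAX_PDF_PAGES} pages")
    return errors
```

A request mixing, say, two images with a short text prompt passes; a 300-second video would be rejected before the API call is made.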

The model integrates Matryoshka Representation Learning (MRL), allowing output vectors to be scaled down from the default 3072 dimensions to smaller sizes (3072, 1536, and 768 are recommended) to trade storage against performance. Google claims it outperforms leading models on text, image, and video tasks, setting new benchmarks for multimodal depth.
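MRL trains the model so that prefixes of the full embedding are themselves usable embeddings: a smaller vector is obtained by truncating and re-normalizing, rather than re-embedding the input. A minimal sketch of that consumer-side step, assuming the 3072/1536/768 sizes recommended above (the vectors here are random stand-ins, not real model output):

```python
import numpy as np

def shrink(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Truncate an MRL embedding to its first `dim` values and
    re-normalize to unit length so cosine similarity still works."""
    v = embedding[:dim]
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Both inputs are unit-normalized, so the dot product is the cosine.
    return float(np.dot(a, b))

# Random stand-ins for two full-size (3072-dim) model outputs.
rng = np.random.default_rng(0)
full_a = rng.standard_normal(3072)
full_b = rng.standard_normal(3072)

# Compare the same pair at each recommended output size.
for dim in (3072, 1536, 768):
    a, b = shrink(full_a, dim), shrink(full_b, dim)
    print(dim, round(cosine(a, b), 4))
```

The storage saving is linear in the dimension cut (768-dim vectors occupy a quarter of the space of 3072-dim ones), with MRL training keeping the quality loss at smaller sizes modest.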

Why It Matters

The launch may accelerate enterprise deployment of cross-modal applications and reshape competition in the AI ecosystem.

Source: Google Blog