Gemini Embedding 2
Gemini Embedding models, including the newer Gemini Embedding 2, are part of Google’s Gemini AI ecosystem and convert text, phrases, sentences, and code into dense numerical vectors that capture semantic meaning. Unlike generative models, which produce new content, an embedding model maps its input into this vector space so that systems can compare and analyze information by conceptual similarity rather than exact wording. These embeddings power applications such as semantic search, recommendation systems, document retrieval, clustering, classification, and retrieval-augmented generation pipelines. The model handles more than 100 languages and accepts up to 2048 tokens per request, allowing it to embed longer passages of text or code while maintaining strong contextual understanding.
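A minimal sketch of the similarity use case described above, assuming the `google-genai` Python SDK and the `gemini-embedding-001` model ID (Gemini Embedding 2 may expose a different model name; the snippet texts are illustrative):

```python
# Embed two snippets with the Gemini API and compare them by cosine similarity.
# Assumes the `google-genai` SDK; `gemini-embedding-001` is the model ID
# mentioned later in this section and stands in for whichever embedding model
# you have access to.
import numpy as np
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

texts = [
    "How do I reverse a list in Python?",
    "my_list[::-1] returns a reversed copy of the list.",
]

result = client.models.embed_content(
    model="gemini-embedding-001",  # assumed model ID; check the model catalog
    contents=texts,
)

vectors = np.array([e.values for e in result.embeddings])

# Cosine similarity: higher means the two inputs are closer in meaning,
# regardless of exact wording.
a, b = vectors
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {similarity:.3f}")
```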
Learn more
voyage-code-3
Voyage AI introduces voyage-code-3, a next-generation embedding model optimized for code retrieval. It outperforms OpenAI-v3-large and CodeSage-large by averages of 13.80% and 16.81%, respectively, on a suite of 32 code retrieval datasets. It supports embeddings of 2048, 1024, 512, and 256 dimensions and offers multiple embedding quantization options, including float (32-bit), int8 (8-bit signed integer), uint8 (8-bit unsigned integer), binary (bit-packed int8), and ubinary (bit-packed uint8). With a 32K-token context length, it surpasses OpenAI's 8K and CodeSage-large's 1K context lengths. Voyage-code-3 employs Matryoshka learning to create embeddings with a nested family of shorter representations within a single vector. This allows users to vectorize documents into a 2048-dimensional vector and later use truncated versions (e.g., 256, 512, or 1024 dimensions) without re-invoking the embedding model.
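A minimal sketch of that Matryoshka property, assuming the `voyageai` Python client and its `output_dimension` parameter (parameter names may differ; the code snippets are placeholders): embed once at the full 2048 dimensions, then truncate and renormalize locally instead of calling the model again.

```python
# Embed code snippets at full size, then derive shorter vectors locally.
# Assumes the `voyageai` client; `output_dimension` is an assumed parameter name.
import numpy as np
import voyageai

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment

snippets = [
    "def binary_search(arr, target): ...",
    "fn quicksort<T: Ord>(v: &mut [T]) { ... }",
]

full = vo.embed(
    snippets,
    model="voyage-code-3",
    input_type="document",
    output_dimension=2048,  # full-size vectors
).embeddings

def truncate(vec, dim):
    """Keep the first `dim` components and re-normalize to unit length."""
    head = np.asarray(vec[:dim], dtype=np.float32)
    return head / np.linalg.norm(head)

# 8x smaller vectors, no second API call.
short_vectors = [truncate(v, 256) for v in full]
print(short_vectors[0].shape)  # (256,)
```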
Learn more
Gemini Embedding
Gemini Embedding’s first text model (gemini-embedding-001) is now generally available via the Gemini API and Gemini Enterprise Agent Platform. It has held a top spot on the Massive Text Embedding Benchmark Multilingual leaderboard since its experimental launch in March, thanks to superior performance on retrieval, classification, and other embedding tasks compared with both legacy Google models and external proprietary models. Exceptionally versatile, it supports over 100 languages with a 2,048-token input limit and employs the Matryoshka Representation Learning (MRL) technique, letting developers choose output dimensions of 3072, 1536, or 768 to balance quality, performance, and storage efficiency.
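A minimal sketch of choosing a smaller output dimension, assuming the `google-genai` SDK and its `EmbedContentConfig(output_dimensionality=...)` option (the option name is an assumption and may differ in your SDK version):

```python
# Request a reduced 768-dimensional embedding from gemini-embedding-001
# instead of the default 3072 dimensions.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.embed_content(
    model="gemini-embedding-001",
    contents="Embeddings map text to vectors that preserve meaning.",
    config=types.EmbedContentConfig(output_dimensionality=768),  # assumed option
)

embedding = response.embeddings[0].values
print(len(embedding))  # 768: smaller to store and faster to compare
```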
Learn more
voyage-3-large
Voyage AI has unveiled voyage-3-large, a cutting-edge general-purpose and multilingual embedding model that leads across eight evaluated domains, including law, finance, and code, outperforming OpenAI-v3-large and Cohere-v3-English by averages of 9.74% and 20.71%, respectively. Enabled by Matryoshka learning and quantization-aware training, it supports embeddings of 2048, 1024, 512, and 256 dimensions, along with multiple quantization options (32-bit floating point, signed and unsigned 8-bit integer, and binary precision), significantly reducing vector database costs with minimal impact on retrieval quality. Notably, voyage-3-large offers a 32K-token context length, surpassing OpenAI's 8K and Cohere's 512 tokens. Evaluations across 100 datasets in diverse domains confirm its lead, with the flexible precision and dimensionality options delivering substantial storage savings without compromising quality.
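A back-of-the-envelope sketch of the storage arithmetic behind those precision options. The scaling and thresholding below are a generic illustration of int8 and binary quantization, not Voyage's exact quantization recipe:

```python
# Compare per-vector storage for float32, int8, and bit-packed binary precision.
import numpy as np

dim = 2048
rng = np.random.default_rng(0)
vec = rng.standard_normal(dim).astype(np.float32)  # stand-in for one embedding

float32_bytes = vec.nbytes                                          # 2048 * 4 = 8192 bytes
int8_bytes = np.clip(vec * 127, -128, 127).astype(np.int8).nbytes   # 2048 bytes
binary_bytes = np.packbits(vec > 0).nbytes                          # 2048 / 8 = 256 bytes

print(f"float32: {float32_bytes} B, int8: {int8_bytes} B, binary: {binary_bytes} B")
# A 256-dimensional binary vector would be only 32 bytes: a 256x reduction
# versus the full 2048-dimensional float32 representation.
```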
Learn more