Compare the Top Embedding Models for Cloud as of December 2025

  • 1
    Voyage AI
    Voyage AI delivers state-of-the-art embedding and reranking models that supercharge intelligent retrieval for enterprises, driving forward retrieval-augmented generation and reliable LLM applications. The models are available through all major clouds and data platforms, offered as SaaS or deployed in the customer's own tenant (in-VPC). Our solutions are designed to optimize the way businesses access and utilize information, making retrieval faster, more accurate, and scalable. Built by academic experts from Stanford, MIT, and UC Berkeley, alongside industry professionals from Google, Meta, Uber, and other leading companies, our team develops transformative AI solutions tailored to enterprise needs. We are committed to pushing the boundaries of AI innovation and delivering impactful technologies for businesses. Contact us for custom or on-premise deployments as well as model licensing. Getting started is easy, with pay-as-you-go, consumption-based pricing.
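
    A minimal usage sketch with the voyageai Python client is shown below; the model name and input texts are illustrative, not a prescribed configuration:

    ```python
    # pip install voyageai
    import voyageai

    # The client reads the API key from the VOYAGE_API_KEY environment variable.
    vo = voyageai.Client()

    docs = [
        "Voyage AI builds embedding and reranking models for retrieval.",
        "Retrieval-augmented generation grounds LLM answers in enterprise data.",
    ]

    # Illustrative model name; billing is consumption-based (per token).
    result = vo.embed(docs, model="voyage-3", input_type="document")
    print(len(result.embeddings), len(result.embeddings[0]))
    ```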
  • 2
    Mixedbread
    Mixedbread is a fully managed AI search engine for building production-ready AI search and Retrieval-Augmented Generation (RAG) applications. It offers a complete AI search stack, including vector stores, embedding and reranking models, and document parsing, so users can transform raw data into intelligent search experiences that power AI agents, chatbots, and knowledge systems without the usual infrastructure complexity. It integrates with tools like Google Drive, SharePoint, Notion, and Slack, and its vector stores enable users to build production search engines in minutes, with support for over 100 languages. Mixedbread's embedding and reranking models have surpassed 50 million downloads and outperform OpenAI's embedding models on semantic search and RAG tasks while remaining open source and cost-effective. The document parser extracts text, tables, and layouts from PDFs, images, and complex documents, providing clean, AI-ready content without manual preprocessing.
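
    On the open-source side, here is a minimal sketch loading one of Mixedbread's public embedding checkpoints with sentence-transformers; the model id and the retrieval prompt come from the public model card and should be treated as assumptions:

    ```python
    # pip install sentence-transformers
    from sentence_transformers import SentenceTransformer

    # Public Mixedbread checkpoint on Hugging Face (assumed id).
    model = SentenceTransformer("mixedbread-ai/mxbai-embed-large-v1")

    passages = [
        "Mixedbread offers vector stores, embedding models, and reranking.",
        "The document parser extracts text and tables from PDFs.",
    ]
    # The model card recommends prefixing queries with a retrieval prompt.
    query = "Represent this sentence for searching relevant passages: how do I parse PDFs?"

    query_emb = model.encode(query)
    passage_embs = model.encode(passages)

    # Rank passages by cosine similarity to the query.
    print(model.similarity(query_emb, passage_embs))
    ```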
  • 3
    NVIDIA NeMo Retriever
    NVIDIA NeMo Retriever is a collection of microservices for building multimodal extraction, reranking, and embedding pipelines with high accuracy and maximum data privacy. It delivers quick, context-aware responses for AI applications such as advanced retrieval-augmented generation (RAG) and agentic AI workflows. As part of the NVIDIA NeMo platform and built with NVIDIA NIM, NeMo Retriever lets developers flexibly combine these microservices to connect AI applications to large enterprise datasets wherever they reside, and to fine-tune the models for specific use cases. NeMo Retriever provides the components for building data extraction and information retrieval pipelines: the pipeline extracts structured and unstructured data (e.g., text, charts, tables), converts it to text, and filters out duplicates. A NeMo Retriever embedding NIM then converts the chunks into embeddings and stores them in a vector database, accelerated by NVIDIA cuVS for enhanced indexing performance and speed.
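
    NIM microservices expose an OpenAI-compatible REST API, so an embedding NIM can be called with the standard openai client. A sketch under assumptions: the local endpoint URL, the model id, and the input_type field are illustrative and depend on the deployment:

    ```python
    # pip install openai
    from openai import OpenAI

    # Assumed self-hosted embedding NIM endpoint; adjust to your deployment.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

    resp = client.embeddings.create(
        model="nvidia/nv-embedqa-e5-v5",  # illustrative embedding NIM model id
        input=["NeMo Retriever extracts text, charts, and tables from documents."],
        # Retrieval embedders distinguish passages from queries (assumed field).
        extra_body={"input_type": "passage"},
    )
    vector = resp.data[0].embedding
    print(len(vector))
    ```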
  • 4
    Codestral Embed
    Codestral Embed is Mistral AI's first embedding model, specialized for code and optimized for high-performance code retrieval and semantic understanding. It significantly outperforms leading code embedders on the market today, such as Voyage Code 3, Cohere Embed v4.0, and OpenAI's large embedding model. Codestral Embed can output embeddings at different dimensions and precisions; for instance, at a dimension of 256 with int8 precision, it still outperforms competing models. The dimensions of the embeddings are ordered by relevance, allowing users to keep just the first n dimensions for a smooth trade-off between quality and cost. It excels at retrieval over real-world code data, particularly on benchmarks such as SWE-Bench, built from real-world GitHub issues and their corresponding fixes, and Text2Code (GitHub), which is relevant for providing context for code completion or editing.
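
    Because the dimensions are ordered by relevance, a client can request a full embedding once, then keep just the first n components and renormalize. A sketch with the mistralai Python client; the truncation step is generic, and the model id is taken from Mistral's announcement:

    ```python
    # pip install mistralai numpy
    import numpy as np
    from mistralai import Mistral

    client = Mistral(api_key="YOUR_API_KEY")

    resp = client.embeddings.create(
        model="codestral-embed",
        inputs=["def binary_search(xs, target): ..."],
    )
    full = np.asarray(resp.data[0].embedding)

    # Dimensions are ordered by relevance: keep the first 256 and
    # renormalize for a cheaper index with a modest quality trade-off.
    short = full[:256]
    short = short / np.linalg.norm(short)
    print(full.shape, short.shape)
    ```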
  • 5
  • 5
    voyage-code-3
    Voyage AI introduces voyage-code-3, a next-generation embedding model optimized for code retrieval. It outperforms OpenAI-v3-large and CodeSage-large by averages of 13.80% and 16.81%, respectively, across a suite of 32 code retrieval datasets. It supports embeddings of 2048, 1024, 512, and 256 dimensions and offers multiple quantization options: float (32-bit), int8 (8-bit signed integer), uint8 (8-bit unsigned integer), binary (bit-packed int8), and ubinary (bit-packed uint8). Its 32K-token context length surpasses OpenAI's 8K and CodeSage Large's 1K. voyage-code-3 employs Matryoshka learning to create embeddings with a nested family of lengths within a single vector, so users can vectorize documents into 2048-dimensional vectors and later use shorter prefixes (e.g., 256, 512, or 1024 dimensions) without re-invoking the embedding model.
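
    The nested structure means a corpus can be embedded once at 2048 dimensions and compared later at shorter lengths without re-calling the model. A sketch, assuming the client's output_dimension parameter and illustrative inputs:

    ```python
    # pip install voyageai numpy
    import numpy as np
    import voyageai

    vo = voyageai.Client()  # reads VOYAGE_API_KEY

    res = vo.embed(
        ["def quicksort(xs): ...", "class LRUCache: ..."],
        model="voyage-code-3",
        output_dimension=2048,  # embed once at full length (assumed parameter)
    )
    full = np.asarray(res.embeddings)

    # Matryoshka nesting: the first k dimensions form a valid shorter
    # embedding. Slice and renormalize instead of re-embedding.
    for k in (256, 512, 1024):
        short = full[:, :k]
        short = short / np.linalg.norm(short, axis=1, keepdims=True)
        print(k, short.shape)
    ```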
  • 6
    EmbeddingGemma
    EmbeddingGemma is a 308-million-parameter multilingual text embedding model, lightweight yet powerful, optimized to run entirely on everyday devices such as phones, laptops, and tablets, enabling fast, offline embedding generation that protects user privacy. Built on the Gemma 3 architecture, it supports over 100 languages, processes up to 2,000 input tokens, and leverages Matryoshka Representation Learning (MRL) to offer flexible embedding dimensions (768, 512, 256, or 128) for tailored trade-offs among speed, storage, and precision. Its GPU- and EdgeTPU-accelerated inference delivers embeddings in milliseconds (under 15 ms for 256 tokens on EdgeTPU), while quantization-aware training keeps memory usage under 200 MB without compromising quality. This makes it ideal for real-time, on-device tasks such as semantic search, retrieval-augmented generation (RAG), classification, clustering, and similarity detection, whether for personal file search, mobile chatbots, or custom domain use.
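
    A sketch of local use via sentence-transformers, truncating to 256 dimensions through MRL; the Hugging Face model id is assumed:

    ```python
    # pip install sentence-transformers
    from sentence_transformers import SentenceTransformer

    # Assumed Hugging Face id for the 308M-parameter model. Runs locally,
    # so no text leaves the device; truncate_dim uses the MRL property.
    model = SentenceTransformer("google/embeddinggemma-300m", truncate_dim=256)

    docs = ["Meeting notes from Tuesday.", "Receipt for the laptop purchase."]
    doc_embs = model.encode(docs)  # shape: (2, 256)
    query_emb = model.encode("find my laptop receipt")

    print(model.similarity(query_emb, doc_embs))
    ```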
  • 7
  • 7
    Llama
    Llama (Large Language Model Meta AI) is a state-of-the-art foundational large language model designed to help researchers advance their work in large language modeling. Smaller, more performant models such as Llama enable researchers who don't have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field. Training smaller foundation models like Llama is desirable because it requires far less computing power and resources to test new approaches, validate others' work, and explore new use cases. Foundation models train on a large set of unlabeled data, which makes them ideal for fine-tuning on a variety of tasks. We are making Llama available at several sizes (7B, 13B, 33B, and 65B parameters) and also sharing a Llama model card that details how we built the model, in keeping with our approach to Responsible AI practices.