Showing 85 open source projects for "encoder"

  • 1
    Ministral 3 14B Instruct 2512

    Efficient 14B multimodal instruct model with edge deployment and FP8

Ministral 3 14B Instruct 2512 is the largest model in the Ministral 3 family, delivering frontier performance comparable to much larger systems while remaining optimized for edge-level deployment. It combines a 13.5B-parameter language model with a 0.4B-parameter vision encoder, enabling strong multimodal understanding in both text and image tasks. This FP8 instruct-tuned variant is designed specifically for chat, instruction following, and agentic workflows with robust system-prompt adherence. Despite its size, the model is engineered for practical deployment: it can run locally on a single 24 GB GPU when served in FP8, and in even less memory with further quantization. ...
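The "single 24 GB GPU in FP8" claim can be sanity-checked with simple weight-memory arithmetic: FP8 stores one byte per parameter, BF16 two. The helper below is an illustrative back-of-the-envelope estimate only; it ignores activations, KV cache, and runtime overhead.

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes).

    Ignores activations, KV cache, and framework overhead, so real
    serving needs headroom beyond this figure.
    """
    return params_billion * bytes_per_param

# 13.5B language model + 0.4B vision encoder, served in FP8 (1 byte/param)
fp8_gb = weight_memory_gb(13.5 + 0.4, 1.0)
print(f"FP8 weights: ~{fp8_gb:.1f} GB")  # ~13.9 GB, leaving headroom on a 24 GB card
```

The same arithmetic is consistent with the other family members listed below, e.g. ~27.8 GB of BF16 weights for the 14B Base model against its stated ~32 GB VRAM requirement.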
  • 2
    Mistral Large 3 675B Instruct 2512

Frontier-scale 675B multimodal instruct MoE model for enterprise AI

    ...As the instruct-tuned FP8 variant, it is optimized for reliable instruction following, agentic workflows, production-grade assistants, and long-context enterprise tasks. It incorporates a massive 673B-parameter language MoE backbone and a 2.5B-parameter vision encoder, enabling rich multimodal understanding across text and images. The model supports dozens of languages and maintains strong system-prompt adherence, making it suitable for global and structured enterprise use. Designed for high performance, it runs on a single node of B200 or H200 GPUs in FP8, and can also operate in NVFP4 mode on H100 or A100 hardware. ...
  • 3
    Ministral 3 3B Reasoning 2512

    Compact 3B-param multimodal model for efficient on-device reasoning

Ministral 3 3B Reasoning 2512 is the smallest reasoning-capable model in the Ministral 3 family, yet it delivers a surprisingly capable multimodal and multilingual base for lightweight AI applications. It pairs a 3.4B-parameter language model with a 0.4B-parameter vision encoder, enabling it to understand both text and image inputs. This reasoning-tuned variant is optimized for tasks like math, coding, and other STEM-related problem solving, making it suitable for applications that require logical reasoning, analysis, or structured thinking. Despite its modest size, the model is designed for edge deployment and can run locally, fitting in ~16 GB of VRAM in BF16 or under 8 GB of RAM/VRAM when quantized. ...
  • 4
    Ministral 3 8B Base 2512

    Versatile 8B-base multimodal LLM, flexible foundation for custom AI

    Ministral 3 8B Base 2512 is a mid-sized, dense model in the Ministral 3 series, designed as a general-purpose foundation for text and image tasks. It pairs an 8.4B-parameter language model with a 0.4B-parameter vision encoder, enabling unified multimodal capabilities out of the box. As a “base” model (i.e., not fine-tuned for instruction or reasoning), it offers a flexible starting point for custom downstream tasks or fine-tuning. The model supports a large 256k token context window, making it capable of handling long documents or extended dialogues. ...
  • 5
    Ministral 3 14B Base 2512

    Powerful 14B-base multimodal model — flexible base for fine-tuning

    Ministral 3 14B Base 2512 is the largest model in the Ministral 3 line, offering state-of-the-art language and vision capabilities in a dense, base-pretrained form. It combines a 13.5B-parameter language model with a 0.4B-parameter vision encoder, enabling both high-quality text understanding/generation and image-aware tasks. As a “base” model (i.e. not fine-tuned for instruction or reasoning), it provides a flexible foundation ideal for custom fine-tuning or downstream specialization. The model remains efficient enough for on-prem or local deployment — it fits in ~32 GB VRAM in BF16, and requires under ~24 GB when quantized. ...
  • 6
    Qwen3.6-35B-A3B

    Open multimodal model for coding, agents, and long-context tasks

    Qwen3.6-35B-A3B is an open-weight multimodal model built for real-world coding, agent workflows, and long-context reasoning. It combines a causal language model with a vision encoder, supports text, image, and video inputs, and is optimized for frameworks such as Transformers, vLLM, SGLang, and KTransformers. The model emphasizes stability, responsiveness, and practical developer productivity, with major improvements in agentic coding, frontend generation, and repository-level reasoning. A notable addition is thinking preservation, which allows the model to retain reasoning context from earlier messages, improving iterative work and reducing redundant computation. ...
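"Thinking preservation" can be pictured as a change to how chat history is managed: instead of stripping an assistant turn's reasoning before the next request, the reasoning travels with the message so later turns can build on it. The sketch below is purely illustrative; the field name `reasoning_content` and the helper are assumptions, not the actual Qwen API.

```python
# Illustrative sketch of "thinking preservation" in a chat history.
# Field names here ("reasoning_content") are hypothetical, not Qwen's API.

def append_turn(history, role, content, reasoning=None):
    """Append a chat turn, keeping any reasoning instead of discarding it."""
    turn = {"role": role, "content": content}
    if reasoning is not None:
        turn["reasoning_content"] = reasoning  # preserved across turns
    history.append(turn)
    return history

history = []
append_turn(history, "user", "Refactor utils.py")
append_turn(history, "assistant", "Done.",
            reasoning="Split parse() into two helpers...")
append_turn(history, "user", "Now add tests")

# The earlier reasoning is still present for the next model call,
# which is what avoids re-deriving it from scratch:
preserved = [t.get("reasoning_content") for t in history
             if t["role"] == "assistant"]
```

The payoff is in iterative work: a follow-up request like "Now add tests" arrives with the prior refactoring rationale intact, which is the redundant computation the listing says this feature avoids.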
  • 7
    t5-base

    Flexible text-to-text transformer model for multilingual NLP tasks

    t5-base is a pre-trained transformer model from Google’s T5 (Text-To-Text Transfer Transformer) family that reframes all NLP tasks into a unified text-to-text format. With 220 million parameters, it can handle a wide range of tasks, including translation, summarization, question answering, and classification. Unlike traditional models like BERT, which output class labels or spans, T5 always generates text outputs. It was trained on the C4 dataset, along with a variety of supervised NLP...
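The unified text-to-text format is concrete and simple: every task is named by a plain-text prefix on the input string, and the model always answers by generating text. The snippet below builds inputs using two prefixes from T5's training mixture ("summarize" and "translate English to German"); the formatting helper itself is just illustrative.

```python
# T5 casts every NLP task as text-to-text: the task is selected by a
# plain-text prefix, and the answer is always generated as a string.

def t5_input(task_prefix: str, text: str) -> str:
    """Format an input the way T5 expects: '<prefix>: <text>'."""
    return f"{task_prefix}: {text}"

examples = [
    t5_input("summarize", "The tower is 324 metres tall ..."),
    t5_input("translate English to German", "That is good."),
]
```

Strings formatted this way can then be fed to the model (e.g. via Hugging Face `transformers` with the `t5-base` checkpoint), with no task-specific output head needed: summaries, translations, and even classification labels all come back as generated text.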
  • 8
    bart-large-cnn

    Summarization model fine-tuned on CNN/DailyMail articles

    facebook/bart-large-cnn is a large-scale sequence-to-sequence transformer model developed by Meta AI and fine-tuned specifically for abstractive text summarization. It uses the BART architecture, which combines a bidirectional encoder (like BERT) with an autoregressive decoder (like GPT). Pre-trained on corrupted text reconstruction, the model was further trained on the CNN/DailyMail dataset—a collection of news articles paired with human-written summaries. It performs particularly well in generating concise, coherent, and human-readable summaries from longer texts. ...
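A minimal way to use this checkpoint is the Hugging Face `transformers` summarization pipeline, as shown on the model card. The wrapper below is a sketch: the import is deferred because loading the pipeline downloads the model weights, and the `max_length`/`min_length` defaults are the commonly used example values, not requirements.

```python
def make_summarizer(model_name: str = "facebook/bart-large-cnn"):
    """Build an abstractive summarization pipeline.

    Deferred import: constructing the pipeline downloads model weights,
    so this stays cheap until actually called.
    """
    from transformers import pipeline
    return pipeline("summarization", model=model_name)

def summarize(summarizer, text: str, max_length: int = 130,
              min_length: int = 30) -> str:
    # BART's autoregressive decoder generates the summary token by token;
    # do_sample=False gives deterministic (greedy/beam) output.
    result = summarizer(text, max_length=max_length,
                        min_length=min_length, do_sample=False)
    return result[0]["summary_text"]
```

Usage would be `summarize(make_summarizer(), article_text)` on a news article of up to roughly 1024 tokens, BART's input limit.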
  • 9
    Qwen3.6-35B-A3B-FP8

    FP8 Qwen model for efficient multimodal coding and agent tasks

    Qwen3.6-35B-A3B-FP8 is an FP8-quantized version of Qwen3.6 designed to deliver nearly the same performance as the original model while improving deployment efficiency. It is a multimodal open-weight model that combines a causal language model with a vision encoder, supporting text, image, and video inputs. Built for stability and real-world developer use, it emphasizes agentic coding, repository-level reasoning, and productive long-context workflows. A key capability is thinking preservation, which allows the model to retain reasoning traces from earlier messages, helping reduce repeated computation and improving consistency in iterative tasks. ...
  • 10
    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B

    CLIP-ViT-bigG-14-laion2B-39B-b160k is a powerful vision-language model trained on the English subset of the LAION-5B dataset using the OpenCLIP framework. Developed by LAION and trained by Mitchell Wortsman on Stability AI’s compute infrastructure, it pairs a ViT-bigG/14 vision transformer with a text encoder to perform contrastive learning on image-text pairs. This model excels at zero-shot image classification, image-to-text and text-to-image retrieval, and can be adapted for tasks such as image captioning or generation guidance. It achieves an impressive 80.1% top-1 accuracy on ImageNet-1k without any fine-tuning, showcasing its robustness in open-domain settings. ...
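The zero-shot classification mechanism behind CLIP-style contrastive models is easy to show with toy numbers: the image embedding is compared by cosine similarity against text embeddings of candidate prompts ("a photo of a dog", "a photo of a cat"), and the scaled similarities are softmaxed into class probabilities. The tiny vectors and the temperature value below are stand-ins, not real CLIP outputs.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot(image_emb, text_embs, labels, temperature=100.0):
    """CLIP-style zero-shot classification over candidate text prompts.

    Similarities are scaled (CLIP uses a learned logit scale) and
    softmaxed into a probability per label; no fine-tuning involved.
    """
    logits = [temperature * cosine(image_emb, t) for t in text_embs]
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return dict(zip(labels, (e / z for e in exps)))

# Toy embeddings: the "image" points almost exactly at the "dog" prompt.
image_emb = [0.9, 0.1, 0.0]
text_embs = [[1.0, 0.0, 0.0],   # text embedding of "a photo of a dog"
             [0.0, 1.0, 0.0]]   # text embedding of "a photo of a cat"
probs = zero_shot(image_emb, text_embs, ["dog", "cat"])
```

Swapping in real embeddings from the ViT-bigG/14 image tower and its text encoder turns this same computation into the 80.1% top-1 ImageNet-1k result quoted above, with the class list supplied purely as text.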