Showing 81 open source projects for "encoder"

  • 1
    Kimi K2.6

    Multimodal agent model for coding, orchestration, and autonomy

    ...One of its most distinctive capabilities is horizontal agent scaling, supporting up to 300 sub-agents and 4,000 coordinated steps in a single run, which enables parallel task decomposition and end-to-end completion of outputs such as documents, websites, and spreadsheets. Architecturally, it uses a 1T-parameter Mixture-of-Experts design with 32B activated parameters, a MoonViT vision encoder, and a 256K context window.
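    The listing does not specify a serving interface, so the following is a minimal sketch that assumes the model is deployed behind an OpenAI-compatible chat endpoint; the base URL, API key, and model ID are placeholders, not values confirmed by this entry.

```python
# Hypothetical client call against an OpenAI-compatible endpoint serving
# Kimi K2.6. The endpoint, key, and model ID below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder serving endpoint
    api_key="EMPTY",                      # placeholder key for a local server
)

response = client.chat.completions.create(
    model="Kimi-K2.6",  # placeholder model ID
    messages=[
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Scaffold a small static website with an index page."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```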
  • 2
    Qwen3.6-35B-A3B

    Open multimodal model for coding, agents, and long-context tasks

    Qwen3.6-35B-A3B is an open-weight multimodal model built for real-world coding, agent workflows, and long-context reasoning. It combines a causal language model with a vision encoder, supports text, image, and video inputs, and is optimized for frameworks such as Transformers, vLLM, SGLang, and KTransformers. The model emphasizes stability, responsiveness, and practical developer productivity, with major improvements in agentic coding, frontend generation, and repository-level reasoning. A notable addition is thinking preservation, which allows the model to retain reasoning context from earlier messages, improving iterative work and reducing redundant computation. ...
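    Since the entry names Transformers among the supported frameworks, here is a minimal text-only generation sketch; the repo ID Qwen/Qwen3.6-35B-A3B is assumed from the project name, and image or video inputs would go through the model's own processor class rather than the plain tokenizer.

```python
# Minimal text-only sketch with Hugging Face Transformers.
# The repo ID is assumed from the project name, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.6-35B-A3B"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Refactor this recursive function to be iterative."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```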
  • 3
    t5-base

    Flexible text-to-text transformer model for multilingual NLP tasks

    t5-base is a pre-trained transformer model from Google’s T5 (Text-To-Text Transfer Transformer) family that reframes all NLP tasks into a unified text-to-text format. With 220 million parameters, it handles a wide range of tasks, including translation, summarization, question answering, and classification. Unlike encoder-only models such as BERT, which output class labels or spans, T5 always generates text. It was trained on the C4 dataset, along with a variety of supervised NLP...
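    Because every task is cast as text-to-text, usage reduces to choosing a task prefix and decoding the generated text. A minimal sketch with the Hugging Face Transformers API:

```python
# t5-base treats every task the same way: prefix the input with a task
# description and read the answer from the generated text.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Swapping the prefix (e.g. "summarize: ...") switches the task;
# the interface is unchanged.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Expected output: "Das Haus ist wunderbar."
```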
  • 4
    bart-large-cnn

    Summarization model fine-tuned on CNN/DailyMail articles

    facebook/bart-large-cnn is a large-scale sequence-to-sequence transformer model developed by Meta AI and fine-tuned specifically for abstractive text summarization. It uses the BART architecture, which combines a bidirectional encoder (like BERT) with an autoregressive decoder (like GPT). Pre-trained on a denoising objective that reconstructs corrupted text, the model was further trained on the CNN/DailyMail dataset, a collection of news articles paired with human-written summaries. It performs particularly well at generating concise, coherent, and human-readable summaries from longer texts. ...
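    The model plugs directly into the Transformers summarization pipeline; the article text below is a placeholder.

```python
# Abstractive summarization with facebook/bart-large-cnn via the
# high-level pipeline API.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris. Its base is square, "
    "measuring 125 metres on each side."
)
result = summarizer(article, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```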
  • 5
    Qwen3.6-35B-A3B-FP8

    FP8 Qwen model for efficient multimodal coding and agent tasks

    Qwen3.6-35B-A3B-FP8 is an FP8-quantized version of Qwen3.6 designed to deliver nearly the same performance as the original model while improving deployment efficiency. It is a multimodal open-weight model that combines a causal language model with a vision encoder, supporting text, image, and video inputs. Built for stability and real-world developer use, it emphasizes agentic coding, repository-level reasoning, and productive long-context workflows. A key capability is thinking preservation, which allows the model to retain reasoning traces from earlier messages, helping reduce repeated computation and improving consistency in iterative tasks. ...
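    For an FP8 checkpoint, vLLM (listed above among the supported frameworks) is a natural serving choice. A minimal offline-inference sketch, with the repo ID assumed from the project name:

```python
# Offline batch inference with vLLM; the repo ID is an assumption,
# not a confirmed value from this entry.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3.6-35B-A3B-FP8")  # assumed repo ID
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(
    ["Summarize the trade-offs of FP8 quantization for inference."], params
)
print(outputs[0].outputs[0].text)
```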
  • 6
    CLIP-ViT-bigG-14-laion2B-39B-b160k

    CLIP ViT-bigG/14: Zero-shot image-text model trained on LAION-2B

    CLIP-ViT-bigG-14-laion2B-39B-b160k is a vision-language model trained on LAION-2B, the English subset of the LAION-5B dataset, using the OpenCLIP framework. Developed by LAION and trained by Mitchell Wortsman on Stability AI’s compute infrastructure, it pairs a ViT-bigG/14 vision transformer with a text encoder trained contrastively on image-text pairs. The model excels at zero-shot image classification, image-to-text and text-to-image retrieval, and can be adapted for tasks such as image captioning or generation guidance. It reaches 80.1% top-1 accuracy on ImageNet-1k without any fine-tuning, demonstrating its robustness in open-domain settings. ...
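    Zero-shot classification with OpenCLIP follows the standard CLIP pattern: embed the image and a set of candidate captions, then rank the captions by cosine similarity. A minimal sketch (the image path is a placeholder):

```python
# Zero-shot image classification with OpenCLIP.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-bigG-14", pretrained="laion2b_s39b_b160k"
)
tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # placeholder path
text = tokenizer(["a photo of a cat", "a photo of a dog", "a diagram"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then take cosine similarities as logits.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per candidate caption
```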