145 projects for "artificial intelligence algorithm" with 2 filters applied:

  • 1
    unidepth-v2-vitl14

    Metric monocular depth estimation (vision model)

    Estimates absolute (metric) depth from single RGB images, along with camera intrinsics and uncertainty. Designed to generalize across domains (zero-shot) using a self‑prompting camera module and pseudo-spherical prediction space.
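A hedged usage sketch for unidepth-v2-vitl14: it loads the checkpoint through the project's own `unidepth` package and runs inference on a single RGB image. The `UniDepthV2.from_pretrained` entry point and the `infer` output keys follow the repository's README and are assumptions if that interface has since changed.

```python
import numpy as np
import torch
from PIL import Image
from unidepth.models import UniDepthV2  # from the UniDepth repository

# Load the published checkpoint from the Hugging Face Hub.
model = UniDepthV2.from_pretrained("lpiccinelli/unidepth-v2-vitl14")

# HWC uint8 image -> CHW tensor, as in the repository's example.
rgb = torch.from_numpy(np.array(Image.open("scene.png"))).permute(2, 0, 1)

# No intrinsics are passed, so the self-prompting camera module estimates them.
preds = model.infer(rgb)
depth = preds["depth"]            # metric depth map (meters)
intrinsics = preds["intrinsics"]  # estimated camera intrinsics
```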
  • 2
    BLEURT-20-D12

    Custom BLEURT model for evaluating text similarity using PyTorch

    BLEURT-20-D12 is a PyTorch implementation of BLEURT, a model designed to assess the semantic similarity between two text sequences. It serves as an automatic evaluation metric for natural language generation tasks like summarization and translation. The model predicts a score indicating how similar a candidate sentence is to a reference sentence, with higher scores indicating greater semantic overlap. Unlike standard BLEURT models from TensorFlow, this version is built from a custom PyTorch...
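Because this is a custom PyTorch port rather than a stock TensorFlow BLEURT, a scoring sketch is shown below; the `bleurt_pytorch` package, its class names, and the `lucadiliello/BLEURT-20-D12` hub id follow the commonly used PyTorch port and should all be treated as assumptions.

```python
import torch
from bleurt_pytorch import BleurtForSequenceClassification, BleurtTokenizer

name = "lucadiliello/BLEURT-20-D12"  # assumed hub id
tokenizer = BleurtTokenizer.from_pretrained(name)
model = BleurtForSequenceClassification.from_pretrained(name).eval()

references = ["The cat sat on the mat."]
candidates = ["A cat was sitting on the mat."]
with torch.no_grad():
    inputs = tokenizer(references, candidates, padding="longest",
                       return_tensors="pt")
    scores = model(**inputs).logits.flatten().tolist()

print(scores)  # one score per (reference, candidate) pair; higher = more similar
```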
  • 3
    roberta-base

    Robust BERT-based model for English with improved MLM training

    roberta-base is a robustly optimized variant of BERT, pretrained on a significantly larger corpus of English text using dynamic masked language modeling. Developed by Facebook AI, RoBERTa improves on BERT by removing the Next Sentence Prediction objective, using longer training, larger batches, and more data, including BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories. It captures contextual representations of language by masking 15% of input tokens and predicting them....
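Since roberta-base is trained purely with masked language modeling, the quickest way to exercise it is the transformers fill-mask pipeline; note that RoBERTa's mask token is `<mask>`, not BERT's `[MASK]`.

```python
from transformers import pipeline

# Dynamic MLM pretraining means the model predicts masked tokens in context.
fill = pipeline("fill-mask", model="roberta-base")
for pred in fill("The goal of AI is to <mask> human reasoning."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```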
  • 4
    bge-large-en-v1.5

    BGE-Large v1.5: High-accuracy English embedding model for retrieval

    BAAI/bge-large-en-v1.5 is a powerful English sentence embedding model designed by the Beijing Academy of Artificial Intelligence to enhance retrieval-augmented language model systems. It uses a BERT-based architecture fine-tuned to produce high-quality dense vector representations optimized for sentence similarity, search, and retrieval. This model is part of the BGE (BAAI General Embedding) family and delivers improved similarity distribution and state-of-the-art results on the MTEB benchmark...
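A short sketch of the typical embedding workflow via the sentence-transformers library: encode two sentences to normalized vectors, then take a dot product as cosine similarity. The 1024-dimensional output is the documented width of the large model.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
sentences = ["vector search in databases",
             "semantic retrieval with embeddings"]
emb = model.encode(sentences, normalize_embeddings=True)  # unit-length vectors

print(emb.shape)        # (2, 1024)
print(emb[0] @ emb[1])  # cosine similarity via dot product
```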
  • 5
    bge-small-en-v1.5

    Compact English sentence embedding model for semantic search tasks

    BAAI/bge-small-en-v1.5 is a lightweight English sentence embedding model developed by the Beijing Academy of Artificial Intelligence (BAAI) as part of the BGE (BAAI General Embedding) series. Designed for dense retrieval, semantic search, and similarity tasks, it produces 384-dimensional embeddings that can be used to compare and rank sentences or passages. This version (v1.5) improves similarity distribution, enhancing performance without the need for special query instructions. The model...
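The small model suits the retrieval case directly; here is a sketch ranking candidate passages against a query. As the description notes, v1.5 no longer requires a special query instruction.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
query = "How do I speed up semantic search?"
passages = [
    "Approximate nearest-neighbor indexes accelerate vector retrieval.",
    "BLEU measures n-gram overlap in machine translation.",
    "Caching database queries reduces page load times.",
]

q = model.encode(query, normalize_embeddings=True)     # 384-dim vector
p = model.encode(passages, normalize_embeddings=True)  # shape (3, 384)

# Dot products of normalized vectors are cosine similarities; sort descending.
for score, text in sorted(zip(p @ q, passages), reverse=True):
    print(f"{score:.3f}  {text}")
```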
  • 6
    DeepSeek-V3.1-Terminus

    685B-parameter model with improved language consistency and agent capabilities

    DeepSeek-V3.1-Terminus is an updated release in the DeepSeek-V3.1 series, maintaining the original model’s large-scale reasoning and generative capabilities while addressing several key user-reported issues. It improves language consistency, reducing mixed Chinese-English outputs and eliminating abnormal characters, enhancing reliability in multilingual scenarios. The update also refines agentic capabilities, especially for the Code Agent and Search Agent, leading to better tool integration...
  • 7
    VaultGemma

    VaultGemma: 1B DP-trained Gemma variant for private NLP tasks

    VaultGemma is a sub-1B parameter variant of Google’s Gemma family that is pre-trained from scratch with Differential Privacy (DP), providing mathematically backed guarantees that its outputs do not reveal information about any single training example. Using DP-SGD with a privacy budget across a large English-language corpus (web documents, code, mathematics), it prioritizes privacy over raw utility. The model follows a Gemma-2–style architecture, outputs text from up to 1,024 input tokens,...
  • 8
    Qwen3-Next

    Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens

    Qwen3-Next-80B-A3B-Instruct is the flagship release in the Qwen3-Next series, designed as a next-generation foundation model for ultra-long context and efficient reasoning. With 80B total parameters and 3B activated at a time, it leverages hybrid attention (Gated DeltaNet + Gated Attention) and a high-sparsity Mixture-of-Experts architecture to achieve exceptional efficiency. The model natively supports a context length of 262K tokens and can be extended up to 1 million tokens using RoPE...
  • 9
    Mellum-4b-base

    JetBrains’ 4B parameter code model for completions

    Mellum-4b-base is JetBrains’ first open-source large language model designed and optimized for code-related tasks. Built with 4 billion parameters and a LLaMA-style architecture, it was trained on over 4.2 trillion tokens across multiple programming languages, including datasets such as The Stack, StarCoder, and CommitPack. With a context window of 8,192 tokens, it excels at code completion, fill-in-the-middle tasks, and intelligent code suggestions for professional developer tools and IDEs....
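A hedged completion sketch using plain left-to-right generation through transformers; the `JetBrains/Mellum-4b-base` hub id is an assumption, and fill-in-the-middle prompting (which relies on model-specific sentinel tokens) is deliberately not shown.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "JetBrains/Mellum-4b-base"  # assumed hub id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto",
                                             device_map="auto")

# Base code models continue the prompt, so a function header is enough.
prompt = "def binary_search(arr, target):\n"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```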
  • 10
    Jan-v1-edge

    Jan-v1-edge: efficient 1.7B reasoning model optimized for edge devices

    Jan-v1-edge is a lightweight agentic language model developed by JanHQ, designed for fast and reliable on-device execution. It is the second release in the Jan Family and was distilled from the larger Jan-v1 model, retaining strong reasoning and problem-solving capabilities while reducing its computational footprint. The model was refined through a two-stage post-training process: Supervised Fine-Tuning (SFT) to transfer knowledge from Jan-v1, followed by Reinforcement Learning with...
  • 11
    VibeVoice

    VibeVoice: Open-source multi-speaker long-form text-to-speech model

    VibeVoice-1.5B is Microsoft’s frontier open-source text-to-speech (TTS) model designed for generating expressive, long-form, multi-speaker conversational audio such as podcasts. Unlike traditional TTS systems, it excels in scalability, speaker consistency, and natural turn-taking for up to 90 minutes of continuous speech with as many as four distinct speakers. A key innovation is its use of continuous acoustic and semantic speech tokenizers operating at an ultra-low frame rate of 7.5 Hz,...
  • 12
    Hunyuan-MT-7B

    Tencent’s 36-language state-of-the-art translation model

    Hunyuan-MT-7B is a large-scale multilingual translation model developed by Tencent, designed to deliver state-of-the-art translation quality across 36 languages, including several Chinese ethnic minority languages. It forms part of the Hunyuan Translation Model family, alongside Hunyuan-MT-Chimera, which ensembles outputs for even higher accuracy. Trained with a comprehensive framework spanning pretraining, cross-lingual pretraining, supervised fine-tuning, enhancement, and ensemble...
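A hedged translation sketch through transformers; the `tencent/Hunyuan-MT-7B` hub id, the presence of a chat template, and this particular prompt wording are assumptions rather than Tencent's documented recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "tencent/Hunyuan-MT-7B"  # assumed hub id
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto",
                                             device_map="auto")

messages = [{"role": "user",
             "content": "Translate the following segment into English:\n"
                        "这是一个支持36种语言的翻译模型。"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)

# Decode only the newly generated tokens after the prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```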
  • 13
    Qwen-Image-Edit

    Advanced bilingual image editing with semantic and appearance control

    Qwen-Image-Edit is the image editing extension of Qwen-Image, a 20B parameter model that combines advanced visual and text-rendering capabilities for creative and precise editing. It leverages both Qwen2.5-VL for semantic control and a VAE Encoder for appearance control, enabling users to edit at both the content and detail level. The model excels at semantic edits like style transfer, object rotation, and novel view synthesis, while also handling precise appearance edits such as adding or...
  • 14
    Hermes 4

    Hermes 4 FP8: hybrid reasoning Llama-3.1-405B model by Nous Research

    Hermes 4 405B FP8 is a cutting-edge large language model developed by Nous Research, built on Llama-3.1-405B and optimized for frontier reasoning and alignment. It introduces a hybrid reasoning mode with explicit <think> segments, enabling the model to deliberate deeply when needed and switch to faster responses when desired. Post-training improvements include a vastly expanded corpus with ~60B tokens, boosting performance across math, code, STEM, logic, creativity, and structured outputs....
  • 15
    Grok-2.5

    Grok-2.5: large-scale xAI model for local inference with SGLang

    Grok-2.5 is a large-scale AI model developed and released by xAI in 2024, made available through Hugging Face for research and experimentation. The model is distributed as raw weights that require specialized infrastructure to run, rather than being hosted by inference providers. To use it, users must download over 500 GB of files and set them up locally with the SGLang inference engine. Grok-2.5 supports advanced inference with multi-GPU configurations, requiring at least 8 GPUs with more...
  • 16
    NuMarkdown-8B-Thinking

    Reasoning-powered OCR VLM for converting complex documents to Markdown

    NuMarkdown-8B-Thinking is the first reasoning OCR vision-language model (VLM) designed to convert documents into clean Markdown optimized for retrieval-augmented generation (RAG). Built on Qwen 2.5-VL-7B and fine-tuned with synthetic Doc → Reasoning → Markdown examples, it generates thinking tokens before producing the final Markdown to better handle complex layouts and tables. It uses a two-phase training process: supervised fine-tuning (SFT) followed by reinforcement learning (GRPO) with a...
  • 17
    gpt-oss-20b

    OpenAI’s compact 20B open model for fast, agentic, and local use

    GPT-OSS-20B is OpenAI’s smaller, open-weight language model optimized for low-latency agentic tasks and local deployment. With 21B total parameters and 3.6B active parameters (MoE), it fits within 16GB of memory thanks to native MXFP4 quantization. Designed for high-performance reasoning, it supports the Harmony response format, function calling, web browsing, and code execution. Like its larger sibling (gpt-oss-120b), it offers adjustable reasoning depth and full chain-of-thought visibility...
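A minimal chat sketch following the common transformers pipeline pattern for gpt-oss checkpoints; a recent transformers release with MXFP4 support is assumed so the weights load in their native quantized form.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="openai/gpt-oss-20b",
                torch_dtype="auto", device_map="auto")

messages = [{"role": "user",
             "content": "Explain mixture-of-experts routing in two sentences."}]
result = pipe(messages, max_new_tokens=256)

# With chat-style input the pipeline returns the full message list;
# the final entry is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```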
  • 18
    gpt-oss-120b

    OpenAI’s open-weight 120B model optimized for reasoning and tooling

    GPT-OSS-120B is a powerful open-weight language model by OpenAI, optimized for high-level reasoning, tool use, and agentic tasks. With 117B total parameters and 5.1B active parameters, it’s designed to fit on a single H100 GPU using native MXFP4 quantization. The model supports fine-tuning, chain-of-thought reasoning, and structured outputs, making it ideal for complex workflows. It operates in OpenAI’s Harmony response format and can be deployed via Transformers, vLLM, Ollama, LM Studio,...
  • 19
    FLUX.1-Krea-dev

    Text-to-image model optimized for artistic quality and safe generation

    FLUX.1-Krea-dev is a 12 billion parameter rectified flow transformer for text-to-image generation, developed by Black Forest Labs in collaboration with Krea. It delivers aesthetic, high-quality outputs focused on photography and visual coherence, making it a strong competitor to closed-source models. Trained using guidance distillation, it offers efficient inference while preserving creative fidelity. The model is distributed under a non-commercial license, with conditions to prevent misuse...
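A hedged text-to-image sketch with diffusers' FluxPipeline; the checkpoint is gated under the non-commercial license, so accepting it on the Hub and logging in are assumed, and the guidance and step values here are illustrative rather than official defaults.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-Krea-dev",
                                    torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # fits on smaller GPUs at some speed cost

image = pipe("a foggy harbor at dawn, 35mm photograph",
             guidance_scale=4.5,       # illustrative value
             num_inference_steps=28).images[0]
image.save("harbor.png")
```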
  • 20
    GLM-4.5-Air

    Compact hybrid reasoning language model for intelligent responses

    GLM-4.5-Air is a multilingual large language model with 106 billion total parameters and 12 billion active parameters, designed for conversational AI and intelligent agents. It is part of the GLM-4.5 family developed by Zhipu AI, offering hybrid reasoning capabilities via two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses. The model is optimized for efficiency and deployment, delivering strong results across 12 industry benchmarks,...
  • 21
    OpenVLA 7B

    Vision-language-action model for robot control via images and text

    OpenVLA 7B is a multimodal vision-language-action model trained on 970,000 robot manipulation episodes from the Open X-Embodiment dataset. It takes camera images and natural language instructions as input and outputs normalized 7-DoF robot actions, enabling control of multiple robot types across various domains. Built on top of LLaMA-2 and DINOv2/SigLIP visual backbones, it allows both zero-shot inference for known robot setups and parameter-efficient fine-tuning for new domains. The model...
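A sketch after the pattern OpenVLA documents for zero-shot control: the processor packs an instruction-bearing prompt plus a camera frame, and `predict_action` (pulled in via `trust_remote_code`) returns a normalized 7-DoF action. The `bridge_orig` un-normalization key and the image filename are assumptions for illustration.

```python
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

name = "openvla/openvla-7b"
processor = AutoProcessor.from_pretrained(name, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    name, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

image = Image.open("wrist_cam.png")  # hypothetical camera frame
prompt = "In: What action should the robot take to pick up the red block?\nOut:"

inputs = processor(prompt, image).to("cuda", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)  # 7-DoF action: xyz deltas, rotation deltas, gripper
```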
  • 22
    Kimi K2

    1T MoE model with 32B active params, optimized for agentic reasoning

    Kimi K2 Instruct is a high-performance Mixture-of-Experts (MoE) language model developed by Moonshot AI, activating 32B parameters per forward pass from a total 1 trillion. Designed for agentic reasoning, tool use, and advanced coding tasks, it achieves SOTA-level performance on multiple benchmarks such as SWE-Bench, AIME, and MMLU. Trained on 15.5T tokens using the Muon optimizer, it incorporates novel techniques for scaling stability. Kimi K2 supports a 128K context window, enabling...
  • 23
    Devstral

    Agentic 24B LLM optimized for coding tasks with 128k context support

    Devstral-Small-2505 is a 23.6B parameter language model fine-tuned by Mistral AI and All Hands AI, built specifically for agentic software engineering tasks. Based on Mistral-Small-3.1, it supports a 128k context window and excels in exploring codebases, editing multiple files, and tool usage. The model achieves state-of-the-art open-source performance on SWE-Bench Verified with a score of 46.8%, surpassing much larger models. Devstral is designed for local and production-level deployments,...
  • 24
    SmolLM3

    Multilingual 3B LLM optimized for reasoning, math, and long contexts

    SmolLM3 is a 3.08B parameter decoder-only language model designed by Hugging Face to deliver high performance in reasoning, math, and multilingual understanding. Trained on 11.2T tokens with a curriculum of web, code, and mathematical data, it uses advanced features like GQA and NoPE. The model supports extended context lengths up to 128k tokens via YaRN extrapolation, making it highly suitable for long-context applications. It outperforms or rivals larger models like Qwen3 and LLaMA3 on...
  • 25
    t5-base

    Flexible text-to-text transformer model for multilingual NLP tasks

    t5-base is a pre-trained transformer model from Google’s T5 (Text-To-Text Transfer Transformer) family that reframes all NLP tasks into a unified text-to-text format. With 220 million parameters, it can handle a wide range of tasks, including translation, summarization, question answering, and classification. Unlike traditional models like BERT, which output class labels or spans, T5 always generates text outputs. It was trained on the C4 dataset, along with a variety of supervised NLP...
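Because T5 casts every task as text-to-text, one checkpoint serves multiple tasks purely through the input prefix, as this short transformers sketch shows.

```python
from transformers import pipeline

t5 = pipeline("text2text-generation", model="t5-base")

# Translation and summarization differ only by the prompt prefix.
print(t5("translate English to German: The house is wonderful.")
      [0]["generated_text"])
print(t5("summarize: T5 reframes all NLP tasks into a unified text-to-text "
         "format and always generates text as output.")[0]["generated_text"])
```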