The "/3.1.0" file could not be found or is not available. Please select another file.

Search Results for "artificial intelligence algorithm" - Page 65

Showing 1634 open source projects for "artificial intelligence algorithm"

  • 1
    stable-diffusion-3-medium

    Efficient text-to-image model with enhanced quality and typography

    Stable Diffusion 3 Medium is a next-generation text-to-image model by Stability AI, designed using a Multimodal Diffusion Transformer (MMDiT) architecture. It offers notable improvements in image quality, prompt comprehension, typography, and computational efficiency over previous versions. The model integrates three fixed, pretrained text encoders—OpenCLIP-ViT/G, CLIP-ViT/L, and T5-XXL—to interpret complex prompts more effectively. Trained on 1 billion synthetic and filtered public images,...
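
    For orientation, a minimal text-to-image sketch using the diffusers StableDiffusion3Pipeline is shown below; the repository id, step count, and guidance value are assumptions to verify against the model card rather than details taken from this listing.

      import torch
      from diffusers import StableDiffusion3Pipeline  # requires a recent diffusers release

      # Assumed repository id; check the model card for the exact identifier.
      pipe = StableDiffusion3Pipeline.from_pretrained(
          "stabilityai/stable-diffusion-3-medium-diffusers",
          torch_dtype=torch.float16,
      ).to("cuda")

      image = pipe(
          "a vintage bicycle leaning against a brick wall, soft morning light",
          num_inference_steps=28,  # typical setting; trade speed for quality
          guidance_scale=7.0,
      ).images[0]
      image.save("sd3_medium_sample.png")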
  • 2
    stable-diffusion-2-1

    Latent diffusion model for high-quality text-to-image generation

    Stable Diffusion 2.1 is a text-to-image generation model developed by Stability AI, building on the 768-v architecture with additional fine-tuning for improved safety and image quality. It uses a latent diffusion framework that operates in a compressed image space, enabling faster and more efficient image synthesis while preserving detail. The model is conditioned on text prompts via the OpenCLIP-ViT/H encoder and supports generation at resolutions up to 768×768. Released under the...
  • 3
    ControlNet

    Extension for Stable Diffusion using edge, depth, pose, and more

    ControlNet is a neural network architecture that enhances Stable Diffusion by enabling image generation conditioned on specific visual structures such as edges, poses, depth maps, and segmentation masks. By injecting these auxiliary inputs into the diffusion process, ControlNet gives users powerful control over the layout and composition of generated images while preserving the style and flexibility of generative models. It supports a wide range of conditioning types through pretrained...
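
    A minimal sketch of edge-conditioned generation with diffusers follows, assuming a Canny ControlNet checkpoint paired with a Stable Diffusion 1.5 base; the repository ids and input path are illustrative only.

      import cv2
      import numpy as np
      import torch
      from PIL import Image
      from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
      from diffusers.utils import load_image

      # Build a Canny edge map to use as the conditioning image (assumed input path).
      source = load_image("room_photo.png")
      edges = cv2.Canny(np.array(source), 100, 200)
      edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

      # Assumed repository ids; pick a ControlNet checkpoint that matches the base model.
      controlnet = ControlNetModel.from_pretrained(
          "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
      )
      pipe = StableDiffusionControlNetPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
      ).to("cuda")

      image = pipe("a cozy scandinavian living room", image=edges).images[0]
      image.save("controlnet_canny.png")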
  • 4
    stable-video-diffusion-img2vid-xt

    Generates high-quality short videos from a single still image input

    Stable Video Diffusion Img2Vid XT is an advanced image-to-video latent diffusion model developed by Stability AI, designed to generate short video clips from a single static image. It produces 25 frames at 576x1024 resolution, offering improved temporal consistency by fine-tuning from an earlier 14-frame version. The model operates without text prompts and instead uses a single input frame to guide visual generation, making it ideal for stylized motion or animation. It includes both a...
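
    A minimal image-to-video sketch with the diffusers StableVideoDiffusionPipeline is shown below; the repository id, input image, and frame rate are assumptions to check against the model card.

      import torch
      from diffusers import StableVideoDiffusionPipeline
      from diffusers.utils import export_to_video, load_image

      # Assumed repository id for the XT (25-frame) variant.
      pipe = StableVideoDiffusionPipeline.from_pretrained(
          "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
      ).to("cuda")

      # The model works on 576x1024 frames; PIL's resize takes (width, height).
      image = load_image("still_frame.png").resize((1024, 576))
      frames = pipe(image, decode_chunk_size=4).frames[0]  # smaller chunks reduce VRAM use
      export_to_video(frames, "generated_clip.mp4", fps=7)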
  • 5

    Savant

    Python Computer Vision & Video Analytics Framework With Batteries Included

    Savant is an open-source, high-level framework for building real-time, streaming, highly efficient multimedia AI applications on the Nvidia stack. It helps to develop dynamic, fault-tolerant inference pipelines that utilize the best Nvidia approaches for data center and edge accelerators. Savant is built on DeepStream and provides a high-level abstraction layer for building inference pipelines. It is designed to be easy to use, flexible, and scalable. It is a great choice for building...
  • 6
    mozg is a flexible, fast neural network library. It allows one to create, train, and use multi-layer perceptrons (MLPs), the most popular kind of artificial neural network (ANN). This work is based on a library originally written by Alexy Filin.
  • 7
    Veggente is a semantic bridge between incompatible protocols. The initial implementation will include support for bridging between HL7 version 3 and HL7 version 2.
  • 8
    A method for representing abstract information as a hierarchical network of simple nodes. Each node can represent both a value and an attribute, and the network can accumulate, classify, and learn information.
  • 9
    A generic information-filtering engine. The goal is to show that the expressive power of a filter can appreciably reduce the amount of code needed to extract information, and that this can be done in Python.
  • 10
    Top Engine, the Semantic Web engine for the enterprise. Top Engine is a business rule engine that uses OWL DL ontologies as the vocabulary primitives for writing rules on top of an ontology. It supports forward and backward chaining with truth maintenance.
  • 11
    segmentation-3.0

    Speaker segmentation model for 10s audio chunks with powerset labels

    segmentation-3.0 is a voice activity and speaker segmentation model from the pyannote.audio framework, designed to analyze 10-second mono audio sampled at 16kHz. It outputs a (num_frames, num_classes) matrix using a powerset encoding that includes non-speech, individual speakers, and overlapping speech for up to three speakers. Trained with pyannote.audio 3.0.0 on a rich blend of datasets—including AISHELL, DIHARD, VoxConverse, and more—it enables downstream tasks like voice activity...
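
    A short voice activity detection sketch in the spirit of the pyannote.audio documentation follows; the access token and audio path are placeholders, and the two duration thresholds are the pipeline's tunable hyperparameters.

      from pyannote.audio import Model
      from pyannote.audio.pipelines import VoiceActivityDetection

      # A Hugging Face access token is required to download this gated model.
      model = Model.from_pretrained("pyannote/segmentation-3.0", use_auth_token="HF_TOKEN")

      vad = VoiceActivityDetection(segmentation=model)
      vad.instantiate({
          "min_duration_on": 0.0,   # drop speech regions shorter than this many seconds
          "min_duration_off": 0.0,  # fill non-speech gaps shorter than this many seconds
      })
      speech = vad("meeting.wav")   # returns an annotation of speech segments
      print(speech)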
  • 12
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is the world’s first open-weight, large-scale hybrid-attention reasoning model designed for long-context and complex reasoning tasks. Powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning attention mechanism, it efficiently supports context lengths up to 1 million tokens, eight times larger than many contemporary models. MiniMax-M1 significantly reduces computational overhead at generation time, consuming only about 25% of the FLOPs of comparable...
  • 13
    ERNIE-4.5-300B-A47B-FP8-Paddle

    ERNIE 4.5 MoE model in FP8 for efficient high-performance inference

    ERNIE-4.5-300B-A47B-FP8-Paddle is a quantized version of Baidu’s MoE large language model, post-trained for text generation tasks and optimized for FP8 precision. This variant retains the original’s 300 billion total parameters with 47 billion active per token, enabling powerful language understanding while dramatically improving inference efficiency. Built using PaddlePaddle, it supports multi-GPU distributed deployment and leverages advanced routing strategies and expert parallelism. It is...
  • 14
    Kokoro-82M

    Lightweight, fast, and high-quality open TTS model with 82M params

    Kokoro-82M is an open-weight, lightweight text-to-speech (TTS) model featuring 82 million parameters, developed to deliver high-quality voice synthesis with exceptional efficiency. Despite its compact size, Kokoro rivals the output quality of much larger models while remaining significantly faster and cheaper to run. Built on StyleTTS2 and ISTFTNet architectures, it uses a decoder-only setup without diffusion, enabling rapid audio generation with low computational overhead. Kokoro supports...
  • 15
    whisper-large-v3

    High-accuracy multilingual speech recognition and translation model

    Whisper-large-v3 is OpenAI’s most advanced multilingual automatic speech recognition (ASR) and speech translation model, featuring 1.54 billion parameters and trained on 5 million hours of labeled and pseudo-labeled audio. Built on a Transformer-based encoder-decoder architecture, it supports 99 languages and delivers significant improvements in transcription accuracy, robustness to noise, and handling of diverse accents. Compared to previous versions, v3 introduces a 128 Mel bin spectrogram...
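
    A minimal transcription sketch using the transformers automatic-speech-recognition pipeline is shown below; the audio file and device settings are assumptions, and the translate task can be omitted to transcribe in the source language.

      import torch
      from transformers import pipeline

      asr = pipeline(
          "automatic-speech-recognition",
          model="openai/whisper-large-v3",
          torch_dtype=torch.float16,
          device="cuda:0",
      )

      # Long audio is chunked automatically; timestamps are returned per segment.
      result = asr(
          "interview.mp3",
          return_timestamps=True,
          generate_kwargs={"task": "translate"},  # translate to English instead of transcribing
      )
      print(result["text"])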
  • 16
    Llama-2-7b-chat-hf

    Dialogue-optimized 7B language model for safe and helpful chatting

    Llama-2-7b-chat-hf is a fine-tuned large language model developed by Meta, designed specifically for dialogue use cases. With 7 billion parameters and built on an optimized transformer architecture, it uses supervised fine-tuning and reinforcement learning with human feedback (RLHF) to enhance helpfulness, coherence, and safety. It outperforms most open-source chat models and rivals proprietary systems like ChatGPT in human evaluations. Trained on 2 trillion tokens of public text and over 1...
  • 17
    Llama-2-7b

    7B-parameter foundational LLM by Meta for text generation tasks

    Llama-2-7B is a foundational large language model developed by Meta as part of the Llama 2 family, designed for general-purpose text generation in English. It has 7 billion parameters and uses an optimized transformer-based, autoregressive architecture. Trained on 2 trillion tokens of publicly available data, it serves as the base for fine-tuned models like Llama-2-Chat. The model is pretrained only, meaning it is not optimized for dialogue but can be adapted for various natural language...
  • 18
    Llama-3.1-8B-Instruct

    Multilingual 8B-parameter chat-optimized LLM fine-tuned by Meta

    Llama-3.1-8B-Instruct is a multilingual, instruction-tuned language model developed by Meta, designed for high-quality dialogue generation across eight languages, including English, Spanish, French, German, Italian, Portuguese, Hindi, and Thai. It uses a transformer-based, autoregressive architecture with Grouped-Query Attention and supports a 128k token context window. The model was fine-tuned using a combination of supervised fine-tuning (SFT), reinforcement learning with human feedback...
  • 19
    Meta-Llama-3-8B-Instruct

    Instruction-tuned 8B LLM by Meta for helpful, safe English dialogue

    Meta-Llama-3-8B-Instruct is an instruction-tuned large language model from Meta’s Llama 3 family, optimized for safe and helpful English dialogue. It uses an autoregressive transformer architecture with Grouped-Query Attention (GQA) and supports an 8k token context length. Fine-tuned using supervised learning and reinforcement learning with human feedback (RLHF), the model achieves strong results on benchmarks like MMLU, GSM8K, and HumanEval. Trained on over 15 trillion tokens of publicly...
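
    A minimal chat sketch using the transformers text-generation pipeline follows; a recent transformers release is assumed, access to this gated checkpoint must first be granted on Hugging Face, and the prompts are placeholders.

      import torch
      from transformers import pipeline

      chat = pipeline(
          "text-generation",
          model="meta-llama/Meta-Llama-3-8B-Instruct",
          torch_dtype=torch.bfloat16,
          device_map="auto",
      )

      messages = [
          {"role": "system", "content": "You are a concise technical assistant."},
          {"role": "user", "content": "Explain grouped-query attention in two sentences."},
      ]
      out = chat(messages, max_new_tokens=128, do_sample=False)
      print(out[0]["generated_text"][-1])  # the newly generated assistant turn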
  • 20
    FLUX.1-schnell

    12B-parameter image generator using fast rectified flow transformers

    FLUX.1-schnell is a 12 billion parameter text-to-image model developed by Black Forest Labs, designed for high-quality image generation using rectified flow transformers. It produces competitive visual results with strong prompt adherence, rivaling closed-source models in just 1 to 4 inference steps. Trained using latent adversarial diffusion distillation, the model is optimized for both quality and speed. It is released under the Apache 2.0 license, allowing commercial, scientific, and...
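
    A minimal few-step generation sketch with the diffusers FluxPipeline is shown below; the repository id and CPU offloading are assumptions, while the low step count and zero guidance reflect how a distilled "schnell" model is meant to be run.

      import torch
      from diffusers import FluxPipeline

      pipe = FluxPipeline.from_pretrained(
          "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
      )
      pipe.enable_model_cpu_offload()  # helps fit the 12B model on a single GPU

      image = pipe(
          "a lighthouse on a cliff at dusk, film grain",
          num_inference_steps=4,    # 1 to 4 steps is the intended operating range
          guidance_scale=0.0,       # the distilled model runs without guidance
          max_sequence_length=256,
      ).images[0]
      image.save("flux_schnell.png")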
  • 21
    phi-2

    Small, high-performing language model for QA, chat, and code tasks

    Phi-2 is a 2.7 billion parameter Transformer model developed by Microsoft, designed for natural language processing and code generation tasks. It was trained on a filtered dataset of high-quality web content and synthetic NLP texts created by GPT-3.5, totaling 1.4 trillion tokens. Phi-2 excels in benchmarks for common sense, language understanding, and logical reasoning, outperforming most models under 13B parameters despite not being instruction-tuned or aligned via RLHF. It performs best...
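
    A minimal generation sketch with transformers follows; since Phi-2 is not instruction-tuned, the plain question-and-answer prompt shown here is one reasonable prompting style rather than a prescribed format.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("microsoft/phi-2")
      model = AutoModelForCausalLM.from_pretrained(
          "microsoft/phi-2", torch_dtype="auto", device_map="auto"
      )

      prompt = "Question: Why does binary search run in O(log n) time?\nAnswer:"
      inputs = tok(prompt, return_tensors="pt").to(model.device)
      output = model.generate(**inputs, max_new_tokens=120)
      print(tok.decode(output[0], skip_special_tokens=True))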
  • 22
    stable-diffusion-3.5-large

    Advanced MMDiT text-to-image model for high-quality visual generation

    Stable Diffusion 3.5 Large is a multimodal diffusion transformer (MMDiT) developed by Stability AI, designed for generating high-quality images from text prompts. It integrates three pretrained text encoders—OpenCLIP-ViT/G, CLIP-ViT/L, and T5-XXL—with QK-normalization for improved training stability and prompt understanding. This model excels in handling typography, detailed scenes, and creative compositions while maintaining resource efficiency. It supports inference via ComfyUI, Hugging...
  • 23
    starcoder

    Code generation model trained on 80+ languages with FIM support

    StarCoder is a 15.5B parameter language model developed by BigCode for code generation tasks across more than 80 programming languages. It is trained on 1 trillion tokens from the permissively licensed dataset The Stack v1.2, using the Fill-in-the-Middle (FIM) objective and Multi-Query Attention to enhance performance. With an extended context window of 8192 tokens and pretraining in bfloat16, StarCoder can generate, complete, or refactor code in various languages, with English as the...
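
    A minimal Fill-in-the-Middle sketch with transformers is shown below; the FIM control tokens follow the StarCoder convention, while checkpoint access (the model is gated) and the example function are assumptions.

      from transformers import AutoModelForCausalLM, AutoTokenizer

      checkpoint = "bigcode/starcoder"
      tok = AutoTokenizer.from_pretrained(checkpoint)
      model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

      # The model fills the gap between prefix and suffix using FIM control tokens.
      prefix = "def average(values):\n    "
      suffix = "\n    return total / len(values)\n"
      prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

      inputs = tok(prompt, return_tensors="pt").to(model.device)
      output = model.generate(**inputs, max_new_tokens=32)
      print(tok.decode(output[0]))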
  • 24
    chatglm-6b

    Bilingual 6.2B parameter chatbot optimized for Chinese and English

    ChatGLM-6B is a 6.2 billion parameter bilingual language model developed by THUDM, based on the General Language Model (GLM) framework. It is optimized for natural and fluent dialogue in both Chinese and English, supporting applications in conversational AI, question answering, and assistance. Trained on approximately 1 trillion tokens, the model benefits from supervised fine-tuning, feedback self-training, and reinforcement learning with human feedback to align its outputs with human...
  • 25
    Mistral-7B-Instruct-v0.2

    Instruction-tuned 7B model for chat and task-oriented text generation

    Mistral-7B-Instruct-v0.2 is a fine-tuned version of the Mistral-7B-v0.2 language model, designed specifically for following instructions in a conversational format. It supports a 32k token context window, enabling more detailed and longer interactions compared to its predecessor. The model is trained to respond to user prompts formatted with [INST] and [/INST] tags, and it performs well in instruction-following tasks like Q&A, summarization, and explanations. It can be used via the official...
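
    To make the [INST] / [/INST] convention concrete, a short sketch using the tokenizer's apply_chat_template helper follows; the messages are placeholders, and rendering the template does not require loading the 7B weights.

      from transformers import AutoTokenizer

      tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
      messages = [
          {"role": "user", "content": "Summarize what a 32k context window allows."},
          {"role": "assistant", "content": "It lets the model attend to roughly 32,000 tokens of prior text."},
          {"role": "user", "content": "Give one practical example."},
      ]
      # Renders the [INST] ... [/INST] wrapping the model was trained on.
      prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
      print(prompt)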