1871 projects for "artificial intelligence algorithm" with 2 filters applied:

  • 1
    ViKi (Virtual Interactive keyboard Interface) is a general framework that enables contactless human-machine interaction using computer vision techniques. A simple webcam is all that is needed to emulate traditional input devices such as a mouse and keyboard.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    A straightforward Java implementation of a mixture model with pluggable mixture functions, e.g. a mixture of Gaussian functions. The number and dimensionality of the mixture functions are not limited. All critical calculations are performed in log-space (a short illustrative sketch follows this entry).
    Downloads: 0 This Week
    Last Update:
    See Project
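    The log-space computation mentioned in that entry is typically a log-sum-exp over the weighted component densities. The following is a minimal Python sketch of that idea for a one-dimensional Gaussian mixture; it is an illustration only, not the project's Java API, and the function names are made up for this example.

        import numpy as np
        from scipy.special import logsumexp

        def log_gaussian(x, mean, var):
            # Log-density of a univariate Gaussian, evaluated directly in log-space.
            return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

        def log_mixture_density(x, weights, means, variances):
            # log p(x) = logsumexp_k( log w_k + log N(x | mu_k, var_k) ),
            # which avoids underflow when individual component densities are tiny.
            log_terms = [np.log(w) + log_gaussian(x, m, v)
                         for w, m, v in zip(weights, means, variances)]
            return logsumexp(log_terms)

        # Toy two-component mixture evaluated at x = 0.5
        print(log_mixture_density(0.5, [0.3, 0.7], [0.0, 1.0], [1.0, 0.5]))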
  • 3
    A distributed genetic engine which evolves a black-and-white image vectorizer. The individuals have an "eye", a "brain" and a "hand". The purpose of the project is to gain insight into the self-organization (if any) of drawing primitives in the individual's brain.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    A platform (SDK) to create an open Arena where developers can program and develop teams and/or individual bots to compete against each other.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    DeepSeek-V3.2-Speciale

    High-compute ultra-reasoning model surpassing GPT-5

    DeepSeek-V3.2-Speciale is the high-compute, ultra-reasoning variant of DeepSeek-V3.2, designed specifically to push the boundaries of mathematical, logical, and algorithmic intelligence. It builds on the DeepSeek Sparse Attention (DSA) framework, delivering dramatically improved long-context efficiency while preserving full model quality. Unlike the standard version, Speciale is tuned exclusively for deep reasoning and therefore does not support tool-calling, focusing its full capacity on...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    This project implements various optimization heuristics and meta-heuristics (such as local search, VND, GRASP, simulated annealing, and more still to come) for finding solutions to the post-enrolment course timetabling problem (a generic sketch of one such heuristic follows this entry).
    Downloads: 0 This Week
    Last Update:
    See Project
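    As a rough illustration of the kind of meta-heuristic listed in that entry, here is a minimal, generic simulated-annealing loop in Python. The cost function, neighbour move, and cooling schedule are placeholders for this sketch and do not reflect the project's actual timetabling implementation.

        import math
        import random

        def simulated_annealing(initial, cost, neighbour, t_start=100.0, t_end=0.01, alpha=0.95):
            # Generic simulated annealing: worse moves are accepted with probability exp(-delta / T).
            current, current_cost = initial, cost(initial)
            best, best_cost = current, current_cost
            t = t_start
            while t > t_end:
                candidate = neighbour(current)
                delta = cost(candidate) - current_cost
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current, current_cost = candidate, current_cost + delta
                    if current_cost < best_cost:
                        best, best_cost = current, current_cost
                t *= alpha  # geometric cooling schedule
            return best, best_cost

        # Toy usage: minimise (x - 3)^2 over integers with random +/-1 moves.
        solution, value = simulated_annealing(
            initial=0,
            cost=lambda x: (x - 3) ** 2,
            neighbour=lambda x: x + random.choice([-1, 1]),
        )
        print(solution, value)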
  • 7
    Interlogy is a next-generation knowledge storage and presentation system. It offers broad functionality for knowledge evaluation, combining the features of a forum, wiki, chat, and social network in one place.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    A generic engine to filter information. The goal is to show that the expressive power of a filter can appreciably reduce the amount of code needed to extract information, and that this can be done in Python.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    A multilayer neural network Sudoku generator that can be trained to produce puzzles with different degrees of difficulty and imitate those made by people.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Jatobo (Java TOC Bot) is another bot that connects to the ICQ and AIM networks via the TOC2 protocol. For that purpose it uses Jatoli (Java TOC Library), which is also included in this project. Feel free to use it for your own projects.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    Gamel (pronounced like camel) is a library written in C++ for use in games. Gamel includes common search and sort algorithms, easy-to-use networking methods, logging, 2D/3D distance calculation, entity classes for RPG and FPS games, pathfinding and more.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    Hunyuan-MT-7B

    Tencent’s 36-language state-of-the-art translation model

    Hunyuan-MT-7B is a large-scale multilingual translation model developed by Tencent, designed to deliver state-of-the-art translation quality across 36 languages, including several Chinese ethnic minority languages. It forms part of the Hunyuan Translation Model family, alongside Hunyuan-MT-Chimera, which ensembles outputs for even higher accuracy. Trained with a comprehensive framework spanning pretraining, cross-lingual pretraining, supervised fine-tuning, enhancement, and ensemble...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    Hermes 4

    Hermes 4 FP8: hybrid reasoning Llama-3.1-405B model by Nous Research

    Hermes 4 405B FP8 is a cutting-edge large language model developed by Nous Research, built on Llama-3.1-405B and optimized for frontier reasoning and alignment. It introduces a hybrid reasoning mode with explicit <think> segments, enabling the model to deliberate deeply when needed and switch to faster responses when desired. Post-training improvements include a vastly expanded corpus with ~60B tokens, boosting performance across math, code, STEM, logic, creativity, and structured outputs....
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Grok-2.5

    Large-scale xAI model for local inference with SGLang

    Grok-2.5 is a large-scale AI model developed and released by xAI in 2024, made available through Hugging Face for research and experimentation. The model is distributed as raw weights that require specialized infrastructure to run, rather than being hosted by inference providers. To use it, users must download over 500 GB of files and set them up locally with the SGLang inference engine. Grok-2.5 supports advanced inference with multi-GPU configurations, requiring at least 8 GPUs with more...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    NuMarkdown-8B-Thinking

    Reasoning-powered OCR VLM for converting complex documents to Markdown

    NuMarkdown-8B-Thinking is the first reasoning OCR vision-language model (VLM) designed to convert documents into clean Markdown optimized for retrieval-augmented generation (RAG). Built on Qwen 2.5-VL-7B and fine-tuned with synthetic Doc → Reasoning → Markdown examples, it generates thinking tokens before producing the final Markdown to better handle complex layouts and tables. It uses a two-phase training process: supervised fine-tuning (SFT) followed by reinforcement learning (GRPO) with a...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    GLM-4.5-Air

    Compact hybrid reasoning language model for intelligent responses

    GLM-4.5-Air is a multilingual large language model with 106 billion total parameters and 12 billion active parameters, designed for conversational AI and intelligent agents. It is part of the GLM-4.5 family developed by Zhipu AI, offering hybrid reasoning capabilities via two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses. The model is optimized for efficiency and deployment, delivering strong results across 12 industry benchmarks,...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    t5-base

    Flexible text-to-text transformer model for multilingual NLP tasks

    t5-base is a pre-trained transformer model from Google’s T5 (Text-To-Text Transfer Transformer) family that reframes all NLP tasks into a unified text-to-text format. With 220 million parameters, it can handle a wide range of tasks, including translation, summarization, question answering, and classification. Unlike traditional models like BERT, which output class labels or spans, T5 always generates text outputs. It was trained on the C4 dataset, along with a variety of supervised NLP...
    Downloads: 0 This Week
    Last Update:
    See Project
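    A minimal way to try the text-to-text interface of t5-base described above is the Hugging Face Transformers library. This sketch assumes the transformers and sentencepiece packages are installed; the translation prefix is one of T5's standard task prompts.

        from transformers import T5Tokenizer, T5ForConditionalGeneration

        tokenizer = T5Tokenizer.from_pretrained("t5-base")
        model = T5ForConditionalGeneration.from_pretrained("t5-base")

        # T5 frames every task as text-to-text; a plain-text prefix selects the task.
        inputs = tokenizer("translate English to German: The house is wonderful.",
                           return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=40)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))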
  • 18
    t5-small

    T5-Small: Lightweight text-to-text transformer for NLP tasks

    T5-Small is a lightweight variant of the Text-To-Text Transfer Transformer (T5), designed to handle a wide range of NLP tasks using a unified text-to-text approach. Developed by researchers at Google, this model reframes all tasks—such as translation, summarization, classification, and question answering—into the format of input and output as plain text strings. With only 60 million parameters, T5-Small is compact and suitable for fast inference or deployment in constrained environments. It...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Qwen2.5-VL-3B-Instruct

    Qwen2.5-VL-3B-Instruct: Multimodal model for chat, vision & video

    Qwen2.5-VL-3B-Instruct is a 3.75 billion parameter multimodal model by Qwen, designed to handle complex vision-language tasks in both image and video formats. As part of the Qwen2.5 series, it supports image-text-to-text generation with capabilities like chart reading, object localization, and structured data extraction. The model can serve as an intelligent visual agent capable of interacting with digital interfaces and understanding long-form videos by dynamically sampling resolution and...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    Llama-3.2-1B

    Llama 3.2–1B: Multilingual, instruction-tuned model for mobile AI

    meta-llama/Llama-3.2-1B is a lightweight, instruction-tuned generative language model developed by Meta, optimized for multilingual dialogue, summarization, and retrieval tasks. With 1.23 billion parameters, it offers strong performance in constrained environments like mobile devices, without sacrificing versatility or multilingual support. It is part of the Llama 3.2 family, trained on up to 9 trillion tokens and aligned using supervised fine-tuning, preference optimization, and safety...
    Downloads: 0 This Week
    Last Update:
    See Project
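    To experiment with Llama-3.2-1B locally, one option is the Transformers text-generation pipeline. This is a hedged sketch: it assumes you have been granted access to the gated meta-llama repository on Hugging Face and that torch and accelerate are installed; the prompt is a placeholder.

        import torch
        from transformers import pipeline

        # Assumes an authenticated Hugging Face login with access to the gated meta-llama repo.
        generator = pipeline(
            "text-generation",
            model="meta-llama/Llama-3.2-1B",
            torch_dtype=torch.bfloat16,
            device_map="auto",
        )
        result = generator("Small language models suit mobile devices because",
                           max_new_tokens=60)
        print(result[0]["generated_text"])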
  • 21
    bart-large-cnn

    Summarization model fine-tuned on CNN/DailyMail articles

    facebook/bart-large-cnn is a large-scale sequence-to-sequence transformer model developed by Meta AI and fine-tuned specifically for abstractive text summarization. It uses the BART architecture, which combines a bidirectional encoder (like BERT) with an autoregressive decoder (like GPT). Pre-trained on corrupted text reconstruction, the model was further trained on the CNN/DailyMail dataset—a collection of news articles paired with human-written summaries. It performs particularly well in...
    Downloads: 0 This Week
    Last Update:
    See Project
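    A short, commonly used way to run facebook/bart-large-cnn for summarization is the Transformers pipeline API; the article text below is just a placeholder.

        from transformers import pipeline

        # Summarization pipeline backed by facebook/bart-large-cnn.
        summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

        article = ("The tower is 324 metres tall, about the same height as an 81-storey "
                   "building, and the tallest structure in Paris. Its base is square, "
                   "measuring 125 metres on each side.")
        summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
        print(summary[0]["summary_text"])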
  • 22
    mms-300m-1130-forced-aligner

    CTC-based forced aligner for audio-text in 158 languages

    mms-300m-1130-forced-aligner is a multilingual forced alignment model based on Meta’s MMS-300M wav2vec2 checkpoint, adapted for Hugging Face’s Transformers library. It supports forced alignment between audio and corresponding text across 158 languages, offering broad multilingual coverage. The model enables accurate word- or phoneme-level timestamping using Connectionist Temporal Classification (CTC) emissions. Unlike other tools, it provides significant memory efficiency compared to the...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Qwen2.5-VL-7B-Instruct

    Multimodal 7B model for image, video, and text understanding tasks

    Qwen2.5-VL-7B-Instruct is a multimodal vision-language model developed by the Qwen team, designed to handle text, images, and long videos with high precision. Fine-tuned from Qwen2.5-VL, this 7-billion-parameter model can interpret visual content such as charts, documents, and user interfaces, as well as recognize common objects. It supports complex tasks like visual question answering, localization with bounding boxes, and structured output generation from documents. The model is also...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    wav2vec2-large-xlsr-53-portuguese

    Portuguese ASR model fine-tuned on XLSR-53 for 16kHz audio input

    wav2vec2-large-xlsr-53-portuguese is an automatic speech recognition (ASR) model fine-tuned on Portuguese using the Common Voice 6.1 dataset. It is based on Facebook’s wav2vec2-large-xlsr-53, a multilingual self-supervised learning model, and is optimized to transcribe Portuguese speech sampled at 16kHz. The model performs well without a language model, though adding one can improve word error rate (WER) and character error rate (CER). It achieves a WER of 11.3% (or 9.01% with LM) on Common...
    Downloads: 0 This Week
    Last Update:
    See Project
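    A minimal transcription sketch with the Transformers automatic-speech-recognition pipeline is shown below. The model id and audio filename are placeholders: substitute the model's full Hugging Face repo id and a real Portuguese recording.

        from transformers import pipeline

        # Placeholder: replace with the model's full Hugging Face repo id (user/model).
        MODEL_ID = "wav2vec2-large-xlsr-53-portuguese"

        # The model expects 16 kHz input; the pipeline decodes and resamples audio files via ffmpeg.
        asr = pipeline("automatic-speech-recognition", model=MODEL_ID)
        print(asr("sample_portuguese.wav")["text"])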
  • 25
    bge-base-en-v1.5

    Efficient English embedding model for semantic search and retrieval

    bge-base-en-v1.5 is an English sentence embedding model from BAAI optimized for dense retrieval tasks, part of the BGE (BAAI General Embedding) family. It is a fine-tuned BERT-based model designed to produce high-quality, semantically meaningful embeddings for tasks like semantic similarity, information retrieval, classification, and clustering. This version (v1.5) improves retrieval performance and stabilizes similarity score distribution without requiring instruction-based prompts. With...
    Downloads: 0 This Week
    Last Update:
    See Project
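    For dense retrieval with bge-base-en-v1.5, one common route is the sentence-transformers package; this is a small sketch under that assumption, with toy documents and a toy query.

        from sentence_transformers import SentenceTransformer, util

        model = SentenceTransformer("BAAI/bge-base-en-v1.5")

        docs = ["The cat sits on the mat.", "Stock markets fell sharply today."]
        query = "Where is the cat?"

        # Normalised embeddings make the dot product equal to cosine similarity.
        doc_emb = model.encode(docs, normalize_embeddings=True)
        query_emb = model.encode(query, normalize_embeddings=True)

        print(util.dot_score(query_emb, doc_emb))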