Open Source BSD Artificial Intelligence Software - Page 5

Artificial Intelligence Software for BSD

  • 1
    Fast Artificial Neural Network Library is a free open source neural network library, which implements multilayer artificial neural networks in C with support for both fully connected and sparsely connected networks. Cross-platform execution in both fixed and floating point is supported. It includes a framework for easy handling of training data sets. It is easy to use, versatile, well documented, and fast. Bindings to more than 15 programming languages are available. An easy-to-read introduction article and a reference manual accompany the library, with examples and recommendations on how to use it. Several graphical user interfaces are also available for the library. A minimal usage sketch follows this entry.
    Downloads: 20 This Week
    Last Update:
    See Project
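    The sketch below shows the typical train-and-run workflow, assuming the fann2 Python binding (one of the many language bindings mentioned above); method names follow the classic libfann API and may vary slightly between binding versions.

        # Assumed fann2 binding: create, train, query, and save a small network.
        from fann2 import libfann

        ann = libfann.neural_net()
        ann.create_standard_array([2, 3, 1])     # 2 inputs, 3 hidden units, 1 output
        ann.set_activation_function_hidden(libfann.SIGMOID_SYMMETRIC)
        ann.set_activation_function_output(libfann.SIGMOID_SYMMETRIC)

        # Train on a FANN-format data file (e.g. the XOR example shipped with FANN):
        # arguments are max_epochs, epochs_between_reports, desired_error.
        ann.train_on_file("xor.data", 1000, 100, 0.001)

        print(ann.run([1.0, -1.0]))              # query the trained network
        ann.save("xor.net")                      # persist for later reuse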
  • 2
    AudioCraft

    AudioCraft is a library for audio processing and generation

    AudioCraft is a PyTorch library for text-to-audio and text-to-music generation, packaging research models and tooling for training and inference. It includes MusicGen for music generation conditioned on text (and optionally melody) and AudioGen for text-conditioned sound effects and environmental audio. Both models operate over discrete audio tokens produced by a neural codec (EnCodec), which acts like a tokenizer for waveforms and enables efficient sequence modeling. The repo provides inference scripts, checkpoints, and simple Python APIs so you can generate clips from prompts or incorporate the models into applications. It also contains training code and recipes, so researchers can fine-tune on custom data or explore new objectives without building infrastructure from scratch. Example notebooks, CLI tools, and audio utilities help with prompt design, conditioning on reference audio, and post-processing to produce ready-to-share outputs.
    Downloads: 3 This Week
    Last Update:
    See Project
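    A minimal sketch of the text-to-music path described above, using AudioCraft's MusicGen Python API; the checkpoint name, clip duration, and prompts are illustrative and assume a working PyTorch install.

        # Generate short clips from text prompts with MusicGen, then write WAV files.
        from audiocraft.models import MusicGen
        from audiocraft.data.audio import audio_write

        model = MusicGen.get_pretrained("facebook/musicgen-small")
        model.set_generation_params(duration=8)      # 8-second clips

        prompts = ["lo-fi hip hop beat with warm piano", "energetic synthwave with driving bass"]
        wav = model.generate(prompts)                # tensor of shape [batch, channels, samples]

        for idx, one_wav in enumerate(wav):
            # Writes clip_0.wav, clip_1.wav, ... with loudness normalization
            audio_write(f"clip_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")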
  • 3
    AudioLM - Pytorch

    Implementation of AudioLM audio generation model in Pytorch

    Implementation of AudioLM, a language-modeling approach to audio generation from Google Research, in PyTorch. It also extends the work with classifier-free guidance conditioned on T5 text embeddings, which enables text-to-audio and TTS, neither of which is offered in the paper. Yes, this means VALL-E can be trained from this repository; it is essentially the same. This repository also contains an MIT-licensed version of SoundStream. It is also compatible with EnCodec, but be aware that EnCodec carries a more restrictive non-commercial license if you choose to use it.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 4
    DINOv2

    PyTorch code and models for the DINOv2 self-supervised learning

    DINOv2 is a self-supervised vision learning framework that produces strong, general-purpose image representations without using human labels. It builds on the DINO idea of student–teacher distillation and adapts it to modern Vision Transformer backbones with a carefully tuned recipe for data augmentation, optimization, and multi-crop training. The core promise is that a single pretrained backbone can transfer well to many downstream tasks—from linear probing on classification to retrieval, detection, and segmentation—often requiring little or no fine-tuning. The repository includes code for training, evaluating, and feature extraction, with utilities to run k-NN or linear evaluation baselines to assess representation quality. Pretrained checkpoints cover multiple model sizes so practitioners can trade accuracy for speed and memory depending on their deployment constraints.
    Downloads: 3 This Week
    Last Update:
    See Project
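    A short sketch of the feature-extraction use described above: pull a pretrained backbone through torch.hub and embed a single image. The hub entry name and preprocessing follow the commonly published recipe and may differ between releases.

        # Extract a global image embedding from a pretrained DINOv2 ViT-S/14 backbone.
        import torch
        from PIL import Image
        from torchvision import transforms

        backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        backbone.eval()

        preprocess = transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),              # 224 is divisible by the 14-pixel patch size
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            features = backbone(img)                 # [1, embed_dim] representation for k-NN / linear probes
        print(features.shape)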
  • 5
    DeepSeek Math

    Pushing the Limits of Mathematical Reasoning in Open Language Models

    DeepSeek-Math is DeepSeek's specialized model (together with accompanying datasets and evaluations) focused on mathematical reasoning, symbolic manipulation, proof steps, and advanced quantitative problem solving. The repository is likely to include fine-tuning routines or task datasets (e.g. MATH, GSM8K, ARB), demonstration notebooks, prompt templates, and evaluation results on math benchmarks. The goal is to push DeepSeek's performance in domains that require rigorous symbolic steps, calculus, linear algebra, number theory, or multi-step derivations. The repo may also include modules that integrate external computational tools (e.g. a computer algebra system) or calculator backends to enhance correctness. Because math reasoning is a high bar for LLMs, DeepSeek-Math aims to showcase the model's ability not just in natural text but in precise formal reasoning. A minimal loading sketch follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
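    As a concrete illustration of the entry above, the sketch below loads one of the publicly released DeepSeekMath checkpoints through Hugging Face transformers; the model ID, dtype, and prompt wording are assumptions rather than anything prescribed by the repository.

        # Query a DeepSeekMath instruct checkpoint for step-by-step reasoning.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "deepseek-ai/deepseek-math-7b-instruct"   # assumed public checkpoint
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.bfloat16, device_map="auto"
        )

        messages = [{"role": "user",
                     "content": "Integrate x^2 * e^x dx. Please reason step by step."}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(inputs, max_new_tokens=512)
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))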
  • 6
    DeepSeekMath-V2

    Towards self-verifiable mathematical reasoning

    DeepSeekMath-V2 is a large-scale open-source AI model designed specifically for advanced mathematical reasoning, theorem proving, and rigorous proof verification. It is built by DeepSeek as a successor to their earlier math-specialist models. Unlike general-purpose LLMs that might generate plausible-looking math but sometimes hallucinate or mishandle rigorous logic, Math-V2 is engineered not only to generate solutions but also to self-verify them: it examines the derivations, checks logical consistency, and flags or corrects mistakes, producing proofs together with their verification rather than just a final answer. Under the hood, Math-V2 uses a massive Mixture-of-Experts (MoE) architecture (total parameter count reportedly in the hundreds of billions, with only a fraction of experts activated per token) derived from DeepSeek's experimental base architecture. For math problems, it employs a generator-verifier loop: it first generates a candidate proof (or solution path), then runs a verifier that assesses correctness and completeness.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 7
    FastbuildAI

    An open-source AI framework for developers and entrepreneurs

    FastbuildAI is a pragmatic framework for building agentic applications that can plan, call tools, and produce reliable outputs without forcing you into a heavy platform. It emphasizes fast iteration: you describe tasks declaratively, wire up tools with typed schemas, and let the runtime handle planning, retries, and result aggregation. The project leans into reproducibility with run records, seed control, and structured traces so you can compare behaviors across versions and inputs. Prompt and memory management are treated as first-class concerns, enabling short-lived scratchpads for reasoning as well as long-horizon state when an agent operates over multiple sessions. The codebase favors small, composable pieces—executors, routers, guards—so teams can adopt just what they need instead of buying into a monolith. It is equally comfortable running locally for development or behind a simple API for production bots and automation workflows.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 8
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend you use the Hugging Face transformers library instead which supports GPT-NeoX models.
    Downloads: 3 This Week
    Last Update:
    See Project
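    Following the recommendation above to use Hugging Face transformers for generic inference, here is a minimal sketch; the checkpoint name is illustrative, and the 20B model needs substantial GPU memory (or quantization) to load.

        # Run a GPT-NeoX-family checkpoint for text generation via transformers.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        name = "EleutherAI/gpt-neox-20b"
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

        prompt = "GPT-NeoX is a library for"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))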
  • 9
    Generative AI Docs

    Documentation for Google's Gen AI site - including Gemini API & Gemma

    Generative AI Docs is Google's official documentation repository for the Gemini API, Gemma, and related generative AI developer offerings. It contains guides, API references, and examples for developers building applications using Google's large language models, text-to-image models, embeddings, and multimodal capabilities. The repository includes the markdown source files that power the Google AI developer documentation site, as well as sample code snippets in Python, JavaScript, and other languages that demonstrate how to use Google's Generative AI SDKs and REST APIs effectively.
    Downloads: 3 This Week
    Last Update:
    See Project
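    The documented workflows center on snippets like the Python sketch below for the Gemini API; the model name and environment variable are illustrative, and the SDK surface changes as the API evolves.

        # Call the Gemini API with the official Python SDK (google-generativeai).
        import os
        import google.generativeai as genai

        genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
        model = genai.GenerativeModel("gemini-1.5-flash")
        response = model.generate_content("Explain text embeddings in two sentences.")
        print(response.text)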
  • 10
    HunyuanImage-3.0

    A Powerful Native Multimodal Model for Image Generation

    HunyuanImage-3.0 is a powerful, native multimodal text-to-image generation model released by Tencent’s Hunyuan team. It unifies multimodal understanding and generation in a single autoregressive framework, combining text and image modalities seamlessly rather than relying on separate image-only diffusion components. It uses a Mixture-of-Experts (MoE) architecture with many expert subnetworks to scale efficiently, deploying only a subset of experts per token, which allows large parameter counts without linear inference cost explosion. The model is intended to be competitive with closed-source image generation systems, aiming for high fidelity, prompt adherence, fine detail, and even “world knowledge” reasoning (i.e. leveraging context, semantics, or common sense in generation). The GitHub repo includes code, scripts, model loading instructions, inference utilities, prompt handling, and integration with standard ML tooling (e.g. Hugging Face / Transformers).
    Downloads: 3 This Week
    Last Update:
    See Project
  • 11
    LLaMA 3

    The official Meta Llama 3 GitHub site

    This repository is the former home for Llama 3 model artifacts and getting-started code, covering pre-trained and instruction-tuned variants across multiple parameter sizes. It introduced the public packaging of weights, licenses, and quickstart examples that helped developers fine-tune or run the models locally and on common serving stacks. As the Llama stack evolved, Meta consolidated repositories and marked this one deprecated, pointing users to newer, centralized hubs for models, utilities, and docs. Even as a deprecated repo, it documents the transition path and preserves references that clarify how Llama 3 releases map into the current ecosystem. Practically, it functioned as a bridge between Llama 2 and later Llama releases by standardizing distribution and starter code for inference and fine-tuning. Teams still treat it as historical reference material for version lineage and migration notes.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 12
    MLflow

    Open source platform for the machine learning lifecycle

    MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud).
    Downloads: 3 This Week
    Last Update:
    See Project
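    A minimal tracking sketch of the experiment-logging workflow described above, using scikit-learn as the example library; run parameters and metrics are illustrative.

        # Log parameters, a metric, and a model artifact for one MLflow run.
        import mlflow
        import mlflow.sklearn
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import train_test_split

        X, y = load_diabetes(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        with mlflow.start_run():
            params = {"n_estimators": 100, "max_depth": 6}
            model = RandomForestRegressor(**params).fit(X_train, y_train)

            mlflow.log_params(params)
            mlflow.log_metric("mse", mean_squared_error(y_test, model.predict(X_test)))
            mlflow.sklearn.log_model(model, "model")    # packaged for later serving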
  • 13
    MatlabFunc

    Matlab codes for feature learning

    MatlabFunc is a collection of MATLAB functions developed by the ZJULearning group to support various tasks in computer vision, machine learning, and numerical computation. The repository brings together a wide range of utility scripts, algorithms, and implementations that serve as building blocks for research and development. These functions cover areas such as matrix operations, optimization, data processing, and visualization, making them broadly applicable across different research domains. The project is intended to provide reusable and adaptable MATLAB code that can save time for researchers and students working on experimental or applied projects. By consolidating these tools in one place, MatlabFunc serves as a practical reference and toolkit for both academic and engineering purposes. Contributions and improvements from the community are encouraged, allowing the repository to grow into a richer resource over time.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 14
    Music Source Separation

    Separate audio recordings into individual sources

    Music Source Separation is a PyTorch-based open-source implementation for the task of separating a music (or audio) recording into its constituent sources — for example isolating vocals, instruments, bass, accompaniment, or background from a mixed track. It aims to give users the ability to take any existing song and decompose it into separate stems (vocals, accompaniment, etc.), or to train custom separation models on their own datasets (e.g. for speech enhancement, instrument isolation, or other audio-separation tasks). The repository provides training scripts (e.g. using datasets such as MUSDB18), preprocessing steps (audio-to-HDF5 packing, indexing), evaluation pipelines, and inference scripts to perform separation on arbitrary audio files. This makes the project useful both for researchers in music information retrieval / audio machine learning and for hobbyists or practitioners who want to experiment with remixing, karaoke, or audio editing.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 15
    OpenAI Cookbook

    Examples and guides for using the OpenAI API

    openai-cookbook is a repository containing example code, tutorials, and guidance for how to build real applications on top of the OpenAI API. It covers a wide range of use cases: prompt engineering, embeddings and semantic search, fine-tuning, agent architectures, function calling, working with images, chat workflows, and more. The content is primarily in Python (notebooks, scripts), but the conceptual guidance is applicable across languages. The repository is kept up to date and often expanded, and its examples are intended to serve both beginners and intermediate users of the API. It also includes deployment recipes, integration snippets (e.g. with GitHub Actions), and production considerations. Because OpenAI’s API evolves rapidly, the Cookbook acts as a living, community-curated reference to show “how to do X with the API” rather than only reprinting documentation.
    Downloads: 3 This Week
    Last Update:
    See Project
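    The Cookbook's examples build on calls like the chat-completion sketch below; the model name is illustrative, and an OPENAI_API_KEY must be set in the environment.

        # Minimal chat completion with the official OpenAI Python SDK.
        from openai import OpenAI

        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": "Summarize what an embedding is in one sentence."},
            ],
        )
        print(response.choices[0].message.content)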
  • 16
    Resume-Matcher

    Improve your resumes with Resume Matcher

    Resume-Matcher is a command-line application that compares resumes against job descriptions using natural language processing. It provides a compatibility score based on keyword relevance and highlights areas where the resume aligns—or doesn't—with the target role. Designed for job seekers and HR professionals, it helps improve resume tailoring and streamlines candidate screening.
    Downloads: 3 This Week
    Last Update:
    See Project
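    The scoring idea described above, keyword and semantic overlap between a resume and a job description, can be illustrated with a generic TF-IDF cosine-similarity sketch; this is not the project's own implementation, and the file names are placeholders.

        # Generic resume-vs-job-description compatibility score via TF-IDF similarity.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        resume = open("resume.txt").read()
        job_description = open("job.txt").read()

        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform([resume, job_description])

        score = cosine_similarity(matrix[0], matrix[1])[0, 0]
        print(f"Compatibility score: {score:.0%}")

        # Keywords present in the posting but absent from the resume
        vocab = vectorizer.get_feature_names_out()
        resume_row, job_row = matrix.toarray()
        missing = [w for w, r, j in zip(vocab, resume_row, job_row) if j > 0 and r == 0]
        print("Consider adding:", missing[:10])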
  • 17
    Solon

    Java enterprise application development framework

    Solon is a full-scenario Java enterprise application framework that positions itself as a lean, high-performance alternative to heavy stacks. It advertises large concurrency gains, lower memory use, much faster startup, and dramatically smaller packages while remaining compatible from Java 8 through Java 24. The framework focuses on restrained APIs and an open ecosystem, with modules that cover web, data, cloud, and microservice patterns. Its messaging emphasizes “replaceable Spring” ergonomics—keeping developer familiarity while reducing overhead and improving deployment characteristics. Releases and companion repositories present an active community and production-oriented guidance. The broader Solon org also maintains AI/agent extensions (e.g., MCP components) that complement application development.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 18
    Speech-AI-Forge

    Speech-AI-Forge is a project developed around TTS generation model

    Speech-AI-Forge is a full-stack project built around modern text-to-speech generation models, providing both an API server and a Gradio-based web UI for interactive use. At its core, it acts as a hub that wires together multiple speech-related capabilities, including TTS, speech-to-text, and LLM-based control flows, behind a consistent interface. The system is designed to be deployed in several ways: you can try it online via hosted demos, spin it up in a one-click Colab environment, run it in Docker containers, or set it up locally with its environment preparation scripts. It is model-agnostic and advertises support for a variety of TTS and speech models such as ChatTTS, CosyVoice, Fish-Speech, FireredTTS and others, as well as Whisper-based ASR, giving you a flexible playground for experimenting with different speech stacks. The project also integrates with general-purpose LLMs (for example GPT- or LLaMA-style models), which can be used to pre-process text and manage conversations.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 19
    Vision Transformer Pytorch

    Implementation of Vision Transformer, a simple way to achieve SOTA

    This repository provides a from-scratch, minimalist implementation of the Vision Transformer (ViT) in PyTorch, focusing on the core architectural pieces needed for image classification. It breaks down the model into patch embedding, positional encoding, multi-head self-attention, feed-forward blocks, and a classification head so you can understand each component in isolation. The code is intentionally compact and modular, which makes it easy to tinker with hyperparameters, depth, width, and attention dimensions. Because it stays close to vanilla PyTorch, you can integrate custom datasets and training loops without framework lock-in. It’s widely used as an educational reference for people learning transformers in vision and as a lightweight baseline for research prototypes. The project encourages experimentation—swap optimizers, change augmentations, or plug the transformer backbone into downstream tasks.
    Downloads: 3 This Week
    Last Update:
    See Project
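    A small sketch of instantiating such a compact ViT for classification, assuming an interface in the style of the vit-pytorch package; all hyperparameters are illustrative.

        # Build a small Vision Transformer and run a dummy batch through it.
        import torch
        from vit_pytorch import ViT

        model = ViT(
            image_size=224,
            patch_size=16,       # 224 / 16 = 14 x 14 patches
            num_classes=10,
            dim=256,             # token embedding width
            depth=6,             # number of transformer blocks
            heads=8,
            mlp_dim=512,
        )

        images = torch.randn(4, 3, 224, 224)    # dummy batch
        logits = model(images)                  # shape [4, 10]
        print(logits.shape)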
  • 20
    VoltAgent

    Open Source TypeScript AI Agent Framework

    An AI Agent Framework provides the foundational structure and tools needed to build applications powered by autonomous agents. These agents, often driven by Large Language Models (LLMs), can perceive their environment, make decisions, and take actions to achieve specific goals. Building such agents from scratch involves managing complex interactions with LLMs, handling state, connecting to external tools and data, and orchestrating workflows. VoltAgent is an open source TypeScript framework that acts as this essential toolkit. It simplifies the development of AI agent applications by providing modular building blocks, standardized patterns, and abstractions. Whether you're creating chatbots, virtual assistants, automated workflows, or complex multi-agent systems, VoltAgent handles the underlying complexity, allowing you to focus on defining your agents' capabilities and logic.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 21
    deep-q-learning

    Minimal Deep Q Learning (DQN & DDQN) implementations in Keras

    The deep-q-learning repository authored by keon provides a Python-based implementation of the Deep Q-Learning algorithm — a cornerstone method in reinforcement learning. It implements the core logic needed to train an agent using Q-learning with neural networks (i.e. approximating Q-values via deep nets), setting up environment interaction loops, experience replay, network updates, and policy behavior. For learners and researchers interested in reinforcement learning, this repo offers a concrete, runnable example bridging theory and practice: you can execute the code, play with hyperparameters, observe convergence behavior, and see how deep Q-learning learns policies over time in standard environments. Because it’s self-contained and Python-based, it's well-suited for experimentation, modifications, or extension — for instance adapting to custom Gym environments, tweaking network architecture, or combining with other RL techniques.
    Downloads: 3 This Week
    Last Update:
    See Project
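    The core loop described above (epsilon-greedy action selection, experience replay, and Bellman targets) looks roughly like the sketch below on CartPole; hyperparameters are illustrative, and this is a generic DQN outline rather than the repo's exact code.

        # Compact DQN training loop with a Keras Q-network and experience replay.
        import random
        from collections import deque

        import gymnasium as gym        # or the classic `gym` package
        import numpy as np
        from tensorflow import keras

        env = gym.make("CartPole-v1")
        state_dim, n_actions = env.observation_space.shape[0], env.action_space.n

        model = keras.Sequential([
            keras.Input(shape=(state_dim,)),
            keras.layers.Dense(24, activation="relu"),
            keras.layers.Dense(24, activation="relu"),
            keras.layers.Dense(n_actions, activation="linear"),
        ])
        model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")

        memory, gamma, epsilon = deque(maxlen=2000), 0.95, 1.0

        for episode in range(200):
            state, _ = env.reset()
            for t in range(500):
                # Epsilon-greedy action selection
                if random.random() < epsilon:
                    action = env.action_space.sample()
                else:
                    action = int(np.argmax(model.predict(state[None], verbose=0)[0]))
                next_state, reward, terminated, truncated, _ = env.step(action)
                done = terminated or truncated
                memory.append((state, action, reward, next_state, done))
                state = next_state
                if done:
                    break
            # Experience replay: fit the Q-network toward Bellman targets
            for s, a, r, s2, d in random.sample(memory, min(32, len(memory))):
                target = r if d else r + gamma * np.max(model.predict(s2[None], verbose=0)[0])
                q = model.predict(s[None], verbose=0)
                q[0][a] = target
                model.fit(s[None], q, epochs=1, verbose=0)
            epsilon = max(0.01, epsilon * 0.995)   # decay exploration over time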
  • 22
    robosuite

    A Modular Simulation Framework and Benchmark for Robot Learning

    Robosuite is a modular and extensible simulation framework for robotic manipulation tasks, built on top of MuJoCo. Developed by the ARISE Initiative, Robosuite offers a set of standardized benchmarks and customizable environments designed to advance research in robotic manipulation, control, and imitation learning. It emphasizes realistic simulations and ease of use for both single-task and multi-task learning.
    Downloads: 3 This Week
    Last Update:
    See Project
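    A minimal rollout sketch for the standardized environments described above; the task and robot names follow robosuite's documented set, and the random policy is purely illustrative.

        # Create a robosuite manipulation environment and step it with random actions.
        import numpy as np
        import robosuite as suite

        env = suite.make(
            env_name="Lift",              # benchmark task
            robots="Panda",               # simulated robot arm
            has_renderer=False,
            has_offscreen_renderer=False,
            use_camera_obs=False,
        )

        obs = env.reset()
        low, high = env.action_spec       # per-dimension action bounds
        for _ in range(100):
            action = np.random.uniform(low, high)
            obs, reward, done, info = env.step(action)
        env.close()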
  • 23
    chessPDFBrowser

    Chess application which allows working with chess PDF books and PGNs.

    Chess application which allows working with chess PDF books and PGNs. You can work with the chess games in a PDF and edit their tree of variants. Graphical environment. Standard PGN tags and PGN comments. OCR-like FEN detection from chess board position images. Connection to UCI chess engines (such as Stockfish), with position analysis and full-game analysis. You can now play games against UCI engines. A pdf2pgn command-line tool is included. Detailed documentation. Multilanguage support, currently for English, Spanish, and Catalan. Dark mode option. JDK 17 compatibility.
    Downloads: 41 This Week
    Last Update:
    See Project
  • 24
    A free OCR-A font, conformant to ANSI X3.17-1977, in TrueType format, with sources.
    Downloads: 72 This Week
    Last Update:
    See Project
  • 25
    Qwen2.5

    Open source large language model by Alibaba

    Qwen2.5 is a series of large language models developed by the Qwen team at Alibaba Cloud, designed to enhance natural language understanding and generation across multiple languages. The models are available in various sizes, including 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B parameters, catering to diverse computational requirements. Trained on a comprehensive dataset of up to 18 trillion tokens, Qwen2.5 models exhibit significant improvements in instruction following, long-text generation (exceeding 8,000 tokens), and structured data comprehension, such as tables and JSON formats. They support context lengths up to 128,000 tokens and offer multilingual capabilities in over 29 languages, including Chinese, English, French, Spanish, and more. The models are open-source under the Apache 2.0 license, with resources and documentation available on platforms like Hugging Face and ModelScope. This is a full ZIP snapshot of the Qwen2.5 code.
    Downloads: 38 This Week
    Last Update:
    See Project
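    A minimal sketch of running one of the instruction-tuned Qwen2.5 checkpoints through Hugging Face transformers; the model size is chosen for illustration, and larger variants load the same way given enough memory.

        # Chat with a Qwen2.5 instruct model via transformers.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2.5-7B-Instruct"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto",
                                                     device_map="auto")

        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Return a JSON object describing three BSD variants."},
        ]
        inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                               return_tensors="pt").to(model.device)
        outputs = model.generate(inputs, max_new_tokens=256)
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))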