  • 1
    Genv

    GPU environment management and cluster orchestration

    Genv is an open-source environment and cluster management system for GPUs. Genv lets you easily control, configure, monitor, and enforce the GPU resources that you are using on a GPU machine or cluster. It is intended to simplify GPU allocation for data scientists without requiring code changes.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    LMCache

    Supercharge Your LLM with the Fastest KV Cache Layer

    ...These capabilities aim to lower latency, cut GPU cycles, and stabilize performance for production workloads with overlapping prompts or retrieval-augmented contexts. The end result is a cache fabric for LLMs that complements engines rather than replacing them.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    HunyuanVideo

    HunyuanVideo: A Systematic Framework For Large Video Generation Model

    ...The framework aims to push the boundaries of video generation quality, incorporating multiple innovative approaches to improve the realism and coherence of the generated content. FP8 model weights are released to reduce GPU memory usage and improve efficiency, and parallel inference code speeds up sampling; utilities and tests are included.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 4
    Lama Cleaner

    Image inpainting tool powered by SOTA AI Model

    ...You can use it to remove unwanted objects, defects, or people from your pictures, or to erase and replace anything in them. Many AIGC creators use Lama Cleaner to clean up their work. Completely free and open source, fully self-hosted, with CPU & GPU support. Windows 1-click installer, a classical image inpainting algorithm powered by cv2, multiple SOTA AI models, and various inpainting strategies. Runs as a desktop application. Interactive segmentation of any object.
    Downloads: 51 This Week
    Last Update:
    See Project
  • 5
    SkyPilot

    SkyPilot: Run AI and batch jobs on any infra

    SkyPilot is a framework for running AI and batch workloads on any infrastructure (Kubernetes or 12+ clouds), offering unified execution, high cost savings, and high GPU availability through a simple interface.
    Downloads: 0 This Week
    Last Update:
    See Project
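    To illustrate the simple interface SkyPilot advertises, here is a minimal sketch using its Python API; the task command, accelerator spec, and cluster name are illustrative placeholders, and the exact fields assume a recent SkyPilot release.

    ```python
    # Minimal SkyPilot sketch (assumes `pip install skypilot` and at least one
    # configured cloud or Kubernetes context; names and commands are illustrative).
    import sky

    task = sky.Task(
        setup="pip install torch",          # runs once when the cluster is provisioned
        run="python train.py --epochs 1",   # the AI/batch job to execute
    )
    # Request one GPU; SkyPilot picks available infra that satisfies the requirement.
    task.set_resources(sky.Resources(accelerators="A100:1"))

    # Launch on whichever cloud or Kubernetes cluster is available.
    sky.launch(task, cluster_name="demo-cluster")
    ```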
  • 6
    clone-voice

    A sound cloning tool with a web interface, using your voice

    ...The app is designed to be very easy to use: you download a precompiled package, double-click app.exe, and it launches a browser-based web interface where you control cloning and synthesis. It does not require an NVIDIA GPU to run basic tasks, although GPU acceleration can be used when available, making it accessible on modest machines. The tool supports around sixteen languages, including Chinese, English, Japanese, Korean, French, German, Italian, and others, and can capture reference voices directly from a microphone or from uploaded audio.
    Downloads: 10 This Week
    Last Update:
    See Project
  • 7
    FastChat

    Open platform for training, serving, and evaluating language models

    ...If you do not have enough memory, you can enable 8-bit compression by adding --load-8bit to the commands above. This can reduce memory usage by around half, with slightly degraded model quality, and is compatible with the CPU, GPU, and Metal backends. Vicuna-13B with 8-bit compression can run on a single NVIDIA 3090/4080/T4/V100 (16 GB) GPU. In addition, you can add --cpu-offloading to the commands above to offload weights that don't fit on your GPU into CPU memory. This requires 8-bit compression to be enabled and the bitsandbytes package to be installed, which is only available on Linux.
    Downloads: 2 This Week
    Last Update:
    See Project
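    As a sketch of the flags described above, the snippet below launches the FastChat CLI with 8-bit compression and CPU offloading via subprocess; it assumes fastchat is installed, and the model path is an illustrative example rather than a requirement.

    ```python
    # Sketch: launch the FastChat CLI with the memory-saving flags mentioned above.
    import subprocess

    subprocess.run([
        "python3", "-m", "fastchat.serve.cli",
        "--model-path", "lmsys/vicuna-13b-v1.5",  # assumed example model path
        "--load-8bit",        # 8-bit compression: roughly halves memory use
        "--cpu-offloading",   # offload weights that don't fit on the GPU (Linux + bitsandbytes)
    ])
    ```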
  • 8
    higgsfield

    Fault-tolerant, highly scalable GPU orchestration

    Higgsfield is an open-source, fault-tolerant, highly scalable GPU orchestration and machine learning framework designed for training models with billions to trillions of parameters, such as large language models (LLMs).
    Downloads: 7 This Week
    Last Update:
    See Project
  • 9
    Text Generation Web UI

    A gradio web UI for running Large Language Models like LLaMA

    ...Custom chat characters. Advanced chat features (send images, get audio responses with TTS). Very efficient text streaming. Parameter presets, 8-bit mode. Layers splitting across GPU(s), CPU, and disk. CPU mode, FlexGen, DeepSpeed ZeRO-3, API with streaming and without streaming. LLaMA model, including 4-bit GPTQ. RWKV model, LoRA (loading and training), Softprompts, and extensions.
    Downloads: 45 This Week
    Last Update:
    See Project
  • 10
    DeepSpeed

    Deep learning optimization library making distributed training easy

    ...DeepSpeed delivers extreme-scale model training for everyone, from data scientists training on massive supercomputers to those training on low-end clusters or even a single GPU. Using the current generation of GPU clusters with hundreds of devices, DeepSpeed's 3D parallelism can efficiently train deep learning models with trillions of parameters. With just a single GPU, DeepSpeed's ZeRO-Offload can train models with over 10B parameters, 10x bigger than the state of the art, democratizing multi-billion-parameter model training so that many deep learning scientists can explore bigger and better models. ...
    Downloads: 1 This Week
    Last Update:
    See Project
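    Below is a minimal ZeRO-Offload sketch of the kind of configuration the entry describes; the toy model, batch size, and optimizer settings are illustrative assumptions, not a tuned recipe, and the config keys assume a recent DeepSpeed release.

    ```python
    # Minimal ZeRO-Offload sketch (assumes `pip install deepspeed`; values are illustrative).
    import torch
    import deepspeed

    model = torch.nn.Linear(1024, 1024)  # toy model standing in for a real network

    ds_config = {
        "train_micro_batch_size_per_gpu": 1,
        "zero_optimization": {
            "stage": 2,
            "offload_optimizer": {"device": "cpu"},  # ZeRO-Offload: push optimizer state to CPU
        },
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    }

    engine, optimizer, _, _ = deepspeed.initialize(
        model=model, model_parameters=model.parameters(), config=ds_config
    )
    ```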
  • 11
    AlphaFold 3

    AlphaFold 3 inference pipeline

    ...Users can perform local predictions via Docker containers, integrating AlphaFold 3’s inference process with provided JSON input configurations. The software includes flexible options for running both data preprocessing and GPU-accelerated inference, allowing users to adapt to available computational resources.
    Downloads: 9 This Week
    Last Update:
    See Project
  • 12
    Video-subtitle-extractor

    A GUI tool for extracting hard-coded subtitle (hardsub) from videos

    ...Uses local OCR: there is no need to set up or call any API, or to access online OCR services such as Baidu or Ali, since text recognition is completed locally. Supports GPU acceleration, which yields higher accuracy and faster extraction. (CLI version) Users do not need to manually set the subtitle area; the project automatically detects it with a text detection model, filters out text outside the subtitle area, and removes watermark (station logo) text.
    Downloads: 49 This Week
    Last Update:
    See Project
  • 13
    ChatGLM-6B

    ChatGLM-6B: An Open Bilingual Dialogue Language Model

    ...It is optimized for dialogue and question answering with a balance between performance and deployability in consumer hardware settings. Support for quantized inference (INT4, INT8) to reduce GPU memory requirements. Automatic mode switching between precision/memory tradeoffs (full/quantized).
    Downloads: 9 This Week
    Last Update:
    See Project
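    As a sketch of the quantized inference mentioned above, the snippet below loads ChatGLM-6B with INT8 weights through Hugging Face transformers; it assumes the model's bundled remote code provides the quantize() and chat() helpers, and the prompt is illustrative.

    ```python
    # Sketch: INT8-quantized ChatGLM-6B inference to cut GPU memory requirements.
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = model.quantize(8).half().cuda().eval()  # quantize()/chat() come from the model's remote code

    response, history = model.chat(tokenizer, "Hello, what can you do?", history=[])
    print(response)
    ```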
  • 14
    Keras

    Python-based neural networks API

    Python Deep Learning library
    Downloads: 9 This Week
    Last Update:
    See Project
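    Since the entry only names the API, here is a tiny sketch of what defining, compiling, and training a Keras model looks like; the layer sizes and synthetic data are arbitrary placeholders.

    ```python
    # Tiny Keras sketch: build, compile, and train a small classifier on random data.
    import numpy as np
    from tensorflow import keras

    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

    x = np.random.rand(128, 20).astype("float32")   # placeholder features
    y = np.random.randint(0, 3, size=(128,))        # placeholder labels
    model.fit(x, y, epochs=1, batch_size=32)
    ```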
  • 15
    SageMaker Python SDK

    Training and deploying machine learning models on Amazon SageMaker

    ...You can also train and deploy models with Amazon algorithms, which are scalable implementations of core machine learning algorithms that are optimized for SageMaker and GPU training. If you have your own algorithms built into SageMaker-compatible Docker containers, you can train and host models using these as well.
    Downloads: 4 This Week
    Last Update:
    See Project
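    A minimal train-and-deploy sketch with the SageMaker Python SDK follows; the IAM role, S3 path, instance types, and framework versions are placeholders you would replace with your own account's values.

    ```python
    # Sketch: train a script-mode PyTorch job on SageMaker, then deploy an endpoint.
    from sagemaker.pytorch import PyTorch

    estimator = PyTorch(
        entry_point="train.py",                               # your training script
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
        instance_count=1,
        instance_type="ml.p3.2xlarge",                        # single-GPU training instance
        framework_version="2.1",
        py_version="py310",
    )
    estimator.fit({"train": "s3://my-bucket/train-data"})     # placeholder S3 channel

    # Host the trained model behind a real-time endpoint.
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
    ```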
  • 16
    CogVideo

    text and image to video generation: CogVideoX (2024) and CogVideo

    ...The codebase emphasizes practical deployment: prompt-optimization utilities (LLM-assisted long-prompt expansion), Colab notebooks, a Gradio web app, and multiple performance knobs (tiling/slicing, CPU offload, torch.compile, multi-GPU, and FA3 backends via partner projects).
    Downloads: 23 This Week
    Last Update:
    See Project
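    To make the CPU-offload and tiling knobs concrete, here is a hedged sketch using the CogVideoX pipeline in diffusers; the model ID, prompt, and frame count are illustrative and assume a recent diffusers release.

    ```python
    # Sketch: text-to-video with CogVideoX via diffusers, using memory-saving knobs.
    import torch
    from diffusers import CogVideoXPipeline
    from diffusers.utils import export_to_video

    pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)
    pipe.enable_model_cpu_offload()  # trade speed for lower GPU memory use
    pipe.vae.enable_tiling()         # tile VAE decoding to fit longer videos in memory

    frames = pipe(prompt="a panda playing guitar on a mountain", num_frames=49).frames[0]
    export_to_video(frames, "output.mp4", fps=8)
    ```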
  • 17
    LTX-2

    Python inference and LoRA trainer package for the LTX-2 audio–video

    LTX-2 is an open-source package developed by Lightricks that provides the Python inference pipeline and a LoRA trainer for the LTX-2 audio-video generation model, so developers can run generation and fine-tune custom adapters on their own GPUs. ...
    Downloads: 15 This Week
    Last Update:
    See Project
  • 18
    Appfl

    Advanced Privacy-Preserving Federated Learning framework

    APPFL (Advanced Privacy-Preserving Federated Learning) is a Python framework enabling researchers to easily build and benchmark privacy-aware federated learning solutions. It supports flexible algorithm development, differential privacy, secure communications, and runs efficiently on HPC and multi-GPU setups.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    PEFT

    State-of-the-art Parameter-Efficient Fine-Tuning

    Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. Fine-tuning large-scale PLMs is often prohibitively costly. In this regard, PEFT methods only fine-tune a small number of (extra) model parameters, thereby greatly decreasing the computational and storage costs. Recent State-of-the-Art PEFT techniques achieve performance comparable to that of full...
    Downloads: 2 This Week
    Last Update:
    See Project
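    The sketch below shows the core PEFT workflow described above: wrapping a pretrained model with a LoRA adapter so only a small number of extra parameters are trained. The base model and target modules are illustrative choices.

    ```python
    # Sketch: add a LoRA adapter to a pretrained causal LM with PEFT.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # illustrative base model

    lora = LoraConfig(
        r=8,                                   # low-rank adapter dimension
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # only a small fraction of all weights are trainable
    ```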
  • 20
    FastKoko

    Dockerized FastAPI wrapper for Kokoro-82M text-to-speech model

    FastKoko is a self-hosted text-to-speech server built around the Kokoro-82M model and exposed through a FastAPI backend. It is designed to be easy to deploy via Docker, with separate CPU and GPU images so that users can choose between pure CPU inference and NVIDIA GPU acceleration. The project exposes an OpenAI-compatible speech endpoint, which means existing code that talks to the OpenAI audio API can often be pointed at a Kokoro-FastAPI instance with minimal changes. It supports multiple languages and voicepacks and allows phoneme-based generation for more accurate pronunciation and prosody. ...
    Downloads: 0 This Week
    Last Update:
    See Project
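    Because the server is OpenAI-compatible, existing OpenAI audio client code can be redirected at it; the sketch below does exactly that. The base URL/port, model name, voice, and output format are assumptions for illustration and depend on how the instance is configured.

    ```python
    # Sketch: call a self-hosted FastKoko instance through the OpenAI-compatible speech endpoint.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8880/v1", api_key="not-needed")  # assumed local instance

    speech = client.audio.speech.create(
        model="kokoro",       # assumed model name exposed by the server
        voice="af_bella",     # assumed voicepack
        input="Hello from a self-hosted text-to-speech server.",
    )
    with open("hello.mp3", "wb") as f:   # output format depends on the server's defaults
        f.write(speech.content)
    ```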
  • 21
    Simple StyleGan2 for Pytorch

    Simplest working implementation of Stylegan2

    Simple Pytorch implementation of Stylegan2 that can be completely trained from the command-line, no coding needed. You will need a machine with a GPU and CUDA installed. You can also specify the location where intermediate results and model checkpoints should be stored. You can increase the network capacity (which defaults to 16) to improve generation results, at the cost of more memory. By default, if the training gets cut off, it will automatically resume from the last checkpointed file. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    DGL

    Python package built to ease deep learning on graph

    ...We also want to make the combination of graph-based modules and tensor-based modules (PyTorch or MXNet) as smooth as possible. DGL provides a powerful graph object that can reside on either CPU or GPU. It bundles structural data as well as features for better control. We provide a variety of functions for computing with graph objects, including efficient and customizable message passing primitives for graph neural networks.
    Downloads: 0 This Week
    Last Update:
    See Project
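    The sketch below shows the two ideas highlighted above: a graph object that can live on CPU or GPU, and one round of message passing using DGL's built-in primitives. The toy edges and feature sizes are arbitrary.

    ```python
    # Sketch: a small DGL graph with one round of built-in message passing.
    import torch
    import dgl
    import dgl.function as fn

    # A 4-node toy graph defined by (source, destination) edge lists.
    g = dgl.graph((torch.tensor([0, 1, 2, 3]), torch.tensor([1, 2, 3, 0])))
    g.ndata["h"] = torch.randn(4, 8)          # per-node features

    if torch.cuda.is_available():
        g = g.to("cuda")                      # the same graph object can reside on the GPU

    # Message passing: copy each source node's feature onto the edge, then sum at the destination.
    g.update_all(fn.copy_u("h", "m"), fn.sum("m", "h_sum"))
    print(g.ndata["h_sum"].shape)
    ```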
  • 23
    Kitten TTS

    State-of-the-art TTS model under 25MB

    KittenTTS is an open-source, ultra-lightweight, and high-quality text-to-speech model featuring just 15 million parameters and a binary size under 25 MB. It is designed for real-time CPU-based deployment across diverse platforms. Ultra-lightweight, model size less than 25MB. CPU-optimized, runs without GPU on any device. High-quality voices, several premium voice options available. Fast inference, optimized for real-time speech synthesis.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 24
    SSD in PyTorch 1.0

    High quality, fast, modular reference implementation of SSD in PyTorch

    ...The implementation is heavily influenced by the projects ssd.pytorch, pytorch-ssd, and maskrcnn-benchmark. This repository aims to be the code base for research based on SSD. Multi-GPU training and inference: we use DistributedDataParallel, so you can train or test with an arbitrary number of GPUs and the training schema changes accordingly. Add your own modules without pain: we abstract the backbone, Detector, BoxHead, BoxPredictor, etc., so you can replace every component with your own code without changing the code base. For example, to add EfficientNet as the backbone, just add efficient_net.py (ALREADY ADDED), register it, and specify it in the config file. It's done! ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    MESHROOM

    3D reconstruction software

    ...Automatically estimate the fisheye circle or edit it manually. Take advantage of motorized-head files. Easy to integrate into your render farm system. Add specific rules to select the most suitable machines based on the CPU, RAM, and GPU requirements of each node.
    Downloads: 141 This Week
    Last Update:
    See Project