Showing 236 open source projects for "diffusion"

  • 1

    PDP-OmniSim

    PDP-OmniSim simulates parallel and distributed processing systems

    PDP-OmniSim is an advanced computational framework for simulating parallel and distributed processing systems, with applications in computational neuroscience, distributed computing, and complex systems modeling. The framework provides researchers with robust tools for large-scale simulations of networked systems and their emergent behaviors. ...
    Downloads: 0 This Week
    See Project
  • 2
    DiT (Diffusion Transformers)

    Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

    DiT (Diffusion Transformer) is an architecture that applies transformer-based modeling directly to the diffusion generative process for high-quality image synthesis. Rather than the convolutional U-Net used by most diffusion models, DiT runs diffusion in latent space and processes the latent as a sequence of image-patch tokens through transformer blocks with positional encodings, which gives it strong scalability and sample quality. A simplified, self-contained sketch of this token-based design follows this entry.
    Downloads: 0 This Week
    See Project
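    The token-based design described above can be illustrated with a small, self-contained PyTorch sketch. This is a hedged simplification, not the official DiT code: it omits the timestep/class conditioning that DiT injects through adaLN, and every size below is an arbitrary assumption.

        import torch
        import torch.nn as nn

        class TinyDiTBlock(nn.Module):
            def __init__(self, dim, heads=8):
                super().__init__()
                self.norm1 = nn.LayerNorm(dim)
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.norm2 = nn.LayerNorm(dim)
                self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

            def forward(self, x):
                h = self.norm1(x)
                x = x + self.attn(h, h, h, need_weights=False)[0]   # self-attention over tokens
                return x + self.mlp(self.norm2(x))

        class TinyDiT(nn.Module):
            def __init__(self, latent_channels=4, patch=2, dim=384, depth=6, grid=16):
                super().__init__()
                # Patchify the latent into (grid x grid) tokens; pos table sized for a 32x32 latent with patch 2.
                self.patchify = nn.Conv2d(latent_channels, dim, kernel_size=patch, stride=patch)
                self.pos = nn.Parameter(torch.zeros(1, grid * grid, dim))
                self.blocks = nn.ModuleList(TinyDiTBlock(dim) for _ in range(depth))
                self.unpatchify = nn.ConvTranspose2d(dim, latent_channels, kernel_size=patch, stride=patch)

            def forward(self, z):                               # z: noisy latent, e.g. (B, 4, 32, 32)
                x = self.patchify(z)                            # (B, dim, 16, 16)
                b, d, h, w = x.shape
                x = x.flatten(2).transpose(1, 2) + self.pos     # token sequence: (B, 256, dim)
                for blk in self.blocks:
                    x = blk(x)
                x = x.transpose(1, 2).reshape(b, d, h, w)
                return self.unpatchify(x)                       # noise prediction in latent space

        noise_pred = TinyDiT()(torch.randn(2, 4, 32, 32))       # output shape (2, 4, 32, 32)

    In the real model, each block is additionally modulated by timestep and class embeddings (adaLN-Zero), and the output parameterizes the denoising step of the latent diffusion process.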
  • 3
    Basaran

    Basaran, an open-source alternative to the OpenAI text completion API

    ...It provides an OpenAI-compatible streaming API for Hugging Face Transformers-based text generation models. The open source community will eventually witness the Stable Diffusion moment for large language models (LLMs), and Basaran lets you replace OpenAI's service with the latest open-source model to power your application without modifying a single line of code. It streams generation using various decoding strategies, supports both decoder-only and encoder-decoder models, and includes a detokenizer that handles surrogates and whitespace. ... A minimal, hedged usage sketch follows this entry.
    Downloads: 0 This Week
    See Project
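    Because Basaran exposes an OpenAI-compatible completion API, an existing OpenAI client can simply be pointed at it. The sketch below is a hedged example: the endpoint URL, port, and model name are placeholders (whatever your Basaran instance actually serves), and it uses the legacy (pre-1.0) openai Python client.

        import openai  # legacy openai-python (<1.0) interface

        openai.api_base = "http://127.0.0.1:8080/v1"   # assumed local Basaran endpoint
        openai.api_key = "unused-but-required"

        stream = openai.Completion.create(
            model="bigscience/bloomz-560m",            # assumed Hugging Face model served by Basaran
            prompt="Once upon a time,",
            max_tokens=64,
            stream=True,                               # tokens arrive incrementally
        )
        for chunk in stream:
            print(chunk.choices[0].text, end="", flush=True)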
  • 4
    macara

    A converter for seamless transformation of files, data, and media ...

    ...Serving as a versatile tool, it facilitates efficient file management, especially when handling a substantial volume of images, whether sorted by name or by other attributes. These scripts are crafted to complement generative AI art technologies such as DALL-E and Stable Diffusion.
    Downloads: 0 This Week
    See Project
  • 5
    Anse

    Supercharged experience for multiple models such as ChatGPT

    Anse is a modern, polished web UI built to serve as a unified interface for interacting with multiple AI-model backends (such as OpenAI’s models, DALL-E, Stable Diffusion, etc.). It emphasizes a clean, user-friendly experience and supports different conversation modes (single prompt, continuous dialogue, image generation, etc.). Anse uses client-side storage (IndexedDB) to keep session history locally, prioritizing user privacy and avoiding automatic uploads of sensitive chat content. It is responsive and optimized for mobile, supports dark mode, and is designed for deployment in a variety of environments (Vercel, Netlify, Docker, etc.). ...
    Downloads: 0 This Week
    See Project
  • 6
    Stable-Dreamfusion

    Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion

    A PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model. This project is a work in progress and differs from the paper in many respects. The current generation quality cannot match the results from the original paper, and many prompts still fail badly! Since the Imagen model is not publicly available, we use Stable Diffusion to replace it (implementation from diffusers).
    Downloads: 0 This Week
    See Project
  • 7
    audio-diffusion-pytorch

    Audio generation using diffusion models, in PyTorch

    A fully featured audio diffusion library for PyTorch. It includes models for unconditional audio generation, text-conditional audio generation, diffusion autoencoding, upsampling, and vocoding. The provided models are waveform-based; however, the U-Net (built using a-unet), DiffusionModel, diffusion method, and diffusion samplers are all generic to any dimension and highly customizable to work on other formats. A generic illustration of the underlying diffusion training step follows this entry.
    Downloads: 4 This Week
    See Project
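    To make the "diffusion on waveforms" idea concrete, here is a generic, plain-PyTorch sketch of one denoising-diffusion training step on stereo audio. It is deliberately not the audio-diffusion-pytorch API (which packages this up in DiffusionModel with an a-unet backbone); the tiny Conv1d stands in for the U-Net, the schedule is arbitrary, and a real denoiser would also be conditioned on the timestep t.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        T = 1000
        betas = torch.linspace(1e-4, 0.02, T)
        alphas_bar = torch.cumprod(1.0 - betas, dim=0)          # cumulative noise schedule

        denoiser = nn.Conv1d(2, 2, kernel_size=5, padding=2)    # stand-in for the U-Net
        x0 = torch.randn(4, 2, 2**15)                           # batch of stereo waveforms
        t = torch.randint(0, T, (4,))
        noise = torch.randn_like(x0)

        a = alphas_bar[t].view(-1, 1, 1)
        xt = a.sqrt() * x0 + (1 - a).sqrt() * noise             # forward (noising) process
        loss = F.mse_loss(denoiser(xt), noise)                  # learn to predict the added noise
        loss.backward()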
  • 8
    stable-diffusion-webui-colab

    Stable diffusion webui colab

    Stable Diffusion WebUI on Colab. The lite build has a stable WebUI and stable installed extensions; the stable build adds ControlNet to a stable WebUI and stable installed extensions; the nightly build has ControlNet, the latest WebUI, and daily extension updates. If you want to use more models, you can download them into Colab, which provides about 50 GB of empty disk space, and you can free up more space by deleting the default model from your drive. A sketch of downloading an extra model from a Colab cell follows this entry.
    Downloads: 0 This Week
    See Project
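    Downloading an extra checkpoint into the WebUI's model folder from a Colab cell can be done with huggingface_hub, for example. The repo id, filename, and the /content/stable-diffusion-webui path below are assumptions about a typical Colab install of the WebUI, not something this project fixes.

        import shutil
        from huggingface_hub import hf_hub_download

        # Assumed model repo/filename and WebUI install path; adjust to the model you want.
        ckpt = hf_hub_download(
            repo_id="stabilityai/stable-diffusion-2-1",
            filename="v2-1_768-ema-pruned.safetensors",
        )
        shutil.copy(ckpt, "/content/stable-diffusion-webui/models/Stable-diffusion/")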
  • 9
    MMGeneration

    MMGeneration is a powerful toolkit for generative models

    MMGeneration has been merged into MMEditing, which now supports new generation tasks and models. MMGeneration is a powerful toolkit for generative models, especially GANs at present. It is based on PyTorch and MMCV. The master branch works with PyTorch 1.5+. We currently support training of unconditional GANs, internal GANs, and image translation models; support for conditional models will come soon. A plentiful toolkit containing multiple applications of GANs is provided to users. GAN...
    Downloads: 2 This Week
    See Project
  • 10
    Diffusion WebUI Colab

    Choose your diffusion models and spin up a WebUI on Colab in one click

    The simplest Colab, with most models included by default. Custom models can be added easily, and Stable Diffusion 2.0 support is in the testing phase. Choose your diffusion models and spin up a WebUI on Colab in one click. Share your generations on our Mastodon server (hosted by a third party; I am not associated with the instance in any way). The instructions are on the Colab.
    Downloads: 0 This Week
    See Project
  • 11
    DeepMozart

    Audio generation using diffusion models

    Audio generation using diffusion models in PyTorch. The code is based on the audio-diffusion-pytorch repository.
    Downloads: 1 This Week
    See Project
  • 12
    TradeMaster

    TradeMaster is an open-source platform for quantitative trading

    TradeMaster is a first-of-its-kind, best-in-class open-source platform for quantitative trading (QT) empowered by reinforcement learning (RL), which covers the full pipeline for the design, implementation, evaluation and deployment of RL-based algorithms. TradeMaster is composed of 6 key modules: 1) multi-modality market data of different financial assets at multiple granularities; 2) whole data preprocessing pipeline; 3) a series of high-fidelity data-driven market simulators for mainstream...
    Downloads: 0 This Week
    See Project
  • 13
    flat

    All-in-one image generation AI

    All-in-one image generation AI. Launch StableDiffusionWebUI with just a few clicks. No Python installation or repository cloning is required. Displays generated images in a list with information such as prompts. The image folder can be set freely.
    Downloads: 0 This Week
    See Project
  • 14
    Stable Diffusion v 2.1 web UI

    Lightweight Stable Diffusion v 2.1 web UI: txt2img, img2img, depth2img

    Lightweight Stable Diffusion v2.1 web UI: txt2img, img2img, depth2img, inpaint, and upscale 4x. A Gradio app for Stable Diffusion 2 by Stability AI that uses the Hugging Face Diffusers implementation. Currently supported pipelines are text-to-image, image-to-image, inpainting, upscaling, and depth-to-image. A minimal example of the underlying Diffusers call follows this entry.
    Downloads: 0 This Week
    See Project
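    Since this UI is a thin Gradio layer over the Hugging Face Diffusers pipelines it lists, the underlying text-to-image call looks roughly like the sketch below; the model id, prompt, and settings are only examples, and a CUDA GPU is assumed.

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
        ).to("cuda")

        image = pipe(
            "a watercolor painting of a lighthouse at dawn",
            num_inference_steps=30,
            guidance_scale=7.5,
        ).images[0]
        image.save("lighthouse.png")

    The other modes correspond to the matching Diffusers classes (StableDiffusionImg2ImgPipeline, StableDiffusionInpaintPipeline, StableDiffusionUpscalePipeline, StableDiffusionDepth2ImgPipeline), loaded the same way with an appropriate model id.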
  • 15
    Point-E

    Point cloud diffusion for 3D model synthesis

    ...Its principal advantage is speed: it can generate 3D assets in just 1–2 minutes on a single GPU, which is significantly faster than many competing text-to-3D models. The model works via a two-stage diffusion approach: first, it uses a text → image diffusion network to produce a synthetic 2D view consistent with the prompt; then a second diffusion model converts that image into a 3D point cloud. While it does not match the fine detail of some slower methods, the tradeoff in speed makes it practical for prototyping and interactive 3D generation. ...
    Downloads: 0 This Week
    See Project
  • 16
    DDPM-CD

    Remote sensing change detection using denoising diffusion models

    This is the PyTorch implementation of Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models. The generated images contain objects that we commonly see in real remote sensing images, such as buildings, trees, roads, vegetation, and water surfaces, demonstrating the powerful ability of diffusion models to extract key semantics that can be further used in remote sensing change detection. We fine-tune a lightweight change detection head that takes multi-level feature representations from the pre-trained diffusion model as input and outputs a change prediction map. An illustrative sketch of such a head follows this entry.
    Downloads: 0 This Week
    See Project
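    In the spirit of the description above, the change-detection head can be pictured as a small network that compares multi-level features of the "before" and "after" images. The sketch below is an illustrative stand-in rather than the paper's exact head: the channel sizes, the absolute-difference fusion, and the two-class output are all assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ChangeHead(nn.Module):
            def __init__(self, feat_channels=(256, 512, 1024), hidden=128):
                super().__init__()
                self.reduce = nn.ModuleList(nn.Conv2d(c, hidden, 1) for c in feat_channels)
                self.fuse = nn.Conv2d(hidden * len(feat_channels), hidden, 3, padding=1)
                self.classify = nn.Conv2d(hidden, 2, 1)                  # change / no-change logits

            def forward(self, feats_a, feats_b, out_size=(256, 256)):
                diffs = []
                for proj, fa, fb in zip(self.reduce, feats_a, feats_b):
                    d = torch.abs(proj(fa) - proj(fb))                   # per-level feature difference
                    diffs.append(F.interpolate(d, size=out_size, mode="bilinear", align_corners=False))
                fused = F.relu(self.fuse(torch.cat(diffs, dim=1)))
                return self.classify(fused)                              # (B, 2, H, W) change logits

        # feats_a / feats_b would come from the frozen, pre-trained diffusion backbone at several scales.
        feats_a = [torch.randn(1, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
        feats_b = [torch.randn(1, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16)]]
        change_logits = ChangeHead()(feats_a, feats_b)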
  • 17
    Minimal text diffusion

    A minimal implementation of diffusion models for text generation

    A minimal implementation of diffusion models for text: it learns a diffusion model over a given text corpus, allowing text samples to be generated from the learned model. The main idea was to retain just enough code to train a simple diffusion model and generate samples, to remove image-related terms, and to make it easier to use. To train a model, run scripts/train.sh.
    Downloads: 0 This Week
    See Project
  • 18
    NÜWA - Pytorch

    Implementation of NÜWA, attention network for text to video synthesis

    Implementation of NÜWA, state of the art attention network for text-to-video synthesis, in Pytorch. It also contains an extension into video and audio generation, using a dual decoder approach. It seems as though a diffusion-based method has taken the new throne for SOTA. However, I will continue on with NUWA, extending it to use multi-headed codes + hierarchical causal transformer. I think that direction is untapped for improving on this line of work. In the paper, they also present a way to condition the video generation based on segmentation mask(s). ...
    Downloads: 0 This Week
    See Project
  • 19
    G-Diffuser Bot

    Discord bot and Interface for Stable Diffusion

    The first release of the all-in-one installer version of G-Diffuser is here. This release no longer requires the installation of WSL or Docker and has a systray icon to keep track of and launch G-Diffuser components. The infinite zoom scripts have been updated with some improvements, notably a new compositor script that is hundreds of times faster than before. The installer also features much easier "one-click" installation and updating, as well...
    Downloads: 1 This Week
    See Project
  • 20
    DiffSinger

    Singing Voice Synthesis via Shallow Diffusion Mechanism

    DiffSinger is an open-source PyTorch implementation of a diffusion-based acoustic model for singing-voice synthesis (SVS), with a related variant for text-to-speech (TTS). The core idea is to view generation of the sung voice's mel-spectrogram as a diffusion process: starting from noise, the model iteratively "denoises" while being conditioned on a music score (lyrics, pitch, and musical timing). The shallow diffusion mechanism named in the subtitle is written out after this entry.
    Downloads: 46 This Week
    See Project
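    The shallow diffusion mechanism named in the subtitle can be written compactly (LaTeX notation; the symbols follow the common DDPM convention and the step k is a hyperparameter, so treat this as a hedged summary rather than the repository's exact code). With the standard forward process, a mel-spectrogram M_0 noised to step t satisfies

        q(M_t \mid M_0) = \mathcal{N}\!\big(M_t;\ \sqrt{\bar{\alpha}_t}\, M_0,\ (1-\bar{\alpha}_t)\mathbf{I}\big),
        \qquad \bar{\alpha}_t = \prod_{s=1}^{t} (1 - \beta_s).

    Instead of starting the reverse (denoising) process from pure noise at t = T, the coarse mel-spectrogram \tilde{M} produced by an auxiliary decoder is noised only to a shallow step k < T, and the learned reverse process runs from there:

        M_k \sim q\big(M_k \mid \tilde{M}\big), \qquad
        M_{t-1} \sim p_\theta\big(M_{t-1} \mid M_t,\ \text{music-score condition}\big), \quad t = k, k-1, \dots, 1.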
  • 21
    Stable Diffusion Web UI

    Feature showcase for stable-diffusion-webui

    This repository curates a living gallery of examples that demonstrate what the Stable Diffusion Web UI and its ecosystem can do. It documents practical recipes that pair prompts with parameters, samplers, upscalers, and extensions so others can reproduce results reliably. The focus is on education by example: side-by-side comparisons reveal how settings like CFG scale, denoising strength, schedulers, or ControlNet inputs change an image. A small, reproducible example of such a comparison follows this entry.
    Downloads: 0 This Week
    See Project
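    Side-by-side comparisons of a single setting are straightforward to reproduce outside the Web UI as well. The hedged sketch below fixes the random seed and sweeps only the CFG (guidance) scale with Diffusers, which is one way to see the kind of effect the gallery documents; the model id, prompt, and values are illustrative, and a CUDA GPU is assumed.

        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
        ).to("cuda")

        prompt = "an isometric voxel art castle on a floating island"
        for cfg in (3.0, 7.5, 12.0):
            # Re-seeding keeps the initial noise identical, so only the CFG scale changes.
            generator = torch.Generator("cuda").manual_seed(42)
            image = pipe(prompt, guidance_scale=cfg, generator=generator).images[0]
            image.save(f"castle_cfg_{cfg}.png")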
  • 22
    Guided Diffusion

    Codebase for "Diffusion Models Beat GANs on Image Synthesis"

    The guided-diffusion repository is centered on diffusion models for image synthesis, with a focus on classifier guidance and improvements over earlier diffusion frameworks. It is derived from OpenAI’s improved-diffusion work, enhanced to include guided generation where a classifier (or other guidance mechanism) can steer sampling toward desired classes or attributes. The classifier-guidance update is written out after this entry.
    Downloads: 0 This Week
    See Project
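    The classifier guidance mentioned above has a compact form (LaTeX notation): at each reverse step, the unconditional Gaussian mean is shifted along the gradient of a noisy classifier's log-probability for the target class y, scaled by a guidance factor s. This is the standard formulation from the "Diffusion Models Beat GANs on Image Synthesis" paper.

        \hat{\mu}_\theta(x_t, t \mid y) = \mu_\theta(x_t, t) + s\, \Sigma_\theta(x_t, t)\, \nabla_{x_t} \log p_\phi(y \mid x_t, t),
        \qquad x_{t-1} \sim \mathcal{N}\!\big(\hat{\mu}_\theta(x_t, t \mid y),\ \Sigma_\theta(x_t, t)\big).

    Larger s trades sample diversity for fidelity to the requested class.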
  • 23
    Disco Diffusion

    Notebooks, models and techniques for the generation of AI Art

    A frankensteinian amalgamation of notebooks, models, and techniques for the generation of AI art and animations. This project uses a special conversion tool to convert the Python files into notebooks for easier development. What this means is you do not have to touch the notebook directly to make changes to it. The tool being used is called Colab-Convert. Initial QoL improvements added, including a user-friendly UI, settings and prompt saving, and improved Google Drive folder organization. Now...
    Downloads: 0 This Week
    See Project
  • 24
    Diffusers-Interpret

    Model explainability for Diffusers

    ...It is possible to visualize pixel attributions of the input image as a saliency map. diffusers-interpret also computes these token/pixel attributions for generating a particular part of the image. To analyze how a token in the input prompt influenced the generation, you can study the token attribution scores. You can also check all the images that the diffusion process generated at the end of each step. Gradient checkpointing also reduces GPU usage, but makes computations a bit slower.
    Downloads: 0 This Week
    See Project
  • 25
    AI Atelier

    A Chinese & English version of the AI art creation software, based on Disco Diffusion

    Based on Disco Diffusion, we have developed a Chinese & English version of the AI art creation software "AI Atelier". We offer both text-to-image models (Disco Diffusion and VQGAN+CLIP) and text-to-text models (GPT-J-6B and GPT-NeoX-20B) as options.
    Downloads: 1 This Week
    See Project