Browse free open source Python Generative AI projects below. Use the toggles on the left to filter them by OS, license, language, programming language, and project status.

  • 1
    Point-E

    Point cloud diffusion for 3D model synthesis

    point-e is the official repository for Point-E, a generative model developed by OpenAI that produces 3D point clouds from textual (or image) prompts. Its principal advantage is speed: it can generate 3D assets in just 1–2 minutes on a single GPU, which is significantly faster than many competing text-to-3D models. The model works via a two-stage diffusion approach: first, it uses a text → image diffusion network to produce a synthetic 2D view consistent with the prompt; then a second diffusion model converts that image into a 3D point cloud. While it does not match the fine detail of some slower methods, the tradeoff in speed makes it practical for prototyping and interactive 3D generation. The repository includes inference scripts, utilities for converting point clouds to meshes (e.g. via signed distance function regression), sample notebooks, and weight checkpoints. It also provides documentation on limitations, usage instructions, and example outputs.
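
    A condensed text-to-point-cloud sampling sketch, adapted from the repository's example notebooks (config names such as base40M-textvec come from those notebooks; verify against the repo before use):

    ```python
    import torch
    from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
    from point_e.diffusion.sampler import PointCloudSampler
    from point_e.models.configs import MODEL_CONFIGS, model_from_config
    from point_e.models.download import load_checkpoint

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Text-conditioned base model plus an unconditional upsampler
    base_model = model_from_config(MODEL_CONFIGS['base40M-textvec'], device)
    base_model.load_state_dict(load_checkpoint('base40M-textvec', device))
    upsampler = model_from_config(MODEL_CONFIGS['upsample'], device)
    upsampler.load_state_dict(load_checkpoint('upsample', device))

    sampler = PointCloudSampler(
        device=device,
        models=[base_model, upsampler],
        diffusions=[diffusion_from_config(DIFFUSION_CONFIGS['base40M-textvec']),
                    diffusion_from_config(DIFFUSION_CONFIGS['upsample'])],
        num_points=[1024, 4096 - 1024],
        aux_channels=['R', 'G', 'B'],
        guidance_scale=[3.0, 0.0],
        model_kwargs_key_filter=('texts', ''),  # condition only the base model
    )

    # Iterate through the diffusion steps; the last yield holds the final cloud
    samples = None
    for x in sampler.sample_batch_progressive(
            batch_size=1, model_kwargs=dict(texts=['a red motorcycle'])):
        samples = x
    ```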
  • 2
    PyTorch pretrained BigGAN

    PyTorch implementation of BigGAN with pretrained weights

    An op-for-op PyTorch reimplementation of DeepMind's BigGAN model with the pre-trained weights from DeepMind, released with the paper Large Scale GAN Training for High Fidelity Natural Image Synthesis. This PyTorch implementation of BigGAN is provided with the pretrained 128x128, 256x256, and 512x512 models by DeepMind, and we also provide the scripts used to download and convert these models from the TensorFlow Hub versions. The reimplementation was done from the raw computation graph of the TensorFlow version and behaves similarly (the variance of the output difference is on the order of 1e-5). This implementation currently only contains the generator, as the weights of the discriminator were not released (although the structure of the discriminator is very similar to the generator, so it could be added pretty easily).
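
    Following the repository's README, generating a class-conditional sample looks roughly like this (model name and helper functions as published there; nltk is needed for one_hot_from_names):

    ```python
    import torch
    from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                           truncated_noise_sample, save_as_images)

    # Load the pretrained 256x256 generator (weights download on first use)
    model = BigGAN.from_pretrained('biggan-deep-256')

    truncation = 0.4
    class_vector = one_hot_from_names(['soap bubble'], batch_size=1)
    noise_vector = truncated_noise_sample(truncation=truncation, batch_size=1)

    noise = torch.from_numpy(noise_vector)
    classes = torch.from_numpy(class_vector)

    with torch.no_grad():
        output = model(noise, classes, truncation)

    save_as_images(output)  # writes PNG files to the working directory
    ```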
  • 3
    Python Client For NLP Cloud

    NLP Cloud serves high-performance pre-trained or custom models for NER

    NLP Cloud serves high-performance pre-trained or custom models for NER, sentiment analysis, classification, summarization, dialogue summarization, paraphrasing, intent classification, product description and ad generation, chatbots, grammar and spelling correction, keyword and keyphrase extraction, text generation, image generation, blog post generation, source code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API. You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.
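
    A minimal sketch of calling the client for NER, assuming an API token from the NLP Cloud dashboard (the model name here is one of the spaCy models listed in the NLP Cloud docs):

    ```python
    import nlpcloud

    # Replace the placeholder with your own API token
    client = nlpcloud.Client("en_core_web_lg", "<your_api_token>")

    # Named entity recognition on a short passage
    print(client.entities("John Doe has been working for Microsoft in Seattle since 1999."))
    ```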
  • 4
    Quote2Image

    A Python library for turning text quotes into graphical images

    A Python library for turning text quotes into graphical images. Generate an image using RGB background and foreground colors; the package comes with a built-in GenerateColors function that generates fg and bg colors with the correct amount of luminosity and returns them as tuples. You can also generate an image using a custom background image via ImgObject, which gives us a lot of flexibility in how we want our background image to be handled. You are allowed to use, modify, and distribute the module, and to distribute modified versions of it, as long as you follow the terms of the license.
  • 5
    RQ-Transformer

    Implementation of RQ Transformer, autoregressive image generation

    Implementation of RQ Transformer, which proposes a more efficient way of training multi-dimensional sequences autoregressively. This repository will only contain the transformer for now; you can use this vector quantization library for the residual VQ. This type of axial autoregressive transformer should be compatible with memcodes, proposed in NWT, and would likely also work well with multi-headed VQ. I also think there is something deeper going on, and have generalized this to any number of dimensions. You can use it by importing the HierarchicalCausalTransformer. For autoregressive (AR) modeling of high-resolution images, vector quantization (VQ) represents an image as a sequence of discrete codes. A short sequence length is important for an AR model to reduce the computational cost of modeling long-range interactions between codes. However, the paper postulates that previous VQ methods cannot simultaneously shorten the code sequence and generate high-fidelity images, in terms of the rate-distortion trade-off.
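
    A training-step sketch in the spirit of the repository README (constructor argument names are recalled from the README and may have drifted; treat them as assumptions):

    ```python
    import torch
    from rq_transformer import RQTransformer

    model = RQTransformer(
        num_tokens = 16000,          # codebook size of the residual VQ
        dim = 512,
        max_spatial_seq_len = 1024,  # number of spatial positions
        depth_seq_len = 4,           # residual quantization depth per position
        spatial_layers = 8,
        depth_layers = 4,
        dim_head = 64,
        heads = 8
    )

    # Discrete code indices with shape (batch, spatial positions, depth)
    x = torch.randint(0, 16000, (1, 1024, 4))

    loss = model(x, return_loss = True)
    loss.backward()
    ```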
  • 6
    Recurrent Interface Network (RIN)

    Implementation of Recurrent Interface Network (RIN)

    Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in Pytorch. The authors unknowingly reinvented the induced set-attention block from the Set Transformers paper. They also combine this with the self-conditioning technique from the Bit Diffusion paper, specifically for the latents. The last ingredient seems to be a new noise schedule based around the sigmoid, which the authors claim is better than the cosine schedule for larger images. The big surprise is that the generations can reach this level of fidelity. Will need to verify this on my own machine. Additionally, we will try adding an extra linear attention on the main branch as well as self-conditioning in the pixel space. The insight of being able to self-condition on any hidden state of the network, together with the newly proposed sigmoid noise schedule, are the two main findings.
  • 7
    Reliable Metrics for Generative Models

    Code base for the precision, recall, density, and coverage metrics

    Reliable Fidelity and Diversity Metrics for Generative Models (ICML 2020). Devising indicative evaluation metrics for the image generation task remains an open problem. The most widely used metric for measuring the similarity between real and generated images has been the Fréchet Inception Distance (FID) score. Because it does not differentiate the fidelity and diversity aspects of the generated images, recent papers have introduced variants of precision and recall metrics to diagnose those properties separately. In this paper, we show that even the latest versions of the precision and recall metrics (Kynkäänniemi et al., 2019) are not yet reliable: for example, they fail to detect a match between two identical distributions, they are not robust against outliers, and their evaluation hyperparameters are selected arbitrarily. We propose density and coverage metrics that solve the above issues.
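
    Usage mirrors the repository README: hand compute_prdc two sets of feature vectors (e.g. Inception embeddings) and a neighborhood size:

    ```python
    import numpy as np
    from prdc import compute_prdc

    # Stand-in features; in practice use embeddings of real and generated images
    real_features = np.random.normal(size=(10000, 2048))
    fake_features = np.random.normal(size=(10000, 2048))

    metrics = compute_prdc(real_features=real_features,
                           fake_features=fake_features,
                           nearest_k=5)
    print(metrics)  # {'precision': ..., 'recall': ..., 'density': ..., 'coverage': ...}
    ```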
  • 8
    SentenceTransformers

    Multilingual sentence & image embeddings with BERT

    SentenceTransformers is a Python framework for state-of-the-art sentence, text, and image embeddings. The initial work is described in our paper Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. You can use this framework to compute sentence / text embeddings for more than 100 languages. These embeddings can then be compared, e.g. with cosine similarity, to find sentences with a similar meaning, which can be useful for semantic textual similarity, semantic search, or paraphrase mining. The framework is based on PyTorch and Transformers and offers a large collection of pre-trained models tuned for various tasks. Further, it is easy to fine-tune your own models. Our models are evaluated extensively and achieve state-of-the-art performance on various tasks, and the code is tuned to provide the highest possible speed.
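
    A minimal embedding-and-similarity example (the model name is one of the project's published pre-trained models):

    ```python
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer('all-MiniLM-L6-v2')

    sentences = ['The cat sits outside', 'A feline rests in the garden']
    embeddings = model.encode(sentences, convert_to_tensor=True)

    # Cosine similarity between the two sentence embeddings
    print(util.cos_sim(embeddings[0], embeddings[1]))
    ```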
  • 9
    Seq2seq Chatbot for Keras

    This repository contains a generative chatbot model

    This repository contains a generative chatbot model based on seq2seq modeling. The trained model available here used a small dataset composed of ~8K pairs of context (the last two utterances of the dialogue up to the current point) and respective response. The data were collected from dialogues of English courses online. This trained model can be fine-tuned on a closed-domain dataset for real-world applications. The canonical seq2seq model became popular in neural machine translation, a task that has different prior probability distributions for the words belonging to the input and output sequences, since the input and output utterances are written in different languages. The architecture presented here assumes the same prior distributions for input and output words. Therefore, it shares an embedding layer (GloVe pre-trained word embeddings) between the encoding and decoding processes through the adoption of a new model.
  • 10
    Simple StyleGan2 for Pytorch

    Simplest working implementation of Stylegan2

    Simple Pytorch implementation of Stylegan2 that can be completely trained from the command line, no coding needed. You will need a machine with a GPU and CUDA installed. You can also specify the location where intermediate results and model checkpoints should be stored. You can increase the network capacity (which defaults to 16) to improve generation results, at the cost of more memory. By default, if the training gets cut off, it will automatically resume from the last checkpointed file. Once you have finished training, you can generate images from your latest checkpoint. If a previous checkpoint contained a better generator (which often happens, as generators can start degrading towards the end of training), you can load from a previous checkpoint with another flag. A technique used in both StyleGAN and BigGAN is truncating the latent values so that they fall close to the mean. The smaller the truncation value, the better the samples will appear, at the cost of sample variety.
  • 11
    StudioGAN

    StudioGAN is a Pytorch library providing implementations of networks

    StudioGAN is a Pytorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze a new idea. Moreover, StudioGAN provides an unprecedented-scale benchmark for generative models. The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and Diffusion models (LSGM++, CLD-SGM, ADM-G-U). StudioGAN is a self-contained library that provides 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 13 regularization modules, 6 augmentation modules, 8 evaluation metrics, and 5 evaluation backbones. Among these configurations, we formulate 30 GANs as representatives. Each modularized option is managed through a configuration system that works through a YAML file.
  • 12
    Swirl

    Swirl queries any number of data sources with APIs

    Swirl queries any number of data sources with APIs and uses spaCy, NLTK, and AI (Large Language Models) to re-rank the unified results without extracting and indexing anything. It includes zero-code configs for Apache Solr, ChatGPT, Elasticsearch, OpenSearch, PostgreSQL, Google BigQuery, RequestsGet, Google PSE, NLResearch.com, Miro, and more. SWIRL adapts and distributes queries to anything with a search API - search engines, databases, noSQL engines, cloud/SaaS services, etc. It's intended for use by developers and data scientists who want to solve multi-silo search problems, from enterprise search to new monitoring & alerting solutions that push information to users continuously. Built on the Python/Django/RabbitMQ stack, SWIRL includes connectors to Apache Solr, ChatGPT, Elasticsearch, OpenSearch, PostgreSQL, and Google BigQuery, plus a generic HTTP/GET/JSON connector with configurations for premium services.
  • 13
    TFKit

    Handle multiple NLP tasks in one pipeline

    TFKit is a toolkit mainly for language generation. It leverages transformers for many tasks with different models in one all-in-one framework; all you need is a small change of config. You can use tfkit for model training and evaluation with tfkit-train and tfkit-eval. The key to combining different tasks is to cast them into the same data format. All data will be in CSV format - tfkit uses CSV for every task, normally with two columns, where the first column is the model input and the second column is the model output. Plain text with no tokenization - there is no need to tokenize text before training or to recalculate tokenization; tfkit will handle it for you. No header row is needed.
  • 14
    TGAN

    Generative adversarial training for generating synthetic tabular data

    We are happy to announce that our new model for synthetic data, called CTGAN, is open-sourced. The new model is simpler and gives better performance on many datasets. TGAN is a tabular data synthesizer. It can generate fully synthetic data from real data; currently, TGAN can generate numerical columns and categorical columns. TGAN has been developed and runs on Python 3.5, 3.6, and 3.7. Also, although it is not strictly required, the use of a virtualenv is highly recommended to avoid interfering with other software installed on the system where TGAN is run. For development, you can use make install-develop instead, in order to install all the dependencies required for testing and code linting. In order to sample new synthetic data, TGAN first needs to be fitted to existing data.
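
    A fit-and-sample sketch based on the project README (load_demo_data and the TGANModel calls are as described there; treat exact signatures as assumptions):

    ```python
    from tgan.data import load_demo_data
    from tgan.model import TGANModel

    # Demo census dataset plus the indices of its continuous columns
    data, continuous_columns = load_demo_data()

    tgan = TGANModel(continuous_columns)
    tgan.fit(data)                 # fit the synthesizer to the real table

    samples = tgan.sample(1000)    # 1000 synthetic rows with the same schema
    ```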
  • 15
    Texar-PyTorch

    Integrating the Best of TF into PyTorch, for Machine Learning

    Texar-PyTorch is a toolkit aiming to support a broad set of machine learning tasks, especially natural language processing and text generation. Texar provides a library of easy-to-use ML modules and functionalities for composing arbitrary models and algorithms. The tool is designed for both researchers and practitioners for fast prototyping and experimentation. Texar-PyTorch was originally developed by, and is actively contributed to by, Petuum and CMU in collaboration with other institutes. A mirror of this repository is maintained by Petuum Open Source. Texar-PyTorch integrates many of the best features of TensorFlow into PyTorch, delivering highly usable and customizable modules superior to PyTorch's native ones. Texar-PyTorch (this repo) and Texar-TF have mostly the same interfaces, and both combine the best design of TF and PyTorch across data processing, model architectures, loss functions, training and inference algorithms, evaluation, etc.
  • 16
    Text Gen

    Almost state-of-the-art text generation library

    Almost state-of-the-art text generation library. Text gen is a Python library that allows you to build a custom text generation model with ease - something sweet built with Tensorflow and Pytorch (coming soon). Load your data; your data must be in a text format. Download the example data from the example folder. Tune your model to find the best optimizer and activation method to use.
  • 17
    TextBox

    A text generation library with pre-trained language models

    TextBox 2.0 is an up-to-date text generation library based on Python and PyTorch focusing on building a unified and standardized pipeline for applying pre-trained language models to text generation. From a task perspective, we consider 13 common text generation tasks such as translation, story generation, and style transfer, and their corresponding 83 widely-used datasets. From a model perspective, we incorporate 47 pre-trained language models/modules covering the categories of general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight models (modules). From a training perspective, we support 4 pre-training objectives and 4 efficient and robust training strategies, such as distributed data parallel and efficient generation. Compared with the previous version of TextBox, this extension mainly focuses on building a unified, flexible, and standardized framework for better supporting PLM-based text generation models.
  • 18
    TorchGAN

    Research Framework for easy and efficient training of GANs

    The torchgan package consists of various generative adversarial networks and utilities that have been found useful in training them. The package provides an easy-to-use API which can be used to train popular GANs as well as develop newer variants; the core idea behind the project is to facilitate easy and rapid generative adversarial model research. TorchGAN is a Pytorch-based framework for designing and developing Generative Adversarial Networks, built to provide building blocks for popular GANs while allowing customization for cutting-edge research. TorchGAN's modular structure lets you try out popular architectures on your own dataset and plug in custom loss functions, architectures, and other components.
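
    A small sketch of using one of the stock building blocks; DCGANGenerator lives in torchgan.models, though the constructor arguments shown here are assumptions based on the docs:

    ```python
    import torch
    from torchgan.models import DCGANGenerator

    # Stock DCGAN generator: 100-dim latent input, 3-channel image output
    gen = DCGANGenerator(encoding_dims=100, out_channels=3)

    z = torch.randn(8, 100)   # a batch of latent vectors
    with torch.no_grad():
        images = gen(z)       # (8, 3, H, W) generated images
    print(images.shape)
    ```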
  • 19
    VQGAN-CLIP web app

    Local image generation using VQGAN-CLIP or CLIP guided diffusion

    VQGAN-CLIP has been in vogue for generating art using deep learning. Searching the r/deepdream subreddit for VQGAN-CLIP yields quite a number of results. Basically, VQGAN can generate pretty high-fidelity images, while CLIP can produce relevant captions for images. Combined, VQGAN-CLIP can take prompts from human input and iterate to generate images that fit the prompts. Thanks to the generosity of creators sharing notebooks on Google Colab, the VQGAN-CLIP technique has seen widespread circulation. However, for regular usage across multiple sessions, I prefer a local setup that can be started up rapidly; hence this simple Streamlit app for generating VQGAN-CLIP images in a local environment. Be advised that you need a beefy GPU with lots of VRAM to generate images large enough to be interesting. (Hello Quadro owners!)
  • 20
    amrlib

    A Python library that makes AMR parsing, generation, and visualization simple

    A Python library that makes AMR parsing, generation, and visualization simple. amrlib is a Python module designed to make processing for Abstract Meaning Representation (AMR) simple by providing the following functions: Sentence to Graph (StoG) parsing to create AMR graphs from English sentences; Graph to Sentence (GtoS) generation for turning AMR graphs into English sentences; a QT-based GUI to facilitate the conversion of sentences to graphs and back to sentences; methods to plot AMR graphs both in the GUI and as library functions; training and test code for both the StoG and GtoS models; a SpaCy extension that allows direct conversion of SpaCy Docs and Spans to AMR graphs; and Sentence to Graph alignment routines: FAA_Aligner (Fast_Align Algorithm), based on the ISI aligner code detailed in this paper, and RBW_Aligner (Rule Based Word) for simple, single-token-to-single-node alignment.
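
    Round-tripping a sentence through parse and generate, following the project README (load_stog_model and load_gtos_model expect pretrained models downloaded per the docs):

    ```python
    import amrlib

    # Sentence to Graph (StoG): parse English sentences into AMR graphs
    stog = amrlib.load_stog_model()
    graphs = stog.parse_sents(['The boy wants the girl to believe him.'])
    print(graphs[0])

    # Graph to Sentence (GtoS): generate English back from the graphs
    gtos = amrlib.load_gtos_model()
    sents, _ = gtos.generate(graphs)
    print(sents[0])
    ```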
  • 21
    bert4keras

    Keras implementation of transformers for humans

    Our light reimplementation of BERT for Keras - a cleaner, lighter version of BERT for Keras. This is the Keras version of the transformer model library re-implemented by the author, committed to combining transformers and Keras with as clean code as possible. The original intention of this project is the convenience of modification and customization, so it may be updated frequently. It can load the pre-trained weights of BERT/RoBERTa/ALBERT for fine-tuning, implements the attention masks required by language models and seq2seq, provides pre-training code from scratch (supports TPU and multi-GPU; see the pre-training section), and is compatible with Keras and tf.keras.
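
    Loading a checkpoint and encoding a sentence, roughly as in the project README (the three file paths point to a locally downloaded BERT checkpoint and are placeholders here):

    ```python
    import numpy as np
    from bert4keras.models import build_transformer_model
    from bert4keras.tokenizers import Tokenizer

    # Paths into a downloaded Google BERT checkpoint (placeholders)
    config_path = 'bert_config.json'
    checkpoint_path = 'bert_model.ckpt'
    dict_path = 'vocab.txt'

    tokenizer = Tokenizer(dict_path, do_lower_case=True)
    model = build_transformer_model(config_path, checkpoint_path)  # defaults to BERT

    token_ids, segment_ids = tokenizer.encode('language models are fun')
    print(model.predict([np.array([token_ids]), np.array([segment_ids])]))
    ```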
  • 22
    cerche

    Experimental search engine for conversational AI such as parl.ai

    This is an experimental search engine for conversational AI such as parl.ai, large language models such as OpenAI GPT3, and humans (maybe).
  • 23
    commit-autosuggestions

    A tool that automatically recommends commit messages using AI

    This is an implementation of CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model. CommitBERT was accepted at the ACL workshop NLP4Prog. Have you ever hesitated to write a commit message? Now get a commit message from Artificial Intelligence! CodeBERT: A Pre-Trained Model for Programming and Natural Languages introduces a model pre-trained on a combination of Programming Language and Natural Language (PL-NL). It also introduces the problem of converting code into natural language (Code Documentation Generation). We can use CodeBERT to create a model that generates a commit message when code is added. However, most code changes are not made only by adding code; some parts of the code are also deleted. We plan to slowly conquer languages that are not currently supported. To run this project, you need a Flask-based inference server (GPU) and a client (commit module). If you don't have a GPU, don't worry, you can use one through Google Colab.
  • 24
    gpt-2-simple

    Python package to easily retrain OpenAI's GPT-2 text-generating model

    A simple Python package that wraps existing model fine-tuning and generation scripts for OpenAI's GPT-2 text generation model (specifically the "small" 124M and "medium" 355M parameter versions). Additionally, this package allows easier generation of text, generating to a file for easy curation, and allowing prefixes to force the text to start with a given phrase. For fine-tuning, it is strongly recommended to use a GPU, although you can generate using a CPU (albeit much more slowly). If you are training in the cloud, using a Colaboratory notebook or a Google Compute Engine VM with the TensorFlow Deep Learning image is strongly recommended (as the GPT-2 model is hosted on GCP). You can use gpt-2-simple to retrain a model using a GPU for free in this Colaboratory notebook, which also demos additional features of the package. Note: development on gpt-2-simple has mostly been superseded by aitextgen, which has similar AI text generation capabilities with more efficient training time.
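
    The README's canonical fine-tune-and-generate loop (shakespeare.txt stands in for any plain-text training file):

    ```python
    import gpt_2_simple as gpt2

    gpt2.download_gpt2(model_name="124M")   # fetch the "small" model into ./models

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess,
                  "shakespeare.txt",        # your plain-text corpus
                  model_name="124M",
                  steps=1000)               # number of fine-tuning steps

    gpt2.generate(sess)                     # sample from the fine-tuned model
    ```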
  • 25
    gpt2-client

    Easy-to-use TensorFlow Wrapper for GPT-2 117M, 345M, 774M, etc.

    GPT-2 is a Natural Language Processing model developed by OpenAI for text generation. It is the successor to the GPT (Generative Pre-trained Transformer) model trained on 40GB of text from the internet. It features a Transformer architecture that was brought to light by the Attention Is All You Need paper in 2017. The model has 4 versions - 124M, 345M, 774M, and 1558M - that differ in the number of parameters they contain. Finally, gpt2-client is a wrapper around the original gpt-2 repository that features the same functionality but with more accessibility, comprehensibility, and utility. You can play around with all four GPT-2 models in less than five lines of code. Install the client via pip. The generation options are highly flexible; you can mix and match based on what kind of text you need generated, be it multiple chunks or one at a time with prompts.
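
    The advertised few-lines workflow, as recalled from the README (method names like load_model and generate are from memory; treat them as assumptions):

    ```python
    from gpt2_client import GPT2Client

    gpt2 = GPT2Client('117M')               # also accepts '345M', '774M', '1558M'
    gpt2.load_model(force_download=False)   # download and cache the weights

    gpt2.generate(interactive=True)         # prompt for input at runtime
    gpt2.generate(n_samples=4)              # or sample several chunks at once
    ```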