Generative AI for BSD

Browse free open source Generative AI tools and projects for BSD below. Use the toggles on the left to filter open source Generative AI projects by OS, license, language, programming language, and project status.

  • 1
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    Inference of LLaMA model in pure C/C++
    Downloads: 16 This Week
  • 2
    GPT AI Assistant

    OpenAI + LINE + Vercel = GPT AI Assistant

    GPT AI Assistant is an application implemented using the OpenAI API and the LINE Messaging API. Once installed, you can start chatting with your own AI assistant from the LINE mobile app.
    Downloads: 7 This Week
  • 3
    Make-A-Video - Pytorch (wip)

    Implementation of Make-A-Video, new SOTA text to video generator

    Implementation of Make-A-Video, Meta AI's new SOTA text-to-video generator, in PyTorch. They combine pseudo-3D convolutions (axial convolutions) and temporal attention and show much better temporal fusion. Pseudo-3D convolutions aren't a new concept; they have been explored before in other contexts, for example for protein contact prediction as "dimensional hybrid residual networks". The gist of the paper comes down to: take a SOTA text-to-image model (here they use DALL-E 2, but the same learning points would apply just as easily to Imagen), make a few minor modifications for attention across time and other ways to skimp on the compute cost, do frame interpolation correctly, and get a great video model out. When passing in images (if one were to pretrain on images first), both temporal convolution and attention are automatically skipped. In other words, you can use this straightforwardly in your 2D U-Net and then port it over to a 3D U-Net once that phase of training is done.
    Downloads: 6 This Week
  • 4
    AudioLM - Pytorch

    Implementation of AudioLM audio generation model in Pytorch

    Implementation of AudioLM, a language-modeling approach to audio generation out of Google Research, in PyTorch. It also extends the work with classifier-free guidance conditioning via T5, which allows one to do text-to-audio or TTS, neither of which is offered in the paper. Yes, this means VALL-E can be trained from this repository; it is essentially the same. This repository now also contains an MIT-licensed version of SoundStream. It is also compatible with EnCodec, but be aware that EnCodec carries a more restrictive non-commercial license if you choose to use it.
    Downloads: 3 This Week
  • 5
    GPT-Code UI

    An open source implementation of OpenAI's ChatGPT Code interpreter

    An open source implementation of OpenAI's ChatGPT Code Interpreter. Simply ask the OpenAI model to do something and it will generate and execute the code for you. You can put a .env file in the working directory to load the OPENAI_API_KEY environment variable. For Azure OpenAI Services there are additional configurable variables, such as the deployment name; see .env.azure-example for more information. Note that model selection in the UI is currently not supported for Azure OpenAI Services.
    Downloads: 3 This Week
  • 6
    ChatGPT API

    Node.js client for the official ChatGPT API. 🔥

    This package is a Node.js wrapper around ChatGPT by OpenAI. TS batteries included. ✨ The official OpenAI chat completions API has been released, and it is now the default for this package! 🔥 Note: We strongly recommend using ChatGPTAPI since it uses the officially supported API from OpenAI. We may remove support for ChatGPTUnofficialProxyAPI in a future release.
    1. ChatGPTAPI - Uses the gpt-3.5-turbo-0301 model with the official OpenAI chat completions API (official, robust approach, but it's not free)
    2. ChatGPTUnofficialProxyAPI - Uses an unofficial proxy server to access ChatGPT's backend API in a way that circumvents Cloudflare (uses the real ChatGPT and is pretty lightweight, but relies on a third-party server and is rate-limited)
    Downloads: 2 This Week
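
    Under the hood, ChatGPTAPI calls OpenAI's official chat completions endpoint. As a rough illustration of that underlying call (not this Node.js package's own API), here is a minimal sketch using the openai Python library's 0.x-era interface; the prompt text is made up, and openai 1.x has since reorganized this interface.

    ```python
    # Minimal sketch of the official chat completions call that ChatGPTAPI wraps.
    # Assumes the openai Python package (0.x API) and OPENAI_API_KEY in the environment.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model family named in the description
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Say hello from a BSD box."},
        ],
    )
    print(response["choices"][0]["message"]["content"])
    ```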
  • 7
    HyperGAN

    Composable GAN framework with api and user interface

    A composable GAN built for developers, researchers, and artists. HyperGAN builds generative adversarial networks in PyTorch and makes them easy to train and share. HyperGAN is currently in pre-release and open beta, and everyone will have different goals when using it. We are still searching for a default cross-data-set configuration. Each of the examples supports search, and automated search can help find good configurations; the examples are capable of (sometimes) finding a good trainer, as with 2d-distribution. If you are unsure, you can start with the 2d-distribution.py example. Check out random_search.py for possibilities; you'll likely want to modify it. Mixing and matching components seems to work.
    Downloads: 2 This Week
  • 8
    LangChain

    ⚡ Building applications with LLMs through composability ⚡

    Large language models (LLMs) are emerging as a transformative technology, enabling developers to build applications that they previously could not. But using these LLMs in isolation is often not enough to create a truly powerful app - the real power comes when you can combine them with other sources of computation or knowledge. This library is aimed at assisting in the development of those types of applications.
    Downloads: 2 This Week
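
    For a sense of what composing an LLM with other components looks like, here is a minimal sketch against an early (0.0.x-era) LangChain API; module paths have since been reorganized, and the prompt text is purely illustrative.

    ```python
    # Minimal LangChain sketch: compose a prompt template and an LLM into a chain.
    # Assumes an early langchain release (0.0.x-era imports) and OPENAI_API_KEY set.
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    llm = OpenAI(temperature=0.7)  # wraps the OpenAI completions API

    prompt = PromptTemplate(
        input_variables=["os_name"],
        template="Name three generative AI projects that run well on {os_name}.",
    )

    chain = LLMChain(llm=llm, prompt=prompt)  # prompt + model as one composable unit
    print(chain.run(os_name="FreeBSD"))
    ```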
  • 9
    MusicLM - Pytorch

    Implementation of MusicLM music generation model in Pytorch

    Implementation of MusicLM, Google's new SOTA model for music generation using attention networks, in Pytorch. They are basically using text-conditioned AudioLM, but surprisingly with the embeddings from a text-audio contrastive learned model named MuLan. MuLan is what will be built out in this repository, with AudioLM modified from the other repository to support the music generation needs here.
    Downloads: 2 This Week
  • 10
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend you use the Hugging Face transformers library instead which supports GPT-NeoX models.
    Downloads: 1 This Week
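
    As the description notes, generic inference is better served by the Hugging Face transformers library, which supports GPT-NeoX checkpoints. A minimal sketch, assuming transformers and torch are installed and the 20B-parameter weights fit in memory:

    ```python
    # Minimal sketch: run a GPT-NeoX checkpoint through Hugging Face transformers,
    # as the description recommends for generic inference needs.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/gpt-neox-20b"  # 20B parameters; needs substantial memory
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("EleutherAI trains open language models because", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```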
  • 11
    Edge GPT

    Reverse engineered API of Microsoft's Bing Chat

    Reverse-engineered API for Microsoft's Bing Chat, covering the chat feature of the new version of Bing. Requirements: Python 3.8+ and a Microsoft account with Bing Chat access.
    Downloads: 0 This Week
  • 12
    LlamaIndex

    Central interface to connect your LLMs with external data

    LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data. LlamaIndex is a simple, flexible interface between your external data and LLMs, providing a set of tools in an easy-to-use fashion.
    Downloads: 0 This Week
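
    A minimal sketch of the connect-your-data workflow, assuming a llama_index release that exposes VectorStoreIndex and SimpleDirectoryReader (class names have shifted across versions), an OPENAI_API_KEY in the environment, and some documents under ./data:

    ```python
    # Minimal LlamaIndex sketch: index local documents and query them with an LLM.
    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("data").load_data()  # load external data
    index = VectorStoreIndex.from_documents(documents)     # embed and index it

    query_engine = index.as_query_engine()
    response = query_engine.query("Summarize these documents in two sentences.")
    print(response)
    ```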
  • 13
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline), and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework that helps enterprises overcome the challenges of building and training sophisticated natural language processing models with billions or trillions of parameters.
    Downloads: 0 This Week
  • 14
    revChatGPT

    Reverse engineered ChatGPT API

    Reverse Engineered ChatGPT API by OpenAI. Extensible for chatbots etc. This is not an official OpenAI product.
    Downloads: 0 This Week
  • 15
    terminalGPT

    Get GPT like ChatGPT on your terminal

    Get a GPT-like ChatGPT experience on your terminal. Note: this doesn't use OpenAI ChatGPT; it uses the text-davinci-003 model by default, so you'll need your own OpenAI API key to operate this package. To get a key, go to https://beta.openai.com, open your profile menu and go to View API Keys, select "+ Create new secret key", and copy the generated key. Get started by installing tgpt with npm -g install terminalgpt or yarn global add terminalgpt, then run tgpt chat. If it is your first time running it, it will ask for your OpenAI key; paste the key you generated in the prerequisite steps.
    Downloads: 0 This Week