AI Image Generators for Linux

Browse free open source AI Image Generators and projects for Linux below. Use the toggles on the left to filter open source AI Image Generators by OS, license, language, programming language, and project status.

  • 1
    Intelligent Java

    Integrate with the latest language models, image generation and speech

    Intelligent Java (IntelliJava) is the ultimate tool for integrating with the latest language models and deep learning frameworks using Java. The library provides intuitive functions for sending input to models like ChatGPT and DALL·E and receiving the generated text, speech, or images. With just a few lines of code, you can easily access the power of cutting-edge AI models to enhance your projects. Access ChatGPT and GPT-3 to generate text and DALL·E to generate images. OpenAI is preferred for quality results without tuning. Cohere lets you build a language model suited to your specific needs. Generate audio from text via DeepMind's speech models. The only dependency is GSON, which must be added manually when using the IntelliJava jar; if you import the repo through Maven, however, it handles the dependencies.
    Downloads: 12 This Week
    Last Update:
    See Project
  • 2
    InvokeAI

    InvokeAI is a leading creative engine for Stable Diffusion models

    InvokeAI is an implementation of Stable Diffusion, the open source text-to-image and image-to-image generator. It provides a streamlined process with various new features and options to aid the image generation process. It runs on Windows, Mac, and Linux machines, on GPU cards with as little as 4 GB of VRAM. InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading web interface and an interactive command-line interface, and also serves as the foundation for multiple commercial products. This fork is supported across Linux, Windows, and Macintosh. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver). We do not recommend the GTX 1650 or 1660 series video cards: they are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.
    Downloads: 12 This Week
    Last Update:
    See Project
  • 3
    Stable Diffusion v 2.1 web UI

    Lightweight Stable Diffusion v 2.1 web UI: txt2img, img2img, depth2img

    Lightweight Stable Diffusion v2.1 web UI: txt2img, img2img, depth2img, inpainting, and 4x upscaling. A Gradio app for Stable Diffusion 2 by Stability AI, built on the Hugging Face Diffusers implementation. Currently supported pipelines are text-to-image, image-to-image, inpainting, upscaling, and depth-to-image; a minimal sketch of the underlying text-to-image pipeline is shown below.
    Downloads: 9 This Week
    Last Update:
    See Project
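    For reference, here is a minimal sketch of a Diffusers text-to-image call like the one this web UI wraps (assumptions: the stabilityai/stable-diffusion-2-1 checkpoint and a CUDA GPU; the app's own code may differ):

```python
# Minimal text-to-image sketch with Hugging Face Diffusers (assumes a CUDA GPU
# and the stabilityai/stable-diffusion-2-1 checkpoint; not the web UI's own code).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```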
  • 4
    Stable-Dreamfusion

    Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion

    A PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model. This project is a work in progress and differs from the paper in many respects. The current generation quality cannot match the results from the original paper, and many prompts still fail badly! Since the Imagen model is not publicly available, we use Stable Diffusion to replace it (implementation from diffusers). Unlike Imagen, Stable Diffusion is a latent diffusion model, which diffuses in a latent space instead of the original image space. Therefore, the loss also needs to propagate back through the VAE's encoder, which adds extra time cost in training; an illustrative sketch of this score-distillation step appears below. We use the multi-resolution grid encoder to implement the NeRF backbone (implementation from torch-ngp), which enables much faster rendering.
    Downloads: 7 This Week
    Last Update:
    See Project
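    To make the latent-space point above concrete, here is an illustrative sketch of a single score-distillation (SDS) step, assuming Diffusers-style vae, unet, and scheduler objects and a differentiable NeRF render; this is not the repository's actual code, and the timestep weighting and classifier-free guidance it uses are omitted:

```python
# Illustrative score-distillation (SDS) step; `vae`, `unet`, and `scheduler` are assumed
# to be Hugging Face Diffusers components, `rendered_rgb` a differentiable render in [0, 1].
import torch

def sds_loss(vae, unet, scheduler, rendered_rgb, text_embeddings):
    # Stable Diffusion is a latent diffusion model, so the NeRF render must pass
    # through the VAE encoder, and gradients flow back through it as well.
    latents = vae.encode(rendered_rgb * 2.0 - 1.0).latent_dist.sample() * 0.18215

    t = torch.randint(20, 980, (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    noisy_latents = scheduler.add_noise(latents, noise, t)

    with torch.no_grad():  # the diffusion U-Net itself stays frozen
        noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_embeddings).sample

    # The SDS gradient is (predicted noise - injected noise); reparameterize the loss so
    # that backprop delivers exactly this gradient to the latents (and hence the NeRF).
    grad = noise_pred - noise
    return (grad.detach() * latents).sum()
```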
  • 5
    Disco Diffusion

    Notebooks, models and techniques for the generation of AI Art

    A frankensteinian amalgamation of notebooks, models, and techniques for the generation of AI art and animations. This project uses a special conversion tool, Colab-Convert, to turn the Python files into notebooks for easier development, which means you do not have to touch the notebook directly to make changes to it. Initial quality-of-life improvements have been added, including a user-friendly UI, settings and prompt saving, and improved Google Drive folder organization. It now includes sizing options, intermediate saves, fixed image prompts, and Perlin inits; the unexposed batch option has been removed since it doesn't work.
    Downloads: 6 This Week
    Last Update:
    See Project
  • 6
    Big Sleep

    A simple command line tool for text to image generation

    A simple command line tool for text-to-image generation, using OpenAI's CLIP and a BigGAN. Ryan Murdock has done it again, combining OpenAI's CLIP and the generator from a BigGAN! This repository wraps up his work so it is easily accessible to anyone who owns a GPU. You will be able to have the GAN dream up images from natural language with a one-line command in the terminal; a minimal sketch of the Python API appears below. A user-made notebook with bug fixes and added features, such as Google Drive integration, is also available. Images are saved to wherever the command is invoked. If you have enough memory, you can also try using a bigger vision model released by OpenAI for improved generations. You can restrict the number of BigGAN classes Big Sleep uses with the --max-classes flag (e.g., 15 classes). This may lead to extra stability during training, at the cost of some expressivity.
    Downloads: 3 This Week
    Last Update:
    See Project
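    A minimal usage sketch of the Python API, following the project README (install with pip install big-sleep; parameter names may differ between versions):

```python
# Minimal Big Sleep sketch; parameter names are taken from the project
# README and may differ between installed versions.
from big_sleep import Imagine

dream = Imagine(
    text="a pyramid made of ice",  # natural-language prompt
    lr=5e-2,                       # learning rate of the latent optimization
    save_every=25,                 # save an intermediate image every N iterations
    save_progress=True,            # keep the intermediate images on disk
)
dream()  # images are written to the current working directory
```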
  • 7
    Dream Textures

    Stable Diffusion built-in to Blender

    Create textures, concept art, background assets, and more with a simple text prompt. Use the 'Seamless' option to create textures that tile perfectly with no visible seam. Texture entire scenes with 'Project Dream Texture' and depth-to-image. Re-style animations with the Cycles render pass. Run the models on your own machine to iterate without slowdowns from a service. Learn how to use the various configuration options to get exactly what you're looking for. Texture entire models and scenes with depth-to-image, inpaint to fix up images and convert existing textures into seamless ones automatically, and outpaint to increase the size of an image by extending it in any direction. Perform style transfer and create novel animations with Stable Diffusion as a post-processing step. Dream Textures has been tested with CUDA and Apple Silicon GPUs; over 4 GB of VRAM is recommended.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 8
    OpenAI DALL·E AsyncImage SwiftUI

    Swift async text-to-image for SwiftUI apps using OpenAI

    SwiftUI views that asynchronously load and display an OpenAI image from the OpenAI API. You just type in your idea and the AI gives you an art solution. DALL-E and DALL-E 2 are deep learning models developed by OpenAI to generate digital images from natural language descriptions, called "prompts". You need Xcode 13 installed in order to have access to the Documentation Compiler (DocC). OpenAI's text-to-image model DALL-E 2 is a recent example of diffusion models: it uses them for both the model's prior (which produces an image embedding given a text caption) and the decoder that generates the final image. In machine learning, diffusion models, also known as diffusion probabilistic models, are a class of latent variable models. They are Markov chains trained using variational inference. The goal of diffusion models is to learn the latent structure of a dataset by modeling the way data points diffuse through the latent space.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 9
    video-subtitle-remover

    AI-based tool for removing hardsubs and text-like watermarks

    Video-subtitle-remover (VSR) is AI-based software that removes hardcoded subtitles from videos or pictures.
    Downloads: 59 This Week
    Last Update:
    See Project
  • 10
    DALL·E Mini

    Generate images from a text prompt

    DALL·E Mini, generate images from a text prompt. OpenAI had the first impressive model for generating images with DALL·E. Craiyon/DALL·E mini is an attempt at reproducing those results with an open-source model. The model is trained by looking at millions of images from the internet with their associated captions. Over time, it learns how to draw an image from a text prompt. Some concepts are learned from memory as they may have seen similar images. However, it can also learn how to create unique images that don't exist, such as "the Eiffel tower is landing on the moon," by combining multiple concepts together. Optimizer updated to Distributed Shampoo, which proved to be more efficient following comparison of different optimizers. New architecture based on NormFormer and GLU variants following comparison of transformer variants, including DeepNet, Swin v2, NormFormer, Sandwich-LN, RMSNorm with GeLU/Swish/SmeLU.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 11
    Diffusers

    State-of-the-art diffusion models for image and audio generation

    Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, Diffusers is a modular toolbox that supports both. Our library is designed with a focus on usability over performance, simple over easy, and customizability over abstractions. State-of-the-art diffusion pipelines can be run in inference with just a few lines of code; a short example of loading a pipeline and swapping schedulers appears below. Interchangeable noise schedulers trade off diffusion speed against output quality. Pretrained models can be used as building blocks and combined with schedulers to create your own end-to-end diffusion systems. We recommend installing Diffusers in a virtual environment from PyPI or Conda. For more details about installing PyTorch and Flax, please refer to their official documentation.
    Downloads: 2 This Week
    Last Update:
    See Project
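    A short example of the two points above, loading a pretrained pipeline and swapping in a different scheduler (assumptions: the runwayml/stable-diffusion-v1-5 checkpoint and a CUDA GPU):

```python
# Load a pretrained pipeline and swap in a faster noise scheduler with Diffusers.
# Assumes `pip install diffusers transformers accelerate` and a CUDA GPU.
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Schedulers are interchangeable: trade diffusion speed against output quality.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

image = pipe("an astronaut riding a horse on the moon", num_inference_steps=25).images[0]
image.save("astronaut.png")
```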
  • 12
    Diffusion WebUI Colab

    Choose your diffusion models and spin up a WebUI on Colab in one click

    The simplest Colab, with most models included by default; custom models can be added easily. Stable Diffusion 2.0 is in the testing phase. Choose your diffusion models and spin up a WebUI on Colab in one click. Share your generations on our Mastodon server (hosted by a third party; I am not associated with the instance in any way). The instructions are on the Colab.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 13
    flat

    All-in-one image generation AI

    All-in-one image generation AI. Launch StableDiffusionWebUI with just a few clicks. No Python installation or repository cloning is required. Displays generated images in a list with information such as prompts. The image folder can be set freely.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 14
    AI Image Generator

    Create images from text or modify existing images.

    This AI Image Generator can create new images based on text descriptions or transform existing images. It can generate pictures of anything you describe, such as people, places, or objects, in various styles and settings. If you use your own image, the AI can edit or enhance it by following new instructions. The generator works in many contexts, whether you need a completely new picture or want to adjust an old one. It's simple to use and provides a creative way to visualize ideas quickly.
    Downloads: 53 This Week
    Last Update:
    See Project
  • 15
    Core ML Stable Diffusion

    Core ML Stable Diffusion

    Stable Diffusion with Core ML on Apple Silicon

    Run Stable Diffusion on Apple Silicon with Core ML. The project comprises python_coreml_stable_diffusion, a Python package for converting PyTorch models to Core ML format and performing image generation with Hugging Face diffusers in Python, and StableDiffusion, a Swift package that developers can add to their Xcode projects as a dependency to deploy image generation capabilities in their apps. The Swift package relies on the Core ML model files generated by python_coreml_stable_diffusion. Hugging Face ran the conversion procedure on several models and made the Core ML weights publicly available on the Hub. If you would like to convert a version of Stable Diffusion that is not already available on the Hub, please refer to the Converting Models to Core ML section. Log in to or register for your Hugging Face account, generate a User Access Token, and use it to set up Hugging Face API access by running huggingface-cli login in a Terminal window.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16
    DALL-E 2 - Pytorch

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch. The main novelty seems to be an extra layer of indirection with the prior network (whether it is an autoregressive transformer or a diffusion network), which predicts an image embedding based on the text embedding from CLIP. Specifically, this repository will only build out the diffusion prior network, as it is the best performing variant (though it incidentally involves a causal transformer as the denoising network). Training DALL-E 2 is a three-step process, with the training of CLIP being the most important. To train CLIP, you can either use the x-clip package or join the LAION discord, where a lot of replication efforts are already underway. Then you will need to train the decoder, which learns to generate images based on the image embedding coming from the trained CLIP.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 17
    Karlo

    Text-conditional image generation model based on OpenAI's unCLIP

    Karlo is a text-conditional image generation model based on OpenAI's unCLIP architecture, with an improved super-resolution model that upscales from 64px to 256px and recovers high-frequency details in only a small number of denoising steps. We train all components from scratch on 115M image-text pairs including COYO-100M, CC3M, and CC12M. For the prior and decoder, we use the ViT-L/14 provided by OpenAI's CLIP repository. Unlike the original implementation of unCLIP, we replace the trainable transformer in the decoder with the text encoder of ViT-L/14 for efficiency. For the SR module, we first train the model using the DDPM objective for 1M steps, followed by an additional 234K steps to fine-tune the additional component.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    OpenAI Web Application

    A web application that allows users to interact with OpenAI's models

    A web application that allows users to interact with OpenAI's models through a simple and user-friendly interface. This app is for demo purposes, to test the OpenAI API, and may contain issues or bugs. It offers a user-friendly interface for making requests to the OpenAI API, with responses displayed in a chat-like format. Select models (Davinci, Codex, DALL·E, Whisper) based on your needs. Create AI images (DALL·E) and transcribe audio to text (Whisper); a sketch of the underlying image-generation API call appears below. Code syntax is highlighted. Type in the input field and press Enter or click the send button to make a request to the OpenAI API; use Ctrl+Enter to add line breaks in the input field. Responses are displayed in the chat-like format at the top of the page. Generate code, including translating natural language to code, take advantage of the DALL·E models to generate AI images, and use the Whisper model to transcribe audio into text.
    Downloads: 1 This Week
    Last Update:
    See Project
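    The app itself is a JavaScript front end; as an illustration of the kind of image-generation request it wraps, here is a sketch using OpenAI's official Python SDK (the model name and size are example values; check the current API reference):

```python
# Sketch of a DALL·E image request with OpenAI's Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment; model and size are examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="a low-poly fox sitting in a snowy forest",
    n=1,
    size="512x512",
)
print(response.data[0].url)  # URL of the generated image
```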
  • 19
    VQGAN-CLIP web app

    Local image generation using VQGAN-CLIP or CLIP guided diffusion

    VQGAN-CLIP has been in vogue for generating art using deep learning. Searching the r/deepdream subreddit for VQGAN-CLIP yields quite a number of results. Basically, VQGAN can generate pretty high-fidelity images, while CLIP can produce relevant captions for images. Combined, VQGAN-CLIP can take prompts from human input and iterate to generate images that fit the prompts. Thanks to the generosity of creators sharing notebooks on Google Colab, the VQGAN-CLIP technique has seen widespread circulation. However, for regular usage across multiple sessions, I prefer a local setup that can be started up rapidly; hence this simple Streamlit app for generating VQGAN-CLIP images in a local environment. Be advised that you need a beefy GPU with lots of VRAM to generate images large enough to be interesting. (Hello, Quadro owners!)
    Downloads: 1 This Week
    Last Update:
    See Project
  • 20
    pdf-extractor

    Node.js module for rendering pdf pages to images, svgs and HTML files

    Pdf-extractor is a wrapper around pdf.js for generating images, SVGs, HTML files, text files, and JSON files from a PDF on node.js. A DOM canvas is used to render and export the graphical layer of the PDF. The canvas exports *.png by default but can be extended to export other file types such as .jpg. PDF objects are converted to SVG using the SVGGraphics parser of pdf.js. PDF text is converted to HTML, which can be used as a (transparent) layer over the image to enable text selection. PDF text is also extracted to a text file for other uses (e.g., indexing the text). This library is, in its most basic form, a node.js wrapper for pdf.js. It has default renderers to generate standard output, but it is easily extended to incorporate custom logic or to generate different output. It uses a node.js DOM and the node domstub from pdf.js to make PDF parsing available on node.js without a browser.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 21
    pwa-asset-generator

    Automates PWA asset generation and image declaration

    Automates PWA asset generation and image declaration. It automatically generates icon and splash screen images, favicons, and mstile images, and updates manifest.json and index.html with the generated images according to the Web App Manifest specs and Apple Human Interface guidelines. When you build a PWA with the goal of providing native-like experiences on multiple platforms and stores, you need to meet the criteria of those platforms and stores with your PWA assets: icon sizes and splash screens. Google's Android platform respects the Web App Manifest API specs and expects you to provide at least two icon sizes in your manifest file. Apple's iOS currently doesn't support the Web App Manifest API specs, so you need to introduce custom HTML tags to set icons and splash screens for your PWA: a special HTML link tag with rel="apple-touch-icon" provides icons for your PWA when it's added to the home screen.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 22
    ruDALL-E

    Generate images from texts. In Russian

    We present a family of generative models from SberDevices and Sber AI! The models allow you to create images that did not exist before; all you need is a text description in Russian or another language. Try creating unique images together with generative artists using your own formulations, or ask the generative artists to depict something special for you. The Kandinsky 2.0 model uses the reverse diffusion method and creates colorful images on various topics in a matter of seconds from a text query in Russian or other languages; you can even combine different languages within a single query. This neural network was developed and trained by Sber AI researchers in close collaboration with scientists from the Artificial Intelligence Research Institute, using joint datasets from Sber AI and SberDevices. A Russian text-to-image model that generates images from text; the architecture is the same as ruDALL-E XL, with even more parameters in the new version.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 23
    website-to-gif

    Turn your website into a GIF

    This GitHub Action automatically creates an animated GIF or WebP from a given web page to display on your project README (or anywhere else). In your GitHub repo, create a workflow file or extend an existing one; you also have to include steps to check out and commit to the repo. You can use the example gif.yml; make sure to modify the url value and add any other inputs you want to use. WebP rendering takes a lot of time, in order to benefit from lossless quality and file size optimization.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 24
    AI Models Downloads

    Open source AI models: your one-stop destination for AI model downloads

    The download is a sample voice pack trained and used with Coqui TTS. AIModels.org provides a comprehensive directory of AI models for download, ranging from text-to-speech and large language models to text-to-image models. Our goal is to provide you with easy access to a vast array of pre-trained models, complete with examples and demonstrations. AI models have come a long way, empowering various applications that improve our lives on a daily basis. Here are just a few examples of the capabilities available. Text-to-speech: convert written text into audible speech, enhancing user experience or assisting those with limited reading abilities. Large language models: generate human-like responses to textual input, providing a transformative basis for chatbots, cohesive summaries, and much more.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 25
    A Netflix film cover generator Nuxt.js

    A tool for generating Netflix show images

    We love Netflix, but we love memes even more, so we thought we'd help Netflix with their UI/UX testing by building a tool that can easily create show images, with an export-to-PNG function. You can visit the demo website hosted on Netlify. This is an open-source tool and it is available on GitHub. The tool gives you a fully editable canvas where you can edit the content, text position, text dimensions, gradient position, and background image. To change an element's position, just click and drag it anywhere. To change the content inside an element, double-click on it; a textarea appears where you can edit, and you confirm the changes by clicking elsewhere or pressing Enter. To change the background image, drag and drop any image onto the canvas.
    Downloads: 0 This Week
    Last Update:
    See Project