Showing 41 open source projects for "compiler python linux"

  • 1
    DALL·E Mini

    Generate images from a text prompt

    DALL·E Mini generates images from a text prompt. OpenAI had the first impressive model for generating images, DALL·E; Craiyon/DALL·E Mini is an attempt at reproducing those results with an open-source model. The model is trained on millions of images from the internet together with their associated captions. Over time, it learns how to draw an image from a text prompt. Some concepts are reproduced from memory, as the model may have seen similar images during training. However, it can also learn how to create...
    Downloads: 2 This Week
  • 2
    ruDALL-E

    Generate images from texts. In Russian

    A family of generative models from SberDevices and Sber AI. The models let you create images that did not exist before; all you need is a text description in Russian or another language. Try creating unique images from your own wording, or ask the models to depict something special for you. The Kandinsky 2.0 model uses the reverse diffusion method and creates colorful images on various topics in a matter of seconds by text...
    Downloads: 0 This Week
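
    A minimal generation sketch in Python for the entry above, assuming the rudalle PyPI package and its README-style helpers (get_rudalle_model, get_tokenizer, get_vae, generate_images); exact names and signatures may differ between releases:
    ```python
    import torch
    from rudalle import get_rudalle_model, get_tokenizer, get_vae  # assumed imports
    from rudalle.pipelines import generate_images

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = get_rudalle_model('Malevich', pretrained=True, fp16=True, device=device)
    tokenizer = get_tokenizer()
    vae = get_vae().to(device)

    # prompt in Russian: "a rainbow over a night city"
    pil_images, _ = generate_images(
        'радуга на фоне ночного города', tokenizer, model, vae,
        top_k=1024, top_p=0.975, images_num=3,
    )
    pil_images[0].save('rudalle.png')
    ```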
  • 3
    AI Atelier

    A Chinese & English version of the AI art creation software, based on Disco Diffusion

    Based on Disco Diffusion, we have developed a Chinese & English version of the AI art creation software "AI Atelier". We offer both Text-To-Image models (Disco Diffusion and VQGAN+CLIP) and Text-To-Text models (GPT-J-6B and GPT-NeoX-20B) as options. The software is distributed under a license requiring that the complete source code of licensed works and modifications, including larger works using a licensed work, be made available under the same license, with copyright and license notices preserved.
    Downloads: 0 This Week
  • 4
    RQ-Transformer

    Implementation of RQ Transformer, autoregressive image generation

    Implementation of RQ Transformer, which proposes a more efficient way of training multi-dimensional sequences autoregressively. This repository will only contain the transformer for now. You can use this vector quantization library for the residual VQ. This type of axial autoregressive transformer should be compatible with memcodes, proposed in NWT. It would likely also work well with multi-headed VQ. I also think there is something deeper going on, and have generalized this to any number of...
    Downloads: 0 This Week
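
    A minimal training-step sketch for the entry above; the constructor and argument names (max_spatial_seq_len, depth_seq_len, and so on) are assumed from the repository's README-style usage and may differ:
    ```python
    import torch
    from rq_transformer import RQTransformer  # assumed import path

    model = RQTransformer(
        num_tokens = 16000,          # shared codebook size
        dim = 512,
        max_spatial_seq_len = 1024,  # number of spatial positions
        depth_seq_len = 4,           # residual-quantization depth per position
        spatial_layers = 8,
        depth_layers = 4,
    )

    # token ids of shape (batch, spatial positions, RQ depth)
    ids = torch.randint(0, 16000, (1, 1024, 4))
    loss = model(ids, return_loss = True)  # autoregressive cross-entropy loss
    loss.backward()
    ```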
  • 5
    GANformer

    Generative Adversarial Transformers

    This is an implementation of the GANformer model, a novel and efficient type of transformer explored for the task of image generation. The network employs a bipartite structure that enables long-range interactions across the image while maintaining linear computational efficiency, so it can readily scale to high-resolution synthesis. The model iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of...
    Downloads: 0 This Week
  • 6
    Deep Feature Rotation Multimodal Image

    Implementation of Deep Feature Rotation for Multimodal Image

    Official implementation of the paper "Deep Feature Rotation for Multimodal Image Style Transfer" [NICS'21]. We propose Deep Feature Rotation (DFR), a simple method for representing style features in many ways while still achieving stylization as effective as more complex style-transfer methods. Our approach is representative of the many ways of augmenting an intermediate feature embedding without consuming too much computational expense. Prepare your content image and style...
    Downloads: 0 This Week
  • 7
    Big Sleep

    A simple command line tool for text to image generation

    A simple command line tool for text-to-image generation, using OpenAI's CLIP and a BigGAN. Ryan Murdock has done it again, combining OpenAI's CLIP and the generator from a BigGAN. This repository wraps up his work so it is easily accessible to anyone who owns a GPU. You will be able to have the GAN dream up images using natural language with a one-line command in the terminal. A user-made notebook with bug fixes and added features, such as Google Drive integration, is also available. Images will be saved to...
    Downloads: 0 This Week
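
    A minimal Python sketch of the one-line idea described above, using the package's Imagine class (parameter values are illustrative):
    ```python
    from big_sleep import Imagine

    # optimizes BigGAN latents against CLIP's match score for the prompt,
    # periodically saving the current image to disk
    dream = Imagine(
        text = "a pyramid made of ice",
        lr = 5e-2,
        save_every = 25,
        save_progress = True,
    )
    dream()
    ```
    The package's README also documents an equivalent one-line dream command for the terminal.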
  • 8
    VQGAN-CLIP web app

    Local image generation using VQGAN-CLIP or CLIP guided diffusion

    VQGAN-CLIP has been in vogue for generating art using deep learning. Searching the r/deepdream subreddit for VQGAN-CLIP yields quite a number of results. Basically, VQGAN can generate fairly high-fidelity images, while CLIP can score how well an image matches a text description. Combined, VQGAN-CLIP can take prompts from human input and iterate to generate images that fit the prompts. Thanks to the generosity of creators sharing notebooks on Google Colab, the VQGAN-CLIP technique has seen widespread...
    Downloads: 0 This Week
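
    The iterate-against-CLIP idea described above, as a self-contained conceptual sketch: a raw pixel tensor stands in for the VQGAN latents that the real technique optimizes, and OpenAI's clip package scores the match to the prompt (CLIP input normalization is omitted for brevity):
    ```python
    import torch
    import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, _ = clip.load("ViT-B/32", device=device)
    model = model.float()  # keep everything in fp32 for a simple example

    with torch.no_grad():
        text_feat = model.encode_text(clip.tokenize(["a watercolor forest"]).to(device))

    # Stand-in for VQGAN: optimize a 224x224 image directly. VQGAN-CLIP instead
    # optimizes latent codes that VQGAN decodes into an image at each step.
    image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
    opt = torch.optim.Adam([image], lr=0.05)

    for step in range(200):
        opt.zero_grad()
        img_feat = model.encode_image(image.clamp(0, 1))
        # move the image toward the prompt by maximizing cosine similarity
        loss = -torch.cosine_similarity(img_feat, text_feat).mean()
        loss.backward()
        opt.step()
    ```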
  • 9
    CLIP Guided Diffusion

    A CLI tool/python module for generating images from text

    A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI. Supports text-to-image generation with multiple weighted prompts. Non-square generations (experimental): generate portrait or landscape images by specifying a number to offset the width and/or height. Fewer timesteps can be used over the same diffusion schedule, sacrificing accuracy/alignment for quicker runtime; options are 25, 50, 150, 250, 500, 1000, ddim25, ddim50, ddim150, ddim250, ddim500, ddim1000 (default...
    Downloads: 0 This Week
  • 10
    Deep Exemplar-based Video Colorization

    The source code of CVPR 2019 paper "Deep Exemplar-based Colorization"

    The source code of CVPR 2019 paper "Deep Exemplar-based Video Colorization". End-to-end network for exemplar-based video colorization. The main challenge is to achieve temporal consistency while remaining faithful to the reference style. To address this issue, we introduce a recurrent framework that unifies the semantic correspondence and color propagation steps. Both steps allow a provided reference image to guide the colorization of every frame, thus reducing accumulated propagation...
    Downloads: 6 This Week
  • 11
    FLUX.1-dev

    Powerful 12B parameter model for top-tier text-to-image creation

    FLUX.1-dev is a powerful 12-billion parameter rectified flow transformer designed for generating high-quality images from text prompts. It delivers cutting-edge output quality, just slightly below the flagship FLUX.1 [pro] model, and matches or exceeds many closed-source competitors in prompt adherence. The model is trained using guidance distillation, making it more efficient and accessible for developers and artists alike. FLUX.1-dev is openly available with weights provided to support...
    Downloads: 0 This Week
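
    A minimal sketch with Hugging Face's diffusers library, which ships a FluxPipeline in recent releases (the weights are gated behind license acceptance on the Hub):
    ```python
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trade speed for lower VRAM use

    image = pipe(
        "a tiny astronaut hatching from an egg on the moon",
        guidance_scale=3.5,      # guidance-distilled models work at low values
        num_inference_steps=50,
        generator=torch.Generator("cpu").manual_seed(0),
    ).images[0]
    image.save("flux-dev.png")
    ```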
  • 12
    stable-diffusion-v1-4

    Text-to-image diffusion model for high-quality image generation

    stable-diffusion-v1-4 is a high-performance text-to-image latent diffusion model developed by CompVis. It generates photo-realistic images from natural language prompts using a pretrained CLIP ViT-L/14 text encoder and a UNet-based denoising architecture. This version builds on v1-2, fine-tuned over 225,000 steps at 512×512 resolution on the “laion-aesthetics v2 5+” dataset, with 10% text-conditioning dropout for improved classifier-free guidance. It is optimized for use with Hugging Face’s...
    Downloads: 0 This Week
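
    Picking up the truncated thought above: a minimal sketch of the standard documented usage with Hugging Face's diffusers library (assumes a CUDA GPU):
    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("a photograph of an astronaut riding a horse").images[0]
    image.save("astronaut.png")
    ```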
  • 13
    stable-diffusion-xl-base-1.0

    Advanced base model for high-quality text-to-image generation

    stable-diffusion-xl-base-1.0 is a next-generation latent diffusion model developed by Stability AI for producing highly detailed images from text prompts. It forms the core of the SDXL pipeline and can be used on its own or paired with a refinement model for enhanced results. This base model utilizes two pretrained text encoders—OpenCLIP-ViT/G and CLIP-ViT/L—for richer text understanding and improved image quality. The model supports two-stage generation, where the base model creates initial...
    Downloads: 0 This Week
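
    The two-stage generation mentioned above looks roughly like this with diffusers, handing the base model's latents to the refiner (the denoising_end/denoising_start split follows the documented ensemble-of-experts pattern):
    ```python
    import torch
    from diffusers import DiffusionPipeline

    base = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    ).to("cuda")
    refiner = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    ).to("cuda")

    prompt = "a majestic lion jumping from a big stone at night"
    # the base model handles the first 80% of denoising and emits latents...
    latents = base(prompt, denoising_end=0.8, output_type="latent").images
    # ...which the refiner finishes into the final image
    image = refiner(prompt, denoising_start=0.8, image=latents).images[0]
    image.save("sdxl.png")
    ```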
  • 14
    stable-diffusion-3-medium

    Efficient text-to-image model with enhanced quality and typography

    Stable Diffusion 3 Medium is a next-generation text-to-image model by Stability AI, designed using a Multimodal Diffusion Transformer (MMDiT) architecture. It offers notable improvements in image quality, prompt comprehension, typography, and computational efficiency over previous versions. The model integrates three fixed, pretrained text encoders—OpenCLIP-ViT/G, CLIP-ViT/L, and T5-XXL—to interpret complex prompts more effectively. Trained on 1 billion synthetic and filtered public images,...
    Downloads: 0 This Week
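
    A minimal diffusers sketch for the entry above; since the T5-XXL encoder dominates memory use, this shows the documented option of dropping it (text_encoder_3=None) at a small quality cost:
    ```python
    import torch
    from diffusers import StableDiffusion3Pipeline

    # drop the memory-hungry T5-XXL encoder and its tokenizer
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3-medium-diffusers",
        text_encoder_3=None, tokenizer_3=None,
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(
        "a photo of a cat holding a sign that says hello world",
        num_inference_steps=28, guidance_scale=7.0,
    ).images[0]
    image.save("sd3.png")
    ```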
  • 15
    stable-diffusion-2-1

    Latent diffusion model for high-quality text-to-image generation

    Stable Diffusion 2.1 is a text-to-image generation model developed by Stability AI, building on the 768-v architecture with additional fine-tuning for improved safety and image quality. It uses a latent diffusion framework that operates in a compressed image space, enabling faster and more efficient image synthesis while preserving detail. The model is conditioned on text prompts via the OpenCLIP-ViT/H encoder and supports generation at resolutions up to 768×768. Released under the...
    Downloads: 0 This Week
  • 16
    ControlNet

    Extension for Stable Diffusion using edge, depth, pose, and more

    ControlNet is a neural network architecture that enhances Stable Diffusion by enabling image generation conditioned on specific visual structures such as edges, poses, depth maps, and segmentation masks. By injecting these auxiliary inputs into the diffusion process, ControlNet gives users powerful control over the layout and composition of generated images while preserving the style and flexibility of generative models. It supports a wide range of conditioning types through pretrained...
    Downloads: 0 This Week
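
    A minimal diffusers sketch of the Canny-edge conditioning described above (the input URL is a placeholder; opencv-python provides cv2.Canny):
    ```python
    import cv2
    import numpy as np
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image
    from PIL import Image

    # build the conditioning image: a Canny edge map of the source photo
    source = load_image("https://example.com/room.png")  # placeholder URL
    edges = cv2.Canny(np.array(source), 100, 200)
    edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet, torch_dtype=torch.float16,
    ).to("cuda")

    # the edge map fixes layout and composition; the prompt controls content/style
    image = pipe("a futuristic living room at dusk", image=edges).images[0]
    image.save("controlnet.png")
    ```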