Showing 55 open source projects for "using"

  • 1
    DeepMozart

    Audio generation using diffusion models

    Audio generation using diffusion models in PyTorch. The code is based on the audio-diffusion-pytorch repository.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    Minimal text diffusion

    A minimal implementation of diffusion models for text generation

    ...The main idea was to retain just enough code to allow training a simple diffusion model and generating samples, remove image-related terms, and make it easier to use. To train a model, run scripts/train.sh. By default, this trains a model on the simple corpus, but you can point it at any text file with the --train_data argument. Note that you may have to increase the sequence length (--seq_len) if the sequences in your corpus are longer than those in the simple corpus. The other default arguments are set to the best settings I found for the simple corpus.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    Emb-GAM

    An interpretable and efficient predictor using pre-trained models

    ...In contrast, generalized additive models (GAMs) can maintain interpretability but often suffer from poor prediction performance due to their inability to effectively capture feature interactions. In this work, we aim to bridge this gap by using pre-trained neural language models to extract embeddings for each input before learning a linear model in the embedding space. The final model (which we call Emb-GAM) is a transparent, linear function of its input features and feature interactions. Leveraging the language model allows Emb-GAM to learn far fewer linear coefficients, model larger interactions, and generalize well to novel inputs. ... (A minimal sketch of this embed-then-fit-linear recipe follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
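    The embed-then-fit-linear recipe described above can be illustrated in a few lines. The sketch below is not the repository's own API; it is a generic illustration of the idea, and the model name, mean-pooling choice, and toy data are assumptions made for the example.
    ```python
    # Sketch of the Emb-GAM idea: embed each input with a pre-trained language
    # model, then fit an interpretable linear model in the embedding space.
    import torch
    from transformers import AutoModel, AutoTokenizer
    from sklearn.linear_model import LogisticRegression

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")

    def embed(texts):
        """Mean-pool the last hidden state into one fixed-size vector per input."""
        with torch.no_grad():
            batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
            hidden = encoder(**batch).last_hidden_state      # (batch, seq, dim)
            mask = batch["attention_mask"].unsqueeze(-1)     # (batch, seq, 1)
            return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

    # Toy data for illustration only.
    texts = ["great movie", "terrible plot", "loved it", "boring and slow"]
    labels = [1, 0, 1, 0]

    # The final predictor is linear in the embedding space, so its coefficients
    # can be inspected directly.
    clf = LogisticRegression(max_iter=1000).fit(embed(texts), labels)
    print(clf.predict(embed(["what a fantastic film"])))
    ```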
  • 4
    AnimeGAN

    A simple PyTorch implementation of Generative Adversarial Networks

    ...The images are not clean; some outliers can be observed, which degrades the quality of the generated images. Anime-style images of 126 tags were collected from danbooru.donmai.us using the crawler tool gallery-dl, then processed with the anime face detector python-animeface. The resulting dataset contains ~143,000 anime faces. Note that some of the tags may no longer be meaningful after cropping, e.g. the cropped face images under the 'uniform' tag may not contain visible parts of uniforms.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    ruDALL-E

    Generate images from texts. In Russian

    ...The models allow you to create images that did not exist before; all you need is a text description in Russian or another language. Try creating unique images together with the generative artists using your own wording, or ask them to depict something special for you. The Kandinsky 2.0 model uses the reverse diffusion method and creates colorful images on a wide range of topics in a matter of seconds from a text query in Russian or other languages; you can even combine different languages within a single query. This neural network was developed and trained by Sber AI researchers in close collaboration with scientists from the Artificial Intelligence Research Institute, using joint datasets from Sber AI and SberDevices. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 6
    AI Atelier

    A version of the AI art creation software based on Disco Diffusion

    ...We offer both Text-To-Image models (Disco Diffusion and VQGAN+CLIP) and Text-To-Text models (GPT-J-6B and GPT-NeoX-20B) as options. Create 2D and 3D animations, not only still frames (from Disco Diffusion v5 and VQGAN Animations). ...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 7
    RQ-Transformer

    Implementation of RQ Transformer, autoregressive image generation

    Implementation of RQ Transformer, which proposes a more efficient way of training multi-dimensional sequences autoregressively. This repository will only contain the transformer for now. You can use this vector quantization library for the residual VQ. This type of axial autoregressive transformer should be compatible with memcodes, proposed in NWT. It would likely also work well with multi-headed VQ. I also think there is something deeper going on, and have generalized this to any number of...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    Deep Daze

    Simple command line tool for text to image generation

    A simple command-line tool for text-to-image generation using OpenAI's CLIP and SIREN (an implicit neural representation network). In true deep learning fashion, more layers yield better results; the default is 16, but it can be increased to 32 depending on your resources. A technique first devised and shared by Mario Klingemann lets you prime the generator network with a starting image before it is steered towards the text. (A hedged sketch of the Python interface follows this entry.)
    Downloads: 1 This Week
    Last Update:
    See Project
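    A minimal sketch of the Python interface. The Imagine class and keyword names below are assumptions taken from the deep-daze README and may differ between versions.
    ```python
    # Hedged sketch: Imagine and these keyword arguments are assumptions based
    # on the deep-daze README and may vary between releases.
    from deep_daze import Imagine

    imagine = Imagine(
        text="a house in the forest at dawn",  # text prompt steered by CLIP
        num_layers=32,                         # more SIREN layers usually help (default 16)
        save_every=20,                         # periodically save the current image
        save_progress=True,                    # keep intermediate images on disk
    )
    imagine()  # run the optimization loop and write images to the working directory
    ```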
  • 9
    GANformer

    Generative Adversarial Transformers

    ...In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation and can thus be seen as a generalization of the successful StyleGAN network. Pre-trained models are provided (obtained after training for 5-7x fewer steps than StyleGAN2 models; training the models for longer will improve image quality further).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Machine Learning PyTorch Scikit-Learn

    Code Repository for Machine Learning with PyTorch and Scikit-Learn

    ...For those who are interested in knowing what this book covers in general, I’d describe it as a comprehensive resource on the fundamental concepts of machine learning and deep learning. The first half of the book introduces readers to machine learning using scikit-learn, the de facto approach for working with tabular datasets. Then, the second half of the book focuses on deep learning, including applications to natural language processing and computer vision.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 11
    Big Sleep

    A simple command line tool for text to image generation

    A simple command-line tool for text-to-image generation using OpenAI's CLIP and a BigGAN. Ryan Murdock has done it again, combining OpenAI's CLIP and the generator from a BigGAN. This repository wraps up his work so it is easily accessible to anyone who owns a GPU: you can have the GAN dream up images from natural language with a one-line command in the terminal. A user-made notebook with bug fixes and added features, such as Google Drive integration, is also available. (A hedged sketch of the Python interface follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
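    As with Deep Daze, a Python interface exists alongside the one-line terminal command. The sketch below assumes the Imagine class and keyword names from the big-sleep README, which may differ between versions.
    ```python
    # Hedged sketch: the Imagine class and argument names are assumptions taken
    # from the big-sleep README and may vary between releases.
    from big_sleep import Imagine

    dream = Imagine(
        text="a pyramid made of ice",  # natural-language prompt scored by CLIP
        lr=5e-2,                       # learning rate for the latent optimization
        save_every=25,                 # save an image every N iterations
        save_progress=True,            # keep all intermediate images
    )
    dream()  # optimize the BigGAN latents against the CLIP score
    ```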
  • 12
    VQGAN-CLIP web app

    Local image generation using VQGAN-CLIP or CLIP guided diffusion

    VQGAN-CLIP has been in vogue for generating art using deep learning. Searching the r/deepdream subreddit for VQGAN-CLIP yields quite a number of results. Basically, VQGAN can generate fairly high-fidelity images, while CLIP can judge how well an image matches a text prompt. Combined, VQGAN-CLIP can take prompts from human input and iterate to generate images that fit the prompts. Thanks to the generosity of creators sharing notebooks on Google Colab, the VQGAN-CLIP technique has seen widespread circulation. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    CLIP Guided Diffusion

    A CLI tool/python module for generating images from text

    A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI. Text-to-image generation (multiple prompts with weights). Non-square generations (experimental): generate portrait or landscape images by specifying a number to offset the width and/or height. Fewer timesteps can be used over the same diffusion schedule, sacrificing accuracy/alignment for quicker runtime. Options: 25, 50, 150, 250, 500, 1000, ddim25, ddim50, ddim150, ddim250, ddim500, ddim1000 (default: 1000). Prepending a number with ddim will use the DDIM scheduler, e.g. ddim25 will use the 25-timestep DDIM scheduler. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    gpt-2-simple

    Python package to easily retrain OpenAI's GPT-2 text-generating model

    ...Additionally, this package makes text generation easier: it can generate to a file for easy curation and accepts prefixes to force the text to start with a given phrase. For finetuning, a GPU is strongly recommended, although you can generate using a CPU (albeit much more slowly). If you are training in the cloud, a Colaboratory notebook or a Google Compute Engine VM with the TensorFlow Deep Learning image is strongly recommended, as the GPT-2 model is hosted on GCP. You can use gpt-2-simple to retrain a model on a free GPU in this Colaboratory notebook, which also demos additional features of the package. ... (A hedged sketch of the basic finetune-and-generate workflow follows this entry.)
    Downloads: 2 This Week
    Last Update:
    See Project
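    A minimal finetune-and-generate sketch based on the package README; "corpus.txt" is a placeholder for your own training text, and exact argument names may differ between releases.
    ```python
    # Hedged sketch of the gpt-2-simple workflow described above.
    import os
    import gpt_2_simple as gpt2

    model_name = "124M"
    if not os.path.isdir(os.path.join("models", model_name)):
        gpt2.download_gpt2(model_name=model_name)  # fetch the pre-trained GPT-2 weights

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, "corpus.txt", model_name=model_name, steps=1000)  # retrain on your corpus

    # Generate with a prefix to force the text to start with a given phrase.
    gpt2.generate(sess, prefix="Once upon a time", length=100, temperature=0.7)
    ```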
  • 15
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

    An implementation of model & data parallel GPT-3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the HuggingFace Transformers integration. Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. (A hedged example of the HuggingFace route follows this entry.)
    Downloads: 7 This Week
    Last Update:
    See Project
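    One way to try the HuggingFace Transformers route mentioned above; the model id below is the 1.3B GPT-Neo checkpoint published by EleutherAI on the Hugging Face Hub.
    ```python
    # Text generation with a pre-trained GPT-Neo checkpoint via HuggingFace Transformers.
    from transformers import pipeline

    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
    out = generator("GPT-Neo is", do_sample=True, max_length=50)
    print(out[0]["generated_text"])
    ```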
  • 16
    TorchGAN

    Research Framework for easy and efficient training of GANs

    ...TorchGAN is a PyTorch-based framework for designing and developing Generative Adversarial Networks. This framework has been designed to provide building blocks for popular GANs and also to allow customization for cutting-edge research. TorchGAN's modular structure allows these building blocks to be combined, extended, and customized.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    hebrew-gpt_neo

    Hebrew text generation models based on EleutherAI's gpt-neo

    ...Each was trained on a TPUv3-8 which was made available to me via the TPU Research Cloud Program. The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Hands-on Unsupervised Learning

    Code for Hands-on Unsupervised Learning Using Python (O'Reilly Media)

    ...Unsupervised learning can be applied to unlabeled datasets to discover meaningful patterns buried deep in the data, patterns that may be nearly impossible for humans to uncover. Author Ankur Patel provides practical knowledge on how to apply unsupervised learning using two simple, production-ready Python frameworks: scikit-learn and TensorFlow. With the hands-on examples and code provided, you will identify difficult-to-find patterns in data.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Pipeline for training Language Models

    Pipeline for training Language Models using PyTorch.

    A pipeline for training language models using PyTorch, inspired by the Yandex Data School NLP Course (week 03: Language Modeling). The expected input is a prepared text file with space-separated words on each line.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    onnxt5

    Summarization, translation, sentiment-analysis, text-generation, etc.

    Summarization, translation, sentiment analysis, text generation and more at blazing speed using a T5 version implemented in ONNX. This package is still in the alpha stage, so some functionality, such as beam search, is still in development. The simplest way to get started with generation is to use the default pre-trained version of T5 on ONNX included in the package. Please note that the first time you call get_encoder_decoder_tokenizer, the models are downloaded, which might take a minute or two. ... (A hedged sketch of this flow follows this entry.)
    Downloads: 0 This Week
    Last Update:
    See Project
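    A sketch of the default flow, reproduced from memory of the onnxt5 README. The import paths, the order of the values returned by get_encoder_decoder_tokenizer, and the GenerativeT5 signature are assumptions and should be checked against the current documentation.
    ```python
    # Hedged sketch: the calls below are assumptions based on the onnxt5 README
    # and may not match the current release exactly.
    from onnxt5 import GenerativeT5
    from onnxt5.api import get_encoder_decoder_tokenizer

    # Downloads the default pre-trained T5 ONNX models on first call.
    decoder_sess, encoder_sess, tokenizer = get_encoder_decoder_tokenizer()
    model = GenerativeT5(encoder_sess, decoder_sess, tokenizer, onnx=True)

    prompt = "translate English to French: The weather is nice today."
    text, logits = model(prompt, max_length=32, temperature=0.0)
    print(text)
    ```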
  • 21
    GIMP ML

    AI for GNU Image Manipulation Program

    ...Additionally, image operations such as edge detection and color clustering have been added. GIMP-ML relies on standard Python packages such as NumPy, scikit-image, Pillow, PyTorch, OpenCV, and SciPy. GIMP-ML also aims to bring the benefits of deep learning networks used for computer vision tasks to routine image processing workflows.
    Downloads: 19 This Week
    Last Update:
    See Project
  • 22
    commit-autosuggestions

    A tool that automatically recommends commit messages using AI

    This is an implementation of CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model. CommitBERT was accepted at the ACL workshop NLP4Prog. Have you ever hesitated to write a commit message? Now get a commit message from artificial intelligence! CodeBERT: A Pre-Trained Model for Programming and Natural Languages introduces a model pre-trained on a combination of programming language and natural language (PL-NL).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    HyperGAN

    Composable GAN framework with api and user interface

    ...HyperGAN builds generative adversarial networks in PyTorch and makes them easy to train and share. HyperGAN is currently in pre-release and open beta. Everyone will have different goals when using HyperGAN, and we are still searching for a default cross-dataset configuration. Each of the examples supports search, and automated search can help find good configurations. If you are unsure, you can start with 2d-distribution.py. Check out random_search.py for possibilities; you'll likely want to modify it. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 24
    Aida Lib

    Aida is a language-agnostic library for text generation

    Aida is a language-agnostic library for text generation. When using Aida, you first compose a tree of operations on your text, including conditions via branches and other control flow. Later, you fill the tree with data and render the text. A basic building block is the variable class Var; use it to represent a value that you want to control later. A variable can hold numbers (e.g. float, int) or strings.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    Image Super-Resolution (ISR)

    Super-scale your images and run experiments with Residual Dense Networks

    ...This project contains Keras implementations of different Residual Dense Networks for Single Image Super-Resolution (ISR), as well as scripts to train these networks using content and adversarial loss components. Docker scripts and Google Colab notebooks are available to carry out training and prediction. We also provide scripts to facilitate training on the cloud with AWS and nvidia-docker with only a few commands. When training your own model, start with only PSNR loss (50+ epochs, depending on the dataset) and only then introduce GAN and feature losses. ... (A hedged prediction example follows this entry.)
    Downloads: 2 This Week
    Last Update:
    See Project
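    A minimal prediction sketch, assuming the RDN model and the 'psnr-small' pre-trained weights as described in the project README; "input.jpg" is a placeholder path.
    ```python
    # Hedged sketch: the RDN class and the 'psnr-small' weights name are taken
    # from memory of the ISR README and may differ between versions.
    import numpy as np
    from PIL import Image
    from ISR.models import RDN

    lr_img = np.array(Image.open("input.jpg"))  # low-resolution input as a numpy array

    rdn = RDN(weights="psnr-small")             # load pre-trained PSNR-oriented weights
    sr_img = rdn.predict(lr_img)                # super-resolved output array

    Image.fromarray(sr_img).save("output.jpg")
    ```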