Showing 21 open source projects for "ai research"

  • 1
    x-unet

    Implementation of a U-net complete with efficient attention

    Implementation of a U-net complete with efficient attention, as well as the latest research findings, for 3D data (video or CT/MRI scans).
    Downloads: 1 This Week
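    A minimal usage sketch, assuming the XUnet class and its dim / channels / dim_mults constructor arguments as described in the project README; exact names and defaults may differ by version.

```python
# Sketch of constructing the model and running a forward pass; argument names
# are assumptions taken from the README and may differ by version.
import torch
from x_unet import XUnet

unet = XUnet(
    dim = 64,                   # base feature dimension
    channels = 3,               # input / output channels
    dim_mults = (1, 2, 4, 8),   # width multipliers per resolution stage
)

img = torch.randn(1, 3, 256, 256)   # (batch, channels, height, width)
out = unet(img)                     # output keeps the input's spatial shape
```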
  • 2
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline), and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework to help enterprises overcome the challenges of building and...
    Downloads: 2 This Week
  • 3
    AudioLM - Pytorch

    Implementation of AudioLM audio generation model in Pytorch

    Implementation of AudioLM, a Language Modeling Approach to Audio Generation out of Google Research, in Pytorch. It also extends the work for conditioning with classifier-free guidance with T5, which allows one to do text-to-audio or TTS, not offered in the paper. Yes, this means VALL-E can be trained from this repository; it is essentially the same. This repository now also contains an MIT-licensed version of SoundStream. It is also compatible with EnCodec, however, be aware that it...
    Downloads: 0 This Week
  • 4
    Albumentations

    Fast image augmentation library and an easy-to-use wrapper

    Albumentations is a computer vision tool that boosts the performance of deep convolutional neural networks. Albumentations is a Python library for fast and flexible image augmentations. Albumentations efficiently implements a rich variety of image transform operations that are optimized for performance, and does so while providing a concise, yet powerful image augmentation interface for different computer vision tasks, including object classification, segmentation, and detection....
    Downloads: 0 This Week
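    A short example of typical Albumentations usage: declare a Compose pipeline once, then apply it to each image inside your data loader.

```python
# Build an augmentation pipeline and apply it to a single HWC uint8 image.
import albumentations as A
import numpy as np

transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
    A.Rotate(limit=15, p=0.5),
])

image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)  # stand-in image
augmented = transform(image=image)["image"]                       # augmented copy
```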
  • 5
    Infinite Sides

    Infinite Craft but in Pyside6 and Python with local LLM

    Infinite Craft, but in Pyside6 and Python, with a local LLM (llama2 and others) via Ollama. It also lets you create your own crafting game based on any topic; customize the game any way you like in the settings.
    Downloads: 1 This Week
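    An illustrative sketch (not the project's own code) of the general approach: asking a local LLM served by Ollama to combine two crafting elements. The helper function and prompt are hypothetical; only Ollama's default local /api/generate endpoint is assumed.

```python
# Ask a locally running Ollama model what two crafting elements combine into.
import json
import urllib.request

def combine(a: str, b: str, model: str = "llama2") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": f"In a crafting game, combining {a} and {b} makes what single item? "
                  "Answer with one or two words only.",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",   # Ollama's default local API
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

print(combine("water", "fire"))   # e.g. "Steam"
```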
  • 6
    scraper-with-chatgpt
    A powerful data scraping tool that helps you extract information from various online sources. Easily collect data from Google SERP, Maps, Shopify, Zillow, and more. With a user-friendly interface, you can scrape and save data in JSON or Excel formats. Unlock insights from the web effortlessly with the scrape-it.cloud API.
    Downloads: 0 This Week
  • 7
    DALL-E in Pytorch

    Implementation / replication of DALL-E, OpenAI's Text to Image

    Implementation / replication of DALL-E (paper), OpenAI's Text to Image Transformer, in Pytorch. It will also contain CLIP for ranking the generations. Kobiso, a research engineer from Naver, has trained on the CUB200 dataset here, using full and deepspeed sparse attention. You can also skip the training of the VAE altogether, using the pretrained model released by OpenAI! The wrapper class should take care of downloading and caching the model for you auto-magically. You can also use the...
    Downloads: 0 This Week
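    A hedged sketch of wiring the discrete VAE and DALL-E modules together for a single training step, based on my reading of the dalle-pytorch README; the hyperparameters are illustrative and constructor arguments may differ by version.

```python
# Construct the discrete VAE and the DALL-E transformer, then run one loss step
# on toy data. Values are illustrative, not recommended settings.
import torch
from dalle_pytorch import DiscreteVAE, DALLE

vae = DiscreteVAE(
    image_size = 256,
    num_layers = 3,
    num_tokens = 8192,      # visual codebook size
    codebook_dim = 512,
    hidden_dim = 64,
)

dalle = DALLE(
    dim = 1024,
    vae = vae,              # the (ideally pre-trained) discrete VAE from above
    num_text_tokens = 10000,
    text_seq_len = 256,
    depth = 12,
    heads = 16,
)

text = torch.randint(0, 10000, (2, 256))   # toy tokenized captions
images = torch.randn(2, 3, 256, 256)       # toy image batch

loss = dalle(text, images, return_loss = True)
loss.backward()
```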
  • 8
    revChatGPT

    This app allows you to chat with ChatGPT using reverse-engineered API

    This app allows you to chat with ChatGPT using a reverse-engineered API library called revChatGPT. Replies from the Chatbot are streamed back to the user in real-time, which gives the user an experience similar to how ChatGPT streams back its answers. To get started with the app, you'll need to create an account on OpenAI's ChatGPT and save your credentials. You can choose from three authentication methods: Email/Password, Session token, or Access token. Once you have your credentials, you...
    Downloads: 0 This Week
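    A hedged sketch of the streaming chat loop, assuming the library's V1 interface; the module path, config keys, and response fields may differ between versions.

```python
# Stream a reply from ChatGPT via the reverse-engineered revChatGPT library.
# The access-token config key and the "message" field are assumptions based on
# the V1 interface and may change between versions.
from revChatGPT.V1 import Chatbot

chatbot = Chatbot(config={"access_token": "<your ChatGPT access token>"})

printed = ""
for data in chatbot.ask("Hello, how are you?"):
    # Each yielded item carries the full message so far; print only the new part.
    message = data["message"]
    print(message[len(printed):], end="", flush=True)
    printed = message
print()
```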
  • 9
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we...
    Downloads: 3 This Week
  • 10
    audio-diffusion-pytorch

    Audio generation using diffusion models, in PyTorch

    A fully featured audio diffusion library, for PyTorch. Includes models for unconditional audio generation, text-conditional audio generation, diffusion autoencoding, upsampling, and vocoding. The provided models are waveform-based; however, the U-Net (built using a-unet), DiffusionModel, diffusion method, and diffusion samplers are all generic to any dimension and highly customizable to work on other formats. Note: no pre-trained models are provided here, this library is meant for research...
    Downloads: 0 This Week
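    A hedged sketch following the project README as I recall it; the U-Net configuration values are illustrative, and argument names may differ by version.

```python
# Assemble a waveform diffusion model, compute one training loss, and sample.
import torch
from audio_diffusion_pytorch import DiffusionModel, UNetV0, VDiffusion, VSampler

model = DiffusionModel(
    net_t=UNetV0,                 # U-Net backbone (built on a-unet)
    in_channels=2,                # stereo waveform
    channels=[8, 32, 64, 128, 256, 512, 512, 1024, 1024],
    factors=[1, 4, 4, 4, 2, 2, 2, 2, 2],
    items=[1, 2, 2, 2, 2, 2, 2, 4, 4],
    attentions=[0, 0, 0, 0, 0, 1, 1, 1, 1],
    attention_heads=8,
    attention_features=64,
    diffusion_t=VDiffusion,       # diffusion objective
    sampler_t=VSampler,           # sampler used at inference time
)

audio = torch.randn(1, 2, 2**18)            # (batch, channels, samples)
loss = model(audio)                         # training step: diffusion loss
loss.backward()

noise = torch.randn(1, 2, 2**18)
sample = model.sample(noise, num_steps=10)  # generate audio from noise
```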
  • 11
    LaMDA-pytorch

    Open-source pre-training implementation of Google's LaMDA in PyTorch

    Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository will cover the 2B parameter implementation of the pre-training architecture as that is likely what most can afford to train. You can review Google's latest blog post from 2022 which details LaMDA here. You can also view their previous blog post from 2021 on the model.
    Downloads: 0 This Week
  • 12
    ruDALL-E

    Generate images from texts. In Russian

    ...You can even combine different languages within a single query. This neural network was developed and trained by Sber AI researchers in close collaboration with scientists from the Artificial Intelligence Research Institute, using joint datasets from Sber AI and SberDevices. It is a Russian text-to-image model that generates images from text. The architecture is the same as ruDALL-E XL, but the new version has even more parameters.
    Downloads: 0 This Week
  • 13
    PaddleGAN

    PaddlePaddle GAN library, including lots of interesting applications

    ...GAN (Generative Adversarial Network) was praised by Yann LeCun, "the father of convolutional networks," as one of the most interesting ideas in the field of computer science in the past decade, and it is one of the research areas in deep learning that AI researchers are most focused on.
    Downloads: 1 This Week
  • 14
    TorchGAN

    Research Framework for easy and efficient training of GANs

    The torchgan package consists of various generative adversarial networks and utilities that have been found useful in training them. This package provides an easy-to-use API which can be used to train popular GANs as well as develop newer variants. The core idea behind this project is to facilitate easy and rapid generative adversarial model research. TorchGAN is a Pytorch-based framework for designing and developing Generative Adversarial Networks. This framework has been designed to...
    Downloads: 0 This Week
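    A hedged sketch using two of TorchGAN's bundled DCGAN models; the constructor argument names follow the documentation as I recall it and should be treated as assumptions.

```python
# Instantiate a DCGAN generator/discriminator pair and run a forward pass.
# Argument names (encoding_dims, out_size, in_size, ...) are assumptions.
import torch
from torchgan.models import DCGANGenerator, DCGANDiscriminator

generator = DCGANGenerator(encoding_dims=100, out_size=32, out_channels=3)
discriminator = DCGANDiscriminator(in_size=32, in_channels=3)

z = torch.randn(8, 100)        # latent codes
fake = generator(z)            # generated image batch
score = discriminator(fake)    # discriminator outputs for the fake batch
```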
  • 15
    hebrew-gpt_neo

    Hebrew text generation models based on EleutherAI's gpt-neo

    Hebrew text generation models based on EleutherAI's gpt-neo. Each was trained on a TPUv3-8, which was made available to me via the TPU Research Cloud Program. Training data came from the Open Super-large Crawled ALMAnaCH coRpus (OSCAR), a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
    Downloads: 0 This Week
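    A hedged sketch of loading one of the checkpoints through Hugging Face transformers; the model id below is an assumption, so check the repository for the actual published names.

```python
# Generate Hebrew text with a gpt-neo checkpoint via the transformers pipeline.
# The model id is an assumed example, not confirmed by the listing above.
from transformers import pipeline

generator = pipeline("text-generation", model="Norod78/hebrew-gpt_neo-small")

prompt = "שלום, קוראים לי"   # "Hello, my name is"
print(generator(prompt, max_length=50, do_sample=True)[0]["generated_text"])
```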
  • 16
    Hands-on Unsupervised Learning

    Code for Hands-on Unsupervised Learning Using Python (O'Reilly Media)

    This repo contains the code for the O'Reilly Media, Inc. book "Hands-on Unsupervised Learning Using Python: How to Build Applied Machine Learning Solutions from Unlabeled Data" by Ankur A. Patel. Many industry experts consider unsupervised learning the next frontier in artificial intelligence, one that may hold the key to the holy grail of AI research, so-called artificial general intelligence. Since the majority of the world's data is unlabeled, conventional supervised learning cannot be applied; this is where unsupervised learning comes in. Unsupervised learning can be applied to unlabeled datasets to discover meaningful patterns buried deep in the data, patterns that may be nearly impossible for humans to uncover. ...
    Downloads: 5 This Week
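    A tiny illustration of the idea (not code from the book): clustering an unlabeled dataset with scikit-learn to surface structure without any labels.

```python
# Fit k-means on unlabeled points and inspect the discovered groups.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # labels discarded
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])        # cluster assignments found without labels
print(kmeans.cluster_centers_)    # centers of the discovered groups
```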
  • 17
    HyperGAN

    Composable GAN framework with api and user interface

    A composable GAN built for developers, researchers, and artists. HyperGAN builds generative adversarial networks in PyTorch and makes them easy to train and share. HyperGAN is currently in pre-release and open beta, and everyone will have different goals when using it. We are still searching for a default cross-data-set configuration. Each of the examples supports search, and automated search can help find good configurations. If you are unsure, you can start with...
    Downloads: 1 This Week
  • 18
    GPT2 for Multiple Languages

    GPT2 for Multiple Languages, including pretrained models

    With just 2 clicks (not including the Colab auth process), the 1.5B pretrained Chinese model demo is ready to go. The contents in this repository are for academic research purposes, and we do not provide any conclusive remarks. Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Simplified GPT2 training scripts (based on Grover, supporting TPUs). Ported BERT tokenizer, multilingual corpus compatible. 1.5B GPT2 pretrained Chinese model (~15G corpus, 100k steps)....
    Downloads: 0 This Week
  • 19
    NiftyNet

    An open-source convolutional neural networks platform for research

    An open-source convolutional neural networks platform for medical image analysis and image-guided therapy. NiftyNet is a TensorFlow-based open-source convolutional neural networks (CNNs) platform for research in medical image analysis and image-guided therapy. NiftyNet’s modular structure is designed for sharing networks and pre-trained models. Using this modular structure you can get started with established pre-trained networks using built-in tools. Adapt existing networks to your imaging...
    Downloads: 0 This Week
  • 20
    Finetune Transformer LM

    Code for "Improving Language Understanding by Generative Pre-Training"

    finetune-transformer-lm is a research codebase that accompanies the paper “Improving Language Understanding by Generative Pre-Training,” providing a minimal implementation focused on fine-tuning a transformer language model for evaluation tasks. The repository centers on reproducing the ROCStories Cloze Test result and includes a single-command training workflow to run the experiment end to end. It documents that runs are non-deterministic due to certain GPU operations and reports a median...
    Downloads: 4 This Week
  • 21
    Exposure

    Learning infinite-resolution image processing with GAN and RL

    Learning infinite-resolution image processing with GAN and RL from unpaired image datasets, using a differentiable photo editing model. Published in ACM Transactions on Graphics (presented at SIGGRAPH 2018). Exposure is originally designed for RAW photos, and assumes 12+ bit color depth and a linear "RGB" color space (or whatever we get after demosaicing). JPEG and PNG images typically have only 8-bit color depth (except 16-bit PNGs), and the lack of information (dynamic range/activation resolution) may...
    Downloads: 0 This Week