Showing 181 open source projects for "open"

  • 1
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project has open-sourced the Chinese LLaMA model and the instruction-fine-tuned Alpaca large model to further promote open research on large models in the Chinese NLP community. Based on the original LLaMA, these models expand the Chinese vocabulary and use Chinese data for secondary pre-training, which further improves the model's basic semantic understanding of Chinese.
    Downloads: 0 This Week
    See Project
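    A minimal sketch of the vocabulary-expansion step described above, using the Hugging Face transformers API; the checkpoint path and token list are placeholders, and the real project merges a full Chinese SentencePiece vocabulary rather than a handful of tokens.

        from transformers import LlamaForCausalLM, LlamaTokenizer

        # Placeholder path; the project releases its own expanded checkpoints.
        tokenizer = LlamaTokenizer.from_pretrained("path/to/original-llama")
        model = LlamaForCausalLM.from_pretrained("path/to/original-llama")

        # Illustrative tokens only; the project adds tens of thousands.
        num_added = tokenizer.add_tokens(["你好", "世界"])

        # New embedding rows start random and are learned during the
        # secondary pre-training pass on Chinese data.
        model.resize_token_embeddings(len(tokenizer))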
  • 2
    PRM800K

    800,000 step-level correctness labels on LLM solutions to MATH problems

    PRM800K is a process supervision dataset accompanying the paper Let’s Verify Step by Step, providing 800,000 step-level correctness labels on model-generated solutions to problems from the MATH dataset. The repository releases the raw labels and the labeler instructions used in two project phases, enabling researchers to study how human raters graded intermediate reasoning. Data are stored as newline-delimited JSONL files tracked with Git LFS, where each line is a full solution sample that...
    Downloads: 6 This Week
    See Project
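    A minimal sketch of consuming the newline-delimited JSONL format described above; the filename and field access are illustrative assumptions, not the dataset's documented schema.

        import json

        # One full solution sample per line (files are tracked with Git LFS).
        with open("prm800k_train.jsonl") as f:  # hypothetical filename
            for line in f:
                sample = json.loads(line)
                # Inspect the record to find the per-step label fields:
                print(sorted(sample.keys()))
                break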
  • 3
    Metaseq

    Repo for external large-scale work

    ...Built on top of PyTorch, it provides distributed training, model sharding, mixed-precision computation, and memory-efficient checkpointing to support models with hundreds of billions of parameters. The framework was used internally at Meta to train models like OPT (Open Pre-trained Transformer) and serves as a reference implementation for scaling transformer architectures efficiently across GPUs and nodes. It supports both pretraining and fine-tuning workflows with data pipelines for text, multilingual corpora, and custom tokenization schemes. Metaseq also includes APIs for evaluation, generation, and model serving, enabling seamless transitions from training to inference.
    Downloads: 0 This Week
    See Project
  • 4
    DiT (Diffusion Transformers)

    Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

    DiT (Diffusion Transformer) is a powerful architecture that applies transformer-based modeling directly to diffusion generative processes for high-quality image synthesis. Unlike CNN-based diffusion models, DiT represents the diffusion process in the latent space and processes image tokens through transformer blocks with learned positional encodings, offering scalability and superior sample quality. The model architecture parallels large language models but for image tokens—each block...
    Downloads: 1 This Week
    See Project
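    A rough sketch of the idea described above: flatten a latent into patch tokens, add positional encodings, and run transformer blocks. Shapes are illustrative, and the official model adds timestep/class conditioning (adaLN) that is omitted here.

        import torch
        import torch.nn as nn

        B, C, H, W, p, d = 8, 4, 32, 32, 2, 384    # latent batch, patch size, width
        latent = torch.randn(B, C, H, W)           # VAE latent, not raw pixels

        patchify = nn.Conv2d(C, d, kernel_size=p, stride=p)
        tokens = patchify(latent).flatten(2).transpose(1, 2)   # (B, N, d) image tokens
        pos = torch.zeros(1, tokens.shape[1], d)               # learned in practice

        block = nn.TransformerEncoderLayer(d_model=d, nhead=6, batch_first=True)
        h = block(tokens + pos)                                # one of many DiT blocks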
  • 5
    VALL-E

    PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)

    We introduce a language modeling approach for text to speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech which is hundreds of times larger than existing systems....
    Downloads: 2 This Week
    See Project
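    A toy sketch of the conditional-language-modeling framing above: text tokens and discrete codec codes share one sequence, and the model learns to predict the next code. Vocabulary sizes and the single layer are illustrative, and the causal mask is omitted for brevity.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        text_vocab, code_vocab, d = 256, 1024, 256
        embed = nn.Embedding(text_vocab + code_vocab, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        head = nn.Linear(d, code_vocab)

        text = torch.randint(0, text_vocab, (1, 20))                 # phoneme ids
        codes = torch.randint(0, code_vocab, (1, 50)) + text_vocab   # codec codes, offset
        seq = torch.cat([text, codes], dim=1)

        h = layer(embed(seq))                      # causal masking omitted here
        logits = head(h[:, -1])                    # distribution over the next code
        loss = F.cross_entropy(logits, torch.randint(0, code_vocab, (1,)))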
  • 6
    Stable-Dreamfusion

    Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion

    A PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model. This project is a work in progress and differs from the paper in many ways. The current generation quality cannot match the results from the original paper, and many prompts still fail badly! Since the Imagen model is not publicly available, we use Stable Diffusion to replace it (implementation from diffusers). Unlike Imagen, Stable Diffusion is a latent diffusion...
    Downloads: 0 This Week
    See Project
  • 7
    ToMe (Token Merging)

    A method to increase the speed and lower the memory footprint of existing vision transformers

    ToMe (Token Merging) is a PyTorch-based optimization framework designed to significantly accelerate Vision Transformer (ViT) architectures without retraining. Developed by researchers at Facebook (Meta AI), ToMe introduces an efficient technique that merges similar tokens within transformer layers, reducing redundant computation while preserving model accuracy. This approach differs from token pruning, which removes background tokens entirely; instead, ToMe merges tokens based on feature...
    Downloads: 0 This Week
    See Project
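    An illustrative sketch of the core merging idea: repeatedly average the most cosine-similar token pair. ToMe's actual bipartite soft matching is more efficient and more careful; this only conveys the intuition.

        import torch
        import torch.nn.functional as F

        def merge_most_similar(x: torch.Tensor, r: int) -> torch.Tensor:
            """x: (N, d) tokens; merge r pairs, returning (N - r, d) tokens."""
            for _ in range(r):
                xn = F.normalize(x, dim=-1)
                sim = xn @ xn.T                                # pairwise cosine similarity
                sim.fill_diagonal_(-float("inf"))
                i, j = divmod(int(sim.argmax()), sim.shape[1])
                x[i] = (x[i] + x[j]) / 2                       # merge token j into i
                x = x[torch.arange(x.shape[0]) != j]           # drop token j
            return x

        tokens = torch.randn(197, 384)                         # e.g. ViT-S tokens
        print(merge_most_similar(tokens, r=16).shape)          # torch.Size([181, 384])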
  • 8
    pyllama

    LLaMA: Open and Efficient Foundation Language Models

    📢 pyllama is a hacked version of LLaMA, based on Facebook's original implementation but more convenient to run on a single consumer-grade GPU.
    Downloads: 0 This Week
    See Project
  • 9
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository records EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we...
    Downloads: 2 This Week
    See Project
  • 10
    ConvNeXt V2

    Code release for ConvNeXt V2 model

    ConvNeXt V2 is an evolution of the ConvNeXt architecture that co-designs convolutional networks alongside self-supervised learning. The V2 version introduces a fully convolutional masked autoencoder (FCMAE) framework where parts of the image are masked and the network reconstructs the missing content, marrying convolutional inductive bias with powerful pretraining. A key innovation is a new Global Response Normalization (GRN) layer added to the ConvNeXt backbone, which enhances feature...
    Downloads: 2 This Week
    See Project
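    A sketch of the Global Response Normalization layer mentioned above, paraphrasing the formula in the ConvNeXt V2 paper for channels-last tensors of shape (N, H, W, C); treat it as an illustration, not the official code.

        import torch
        import torch.nn as nn

        class GRN(nn.Module):
            def __init__(self, dim: int, eps: float = 1e-6):
                super().__init__()
                self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
                self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
                self.eps = eps

            def forward(self, x):                                     # x: (N, H, W, C)
                gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)     # global response per channel
                nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)  # divisive normalization
                return self.gamma * (x * nx) + self.beta + x          # calibrate + residual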
  • 11
    minGPT

    A minimal PyTorch re-implementation of the OpenAI GPT

    minGPT is a minimalist, educational re-implementation of the GPT (Generative Pretrained Transformer) architecture built in PyTorch, designed by Andrej Karpathy to expose the core structure of a transformer-based language model in as few lines of code as possible. It strips away extraneous bells and whistles, aiming to show how a sequence of token indices is fed into a stack of transformer blocks and then decoded into the next token probabilities, with both training and inference supported....
    Downloads: 2 This Week
    See Project
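    A condensed sketch of the data flow minGPT is built to expose: token indices, embeddings, a stack of causally masked transformer blocks, then logits over the next token. Sizes here are arbitrary stand-ins, not the repo's configs.

        import torch
        import torch.nn as nn

        vocab, block_size, d = 50257, 128, 192
        tok_emb = nn.Embedding(vocab, d)
        pos_emb = torch.zeros(1, block_size, d)            # learned in the real model
        blocks = [nn.TransformerEncoderLayer(d_model=d, nhead=6, batch_first=True)
                  for _ in range(4)]
        head = nn.Linear(d, vocab, bias=False)

        idx = torch.randint(0, vocab, (2, 16))             # (batch, sequence) token indices
        x = tok_emb(idx) + pos_emb[:, : idx.shape[1]]
        mask = nn.Transformer.generate_square_subsequent_mask(idx.shape[1])
        for block in blocks:
            x = block(x, src_mask=mask)                    # causal self-attention
        logits = head(x)                                   # (2, 16, vocab) next-token scores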
  • 12
    Menagerie

    A collection of high-quality models for the MuJoCo physics engine

    MuJoCo Menagerie, developed by Google DeepMind, is a curated collection of high-quality simulation models designed for use with the MuJoCo physics engine. It serves as a comprehensive library of accurate and ready-to-use robotic, biomechanical, and mechanical models, ensuring users can perform reliable simulations without having to build or tune models from scratch. The repository aims to improve reproducibility and quality across robotics research by providing verified models that adhere to...
    Downloads: 6 This Week
    See Project
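    A minimal sketch of stepping a Menagerie model with the mujoco Python bindings; the XML path is a placeholder for whichever model directory you clone from the repository.

        import mujoco

        model = mujoco.MjModel.from_xml_path("mujoco_menagerie/unitree_go1/scene.xml")
        data = mujoco.MjData(model)
        for _ in range(1000):
            mujoco.mj_step(model, data)   # advance physics by one timestep
        print(data.time)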
  • 13
    Video Pre-Training

    Learning to Act by Watching Unlabeled Online Videos

    The Video PreTraining (VPT) repository provides code and model artifacts for a project where agents learn to act by watching human gameplay videos—specifically, gameplay of Minecraft—using behavioral cloning. The idea is to learn general priors of control from large-scale, unlabeled video data, and then optionally fine-tune those priors for more goal-directed behavior via environment interaction. The repository contains demonstration models of different widths, fine-tuned variants (e.g. for...
    Downloads: 0 This Week
    See Project
  • 14
    LaMDA-pytorch

    Open-source pre-training implementation of Google's LaMDA in PyTorch

    Open-source pre-training implementation of Google's LaMDA research paper in PyTorch. The totally not sentient AI. This repository will cover the 2B-parameter implementation of the pre-training architecture, as that is likely what most can afford to train. You can review Google's 2022 blog post detailing LaMDA, as well as their earlier 2021 post on the model.
    Downloads: 0 This Week
    See Project
  • 15
    Mask2Former

    Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"

    Mask2Former is a unified segmentation architecture that handles semantic, instance, and panoptic segmentation with one model and one training recipe. Its core idea is to cast segmentation as mask classification: a transformer decoder predicts a set of mask queries, each with an associated class score, eliminating the need for task-specific heads. A pixel decoder fuses multi-scale features and feeds masked attention in the transformer so each query focuses computation on its current spatial...
    Downloads: 3 This Week
    See Project
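    A sketch of the mask-classification formulation above: each query embedding yields a class score and, dotted against per-pixel features, a mask. Dimensions are illustrative; the real model adds masked attention and a multi-scale pixel decoder.

        import torch
        import torch.nn as nn

        d, num_queries, num_classes, H, W = 256, 100, 80, 64, 64
        queries = torch.randn(1, num_queries, d)     # transformer decoder output
        pixel_feats = torch.randn(1, d, H, W)        # pixel decoder output

        class_logits = nn.Linear(d, num_classes + 1)(queries)   # +1 "no object" class
        mask_logits = torch.einsum("bqd,bdhw->bqhw", queries, pixel_feats)
        print(class_logits.shape, mask_logits.shape)             # (1,100,81) (1,100,64,64)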
  • 16
    MAE (Masked Autoencoders)

    PyTorch implementation of MAE

    MAE (Masked Autoencoders) is a self-supervised learning framework for visual representation learning using masked image modeling. It trains a Vision Transformer (ViT) by randomly masking a high percentage of image patches (typically 75%) and reconstructing the missing content from the remaining visible patches. This forces the model to learn semantic structure and global context without supervision. The encoder processes only the visible patches, while a lightweight decoder reconstructs the...
    Downloads: 0 This Week
    See Project
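    A sketch of the random-masking step described above, using a shuffle-and-slice trick: keep a random 25% of patch tokens for the encoder. Shapes are illustrative.

        import torch

        def random_mask(tokens: torch.Tensor, mask_ratio: float = 0.75):
            B, N, d = tokens.shape
            keep = int(N * (1 - mask_ratio))
            ids = torch.rand(B, N).argsort(dim=1)[:, :keep]     # random subset per image
            visible = torch.gather(tokens, 1, ids.unsqueeze(-1).expand(-1, -1, d))
            return visible, ids                                 # encoder sees only these

        patches = torch.randn(8, 196, 768)                      # 14x14 ViT patches
        visible, ids = random_mask(patches)
        print(visible.shape)                                    # torch.Size([8, 49, 768])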
  • 17
    GLIDE (Text2Im)

    GLIDE: a diffusion-based text-conditional image synthesis model

    glide-text2im is an open source implementation of OpenAI’s GLIDE model, which generates photorealistic images from natural language text prompts. It demonstrates how diffusion-based generative models can be conditioned on text to produce highly detailed and coherent visual outputs. The repository provides both model code and pretrained checkpoints, making it possible for researchers and developers to experiment with text-to-image synthesis.
    Downloads: 2 This Week
    See Project
  • 18
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

    An implementation of model & data parallel GPT3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the HuggingFace Transformer integration. Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. NB, while neo can technically run a training step at 200B+ parameters, it is very...
    Downloads: 7 This Week
    See Project
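    The HuggingFace route the description recommends looks roughly like this, assuming the published EleutherAI/gpt-neo-1.3B checkpoint:

        from transformers import pipeline

        generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
        print(generator("EleutherAI is", max_new_tokens=40)[0]["generated_text"])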
  • 19
    Denoiser

    Real Time Speech Enhancement in the Waveform Domain (Interspeech 2020)

    Denoiser is a real-time speech enhancement model operating directly on raw waveforms, designed to clean noisy audio while running efficiently on CPU. It uses a causal encoder-decoder architecture with skip connections, optimized with losses defined both in the time domain and frequency domain to better suppress noise while preserving speech. Unlike models that operate on spectrograms alone, this design enables lower latency and coherent waveform output. The implementation includes data...
    Downloads: 0 This Week
    See Project
  • 20
    Image GPT

    Large-scale autoregressive pixel model for image generation by OpenAI

    Image-GPT is the official research code and models from OpenAI’s paper Generative Pretraining from Pixels. The project adapts GPT-2 to the image domain, showing that the same transformer architecture can model sequences of pixels without altering its fundamental structure. It provides scripts to download pretrained checkpoints of different model sizes (small, medium, large) trained on large-scale datasets and includes utilities for handling color quantization with a 9-bit palette....
    Downloads: 1 This Week
    See Project
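    A sketch of the 9-bit color quantization mentioned above: each RGB pixel maps to the nearest of 512 palette entries, turning an image into a sequence over a 512-symbol vocabulary. A random palette stands in for the clustered centroids the project ships.

        import numpy as np

        palette = np.random.rand(512, 3)        # stand-in for learned centroids
        image = np.random.rand(32, 32, 3)       # RGB pixels in [0, 1]

        flat = image.reshape(-1, 3)             # (1024, 3) pixel sequence
        dists = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
        tokens = dists.argmin(axis=1)           # (1024,) ids in [0, 512)
        print(tokens[:8])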
  • 21
    Multi-Agent Emergence Environments

    Environment generation code for the paper "Emergent Tool Use"

    multi-agent-emergence-environments is an open source research environment framework developed by OpenAI for the study of emergent behaviors in multi-agent systems. It was designed for the experiments described in the paper and blog post “Emergent Tool Use from Multi-Agent Autocurricula”, which investigated how complex cooperative and competitive behaviors can evolve through self-play.
    Downloads: 1 This Week
    See Project
  • 22
    PyTorch GAN Zoo

    A mix of GAN implementations including progressive growing

    PyTorch GAN Zoo is a comprehensive open research toolbox designed for experimenting with and developing Generative Adversarial Networks (GANs) using PyTorch. The project provides modular implementations of popular GAN architectures, including Progressive Growing of GANs (PGAN), DCGAN, and an experimental StyleGAN version. It is built to support both researchers and developers who want to train, evaluate, and extend GANs efficiently across diverse datasets such as CelebA-HQ, FashionGen, DTD, and CIFAR-10. ...
    Downloads: 0 This Week
    See Project
  • 23
    DeepSDF

    Learning Continuous Signed Distance Functions for Shape Representation

    DeepSDF is a deep learning framework for continuous 3D shape representation using Signed Distance Functions (SDFs), as presented in the CVPR 2019 paper DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation by Park et al. The framework learns a continuous implicit function that maps 3D coordinates to their corresponding signed distances from object surfaces, allowing compact, high-fidelity shape modeling. Unlike traditional discrete voxel grids or meshes, DeepSDF...
    Downloads: 1 This Week
    See Project
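    A sketch of the learned implicit function described above: an MLP mapping a latent shape code plus a 3D coordinate to a signed distance (the paper's network is deeper, 8 layers of width 512).

        import torch
        import torch.nn as nn

        latent_dim = 256
        sdf_net = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1), nn.Tanh(),            # signed distance in [-1, 1]
        )

        code = torch.randn(1, latent_dim)            # one shape's latent code
        xyz = torch.rand(1024, 3) * 2 - 1            # query points in [-1, 1]^3
        sdf = sdf_net(torch.cat([code.expand(1024, -1), xyz], dim=-1))
        near_surface = xyz[sdf.squeeze(-1).abs() < 0.01]   # zero level set ~ surface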
  • 24
    PyTorch-BigGraph

    Generate embeddings from large-scale graph-structured data

    PyTorch-BigGraph (PBG) is a system for learning embeddings on massive graphs—think billions of nodes and edges—using partitioning and distributed training to keep memory and compute tractable. It shards entities into partitions and buckets edges so that each training pass only touches a small slice of parameters, which drastically reduces peak RAM and enables horizontal scaling across machines. PBG supports multi-relation graphs (knowledge graphs) with relation-specific scoring functions,...
    Downloads: 0 This Week
    See Project
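    A toy illustration of the partition-and-bucket scheme described above: hash nodes into partitions, then group edges by (source partition, destination partition) so a training pass only needs two partitions' parameters in memory at once. Names and the partition count are placeholders.

        from collections import defaultdict

        NUM_PARTITIONS = 4
        part = lambda node: hash(node) % NUM_PARTITIONS

        edges = [("alice", "bob"), ("bob", "carol"), ("carol", "dave")]
        buckets = defaultdict(list)
        for src, dst in edges:
            buckets[(part(src), part(dst))].append((src, dst))

        for bucket, bucket_edges in sorted(buckets.items()):
            print(bucket, bucket_edges)   # each bucket trains on one partition pair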
  • 25
    MUSE

    A library for Multilingual Unsupervised or Supervised word Embeddings

    MUSE is a framework for learning multilingual word embeddings that live in a shared space, enabling bilingual lexicon induction, cross-lingual retrieval, and zero-shot transfer. It supports both supervised alignment with seed dictionaries and unsupervised alignment that starts without parallel data by using adversarial initialization followed by Procrustes refinement. The code can align pre-trained monolingual embeddings (such as fastText) across dozens of languages and provides standardized...
    Downloads: 0 This Week
    See Project
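    A sketch of the Procrustes refinement step mentioned above: given n paired source/target word vectors as rows of X and Y, the best orthogonal map comes from an SVD. Random matrices stand in for real embeddings.

        import numpy as np

        n, d = 1000, 300
        X = np.random.randn(n, d)        # source-language vectors (seed pairs)
        Y = np.random.randn(n, d)        # target-language vectors

        U, _, Vt = np.linalg.svd(X.T @ Y)
        W = U @ Vt                       # orthogonal map minimizing ||X @ W - Y||
        aligned = X @ W                  # source vectors in the target space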