Showing 507 open source projects for "source code"

  • 1
    CPT

    CPT: A Pre-Trained Unbalanced Transformer

    A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation. We replace the old BERT vocabulary with a larger one of size 51,271 built from the training data, in which we 1) add 6,800+ missing Chinese characters (most of them traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with a ## prefix); and 3) add some English tokens to reduce OOV. For position embeddings, we extend max_position_embeddings from 512 to 1024. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 2
    UnionML

    Build and deploy machine learning microservices

    ...Using industry-standard machine learning methods, implement endpoints for fetching data, training models, and serving predictions (and much more) to write a complete ML stack in one place. Data science, ML engineering, and MLOps practitioners can all gather around UnionML apps as a way of defining a single source of truth about your ML system’s behavior. This helps you maintain consistent code across your ML stack, from training to prediction logic. A hedged sketch of this decorator-based workflow follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
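    Below is a minimal, hedged sketch of that decorator-based workflow, assuming UnionML's Dataset/Model API; the dataset, names, and hyperparameters are illustrative stand-ins rather than anything taken from this listing.

    import pandas as pd
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from unionml import Dataset, Model

    # Declare the dataset and bind an estimator class to it.
    dataset = Dataset(name="digits_dataset", test_size=0.2, shuffle=True, targets=["target"])
    model = Model(name="digits_classifier", init=LogisticRegression, dataset=dataset)

    @dataset.reader
    def reader() -> pd.DataFrame:
        # "Fetching data": a toy scikit-learn dataset stands in for a real source.
        return load_digits(as_frame=True).frame

    @model.trainer
    def trainer(estimator: LogisticRegression, features: pd.DataFrame, target: pd.DataFrame) -> LogisticRegression:
        return estimator.fit(features, target.squeeze())

    @model.predictor
    def predictor(estimator: LogisticRegression, features: pd.DataFrame) -> list:
        return [float(x) for x in estimator.predict(features)]

    @model.evaluator
    def evaluator(estimator: LogisticRegression, features: pd.DataFrame, target: pd.DataFrame) -> float:
        return float(accuracy_score(target.squeeze(), predictor(estimator, features)))

    if __name__ == "__main__":
        trained_model, metrics = model.train(hyperparameters={"C": 1.0, "max_iter": 5000})
        print(metrics)

    These few functions are the "single source of truth" the description refers to: the same definitions back training, batch prediction, and serving.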
  • 3
    d2l-zh

    Chinese-language edition of Dive into Deep Learning

    d2l‑zh is the Chinese-language edition of Dive into Deep Learning, an interactive, open‑source deep learning textbook that combines code, math, and explanatory text. It features runnable Jupyter notebooks compatible with multiple frameworks (e.g., PyTorch, MXNet, TensorFlow), comprehensive theoretical analysis, and exercises. It is widely adopted in over 70 countries and used by more than 500 universities for teaching deep learning.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    CleanRL

    High-quality single file implementation of Deep Reinforcement Learning

    CleanRL is a Deep Reinforcement Learning library that provides high-quality single-file implementations with research-friendly features. The implementations are clean and simple, yet we can scale them to run thousands of experiments using AWS Batch. CleanRL is not a modular library and is therefore not meant to be imported. At the cost of duplicate code, we make all implementation details of a DRL algorithm variant easy to understand, so CleanRL comes with its own pros and cons. You should...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 5
    Minimal text diffusion

    A minimal implementation of diffusion models for text generation

    A minimal implementation of diffusion models for text: it learns a diffusion model of a given text corpus and can generate text samples from the learned model. The main idea was to retain just enough code to allow training a simple diffusion model and generating samples, remove image-related terms, and make it easier to use. To train a model, run scripts/train.sh. By default, this trains a model on the simple corpus; however, you can change this to any text file using the --train_data...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    ostRAT

    OpenSourceTelegramRAT - Remote PC access via Telegram Bot.

    ostRAT is free and open source computer remote-control software licensed under the GPLv3. It works via a Telegram bot and offers many functions, for example: Screenshot (sends a screenshot), Off (turns off the computer), Url (opens the entered link), Write (sends your text to the computer), Move (moves the mouse to the given x and y coordinates), and more. WARNING: using the bot is recommended only on your own device. Failure to comply with this recommendation may result in criminal liability.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    PyTorch Transfer-Learning-Library

    Transfer Learning Library for Domain Adaptation, Task Adaptation, etc.

    TLlib is an open-source and well-documented library for Transfer Learning. It is based on pure PyTorch with high performance and friendly API. Our code is pythonic, and the design is consistent with torchvision. You can easily develop new algorithms or readily apply existing algorithms. We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    Guided Diffusion

    Codebase for "Diffusion Models Beat GANs on Image Synthesis"

    The guided-diffusion repository is centered on diffusion models for image synthesis, with a focus on classifier guidance and improvements over earlier diffusion frameworks. It is derived from OpenAI’s improved-diffusion work, enhanced to include guided generation where a classifier (or other guidance mechanism) can steer sampling toward desired classes or attributes. The code provides model definitions (UNet, diffusion schedules), sampling and training scripts, and utilities for guidance and...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 9
    Video Pre-Training

    Learning to Act by Watching Unlabeled Online Videos

    The Video PreTraining (VPT) repository provides code and model artifacts for a project where agents learn to act by watching human gameplay videos—specifically, gameplay of Minecraft—using behavioral cloning. The idea is to learn general priors of control from large-scale, unlabeled video data, and then optionally fine-tune those priors for more goal-directed behavior via environment interaction. The repository contains demonstration models of different widths, fine-tuned variants (e.g. for...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    ConvNeXt

    Code release for ConvNeXt model

    ConvNeXt is a modernized convolutional neural network (CNN) architecture designed to rival Vision Transformers (ViTs) in accuracy and scalability while retaining the simplicity and efficiency of CNNs. It revisits classic ResNet-style backbones through the lens of transformer design trends—large kernel sizes, inverted bottlenecks, layer normalization, and GELU activations—to bridge the performance gap between convolutions and attention-based models. ConvNeXt’s clean, hierarchical structure...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    Alphafold2

    Unofficial Pytorch implementation / replication of Alphafold2

    To eventually become an unofficial working PyTorch implementation of Alphafold2, the breathtaking attention network that solved CASP14. It will be gradually implemented as more details of the architecture are released. Once this is replicated, I intend to fold all available amino acid sequences out there in silico and release it as an academic torrent, to further science. DeepMind has open sourced the official code in JAX, along with the weights! This repository will now be geared towards a...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 12
    nlpaug

    Data augmentation for NLP

    This Python library helps you augment NLP data for your machine learning projects. Visit this introduction to understand data augmentation in NLP. Augmenter is the basic element of augmentation, while Flow is a pipeline that orchestrates multiple augmenters together; see the sketch after this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
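    A short sketch of the Augmenter and Flow concepts mentioned above; the synonym augmenter assumes the NLTK WordNet corpus has been downloaded.

    import nlpaug.augmenter.word as naw
    import nlpaug.flow as naf

    text = "The quick brown fox jumps over the lazy dog."

    # A single word-level augmenter (WordNet synonym replacement).
    aug = naw.SynonymAug(aug_src="wordnet")
    print(aug.augment(text))

    # A Flow chains several augmenters into one pipeline.
    flow = naf.Sequential([
        naw.SynonymAug(aug_src="wordnet"),
        naw.RandomWordAug(action="swap"),
    ])
    print(flow.augment(text))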
  • 13
    WaveRNN

    WaveRNN Vocoder + TTS

    WaveRNN is a PyTorch implementation of DeepMind’s WaveRNN vocoder, bundled with a Tacotron-style TTS front end to form a complete text-to-speech stack. As a vocoder, WaveRNN models raw audio with a compact recurrent neural network that can generate high-quality waveforms more efficiently than many traditional autoregressive models. The repository includes scripts and code for preprocessing datasets such as LJSpeech, training Tacotron to produce mel spectrograms, training WaveRNN on those...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    AI Atelier

    A Chinese & English version of the AI art creation software, based on Disco Diffusion

    Based on Disco Diffusion, we have developed a Chinese & English version of the AI art creation software "AI Atelier". We offer both Text-To-Image models (Disco Diffusion and VQGAN+CLIP) and Text-To-Text models (GPT-J-6B and GPT-NeoX-20B) as options. The license requires making available the complete source code of licensed works and modifications, including larger works using a licensed work, under the same license; copyright and license notices must be preserved; and when a modified version is used to provide a service over a network, the complete source code of the modified version must be made available. Create 2D and 3D animations, not only still frames (from Disco Diffusion v5 and VQGAN Animations). ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks (a hedged usage sketch follows this entry). We provide reference implementations of various sequence modeling papers. Recent work by Microsoft and Google has shown that data parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data parallel workers. These ideas are encapsulated in the...
    Downloads: 0 This Week
    Last Update:
    See Project
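    As a hedged usage sketch, one of the pretrained translation models can be pulled through torch.hub; the model name and tokenizer/BPE arguments below follow the fairseq README and should be treated as assumptions about the current release (the fastBPE and sacremoses extras need to be installed).

    import torch

    # Load a pretrained WMT'19 English-German transformer via the fairseq torch.hub entry point.
    en2de = torch.hub.load(
        "pytorch/fairseq",
        "transformer.wmt19.en-de.single_model",
        tokenizer="moses",
        bpe="fastbpe",
    )
    en2de.eval()

    print(en2de.translate("Machine learning is changing how we build software."))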
  • 16
    Mask2Former

    Code release for "Masked-attention Mask Transformer

    Mask2Former is a unified segmentation architecture that handles semantic, instance, and panoptic segmentation with one model and one training recipe. Its core idea is to cast segmentation as mask classification: a transformer decoder predicts a set of mask queries, each with an associated class score, eliminating the need for task-specific heads. A pixel decoder fuses multi-scale features and feeds masked attention in the transformer so each query focuses computation on its current spatial...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 17
    Catalyst

    Accelerated deep learning R&D

    Catalyst is a PyTorch framework for accelerated Deep Learning research and development. It allows you to write compact but full-featured Deep Learning pipelines with just a few lines of code (see the sketch after this entry). With Catalyst you get a full set of features, including a training loop with metrics, model checkpointing, and more, all without the boilerplate. Catalyst is focused on reproducibility, rapid experimentation, and codebase reuse, so you can break the cycle of writing another regular train loop and make...
    Downloads: 3 This Week
    Last Update:
    See Project
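    A minimal sketch of such a compact training loop, assuming Catalyst's SupervisedRunner API; the toy tensors, keys, and hyperparameters here are illustrative.

    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader, TensorDataset
    from catalyst import dl

    # Toy classification data standing in for a real dataset.
    X, y = torch.randn(256, 16), torch.randint(0, 2, (256,))
    loaders = {
        "train": DataLoader(TensorDataset(X[:200], y[:200]), batch_size=32),
        "valid": DataLoader(TensorDataset(X[200:], y[200:]), batch_size=32),
    }

    model = nn.Linear(16, 2)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=1e-3)

    # The runner supplies the train/valid loop, metric logging, and checkpoint handling.
    runner = dl.SupervisedRunner(
        input_key="features", output_key="logits", target_key="targets", loss_key="loss"
    )
    runner.train(
        model=model,
        criterion=criterion,
        optimizer=optimizer,
        loaders=loaders,
        num_epochs=3,
        verbose=True,
    )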
  • 18
    TensorFlowOnSpark

    TensorFlowOnSpark brings TensorFlow programs to Apache Spark clusters

    By combining salient features from the TensorFlow deep learning framework with Apache Spark and Apache Hadoop, TensorFlowOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. It enables both distributed TensorFlow training and inferencing on Spark clusters, with a goal to minimize the amount of code changes required to run existing TensorFlow programs on a shared grid.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    DeeProtGO

    DeeProtGO is a deep learning model for predicting GO terms of proteins

    This project contains the source code of DeeProtGO as well as an example of its use when predicting GO terms of the biological process sub-ontology for eukaryotic proteins.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    LayoutParser

    A Unified Toolkit for Deep Learning Based Document Image Analysis

    With the help of state-of-the-art deep learning models, Layout Parser enables extracting complicated document structures with only a few lines of code (a hedged example follows this entry). The approach is also more robust and generalizable because no sophisticated hand-crafted rules are involved. Complete instructions cover installing the main Layout Parser library and auxiliary components. Learn how to load DL layout models and use them for layout detection. The full list of layout models currently available in Layout Parser....
    Downloads: 2 This Week
    Last Update:
    See Project
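    A hedged example of that few-lines workflow, assuming the Detectron2 backend (installed as an extra) and a PubLayNet model from the Layout Parser model zoo; the image path is hypothetical.

    import cv2
    import layoutparser as lp

    # Load a document page image (path is a placeholder) and convert BGR -> RGB.
    image = cv2.imread("page.png")[..., ::-1]

    # A Detectron2-based layout model pretrained on PubLayNet, via a model-zoo config path.
    model = lp.Detectron2LayoutModel(
        "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
        extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
        label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
    )
    layout = model.detect(image)

    # Keep only the detected text blocks.
    text_blocks = lp.Layout([b for b in layout if b.type == "Text"])
    print(text_blocks)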
  • 21
    RQ-Transformer

    Implementation of RQ Transformer, autoregressive image generation

    Implementation of RQ Transformer, which proposes a more efficient way of training multi-dimensional sequences autoregressively. This repository will only contain the transformer for now. You can use this vector quantization library for the residual VQ. This type of axial autoregressive transformer should be compatible with memcodes, proposed in NWT. It would likely also work well with multi-headed VQ. I also think there is something deeper going on, and have generalized this to any number of...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    EasyNLP

    EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit

    EasyNLP is an easy-to-use NLP development and application toolkit in PyTorch, first released inside Alibaba in 2021. It is built with scalable distributed training strategies and supports a comprehensive suite of NLP algorithms for various NLP applications. EasyNLP integrates knowledge distillation and few-shot learning for landing large pre-trained models, together with various popular multi-modality pre-trained models. It provides a unified framework of model training, inference, and...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    bert4keras

    Keras implement of transformers for humans

    A light reimplementation of BERT for Keras: a cleaner, lighter version of BERT in Keras. This is the Keras version of the transformer model library, re-implemented by the author and committed to combining transformers and Keras with code that is as clean as possible. The original intention of this project is convenience of modification and customization, so it may be updated frequently. It loads the pre-trained weights of BERT/RoBERTa/ALBERT for fine-tuning (see the sketch after this entry) and implements the attention mask...
    Downloads: 0 This Week
    Last Update:
    See Project
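    A sketch of loading pretrained BERT weights for feature extraction, following bert4keras's basic usage; the checkpoint paths are placeholders for a downloaded Chinese BERT release.

    import numpy as np
    from bert4keras.models import build_transformer_model
    from bert4keras.tokenizers import Tokenizer

    # Placeholder paths to a pretrained BERT checkpoint (point them at your own download).
    config_path = "bert_config.json"
    checkpoint_path = "bert_model.ckpt"
    dict_path = "vocab.txt"

    tokenizer = Tokenizer(dict_path, do_lower_case=True)
    model = build_transformer_model(config_path, checkpoint_path)  # defaults to a BERT encoder

    # Encode a sentence and extract contextual features.
    token_ids, segment_ids = tokenizer.encode("语言模型")
    features = model.predict([np.array([token_ids]), np.array([segment_ids])])
    print(features.shape)  # (1, seq_len, hidden_size)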
  • 24
    GLIDE (Text2Im)

    GLIDE: a diffusion-based text-conditional image synthesis model

    glide-text2im is an open source implementation of OpenAI’s GLIDE model, which generates photorealistic images from natural language text prompts. It demonstrates how diffusion-based generative models can be conditioned on text to produce highly detailed and coherent visual outputs. The repository provides both model code and pretrained checkpoints, making it possible for researchers and developers to experiment with text-to-image synthesis. A minimal model-loading sketch follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
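    A minimal model-loading sketch, assuming the glide_text2im helpers used in the repository's notebooks (model_and_diffusion_defaults, create_model_and_diffusion, load_checkpoint); sampling with classifier-free guidance follows those same notebooks and is omitted here.

    import torch
    from glide_text2im.download import load_checkpoint
    from glide_text2im.model_creation import (
        create_model_and_diffusion,
        model_and_diffusion_defaults,
    )

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Build the base 64x64 text-conditional model with its default options.
    options = model_and_diffusion_defaults()
    options["use_fp16"] = device.type == "cuda"
    model, diffusion = create_model_and_diffusion(**options)
    model.eval()
    if options["use_fp16"]:
        model.convert_to_fp16()
    model.to(device)

    # Download and load the released "base" checkpoint.
    model.load_state_dict(load_checkpoint("base", device))
    print("total parameters:", sum(p.numel() for p in model.parameters()))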
  • 25
    YOLOv3

    Object detection architectures and models pretrained on the COCO data

    Fast, precise, and easy to train, YOLOv5 has a long and successful history of real-time object detection. Treat YOLOv5 as a university where you'll feed your model information for it to learn from and grow into one integrated tool. You can get started with fewer than six lines of code using YOLOv5 and its PyTorch implementation (see the sketch after this entry). Have a go using our API by uploading your own image and watch as YOLOv5 identifies objects using our pretrained models. Start training your model without being an...
    Downloads: 87 This Week
    Last Update:
    See Project
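    The few-lines starting point mentioned above, assuming the PyTorch Hub entry point for the pretrained yolov5s model; the image URL is only an example input.

    import torch

    # Load the small pretrained YOLOv5 model from PyTorch Hub.
    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

    # Run inference on an example image (a local path or numpy array also works).
    results = model("https://ultralytics.com/images/zidane.jpg")

    results.print()                   # human-readable summary of detections
    print(results.pandas().xyxy[0])   # detections as a pandas DataFrame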