Showing 532 open source projects for "code"

  • 1
    unit-minions

    unit-minions

    AI R&D Efficiency Improvement Research: Do-It-Yourself Training LoRA

    "AI R&D Efficiency Improvement Research: Do-It-Yourself Training LoRA", including Llama (Alpaca LoRA) model, ChatGLM (ChatGLM Tuning) related Lora training. Training content: user story generation, test code generation, code-assisted generation, text to SQL, text generation code.
    Downloads: 0 This Week
    Last Update:
    See Project
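    A minimal sketch of the kind of LoRA setup described above, using the Hugging Face transformers and peft libraries; the base-model path and hyperparameters are illustrative assumptions, not this repository's actual configuration.

    ```python
    # Illustrative LoRA fine-tuning setup (not this repository's code).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, TaskType, get_peft_model

    base = "path/to/llama-base"  # placeholder: point this at your base checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # Attach low-rank adapters to the attention projections only.
    lora_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,
        r=8,                # rank of the update matrices
        lora_alpha=16,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the adapter weights are trainable
    ```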
  • 2
    AI-Agent-Host

    AI-Agent-Host

    The AI Agent Host is a module-based development environment.

    ...The AI Agent Host is a module-based environment designed to facilitate rapid experimentation and testing. It includes a docker-compose configuration with QuestDB, Grafana, Code-Server and Nginx. The AI Agent Host provides a seamless interface for managing and querying data, visualizing results, and coding in real-time. The AI Agent Host is built specifically for LangChain, a framework dedicated to developing applications powered by language models. LangChain recognizes that the most powerful and distinctive applications go beyond simply utilizing a language model and strive to be data-aware and agentic. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    VALL-E

    VALL-E

    PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)

    We introduce a language modeling approach for text to speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech which is hundreds of times larger than existing systems....
    Downloads: 3 This Week
    Last Update:
    See Project
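    A highly simplified sketch of the modeling idea described above: treat TTS as conditional language modeling over discrete codec tokens. This is a generic PyTorch illustration, not the repository's actual architecture or API.

    ```python
    import torch
    import torch.nn as nn

    class TinyCodecLM(nn.Module):
        """Toy conditional LM: predict the next audio-codec token given the
        phoneme prompt and the previous codec tokens. Illustrative only."""
        def __init__(self, n_phonemes=256, n_codes=1024, d_model=256):
            super().__init__()
            self.phone_emb = nn.Embedding(n_phonemes, d_model)
            self.code_emb = nn.Embedding(n_codes, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.decoder = nn.TransformerEncoder(layer, num_layers=4)
            self.head = nn.Linear(d_model, n_codes)

        def forward(self, phonemes, codes):
            x = torch.cat([self.phone_emb(phonemes), self.code_emb(codes)], dim=1)
            mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
            h = self.decoder(x, mask=mask)               # causal self-attention
            return self.head(h[:, phonemes.size(1):])    # logits over codec tokens

    phonemes = torch.randint(0, 256, (2, 20))
    codes = torch.randint(0, 1024, (2, 50))
    logits = TinyCodecLM()(phonemes, codes)
    loss = nn.functional.cross_entropy(                  # next-token prediction loss
        logits[:, :-1].reshape(-1, 1024), codes[:, 1:].reshape(-1))
    ```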
  • 4
    ChatGPT Plugins Collection

    ChatGPT Plugins Collection

    An unofficial collection of Plugins for ChatGPT

    ...By centralizing community contributions, the repository highlights practical applications of plugins across domains such as productivity, data access, and automation. The project also serves as a starting point for developers interested in building their own custom plugins, offering inspiration and code samples. With its open structure, it encourages collaboration and knowledge sharing in the growing ecosystem of ChatGPT extensions.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 5
    LM Human Preferences

    LM Human Preferences

    Code for the paper Fine-Tuning Language Models from Human Preferences

    ...The repository includes scripts to train the reward model (learning to rank or score pairs of outputs), and to fine-tune a policy (a language model) with reinforcement learning (or related techniques) guided by that reward model. The code is provided “as is” and explicitly says it may no longer run out-of-the-box due to dependencies or dataset migrations. It was tested on the smallest GPT-2 (124M parameters) under a specific environment (TensorFlow 1.x, specific CUDA / cuDNN combinations). It includes utilities for launching experiments, sampling from policies, and simple experiment orchestration.
    Downloads: 0 This Week
    Last Update:
    See Project
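    A minimal sketch of the pairwise reward-model objective mentioned above (scoring pairs of outputs so the human-preferred one ranks higher); generic PyTorch, not the repository's TensorFlow 1.x code.

    ```python
    import torch
    import torch.nn.functional as F

    def pairwise_reward_loss(r_preferred, r_rejected):
        """Push the reward of the human-preferred sample above the rejected one."""
        return -F.logsigmoid(r_preferred - r_rejected).mean()

    # Toy usage: random scalars standing in for a reward model's outputs.
    r_a = torch.randn(8, requires_grad=True)
    r_b = torch.randn(8, requires_grad=True)
    loss = pairwise_reward_loss(r_a, r_b)
    loss.backward()
    ```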
  • 6
    Img2Txt

    Img2Txt

    Img2Txt - Extract Text From Images using AI

    Important: if you are sharing this program, please include the official download link. What is Img2Txt? Img2Txt is a Python-based application, packaged with PyInstaller, that uses pytesseract, a Python wrapper for the Tesseract optical character recognition (OCR) engine, to extract text from images and convert it into plain text. The application features a simple, modern, user-friendly interface built with customtkinter, allowing users to easily process images and obtain the text...
    Downloads: 6 This Week
    Last Update:
    See Project
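    For readers who just want the underlying OCR call without the GUI, a minimal pytesseract sketch; the image path is a placeholder and this is not the application's own source.

    ```python
    # Minimal OCR call using pytesseract (the library Img2Txt builds on).
    # Requires the Tesseract engine to be installed and on the PATH.
    from PIL import Image
    import pytesseract

    image_path = "example.png"  # placeholder path
    text = pytesseract.image_to_string(Image.open(image_path))
    print(text)
    ```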
  • 7
    Bot on Anything

    Bot on Anything

    Large model-based chatbot builder that can quickly integrate AI models

    ...Configuration is handled simply through a central JSON file where you define which model and which application channel you want to glue together, so developers can create sophisticated AI assistants without rewriting integration code from scratch. The architecture emphasizes reusability and extensibility, allowing the addition of new model backends or new channels with relative ease. It supports switching between multiple AI models and targets within the same project.
    Downloads: 0 This Week
    Last Update:
    See Project
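    A hypothetical illustration of the "central JSON file" idea described above: one config naming a model backend and a channel, read by a small dispatcher. The keys and values are invented for illustration and are not the project's actual schema.

    ```python
    import json

    # Hypothetical config gluing one model backend to one channel.
    config_text = """
    {
      "model": {"type": "chatgpt", "api_key": "YOUR_KEY"},
      "channel": {"type": "telegram", "token": "YOUR_TOKEN"}
    }
    """
    config = json.loads(config_text)

    def build_bot(cfg):
        # A real integration layer would look these up in registries of
        # model and channel adapters; here we only report the wiring.
        print(f"wiring model '{cfg['model']['type']}' to channel '{cfg['channel']['type']}'")

    build_bot(config)
    ```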
  • 8
    artikelschreiber

    artikelschreiber

    Frontend and Backend Code for ArtikelSchreiber.com and UNAIQUE.NET

    ...All rights reserved. Frontend and Backend Source Code for Project: https://github.com/sebastianenger1981/ https://www.artikelschreiber.com/ https://www.artikelschreiben.com/ https://www.unaique.net/ https://www.artikelschreiber.com/opensource/ https://www.unaique.com/
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Alpa

    Alpa

    Training and serving large-scale neural networks

    ...Scaling neural networks to hundreds of billions of parameters has enabled dramatic breakthroughs such as GPT-3, but training and serving these large-scale neural networks require complicated distributed system techniques. Alpa aims to automate large-scale distributed training and serving with just a few lines of code.
    Downloads: 10 This Week
    Last Update:
    See Project
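    A sketch of what the "few lines of code" claim typically looks like with Alpa's decorator on a JAX train step; the toy model and data are placeholders, and an Alpa-ready device/cluster setup is assumed.

    ```python
    import alpa
    import jax
    import jax.numpy as jnp

    @alpa.parallelize          # Alpa chooses a distributed execution plan for this step
    def train_step(params, batch):
        def loss_fn(p):
            pred = batch["x"] @ p["w"]                  # toy linear model
            return jnp.mean((pred - batch["y"]) ** 2)
        grads = jax.grad(loss_fn)(params)
        return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

    params = {"w": jnp.ones((4, 1))}
    batch = {"x": jnp.ones((8, 4)), "y": jnp.zeros((8, 1))}
    params = train_step(params, batch)
    ```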
  • 10
    NOW

    NOW

    No-code tool for creating a neural search solution in minutes

    One line to host them all. Bootstrap your multimodal search case in minutes. NOW gives the world access to multimodal neural search with just one command. NOW supports various formats for uploading your dataset to your search application. You may either choose a demo dataset hosted by NOW, or use your own custom dataset, to build an application. NOW can support your custom data in the form of a DocumentArray, as a path to a local folder, or S3 bucket. You can choose a demo dataset to get...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    FFCV

    FFCV

    Fast Forward Computer Vision (and other ML workloads!)

    ffcv is a drop-in data loading system that dramatically increases data throughput in model training. From gridding to benchmarking to fast research iteration, there are many reasons to want faster model training. The project provides premade codebases for training on ImageNet and CIFAR, including both (a) extensible codebases and (b) numerous premade training configurations.
    Downloads: 0 This Week
    Last Update:
    See Project
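    A hedged sketch of the typical ffcv workflow behind "drop-in data loading": write the dataset once to ffcv's format, then iterate with its Loader in place of a PyTorch DataLoader. The field and pipeline choices below are illustrative assumptions.

    ```python
    import numpy as np
    from ffcv.writer import DatasetWriter
    from ffcv.fields import RGBImageField, IntField
    from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
    from ffcv.loader import Loader, OrderOption
    from ffcv.transforms import ToTensor

    # Stand-in map-style dataset of (image, label) pairs.
    my_dataset = [(np.zeros((32, 32, 3), dtype=np.uint8), 0) for _ in range(16)]

    # 1) One-time conversion to ffcv's .beton format.
    writer = DatasetWriter("train.beton", {"image": RGBImageField(), "label": IntField()})
    writer.from_indexed_dataset(my_dataset)

    # 2) Fast loading at train time.
    loader = Loader(
        "train.beton",
        batch_size=8,
        num_workers=2,
        order=OrderOption.RANDOM,
        pipelines={
            "image": [SimpleRGBImageDecoder(), ToTensor()],
            "label": [IntDecoder(), ToTensor()],
        },
    )
    for images, labels in loader:
        pass  # training step goes here
    ```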
  • 12
    ConvNeXt V2

    ConvNeXt V2

    Code release for ConvNeXt V2 model

    ...The result is a convnet that competes strongly with transformer architectures on recognition benchmarks while being efficient and hardware-friendly. The repository provides official PyTorch implementations for multiple model sizes (Atto, Femto, Pico, up through Huge), conversion from JAX weights, code for pretraining/fine-tuning, and pretrained checkpoints. It supports both self-supervised pretraining and supervised fine-tuning.
    Downloads: 2 This Week
    Last Update:
    See Project
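    ConvNeXt V2 backbones are also commonly consumed through timm; a hedged sketch, assuming a timm release that includes the convnextv2 model family (the exact model name may differ between versions).

    ```python
    import timm
    import torch

    # Assumed model name; check timm.list_models("convnextv2*") in your install.
    model = timm.create_model("convnextv2_atto", pretrained=False, num_classes=10)
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 10])
    ```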
  • 13
    CodeContests

    CodeContests

    Large dataset of coding contests designed for AI and ML model training

    CodeContests, developed by Google DeepMind, is a large-scale competitive programming dataset designed for training and evaluating machine learning models on code generation and problem solving. This dataset played a central role in the development of AlphaCode, DeepMind’s model for solving programming problems at a human-competitive level, as published in Science. CodeContests aggregates problems and human-written solutions from multiple programming competition platforms, including AtCoder, Codeforces, CodeChef, Aizu, and HackerEarth. ...
    Downloads: 2 This Week
    Last Update:
    See Project
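    For quick exploration, the dataset is also mirrored on the Hugging Face Hub; a hedged loading sketch, assuming the "deepmind/code_contests" dataset id and its field names.

    ```python
    from datasets import load_dataset  # Hugging Face datasets library

    # Assumed dataset id and field names; verify against the Hub listing.
    ds = load_dataset("deepmind/code_contests", split="train", streaming=True)
    example = next(iter(ds))
    print(example["name"])                # problem title
    print(example["description"][:200])   # start of the problem statement
    ```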
  • 14
    minGPT

    minGPT

    A minimal PyTorch re-implementation of the OpenAI GPT

    minGPT is a minimalist, educational re-implementation of the GPT (Generative Pretrained Transformer) architecture built in PyTorch, designed by Andrej Karpathy to expose the core structure of a transformer-based language model in as few lines of code as possible. It strips away extraneous bells and whistles, aiming to show how a sequence of token indices is fed into a stack of transformer blocks and then decoded into the next token probabilities, with both training and inference supported. Because the whole model is around 300 lines of code, users can follow each step—from embedding lookup, positional encodings, multi-head attention, feed-forward layers, to output heads—and thus demystify how GPT-style models work beneath the surface. ...
    Downloads: 1 This Week
    Last Update:
    See Project
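    A hedged usage sketch following minGPT's documented configuration style (attribute names are as recalled from the project's README; treat this as an approximation and verify against the repository).

    ```python
    import torch
    from mingpt.model import GPT

    config = GPT.get_default_config()
    config.model_type = "gpt-nano"   # smallest preset
    config.vocab_size = 50257        # e.g. the GPT-2 BPE vocabulary
    config.block_size = 128          # maximum context length
    model = GPT(config)

    idx = torch.randint(0, config.vocab_size, (1, 16))   # batch of token indices
    logits, loss = model(idx, idx)   # next-token logits and a loss when targets are given
    ```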
  • 15
    DeepMozart

    DeepMozart

    Audio generation using diffusion models

    Audio generation using diffusion models in PyTorch. The code is based on the audio-diffusion-pytorch repository.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 16
    Multi-Agent Particle Envs

    Multi-Agent Particle Envs

    Code for a multi-agent particle environment used in a paper

    ...Scenarios are designed to model cooperative, competitive, and mixed interactions among agents, making it useful for testing algorithms in multi-agent settings. The project includes built-in scenarios such as navigation to landmarks, cooperative tasks, and adversarial setups. Although archived, its concepts and code structure remain foundational for more advanced libraries like PettingZoo, which extended and maintained this environment.
    Downloads: 0 This Week
    Last Update:
    See Project
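    Since the original repository is archived, a hedged sketch of running one of these scenarios through PettingZoo's maintained MPE port; the specific environment module and version suffix are assumptions.

    ```python
    # Assumed module name; recent PettingZoo releases expose the MPE scenarios as *_v3.
    from pettingzoo.mpe import simple_spread_v3

    env = simple_spread_v3.parallel_env(N=3, max_cycles=25)
    observations, infos = env.reset(seed=42)

    while env.agents:
        actions = {agent: env.action_space(agent).sample() for agent in env.agents}
        observations, rewards, terminations, truncations, infos = env.step(actions)
    env.close()
    ```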
  • 17
    CPT

    CPT

    CPT: A Pre-Trained Unbalanced Transformer

    A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation. We replace the old BERT vocabulary with a larger one of size 51,271 built from the training data, in which we 1) add 6,800+ missing Chinese characters (most of them traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with a ## prefix); and 3) add some English tokens to reduce out-of-vocabulary cases. For position embeddings, we extend max_position_embeddings from 512 to 1024. We...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    UnionML

    UnionML

    Build and deploy machine learning microservices

    ...Data science, ML engineering, and MLOps practitioners can all gather around UnionML apps as a way of defining a single source of truth about your ML system’s behavior. This helps you maintain consistent code across your ML stack, from training to prediction logic.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    d2l-zh

    d2l-zh

    Chinese-language edition of Dive into Deep Learning

    d2l-zh is the Chinese-language edition of Dive into Deep Learning, an interactive, open-source deep learning textbook that combines code, math, and explanatory text. It features runnable Jupyter notebooks compatible with multiple frameworks (e.g., PyTorch, MXNet, TensorFlow), comprehensive theoretical analysis, and exercises. It has been widely adopted in over 70 countries and is used by more than 500 universities to teach deep learning.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    CleanRL

    CleanRL

    High-quality single file implementation of Deep Reinforcement Learning

    ...The implementation is clean and simple, yet we can scale it to run thousands of experiments using AWS Batch. CleanRL is not a modular library and therefore it is not meant to be imported. At the cost of duplicate code, we make all implementation details of a DRL algorithm variant easy to understand, so CleanRL comes with its own pros and cons. You should consider using CleanRL if you want to 1) understand all implementation details of an algorithm's variant or 2) prototype advanced features that other modular DRL libraries do not support (CleanRL has minimal lines of code so it gives you great debugging experience and you don't have to do a lot of subclassing like sometimes in modular DRL libraries).
    Downloads: 2 This Week
    Last Update:
    See Project
  • 21
    Minimal text diffusion

    Minimal text diffusion

    A minimal implementation of diffusion models for text generation

    A minimal implementation of diffusion models of text: learns a diffusion model of a given text corpus, allowing to generate text samples from the learned model. The main idea was to retain just enough code to allow training a simple diffusion model and generating samples, remove image-related terms, and make it easier to use. To train a model, run scripts/train.sh. By default, this will train a model on the simple corpus. However, you can change this to any text file using the --train_data argument. Note that you may have to increase the sequence length (--seq_len) if your corpus is longer than the simple corpus. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    ostRAT

    ostRAT

    OpenSourceTelegramRAT - Remote PC access via Telegram Bot.

    ostRAT is free and open source computer remote-control software, licensed under the GPLv3, that works through a Telegram bot. It offers many functions, for example: Screenshot (sends a screenshot), Off (turns off the computer), Url (opens the entered link), Write (sends your text to the computer), Move (moves the mouse to the given x and y coordinates), and more. WARNING: using the bot is recommended only on your own device. Failure to comply with this recommendation may result in criminal liability.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Apple Neural Engine (ANE) Transformers

    Apple Neural Engine (ANE) Transformers

    Reference implementation of the Transformer architecture optimized

    ANE Transformers is a reference PyTorch implementation of Transformer components optimized for Apple Neural Engine on devices with A14 or newer and on Macs with M1 or newer chips. It demonstrates how to structure attention and related layers to achieve substantial speedups and lower peak memory compared to baseline implementations when deployed to ANE. The repository targets practitioners who want to keep familiar PyTorch modeling while preparing models for Core ML/ANE execution paths....
    Downloads: 0 This Week
    Last Update:
    See Project
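    A hedged sketch of the surrounding deployment step (tracing a PyTorch module and converting it with coremltools so Core ML can schedule it on the Neural Engine); this is not the repository's own API, and the module and shapes are placeholders.

    ```python
    import torch
    import coremltools as ct

    # Placeholder module standing in for an ANE-friendly transformer block.
    model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.GELU()).eval()
    example = torch.randn(1, 128)

    traced = torch.jit.trace(model, example)
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="x", shape=example.shape)],
        convert_to="mlprogram",
        compute_units=ct.ComputeUnit.ALL,  # let Core ML use the Neural Engine when possible
    )
    mlmodel.save("block.mlpackage")
    ```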
  • 24
    PyTorch Transfer-Learning-Library

    PyTorch Transfer-Learning-Library

    Transfer Learning Library for Domain Adaptation, Task Adaptation, etc.

    TLlib is an open-source and well-documented library for Transfer Learning. It is based on pure PyTorch with high performance and friendly API. Our code is pythonic, and the design is consistent with torchvision. You can easily develop new algorithms or readily apply existing algorithms. We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    Video Pre-Training

    Video Pre-Training

    Learning to Act by Watching Unlabeled Online Videos

    The Video PreTraining (VPT) repository provides code and model artifacts for a project where agents learn to act by watching human gameplay videos—specifically, gameplay of Minecraft—using behavioral cloning. The idea is to learn general priors of control from large-scale, unlabeled video data, and then optionally fine-tune those priors for more goal-directed behavior via environment interaction.
    Downloads: 0 This Week
    Last Update:
    See Project
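    A minimal behavioral-cloning sketch of the idea described above (fit a policy to the actions humans took in the videos); generic PyTorch, not the repository's model code.

    ```python
    import torch
    import torch.nn as nn

    # Toy behavioral cloning: map an observed frame to a distribution over
    # discrete actions and fit it to the action taken in the video.
    policy = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
        nn.Flatten(),
        nn.LazyLinear(32), nn.ReLU(),
        nn.Linear(32, 10),            # 10 placeholder discrete actions
    )

    frames = torch.randn(4, 3, 64, 64)          # batch of video frames
    actions = torch.randint(0, 10, (4,))        # action labels derived from the videos
    loss = nn.functional.cross_entropy(policy(frames), actions)
    loss.backward()
    ```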