Showing 164 open source projects for "transformers"

  • 1
    Qwen3-Omni

    Qwen3-Omni is a natively end-to-end, omni-modal LLM

    Qwen3-Omni is a natively end-to-end multilingual omni-modal foundation model that processes text, images, audio, and video and delivers real-time streaming responses in text and natural speech. It uses a Thinker-Talker architecture with a Mixture-of-Experts (MoE) design, early text-first pretraining, and mixed multimodal training to support strong performance across all modalities without sacrificing text or image quality. The model supports 119 text languages, 19 speech input languages, and...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 2
    Seldon Core

    An MLOps framework to package, deploy, monitor and manage models

    ...Built on Kubernetes; runs on any cloud and on-premises. Framework agnostic: supports top ML libraries, toolkits, and languages, with advanced deployments using experiments, ensembles, and transformers. The open-source framework makes it easier and faster to deploy your machine learning models and experiments at scale on Kubernetes. The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes. A minimal sketch of Seldon-style Python model and transformer components follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
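    The transformer and model components mentioned above are typically plain Python classes exposed through Seldon's language wrapper. The sketch below is illustrative only: the class names, the scaling step, and the dummy prediction are assumptions rather than Seldon Core's own examples; see the project docs for the exact wrapper contract and packaging steps.

```python
# Illustrative Seldon-style Python components: an input transformer and a model.
# Class names and logic are assumptions for demonstration; consult Seldon Core's
# docs for how to package these into an inference graph / SeldonDeployment.
import numpy as np


class MyTransformer:
    """Pre-processing step placed in front of a model in an inference graph."""

    def transform_input(self, X, features_names=None):
        # Hypothetical transformation: scale raw pixel values into [0, 1].
        return np.asarray(X) / 255.0


class MyModel:
    """Model component served over REST/gRPC by the Seldon wrapper."""

    def predict(self, X, features_names=None):
        # Placeholder prediction; a real component would call a trained model here.
        return np.asarray(X).sum(axis=1, keepdims=True)
```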
  • 3
    Apiato

    PHP framework for building scalable APIs on top of Laravel

    The flawless open-source framework for building scalable and testable API-centric apps with PHP and Laravel. Authentication with OAuth 2.0 for first- and third-party clients (using Laravel Passport). Role-Based Access Control (RBAC), seeded with a Super Admin, roles, and permissions. Query parameter support (orderBy, sorted, and filter) with full-text search. Useful endpoints for managing users, roles/permissions, tokens, and more. API documentation generator to generate API docs from PHP...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 4
    InfiniteYou

    Flexible Photo Recrafting While Preserving Your Identity

    InfiniteYou is an open-source image-generation and “identity-preserving image editing / generation” framework from ByteDance, designed to generate high-fidelity images that preserve a subject’s identity while allowing flexible editing or re-creation according to textual prompts. Using an architecture built around diffusion transformers (DiTs), InfiniteYou introduces a component called InfuseNet that injects identity features derived from reference images into the generation process — via residual connections — so that the output matches the person’s identity closely, without sacrificing visual quality or text-image alignment. The team uses a multi-stage training strategy with synthetic multi-sample data per identity to fine-tune for both identity consistency and aesthetic quality. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    DeepSpeed MII

    MII makes low-latency and high-throughput inference possible

    MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. The deep learning (DL) open-source community has seen tremendous growth in the last few months. Incredibly powerful text generation models such as BLOOM 176B, and image generation models such as Stable Diffusion, are now available to anyone with access to a handful of GPUs, or even a single one, through platforms such as Hugging Face. While open-sourcing has democratized access to AI capabilities, their application is... A minimal inference sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
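    As a rough sketch of local, non-persistent inference with MII, assuming a recent DeepSpeed-MII release that exposes mii.pipeline, a GPU large enough for the chosen checkpoint, and an illustrative model name:

```python
# Minimal sketch, assuming the mii.pipeline API of recent DeepSpeed-MII releases.
# The model name is illustrative and the call requires a suitable GPU.
import mii

pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")
responses = pipe(["DeepSpeed is"], max_new_tokens=64)
print(responses)
```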
  • 6
    DeepSeek MoE

    Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

    ...The repo publishes both Base and Chat variants of the 16B MoE model (deepseek-moe-16b) and provides evaluation results across benchmarks. It also includes a quick start with inference instructions (using Hugging Face Transformers) and guidance on fine-tuning (DeepSpeed, hyperparameters, quantization). The code is MIT-licensed, with a separate "Model License" applied to the models. A sketch of the quick-start loading path follows this entry.
    Downloads: 1 This Week
    Last Update:
    See Project
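    The quick start mentioned above boils down to standard Hugging Face Transformers loading. The sketch below makes assumptions: the Hub model ID and dtype are illustrative, the repo ships custom modeling code (hence trust_remote_code=True), and the 16B checkpoint needs substantial GPU memory.

```python
# Sketch of a Hugging Face Transformers quick start for deepseek-moe-16b.
# Model ID, dtype, and device settings are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-moe-16b-chat"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # the repo ships custom modeling code
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```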
  • 7
    Transformers4Rec

    Transformers4Rec is a flexible and efficient library

    ...The library works as a bridge between natural language processing (NLP) and recommender systems (RecSys) by integrating with one of the most popular NLP frameworks, Hugging Face Transformers (HF). Transformers4Rec makes state-of-the-art transformer architectures available for RecSys researchers and industry practitioners. Traditional recommendation algorithms usually ignore the temporal dynamics and the sequence of interactions when trying to model user behavior. Generally, the next user interaction is related to the sequence of the user's previous choices. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    Primus

    An abstraction layer for real-time to prevent module lock-in

    Primus, the creator god of the Transformers, is now also known as a universal wrapper for real-time frameworks. There are a lot of real-time frameworks available for Node.js, and they all have different opinions on how real-time should be done. Primus provides a common low-level interface for communicating in real time using various real-time frameworks, with effortless switching between them by changing a single line of code.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    DocWire SDK

    Award-winning modern data processing SDK in C++20

    DocWire SDK, a standout C++20, AI-driven data processing tool, has received an award from SourceForge and strong backing from Microsoft. It handles nearly 100 file types, empowering efficient text extraction, web data extraction, and document analysis. For businesses, the shift to DocWire SDK signifies a leap forward. It promises comprehensive document format support and the ability to extract valuable insights from email boxes, databases, and websites using cutting-edge AI. DocWire SDK aims to...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 10
    CogView

    Text-to-Image generation; the repo for the NeurIPS 2021 paper

    CogView is a large-scale pretrained text-to-image transformer model, introduced in the NeurIPS 2021 paper CogView: Mastering Text-to-Image Generation via Transformers. With 4 billion parameters, it was one of the earliest transformer-based models to successfully generate high-quality images from natural language descriptions in Chinese, with partial support for English via translation. The model incorporates innovations such as PB-relax and Sandwich-LN to enable stable training of very deep transformers without NaN loss issues. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 11
    FXtransformer Designer is a graphical design aid for both power and RF transformers. RF Design: Broadband, Single-Tuned, Double-Tuned.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 12
    DiT (Diffusion Transformers)

    Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"

    DiT (Diffusion Transformer) is a powerful architecture that applies transformer-based modeling directly to the diffusion generative process for high-quality image synthesis. Unlike CNN-based diffusion models, DiT represents the diffusion process in the latent space and processes image tokens through transformer blocks with learned positional encodings, offering scalability and superior sample quality. The model architecture parallels large language models but operates on image tokens—each block... An illustrative conditioning block in this spirit is sketched after this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
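    To make the conditioning idea concrete, here is an illustrative PyTorch block in the spirit of DiT's adaptive layer-norm conditioning. This is a simplified sketch, not the official implementation: the dimensions, module layout, and gating scheme are assumptions for demonstration.

```python
# Illustrative DiT-style block (adaptive layer-norm conditioning); not the official code.
import torch
import torch.nn as nn


class DiTBlockSketch(nn.Module):
    def __init__(self, dim: int, heads: int, mlp_ratio: float = 4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )
        # Conditioning vector -> shift/scale/gate for both sub-layers of this block.
        self.ada = nn.Linear(dim, 6 * dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); cond: (batch, dim) timestep/class embedding.
        shift1, scale1, gate1, shift2, scale2, gate2 = self.ada(cond).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + scale1.unsqueeze(1)) + shift1.unsqueeze(1)
        x = x + gate1.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + scale2.unsqueeze(1)) + shift2.unsqueeze(1)
        x = x + gate2.unsqueeze(1) * self.mlp(h)
        return x


# Example: 64 latent-patch tokens (e.g. a 32x32 latent split into 4x4 patches).
block = DiTBlockSketch(dim=256, heads=8)
tokens = torch.randn(2, 64, 256)
cond = torch.randn(2, 256)
print(block(tokens, cond).shape)  # torch.Size([2, 64, 256])
```

    In the published model the residual gates are initialized to zero (adaLN-Zero) so that each block starts out as an identity mapping, which helps stabilize training.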
  • 13
    Basaran

    Basaran, an open-source alternative to the OpenAI text completion API

    Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models. The open-source community will eventually witness the Stable Diffusion moment for large language models (LLMs), and Basaran allows you to replace OpenAI's service with the latest open-source model to power your application without modifying a single line of code. Stream generation using various decoding strategies. ... A rough request sketch against the compatible endpoint follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
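    Because the endpoint is OpenAI-compatible, an existing completion client only needs its base URL pointed at the Basaran server. A rough sketch follows; the host, port, and model name are assumptions, so see the project README for real deployment options.

```python
# Rough sketch: query a locally running Basaran server through its
# OpenAI-compatible completion endpoint. Host, port, and model are assumptions.
import requests

resp = requests.post(
    "http://127.0.0.1/v1/completions",
    json={"model": "user/llama2", "prompt": "Once upon a time,", "max_tokens": 32},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```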
  • 14
    Transformers-Interpret

    Model explainability that works seamlessly with Hugging Face Transformers

    Transformers-Interpret is an interpretability tool for Transformer-based NLP models, providing insights into attention mechanisms and feature importance. A minimal usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
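    A minimal usage sketch in the style shown in the project's documentation; the checkpoint name is just an example.

```python
# Sketch: word-level attributions for a sequence classification model.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("I love this movie, it was great!")
print(word_attributions)  # list of (token, attribution score) pairs
```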
  • 15
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project has open-sourced the Chinese LLaMA model and the instruction-fine-tuned Chinese Alpaca large model to further promote open research on large models in the Chinese NLP community. Based on the original LLaMA, these models expand the Chinese vocabulary and use Chinese data for secondary pre-training, which further improves basic semantic understanding of Chinese. The Chinese Alpaca model is additionally fine-tuned on Chinese instruction data, which...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    Prime QA

    State-of-the-art Multilingual Question Answering research

    ...By using PrimeQA, a researcher can replicate the experiments outlined in a paper published in the latest NLP conference while also enjoying the capability to download pre-trained models (from an online repository) and run them on their own custom data. PrimeQA is built on top of the Transformers toolkit and uses datasets and models that are directly downloadable.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 17
    DALL-E in Pytorch

    Implementation / replication of DALL-E, OpenAI's Text to Image Transformer

    ...The wrapper class should take care of downloading and caching the model for you auto-magically. You can also use the pretrained VAE offered by the authors of Taming Transformers! Currently only the VAE with a codebook size of 1024 is offered, with the hope that it may train a little faster than OpenAI's, which has a size of 8192. In contrast to OpenAI's VAE, it also has an extra layer of downsampling, so the image sequence length is 256 instead of 1024 (this will lead to a 16x reduction in training costs, when you do the math).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    Hyperformer

    Hypergraph Transformer for Skeleton-based Action Recognition

    ...To relax such a restriction, Self-Attention (SA) mechanism has been adopted to make the topology of GCNs adaptive to the input, resulting in the state-of-the-art hybrid models. Concurrently, attempts with plain Transformers have also been made, but they still lag behind state-of-the-art GCN-based methods due to the lack of structural prior.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    ...If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend using the Hugging Face transformers library instead, which supports GPT-NeoX models; a small loading sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
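    For that inference path, loading is plain Hugging Face Transformers. A small sketch follows; the dtype and device_map settings assume hardware with enough memory for the 20B checkpoint.

```python
# Sketch: run a GPT-NeoX checkpoint through Hugging Face Transformers for inference.
# Hardware assumption: enough GPU memory (or offloading) for the 20B weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("GPT-NeoX is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```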
  • 20
    Work Case Toolkit 0.4 beta 1

    Supports variant-based Java program execution, adapted to support MDE.

    ...The project was originally planned in 2005 to support the OMG Model Driven Architecture approach, focusing especially on the management of architectural changes and model transformers. The main goal of the program is to design transformation chains based on feature-diagram variants. Model transformer orchestration is obtained by using Feature Diagrams and Transformation Diagrams (a feature that must be re-implemented to support a Java drawing API). Any help is welcome. Given that we have had good results running dynamic transformation chains, we decided to extend the tool to support another goal: a Java API for variant-based program execution. ...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 21
    OpenDelta

    A plug-and-play library for parameter-efficient-tuning

    OpenDelta is an open-source parameter-efficient fine-tuning library that enables efficient adaptation of large-scale pre-trained models using delta tuning techniques. It is a toolkit for parameter-efficient tuning methods (which we dub delta tuning), by which users can flexibly assign (or add) a small number of parameters to update while keeping most parameters frozen. Using OpenDelta, users can easily implement prefix-tuning, adapters, LoRA, or any other type of delta tuning... A minimal LoRA sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
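    A minimal sketch of the delta-tuning workflow. The backbone checkpoint and modified_modules values are assumptions for illustration; module names differ per architecture, so consult OpenDelta's docs for the exact arguments.

```python
# Sketch: attach LoRA deltas to a frozen backbone with OpenDelta.
# Backbone checkpoint and modified_modules are assumptions for illustration.
from transformers import AutoModelForSequenceClassification
from opendelta import LoraModel

backbone = AutoModelForSequenceClassification.from_pretrained("roberta-base")
delta = LoraModel(backbone_model=backbone, modified_modules=["query", "value"])
delta.freeze_module(exclude=["deltas", "classifier"])  # train only deltas + head
delta.log()  # report which parameters remain trainable
# Train `backbone` as usual; only the injected delta parameters receive gradients.
```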
  • 22
    abstract2paper

    Auto-generate an entire paper from a prompt or abstract using NLP

    ...Note: to compile a PDF of your auto-generated paper (when you run the demo locally), you'll need a working LaTeX installation on your machine (e.g., so that pdflatex is a recognized system command). The notebook will also automatically install the transformers library if it's not already available in your local environment. In its unmodified state, the demo notebook uses the abstract from the GPT-3 paper as the "seed" for a new paper. Each time you run the notebook you'll get a new result.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    Apple Neural Engine (ANE) Transformers

    Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE)

    ...The project sits alongside related Apple ML repos that focus on deploying attention-based models efficiently to ANE-equipped hardware. In short, it’s a practical blueprint for adapting Transformers to Apple’s dedicated ML accelerator without rewriting entire model stacks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    Replibyte

    Seed your development database with real data

    ...Start a local database with the prod data in a single command. On-the-fly data (de)compression (Zlib). On-the-fly data de/encryption (AES-256). Fully stateless (no server, no daemon) and lightweight binary. Use custom transformers. Auto-detect and version database schema changes. Auto-detect sensitive fields. Auto-clean backed-up data. At Qovery (the company behind Replibyte), developers can clone their applications and databases with just one click. However, the cloning process can be tedious and time-consuming, and we end up copying the information multiple times.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    BEVFormer

    Implementation of BEVFormer, a camera-only framework

    3D visual perception tasks, including 3D detection and map segmentation based on multi-camera images, are essential for autonomous driving systems. In this work, we present a new framework termed BEVFormer, which learns unified BEV representations with spatiotemporal transformers to support multiple autonomous driving perception tasks. In a nutshell, BEVFormer exploits both spatial and temporal information by interacting with spatial and temporal space through predefined grid-shaped BEV queries. To aggregate spatial information, we design spatial cross-attention in which each BEV query extracts spatial features from its regions of interest across camera views. ...
    Downloads: 0 This Week
    Last Update:
    See Project