Showing 138 open source projects for "transformer"

  • 1
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your framework-specific code. TE also includes a framework-agnostic C++ API...
    Downloads: 0 This Week
    Last Update:
    See Project
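    A minimal sketch of how Transformer Engine's PyTorch modules and FP8 autocast might be used, per the description above. The layer size, recipe settings, and input shape are illustrative, and FP8 execution assumes a supported GPU (e.g. Hopper).

```python
# Illustrative sketch (not taken verbatim from the project docs): swap in a TE
# layer and run it under FP8 autocast. Sizes and recipe settings are placeholders.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# FP8-capable drop-in replacement for torch.nn.Linear
model = te.Linear(768, 768, bias=True).cuda()
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

inp = torch.randn(16, 768, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)        # forward pass runs FP8 GEMMs on supported hardware
out.sum().backward()        # backward works as with ordinary PyTorch modules
```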
  • 2
    Transformer Reinforcement Learning X

    A repo for distributed training of language models with Reinforcement Learning

    trlX is a distributed training framework designed from the ground up to focus on fine-tuning large language models with reinforcement learning using either a provided reward function or a reward-labeled dataset. Training support for Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters,...
    Downloads: 0 This Week
    Last Update:
    See Project
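    A hedged sketch of the high-level trlx.train entry point described above: fine-tune a small causal LM against a programmatic reward function. The toy reward and prompts here are stand-ins, not the project's own examples.

```python
# Illustrative only: the reward below (prefer longer samples) is a toy stand-in
# for a real reward model or task metric.
import trlx

def reward_fn(samples, **kwargs):
    # samples are generated continuations; return one scalar reward per sample
    return [float(len(s)) for s in samples]

trainer = trlx.train(
    "gpt2",                                   # any supported causal/T5 HF model
    reward_fn=reward_fn,
    prompts=["The movie was", "I think the weather"],
    eval_prompts=["The movie was"],
)
```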
  • 4
    RQ-Transformer

    Implementation of RQ Transformer, autoregressive image generation

    Implementation of RQ Transformer, which proposes a more efficient way of training multi-dimensional sequences autoregressively. This repository will only contain the transformer for now. You can use this vector quantization library for the residual VQ. This type of axial autoregressive transformer should be compatible with memcodes, proposed in NWT. It would likely also work well with multi-headed VQ. I also think there is something deeper going on, and have generalized this to any number...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    Minecraft Development for IntelliJ

    Plugin for IntelliJ IDEA that gives special support for Minecraft mods

    Experience first-class support for all of the major Java Minecraft development platforms, including Bukkit and derivatives such as Spigot and Paper, Sponge, Forge, Fabric, MCP, Mixins, LiteLoader, BungeeCord, and Waterfall. It also provides in-depth support for Access Transformer and NBT files, and more. You can install the plugin through IntelliJ's internal plugin browser: navigate to File -> Settings -> Plugins and click the Browse Repositories... button at the bottom...
    Downloads: 32 This Week
    Last Update:
    See Project
  • 6
    Whisper

    Robust Speech Recognition via Large-Scale Weak Supervision

    Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification. A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented...
    Downloads: 54 This Week
    Last Update:
    See Project
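    A minimal transcription sketch using the whisper Python package; the audio path and model size are placeholders.

```python
import whisper

model = whisper.load_model("base")       # sizes: tiny, base, small, medium, large
result = model.transcribe("audio.mp3")   # detects language, then decodes to text
print(result["text"])
```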
  • 7
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ... where applicable alongside graph optimizations and transforms. ONNX Runtime training can accelerate the model training time on multi-node NVIDIA GPUs for transformer models with a one-line addition for existing PyTorch training scripts. Support for a variety of frameworks, operating systems and hardware platforms. Built-in optimizations that deliver up to 17X faster inferencing and up to 1.4X faster training.
    Downloads: 20 This Week
    Last Update:
    See Project
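    A minimal inference sketch with the onnxruntime Python API; the model file, input shape, and provider choice are placeholders for whatever model you have exported.

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape depends on the model
outputs = sess.run(None, {input_name: x})               # None = return all outputs
print(outputs[0].shape)
```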
  • 8
    Curated Transformers

    PyTorch library of curated Transformer models and their components

    State-of-the-art transformers, brick by brick. Curated Transformers is a transformer library for PyTorch. It provides state-of-the-art models that are composed of a set of reusable components. Supports state-of-the-art transformer models, including LLMs such as Falcon, Llama, and Dolly v2. Implementing a feature or bugfix benefits all models. For example, all models support 4/8-bit inference through the bitsandbytes library and each model can use the PyTorch meta device to avoid unnecessary...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 9
    DALL-E 2 - Pytorch

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in Pytorch. The main novelty seems to be an extra layer of indirection with the prior network (whether it is an autoregressive transformer or a diffusion network), which predicts an image embedding based on the text embedding from CLIP. Specifically, this repository will only build out the diffusion prior network, as it is the best performing variant (but which incidentally involves a causal transformer...
    Downloads: 10 This Week
    Last Update:
    See Project
  • 10
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository is for ongoing research on training large transformer language models at scale. We developed efficient, model-parallel (tensor, sequence, and pipeline), and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron is also used in NeMo Megatron, a framework to help enterprises overcome the challenges of building...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 11
    CTranslate2

    Fast inference engine for Transformer models

    CTranslate2 is a C++ and Python library for efficient inference with Transformer models. The project implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU. The execution is significantly faster and requires less resources than general-purpose deep learning frameworks on supported models and tasks thanks to many advanced...
    Downloads: 2 This Week
    Last Update:
    See Project
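    A sketch of CTranslate2 inference, assuming a model directory already produced with one of the project's converters (e.g. ct2-transformers-converter); the path and tokens below are placeholders.

```python
import ctranslate2

translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")
# translate_batch expects pre-tokenized input (e.g. SentencePiece pieces)
results = translator.translate_batch([["▁Hello", "▁world", "!"]])
print(results[0].hypotheses[0])
```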
  • 12
    UglifyJS 3

    JavaScript parser, mangler, compressor, beautifier toolkit

    UglifyJS is a JavaScript compressor/minifier written in JavaScript. It also contains tools that allow one to automate working with JavaScript code. A parser which produces an abstract syntax tree (AST) from JavaScript code. A code generator which outputs JavaScript code from an AST, also providing the option to get a source map. A compressor (optimizer), which uses the transformer API to optimize an AST into a smaller one. A mangler, which reduces names of local variables to (usually) single letters...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 13
    spacy-transformers

    Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy

    ... trained transformer model if you install spacy-transformers. You can also do your own language model pretraining via the spacy pretrain command. You can even share your transformer or another contextual embedding model across multiple components, which can make long pipelines several times more efficient. To use transfer learning, you’ll need at least a few annotated examples for what you’re trying to predict. A minimal usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
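    A minimal sketch of a transformer-backed spaCy pipeline as described above; assumes `pip install spacy[transformers]` and a downloaded en_core_web_trf model.

```python
import spacy

nlp = spacy.load("en_core_web_trf")   # transformer-based English pipeline
doc = nlp("spacy-transformers lets spaCy use models like BERT and GPT-2.")
print([(ent.text, ent.label_) for ent in doc.ents])
print(doc._.trf_data)                 # transformer output shared across pipeline components
```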
  • 14
    MetaTransformer

    Meta-Transformer for Unified Multimodal Learning

    We're thrilled to present OneLLM, an ensembling Meta-Transformer framework with Multimodal Large Language Models, which performs multimodal joint training, supports more modalities including fMRI, Depth, and Normal Maps, and demonstrates impressive performance on 25 benchmarks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    CTransformers

    Python bindings for the Transformer models implemented in C/C++

    Python bindings for the Transformer models implemented in C/C++ using GGML library.
    Downloads: 0 This Week
    Last Update:
    See Project
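    A minimal sketch of the ctransformers Python bindings; the GGML model path and model_type are placeholders for whatever quantized weights you have locally.

```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "path/to/ggml-model.bin",
    model_type="gpt2",        # e.g. "llama", "falcon", "gpt2", matching the weights
)
print(llm("AI is going to"))  # the loaded model is directly callable for generation
```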
  • 16
    CPT

    CPT: A Pre-Trained Unbalanced Transformer

    A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation. We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add 6,800+ missing Chinese characters (most of them are traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with ## prefix); 3) add some English tokens to reduce OOV. For position embeddings, we extend max_position_embeddings from 512 to 1024. We...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    LLM Foundry

    LLM training code for MosaicML foundation models

    Introducing MPT-7B, the first entry in our MosaicML Foundation Series. MPT-7B is a transformer trained from scratch on 1T tokens of text and code. It is open source, available for commercial use, and matches the quality of LLaMA-7B. MPT-7B was trained on the MosaicML platform in 9.5 days with zero human intervention at a cost of ~$200k. Large language models (LLMs) are changing the world, but for those outside well-resourced industry labs, it can be extremely difficult to train and deploy...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    Hyperformer

    Hypergraph Transformer for Skeleton-based Action Recognition

    This is the official implementation of our paper "Hypergraph Transformer for Skeleton-based Action Recognition." Skeleton-based action recognition aims to recognize human actions given human joint coordinates with skeletal interconnections. By defining a graph with joints as vertices and their natural connections as edges, previous works successfully adopted Graph Convolutional networks (GCNs) to model joint co-occurrences and achieved superior performance. More recently, a limitation of GCNs...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    ... recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend you use the Hugging Face transformers library instead, which supports GPT-NeoX models (see the sketch after this entry).
    Downloads: 1 This Week
    Last Update:
    See Project
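    Following the recommendation above, a sketch of running GPT-NeoX weights for inference through Hugging Face transformers rather than this training library; note the 20B checkpoint is roughly 40 GB of weights.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tok("GPT-NeoX is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0]))
```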
  • 20
    imodelsX

    Interpretable prompting and models for NLP

    Interpretable prompting and models for NLP (using large language models). Generate a prompt that explains patterns in data. Explain the difference between two distributions. Find a natural-language prompt using input gradients. Fit a better linear model using an LLM to extract embeddings. Fit better decision trees using an LLM to expand features. Finetune a single linear layer on top of LLM embeddings. Use these just like a scikit-learn model. During training, they fit better...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    solo-learn

    Library of self-supervised methods for visual representation

    A library of self-supervised methods for unsupervised visual representation learning powered by PyTorch Lightning. We aim at providing SOTA self-supervised methods in a comparable environment while, at the same time, implementing training tricks. The library is self-contained, but it is possible to use the models outside of solo-learn.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    x-transformers

    A simple but complete full-attention transformer

    A simple but complete full-attention transformer with a set of promising experimental features from various papers. Proposes adding learned memory key/values prior to attending. They were able to remove feedforwards altogether and attain a similar performance to the original transformers. I have found that keeping the feedforwards and adding the memory key/values leads to even better performance. Proposes adding learned tokens, akin to CLS tokens, named memory tokens, that are passed through... A usage sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
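    A sketch of the x-transformers API for a decoder-only model that enables the memory key/values and memory tokens mentioned above; all sizes are illustrative.

```python
import torch
from x_transformers import TransformerWrapper, Decoder

model = TransformerWrapper(
    num_tokens=20000,
    max_seq_len=1024,
    num_memory_tokens=20,          # learned memory tokens prepended to the sequence
    attn_layers=Decoder(
        dim=512,
        depth=6,
        heads=8,
        attn_num_mem_kv=16,        # learned memory key/values per attention layer
    ),
)

x = torch.randint(0, 20000, (1, 1024))
logits = model(x)                  # shape: (1, 1024, 20000)
```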
  • 23
    Haystack

    Haystack is an open source NLP framework to interact with your data

    ..., not just keywords! Make use of and compare the latest pre-trained transformer-based language models like OpenAI’s GPT-3, BERT, RoBERTa, DPR, and more. Pick any Transformer model from Hugging Face's Model Hub, experiment, find the one that works. Use Haystack NLP components on top of Elasticsearch, OpenSearch, or plain SQL. Boost search performance with Pinecone, Milvus, FAISS, or Weaviate vector databases, and dense passage retrieval. A minimal pipeline sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
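    A sketch of an extractive QA pipeline in 1.x-era Haystack (InMemoryDocumentStore gained BM25 support around v1.13); the document contents and reader model are placeholders.

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, FARMReader
from haystack.pipelines import ExtractiveQAPipeline

store = InMemoryDocumentStore(use_bm25=True)
store.write_documents([{"content": "Haystack is an open source NLP framework."}])

retriever = BM25Retriever(document_store=store)
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")
pipe = ExtractiveQAPipeline(reader, retriever)

result = pipe.run(query="What is Haystack?", params={"Retriever": {"top_k": 3}})
print(result["answers"][0].answer)
```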
  • 24
    parcel/css

    A CSS parser, transformer, and minifier written in Rust

    A CSS parser, transformer, and minifier written in Rust. Parsing and minifying large files are completed in milliseconds, often with significantly smaller output than other tools. Many other CSS parsers treat property values as an untyped series of tokens. This means that each transformer that wants to do something with these values must interpret them itself, leading to duplicate work and inconsistencies. @parcel/css parses all values using the grammar from the CSS specification and exposes...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    DALL-E in Pytorch

    Implementation / replication of DALL-E, OpenAI's Text to Image

    Implementation / replication of DALL-E (paper), OpenAI's Text to Image Transformer, in Pytorch. It will also contain CLIP for ranking the generations. Kobiso, a research engineer from Naver, has trained on the CUB200 dataset here, using full and deepspeed sparse attention. You can also skip the training of the VAE altogether, using the pretrained model released by OpenAI! The wrapper class should take care of downloading and caching the model for you auto-magically. You can also use...
    Downloads: 1 This Week
    Last Update:
    See Project