Showing 29 open source projects for "parallel language"

  • 1
    Large Language Models (LLMs)

    Connect MATLAB to LLM APIs, including OpenAI® Chat Completions

    This repository connects MATLAB to large language models (LLMs) such as OpenAI's ChatGPT and DALL-E, Azure OpenAI, and Ollama, integrating their natural language processing and image generation capabilities directly into the MATLAB environment. It facilitates creating chatbots, summarizing text, and generating images, among other tasks.
    Downloads: 1 This Week
  • 2
    The Julia Programming Language

    High-level, high-performance dynamic language for technical computing

    Julia is a fast, open source, high-performance dynamic language for technical computing. It can be used for data visualization and plotting, deep learning, machine learning, scientific computing, parallel computing, and much more. With its high-level syntax, Julia is easy to use for programmers of every level and background. Julia has more than 2,800 community-registered packages, including various mathematical libraries, data manipulation tools, and packages for general-purpose computing. ...
    Downloads: 2 This Week
  • 3
    vLLM

    A high-throughput and memory-efficient inference and serving engine

    vLLM is a fast and easy-to-use library for LLM inference and serving. It delivers high-throughput serving with various decoding algorithms, including parallel sampling and beam search; a minimal sketch of parallel sampling follows below.
    Downloads: 48 This Week
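    A minimal sketch of vLLM's parallel sampling, assuming vLLM is installed (pip install vllm); the checkpoint name is an arbitrary small example, not prescribed by the project:

        # Parallel sampling with vLLM: n=4 asks for four samples per prompt,
        # generated together in one batched, memory-efficient pass.
        from vllm import LLM, SamplingParams

        llm = LLM(model="facebook/opt-125m")  # illustrative small model
        params = SamplingParams(n=4, temperature=0.8, max_tokens=64)

        outputs = llm.generate(["Parallel sampling works by"], params)
        for candidate in outputs[0].outputs:
            print(candidate.text)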
  • 4
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. This repository hosts ongoing research on training large transformer language models at scale. It implements efficient model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision; a toy illustration of the tensor-parallel idea follows below. Megatron is also used in NeMo Megatron, a framework that helps enterprises overcome the challenges of building and training sophisticated natural language processing models with billions or trillions of parameters. ...
    Downloads: 2 This Week
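    Megatron itself is driven by launcher scripts and configuration flags; the snippet below is only a toy, single-process illustration of the column-parallel idea behind tensor parallelism (shard a weight matrix by columns, compute partial outputs independently, then gather), not Megatron's API:

        # Toy tensor (column) parallelism: concatenating the per-shard
        # outputs reproduces the unsharded linear layer exactly.
        import torch

        torch.manual_seed(0)
        x = torch.randn(2, 8)                        # activations
        w = torch.randn(8, 16)                       # full weight matrix
        full = x @ w                                 # single-device result

        shards = torch.chunk(w, 2, dim=1)            # one column shard per worker
        partials = [x @ shard for shard in shards]   # computed independently
        gathered = torch.cat(partials, dim=1)        # the all-gather step

        assert torch.allclose(full, gathered, atol=1e-6)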
  • 5
    CogVLM

    A state-of-the-art open visual language model

    CogVLM is an open-source visual–language model suite—and its GUI-oriented sibling CogAgent—aimed at image understanding, grounding, and multi-turn dialogue, with optional agent actions on real UI screenshots. The flagship CogVLM-17B combines ~10B visual parameters with ~7B language parameters and supports 490×490 inputs; CogAgent-18B extends this to 1120×1120 and adds plan/next-action outputs plus grounded operation coordinates for GUI tasks.
    Downloads: 1 This Week
  • 6
    Stanza

    Stanford NLP Python library for many human languages

    ...From raw text through syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to the languages of your choosing. Stanza is a Python natural language analysis package. It contains tools, usable in a pipeline, that convert a string of human-language text into lists of sentences and words, generate the base forms of those words along with their parts of speech and morphological features, produce a syntactic dependency parse, and recognize named entities; a minimal pipeline sketch follows below. The toolkit is designed to be parallel among more than 70 languages, using the Universal Dependencies formalism. ...
    Downloads: 4 This Week
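    The pipeline sketch mentioned above, assuming Stanza is installed (pip install stanza); the first run downloads the English models:

        # Run raw text through Stanza's neural pipeline and read out
        # tokens, lemmas, and universal part-of-speech tags.
        import stanza

        stanza.download("en")            # one-time model download
        nlp = stanza.Pipeline("en")      # tokenization through parsing/NER

        doc = nlp("Stanza turns raw text into annotated sentences.")
        for sentence in doc.sentences:
            for word in sentence.words:
                print(word.text, word.lemma, word.upos)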
  • 7
    MiniMax-01

    Large language model and vision-language model based on Linear Attention

    MiniMax-01 is the official repository for two flagship models: MiniMax-Text-01, a long-context language model, and MiniMax-VL-01, a vision-language model built on top of it. MiniMax-Text-01 uses a hybrid attention architecture that blends Lightning Attention, standard softmax attention, and Mixture-of-Experts (MoE) routing to achieve both high throughput and long-context reasoning. It has 456 billion total parameters with 45.9 billion activated per token and is trained with advanced parallel strategies such as LASP+, varlen ring attention, and Expert Tensor Parallelism, enabling a training context of 1 million tokens and up to 4 million tokens at inference. ...
    Downloads: 0 This Week
  • 8
    Atropos

    Language Model Reinforcement Learning Environments framework

    ...It provides foundational tooling for asynchronous RL loops where environment services communicate with trainers and inference engines, enabling complex workflow orchestration in distributed and parallel setups. This framework facilitates experimentation with RLHF (Reinforcement Learning from Human Feedback), RLAIF, or multi-turn training approaches by abstracting environment logic, scoring, and logging into reusable components.
    Downloads: 3 This Week
  • 9
    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    ...However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture. It remains a challenge for AI researchers to implement complex distributed training solutions for their models. Colossal-AI provides a collection of parallel components so you can write distributed deep learning models just as you would write a model for your laptop.
    Downloads: 0 This Week
  • 10
    higgsfield

    Fault-tolerant, highly scalable GPU orchestration

    Higgsfield is an open-source, fault-tolerant, highly scalable GPU orchestration layer and machine learning framework designed for training models with billions to trillions of parameters, such as Large Language Models (LLMs).
    Downloads: 1 This Week
  • 11
    magentic

    Seamlessly integrate LLMs as Python functions

    Easily integrate Large Language Models into your Python code. Simply use the @prompt and @chatprompt decorators to create functions that return structured output from the LLM. Mix LLM queries and function calling with regular Python code to create complex logic.
    Downloads: 0 This Week
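    A minimal usage sketch, assuming magentic is installed (pip install magentic) and an OpenAI API key is configured in the environment; the prompt template here is illustrative:

        # The @prompt decorator turns a templated prompt string into a
        # typed Python function backed by an LLM call.
        from magentic import prompt

        @prompt("Translate this sentence into {language}: {sentence}")
        def translate(sentence: str, language: str) -> str: ...

        print(translate("Parallel computing is everywhere.", "French"))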
  • 12
    fairseq2

    FAIR Sequence Modeling Toolkit 2

    ...It supports multi-GPU and multi-node distributed training using DDP, FSDP, and tensor parallelism, capable of scaling up to 70B+ parameter models. The framework integrates seamlessly with PyTorch 2.x features such as torch.compile, Fully Sharded Data Parallel (FSDP), and modern configuration management.
    Downloads: 2 This Week
  • 13
    DeepEval

    DeepEval is a simple-to-use, open-source LLM evaluation framework for evaluating and testing large-language-model systems. It is similar to Pytest but specialized for unit testing LLM outputs. DeepEval incorporates the latest research to evaluate LLM outputs on metrics such as G-Eval, hallucination, answer relevancy, and RAGAS, which use LLMs and various other NLP models that run locally on your machine for evaluation. Whether your application is implemented via RAG or fine-tuning,...
    Downloads: 0 This Week
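    A minimal test sketch following DeepEval's documented Pytest-style API, assuming deepeval is installed and a judge model is configured; the input and output strings are illustrative:

        # Assert that a single LLM output passes an answer-relevancy check.
        from deepeval import assert_test
        from deepeval.metrics import AnswerRelevancyMetric
        from deepeval.test_case import LLMTestCase

        test_case = LLMTestCase(
            input="What does vLLM optimize?",
            actual_output="vLLM optimizes LLM inference throughput and memory use.",
        )
        assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])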
  • 14
    Petals

    Run 100B+ language models at home, BitTorrent-style

    Parallel inference reaches hundreds of tokens/sec. Beyond classic language model APIs — you can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states. You get the comforts of an API with the flexibility of PyTorch. You can also host BLOOMZ, a version of BLOOM fine-tuned to follow human instructions in the zero-shot regime — just replace bloom-petals with bloomz-petals.
    Downloads: 1 This Week
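    A minimal sketch adapted from the Petals README, assuming petals is installed and public swarm peers are reachable; as the description notes, bloomz-petals can be substituted for the instruction-following variant:

        # Load a distributed BLOOM whose layers are served by swarm peers,
        # then generate a few tokens through the swarm.
        from transformers import AutoTokenizer
        from petals import AutoDistributedModelForCausalLM

        name = "bigscience/bloom-petals"
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoDistributedModelForCausalLM.from_pretrained(name)

        inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
        outputs = model.generate(inputs, max_new_tokens=5)
        print(tokenizer.decode(outputs[0]))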
  • 15
    Medusa

    Framework for Accelerating LLM Generation with Multiple Decoding Heads

    Medusa is a framework aimed at accelerating the generation capabilities of Large Language Models (LLMs) by employing multiple decoding heads. This approach allows for parallel processing during text generation, significantly enhancing throughput and reducing response times. Medusa is designed to be simple to implement and integrates with existing LLM infrastructures, making it a practical solution for scaling LLM applications.
    Downloads: 0 This Week
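    Medusa's real implementation attaches extra decoding heads to the model and verifies their candidates with tree attention; the toy loop below only illustrates the underlying draft-then-verify pattern, with hypothetical stand-ins in place of real heads and a real model:

        # Toy draft-then-verify decoding (not Medusa's API): accept drafted
        # tokens while the base model agrees, stop at the first mismatch.
        def draft_heads(context):
            return ["parallel", "decoding", "rocks"]     # pretend 3-token guess

        def base_model(context):
            return {0: "parallel", 1: "decoding", 2: "is"}[len(context)]

        context = []
        for token in draft_heads(context):
            if base_model(context) == token:
                context.append(token)                    # verified for free
            else:
                context.append(base_model(context))      # fall back and stop
                break
        print(context)                                   # ['parallel', 'decoding', 'is']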
  • 16
    GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository houses EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and to accelerate research into large-scale training. For those looking for a TPU-centric codebase, we...
    Downloads: 2 This Week
  • 17
    TextBox

    A text generation library with pre-trained language models

    TextBox 2.0 is an up-to-date text generation library based on Python and PyTorch focusing on building a unified and standardized pipeline for applying pre-trained language models to text generation. From a task perspective, we consider 13 common text generation tasks such as translation, story generation, and style transfer, and their corresponding 83 widely-used datasets. From a model perspective, we incorporate 47 pre-trained language models/modules covering the categories of general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight models (modules). ...
    Downloads: 0 This Week
  • 18
    Fairseq

    Facebook AI Research Sequence-to-Sequence Toolkit written in Python

    Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. We provide reference implementations of various sequence modeling papers. Recent work by Microsoft and Google has shown that data parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data parallel workers. These ideas are encapsulated in the new FullyShardedDataParallel (FSDP) wrapper provided by fairscale. ...
    Downloads: 0 This Week
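    A rough sketch of the fairscale FullyShardedDataParallel wrapper the description mentions, assuming fairscale is installed; FSDP needs an initialized torch.distributed process group, so this only runs under a distributed launcher such as torchrun:

        # Wrap a model so its parameters and optimizer state are sharded
        # across data-parallel workers instead of replicated on each one.
        import torch
        import torch.distributed as dist
        from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

        dist.init_process_group(backend="gloo")   # normally done by the launcher

        model = torch.nn.Sequential(
            torch.nn.Linear(512, 2048),
            torch.nn.ReLU(),
            torch.nn.Linear(2048, 512),
        )
        sharded = FSDP(model)
        out = sharded(torch.randn(4, 512))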
  • 19
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

    An implementation of model & data parallel GPT3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the HuggingFace Transformer integration. Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. Note that while GPT-Neo can technically run a training step at 200B+ parameters, it is very...
    Downloads: 8 This Week
  • 20
    FARM

    Fast & easy transfer learning for NLP

    ...With FARM you can build fast proofs-of-concept for tasks like text classification, NER, or question answering, and transfer them easily into production. Easy fine-tuning of language models to your task and domain language. AMP optimizers (~35% faster) and parallel preprocessing (16 CPU cores => ~16x faster). Modular design of language models and prediction heads. Switch between heads or combine them for multitask learning. Full compatibility with HuggingFace Transformers' models and model hub. Smooth upgrading to newer language models. ...
    Downloads: 0 This Week
  • 21
    XLM (Cross-lingual Language Model)

    PyTorch original implementation of Cross-lingual Language Model

    XLM (Cross-lingual Language Model) is a family of multilingual pretraining methods that align representations across languages to enable strong zero-shot transfer. It popularized objectives like Masked Language Modeling (MLM) across many languages and Translation Language Modeling (TLM) that jointly trains on parallel sentence pairs to tighten cross-lingual alignment.
    Downloads: 0 This Week
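    A toy, framework-free illustration of how a TLM training example is assembled: the two halves of a parallel sentence pair are concatenated so masked tokens on either side can be predicted using context from both languages (the sentences, separator, and masking rate here are all illustrative):

        # Build one Translation Language Modeling (TLM) example by joining
        # a parallel pair and masking ~15% of tokens across both sides.
        import random

        random.seed(1)
        english = "the cat sits on the mat".split()
        french = "le chat est assis sur le tapis".split()

        pair = english + ["</s>"] + french
        masked = [tok if tok == "</s>" or random.random() > 0.15 else "[MASK]"
                  for tok in pair]
        print(" ".join(masked))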
  • 22
    OpenSeq2Seq

    Toolkit for efficient experimentation with Speech Recognition

    ...Its core goal is to give researchers a flexible, modular framework for building and training encoder–decoder architectures while fully leveraging distributed and mixed-precision training. The toolkit includes ready-made models for neural machine translation, automatic speech recognition, speech synthesis, language modeling, and additional NLP tasks such as sentiment analysis. It supports multi-GPU and multi-node data-parallel training, and integrates with Horovod to scale out across large GPU clusters. Mixed-precision support (float16) is optimized for NVIDIA Volta and Turing GPUs, allowing significant speedups and memory savings without sacrificing model quality. ...
    Downloads: 0 This Week
  • 23
    popt4jlib

    Parallel Optimization Library for Java

    popt4jlib is an open-source parallel optimization library for the Java programming language, supporting both shared-memory and distributed message-passing models. It implements a number of meta-heuristic algorithms for Non-Linear Programming, including Genetic Algorithms, Differential Evolution, Evolutionary Algorithms, Simulated Annealing, Particle Swarm Optimization, Firefly Algorithm, Monte-Carlo Search, Local Search algorithms, and Gradient-Descent-based algorithms, as well as some well-known network flow and other graph algorithms. ...
    Downloads: 0 This Week
  • 24
    Osman Arabic Text Readability

    Open Source tool for Arabic text readability

    We present OSMAN (Open Source Metric for Measuring Arabic Narratives), a novel open source Arabic readability metric and tool. The open source Java tool allows users to calculate readability for Arabic text (with and without diacritics). The tool provides methods to split the text into words and sentences; count syllables, Faseeh letters, and hard and complex words; and add diacritics (vocalise the text). This makes the tool useful for researchers and educators working with Arabic text....
    Downloads: 0 This Week
  • 25
    HLearn

    Homomorphic machine learning

    HLearn is a Haskell-based machine learning library focused on composability, algebraic structure, and performance. It provides a functional approach to building machine learning algorithms by leveraging algebraic properties such as monoids and groups. This allows for parallel, incremental, and distributed computation in a mathematically consistent way. HLearn aims to provide implementations of common algorithms like k-means, naive Bayes, and others while maintaining the expressiveness and safety of the Haskell language.
    Downloads: 0 This Week
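    HLearn itself is written in Haskell, but the algebraic trick it builds on is easy to show in illustrative Python: represent a running mean as a (count, sum) summary whose merge operation is associative, so partial summaries computed in parallel over chunks combine into the exact global answer:

        # A mean as a monoid (not HLearn's API): summaries merge
        # associatively, enabling parallel and incremental computation.
        def summarize(xs):
            return (len(xs), sum(xs))

        def merge(a, b):                  # associative, identity (0, 0)
            return (a[0] + b[0], a[1] + b[1])

        count, total = (0, 0)
        for chunk in [[1.0, 2.0], [3.0, 4.0, 5.0]]:    # processed independently
            count, total = merge((count, total), summarize(chunk))
        print(total / count)                           # 3.0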