Showing 38 open source projects for "parallel"

  • 1

    LightGBM

    Gradient boosting framework based on decision tree algorithms

    LightGBM, or Light Gradient Boosting Machine, is a high-performance, open source gradient boosting framework based on decision tree algorithms. Compared to other boosting frameworks, LightGBM offers advantages in speed, efficiency, and accuracy. Parallel experiments have shown that, in specific settings, LightGBM can attain linear speed-up when training across multiple machines, while consuming less memory. LightGBM supports parallel and GPU learning and can handle large-scale data. It has become widely used for ranking, classification, and many other machine learning tasks.
    Downloads: 4 This Week
    Last Update:
    See Project
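A minimal sketch of the LightGBM training API mentioned above. The synthetic data, parameter values, and thread count are illustrative assumptions, not part of the project description.

```python
# LightGBM sketch: train a binary classifier on synthetic data using
# multiple threads. All parameter values here are illustrative only.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

train_set = lgb.Dataset(X, label=y)
params = {
    "objective": "binary",
    "num_leaves": 31,
    "learning_rate": 0.1,
    "num_threads": 4,   # multi-core training
}
booster = lgb.train(params, train_set, num_boost_round=50)
print(booster.predict(X[:5]))  # predicted probabilities for a few rows
```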
  • 2

    openTSNE

    Extensible, parallel implementations of t-SNE

    openTSNE is a modular Python implementation of t-Distributed Stochastic Neighbor Embedding (t-SNE) [1], a popular dimensionality-reduction algorithm for visualizing high-dimensional data sets. openTSNE incorporates the latest improvements to the t-SNE algorithm, including the ability to add new data points to existing embeddings [2], massive speed improvements [3] [4] [5] that enable t-SNE to scale to millions of data points, and various tricks to improve the global alignment of the resulting...
    Downloads: 5 This Week
    Last Update:
    See Project
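A minimal sketch of the openTSNE workflow described above: fit an embedding in parallel, then add new points to it. The data and parameter values are illustrative assumptions.

```python
# openTSNE sketch: parallel t-SNE fit, then embed new points into the
# existing embedding. Data and parameters are illustrative only.
import numpy as np
from openTSNE import TSNE

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 50))
X_new = rng.normal(size=(20, 50))

tsne = TSNE(
    n_components=2,
    perplexity=30,
    n_jobs=4,          # parallel neighbor search and optimization
    random_state=42,
)
embedding = tsne.fit(X_train)            # returns a TSNEEmbedding
new_points = embedding.transform(X_new)  # add new data to the embedding
print(new_points.shape)  # (20, 2)
```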
  • 3

    higgsfield

    Fault-tolerant, highly scalable GPU orchestration

    Higgsfield is an open source, fault-tolerant, highly scalable GPU orchestration and machine learning framework designed for training models with billions to trillions of parameters, such as large language models (LLMs).
    Downloads: 7 This Week
    Last Update:
    See Project
  • 4

    The Julia Programming Language

    High-level, high-performance dynamic language for technical computing

    Julia is a fast, open source, high-performance dynamic language for technical computing. It can be used for data visualization and plotting, deep learning, machine learning, scientific computing, parallel computing, and much more. With its high-level syntax, Julia is easy to use for programmers of every level and background. Julia has more than 2,800 community-registered packages, including various mathematical libraries, data manipulation tools, and packages for general-purpose computing. Libraries from Python, R, C/Fortran, C++, and Java can also be used.
    Downloads: 16 This Week
    Last Update:
    See Project
  • 5

    PaddlePaddle

    PArallel Distributed Deep LEarning: Machine Learning Framework

    PaddlePaddle is an open source deep learning industrial platform with advanced technologies and a rich set of features that make innovation and application of deep learning easier. It is the only independent R&D deep learning platform in China, and has been widely adopted in various sectors including manufacturing, agriculture and enterprise service. PaddlePaddle covers core deep learning frameworks, basic model libraries, end-to-end development kits and more, with support for both...
    Downloads: 3 This Week
    Last Update:
    See Project
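A minimal sketch of PaddlePaddle's imperative (dynamic-graph) API, assuming Paddle 2.x; the toy regression data and hyperparameters are illustrative assumptions.

```python
# PaddlePaddle 2.x sketch: a tiny linear-regression training loop in
# dynamic-graph mode. Data and hyperparameters are illustrative only.
import paddle

x = paddle.randn([128, 10])
y = x.sum(axis=1, keepdim=True)   # synthetic regression target

model = paddle.nn.Linear(10, 1)
loss_fn = paddle.nn.MSELoss()
opt = paddle.optimizer.Adam(learning_rate=0.01, parameters=model.parameters())

for step in range(100):
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    opt.clear_grad()

print(float(loss))
```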
  • 6

    ROOT

    Analyzing, storing and visualizing big data, scientifically

    ...ROOT comes with histogramming capabilities in an arbitrary number of dimensions, curve fitting, statistical modeling, and minimization, to allow the easy setup of a data analysis system that can query and process the data interactively or in batch mode, as well as a general parallel processing framework, RDataFrame, that can considerably speed up an analysis.
    Downloads: 6 This Week
    Last Update:
    See Project
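A minimal PyROOT sketch of the RDataFrame parallel-analysis pattern mentioned above. The in-memory toy dataset, column names, and cut are illustrative assumptions.

```python
# PyROOT / RDataFrame sketch: enable implicit multithreading, build a toy
# dataset in memory, then filter, define a column, and fill a histogram.
import ROOT

ROOT.ROOT.EnableImplicitMT()  # let RDataFrame use the available cores

# Toy dataset: 1,000,000 rows with a random "pt" column.
df = (ROOT.RDataFrame(1_000_000)
          .Define("pt", "gRandom->Uniform(0.0, 50.0)"))

h = (df.Filter("pt > 20")            # C++ expression evaluated per row
       .Define("pt2", "pt * pt")
       .Histo1D("pt2"))

print("entries passing the cut:", h.GetEntries())
```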
  • 7

    TensorRT

    C++ library for high performance inference on NVIDIA GPUs

    ...With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded, or automotive product platforms. TensorRT is built on CUDA®, NVIDIA’s parallel programming model, and enables you to optimize inference leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, and graphics. With new NVIDIA Ampere Architecture GPUs, TensorRT also leverages sparse tensor cores providing an additional performance boost.
    Downloads: 17 This Week
    Last Update:
    See Project
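A sketch of the typical TensorRT build flow implied above (parse an ONNX model, enable reduced precision, serialize an engine), assuming the TensorRT 8.x Python API; the ONNX and engine file names are illustrative placeholders.

```python
# TensorRT sketch (assumes TensorRT 8.x Python bindings): build a serialized
# engine from an ONNX model with FP16 enabled. "model.onnx" is a placeholder.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)   # lower-precision optimization

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```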
  • 8

    OneFlow

    OneFlow is a deep learning framework designed to be user-friendly

    OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. An extension lets OneFlow target third-party compilers such as XLA, TensorRT, and OpenVINO. The CUDA runtime is statically linked into OneFlow, so OneFlow works on the minimum supported driver and any newer driver. Distributed performance (efficiency) is the core technical challenge of a deep learning framework. OneFlow focuses on performance improvement and heterogeneous...
    Downloads: 3 This Week
    Last Update:
    See Project
  • 9

    Llama Recipes

    Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT method

    The 'llama-recipes' repository is a companion to the Meta Llama models. We support the latest version, Llama 3.1, in this repository. The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples here showcase how to run...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10

    TorchRec

    Pytorch domain library for recommendation systems

    ...Parallelism primitives that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism/model-parallelism. The TorchRec sharder can shard embedding tables with different sharding strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding. The TorchRec planner can automatically generate optimized sharding plans for models. Pipelined training overlaps dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 11

    Django friendly finite state machine

    Django friendly finite state machine support

    Django-fsm adds simple declarative state management for Django models. If you need parallel task execution, or view and background-task code reuse across different flows, check out my newer project, django-viewflow. Instead of adding a state field to a Django model and managing its values by hand, you use FSMField and mark model methods with the transition decorator. These methods may contain side effects of the state change.
    Downloads: 1 This Week
    Last Update:
    See Project
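A minimal django-fsm sketch of the FSMField/transition pattern described above; the Order model and its states are illustrative assumptions, and a configured Django project is assumed.

```python
# django-fsm sketch: declarative state management on a Django model.
# The Order model and its states are illustrative only.
from django.db import models
from django_fsm import FSMField, transition


class Order(models.Model):
    state = FSMField(default="new")

    @transition(field=state, source="new", target="paid")
    def pay(self):
        # Side effects of the state change (e.g. sending a receipt) go here;
        # django-fsm updates the field after the method returns.
        pass

    @transition(field=state, source="paid", target="shipped")
    def ship(self):
        pass

# Usage: order.pay(); order.save() -- calling pay() from any other state
# raises django_fsm.TransitionNotAllowed.
```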
  • 12

    mlr3

    mlr3: Machine Learning in R - next generation

    mlr3 is a modern, object-oriented R framework for machine learning. It provides core abstractions (tasks, learners, resamplings, measures, pipelines) implemented using R6 classes, enabling extensible, composable machine learning workflows. It focuses on clean design, scalability (large datasets), and integration into the wider R ecosystem via extension packages. Users can do classification, regression, survival analysis, clustering, hyperparameter tuning, benchmarking etc., often via...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13

    Colossal-AI

    Making large AI models cheaper, faster and more accessible

    ...However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture. It remains a challenge for AI researchers to implement complex distributed training solutions for their models. Colossal-AI provides a collection of parallel components for you. We aim to let you write distributed deep learning models just as you would write a model on your laptop.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14

    Ray

    A unified framework for scalable computing

    Modern workloads like deep learning and hyperparameter tuning are compute-intensive and require distributed or parallel execution. Ray makes it effortless to parallelize single-machine code — go from a single CPU to multi-core, multi-GPU, or multi-node with minimal code changes. Accelerate your PyTorch and TensorFlow workloads with a more resource-efficient and flexible distributed execution framework powered by Ray. Accelerate your hyperparameter search workloads with Ray Tune. ...
    Downloads: 0 This Week
    Last Update:
    See Project
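A minimal Ray sketch of the "parallelize single-machine code" idea described above; the toy function and values are illustrative.

```python
# Ray sketch: turn an ordinary function into a remote task and run many
# invocations in parallel across the available cores.
import ray

ray.init()  # starts a local Ray runtime (or connects to a cluster if given an address)


@ray.remote
def square(x):
    return x * x


futures = [square.remote(i) for i in range(8)]  # scheduled in parallel
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```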
  • 15

    ADAMS

    ADAMS is a workflow engine for building complex knowledge workflows.

    ADAMS is a flexible workflow engine aimed at quickly building and maintaining data-driven, reactive workflows that are easily integrated into business processes. Instead of placing operators on a canvas and manually connecting them, a tree structure and flow-control operators determine how data is processed (sequentially or in parallel). This allows rapid development and easy maintenance of large workflows with hundreds or thousands of operators. Operators include machine learning (WEKA, MOA, MEKA) and image processing (ImageJ, JAI, BoofCV, LIRE and Gnuplot). R is available using Rserve. A WEKA web service allows other frameworks to use WEKA models. Fast prototyping with Groovy and Jython. ...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 16

    Implicit

    Fast Python collaborative filtering for implicit feedback datasets

    This project provides fast Python implementations of several popular recommendation algorithms for implicit feedback datasets. All models have multi-threaded training routines, using Cython and OpenMP to fit the models in parallel across all available CPU cores. In addition, the ALS and BPR models both have custom CUDA kernels, enabling fitting on compatible GPUs. The library also supports approximate nearest neighbour libraries such as Annoy, NMSLIB and Faiss for speeding up recommendations.
    Downloads: 0 This Week
    Last Update:
    See Project
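A minimal sketch of the implicit ALS workflow described above, assuming the 0.5+ API in which fit and recommend take a user-by-item CSR matrix; the toy interaction matrix and parameters are illustrative.

```python
# implicit sketch (assumes the >=0.5 API that takes a user-by-item CSR
# matrix): fit ALS with multi-threaded training, then recommend items.
import scipy.sparse as sp
from implicit.als import AlternatingLeastSquares

# Toy implicit-feedback matrix: 50 users x 30 items with random confidences.
user_items = sp.random(50, 30, density=0.1, random_state=0, format="csr")

model = AlternatingLeastSquares(factors=16, iterations=10, num_threads=4)
model.fit(user_items)

user_id = 0
item_ids, scores = model.recommend(user_id, user_items[user_id], N=5)
print(item_ids, scores)
```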
  • 17

    Elephas

    Distributed Deep learning with Keras & Spark

    ...Elephas intends to keep the simplicity and high usability of Keras, allowing fast prototyping of distributed models that can be run on massive data sets. Elephas implements a class of data-parallel algorithms on top of Keras, using Spark's RDDs and data frames. Keras models are initialized on the driver, then serialized and shipped to workers, along with data and broadcast model parameters. Spark workers deserialize the model, train on their chunk of data, and send their gradients back to the driver. The "master" model on the driver is updated by an optimizer, which applies gradients either synchronously or asynchronously. ...
    Downloads: 0 This Week
    Last Update:
    See Project
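A sketch of the Elephas pattern described above (wrap a Keras model in a SparkModel and train on an RDD), assuming Elephas' SparkModel API and a working PySpark and Keras setup; the toy model, data, and settings are illustrative.

```python
# Elephas sketch: data-parallel Keras training on Spark. Assumes a working
# PySpark + Keras installation; model, data, and settings are illustrative.
import numpy as np
from pyspark.sql import SparkSession
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from elephas.spark_model import SparkModel
from elephas.utils.rdd_utils import to_simple_rdd

spark = SparkSession.builder.appName("elephas-sketch").getOrCreate()
sc = spark.sparkContext

x = np.random.rand(1000, 20)
y = (x.sum(axis=1) > 10).astype("float32")

model = Sequential([Dense(16, activation="relu", input_shape=(20,)),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

rdd = to_simple_rdd(sc, x, y)  # distribute (x, y) pairs across workers
spark_model = SparkModel(model, frequency="epoch", mode="asynchronous")
spark_model.fit(rdd, epochs=5, batch_size=32, verbose=0, validation_split=0.1)
```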
  • 18

    igel

    Machine learning tool that allows you to train and test models

    A delightful machine learning tool that allows you to train/fit, test, and use models without writing code. The goal of the project is to provide machine learning for everyone, both technical and non-technical users. I sometimes needed a tool that I could use to quickly create a machine learning prototype, whether to build a proof of concept, create a quick draft model to prove a point, or use AutoML. I often find myself stuck writing boilerplate code and thinking too much about...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 19

    FARM

    Fast & easy transfer learning for NLP

    ...With FARM you can build fast proofs-of-concept for tasks like text classification, NER or question answering and transfer them easily into production. Easy fine-tuning of language models to your task and domain language. AMP optimizers (~35% faster) and parallel preprocessing (16 CPU cores => ~16x faster). Modular design of language models and prediction heads. Switch between heads or combine them for multitask learning. Full Compatibility with HuggingFace Transformers' models and model hub. Smooth upgrading to newer language models. Integration of custom datasets via Processor class. Powerful experiment tracking & execution.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20

    YOLO ROS

    YOLO ROS: Real-Time Object Detection for ROS

    ...Darknet on the CPU is fast (approximately 1.5 seconds on an Intel Core i7-6700HQ CPU @ 2.60GHz × 8), but it is roughly 500 times faster on a GPU! You will need an Nvidia GPU and a CUDA installation. The CMakeLists.txt file automatically detects whether you have CUDA installed. CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21

    SINGA

    A distributed deep learning platform

    Apache SINGA is an Apache Top Level Project focusing on distributed training of deep learning and machine learning models. Various example deep learning models are provided in the SINGA repo on GitHub and on Google Colab. SINGA supports data-parallel training across multiple GPUs (on a single node or across different nodes). SINGA supports popular optimizers including stochastic gradient descent with momentum, Adam, RMSProp, and AdaGrad. SINGA records the computation graph and applies backward propagation automatically after forward propagation. Memory optimization is implemented in the Device class. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22

    CUDA-JMI

    Tool for feature selection using the JMI metric and multiple GPUs

    CUDA-JMI is a parallel tool to accelerate the feature selection process using Joint Mutual Information as the metric. The tool takes as input a file in ARFF, CSV, or LIBSVM format that contains the values of m individuals and n features, and returns a file with those features that provide the most non-redundant information.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23

    Tensorpack

    A Neural Net Training Interface on TensorFlow, with focus on speed

    ...Uses TensorFlow efficiently with no extra overhead. On common CNNs, it trains 1.2~5x faster than the equivalent Keras code, and your training can probably get faster if written with Tensorpack. A scalable data-parallel multi-GPU / distributed training strategy is available off the shelf. Squeeze the best data-loading performance out of Python with tensorpack.dataflow. Symbolic programming (e.g. tf.data) does not offer the data-processing flexibility needed in research; Tensorpack squeezes the most performance out of pure Python with various auto-parallelization strategies. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24

    Intel neon

    Intel® Nervana™ reference deep learning framework

    ...The Intel Math Kernel Library takes advantage of the parallelization and vectorization capabilities of Intel Xeon and Xeon Phi systems. When hyperthreading is enabled on the system, we recommend the following KMP_AFFINITY setting to make sure parallel threads are mapped 1:1 to the available physical cores.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25

    Grenade

    Deep Learning in Haskell

    Grenade is a composable, dependently typed, practical, and fast recurrent neural network library for concise and precise specifications of complex networks in Haskell. Because the types are so rich, there's no specific term level code required to construct this network; although it is of course possible and easy to construct and deconstruct the networks and layers explicitly oneself. Networks in Grenade can be thought of as a heterogeneous list of layers, where their type includes not only...
    Downloads: 0 This Week
    Last Update:
    See Project