Showing 26 open source projects for "stack"

  • 1
    PyTorch

    Open source machine learning framework

    ...This project allows for fast, flexible experimentation and efficient production. PyTorch consists of torch (Tensor library), torch.autograd (tape-based automatic differentiation library), torch.jit (a compilation stack [TorchScript]), torch.nn (neural networks library), torch.multiprocessing (Python multiprocessing), and torch.utils (DataLoader and other utility functions). PyTorch can be used as a replacement for NumPy, or as a deep learning research platform that provides optimum flexibility and speed.
    Downloads: 93 This Week
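
    A minimal Python sketch of the components listed above (torch tensors, torch.nn, and torch.autograd); the toy data and layer sizes are arbitrary and shown only to illustrate the API:

        import torch
        import torch.nn as nn

        # Small regression model built from torch.nn modules.
        model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
        x = torch.randn(16, 4)   # torch: the Tensor library
        y = torch.randn(16, 1)

        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()          # torch.autograd: tape-based reverse-mode differentiation
        print(model[0].weight.grad.shape)   # gradients populated by autograd
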
  • 2
    dm_control

    DeepMind's software stack for physics-based simulation

    DeepMind's software stack for physics-based simulation and Reinforcement Learning environments, using MuJoCo physics. The MuJoCo Python bindings support three different OpenGL rendering backends: EGL (headless, hardware-accelerated), GLFW (windowed, hardware-accelerated), and OSMesa (purely software-based).
    Downloads: 0 This Week
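
    A minimal usage sketch in the spirit of the project's examples; the cartpole "swingup" task is one of the bundled Control Suite tasks, and the uniform-random policy is only for illustration:

        import numpy as np
        from dm_control import suite

        # Load a MuJoCo-backed task from the Control Suite.
        env = suite.load(domain_name="cartpole", task_name="swingup")
        spec = env.action_spec()

        time_step = env.reset()
        while not time_step.last():
            # Sample a random action within the action-spec bounds.
            action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
            time_step = env.step(action)
        print(time_step.reward)
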
  • 3
    ONNX Runtime

    ONNX Runtime: cross-platform, high performance ML inferencing

    ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators...
    Downloads: 46 This Week
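
    A minimal inference sketch; "model.onnx", the input shape, and the CPU execution provider are placeholders to adapt to your exported model and hardware:

        import numpy as np
        import onnxruntime as ort

        # Create a session; the providers list selects the hardware accelerator.
        session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

        input_name = session.get_inputs()[0].name
        x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # shape depends on the model
        outputs = session.run(None, {input_name: x})             # None = return all outputs
        print(outputs[0].shape)
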
  • 4
    ZenML

    Build portable, production-ready MLOps pipelines

    A simple yet powerful open-source framework that scales your MLOps stack with your needs. Set up ZenML in a matter of minutes, and start with all the tools you already use. Gradually scale up your MLOps stack by switching out components whenever your training or deployment requirements change. Keep up with the latest changes in the MLOps world and easily integrate any new developments. Define simple and clear ML workflows without wasting time on boilerplate tooling or infrastructure code. ...
    Downloads: 0 This Week
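
    A minimal pipeline sketch assuming a recent ZenML release that exposes the step and pipeline decorators from the top-level package (older versions import them from zenml.steps and zenml.pipelines); the step bodies are stand-ins:

        from zenml import pipeline, step

        @step
        def load_data() -> list:
            # Stand-in for fetching data from the tools you already use.
            return [1.0, 2.0, 3.0]

        @step
        def train(data: list) -> float:
            # Stand-in for a real training step.
            return sum(data) / len(data)

        @pipeline
        def training_pipeline():
            train(load_data())

        if __name__ == "__main__":
            training_pipeline()   # runs on the active ZenML stack
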
  • 5
    spaCy models

    Models for the spaCy Natural Language Processing (NLP) library

    ...If your application needs to process entire web dumps, spaCy is the library you want to be using. Since its release in 2015, spaCy has become an industry standard with a huge ecosystem. Choose from a variety of plugins, integrate with your machine learning stack and build custom components and workflows.
    Downloads: 6 This Week
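
    A minimal sketch of loading one of these packaged models (en_core_web_sm, the small English pipeline) and reading tokens and entities from a document:

        import spacy

        # Requires the model package, e.g.: python -m spacy download en_core_web_sm
        nlp = spacy.load("en_core_web_sm")
        doc = nlp("Apple is looking at buying U.K. startup for $1 billion")

        for token in doc[:5]:
            print(token.text, token.pos_, token.dep_)
        for ent in doc.ents:
            print(ent.text, ent.label_)
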
  • 6
    Flux.jl

    Relax! Flux is the ML library that doesn't make you tensor

    Flux is an elegant approach to machine learning. It's a 100% pure Julia stack and provides lightweight abstractions on top of Julia's native GPU and AD support. Flux makes the easy things easy while remaining fully hackable. Flux provides a single, intuitive way to define models, just like mathematical notation. Julia transparently compiles your code, optimizing and fusing kernels for the GPU, for the best performance.
    Downloads: 2 This Week
  • 7
    Daft

    Distributed DataFrame for Python designed for the cloud

    Daft is a framework for ETL, analytics and ML/AI at scale. Its familiar Python DataFrame API is built to outperform Spark in performance and ease of use. Daft plugs directly into your ML/AI stack through efficient zero-copy integrations with essential Python libraries such as PyTorch and Ray. It also allows requesting GPUs as a resource for running models. Daft runs locally with a lightweight multithreaded backend. When your local machine is no longer sufficient, it scales seamlessly to run out-of-core on a distributed cluster. ...
    Downloads: 1 This Week
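
    A minimal local DataFrame sketch; the column names are made up, and in practice you would read Parquet/CSV from cloud storage rather than an in-memory dict:

        import daft

        df = daft.from_pydict({"name": ["a", "b", "c"], "score": [0.1, 0.9, 0.5]})

        # Expressions are lazy; nothing executes until show()/collect().
        df = df.where(daft.col("score") > 0.3).with_column("scaled", daft.col("score") * 10)
        df.show()
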
  • 8
    tvm

    Open deep learning compiler stack for CPUs, GPUs, etc.

    Apache TVM is an open source machine learning compiler framework for CPUs, GPUs, and machine learning accelerators. It aims to enable machine learning engineers to optimize and run computations efficiently on any hardware backend. The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging...
    Downloads: 0 This Week
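
    A minimal sketch in the style of TVM's classic tensor-expression tutorials (the te API; newer releases also provide Relax/TVMScript front ends, and the scheduling API has evolved across versions):

        import numpy as np
        import tvm
        from tvm import te

        # Declare the computation: element-wise vector addition.
        n = te.var("n")
        A = te.placeholder((n,), name="A")
        B = te.placeholder((n,), name="B")
        C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

        # Lower and compile for the local CPU backend.
        s = te.create_schedule(C.op)
        fadd = tvm.build(s, [A, B, C], target="llvm")

        dev = tvm.cpu(0)
        a = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
        b = tvm.nd.array(np.random.rand(1024).astype("float32"), dev)
        c = tvm.nd.array(np.zeros(1024, dtype="float32"), dev)
        fadd(a, b, c)   # c now holds a + b
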
  • 9
    Gen.jl

    A general-purpose probabilistic programming system

    An open-source stack for generative modeling and probabilistic inference. Gen’s inference library gives users building blocks for writing efficient probabilistic inference algorithms that are tailored to their models, while automating the tricky math and the low-level implementation details. Gen helps users write hybrid algorithms that combine neural networks, variational inference, sequential Monte Carlo samplers, and Markov chain Monte Carlo.
    Downloads: 0 This Week
  • 10
    AutoGluon

    AutoGluon: AutoML for Image, Text, and Tabular Data

    AutoGluon enables easy-to-use and easy-to-extend AutoML with a focus on automated stack ensembling, deep learning, and real-world applications spanning image, text, and tabular data. Intended for both ML beginners and experts, AutoGluon enables you to quickly prototype deep learning and classical ML solutions for your raw data with a few lines of code. Automatically utilize state-of-the-art techniques (where appropriate) without expert knowledge.
    Downloads: 0 This Week
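
    A minimal tabular sketch; the CSV paths and the "class" label column are placeholders for your own dataset:

        from autogluon.tabular import TabularDataset, TabularPredictor

        train_data = TabularDataset("train.csv")    # placeholder path
        predictor = TabularPredictor(label="class").fit(train_data)

        test_data = TabularDataset("test.csv")      # placeholder path
        predictions = predictor.predict(test_data)
        print(predictor.leaderboard(test_data))     # per-model scores from the auto-ensemble
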
  • 11
    PySyft

    Data science on data without acquiring a copy

    Most software libraries let you compute over the information you own and see inside of machines you control. However, this means that you cannot compute on information without first obtaining (at least partial) ownership of that information. It also means that you cannot compute using machines without first obtaining control over those machines. This is very limiting to human collaboration and systematically drives the centralization of data, because you cannot work with a bunch of data...
    Downloads: 0 This Week
  • 12
    DGL

    Python package built to ease deep learning on graph

    Build your models with PyTorch, TensorFlow or Apache MXNet. Fast and memory-efficient message passing primitives for training Graph Neural Networks. Scale to giant graphs via multi-GPU acceleration and distributed training infrastructure. DGL empowers a variety of domain-specific projects including DGL-KE for learning large-scale knowledge graph embeddings, DGL-LifeSci for bioinformatics and cheminformatics, and many others. We are keen to bring graphs closer to deep learning researchers...
    Downloads: 0 This Week
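
    A minimal message-passing sketch using the PyTorch backend; the three-node graph and feature sizes are arbitrary:

        import torch
        import dgl
        from dgl.nn import GraphConv

        # Toy graph given as (source, destination) edge lists.
        g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
        g = dgl.add_self_loop(g)

        feats = torch.randn(3, 8)       # one 8-dim feature vector per node
        conv = GraphConv(8, 4)          # a single graph convolution layer
        out = conv(g, feats)            # message passing -> (3, 4) node embeddings
        print(out.shape)
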
  • 13
    MLJAR Studio

    Python package for AutoML on Tabular Data with Feature Engineering

    We are working on a new approach to visual programming: a desktop application called MLJAR Studio. It is a notebook-based development environment with interactive code recipes and a managed Python environment, all running locally on your machine. We welcome your feedback. mljar-supervised is an Automated Machine Learning Python package that works with tabular data and is designed to save a data scientist's time. It abstracts the common way to preprocess the data,...
    Downloads: 0 This Week
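
    A minimal mljar-supervised sketch on a toy scikit-learn dataset (the dataset choice and the 60-second budget are just for illustration):

        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from supervised.automl import AutoML

        X, y = load_breast_cancer(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

        # AutoML handles preprocessing, model selection, tuning, and ensembling.
        automl = AutoML(total_time_limit=60)
        automl.fit(X_train, y_train)
        print(automl.predict(X_test)[:5])
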
  • 14
    SageMaker Inference Toolkit

    Serve machine learning models within a Docker container

    ...A container provides an effectively isolated environment, ensuring a consistent runtime regardless of where the container is deployed. Containerizing your model and code enables fast and reliable deployment of your model. The SageMaker Inference Toolkit implements a model serving stack and can be easily added to any Docker container, making it deployable to SageMaker. This library's serving stack is built on Multi Model Server, and it can serve your own models or those you trained on SageMaker using machine learning frameworks with native SageMaker support.
    Downloads: 0 This Week
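
    A hedged sketch of the container entrypoint pattern described in the toolkit's documentation: the script starts Multi Model Server with a handler service; "my_package.handler_service" is a placeholder for your own handler module:

        # serve.py -- entrypoint inside the Docker container
        from sagemaker_inference import model_server

        # Placeholder: a module implementing the toolkit's handler-service interface
        # (e.g. built on its default inference handler classes).
        HANDLER_SERVICE = "my_package.handler_service"

        if __name__ == "__main__":
            model_server.start_model_server(handler_service=HANDLER_SERVICE)
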
  • 15
    Gorgonia

    Gorgonia is a library that helps facilitate machine learning in Go

    ...It should bring the appeal of Go (simple compilation and deployment process) to the ML world. It's a long way from there currently; however, the baby steps are already there. The main reason to use Gorgonia is developer comfort. If you're using a Go stack extensively, you now have the ability to create production-ready machine learning systems in an environment that you are already familiar and comfortable with.
    Downloads: 0 This Week
  • 16
    Lightning-Hydra-Template

    PyTorch Lightning + Hydra. A very user-friendly template

    Convenient all-in-one technology stack for deep learning prototyping - allows you to rapidly iterate over new models, datasets and tasks on different hardware accelerators like CPUs, multi-GPUs or TPUs. A collection of best practices for efficient workflow and reproducibility. Thoroughly commented - you can use this repo as a reference and educational resource. Not fitted for data engineering - the template configuration setup is not designed for building data processing pipelines that depend on each other. ...
    Downloads: 10 This Week
  • 17
    UnionML

    Build and deploy machine learning microservices

    ...Using industry-standard machine learning methods, implement endpoints for fetching data, training models, serving predictions (and much more) to write a complete ML stack in one place. Data science, ML engineering, and MLOps practitioners can all gather around UnionML apps as a way of defining a single source of truth about your ML system’s behavior. This helps you maintain consistent code across your ML stack, from training to prediction logic.
    Downloads: 0 This Week
  • 18
    handson-ml

    Teaching you the fundamentals of Machine Learning in Python

    ...The examples underscore fundamentals like bias-variance trade-offs, regularization, and proper validation, grounding learners before they move to deep nets. Even though the deep learning stack evolved, the classical ML sections remain highly relevant for production data problems. The code is crafted to be clear rather than clever, prioritizing readability for newcomers. As a historical snapshot and a still-useful primer, it pairs well with the second edition for understanding how the ecosystem matured.
    Downloads: 0 This Week
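
    The notebooks lean heavily on scikit-learn; a generic sketch (not taken from the repository) of the regularization-plus-validation workflow the description refers to:

        from sklearn.datasets import fetch_california_housing
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        X, y = fetch_california_housing(return_X_y=True)

        # Ridge = linear regression with L2 regularization; cross-validation gives
        # an honest performance estimate instead of a single train/test split.
        model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(scores.mean(), scores.std())
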
  • 19
    Keras TCN

    Keras Temporal Convolutional Network

    ...The usual way is to import the TCN layer and use it inside a Keras model. The receptive field is defined as the maximum number of steps back in time from the current sample at time T that a filter from (block, layer, stack, TCN) can hit (effective history), plus 1. The receptive field of the TCN can be calculated from these settings. Once keras-tcn is installed as a package, you can get a glimpse of what is possible with TCNs. Some example tasks are available in the repository for this purpose.
    Downloads: 0 This Week
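
    A minimal sketch of the usual pattern (import the TCN layer and use it inside a Keras model); the filter count, kernel size, and dilations are illustrative hyperparameters:

        from tensorflow.keras.layers import Dense, Input
        from tensorflow.keras.models import Model
        from tcn import TCN   # pip install keras-tcn

        # Sequence-to-one regression: (timesteps, features) -> scalar.
        inputs = Input(shape=(None, 1))
        x = TCN(nb_filters=64, kernel_size=3, dilations=(1, 2, 4, 8))(inputs)
        outputs = Dense(1)(x)

        model = Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")
        # The kernel size, dilations, and number of stacks together set the
        # effective history (receptive field) described above.
        model.summary()
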
  • 20
    SINGA

    A distributed deep learning platform

    Apache SINGA is an Apache Top Level Project, focusing on distributed training of deep learning and machine learning models. Various example deep learning models are provided in the SINGA repo on GitHub and on Google Colab. SINGA supports data parallel training across multiple GPUs (on a single node or across different nodes). SINGA supports various popular optimizers, including stochastic gradient descent with momentum, Adam, RMSProp, and AdaGrad. SINGA records the computation graph and...
    Downloads: 0 This Week
  • 21
    nGraph

    nGraph has moved to OpenVINO

    ...We strongly believe in providing freedom, performance, and ease of use to AI developers. Our documentation has extensive information about how to use the nGraph Compiler stack to create an nGraph computational graph, integrate custom frameworks, and interact with supported backends.
    Downloads: 3 This Week
  • 22
    Learn_Data_Science_in_3_Months

    This is the Curriculum for "Learn Data Science in 3 Months"

    This project lays out a 12-week plan to go from basics to a portfolio-ready understanding of data science. It breaks the journey into clear stages: Python fundamentals, data wrangling, visualization, statistics, machine learning, and end-to-end projects. The schedule mixes learning and doing, encouraging you to build small deliverables each week—like notebooks, dashboards, and model demos—to reinforce skills. It also includes suggestions for datasets and problem domains so you aren’t stuck...
    Downloads: 0 This Week
  • 23
    Seldon Server

    Machine learning platform and recommendation engine on Kubernetes

    ...Seldon Core is a progression of the goals of the Seldon-Server project, but with a more restricted focus on solving the final step of a machine learning project: serving models in production. Seldon Server is a machine learning platform that helps your data science team deploy models into production. It provides an open-source data science stack that runs within a Kubernetes cluster. You can use Seldon to deploy machine learning and deep learning models into production on-premise or in the cloud (e.g. GCP, AWS, Azure).
    Downloads: 0 This Week
  • 24
    NNVM

    Open deep learning compiler stack for CPUs, GPUs

    The vision of the Apache NNVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. Compilation of deep learning models into minimum deployable modules. Infrastructure to automatically generate and optimize models on more backends with better performance...
    Downloads: 0 This Week
  • 25
    Apache PredictionIO

    Machine learning server for developers and ML engineers

    Apache PredictionIO® is an open source Machine Learning Server built on top of a state-of-the-art open source stack for developers and data scientists to create predictive engines for any machine learning task. Quickly build and deploy an engine as a web service on production with customizable templates; respond to dynamic queries in real time once deployed as a web service; evaluate and tune multiple engine variants systematically; unify data from multiple platforms in batch or in real time for comprehensive predictive analytics; speed up machine learning modeling with systematic processes and pre-built evaluation measures; support machine learning and data processing libraries such as Spark MLlib and OpenNLP; implement your own machine learning models and seamlessly incorporate them into your engine; simplify data infrastructure management.
    Downloads: 0 This Week