Showing 103 open source projects for "standard ml"

  • 1
    Causal ML

    Uplift modeling and causal inference with machine learning algorithms

    Causal ML is a Python package that provides a suite of uplift modeling and causal inference methods using machine learning algorithms based on recent research [1]. It provides a standard interface that allows users to estimate the Conditional Average Treatment Effect (CATE) or Individual Treatment Effect (ITE) from experimental or observational data. Essentially, it estimates the causal impact of intervention T on outcome Y for users with observed features X, without strong assumptions...
    Downloads: 0 This Week
    Last Update:
    See Project
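
    A minimal, hedged sketch of the standard interface described above, using Causal ML's S-learner meta-algorithm to estimate an Average Treatment Effect on synthetic data (the data, the true effect size of 0.5, and the variable names are illustrative assumptions, not taken from the project page):

        # Sketch: estimate the Average Treatment Effect (ATE) with an S-learner.
        import numpy as np
        from causalml.inference.meta import LRSRegressor  # S-learner with linear regression

        rng = np.random.default_rng(0)
        n = 1000
        X = rng.normal(size=(n, 5))                          # observed user features
        treatment = rng.binomial(1, 0.5, size=n)             # binary treatment assignment T
        y = X[:, 0] + 0.5 * treatment + rng.normal(size=n)   # outcome Y with an assumed true effect of 0.5

        learner = LRSRegressor()
        ate, lower, upper = learner.estimate_ate(X, treatment, y)
        print("ATE estimate:", ate, "95% CI:", (lower, upper))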
  • 2
    Lux

    The Lux Programming Language

    ... commercial use, and has other conditions which may be undesirable for some. The language is mostly inspired by the following three languages: Clojure (syntax, overall look and feel), Haskell (functional programming), and Standard ML (module system). They are implemented as plain old data structures whose expressions get evaluated by the compiler and integrated into the type-checker. The main difference between Lux and Standard ML is that Standard ML separates interfaces/signatures from implementations/structures.
    Downloads: 10 This Week
    Last Update:
    See Project
  • 3
    MindsDB

    Low-code platform to help developers build AI solutions

    MindsDB is an emerging low-code machine learning platform that helps developers easily build AI-powered solutions. Merge the capabilities of your database with popular ML frameworks to radically simplify the process of applying machine learning to applications. AI Tables behave just like standard database tables: using familiar SQL statements, time series, regression, and classification models can be trained and deployed automatically. Power simple or complex ML workflows without the burdensome...
    Downloads: 4 This Week
    Last Update:
    See Project
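
    To make the SQL-first workflow above concrete, here is a hedged sketch that sends MindsDB-style CREATE MODEL and SELECT statements over a standard MySQL connection. The host, port, credentials, and all database, table, and column names are placeholders, and the SQL follows MindsDB's documented pattern rather than a verified, version-specific syntax:

        # Sketch only: connection details and schema names below are placeholders.
        import mysql.connector  # MindsDB exposes a MySQL-compatible endpoint

        conn = mysql.connector.connect(host="127.0.0.1", port=47335,   # assumed local MySQL API port
                                       user="mindsdb", password="")
        cur = conn.cursor()

        # Train a model from existing data using plain SQL (illustrative syntax).
        cur.execute("""
            CREATE MODEL mindsdb.home_rentals_model
            FROM example_db (SELECT * FROM home_rentals)
            PREDICT rental_price
        """)

        # Query the resulting "AI Table" like any other table to get predictions.
        cur.execute("""
            SELECT rental_price
            FROM mindsdb.home_rentals_model
            WHERE number_of_rooms = 2 AND sqft = 900
        """)
        print(cur.fetchall())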
  • 4
    ONNX

    Open standard for machine learning interoperability

    ... learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring). ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community.
    Downloads: 3 This Week
    Last Update:
    See Project
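
    A small sketch of the extensible graph model described above: build a one-node ONNX graph with the onnx helper API, validate it against the standard, and save it in the portable .onnx format (the choice of a single Relu node and the file name are illustrative):

        # Sketch: construct and validate a tiny ONNX graph, then serialize it.
        import onnx
        from onnx import helper, TensorProto

        inp = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 4])
        out = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 4])
        node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])

        graph = helper.make_graph([node], "relu_graph", [inp], [out])
        model = helper.make_model(graph, producer_name="onnx-sketch")

        onnx.checker.check_model(model)   # verifies operators and types against the standard
        onnx.save(model, "relu.onnx")     # portable across ONNX-supporting runtimes and tools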
  • 5
    Seldon Core

    An MLOps framework to package, deploy, monitor and manage models

    The de facto standard open-source platform for rapidly deploying machine learning models on Kubernetes. Seldon Core, our open-source framework, makes it easier and faster to deploy your machine learning models and experiments at scale on Kubernetes. Seldon Core serves models built in any open-source or commercial model building framework. You can make use of powerful Kubernetes features like custom resource definitions to manage model graphs. And then connect your continuous integration...
    Downloads: 2 This Week
    Last Update:
    See Project
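
    As a hedged illustration of how a model is packaged for Seldon Core, the Python wrapper convention is a plain class exposing a predict() method; once containerized, the image is referenced from a SeldonDeployment custom resource on Kubernetes. The class name and weights below are made up for the sketch:

        # Sketch of Seldon Core's Python model-wrapper contract (illustrative only).
        import numpy as np

        class MyModel:
            def __init__(self):
                # A real deployment would load trained weights here (e.g. from mounted storage).
                self.coef = np.array([0.1, 0.2, 0.3, 0.4])

            def predict(self, X, features_names=None):
                # The Python model server calls predict() with the request payload as an array.
                return np.asarray(X) @ self.coef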
  • 6
    UnionML

    Build and deploy machine learning microservices

    Creating ML apps should be simple and frictionless. UnionML is an open-source Python framework built on top of Flyte™, unifying the complex ecosystem of ML tools into a single interface. Combine the tools that you love using a simple, standardized API so you can stop writing so much boilerplate and focus on what matters: the data and the models that learn from them. Fit the rich ecosystem of tools and frameworks into a common protocol for machine learning. Using industry-standard machine...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    KServe

    Standardized Serverless ML Inference Platform on Kubernetes

    KServe provides a Kubernetes Custom Resource Definition for serving machine learning (ML) models on arbitrary frameworks. It aims to solve production model serving use cases by providing performant, high-abstraction interfaces for common ML frameworks like TensorFlow, XGBoost, scikit-learn, PyTorch, and ONNX. It encapsulates the complexity of autoscaling, networking, health checking, and server configuration to bring cutting-edge serving features like GPU Autoscaling, Scale to Zero, and Canary...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    FFCV

    Fast Forward Computer Vision (and other ML workloads!)

    ffcv is a drop-in data loading system that dramatically increases data throughput in model training. From gridding to benchmarking to fast research iteration, there are many reasons to want faster model training. The project provides premade codebases for training on ImageNet and CIFAR, including both (a) extensible codebases and (b) numerous premade training configurations.
    Downloads: 2 This Week
    Last Update:
    See Project
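
    A hedged sketch of FFCV's two-step workflow for the entry above: write a dataset to FFCV's .beton format, then load it with the high-throughput Loader. The field names, toy data, and pipeline choices follow the project's documented API but are assumptions not verified against the latest release:

        import numpy as np
        from ffcv.writer import DatasetWriter
        from ffcv.fields import NDArrayField, IntField
        from ffcv.fields.decoders import NDArrayDecoder, IntDecoder
        from ffcv.loader import Loader, OrderOption

        # Any indexable dataset returning (features, label) tuples can be written.
        dataset = [(np.random.rand(16).astype("float32"), i % 2) for i in range(100)]

        writer = DatasetWriter("toy.beton", {
            "features": NDArrayField(shape=(16,), dtype=np.dtype("float32")),
            "label": IntField(),
        })
        writer.from_indexed_dataset(dataset)

        loader = Loader("toy.beton", batch_size=32, num_workers=2,
                        order=OrderOption.RANDOM,
                        pipelines={"features": [NDArrayDecoder()], "label": [IntDecoder()]})
        for features, label in loader:
            pass  # feed batches into the training loop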
  • 9
    BentoML

    Unified Model Serving Framework

    BentoML simplifies ML model deployment and serves your models at production scale. It natively supports multiple ML frameworks: TensorFlow, PyTorch, XGBoost, Scikit-Learn, and many more. Define custom serving pipelines with pre-processing, post-processing, and ensemble models. The standard .bento format packages code, models, and dependencies for easy versioning and deployment. Integrate with any training pipeline or ML experimentation platform. Parallelize compute-intensive model inference workloads...
    Downloads: 1 This Week
    Last Update:
    See Project
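
    A hedged sketch of BentoML's save-then-serve flow for the entry above, using the 1.0-style API; model and service names are illustrative, and the API surface has evolved across releases:

        # Sketch: persist a scikit-learn model into BentoML's local store (.bento layout).
        import bentoml
        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier

        X, y = load_iris(return_X_y=True)
        clf = RandomForestClassifier().fit(X, y)

        saved = bentoml.sklearn.save_model("iris_clf", clf)
        print("saved:", saved.tag)

        # A service definition (typically in service.py) would then wrap the stored model:
        #
        #   import bentoml
        #   from bentoml.io import NumpyNdarray
        #
        #   runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
        #   svc = bentoml.Service("iris_classifier", runners=[runner])
        #
        #   @svc.api(input=NumpyNdarray(), output=NumpyNdarray())
        #   def classify(input_array):
        #       return runner.predict.run(input_array)
        #
        # and would be served with: bentoml serve service.py:svc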
  • 10
    TensorFlow Datasets

    TFDS is a collection of datasets ready to use with TensorFlow and other ML frameworks

    TensorFlow Datasets is a collection of datasets ready to use with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed as tf.data.Datasets, enabling easy-to-use and high-performance input pipelines. To get started, see the guide and the list of datasets.
    Downloads: 0 This Week
    Last Update:
    See Project
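
    A short sketch of the input-pipeline pattern described above: load MNIST as a tf.data.Dataset with TFDS and build a simple batched pipeline (the dataset name and preprocessing are illustrative choices from the public TFDS catalog):

        import tensorflow as tf
        import tensorflow_datasets as tfds

        ds = tfds.load("mnist", split="train", as_supervised=True, shuffle_files=True)

        def normalize(image, label):
            # Scale pixel values to [0, 1] for training.
            return tf.cast(image, tf.float32) / 255.0, label

        ds = ds.map(normalize, num_parallel_calls=tf.data.AUTOTUNE)
        ds = ds.shuffle(10_000).batch(128).prefetch(tf.data.AUTOTUNE)

        for images, labels in ds.take(1):
            print(images.shape, labels.shape)   # (128, 28, 28, 1) (128,)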
  • 11
    Cleanlab

    The standard data-centric AI package for data quality and ML

    cleanlab helps you clean data and labels by automatically detecting issues in an ML dataset. To facilitate machine learning with messy, real-world data, this data-centric AI package uses your existing models to estimate dataset problems that can be fixed to train even better models. cleanlab cleans your data's labels via state-of-the-art confident learning algorithms, published in an accompanying paper and blog post. See some of the datasets cleaned with cleanlab at labelerrors.com. This package helps you find...
    Downloads: 0 This Week
    Last Update:
    See Project
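
    A hedged sketch of the issue-detection workflow above: flag likely label errors with cleanlab's find_label_issues, using out-of-sample predicted probabilities from any scikit-learn compatible model (the synthetic data and injected noise are illustrative):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict
        from cleanlab.filter import find_label_issues

        X, labels = make_classification(n_samples=500, n_classes=3,
                                        n_informative=5, random_state=0)
        labels[:10] = (labels[:10] + 1) % 3   # inject some label noise for the demo

        # Out-of-sample probabilities, so the model never scores its own training labels.
        pred_probs = cross_val_predict(LogisticRegression(max_iter=1000), X, labels,
                                       cv=5, method="predict_proba")

        issue_idx = find_label_issues(labels=labels, pred_probs=pred_probs,
                                      return_indices_ranked_by="self_confidence")
        print("examples most likely to be mislabeled:", issue_idx[:10])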
  • 12
    GoldenCheetah

    Performance Software for Cyclists, Runners, Triathletes and Coaches

    Analyze using summary metrics like BikeStress, TRIMP, or RPE. Extract insight via models like Critical Power and W'bal. Track and predict performance using models like Banister and PMC. Optimize aerodynamics using Virtual Elevation. Train indoors with ANT and BTLE trainers. Upload and Download with many cloud services including Strava, Withings, and Today's Plan. Import and export data to and from a wide range of bike computers and file formats. Track body measures, and equipment use and set...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 13
    DataFrame

    C++ DataFrame for statistical, financial, and ML analysis

    This is a C++ analytical library designed for data analysis similar to libraries in Python and R. For example, you would compare this to Pandas, R data.frame, or Polars. You can slice the data in many different ways. You can join, merge, and group-by the data. You can run various statistical, summarization, financial, and ML algorithms on the data. You can add your custom algorithms easily. You can multi-column sort, custom pick, and delete the data. DataFrame also includes a large collection...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    Pachyderm

    Data-Centric Pipelines and Data Versioning

    Data-driven pipelines automatically trigger based on detecting data changes. Automatic immutable data lineage and data versioning of all data types. Autoscaling and parallel processing built on Kubernetes for resource orchestration. Uses standard object stores for data storage with automatic deduplication. Runs across all major cloud providers and on-premises installations. Automatic and intelligent versioning of even the largest data sets of unstructured and structured data. Git-like...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 15
    Petastorm

    Petastorm library enables single machine or distributed training

    ... Python-based machine learning (ML) frameworks such as TensorFlow, PyTorch, and PySpark. It can also be used from pure Python code. A dataset created using Petastorm is stored in Apache Parquet format. On top of a Parquet schema, Petastorm also stores higher-level schema information that makes multidimensional arrays a native part of a Petastorm dataset. Petastorm supports extensible data codecs. These enable a user to use one of the standard data compressions (jpeg, png) or implement their own.
    Downloads: 0 This Week
    Last Update:
    See Project
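
    A brief, hedged sketch of the pure-Python reading path mentioned above; the URL is a placeholder for a dataset previously materialized with Petastorm (e.g. via a Spark job):

        # Sketch: iterate over a Petastorm dataset (Parquet + Petastorm schema metadata).
        from petastorm import make_reader

        with make_reader("file:///tmp/hello_world_dataset") as reader:   # placeholder path
            for row in reader:
                print(row)   # each row is a namedtuple following the dataset's schema
                break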
  • 16
    MLPerf

    Reference implementations of MLPerf™ training benchmarks

    This is a repository of reference implementations for the MLPerf training benchmarks. These implementations are valid as starting points for benchmark implementations but are not fully optimized and are not intended to be used for "real" performance measurements of software frameworks or hardware. Benchmarking the performance of training ML models on a wide variety of use cases, software, and hardware drives AI performance across the tech industry. The MLPerf Training working group draws...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    EconML

    Python Package for ML-Based Heterogeneous Treatment Effects Estimation

    EconML is a Python package for estimating heterogeneous treatment effects from observational data via machine learning. This package was designed and built as part of the ALICE project at Microsoft Research with the goal of combining state-of-the-art machine learning techniques with econometrics to bring automation to complex causal inference problems. One of the biggest promises of machine learning is to automate decision-making in a multitude of domains. At the core of many data-driven...
    Downloads: 0 This Week
    Last Update:
    See Project
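
    A hedged sketch of heterogeneous treatment effect estimation with EconML's double machine learning estimator; the data-generating process (true CATE of 1 + X0) and nuisance models are assumptions for illustration:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from econml.dml import LinearDML

        rng = np.random.default_rng(0)
        n = 2000
        X = rng.normal(size=(n, 3))                            # effect modifiers
        W = rng.normal(size=(n, 2))                            # confounders / controls
        T = 0.5 * W[:, 0] + rng.normal(size=n)                 # continuous treatment
        Y = (1 + X[:, 0]) * T + W[:, 1] + rng.normal(size=n)   # outcome

        est = LinearDML(model_y=RandomForestRegressor(),
                        model_t=RandomForestRegressor(), random_state=0)
        est.fit(Y, T, X=X, W=W)
        print(est.effect(X[:5]))   # per-unit effect estimates for T moving 0 -> 1
        print(est.ate(X))          # average treatment effect over the sample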
  • 18
    SHAP

    A game-theoretic approach to explain the output of ML models

    SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions. While SHAP can explain the output of any machine learning model, we have developed a high-speed exact algorithm for tree ensemble methods. Fast C++ implementations are supported for XGBoost, LightGBM, CatBoost, scikit-learn and pyspark...
    Downloads: 1 This Week
    Last Update:
    See Project
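
    A minimal sketch of the high-speed tree path mentioned above: compute Shapley-value attributions for a tree ensemble with TreeExplainer (the dataset and model choice are illustrative):

        import shap
        from sklearn.datasets import load_diabetes
        from sklearn.ensemble import RandomForestRegressor

        X, y = load_diabetes(return_X_y=True, as_frame=True)
        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X)   # one attribution per sample and feature

        # For each prediction, the attributions plus the expected value sum to the model output.
        print(shap_values.shape, explainer.expected_value)
        # shap.summary_plot(shap_values, X)      # optional global summary (needs matplotlib)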
  • 19
    whylogs

    The open standard for data logging

    ... mean, median, and standard deviation measures), the number of missing values, and a wide range of configurable custom metrics. By capturing these summary statistics, whylogs can accurately represent the data and support data validation and monitoring use cases downstream.
    Downloads: 0 This Week
    Last Update:
    See Project
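
    A hedged sketch of the logging workflow above: profile a pandas DataFrame with whylogs (v1-style API) and inspect the per-column summary it captures; the toy DataFrame is an assumption:

        import pandas as pd
        import whylogs as why

        df = pd.DataFrame({"price": [9.5, 12.0, 7.25, 15.0],
                           "category": ["a", "b", "a", "c"]})

        results = why.log(df)              # builds a lightweight statistical profile
        profile_view = results.view()
        print(profile_view.to_pandas())    # per-column counts, missing values, distribution metrics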
  • 20
    GIMP ML

    AI for GNU Image Manipulation Program

    ... such as edge detection and color clustering have also been added. GIMP-ML relies on standard Python packages such as NumPy, scikit-image, Pillow, PyTorch, OpenCV, and SciPy. In addition, GIMP-ML aims to bring the benefits of deep learning networks used for computer vision tasks to routine image processing workflows.
    Downloads: 15 This Week
    Last Update:
    See Project
  • 21
    HOL is a system for proving theorems in Higher Order Logic. It comes with a large variety of existing theories formalising various parts of mathematics and theoretical computer science.
    Downloads: 60 This Week
    Last Update:
    See Project
  • 22
    Lots of small projects: games, VST plugins, experimental IRC server, ROM hacking tools, net tools, font tools, html tools, etc. Browse CVS!
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    json-scada

    A portable SCADA/IoT platform centered on the MongoDB database server.

    Standard IT tools applied to SCADA/IoT (MongoDB, PostgreSQL/TimescaleDB, Node.js, C#, Golang, Grafana, etc.). MongoDB as the real-time core database, persistence layer, config store, and SOE historian. Portability and interoperability over Linux, Windows, x86/64, ARM. Horizontal scalability, from a single computer to big clusters (MongoDB sharding), bare metal, Docker containers, VMs, cloud, or hybrid deployments. Unlimited tags, servers, and users. HTML5 web interface. UTF-8/I18N. Protocols...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 24
    MLton

    A whole-program optimizing compiler for Standard ML

    MLton is a whole-program optimizing compiler for Standard ML. MLton generates small executables with excellent runtime performance, utilizing untagged and unboxed native integers, reals, and words, unboxed native arrays, fast arbitrary-precision arithmetic based on GnuMP, and multiple code generation and garbage collection strategies. In addition, MLton provides a feature-rich Standard ML programming environment, with full support for SML97 as given in The Definition of Standard ML (Revised...
    Downloads: 51 This Week
    Last Update:
    See Project
  • 25
    Oryx

    Lambda architecture on Apache Spark, Apache Kafka for real-time

    Oryx 2 is a realization of the lambda architecture built on Apache Spark and Apache Kafka, but with specialization for real-time large-scale machine learning. It is a framework for building applications but also includes packaged, end-to-end applications for collaborative filtering, classification, regression and clustering. The application is written in Java, using Apache Spark, Hadoop, Tomcat, Kafka, Zookeeper and more. Configuration uses a single Typesafe Config config file, wherein...
    Downloads: 0 This Week
    Last Update:
    See Project