Showing 41 open source projects for "cloud deploy"

  • 1
    PHP Client For NLP Cloud

    NLP Cloud serves high performance pre-trained or custom models for NER

    ... similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API. You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models. Pass the model you want to use and the NLP Cloud token to the client during initialization. If you are making asynchronous requests, you will always receive a quick response containing a URL.
    Downloads: 0 This Week
  • 2
    Node.js Client For NLP Cloud

    NLP Cloud serves high performance pre-trained or custom models

    ..., machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, and served through a REST API. You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.
    Downloads: 0 This Week
  • 3
    Python Client For NLP Cloud

    NLP Cloud serves high performance pre-trained or custom models for NER

    ... similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API. You can either use the NLP Cloud pre-trained models, fine-tune your own models, or deploy your own models.
    Downloads: 0 This Week
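
    A minimal usage sketch, assuming the Python client is the nlpcloud package on PyPI; the model name, API token, and input text are placeholders:

        import nlpcloud

        # Pass the model you want to use and your NLP Cloud API token at initialization.
        client = nlpcloud.Client("en_core_web_lg", "<your_api_token>")

        # Named entity recognition (NER) on a sample sentence.
        entities = client.entities("John Doe has been working for Acme Corp in Paris since 2019.")
        print(entities)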
  • 4
    OpenVINO

    OpenVINO™ Toolkit repository

    OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference. Boost deep learning performance in computer vision, automatic speech recognition, natural language processing, and other common tasks. Use models trained with popular frameworks like TensorFlow, PyTorch, and more. Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components, namely the Model Optimizer, OpenVINO™ Runtime, Post-Training...
    Downloads: 19 This Week
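
    A minimal inference sketch with the OpenVINO Runtime Python API; the IR model path, device name, and input shape are placeholders:

        import numpy as np
        from openvino.runtime import Core

        core = Core()
        model = core.read_model("model.xml")          # IR model produced by the Model Optimizer
        compiled = core.compile_model(model, "CPU")   # target device: CPU, GPU, etc.

        # Run synchronous inference on a random input tensor and read the first output.
        input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
        output = compiled.output(0)
        result = compiled([input_tensor])[output]
        print(result.shape)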
  • 5
    DeepCamera

    Open-Source AI Camera. Empower any camera/CCTV

    DeepCamera empowers your traditional surveillance cameras and CCTV/NVR with machine learning technologies. It provides open-source facial recognition-based intrusion detection, fall detection, and parking lot monitoring with the inference engine on your local device. SharpAI-hub is the cloud hosting for AI applications that helps you deploy AI applications with your CCTV camera on your edge device in minutes. SharpAI yolov7_reid is an open-source Python application that leverages AI...
    Downloads: 12 This Week
  • 6
    aqueduct LLM

    Aqueduct allows you to run LLM and ML workloads on any infrastructure

    Aqueduct is an open-source MLOps framework that allows you to define and deploy machine learning and LLM workloads on any cloud infrastructure: write code in vanilla Python, run that code on any cloud infrastructure you'd like to use, and gain visibility into the execution and performance of your models and predictions. Aqueduct's Python-native API allows you to define ML tasks in regular Python code. You can connect Aqueduct to your existing...
    Downloads: 1 This Week
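
    A heavily hedged sketch of that Python-native task API, assuming the aqueduct-ml SDK exposes Client and an op decorator and that an Aqueduct server is already running locally; all names and values are illustrative:

        from aqueduct import Client, op

        client = Client()  # assumed to connect to the locally running Aqueduct server

        @op
        def double(values):
            # Ordinary Python; Aqueduct wraps it as a workload it can run on your infrastructure.
            return [v * 2 for v in values]

        artifact = double([1, 2, 3])  # lazily defines the task and returns an artifact
        client.publish_flow("double-demo", artifacts=[artifact])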
  • 7
    Alan AI

    In-App assistant SDK to build a multimodal conversational UX for websites

    ...-backend powered by the industry’s best Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and Speech Synthesis. The Alan Cloud provisions and handles the infrastructure required to maintain your voice deployments and perform all the voice processing tasks. To voice-enable your app, you only need to get the Alan Client SDK and drop it into your app. No need to plan for, deploy and maintain any infrastructure or speech components - the Alan Platform does the bulk of the work.
    Downloads: 4 This Week
  • 8
    OpenLLM

    Operating LLMs in production

    An open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLMs with ease. With OpenLLM, you can run inference with any open-source large language model, deploy to the cloud or on-premises, and build powerful AI apps. Built-in support for a wide range of open-source LLMs and model runtimes, including Llama 2, StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more. Serve LLMs over RESTful API or gRPC with one command, query via WebUI...
    Downloads: 1 This Week
  • 9
    Seldon Core

    An MLOps framework to package, deploy, monitor and manage models

    ... and deployment (CI/CD) tools to scale and update your deployment. Built on Kubernetes, it runs on any cloud and on-premises. Framework agnostic, it supports top ML libraries, toolkits, and languages, and enables advanced deployments with experiments, ensembles, and transformers. Our open-source framework makes it easier and faster to deploy your machine learning models and experiments at scale on Kubernetes. The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable.
    Downloads: 1 This Week
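
    A hedged sketch of Seldon's Python wrapper convention, where a class exposing predict() is packaged into a model image and referenced from a deployment manifest; the class name, file name, and logic below are illustrative:

        # Model.py - loaded by the seldon-core-microservice entrypoint inside the model image.
        import numpy as np

        class Model:
            def __init__(self):
                # Load weights or other artifacts here; a fixed coefficient keeps the sketch trivial.
                self.coef = np.array([0.5, -0.25])

            def predict(self, X, features_names=None):
                # Seldon passes the request payload as an array; return the model's output.
                return np.asarray(X) @ self.coef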
  • 10
    Alan AI for Android

    Assistant SDK to build a multimodal conversational UX for Android

    ...-backend powered by the industry’s best Automatic Speech Recognition (ASR), Natural Language Understanding (NLU) and Speech Synthesis. The Alan Cloud provisions and handles the infrastructure required to maintain your voice deployments and perform all the voice processing tasks. To voice-enable your app, you only need to get the Alan Client SDK and drop it into your app. No need to plan for, deploy and maintain any infrastructure or speech components - the Alan Platform does the bulk of the work.
    Downloads: 2 This Week
  • 11
    Xorbits Inference

    Replace OpenAI GPT with another LLM in your app

    Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language model, speech recognition model, or multimodal model, whether in the cloud, on-premises, or even on your laptop. Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference...
    Downloads: 1 This Week
  • 12
    LangChain Apps on Production with Jina

    Langchain Apps on Production with Jina & FastAPI

    Jina is an open-source framework for building scalable multi-modal AI apps on Production. LangChain is another open-source framework for building applications powered by LLMs. langchain-serve helps you deploy your LangChain apps on Jina AI Cloud in a matter of seconds. You can benefit from the scalability and serverless architecture of the cloud without sacrificing the ease and convenience of local development. And if you prefer, you can also deploy your LangChain apps on your own...
    Downloads: 0 This Week
  • 13
    Autodistill

    Images to inference with no labeling

    Autodistill uses big, slower foundation models to train small, faster supervised models. Using autodistill, you can go from unlabeled images to inference on a custom model running at the edge with no human intervention in between. You can use Autodistill on your own hardware, or use the Roboflow hosted version of Autodistill to label images in the cloud.
    Downloads: 0 This Week
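
    A hedged sketch of that flow, assuming the autodistill, autodistill-grounded-sam, and autodistill-yolov8 packages; the ontology, folder paths, weights file, and epoch count are placeholders:

        from autodistill.detection import CaptionOntology
        from autodistill_grounded_sam import GroundedSAM
        from autodistill_yolov8 import YOLOv8

        # The big, slow foundation model labels raw images from natural-language prompts.
        base_model = GroundedSAM(ontology=CaptionOntology({"shipping container": "container"}))
        base_model.label(input_folder="./images", output_folder="./dataset")

        # The small, fast target model is then trained on the auto-labeled dataset.
        target_model = YOLOv8("yolov8n.pt")
        target_model.train("./dataset/data.yaml", epochs=50)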
  • 14
    omegaml

    MLOps simplified. From ML Pipeline ⇨ Data Product without the hassle

    omega|ml is the innovative Python-native MLOps platform that provides a scalable development and runtime environment for your Data Products. Works from laptop to cloud.
    Downloads: 0 This Week
  • 15
    Triton Inference Server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution

    Triton Inference Server is an open-source inference serving software that streamlines AI inferencing. Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and ARM CPU, or AWS Inferentia. Triton delivers optimized performance for many query types, including...
    Downloads: 1 This Week
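
    A hedged client-side sketch using the tritonclient HTTP API; the server URL, model name, tensor names, and shapes are placeholders that must match the deployed model's configuration:

        import numpy as np
        import tritonclient.http as httpclient

        client = httpclient.InferenceServerClient(url="localhost:8000")

        # Describe the input tensor the deployed model expects.
        inp = httpclient.InferInput("input__0", [1, 3, 224, 224], "FP32")
        inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))

        # Send the request and read back the named output tensor.
        result = client.infer(model_name="resnet50", inputs=[inp])
        print(result.as_numpy("output__0").shape)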
  • 16
    Defang

    Defang CLI and sample projects

    .... With a single command, Defang builds and deploys applications, handling configurations for computing, storage, load balancing, networking, logging, and security. The Defang Command Line Interface (CLI) facilitates interactions with the platform, offering installation options via shell scripts, Homebrew, Winget, Nix, or direct download. Developers can define services using compose.yaml files, which Defang utilizes to deploy applications to the cloud.
    Downloads: 0 This Week
  • 17
    Kubeflow

    Machine Learning Toolkit for Kubernetes

    Kubeflow is an open source Cloud Native machine learning platform based on Google’s internal machine learning pipelines. It seeks to make deployments of machine learning workflows on Kubernetes simple, portable and scalable. With Kubeflow you can deploy best-of-breed open-source systems for ML to diverse infrastructures. You can also take advantage of a number of great features, such as services for managing Jupyter notebooks and support for a TensorFlow Serving container. Wherever you may...
    Downloads: 0 This Week
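
    A hedged sketch of a pipeline defined with the kfp (Kubeflow Pipelines) v2 SDK; the component logic, pipeline name, and output file name are illustrative:

        from kfp import compiler, dsl

        @dsl.component
        def add(a: float, b: float) -> float:
            return a + b

        @dsl.pipeline(name="add-demo")
        def add_pipeline(x: float = 1.0, y: float = 2.0):
            add(a=x, b=y)

        # Compile to a pipeline spec that can be uploaded to a Kubeflow Pipelines cluster.
        compiler.Compiler().compile(add_pipeline, "add_pipeline.yaml")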
  • 18
    Aviary

    Ray Aviary - evaluate multiple LLMs easily

    Aviary is an LLM serving solution that makes it easy to deploy and manage a variety of open source LLMs. Providing an extensive suite of pre-configured open source LLMs, with defaults that work out of the box. Supporting Transformer models hosted on Hugging Face Hub or present on local disk. Aviary has native support for autoscaling and multi-node deployments thanks to Ray and Ray Serve. Aviary can scale to zero and create new model replicas (each composed of multiple GPU workers) in response...
    Downloads: 1 This Week
  • 19
    MLflow

    Open source platform for the machine learning lifecycle

    MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud).
    Downloads: 1 This Week
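
    A minimal experiment-tracking sketch with the MLflow Python API; the run name, parameter, metric, and artifact file are illustrative:

        import mlflow

        # Record one training run: its parameters, a metric, and an artifact file.
        with mlflow.start_run(run_name="baseline"):
            mlflow.log_param("learning_rate", 0.01)
            mlflow.log_metric("accuracy", 0.93)
            with open("notes.txt", "w") as f:
                f.write("baseline run")
            mlflow.log_artifact("notes.txt")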
  • 20
    AWS CodeDeploy Agent

    Host Agent for AWS CodeDeploy

    .... The service scales to match your deployment needs. AWS CodeDeploy fully automates your software deployments, allowing you to deploy reliably and rapidly. You can consistently deploy your application across your development, test, and production environments whether deploying to Amazon EC2, AWS Fargate, AWS Lambda, or your on-premises servers. The service scales with your infrastructure.
    Downloads: 0 This Week
  • 21
    OneFlow

    OneFlow is a deep learning framework designed to be user-friendly

    ... distributed expansion. It adheres to the core concept and architecture of static compilation and streaming parallelism, and solves the memory wall challenge at the cluster level. It provides a variety of services, from primary AI talent training to enterprise-level integrated machine learning lifecycle management (MLOps), including AI training and AI development, and supports three deployment modes: public cloud, private cloud, and hybrid cloud.
    Downloads: 0 This Week
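
    A minimal eager-mode sketch, assuming OneFlow's PyTorch-style Python API; the layer sizes are arbitrary:

        import oneflow as flow
        import oneflow.nn as nn

        # A tiny linear layer run eagerly with the familiar torch-like API.
        model = nn.Linear(4, 2)
        x = flow.randn(1, 4)
        y = model(x)
        print(y.shape)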
  • 22
    Pezzo

    Open-source, developer-first LLMOps platform

    Pezzo enables you to build, test, monitor and instantly ship AI all in one platform, while constantly optimizing for cost and performance. Packed with powerful features to streamline your workflow, so you can focus on what matters. Pezzo is a fully cloud-native and open-source LLMOps platform. Seamlessly observe and monitor your AI operations, troubleshoot issues, save up to 90% on costs and latency, collaborate and manage your prompts in one place, and instantly deliver AI changes.
    Downloads: 0 This Week
  • 23
    Prompt flow

    Build high-quality LLM apps

    Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, and evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
    Downloads: 0 This Week
  • 24
    LLMStack

    No-code multi-agent framework to build LLM Agents, workflows

    LLMStack is a no-code platform for building generative AI agents, workflows and chatbots, connecting them to your data and business processes. Build tailor-made generative AI agents, applications and chatbots that cater to your unique needs by chaining multiple LLMs. Seamlessly integrate your own data, internal tools and GPT-powered models without any coding experience using LLMStack's no-code builder. Trigger your AI chains from Slack or Discord. Deploy to the cloud or on-premise.
    Downloads: 0 This Week
  • 25
    FEDML Open Source

    The unified and scalable ML library for large-scale training

    A Unified and Scalable Machine Learning Library for Running Training and Deployment Anywhere at Any Scale. TensorOpera AI is the next-gen cloud service for LLMs & Generative AI. It helps developers to launch complex model training, deployment, and federated learning anywhere on decentralized GPUs, multi-clouds, edge servers, and smartphones, easily, economically, and securely. Highly integrated with TensorOpera open source library, TensorOpera AI provides holistic support of three...
    Downloads: 0 This Week