Showing 75 open source projects for "foss-cloud"

  • 1
    PyTorch/XLA

    Enabling PyTorch on Google TPU

    PyTorch/XLA is a Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework and Cloud TPUs. You can try it right now, for free, on a single Cloud TPU with Google Colab, and use it in production and on Cloud TPU Pods with Google Cloud. Take a look at one of our Colab notebooks to quickly try different PyTorch networks running on Cloud TPUs and learn how to use Cloud TPUs as PyTorch devices. We are also introducing new TPU VMs for more transparent and easier access to the TPU hardware. ...
    Downloads: 0 This Week
    See Project
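    A minimal sketch of using a Cloud TPU as a PyTorch device via the torch_xla package (assuming a TPU runtime such as a Colab TPU or a TPU VM; exact APIs vary slightly between releases):

        import torch
        import torch_xla.core.xla_model as xm

        # Acquire the XLA device (a TPU core when a TPU runtime is attached).
        device = xm.xla_device()

        # Ordinary PyTorch tensors and ops run on the XLA device.
        x = torch.randn(4, 4, device=device)
        y = (x @ x).sum()

        # XLA executes lazily; mark_step() flushes the pending graph to the TPU.
        xm.mark_step()
        print(y.cpu())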
  • 2
    Lepton AI

    A Pythonic framework to simplify AI service building

    Lepton AI is a Pythonic framework that simplifies building AI services. It combines cutting-edge AI inference and training, a cloud-native experience, and top-tier GPU infrastructure, and targets 99.9% uptime with comprehensive health checks and automatic repairs.
    Downloads: 6 This Week
    See Project
  • 3
    PyTorch

    Open source machine learning framework

    PyTorch is a Python package that offers Tensor computation (like NumPy) with strong GPU acceleration and deep neural networks built on a tape-based autograd system. The project allows for fast, flexible experimentation and efficient production. PyTorch consists of torch (a Tensor library), torch.autograd (a tape-based automatic differentiation library), torch.jit (a compilation stack for TorchScript), torch.nn (a neural networks library), torch.multiprocessing (Python multiprocessing), and...
    Downloads: 100 This Week
    See Project
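    As a quick illustration of the torch and torch.autograd components listed above, a minimal tape-based autograd example looks roughly like this:

        import torch

        # Tensors with gradient tracking enabled (tape-based autograd).
        x = torch.randn(3, requires_grad=True)
        w = torch.randn(3, requires_grad=True)

        # Forward pass: dot product followed by a nonlinearity.
        loss = torch.tanh(x @ w)

        # Backward pass: gradients are accumulated into .grad on the leaf tensors.
        loss.backward()
        print(x.grad, w.grad)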
  • 4
    Weaviate

    Weaviate is a cloud-native, modular, real-time vector search engine

    ...It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), and more. Built from scratch in Go, Weaviate stores both objects and vectors, allowing vector search to be combined with structured filtering and the fault tolerance of a cloud-native database, all accessible through GraphQL, REST, and various language clients.
    Downloads: 4 This Week
    See Project
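    A rough sketch of the GraphQL access mentioned above, combining a near-text vector search with a structured filter against a local Weaviate instance (the Article class, its wordCount property, and an enabled text2vec module are assumptions for illustration):

        import requests

        # Hypothetical schema: an "Article" class with a wordCount property.
        query = """
        {
          Get {
            Article(
              nearText: {concepts: ["cloud native vector search"]}
              where: {path: ["wordCount"], operator: GreaterThan, valueInt: 500}
              limit: 5
            ) {
              title
            }
          }
        }
        """

        resp = requests.post("http://localhost:8080/v1/graphql", json={"query": query})
        print(resp.json())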
  • 5
    Kubeflow

    Machine Learning Toolkit for Kubernetes

    Kubeflow is an open source Cloud Native machine learning platform based on Google’s internal machine learning pipelines. It seeks to make deployments of machine learning workflows on Kubernetes simple, portable and scalable. With Kubeflow you can deploy best-of-breed open-source systems for ML to diverse infrastructures. You can also take advantage of a number of great features, such as services for managing Jupyter notebooks and support for a TensorFlow Serving container. ...
    Downloads: 0 This Week
    See Project
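    As a sketch of the pipeline workflows Kubeflow is built around, a tiny Kubeflow Pipelines definition, assuming the standalone KFP v2 SDK (the component logic is illustrative), might look like:

        from kfp import dsl, compiler

        @dsl.component
        def add(a: float, b: float) -> float:
            # Each component runs as its own containerized step on Kubernetes.
            return a + b

        @dsl.pipeline(name="tiny-addition-pipeline")
        def tiny_pipeline(x: float = 1.0, y: float = 2.0) -> float:
            first = add(a=x, b=y)
            second = add(a=first.output, b=3.0)
            return second.output

        # Compile to a spec that a Kubeflow Pipelines instance can execute.
        compiler.Compiler().compile(tiny_pipeline, "tiny_pipeline.yaml")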
  • 6
    Jina

    Build cross-modal and multimodal applications on the cloud

    Jina is a framework that empowers anyone to build cross-modal and multimodal applications on the cloud. It uplifts a PoC into a production-ready service. Jina handles the infrastructure complexity, making advanced solution engineering and cloud-native technologies accessible to every developer. Build applications that deliver fresh insights from multiple data types such as text, images, audio, video, 3D meshes, and PDFs with Jina AI’s DocArray.
    Downloads: 0 This Week
    See Project
  • 7
    Daft

    Distributed DataFrame for Python designed for the cloud

    Daft is a framework for ETL, analytics, and ML/AI at scale. Its familiar Python DataFrame API is built to outperform Spark in performance and ease of use. Daft plugs directly into your ML/AI stack through efficient zero-copy integrations with essential Python libraries such as PyTorch and Ray. It also allows requesting GPUs as a resource for running models. Daft runs locally with a lightweight multithreaded backend. When your local machine is no longer sufficient, it scales seamlessly to run...
    Downloads: 3 This Week
    See Project
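    A small sketch of the DataFrame API described above (the column names are illustrative; readers such as daft.read_parquet can also point at local paths or cloud object storage):

        import daft

        # Build a small DataFrame in memory.
        df = daft.from_pydict({
            "id": [1, 2, 3, 4],
            "score": [0.1, 0.8, 0.5, 0.9],
        })

        # Lazy, expression-based transformations, executed on .show()/.collect().
        result = df.where(daft.col("score") > 0.5).with_column(
            "score_pct", daft.col("score") * 100
        )
        result.show()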
  • 8
    Lance

    Modern columnar data format for ML and LLMs implemented in Rust

    Lance is a columnar data format that is easy and fast to version, query, and train on. It’s designed to be used with images, videos, 3D point clouds, audio, and of course tabular data. It supports any POSIX file system as well as cloud storage like AWS S3 and Google Cloud Storage.
    Downloads: 1 This Week
    See Project
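    A minimal sketch of writing and reading a Lance dataset with the Python bindings (pylance); a local path is used here, and cloud URIs such as s3:// buckets follow the same pattern:

        import lance
        import pyarrow as pa

        # Write an Arrow table to the Lance columnar format.
        table = pa.table({
            "id": [1, 2, 3],
            "vector": [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]],
        })
        lance.write_dataset(table, "example.lance")

        # Re-open the (versioned) dataset and read it back.
        ds = lance.dataset("example.lance")
        print(ds.to_table().to_pandas())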
  • 9
    Autodistill

    Images to inference with no labeling

    ...Using autodistill, you can go from unlabeled images to inference on a custom model running at the edge with no human intervention in between. You can use Autodistill on your own hardware, or use the Roboflow hosted version of Autodistill to label images in the cloud.
    Downloads: 0 This Week
    See Project
  • 10
    OpenVINO

    OpenVINO™ Toolkit repository

    ...Use models trained with popular frameworks like TensorFlow, PyTorch, and more. Reduce resource demands and efficiently deploy on a range of Intel® platforms from edge to cloud. This open-source version includes several components, namely the Model Optimizer, OpenVINO™ Runtime, and the Post-Training Optimization Tool, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as TensorFlow, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi.
    Downloads: 17 This Week
    See Project
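    As a sketch of the runtime component mentioned above, loading a model and compiling it for the CPU plugin with the OpenVINO Python API might look roughly like this (the model path is a placeholder and API details vary by release):

        import numpy as np
        from openvino.runtime import Core

        core = Core()

        # Read a model in OpenVINO IR (or ONNX) format and compile it for CPU.
        model = core.read_model("model.xml")        # placeholder path
        compiled = core.compile_model(model, "CPU")

        # Run inference on a dummy input matching the model's first input shape.
        shape = list(compiled.inputs[0].shape)
        request = compiled.create_infer_request()
        outputs = request.infer([np.random.rand(*shape).astype(np.float32)])
        print(next(iter(outputs.values())).shape)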
  • 11
    OneFlow

    OneFlow is a deep learning framework designed to be user-friendly

    ...It provides a variety of services, from basic AI talent training to enterprise-level integrated machine learning lifecycle management (MLOps), including AI training and AI development, and supports three deployment modes: public cloud, private cloud, and hybrid cloud.
    Downloads: 0 This Week
    See Project
  • 12
    omegaml

    MLOps simplified. From ML Pipeline ⇨ Data Product without the hassle

    omega|ml is the innovative Python-native MLOps platform that provides a scalable development and runtime environment for your Data Products. Works from laptop to cloud.
    Downloads: 0 This Week
    See Project
  • 13
    Triton Inference Server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution

    ...Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and ARM CPUs, or AWS Inferentia. Triton delivers optimized performance for many query types, including real-time, batched, ensembles, and audio/video streaming. It provides a Backend API that allows adding custom backends and pre/post-processing operations, and supports model pipelines using Ensembling or Business Logic Scripting (BLS). ...
    Downloads: 3 This Week
    See Project
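    A rough sketch of a client-side call against a running Triton server using the tritonclient HTTP API (the model name and tensor names are placeholders and must match the deployed model's configuration):

        import numpy as np
        import tritonclient.http as httpclient

        client = httpclient.InferenceServerClient(url="localhost:8000")

        # Placeholder input name/shape/dtype; must match the model's config.pbtxt.
        inp = httpclient.InferInput("INPUT__0", [1, 3, 224, 224], "FP32")
        inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))

        result = client.infer(model_name="example_model", inputs=[inp])
        print(result.as_numpy("OUTPUT__0").shape)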
  • 14
    plexe

    Build a machine learning model from a prompt

    ...It aims to be production-minded: models can be exported, versioned, and deployed, with reports to explain performance and limitations. The project supports both a Python library and a managed cloud option, meeting teams wherever they prefer to run workloads. The overall goal is to compress the path from idea to usable model while keeping humans in the loop for review and adjustment.
    Downloads: 0 This Week
    See Project
  • 15
    CV-CUDA

    CV-CUDA™ is an open-source, GPU accelerated library

    CV-CUDA is an open-source project that enables building efficient cloud-scale Artificial Intelligence (AI) imaging and computer vision (CV) applications. It uses graphics processing unit (GPU) acceleration to help developers build highly efficient pre- and post-processing pipelines. CV-CUDA originated as a collaborative effort between NVIDIA and ByteDance.
    Downloads: 1 This Week
    See Project
  • 16
    MLflow

    Open source platform for the machine learning lifecycle

    ...MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud).
    Downloads: 7 This Week
    See Project
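    A minimal sketch of the tracking API alluded to above, usable from any training loop (the parameter and metric names are illustrative; runs land in a local ./mlruns store unless a tracking server is configured):

        import mlflow

        with mlflow.start_run(run_name="example"):
            mlflow.log_param("learning_rate", 0.01)
            for epoch in range(3):
                # Metrics can be logged per step/epoch.
                mlflow.log_metric("loss", 1.0 / (epoch + 1), step=epoch)
            mlflow.log_text("notes about this run", "notes.txt")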
  • 17
    BentoML

    Unified Model Serving Framework

    BentoML simplifies ML model deployment and serves your models at production scale. It natively supports multiple ML frameworks: TensorFlow, PyTorch, XGBoost, Scikit-Learn, and many more. Define custom serving pipelines with pre-processing, post-processing, and ensemble models. The standard .bento format packages code, models, and dependencies for easy versioning and deployment. Integrate with any training pipeline or ML experimentation platform. Parallelize compute-intensive model inference...
    Downloads: 6 This Week
    See Project
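    A sketch of a serving definition using the BentoML 1.x service API, assuming a scikit-learn model was previously saved under the name iris_clf (the names and JSON schema are illustrative; newer releases also offer a decorator-based @bentoml.service API):

        import bentoml
        from bentoml.io import JSON

        # Assumes bentoml.sklearn.save_model("iris_clf", trained_model) was run earlier.
        runner = bentoml.models.get("iris_clf:latest").to_runner()

        svc = bentoml.Service("iris_classifier", runners=[runner])

        @svc.api(input=JSON(), output=JSON())
        async def classify(payload: dict) -> dict:
            # Pre-processing, inference, and post-processing in one service endpoint.
            features = payload["features"]
            prediction = await runner.predict.async_run([features])
            return {"prediction": prediction.tolist()}

    Such a service would then typically be started locally with the bentoml CLI (bentoml serve) before packaging it into a .bento for deployment.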
  • 18
    Spice.ai OSS

    A self-hostable CDN for databases

    Spice is a portable runtime offering developers a unified SQL interface to materialize, accelerate, and query data from any database, data warehouse, or data lake. Spice connects, fuses, and delivers data to applications, machine-learning models, and AI backends, functioning as an application-specific, tier-optimized Database CDN. The Spice runtime, written in Rust, is built with industry-leading technologies such as Apache DataFusion, Apache Arrow, Apache Arrow Flight, SQLite, and DuckDB....
    Downloads: 5 This Week
    See Project
  • 19
    MLRun

    Machine Learning automation and tracking

    MLRun is an open MLOps framework for quickly building and managing continuous ML and generative AI applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications, significantly reducing engineering efforts, time to production, and computation resources. MLRun breaks the silos between data, ML, software, and DevOps/MLOps teams, enabling collaboration and fast continuous...
    Downloads: 3 This Week
    See Project
  • 20
    Synapse Machine Learning

    Simple and distributed Machine Learning

    SynapseML (previously MMLSpark) is an open source library to simplify the creation of scalable machine learning pipelines. SynapseML builds on Apache Spark and SparkML to enable new kinds of machine learning, analytics, and model deployment workflows. SynapseML adds many deep learning and data science tools to the Spark ecosystem, including seamless integration of Spark Machine Learning pipelines with the Open Neural Network Exchange (ONNX), LightGBM, The Cognitive Services, Vowpal Wabbit,...
    Downloads: 0 This Week
    See Project
  • 21
    Advanced Solutions Lab

    This repos contains notebooks for the Advanced Solutions Lab

    This repository contains Jupyter notebooks meant to be run on Vertex AI. This is maintained by Google Cloud’s Advanced Solutions Lab (ASL) team. Vertex AI is the next-generation AI Platform on the Google Cloud Platform. The material covered in this repo will take a software engineer with no exposure to machine learning to an advanced level.
    Downloads: 0 This Week
    See Project
  • 22
    Llama Recipes

    Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT method

    ...The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples here showcase how to run Llama locally, in the cloud, and on-prem.
    Downloads: 1 This Week
    See Project
  • 23
    DeepCamera

    Open-Source AI Camera. Empower any camera/CCTV

    ...It provides open-source facial-recognition-based intrusion detection, fall detection, and parking lot monitoring, with the inference engine running on your local device. SharpAI-hub is the cloud hosting service for AI applications that helps you deploy AI applications with your CCTV camera on your edge device in minutes. SharpAI yolov7_reid is an open-source Python application that leverages AI technologies to detect intruders with traditional surveillance cameras. It uses YOLOv7 as the person detector, FastReID for person feature extraction, Milvus as the local vector database for self-supervised learning to identify unseen persons, and Label Studio to host images locally for further use such as labeling data and training your own classifier. ...
    Downloads: 4 This Week
    See Project
  • 24
    mlx

    MLX: An array framework for Apple silicon

    MLX offers a local web interface to browse, download, and run ML models via Hugging Face or local sources. It supports searching by tags or tasks, visualization of model metadata, quick inference demos, and automatic setup of runtime environments, and works with PyTorch, TensorFlow, and ONNX. It is ideal for researchers exploring and testing models via a browser.
    Downloads: 0 This Week
    See Project
  • 25
    Flyte
    ...Don’t let friction between development and production slow down the deployment of new data/ML workflows and cause an increase in production bugs. Flyte enables rapid experimentation with production-grade software. Debug in the cloud by iterating on the workflows locally to achieve tighter feedback loops. As your data and ML workflows expand and demand more computing power, your workflow orchestration platform must keep up. If it’s not designed to scale, your platform will require constant monitoring and maintenance. Flyte was built with scalability in mind, ready to handle changing workloads and resource needs.
    Downloads: 1 This Week
    See Project
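    A small sketch of what a Flyte workflow looks like with flytekit, illustrating the local-first iteration loop described above (the task logic is illustrative):

        from flytekit import task, workflow

        @task
        def double(x: int) -> int:
            # Tasks are strongly typed, containerizable units of work.
            return x * 2

        @task
        def add(a: int, b: int) -> int:
            return a + b

        @workflow
        def pipeline(x: int = 3) -> int:
            # Workflows compose tasks into a DAG that runs locally or on a Flyte cluster.
            return add(a=double(x=x), b=x)

        if __name__ == "__main__":
            # Local execution for a tight feedback loop before registering on a cluster.
            print(pipeline(x=5))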