A Pythonic framework to simplify AI service building
Run Local LLMs on Any Device. Open-source
LLM.swift is a simple and readable library
Images to inference with no labeling
OpenVINO™ Toolkit repository
Easiest and laziest way to build multi-agent LLM applications
Unified Model Serving Framework
Operating LLMs in production
Sparsity-aware deep learning inference runtime for CPUs
Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods
Open-Source AI Camera. Empower any camera/CCTV
An MLOps framework to package, deploy, monitor and manage models
GPU environment management and cluster orchestration
Standardized Serverless ML Inference Platform on Kubernetes
A high-performance ML model serving framework that offers dynamic batching
A toolkit to optimize Keras & TensorFlow ML models for deployment
A scalable inference server for models optimized with OpenVINO
Replace OpenAI GPT with another LLM in your app
Deep learning API and server in C++14 with support for Caffe and PyTorch
A computer vision framework to create and deploy apps in minutes
Training and implementation of chatbots leveraging a GPT-like architecture
Uniform deep learning inference framework for mobile
Deploy an ML inference service on a budget in 10 lines of code