LLM.swift is a simple and readable library
Open-source tool designed to enhance the efficiency of workloads
A Pythonic framework to simplify AI service building
OpenVINO™ Toolkit repository
Run Local LLMs on Any Device. Open-source
The Triton Inference Server provides an optimized cloud and edge inferencing solution
GPU environment management and cluster orchestration
The easiest, laziest way to build multi-agent LLM applications
The free, Open Source alternative to OpenAI, Claude and others
Open-Source AI Camera. Empower any camera/CCTV
Operating LLMs in production
A scalable inference server for models optimized with OpenVINO
Sparsity-aware deep learning inference runtime for CPUs
A high-performance ML model serving framework offering dynamic batching
Standardized Serverless ML Inference Platform on Kubernetes
Unified Model Serving Framework
Replace OpenAI GPT with another LLM in your app
An MLOps framework to package, deploy, monitor and manage models
Scripts for fine-tuning Meta Llama 3 with composable FSDP & PEFT methods
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
Images to inference with no labeling
Open Source and Lightweight Local LLM Platform
A toolkit to optimize ML models for deployment for Keras & TensorFlow
A computer vision framework to create and deploy apps in minutes
Training & implementation of chatbots leveraging GPT-like architectures