Run Local LLMs on Any Device. Open-source
Standardized Serverless ML Inference Platform on Kubernetes
Create HTML profiling reports from pandas DataFrame objects
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
State-of-the-art diffusion models for image and audio generation
The official Python client for the Hugging Face Hub
A Pythonic framework to simplify AI service building
DoWhy is a Python library for causal inference
Integrate, train and manage any AI models and APIs with your database
LLM.swift is a simple and readable library
The unofficial Python package that returns the response of Google Bard
LLMFlows - Simple, Explicit and Transparent LLM Apps
A unified framework for scalable computing
Framework for Accelerating LLM Generation with Multiple Decoding Heads
OpenAI-style API for open large language models
High-level Deep Learning Framework written in Kotlin
Lightweight anchor-free object detection model
Training & implementation of chatbots leveraging a GPT-like architecture
CPU/GPU inference server for Hugging Face transformer models
Deploy an ML inference service on a budget in 10 lines of code