Standardized Serverless ML Inference Platform on Kubernetes
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Superduper: Integrate AI models and machine learning workflows
Open platform for training, serving, and evaluating language models
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction
Integrate, train and manage any AI models and APIs with your database
Database system for building simpler and faster AI-powered applications
Prem provides a unified environment to develop AI applications
Self-contained Machine Learning and Natural Language Processing library
Serve machine learning models within a Docker container
LLMFlows - Simple, Explicit and Transparent LLM Apps
Run any Llama 2 model locally with a Gradio UI, on GPU or CPU, from anywhere
LLM Chatbot Assistant for the Openfire server
Open Source and Lightweight Local LLM Platform
OpenFieldAI is an AI-based Open Field Test Rodent Tracker
A computer vision framework to create and deploy apps in minutes
Framework for Accelerating LLM Generation with Multiple Decoding Heads
Run 100B+ language models at home, BitTorrent-style
Local AI, on Your Computer.
A real-time inference engine for temporal logic specifications
A graphical manager for your Ollama LLMs
Toolbox of models, callbacks, and datasets for AI/ML researchers
Implementation of "Tree of Thoughts: Deliberate Problem Solving with Large Language Models"
High-level Deep Learning Framework written in Kotlin
llama.go is like llama.cpp in pure Golang