LLM.swift is a simple and readable library for interacting with large language models locally
The AI-native (edge and LLM) proxy for agents
The easiest and laziest way to build multi-agent LLM applications
AICI: Prompts as (Wasm) Programs
Lightweight, standalone C++ inference engine for Google's Gemma models
The unofficial Python package that returns responses from Google Bard
LLMs and Machine Learning done easily
Serve, optimize and scale PyTorch models in production
Run serverless GPU workloads with fast cold starts on bare-metal
Open standard for machine learning interoperability
Protect and discover secrets using Gitleaks
On-device AI across mobile, embedded and edge for PyTorch
Superduper: Integrate AI models and machine learning workflows
Adversarial Robustness Toolbox (ART) - Python Library for ML security
High-performance neural network inference framework for mobile
Bring the notion of Model-as-a-Service to life
C++ library for high-performance inference on NVIDIA GPUs
The Triton Inference Server provides an optimized cloud and edge inferencing solution
A GPU-accelerated library containing highly optimized building blocks
Powering Amazon custom machine learning chips
Build production-ready agentic workflows with natural language
LLM chatbot assistant for the Openfire server
A lightweight vision library for performing large-scale object detection
LLMFlows - Simple, Explicit and Transparent LLM Apps
A graphical interface for managing your Ollama LLMs