  • 1
    TuyaOpen

    Next-gen AI+IoT framework for T2, T3, T5AI, ESP32, and more

    ...It offers a unified development environment where developers can build devices capable of communicating with IoT cloud services while integrating AI capabilities and intelligent automation features. The system includes built-in networking support for communication protocols such as Wi-Fi, Bluetooth, and Ethernet, allowing devices to connect securely to remote services and applications. TuyaOpen also integrates with Tuya’s broader cloud ecosystem, enabling developers to manage device authentication, firmware updates, device activation, and remote monitoring from centralized services.
    Downloads: 34 This Week
    See Project
  • 2
    OllamaSharp

    The easiest way to use Ollama in .NET

    ...The project acts as a wrapper around the Ollama API, exposing all endpoints through asynchronous methods that allow developers to perform tasks such as generating text, creating embeddings, and managing models. It supports both local and remote Ollama instances, enabling developers to run AI models on their own hardware or connect to remote model servers. The library is designed to simplify integration by allowing developers to interact with AI models using just a few lines of code while still supporting advanced functionality. OllamaSharp also includes real-time streaming capabilities that allow applications to display generated responses incrementally as they are produced.
    Downloads: 0 This Week
    See Project
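
    OllamaSharp's own API is C#, but the streaming behavior described above is easy to see in the HTTP API the wrapper exposes. Here is a minimal, language-neutral TypeScript sketch against a local Ollama server, assuming the default port 11434; "llama3" is a placeholder model name.

    ```typescript
    // Stream a completion from a local Ollama server -- the HTTP API
    // that OllamaSharp wraps. "llama3" is a placeholder model name.
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "llama3", prompt: "Why is the sky blue?", stream: true }),
    });

    // Ollama streams newline-delimited JSON; print each token as it arrives.
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    let buf = "";
    for (;;) {
      const { done, value } = await reader.read();
      if (done) break;
      buf += decoder.decode(value, { stream: true });
      let nl: number;
      while ((nl = buf.indexOf("\n")) >= 0) {
        const line = buf.slice(0, nl).trim();
        buf = buf.slice(nl + 1);
        if (line) process.stdout.write(JSON.parse(line).response ?? "");
      }
    }
    ```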
  • 3
    LLM CLI

    Access large language models from the command-line

    A CLI utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your own machine.
    Downloads: 9 This Week
    See Project
  • 4
    Anyquery

    Query anything (GitHub, Notion, +40 more) with SQL and let LLMs…

    ...Built on top of SQLite, the engine uses a plugin architecture that allows it to extend support to dozens of external services and data sources. Users can query structured files such as CSV, JSON, and Parquet as well as remote data sources like SaaS APIs, cloud storage services, and local applications. The platform also supports querying multiple data sources simultaneously and joining them together within a single SQL query, enabling powerful cross-system analysis. In addition to operating as a local query engine, the system can run as a MySQL-compatible server so that traditional database tools can connect to it.
    Downloads: 39 This Week
    See Project
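
    Because Anyquery can run as a MySQL-compatible server, any standard MySQL client should be able to reach it. A minimal TypeScript sketch using the mysql2 driver; the host, port, user, and the "github_my_issues" table name are all placeholders that depend on your local configuration and installed plugins.

    ```typescript
    import mysql from "mysql2/promise";

    // Connect to a locally running Anyquery instance in MySQL-server mode.
    // Host, port, and user are placeholders -- check your Anyquery config.
    const conn = await mysql.createConnection({
      host: "127.0.0.1",
      port: 8070,
      user: "root",
    });

    // Table names come from installed plugins; "github_my_issues" is illustrative.
    const [rows] = await conn.query("SELECT title, state FROM github_my_issues LIMIT 5");
    console.log(rows);

    await conn.end();
    ```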
  • 5
    node-llama-cpp

    Run AI models locally on your machine with Node.js bindings for llama.cpp

    node-llama-cpp is a JavaScript and Node.js binding that allows developers to run large language models locally using the high-performance inference engine provided by llama.cpp. The library enables applications built with Node.js to interact directly with local LLM models without requiring a remote API or external service. By using native bindings and optimized model execution, the framework allows developers to integrate advanced language model capabilities into desktop applications, server software, and command-line tools. The system automatically detects the available hardware on a machine and selects the most appropriate compute backend, including CPU or GPU acceleration. ...
    Downloads: 19 This Week
    See Project
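
    A minimal TypeScript sketch of local inference along the lines of node-llama-cpp's documented v3 API; the GGUF model path is a placeholder you must point at a model file on disk.

    ```typescript
    import { getLlama, LlamaChatSession } from "node-llama-cpp";

    // getLlama() probes the machine and picks the best available compute
    // backend (e.g. Metal, CUDA, or plain CPU), as described above.
    const llama = await getLlama();

    // Placeholder path -- supply any GGUF model file you have locally.
    const model = await llama.loadModel({ modelPath: "./models/model.gguf" });

    const context = await model.createContext();
    const session = new LlamaChatSession({ contextSequence: context.getSequence() });

    // Runs entirely in-process; no remote API involved.
    const answer = await session.prompt("In one sentence, what is a GGUF file?");
    console.log(answer);
    ```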
  • 6
    Hollama

    A minimal LLM chat app that runs entirely in your browser

    Hollama is a lightweight open-source chat application designed to run entirely within the browser while interacting with large language model servers. The project provides a minimal but powerful user interface for communicating with local or remote LLMs, including servers powered by Ollama or OpenAI-compatible APIs. Because the application runs as a static web interface, it does not require complex backend infrastructure and can be easily deployed or self-hosted. Hollama supports both text-based and multimodal interactions, allowing users to work with models that process images as well as text. ...
    Downloads: 14 This Week
    See Project
  • 7
    Agent Chat UI

    Web app for interacting with any LangGraph agent (PY & TS) via a chat interface

    Agent Chat UI is an open-source web application that provides a graphical interface for interacting with AI agents built using LangGraph and related frameworks. The project is implemented as a modern Next.js application and allows users to chat with agent workflows running on remote or local LangGraph servers. Through a simple configuration process, developers can connect the interface to a deployed agent by specifying the server URL, assistant identifier, and authentication credentials. Once connected, the interface enables real-time conversations where messages are sent to the agent and responses are streamed back to the chat interface. ...
    Downloads: 0 This Week
    See Project
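
    The connection settings mentioned above (server URL, assistant identifier, credentials) map directly onto the LangGraph JS SDK, which is presumably what the UI drives under the hood. A hedged TypeScript sketch; the URL, the "agent" assistant id, and the environment variable are placeholders.

    ```typescript
    import { Client } from "@langchain/langgraph-sdk";

    // The same three settings Agent Chat UI asks for: server URL,
    // assistant identifier, and (for deployed graphs) an API key.
    const client = new Client({
      apiUrl: "http://localhost:2024",        // placeholder deployment URL
      apiKey: process.env.LANGSMITH_API_KEY,  // placeholder credential
    });

    const thread = await client.threads.create();

    // Stream the agent's reply incrementally, as the chat UI does.
    const stream = client.runs.stream(thread.thread_id, "agent", {
      input: { messages: [{ role: "human", content: "Hello!" }] },
      streamMode: "messages",
    });

    for await (const chunk of stream) {
      console.log(chunk.event, chunk.data);
    }
    ```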
  • 8
    wllama

    WebAssembly binding for llama.cpp - Enabling on-browser LLM inference

    ...The library leverages WebAssembly SIMD capabilities to achieve efficient execution within modern browsers while maintaining compatibility across platforms. By running models locally on the user’s device, wllama enables privacy-preserving AI applications that do not require sending data to remote servers. The framework provides both high-level APIs for common tasks such as text generation and embeddings, as well as low-level APIs that expose tokenization, sampling controls, and model state management.
    Downloads: 6 This Week
    See Project
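
    A rough TypeScript sketch of in-browser inference with wllama, based on its published package layout; the wasm asset paths and the model URL are placeholders that vary with your bundler and model choice.

    ```typescript
    import { Wllama } from "@wllama/wllama";

    // Map logical names to the wasm binaries shipped with the package.
    // Exact paths depend on your bundler; these are placeholders.
    const CONFIG_PATHS = {
      "single-thread/wllama.wasm": "/esm/single-thread/wllama.wasm",
      "multi-thread/wllama.wasm": "/esm/multi-thread/wllama.wasm",
    };

    const wllama = new Wllama(CONFIG_PATHS);

    // Placeholder URL -- any small GGUF model reachable from the browser.
    await wllama.loadModelFromUrl("https://example.com/tiny-model.gguf");

    // High-level completion API; lower-level tokenization and sampling
    // controls are also exposed, per the description above.
    const text = await wllama.createCompletion("Once upon a time,", {
      nPredict: 50,
      sampling: { temp: 0.5, top_k: 40, top_p: 0.9 },
    });
    console.log(text);
    ```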
  • 9
    Jlama

    Jlama is a modern LLM inference engine for Java

    Jlama is a modern inference engine written entirely in Java that enables developers to run large language models locally within Java applications. Unlike frameworks that require external APIs or remote services, Jlama performs inference directly on a machine using pre-trained models. This allows organizations to integrate generative AI features into their systems while maintaining full control over data privacy and infrastructure. The engine supports a wide range of open-source model architectures and formats, including variants of Llama, Mistral, and other transformer-based models. ...
    Downloads: 8 This Week
    See Project
  • 10
    Osaurus

    AI edge infrastructure for macOS. Run local or cloud models

    ...The project provides a native runtime that allows applications to access large language models and AI tools directly on the user’s machine without relying entirely on cloud services. Osaurus supports running both local and remote models, enabling developers to build AI-powered applications that can operate offline or leverage external APIs when needed. The platform acts as an always-on runtime that coordinates AI tasks, tools, and workflows while enabling applications to communicate with models through standardized interfaces. Developers can extend the system through plugins that expose additional capabilities, tools, or services to the runtime using a structured plugin architecture. ...
    Downloads: 4 This Week
    See Project
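
    The "standardized interfaces" mentioned above appear to include an OpenAI-compatible HTTP API. Assuming that, here is a minimal TypeScript sketch against a locally running Osaurus instance; the port and model id are placeholders, so check your Osaurus configuration.

    ```typescript
    // Assumes Osaurus serves an OpenAI-compatible endpoint on localhost.
    // Port 1337 and the model id are placeholders.
    const res = await fetch("http://127.0.0.1:1337/v1/chat/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "local-model",
        messages: [{ role: "user", content: "Hello from TypeScript" }],
      }),
    });

    const data = await res.json();
    console.log(data.choices[0].message.content);
    ```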
  • 11
    llama.vim

    Vim plugin for LLM-assisted code/text completion

    ...The plugin enables developers to access AI-assisted text and code completion features without leaving their terminal-based development environment. Instead of relying on remote AI services, the plugin is designed to work with locally running LLM inference engines such as llama.cpp. This approach allows developers to benefit from AI-assisted coding features while maintaining full control over their data and avoiding external API dependencies. The plugin focuses on simplicity and performance, providing fast completions and editing assistance even on consumer-grade hardware. ...
    Downloads: 1 This Week
    See Project