Showing 15 open source projects for "openssh-server"

  • 1
    Flowise

    Drag & drop UI to build your customized LLM flow

    ...Open source is the core of Flowise, and it will always be free for commercial and personal use. Flowise supports different environment variables to configure your instance; you can specify them in the .env file inside the packages/server folder.
    Downloads: 37 This Week
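As a hedged illustration of the configuration style described above, a `packages/server/.env` file for a Flowise instance might look like the fragment below. `PORT`, `FLOWISE_USERNAME`, and `FLOWISE_PASSWORD` appear in Flowise's documented examples; the values and the comment placements are assumptions to verify against the project's own docs:

```ini
# packages/server/.env -- illustrative values only
PORT=3000
FLOWISE_USERNAME=admin
FLOWISE_PASSWORD=change-me
```

Variables set here are read when the server starts, so a restart is needed after edits.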
  • 2
    wllama

    WebAssembly binding for llama.cpp - Enabling on-browser LLM inference

    wllama is a WebAssembly-based library that enables large language model inference directly inside a web browser. Built as a binding for the llama.cpp inference engine, the project allows developers to run LLMs locally without requiring a server backend or dedicated GPU hardware. The library leverages WebAssembly SIMD capabilities to achieve efficient execution within modern browsers while maintaining compatibility across platforms. By running models locally on the user’s device, wllama enables privacy-preserving AI applications that do not require sending data to remote servers. ...
    Downloads: 4 This Week
  • 3
    DevDocs by CyberAGI

    Completely free, private, UI-based tech documentation MCP server

    DevDocs is an open-source documentation server designed to provide developers with a private, structured interface for browsing and interacting with technical documentation using AI tools. The system functions as a Model Context Protocol (MCP) server that allows large language models and developer assistants to access technical documentation in a structured and efficient way.
    Downloads: 0 This Week
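MCP itself is a JSON-RPC 2.0-based protocol. As a rough, hypothetical sketch (not DevDocs' actual code, and with the schema heavily simplified), this is the shape of how an MCP-style documentation server might dispatch a tool call from a model; the `search_docs` tool name and the `DOCS` store are invented for illustration:

```python
import json

# Hypothetical, simplified sketch of MCP-style tool dispatch.
# Real MCP uses JSON-RPC 2.0 over stdio or HTTP with a much fuller schema.
DOCS = {"flowise/env": "Environment variables are read from packages/server/.env."}

def handle_request(raw):
    """Dispatch a single JSON-RPC request; only a 'tools/call' on 'search_docs' is handled."""
    req = json.loads(raw)
    if req.get("method") == "tools/call" and req["params"]["name"] == "search_docs":
        query = req["params"]["arguments"]["query"]
        hits = [text for key, text in DOCS.items() if query in key]
        return {"jsonrpc": "2.0", "id": req["id"], "result": {"content": hits}}
    return {"jsonrpc": "2.0", "id": req.get("id"),
            "error": {"code": -32601, "message": "method not found"}}

resp = handle_request(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "search_docs", "arguments": {"query": "flowise"}},
}))
```

The point of the intermediary layer described above is exactly this shape: the model never parses raw documentation pages; it issues structured calls and receives structured results.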
  • 4
    AingDesk

    AI assistant that supports knowledge bases, model APIs

    ...The system supports additional features such as web search, intelligent agent workflows, and multi-model conversations within a single session. AingDesk can be deployed locally on personal machines or installed as a server using containerized environments. Its design emphasizes accessibility, making it suitable for both beginners and experienced developers who want to experiment with AI tools.
    Downloads: 1 This Week
  • 5
    Grounded Docs

    Open-Source Alternative to Context7, Nia, and Ref.Tools

    Grounded Docs is an open-source implementation of a Model Context Protocol server designed to expose documentation and structured information as tools that AI agents can query. The project allows language models and agent frameworks to retrieve and interact with documentation through standardized MCP interfaces. By acting as an intermediary layer between documentation sources and AI tools, the server enables models to access structured documentation in a consistent and machine-readable format. ...
    Downloads: 0 This Week
  • 6
    Secret Llama

    Fully private LLM chatbot that runs entirely within a browser

    Secret Llama is a privacy-first large-language-model chatbot that runs entirely inside your web browser, meaning no server is required and your conversation data never leaves your device. It focuses on open-source model support, letting you load families like Llama and Mistral directly in the client for fully local inference. Because everything happens in-browser, it can work offline once models are cached, which is helpful for air-gapped environments or travel.
    Downloads: 3 This Week
  • 7
    React Native AI

    Full stack framework for building cross-platform mobile AI apps

    ...It supports real-time streaming responses from multiple AI providers and enables developers to build chat interfaces, AI-driven image generation tools, and natural language features within mobile apps. The framework includes backend components such as an Express-based server proxy that handles authentication and API communication with model providers. Developers can also integrate multiple models and services through a unified interface, making it easier to experiment with different AI capabilities. Built-in theming and UI templates allow developers to quickly create polished interfaces for AI chat and generative features.
    Downloads: 2 This Week
  • 8
    WebLLM

    Bringing large-language models and chat to web browsers

    WebLLM is a modular, customizable JavaScript package that brings language model chat directly into web browsers with hardware acceleration. Everything runs inside the browser, with no server required, accelerated with WebGPU. This opens up opportunities to build AI assistants for everyone while preserving privacy and enjoying GPU acceleration. WebLLM offers a minimalist and modular interface to access the chatbot in the browser. The WebLLM package itself does not come with a UI and is designed in a modular way to hook into any UI component. ...
    Downloads: 4 This Week
  • 9
    PasteGuard

    Masks sensitive data and secrets before they reach AI

    PasteGuard is an open-source privacy proxy that protects sensitive information like personal data and API secrets by detecting and masking them before they reach large language model APIs such as OpenAI or Anthropic Claude. It sits between an application and the LLM provider, automatically replacing names, emails, tokens, and other personally identifiable information (PII) with placeholders so that external services never see raw sensitive values, and then optionally unmasking them in the...
    Downloads: 2 This Week
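The mask-then-unmask flow described above can be sketched in a few lines of Python. This is an illustrative toy, not PasteGuard's actual implementation; the regex patterns and placeholder format are simplified assumptions:

```python
import re

# Toy sketch of a PII-masking proxy's core idea -- not PasteGuard's real code.
# Patterns here are deliberately simplified examples.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"sk-[A-Za-z0-9]{16,}"),
}

def mask(text):
    """Replace sensitive matches with placeholders; return masked text plus a map for unmasking."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def unmask(text, mapping):
    """Restore the original values in a response coming back from the LLM provider."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

original = "Contact alice@example.com, key sk-abcdef1234567890"
masked, mapping = mask(original)
```

A real proxy sits in the request path and applies `mask` before forwarding to the provider and `unmask` on the streamed response, so the external service only ever sees placeholders.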
  • 10
    MCP Router

    A Unified MCP Server Management App (MCP Manager)

    MCP Router is an open-source management platform designed to simplify the deployment and coordination of Model Context Protocol (MCP) servers used by AI agents. MCP is an emerging standard that allows language models and AI assistants to connect to external tools, data sources, and services through a structured interface. The MCP Router project acts as a centralized manager that helps developers run, configure, and coordinate multiple MCP servers within a single environment. This enables AI...
    Downloads: 2 This Week
  • 11
    node-llama-cpp

    Run AI models locally on your machine with node.js bindings for llama

    ...By using native bindings and optimized model execution, the framework allows developers to integrate advanced language model capabilities into desktop applications, server software, and command-line tools. The system automatically detects the available hardware on a machine and selects the most appropriate compute backend, including CPU or GPU acceleration. Developers can use the library to perform tasks such as text generation, conversational chat, embedding generation, and structured output generation. ...
    Downloads: 2 This Week
  • 12
    TONL

    TONL (Token-Optimized Notation Language)

    ...TONL isn’t just a format — it includes a rich API for querying, indexing, modifying, and streaming data, along with tools for schema validation and TypeScript code generation. The platform comes with a complete command-line interface that supports interactive dashboards and cross-platform usage in browsers and server environments, and its high test coverage gives developers confidence in stability.
    Downloads: 0 This Week
  • 13
    Agent Chat UI

    Web app for interacting with any LangGraph agent (PY & TS) via a chat

    ...The project is implemented as a modern Next.js application and allows users to chat with agent workflows running on remote or local LangGraph servers. Through a simple configuration process, developers can connect the interface to a deployed agent by specifying the server URL, assistant identifier, and authentication credentials. Once connected, the interface enables real-time conversations where messages are sent to the agent and responses are streamed back to the chat interface. The project is designed to serve as a flexible frontend for agent-based AI systems, allowing developers to test and deploy conversational interfaces quickly. ...
    Downloads: 0 This Week
  • 14
    Farfalle

    AI search engine - self-host with local or cloud LLMs

    ...It can run either with local language models or with cloud-based providers, allowing developers to deploy it privately or integrate with hosted AI services. The architecture separates the frontend and backend, using modern web technologies such as Next.js and FastAPI to deliver an interactive interface and scalable server logic. Farfalle also includes an agent-based search workflow that plans queries and executes multiple search steps to produce more accurate results than traditional keyword searches. The system supports multiple external search providers and integrates caching and rate-limiting mechanisms to maintain reliability during heavy usage.
    Downloads: 0 This Week
  • 15
    AutoGPT.js

    Auto-GPT on the browser

    ...The system allows users to run an AI agent capable of performing tasks such as generating code, searching the web, and interacting with files on the local computer. Unlike traditional AutoGPT implementations that require server infrastructure, AutoGPT.js is designed to run primarily in the browser, making it easier to deploy and experiment with autonomous agents. The platform uses web APIs and language model integrations to give the agent the ability to plan tasks, execute commands, and store short-term memory during operations. Developers can also configure the system to connect to different language model APIs and adjust parameters such as temperature or prompt configuration. ...
    Downloads: 0 This Week