Showing 13 open source projects for "openssh-server"

  • 1
    Rust Docs MCP Server

    Prevents outdated Rust code suggestions from AI assistants

    The Rust Docs MCP Server fetches documentation for specified Rust crates, generates embeddings for the content, and provides an MCP tool to answer questions about the crate based on the documentation context. A sketch of such a tool call follows this entry.
    Downloads: 0 This Week
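    As an illustration of what the MCP tool integration looks like from a client's side, here is a minimal sketch of the JSON-RPC "tools/call" message an MCP client might send to a documentation server like this one. The envelope follows the MCP specification, but the tool name `query_rust_docs` and its argument keys are hypothetical placeholders, not this project's documented interface.

    use serde_json::json;

    fn main() {
        // Hypothetical MCP "tools/call" request an AI assistant's MCP client
        // might send (typically over stdio). Only the JSON-RPC envelope and the
        // "tools/call" method come from the MCP spec; the tool name and
        // argument keys below are placeholders.
        let request = json!({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/call",
            "params": {
                "name": "query_rust_docs",
                "arguments": {
                    "crate": "serde",
                    "question": "How do I derive Deserialize for an enum?"
                }
            }
        });
        println!("{}", serde_json::to_string_pretty(&request).unwrap());
    }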
  • 2
    Zed

    High-performance, multiplayer code editor from the creators of Atom

    ...Multibuffers compose excerpts from across the codebase into one editable surface. You can evaluate code inline via Jupyter runtimes and collaboratively edit notebooks. Zed supports many languages via Tree-sitter, WebAssembly, and the Language Server Protocol, and its fast native terminal integrates tightly with the language-aware task runner and AI capabilities. First-class modal editing is available via Vim bindings, including features like text objects and marks. Zed is built by a global community of thousands of developers, and hundreds of extensions broaden language support, offer different themes, and more.
    Downloads: 41 This Week
  • 3
    mistral.rs

    Fast, flexible LLM inference

    mistral.rs is a fast and flexible LLM inference engine implemented in Rust, designed to run and serve modern language models with an emphasis on performance and practical deployment. It provides multiple entry points for developers, including a CLI for running models locally and an HTTP server that exposes an OpenAI-compatible API surface for easy integration with existing clients. The project includes hardware-aware tooling that can benchmark a system and choose sensible quantization and device-mapping strategies, helping users get strong performance without manual tuning. It also supports serving multiple models from the same server process, enabling routing or quick switching between models depending on workload needs. ...
    Downloads: 1 This Week
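    Because the server exposes an OpenAI-compatible surface, existing clients usually only need a base-URL change to talk to it. The sketch below uses the `reqwest` crate (with the "blocking" and "json" features) and `serde_json`; the port, model name, and the /v1/chat/completions path are assumptions based on the common OpenAI API convention rather than mistral.rs documentation, so adjust them to your local setup.

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Assumed local OpenAI-compatible endpoint; the port and model name are
        // placeholders for whatever your server actually exposes.
        let body = json!({
            "model": "mistral-7b-instruct",
            "messages": [
                { "role": "user", "content": "Summarize what an inference engine does." }
            ]
        });

        let client = reqwest::blocking::Client::new();
        let resp: serde_json::Value = client
            .post("http://localhost:8080/v1/chat/completions")
            .json(&body)
            .send()?
            .json()?;

        // Print the reply if the response follows the usual OpenAI schema.
        if let Some(text) = resp["choices"][0]["message"]["content"].as_str() {
            println!("{text}");
        }
        Ok(())
    }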
  • 4
    Text Embeddings Inference

    High-performance inference server and API layer for text embedding models

    Text Embeddings Inference is a high-performance server designed to serve text embedding models efficiently in production environments. It focuses on delivering fast and scalable embedding generation by leveraging optimized inference techniques and modern hardware acceleration. It is built to support transformer-based embedding models, making it suitable for tasks such as semantic search, clustering, and retrieval-augmented systems.
    Downloads: 2 This Week
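    To show what embedding generation looks like to a client, here is a minimal sketch that posts one sentence to a locally running instance using `reqwest` (with the "blocking" and "json" features) and `serde_json`. The port and the /embed route are assumptions based on the project's commonly documented defaults; verify them against the Text Embeddings Inference docs for your deployment.

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Assumed local embedding server; port and route are placeholders.
        let client = reqwest::blocking::Client::new();
        let resp: serde_json::Value = client
            .post("http://localhost:8080/embed")
            .json(&json!({ "inputs": "What is semantic search?" }))
            .send()?
            .json()?;

        // The response is expected to be a batch of embedding vectors,
        // one vector (array of floats) per input string.
        if let Some(first) = resp[0].as_array() {
            println!("embedding dimensions: {}", first.len());
        }
        Ok(())
    }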
  • 5
    webclaw

    Fast, local-first web content extraction for LLMs

    ...The tool addresses a major inefficiency in AI workflows by removing irrelevant elements like navigation menus, ads, and scripts, significantly reducing token usage when feeding data into language models. It supports multiple modes of operation, including CLI usage, REST API access, and an MCP server for direct integration with agent-based systems. Webclaw also provides advanced capabilities such as recursive crawling, structured JSON extraction, summarization, and content comparison, making it suitable for research and data pipelines. Its local-first architecture ensures privacy and eliminates the need for API keys.
    Downloads: 5 This Week
  • 6
    shimmy

    Python-free Rust inference server

    The shimmy project is a lightweight local inference server designed to run large language models with minimal overhead. Written primarily in Rust, the tool provides a small standalone binary that exposes an API compatible with the OpenAI interface, allowing existing applications to interact with local models without significant code changes. This compatibility enables developers to replace remote AI services with locally hosted models while keeping their existing software architecture intact. ...
    Downloads: 0 This Week
  • 7
    Google Workspace CLI

    Command-line tool for Drive, Gmail, Calendar, Sheets, Docs, Chat, etc.

    Google Workspace CLI (gws) is a command-line tool designed to interact with Google Workspace services such as Drive, Gmail, Calendar, Sheets, and more from a single interface. It dynamically generates its command structure using Google’s Discovery Service, allowing it to automatically support new API endpoints as they become available. The tool eliminates the need for manual REST API calls by providing structured commands and built-in help for each resource and method. It outputs structured...
    Downloads: 16 This Week
  • 8
    Monoio

    Rust async runtime based on io-uring

    Monoio is a Rust asynchronous runtime designed for high-performance I/O-bound servers and applications, built directly around native OS async I/O primitives (io_uring on Linux, epoll/kqueue on other Unix-like systems) rather than layering atop an existing runtime. Its design centers on a “thread-per-core” model where each core runs its own event loop, minimizing cross-thread synchronization, avoiding the overhead and complexity of work-stealing task scheduling, and letting developers write... A minimal echo-server sketch follows this entry.
    Downloads: 1 This Week
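    To make the thread-per-core model concrete, below is a minimal single-threaded TCP echo server in the style of Monoio's documented examples; exact trait imports and feature flags can differ between Monoio versions, so treat the details as approximate. In a thread-per-core deployment you would start one such runtime per core.

    use monoio::io::{AsyncReadRent, AsyncWriteRentExt};
    use monoio::net::{TcpListener, TcpStream};

    // One event loop pinned to the current thread; no work-stealing scheduler.
    #[monoio::main]
    async fn main() {
        let listener = TcpListener::bind("127.0.0.1:50002").unwrap();
        loop {
            match listener.accept().await {
                Ok((stream, addr)) => {
                    println!("accepted connection from {addr}");
                    monoio::spawn(echo(stream));
                }
                Err(e) => {
                    eprintln!("accept failed: {e}");
                    return;
                }
            }
        }
    }

    async fn echo(mut stream: TcpStream) -> std::io::Result<()> {
        let mut buf: Vec<u8> = Vec::with_capacity(8 * 1024);
        loop {
            // Monoio's io_uring-friendly API takes ownership of the buffer and
            // returns it alongside the result.
            let (res, b) = stream.read(buf).await;
            buf = b;
            if res? == 0 {
                return Ok(());
            }
            let (res, b) = stream.write_all(buf).await;
            buf = b;
            res?;
        }
    }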
  • 9
    ort

    Fast ML inference & training for ONNX models in Rust

    ...The library emphasizes speed and efficiency, leveraging hardware acceleration across CPUs, GPUs, and specialized accelerators to deliver low-latency inference both on-device and in server environments. One of its key strengths is its flexibility, as it supports multiple backends and allows developers to configure execution providers depending on available hardware. ort also includes advanced capabilities such as model compilation and optimization, reducing startup time and improving runtime performance in production systems.
    Downloads: 0 This Week
  • 10
    Cog

    Package and deploy machine learning models using Docker containers

    ...Cog also resolves compatibility issues between frameworks and GPU libraries by automatically selecting compatible combinations of CUDA, cuDNN, and machine learning frameworks such as PyTorch or TensorFlow. Cog automatically generates a RESTful HTTP API for running predictions, enabling models to be accessed programmatically through a built-in prediction server.
    Downloads: 1 This Week
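    Because the generated prediction server is plain HTTP, it can be called from any language. Below is a hedged sketch in Rust using `reqwest` (with the "blocking" and "json" features) and `serde_json`; the port, the /predictions route, and the "prompt" input field are assumptions about a typical Cog container and model, so check the Cog documentation and your model's schema before relying on them.

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Assumes a Cog container started with something like
        // `docker run -p 5000:5000 <image>`; the route, port, and input field
        // name are assumptions, and the input fields depend on the model.
        let client = reqwest::blocking::Client::new();
        let resp: serde_json::Value = client
            .post("http://localhost:5000/predictions")
            .json(&json!({ "input": { "prompt": "an astronaut riding a horse" } }))
            .send()?
            .json()?;

        println!("status: {}", resp["status"]);
        println!("output: {}", resp["output"]);
        Ok(())
    }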
  • 11
    Phantasm

    Toolkits to create a human-in-the-loop approval layer

    Phantasm offers toolkits to create a human-in-the-loop approval layer that monitors and guides AI agents' workflows in real time, ensuring safety and reliability in AI operations.
    Downloads: 0 This Week
  • 12
    Code2Prompt

    Convert codebases into structured prompts optimized for LLM analysis

    ...The generated output can be saved to a file, printed to standard output, or copied to the clipboard for immediate use. In addition to the core command line interface, the project also includes a library, Python bindings, and an MCP server.
    Downloads: 1 This Week
  • 13
    Extractous

    Fast and efficient unstructured data extraction

    ...For broader format support, the system combines its Rust core with ahead-of-time compiled Apache Tika shared libraries, which allows it to extend parsing coverage while still avoiding traditional server-based overhead. It also supports OCR for images and scanned documents through Tesseract, making it useful for document ingestion pipelines that include image-based or scanned inputs.
    Downloads: 0 This Week