Showing 7 open source projects for "work management"

  • 1
    Gollama

    Go manage your Ollama models

    ...It provides a TUI that lets users list, inspect, sort, filter, edit, run, unload, copy, rename, delete, and push models from one place rather than relying entirely on manual command-line workflows. The project is aimed at developers and local AI users who frequently work with multiple Ollama models and want a more efficient operational layer for everyday maintenance. Beyond standard model management, Gollama can display metadata such as size, quantization level, model family, and modification date, which helps users compare models quickly. One of its more distinctive capabilities is a VRAM estimation system that can calculate memory requirements, estimate context limits, and help users choose quantization settings that fit available hardware.
    Downloads: 0 This Week
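Gollama's actual estimation formula is not reproduced in this listing; as a rough sketch of the kind of arithmetic such a VRAM estimator performs (the 20% overhead multiplier for KV cache and activations is an assumption, not Gollama's value):

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead: float = 1.2) -> float:
    """Back-of-the-envelope VRAM estimate for a quantized model.

    params_billion:  model size in billions of parameters
    bits_per_weight: quantization width (e.g. 4 for Q4, 16 for fp16)
    overhead:        assumed multiplier covering KV cache and activations
    """
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ≈ 1 GB
    return weights_gb * overhead

# A 7B model at 4-bit quantization: 7 * 4/8 = 3.5 GB of weights,
# plus the assumed 20% overhead, ≈ 4.2 GB total.
print(round(estimate_vram_gb(7, 4), 2))
```

This illustrates why quantization choice dominates the fit: moving the same 7B model from fp16 to 4-bit cuts the weight footprint by a factor of four.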
  • 2
    METATRON

    AI-powered penetration testing assistant using a local LLM on Linux

    METATRON is a multi-agent AI orchestration framework designed to coordinate complex workflows across multiple intelligent agents. It provides a structured system for task delegation, communication, and collaboration between agents. The framework emphasizes scalability, allowing multiple agents to work together on large or complex problems. It includes mechanisms for managing context, memory, and execution flow across tasks. METATRON is particularly useful for building advanced AI systems...
    Downloads: 0 This Week
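METATRON's own interfaces are not shown in this listing; as a generic sketch of the task-delegation pattern such orchestration frameworks implement (all class and method names here are hypothetical, not METATRON's API), an orchestrator might route each task to an agent by declared capability and keep a shared log as cross-task memory:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skills: set

    def handle(self, task: str) -> str:
        # A real agent would invoke an LLM here; this stub just records the routing.
        return f"{self.name} handled {task!r}"

@dataclass
class Orchestrator:
    agents: list
    log: list = field(default_factory=list)  # shared context/memory across tasks

    def delegate(self, task: str, skill: str) -> str:
        # Route the task to the first agent advertising the required skill.
        for agent in self.agents:
            if skill in agent.skills:
                result = agent.handle(task)
                self.log.append(result)
                return result
        raise LookupError(f"no agent with skill {skill!r}")

orch = Orchestrator([Agent("recon", {"scan"}), Agent("report", {"summarize"})])
print(orch.delegate("enumerate open ports", "scan"))
```

The shared `log` stands in for the context and memory mechanisms the description mentions; a production framework would persist richer state per agent and per task.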
  • 3
    Chat UI

    The open source codebase powering HuggingChat

    ...Built with modern web technologies such as SvelteKit and backed by MongoDB for persistence, the interface provides a responsive environment for multi-turn conversations, file handling, and configuration management. Chat UI connects to any service that exposes an OpenAI-compatible API endpoint, allowing it to work with a wide range of models and inference providers. The platform supports advanced capabilities such as multimodal input, tool integration through Model Context Protocol servers, and intelligent routing that selects the most appropriate model for each request.
    Downloads: 2 This Week
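Chat UI's own configuration format isn't reproduced here; as an illustration of what "OpenAI-compatible" means in practice, any backend that accepts the standard `/v1/chat/completions` request shape can be plugged in. A minimal request body looks like this (the base URL and model name are placeholders, not values from Chat UI's docs):

```python
import json

# Placeholder endpoint and model; any OpenAI-compatible server accepts this shape.
base_url = "http://localhost:8000/v1"
payload = {
    "model": "my-local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}
body = json.dumps(payload)
print(f"POST {base_url}/chat/completions")
print(body)
```

Because the wire format is shared, swapping inference providers is a configuration change rather than a code change.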
  • 4
    Cake

    Distributed LLM and StableDiffusion inference

    Cake is a compact, powerful toolkit that combines a flexible TCP/UDP proxy, port forwarding system, and connection manager designed for both development and penetration testing scenarios. It enables users to create complex networking flows where traffic can be proxied, relayed, and manipulated between endpoints — useful for debugging networked applications, inspecting protocols, or tunneling traffic through different hops. The tool is designed to work with multiple protocols and supports...
    Downloads: 1 This Week
  • 5
    The LLM Evaluation guidebook

    Sharing both practical insights and theoretical knowledge about LLM evaluation

    The Evaluation Guidebook is an open educational resource created by Hugging Face that explains how to evaluate machine learning and large language models effectively. It compiles practical insights and theoretical knowledge gathered from real-world evaluation work, including experience managing the Open LLM Leaderboard and designing evaluation tools. The guidebook teaches developers how to design evaluation pipelines, select appropriate metrics, and interpret model performance results. It...
    Downloads: 0 This Week
  • 6
    AI Agents From Scratch

    Demystify AI agents by building them yourself. Local LLMs

    AI Agents from Scratch is an educational repository designed to teach developers how to build autonomous AI agents using large language models and modern AI frameworks. The project walks through the process of constructing agents step by step, beginning with simple prompt-based interactions and gradually introducing more advanced capabilities such as planning, tool use, and memory. The repository provides example implementations that demonstrate how language models can interact with external...
    Downloads: 0 This Week
  • 7
    Aviary

    Ray Aviary - evaluate multiple LLMs easily

    Aviary is an LLM serving solution that makes it easy to deploy and manage a variety of open source LLMs. It provides an extensive suite of pre-configured open source LLMs with defaults that work out of the box, and supports Transformer models hosted on the Hugging Face Hub or present on local disk. Aviary has native support for autoscaling and multi-node deployments thanks to Ray and Ray Serve. Aviary can scale to zero and create new model replicas (each composed of multiple GPU workers) in response to demand. Ray ensures that orchestration and resource management are handled automatically. ...
    Downloads: 1 This Week