Showing 32 open source projects for "code server"

  • 1
    Kimi Code CLI

    Kimi Code CLI is your next CLI agent

    Kimi CLI is a command-line AI agent that brings an intelligent software development assistant directly into your terminal, helping you with coding tasks, shell operations, and workflow automation without leaving your command prompt. It supports an interactive shell-like user interface where you can chat with the agent, request code edits, run shell commands, and receive contextual suggestions as you work, creating a seamless blend of AI-augmented development and traditional terminal usage. The tool includes integration with Zsh so that users can activate AI assistance via a hotkey while staying within their favorite shell environment, and it can serve as an Agent Client Protocol (ACP) server to bridge AI functionality into compatible IDEs and editors. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 2
    IDA Pro MCP

    MCP Server for IDA Pro

    The IDA Pro MCP Server is a Model Context Protocol (MCP) server designed to integrate with IDA Pro, a popular disassembler and debugger. It enables AI assistants to interact with IDA Pro, facilitating tasks such as code analysis and reverse engineering.
    Downloads: 11 This Week
    Last Update:
    See Project
  • 3
    Qwen-Agent

    Agent framework and applications built upon Qwen>=3.0

    Qwen-Agent is a framework for building applications and agents on Qwen models (version 3.0+). It provides components for instruction following, tool usage (function calling), planning, memory, retrieval-augmented generation (RAG), and a code interpreter. It ships with example applications (Browser Assistant, Code Interpreter, Custom Assistant) and supports GUI front ends, backends, and server setups. Agent workflows can maintain context and memory to carry out multi-turn or more complex logic over time, and the framework serves as the backend for Qwen Chat, among other use cases. A built-in Code Interpreter tool can execute code locally as part of agent workflows; a minimal usage sketch follows this entry.
    Downloads: 3 This Week
    Last Update:
    See Project
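    Below is a minimal, hedged sketch of using Qwen-Agent's Assistant class with the built-in Code Interpreter tool; the model name, endpoint URL, and prompt are illustrative assumptions, not required values.

        from qwen_agent.agents import Assistant

        # Model settings are assumptions; point them at whatever OpenAI-compatible
        # Qwen (>= 3.0) endpoint you actually run.
        llm_cfg = {
            "model": "Qwen3-32B",
            "model_server": "http://localhost:8000/v1",
            "api_key": "EMPTY",
        }

        # Enable the built-in Code Interpreter tool mentioned in the description.
        bot = Assistant(llm=llm_cfg, function_list=["code_interpreter"])

        messages = [{"role": "user", "content": "Plot y = x**2 for x in 0..10 and describe the curve."}]

        # bot.run streams intermediate message lists; keep the last one as the final answer.
        final = None
        for responses in bot.run(messages=messages):
            final = responses
        print(final)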
  • 4
    mcpo

    A simple, secure MCP-to-OpenAPI proxy server

    mcpo is a minimal bridge that exposes any MCP tool as an OpenAPI-compatible HTTP server. Instead of writing glue code, you point mcpo at an MCP server command and it generates REST endpoints and an OpenAPI spec that other systems (or LLM agent frameworks) can call immediately. This design lets you reuse a growing library of MCP servers with platforms that only understand HTTP+OpenAPI, unifying tool access across ecosystems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 5
    ChatGLM3

    ChatGLM3 series: Open Bilingual Chat LLMs | Open Source Bilingual Chat

    ChatGLM3 is ZhipuAI & Tsinghua KEG’s third-gen conversational model suite centered on the 6B-parameter ChatGLM3-6B. It keeps the series’ smooth dialog and low deployment cost while adding native tool use (function calling), a built-in code interpreter, and agent-style workflows. The family includes base and long-context variants (8K/32K/128K). The repo ships Python APIs, CLI and web demos (Gradio/Streamlit), an OpenAI-format API server, and a compact fine-tuning kit. Quantization (4/8-bit), CPU/MPS support, and accelerator backends (TensorRT-LLM, OpenVINO, chatglm.cpp) enable lightweight local or edge deployment.
    Downloads: 7 This Week
    Last Update:
    See Project
  • 6
    gpt-oss

    gpt-oss-120b and gpt-oss-20b are two open-weight language models

    ...Both models use a native MXFP4 quantization for efficient memory use and support OpenAI’s Harmony response format, enabling transparent full chain-of-thought reasoning and advanced tool integrations such as function calling, browsing, and Python code execution. The repository provides multiple reference implementations—including PyTorch, Triton, and Metal—for educational and experimental use, as well as example clients and tools like a terminal chat app and a Responses API server.
    Downloads: 10 This Week
    Last Update:
    See Project
  • 7
    ClearML

    Streamline your ML workflow

    ClearML is an open source platform that automates and simplifies developing and managing machine learning solutions for thousands of data science teams all over the world. It is designed as an end-to-end MLOps suite, letting you focus on developing your ML code and automation while ClearML ensures your work is reproducible and scalable. The ClearML Python package integrates ClearML into your existing scripts by adding just two lines of code, and optionally extends your experiments and other workflows with ClearML's powerful and versatile set of classes and methods (a minimal sketch of the two-line integration follows this entry). The ClearML Server stores experiment, model, and workflow data, and supports the Web UI experiment manager and MLOps automation for reproducibility and tuning. ...
    Downloads: 0 This Week
    Last Update:
    See Project
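    A minimal sketch of the two-line integration referenced above; the project and task names are placeholders, and everything after Task.init is your existing script.

        from clearml import Task

        # These two lines are the integration: ClearML records the run's code,
        # arguments, console output, and framework artifacts from here on.
        task = Task.init(project_name="examples", task_name="my_experiment")

        # ... the rest of your existing training or data-processing script ...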
  • 8
    nanobot

    🐈 nanobot: The Ultra-Lightweight Clawdbot / OpenClaw

    nanobot is an ultra-lightweight personal AI assistant designed to deliver powerful agent capabilities without unnecessary complexity. Built in just ~4,000 lines of clean, readable code, it offers a minimalist alternative to heavyweight agent frameworks while retaining core intelligence and extensibility. nanobot is optimized for speed and efficiency, enabling fast startup times and low resource usage across environments. Its research-ready architecture makes it easy for developers to...
    Downloads: 15 This Week
    Last Update:
    See Project
  • 9
    FastKoko

    Dockerized FastAPI wrapper for Kokoro-82M text-to-speech model

    FastKoko is a self-hosted text-to-speech server built around the Kokoro-82M model and exposed through a FastAPI backend. It is designed to be easy to deploy via Docker, with separate CPU and GPU images so that users can choose between pure CPU inference and NVIDIA GPU acceleration. The project exposes an OpenAI-compatible speech endpoint, which means existing code that talks to the OpenAI audio API can often be pointed at a Kokoro-FastAPI instance with minimal changes (see the sketch after this entry). ...
    Downloads: 2 This Week
    Last Update:
    See Project
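    A hedged sketch of pointing existing OpenAI-style speech code at a FastKoko instance; the port (8880), model identifier, and voice name are assumptions chosen for illustration, so check your instance's documentation for the actual values.

        from openai import OpenAI

        # Reuse the standard OpenAI client, but aim it at the self-hosted server.
        client = OpenAI(base_url="http://localhost:8880/v1", api_key="not-needed")

        speech = client.audio.speech.create(
            model="kokoro",      # assumed model identifier exposed by the server
            voice="af_bella",    # assumed voice name; list voices on your instance
            input="Hello from a self-hosted Kokoro text-to-speech server.",
        )
        speech.write_to_file("hello.mp3")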
  • 10
    Serena

    Agent toolkit providing semantic retrieval and editing capabilities

    Serena is a coding-focused agent toolkit that turns an LLM into a practical software-engineering agent with semantic retrieval and editing over real repositories. It operates as an MCP server (and other integrations), exposing IDE-like tools so agents can locate symbols, reason about code structure, make targeted edits, and validate changes. The toolkit is LLM-agnostic and framework-agnostic, positioning itself as a drop-in capability for different chat UIs, orchestrators, or custom agent stacks. It emphasizes symbol-level understanding rather than naive file-wide diffs, enabling more precise refactors and additions. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    A.I.G

    Full-stack AI Red Teaming platform

    AI-Infra-Guard is a powerful open-source security platform from Tencent’s Zhuque Lab designed to assess the safety and resilience of AI infrastructures, codebases, and components through automated scanning and evaluation tools. It brings together AI infrastructure vulnerability scanning, MCP server risk analysis, and jailbreak evaluation into a unified workflow so that enterprises and individuals can identify critical security issues without relying on external services. Users can deploy it...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 12
    OpenVoice

    Instant voice cloning by MIT and MyShell. Audio foundation model

    OpenVoice is a versatile instant voice cloning system that can replicate a speaker’s tone color from just a short audio clip and then generate speech in multiple languages. It is designed not only to match the timbre of the reference voice, but also to give granular control over style parameters such as emotion, accent, rhythm, pauses, and intonation. The model supports cross-lingual and even zero-shot cross-lingual voice cloning, so a speaker recorded in one language can be made to speak...
    Downloads: 19 This Week
    Last Update:
    See Project
  • 13
    DeerFlow

    Deep Research framework, combining language models with tools

    ...“planner,” “searcher,” “coder,” “report generator”) that collaborate in a structured workflow, allowing tasks like literature reviews, data gathering, data analysis, code execution, and final report generation to be largely automated. It supports asynchronous task coordination, modular tool integration, and orchestrates the data flow between agents — making it suitable for large-scale or multi-stage research pipelines. Users can deploy it locally or on server infrastructure, integrate custom tools, and benefit from its flexible configuration.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    HunyuanOCR

    OCR expert VLM powered by Hunyuan's native multimodal architecture

    ...HunyuanOCR handles complex documents: multi-column layouts, tables, mathematical formulas, mixed languages, handwritten or stylized fonts, receipts, tickets, and even video-frame subtitles. The project provides code, pretrained weights, and inference instructions, making it feasible to deploy locally or on a server, and to integrate with applications.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 15
    Archon

    The knowledge and task management backbone for AI coding assistants

    Archon is an open-source “command center” designed to enhance AI coding assistant workflows by giving developers a centralized environment for knowledge management, context engineering, and task coordination across AI agents. It acts as a backend (including an MCP server) that allows different AI coding tools and assistants to share the same structured context, knowledge base, and task lists, improving consistency, productivity, and collaboration across multi-agent interactions. Users can import documentation, project files, and external knowledge so that assistants like Claude Code, Cursor, or other LLM-powered tools work with up-to-date, project-specific context rather than relying on limited prompt memory. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 16
    FlowLens MCP

    Open-source MCP server that gives your coding agent

    FlowLens MCP Server is an open-source tool designed to give AI-powered coding agents (like Claude Code, Cursor, GitHub Copilot / Codex, and others) full, replayable browser context to dramatically improve debugging, bug reporting, and regression testing for web applications. It works together with a companion browser extension: when a user reproduces a bug or a complicated UI interaction, the extension captures a rich session log, including screen/video recording, network traffic, console logs, DOM events, storage changes, and more, and exports it. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Elyra

    Elyra extends JupyterLab with an AI centric approach

    Elyra is a set of AI-centric extensions to JupyterLab Notebooks. The Elyra Getting Started Guide includes more details on these features. A version-specific summary of new features is located on the releases page.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    ChatGPT Retrieval Plugin

    The ChatGPT Retrieval Plugin lets you easily find personal documents

    ...It can serve as a custom GPT plugin or function-calling backend so that a chat session can “look up” relevant documents based on user queries, inject those results into context, and respond more knowledgeably about a private knowledge base. The repo provides code for ingestion pipelines (embedding documents), APIs for querying, local server components, and privacy / PII detection modules. It also contains plugin manifest files (OpenAPI spec, plugin JSON) so that the retrieval backend can be registered in a plugin ecosystem. Because retrieval is often needed to make LLMs “know what’s in your docs” without leaking everything, this plugin aims to be a secure, flexible building block for retrieval-augmented generation (RAG) systems. A query sketch follows this entry.
    Downloads: 0 This Week
    Last Update:
    See Project
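    A hedged sketch of querying a running instance of the plugin's REST API; the localhost URL, bearer token, and request body shown here are assumptions for illustration and should be checked against the repo's OpenAPI spec.

        import requests

        # Assumed local deployment and bearer token; adjust to your instance.
        url = "http://localhost:8000/query"
        headers = {"Authorization": "Bearer YOUR_BEARER_TOKEN"}

        # Ask the retrieval backend for the top matching document chunks.
        payload = {"queries": [{"query": "What is our refund policy?", "top_k": 3}]}

        resp = requests.post(url, json=payload, headers=headers, timeout=30)
        resp.raise_for_status()
        print(resp.json())  # matching chunks and scores, grouped per query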
  • 19
    Red Discord Bot

    A multi-function Discord bot

    Red is a fully modular bot, meaning all features and commands can be enabled/disabled to your liking, making it completely customizable. This is a self-hosted bot, meaning you will need to host and maintain your own instance. You can turn Red into an admin bot, music bot, trivia bot, new best friend or all of these together! CustomCommands allows you to create simple commands for your bot without requiring you to code your own cog for Red. If the command you attempt to create shares a name...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 20
    Klavis AI

    MCP integration platforms for AI agents to use tools at any scale

    Klavis AI is a Y Combinator X25-backed open-source infrastructure platform that enables AI agents to reliably connect with external tools and services at scale through Model Context Protocol (MCP). Founded by ex-Google DeepMind and ex-Lyft engineers, Klavis provides 50+ production-ready MCP servers with enterprise OAuth support for GitHub, Slack, Gmail, Salesforce, Linear, Notion, and more. The flagship product Strata solves tool overload through progressive discovery, achieving +13% higher...
    Downloads: 10 This Week
    Last Update:
    See Project
  • 21
    Hamilton DAGWorks

    Helps scientists define testable, modular, self-documenting dataflow

    Hamilton is a lightweight Python library for directed acyclic graphs (DAGs) of data transformations. Your DAG is portable; it runs anywhere Python runs, whether it's a script, notebook, Airflow pipeline, FastAPI server, etc. Your DAG is expressive; Hamilton has extensive features to define and modify the execution of a DAG (e.g., data validation, experiment tracking, remote execution). To create a DAG, write regular Python functions that specify their dependencies with their parameters. As shown below, it results in readable code that can always be visualized. ...
    Downloads: 0 This Week
    Last Update:
    See Project
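    A minimal sketch of the pattern described above (“as shown below”): node functions whose parameter names declare their dependencies, plus a driver that executes the requested outputs. The function names and CSV input are illustrative assumptions.

        # my_functions.py -- every function is a DAG node; parameters name upstream nodes
        import pandas as pd

        def spend(spend_csv_path: str) -> pd.Series:
            # Raw spend column loaded from a CSV supplied at execution time.
            return pd.read_csv(spend_csv_path)["spend"]

        def avg_3wk_spend(spend: pd.Series) -> pd.Series:
            # Rolling three-week average of spend.
            return spend.rolling(3).mean()

        # run.py -- build the DAG from the module above and execute it
        from hamilton import driver
        import my_functions

        dr = driver.Driver({}, my_functions)  # config dict + module(s) with node functions
        result = dr.execute(
            ["spend", "avg_3wk_spend"],  # outputs to compute
            inputs={"spend_csv_path": "marketing_spend.csv"},
        )
        print(result)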
  • 22
    AWS MCP Servers

    Helping you get the most out of AWS, wherever you use MCP

    AWS MCP Servers are a collection of remotely hosted, fully-managed Model Context Protocol (MCP) servers by AWS, providing AI applications with real-time access to AWS documentation, API references, best practices, and infrastructure-management capabilities via natural-language workflows. An MCP Server is a lightweight program that exposes specific capabilities through the standardized Model Context Protocol. Host applications (such as chatbots, IDEs, and other AI tools) have MCP clients that...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 23
    Sygil WebUI

    Stable Diffusion web UI

    Sygil WebUI is a browser-based interface for running Stable Diffusion image generation locally or on a server, wrapping common text-to-image and image-to-image workflows into a practical UI. It provides multiple UI modes (including a legacy Gradio interface) and focuses on making iterative prompting, parameter tuning, and post-processing accessible without writing code. The UI exposes core generation controls like resolution, CFG guidance, sampling steps, samplers, seeds, and batch generation so users can reproduce results and refine outputs systematically. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    SageMaker Hugging Face Inference Toolkit

    Library for serving Transformers models on Amazon SageMaker

    SageMaker Hugging Face Inference Toolkit is an open-source library for serving Transformers models on Amazon SageMaker. This library provides default pre-processing, prediction, and post-processing for certain Transformers models and tasks. It utilizes the SageMaker Inference Toolkit for starting up the model server, which is responsible for handling inference requests; a deployment sketch follows this entry. For the Dockerfiles used for building SageMaker Hugging Face Containers, see AWS Deep Learning Containers. The SageMaker Hugging...
    Downloads: 0 This Week
    Last Update:
    See Project
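    A hedged deployment sketch using the SageMaker Python SDK with the Hugging Face Hub environment variables this toolkit reads; the model ID, IAM role, instance type, and framework versions are illustrative assumptions, so check the supported-container matrix before use.

        from sagemaker.huggingface import HuggingFaceModel

        # HF_MODEL_ID / HF_TASK tell the toolkit which Hub model and pipeline task to load.
        hub_env = {
            "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
            "HF_TASK": "text-classification",
        }

        model = HuggingFaceModel(
            env=hub_env,
            role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # assumed role ARN
            transformers_version="4.26",  # assumed versions; see the AWS DLC matrix
            pytorch_version="1.13",
            py_version="py39",
        )

        predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
        print(predictor.predict({"inputs": "This listing was easy to deploy."}))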
  • 25
    SageMaker Inference Toolkit

    Serve machine learning models within a Docker container

    ...This library's serving stack is built on Multi Model Server, and it can serve your own models or those you trained on SageMaker using machine learning frameworks with native SageMaker support.
    Downloads: 0 This Week
    Last Update:
    See Project