Showing 1872 open source projects for "python::module"

  • 1
    Diplomacy Cicero

    Code for Cicero, an AI agent that plays the game of Diplomacy

    ...It supports two variants: Cicero (which handles full “press” negotiation) and Diplodocus (a variant focused on no-press diplomacy) as described in the README. The codebase is implemented primarily in Python with performance-critical components in C++ (via pybind11 bindings) and is configured to run in a high‐GPU cluster environment. Configuration is managed via protobuf files to define tasks such as self-play, benchmark agent comparisons, and RL training. The project is now archived and read-only, reflecting that it is no longer actively developed but remains publicly available for research use.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 2
    Agent Squad

    Flexible and powerful framework for managing multiple AI agents

    Agent-Squad is a flexible and powerful framework for managing multiple AI agents and handling complex conversations. It intelligently routes queries and maintains context across interactions, offering pre-built components for quick deployment and easy integration of custom agents.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 3
    FlashInfer

    FlashInfer: Kernel Library for LLM Serving

    FlashInfer is a kernel library designed to enhance the serving of Large Language Models (LLMs) by optimizing inference performance. It provides high-performance GPU kernels for the operations that dominate LLM serving, such as attention over paged and ragged KV caches, and integrates with existing serving systems to reduce latency and improve throughput. FlashInfer supports a range of GPU architectures and is built to scale with the demands of production environments.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 4
    AppWorld

    World of apps for benchmarking interactive coding agents

    AppWorld is a benchmark and simulation framework from Stony Brook University's NLP group: a controlled world of everyday apps exposed through APIs, used for training and evaluating interactive coding agents on realistic, multi-step tasks.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 5
    Letta

    Letta (formerly MemGPT) is a framework for creating LLM services

    Letta (formerly MemGPT) is a framework for building stateful LLM agents and services with long-term memory, letting agents manage context beyond the model's context window and persist state across sessions.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 6
    GPT-2 Output Dataset

    Dataset of GPT-2 outputs for research in detection, biases, and more

    The GPT-2 Output Dataset is a large collection of model-generated text, released by OpenAI alongside the GPT-2 research paper to study the behaviors and limitations of large language models. It contains 250,000 samples of GPT-2 outputs, generated with different sampling strategies such as top-k truncation, to highlight the diversity and quality of model completions. The dataset also includes corresponding human-written text for comparison, enabling researchers to explore methods for...
    Downloads: 6 This Week
    Last Update:
    See Project
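    The released files are JSON Lines collections of generated (and matching human-written) text; a minimal Python sketch for inspecting one locally downloaded file follows. The filename and the "text" field are assumptions about the dataset layout, not details taken from the entry above.

        import json

        # Hypothetical local copy of one released sample file; the filename and
        # the "text" field are assumptions about the JSON Lines layout.
        path = "small-117M.train.jsonl"

        samples = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                samples.append(json.loads(line))

        print(len(samples), "samples loaded")
        print(samples[0]["text"][:200])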
  • 7
    Stable Diffusion Version 2

    High-Resolution Image Synthesis with Latent Diffusion Models

    Stable Diffusion (the stablediffusion repo by Stability-AI) is an open-source implementation and reference codebase for high-resolution latent diffusion image models that power many text-to-image systems. The repository provides code for training and running Stable Diffusion-style models, instructions for installing dependencies (with notes about performance libraries like xformers), and guidance on hardware/driver requirements for efficient GPU inference and training. It’s organized as a...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 8
    LangChain MCP

    Model Context Protocol tool support for LangChain

    langchain-mcp provides Model Context Protocol (MCP) tool support for LangChain, a framework for developing applications powered by language models. It allows developers to create an MCPToolkit around an MCP client session so that tools exposed by MCP servers can be called from LangChain agents and chains.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Logfire MCP

    The Logfire MCP Server is here

    The Logfire MCP Server is a Model Context Protocol server that allows AI applications to access OpenTelemetry traces and metrics sent to Logfire. It enables retrieval and analysis of telemetry data, enhancing debugging and observability workflows.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    Zettelkasten MCP

    Implements the Zettelkasten knowledge management methodology

    The Zettelkasten MCP Server is a Model Context Protocol (MCP) server that implements the Zettelkasten knowledge management methodology. It allows users to create, link, and manage notes, facilitating a structured and interconnected note-taking system.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    LangCheck

    Simple, Pythonic building blocks to evaluate LLM applications

    LangCheck provides simple, Pythonic building blocks for evaluating LLM applications, including metrics for properties of generated text such as fluency, toxicity, and factual consistency.
    Downloads: 0 This Week
    Last Update:
    See Project
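    A minimal sketch of the kind of check LangCheck enables, assuming the langcheck.metrics.fluency metric from the project's documentation; treat the exact metric names as assumptions.

        import langcheck

        # Candidate LLM outputs to score (toy examples).
        generated_outputs = [
            "Black cat the",
            "The black cat is sitting",
            "The big black cat is sitting on the fence",
        ]

        # Score fluency locally; other metrics follow the same call pattern.
        fluency = langcheck.metrics.fluency(generated_outputs)
        print(fluency)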
  • 12
    Ragas

    Supercharge Your LLM Application Evaluations

    Objective metrics, intelligent test generation, and data-driven insights for LLM apps. Ragas is your ultimate toolkit for evaluating and optimizing Large Language Model (LLM) applications. Say goodbye to time-consuming, subjective assessments and hello to data-driven, efficient evaluation workflows. Don't have a test dataset ready? We also do production-aligned test set generation.
    Downloads: 2 This Week
    Last Update:
    See Project
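    A hedged sketch of an evaluation run, assuming the Hugging Face datasets-based evaluate() entry point from earlier Ragas releases (newer releases may expose a different dataset type) and an LLM provider key configured in the environment.

        from datasets import Dataset
        from ragas import evaluate
        from ragas.metrics import faithfulness, answer_relevancy

        # Toy evaluation set: one question with its retrieved contexts and answer.
        data = {
            "question": ["Where is the Eiffel Tower?"],
            "answer": ["The Eiffel Tower is in Paris, France."],
            "contexts": [["The Eiffel Tower is a landmark in Paris, France."]],
            "ground_truth": ["Paris"],
        }

        # Requires an LLM provider key (e.g. OPENAI_API_KEY) to be configured.
        results = evaluate(Dataset.from_dict(data), metrics=[faithfulness, answer_relevancy])
        print(results)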
  • 13
    MLE-Agent

    Intelligent companion for seamless AI engineering and research

    MLE-Agent is designed as a pairing LLM agent for machine learning engineers and researchers, acting as an intelligent companion that helps plan experiments, write and debug machine learning code, and keep track of relevant research.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 14
    TorchMetrics AI

    Machine learning metrics for distributed, scalable PyTorch applications

    TorchMetrics is a collection of 100+ PyTorch metrics implementations and an easy-to-use API to create custom metrics.
    Downloads: 0 This Week
    Last Update:
    See Project
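    A minimal sketch of the metric API described above, using the torchmetrics.classification.MulticlassAccuracy metric and the library's standard update/compute pattern; the tensors are toy data.

        import torch
        from torchmetrics.classification import MulticlassAccuracy

        # Stateful metric object: accumulate batches with update(), read with compute().
        metric = MulticlassAccuracy(num_classes=3)

        preds = torch.tensor([0, 2, 1, 2])
        target = torch.tensor([0, 1, 1, 2])

        metric.update(preds, target)
        print(metric.compute())  # aggregated accuracy as a tensor
        metric.reset()           # clear state before the next evaluation run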
  • 15
    Thinc

    A refreshing functional take on deep learning

    Thinc is a lightweight deep learning library that offers an elegant, type-checked, functional-programming API for composing models, with support for layers defined in other frameworks such as PyTorch, TensorFlow and MXNet. You can use Thinc as an interface layer, a standalone toolkit or a flexible way to develop new models. Previous versions of Thinc have been running quietly in production in thousands of companies, via both spaCy and Prodigy. We wrote the new version to let users compose,...
    Downloads: 6 This Week
    Last Update:
    See Project
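    A small sketch of Thinc's functional composition style, assuming the chain/Relu/Softmax combinators from thinc.api and the initialize/begin_update pattern shown in the project's documentation; the data is random toy input.

        import numpy
        from thinc.api import chain, Relu, Softmax

        # Compose a small MLP functionally; missing dimensions are inferred later.
        model = chain(Relu(nO=32), Relu(nO=32), Softmax(nO=10))

        X = numpy.random.uniform(size=(128, 20)).astype("float32")
        Y = numpy.zeros((128, 10), dtype="float32")
        Y[numpy.arange(128), numpy.random.randint(0, 10, size=128)] = 1.0

        model.initialize(X=X, Y=Y)            # infer shapes from sample data
        Yh, backprop = model.begin_update(X)  # forward pass plus gradient callback
        print(Yh.shape)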
  • 16
    lightning AI

    The most intuitive, flexible way for researchers to build models

    Build in days, not months, with the most intuitive, flexible framework for building models and Lightning Apps (i.e., ML workflow templates) that "glue" together your favorite ML lifecycle tools. Models are "easy"; the "glue" work is hard. Lightning Apps are community-built templates that stitch together your favorite ML lifecycle tools into cohesive ML workflows that can run on your laptop or any...
    Downloads: 6 This Week
    Last Update:
    See Project
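    A compact sketch of the LightningModule/Trainer workflow, assuming the 2.x package layout (import lightning as L); the model and data below are toy placeholders.

        import torch
        from torch import nn
        from torch.utils.data import DataLoader, TensorDataset
        import lightning as L  # assumes the 2.x package name

        class LitRegressor(L.LightningModule):
            def __init__(self):
                super().__init__()
                self.net = nn.Linear(8, 1)

            def training_step(self, batch, batch_idx):
                x, y = batch
                loss = nn.functional.mse_loss(self.net(x), y)
                self.log("train_loss", loss)
                return loss

            def configure_optimizers(self):
                return torch.optim.Adam(self.parameters(), lr=1e-3)

        # Toy dataset; the Trainer owns the loop, devices, and checkpointing.
        data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
        trainer = L.Trainer(max_epochs=1, logger=False, enable_checkpointing=False)
        trainer.fit(LitRegressor(), data)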
  • 17
    tvm

    Open deep learning compiler stack for CPU, GPU, etc.

    ...The vision of the Apache TVM Project is to host a diverse community of experts and practitioners in machine learning, compilers, and systems architecture to build an accessible, extensible, and automated open-source framework that optimizes current and emerging machine learning models for any hardware platform. Compilation of deep learning models in Keras, MXNet, PyTorch, Tensorflow, CoreML, DarkNet and more. Start using TVM with Python today, build out production stacks using C++, Rust, or Java the next day.
    Downloads: 1 This Week
    Last Update:
    See Project
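    A hedged sketch of the classic Relay compilation flow for an ONNX model (newer TVM releases also offer a Relax-based flow); the model path and input name are placeholders, not values from the entry above.

        import numpy as np
        import onnx
        import tvm
        from tvm import relay
        from tvm.contrib import graph_executor

        # Placeholder model file and input name; adjust to your network.
        onnx_model = onnx.load("model.onnx")
        mod, params = relay.frontend.from_onnx(onnx_model, {"input": (1, 3, 224, 224)})

        # Compile the model for a CPU target with standard optimizations.
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod, target="llvm", params=params)

        dev = tvm.cpu()
        runtime = graph_executor.GraphModule(lib["default"](dev))
        runtime.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
        runtime.run()
        print(runtime.get_output(0).numpy().shape)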
  • 18
    Controllable-RAG-Agent

    This repository provides an advanced RAG

    Controllable-RAG-Agent is an advanced Retrieval-Augmented Generation (RAG) system designed specifically for complex, multi-step question answering over your own documents. Instead of relying solely on simple semantic search, it builds a deterministic control graph that acts as the “brain” of the agent, orchestrating planning, retrieval, reasoning, and verification across many steps. The pipeline ingests PDFs, splits them into chapters, cleans and preprocesses text, then constructs vector...
    Downloads: 7 This Week
    Last Update:
    See Project
  • 19
    TreeQuest

    A Tree Search Library with Flexible API for LLM Inference-Time Scaling

    TreeQuest, developed by SakanaAI, is a versatile Python library implementing adaptive tree search algorithms, such as AB-MCTS, for enhancing inference-time performance of large language models (LLMs). It allows developers to define custom state-generation and scoring functions (e.g., via LLMs), and then efficiently explores possible answer trees during runtime. With support for multi-LLM collaboration, checkpointing, and mixed policies, TreeQuest enables smarter, trial-and-error question answering by leveraging both breadth (multiple attempts) and depth (iterative refinement) strategies to find better outputs dynamically.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 20
    PilottAI

    Python framework for building scalable multi-agent systems

    PilottAI is a Python framework for building scalable multi-agent systems. It provides building blocks for orchestrating multiple LLM-powered agents and routing tasks between them, so that teams of agents can be composed and scaled for complex workloads.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    Prompt flow

    Build high-quality LLM apps

    Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, and evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    Lepton AI

    A Pythonic framework to simplify AI service building

    A Pythonic framework to simplify AI service building. Cutting-edge AI inference and training, unmatched cloud-native experience, and top-tier GPU infrastructure. Ensure 99.9% uptime with comprehensive health checks and automatic repairs.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    sktime

    A unified framework for machine learning with time series

    sktime is a library for time series analysis in Python. It provides a unified interface for multiple time series learning tasks. Currently, this includes time series classification, regression, clustering, annotation, and forecasting. It comes with time series algorithms and scikit-learn compatible tools to build, tune and validate time series models. Our objective is to enhance the interoperability and usability of the time series analysis ecosystem in its entirety. sktime provides a unified interface for distinct but related time series learning tasks. ...
    Downloads: 0 This Week
    Last Update:
    See Project
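    A minimal forecasting sketch using the scikit-learn-style fit/predict interface described above, with the airline dataset bundled with sktime.

        from sktime.datasets import load_airline
        from sktime.forecasting.naive import NaiveForecaster

        # Univariate monthly series shipped with the library.
        y = load_airline()

        # Seasonal-naive baseline: repeat the last observed seasonal cycle.
        forecaster = NaiveForecaster(strategy="last", sp=12)
        forecaster.fit(y)
        print(forecaster.predict(fh=[1, 2, 3]))  # forecasts 1-3 steps ahead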
  • 24
    GPTCache

    Semantic cache for LLMs. Fully integrated with LangChain

    ChatGPT and various large language models (LLMs) boast incredible versatility, enabling the development of a wide range of applications. However, as your application grows in popularity and encounters higher traffic levels, the expenses related to LLM API calls can become substantial. Additionally, LLM services might exhibit slow response times, especially when dealing with a significant number of requests. To tackle this challenge, we have created GPTCache, a project dedicated to building a...
    Downloads: 0 This Week
    Last Update:
    See Project
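    A hedged sketch of placing the cache in front of OpenAI-style chat calls, assuming the adapter interface from the project's README (which targets the pre-1.0 OpenAI SDK style); an OPENAI_API_KEY is expected in the environment.

        from gptcache import cache
        from gptcache.adapter import openai  # drop-in replacement for the OpenAI client

        # Initialize an exact-match cache; embedding-based semantic caches are also supported.
        cache.init()
        cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "What is GitHub?"}],
        )
        print(response)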
  • 25
    AWS Deep Learning Containers

    A set of Docker images for training and serving models in TensorFlow

    AWS Deep Learning Containers (DLCs) are a set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet. Deep Learning Containers provide optimized environments with TensorFlow and MXNet, Nvidia CUDA (for GPU instances), and Intel MKL (for CPU instances) libraries and are available in the Amazon Elastic Container Registry (Amazon ECR). The AWS DLCs are used in Amazon SageMaker as the default vehicles for your SageMaker jobs such as training, inference,...
    Downloads: 7 This Week
    Last Update:
    See Project
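    Since the DLC images live in Amazon ECR, one common way to locate them is through the SageMaker Python SDK's image URI lookup; a hedged sketch follows, and the framework/version values are illustrative rather than guaranteed to exist in every region.

        from sagemaker import image_uris

        # Resolve the ECR URI of a Deep Learning Container image for a given
        # framework, version, and instance type; values below are illustrative.
        uri = image_uris.retrieve(
            framework="pytorch",
            region="us-east-1",
            version="2.2",
            py_version="py310",
            image_scope="training",
            instance_type="ml.g5.xlarge",
        )
        print(uri)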