Showing 11 open source projects for "compare"

  • 1
    Agentex

    Open source codebase for Scale Agentex

    ...It treats an “agent” as a composition of a policy (the LLM), tools, memory, and an execution runtime so you can test the whole loop, not just prompting. The repo focuses on structured experiments: standardized tasks, canonical tool interfaces, and logs that make it possible to compare models, prompts, and tool sets fairly. It also includes evaluation harnesses that capture success criteria and partial credit, plus traces you can inspect to understand where reasoning or tool use failed. The design encourages clean separation between experiment configuration and code, which makes sharing results or re-running baselines straightforward. ...
    Downloads: 0 This Week
    Last Update:
    See Project
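The "partial credit" idea in Agentex's evaluation harnesses can be illustrated with a minimal sketch: score a run by blending how many required steps the agent completed with whether the final answer met the success criterion. Everything here (the `TaskResult` schema, the `score` weighting) is hypothetical, not Agentex's actual API:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """Outcome of one agent run on one task (illustrative schema)."""
    task_id: str
    required_steps: int    # steps the task defines as necessary
    completed_steps: int   # steps the agent actually finished
    final_answer_ok: bool  # did the final answer match the success criterion?

def score(result: TaskResult, answer_weight: float = 0.5) -> float:
    """Blend partial credit for intermediate steps with final-answer success."""
    step_credit = result.completed_steps / max(result.required_steps, 1)
    answer_credit = 1.0 if result.final_answer_ok else 0.0
    return (1 - answer_weight) * step_credit + answer_weight * answer_credit

# Comparing two runs on the same task:
baseline = score(TaskResult("t1", required_steps=4, completed_steps=2, final_answer_ok=False))
candidate = score(TaskResult("t1", required_steps=4, completed_steps=4, final_answer_ok=True))
# baseline -> 0.25, candidate -> 1.0
```

Keeping the scoring rule separate from the run data is what makes results comparable across models, prompts, and tool sets: every configuration is graded by the same function.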
  • 2
    Rewriting Project Claw Code

    Ensure consistency and alignment between different codebases

    ...It focuses on maintaining parity across systems, which is particularly important in distributed architectures or multi-platform applications. The project provides mechanisms to compare, validate, and synchronize code or behavior, helping teams avoid discrepancies that can lead to bugs or inconsistencies. It may include automation tools that detect differences and enforce standards across repositories. The tool is useful in scenarios such as maintaining parity between frontend and backend logic, ensuring API consistency, or synchronizing multiple deployments. ...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 3
    Agent Stack

    Deploy and share agents with open infrastructure

    ...The platform supports agents built in frameworks like LangChain, CrewAI, etc., enabling them to be hosted, managed, and shared through a unified interface. It also offers multi-model, multi-provider support (OpenAI, Anthropic, Gemini, IBM WatsonX, Ollama, etc.), letting users compare performance and cost across models. For developers and organizations building AI-agent products or automations, Agent Stack provides a scaffold that handles the “plumbing” so they can focus on logic and domain.
    Downloads: 8 This Week
    Last Update:
    See Project
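The cross-provider cost comparison Agent Stack describes boils down to pricing one workload against each provider's token rates. A minimal sketch, with made-up provider names and per-1K-token prices (not real pricing, and not Agent Stack's API):

```python
# Illustrative per-1K-token prices (hypothetical, not real rates).
PRICES = {
    "provider-a": {"input": 0.0030, "output": 0.0150},
    "provider-b": {"input": 0.0008, "output": 0.0040},
}

def run_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Price a single run: tokens consumed times the provider's rates."""
    p = PRICES[provider]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# Same workload priced against every provider:
workload = {"input_tokens": 12_000, "output_tokens": 3_000}
costs = {name: run_cost(name, **workload) for name in PRICES}
cheapest = min(costs, key=costs.get)  # -> "provider-b"
```

In practice quality matters as much as cost, so a platform would pair a table like `costs` with per-model evaluation scores before picking a winner.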
  • 4
    Coze Loop

    Next-generation AI Agent Optimization Platform

    ...The project aims to simplify the increasingly complex workflow of building reliable AI agents by offering integrated tools for debugging, evaluation, observability, and optimization. Through its visual playground, developers can test prompts interactively and compare outputs across different language models. The platform also includes automated evaluation capabilities that assess agent performance across multiple quality dimensions such as accuracy and compliance. Its observability layer captures detailed execution traces, enabling teams to understand how inputs, prompts, and tools interact during runtime. ...
    Downloads: 2 This Week
    Last Update:
    See Project
  • 5
    Haystack

    Haystack is an open source NLP framework to interact with your data

    ...Ask questions in natural language and find granular answers in your documents using the latest QA models with the help of Haystack pipelines. Perform semantic search and retrieve ranked documents according to meaning, not just keywords! Make use of and compare the latest pre-trained transformer-based language models like OpenAI’s GPT-3, BERT, RoBERTa, DPR, and more. Pick any Transformer model from Hugging Face's Model Hub, experiment, and find the one that works. Use Haystack NLP components on top of Elasticsearch, OpenSearch, or plain SQL. Boost search performance with Pinecone, Milvus, FAISS, or Weaviate vector databases, and dense passage retrieval.
    Downloads: 10 This Week
    Last Update:
    See Project
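Ranking "according to meaning, not just keywords" typically means embedding the query and the documents as vectors and sorting by similarity. A toy sketch with hand-made vectors standing in for a real embedding model's output (this is not Haystack's API, just the underlying idea):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for a real model's vectors.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "returns and refunds": [0.8, 0.2, 0.1],
}
query = [0.9, 0.1, 0.0]  # pretend embedding of "how do I get my money back?"

# Rank documents by similarity to the query, most similar first.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

Note that "shipping times" ranks last even though no keyword overlaps anywhere: the ordering comes entirely from vector geometry, which is what lets semantic search surface "returns and refunds" for a money-back question.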
  • 6
    LabClaw

    Operating Layer for LabOS (Stanford-Princeton AI Co-Scientists)

    ...It provides a framework for composing multiple tools, prompts, and execution steps into structured pipelines that can be reused and evaluated across different scenarios. The system emphasizes experimentation, allowing users to run multiple variations of agent workflows, compare outputs, and refine performance over time. LabClaw is designed to integrate with various large language models and external tools, enabling flexible configurations that adapt to different use cases such as automation, research, and product prototyping. It also includes logging and observability features that help developers track agent decisions and debug behavior during execution.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    Skills Janitor

    Audit, track usage, and compare your Claude Code skills

    The Skills Janitor project is a lightweight plugin designed to manage, audit, and optimize AI agent skill ecosystems, particularly for environments like Claude Code and OpenAI Codex. It functions as a “maintenance layer” for AI skills by automatically scanning installed skill directories, identifying duplicates, and analyzing their structure and usage. One of its core purposes is to help developers maintain a clean and efficient skill environment, especially as the number of installed skills...
    Downloads: 1 This Week
    Last Update:
    See Project
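The scan-and-deduplicate step Skills Janitor describes can be sketched by fingerprinting each skill directory's contents and grouping directories that hash identically. The layout assumed below (per-skill folders each containing files like `SKILL.md`) is an illustration, not the plugin's actual implementation:

```python
import hashlib
import tempfile
from pathlib import Path

def skill_fingerprint(skill_dir: Path) -> str:
    """Hash a skill directory's file names and contents, so two directories
    with byte-identical contents get the same fingerprint."""
    h = hashlib.sha256()
    for f in sorted(skill_dir.rglob("*")):
        if f.is_file():
            h.update(f.relative_to(skill_dir).as_posix().encode())
            h.update(f.read_bytes())
    return h.hexdigest()

def find_duplicates(root: Path) -> dict[str, list[str]]:
    """Group immediate subdirectories of `root` by fingerprint; any group
    with more than one member is a set of duplicate skills."""
    groups: dict[str, list[str]] = {}
    for d in sorted(p for p in root.iterdir() if p.is_dir()):
        groups.setdefault(skill_fingerprint(d), []).append(d.name)
    return {fp: names for fp, names in groups.items() if len(names) > 1}

# Example: two skills with identical contents are flagged as duplicates.
root = Path(tempfile.mkdtemp())
for name, text in [("summarize", "same body"), ("summarise", "same body"), ("translate", "other")]:
    d = root / name
    d.mkdir()
    (d / "SKILL.md").write_text(text)
duplicates = find_duplicates(root)  # one group: summarize / summarise
```

Hashing relative paths along with contents means two directories only match when their full file trees agree, not just when a single file coincides.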
  • 8
    Unstract

    No-code LLM Platform to launch APIs and ETL Pipelines

    Unstract is a powerful open-source, no-code platform that automates the extraction and structuring of unstructured documents using large language models and flexible workflows, enabling developers and data teams to turn messy files into organized JSON content without complex coding. Its visual Prompt Studio environment lets users iteratively design extraction schemas, compare outputs from different models, and monitor costs and accuracy side by side, making it easier to refine prompts and extraction logic before deploying at scale. Unstract supports deploying structured extraction as REST API endpoints or embedding it into data engineering ETL pipelines, allowing it to plug directly into data warehouses, cloud storage, or downstream analytics systems. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Natalie is a free Personal Assistant who does your bidding from the Windows Run Box. Features Research Assistant Mode, Natural Language Interface, and extreme Extensibility. Compare to IWantSandy. Looking for Help / Staff!
    Downloads: 0 This Week
    Last Update:
    See Project
  • 10
    The ART testbed program allows researchers to compare their work on modelling trust in agent societies from a practical point of view.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 11
    TOAST (Trust Organisational Agent System Testbed) is a simulation framework used to evaluate and compare different trust models for agents embedded in organisational systems.
    Downloads: 0 This Week
    Last Update:
    See Project