Business Software for LangChain

Top Software that integrates with LangChain as of June 2025

Compare business software, products, and services to find the best solution for your business or organization. The products and solutions listed below integrate with LangChain and can be compared by category, pricing, features, organization size, organization type, region, user reviews, and integrations.

  • 1
    Python

    The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists (a short sketch follows this entry). Python is easy to pick up whether you're a first-time programmer or experienced with other languages, and the following pages are a useful first step toward writing programs with Python. The community hosts conferences and meetups to collaborate on code, and much more; Python's documentation will help you along the way, and the mailing lists will keep you in touch. The Python Package Index (PyPI) hosts thousands of third-party modules for Python, and together with the standard library they allow for endless possibilities.
    Starting Price: Free
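
    A minimal, self-contained sketch of the argument styles mentioned above: one mandatory argument, one optional argument with a default, an arbitrary positional argument list, and arbitrary keyword arguments.

```python
# One signature covering the argument styles mentioned above: a mandatory
# argument, an optional argument with a default, an arbitrary positional
# argument list (*tags), and arbitrary keyword arguments (**details).
def describe(name, greeting="Hello", *tags, **details):
    parts = [f"{greeting}, {name}!"]
    if tags:
        parts.append("tags: " + ", ".join(tags))
    for key, value in details.items():
        parts.append(f"{key}={value}")
    return " | ".join(parts)

print(describe("LangChain"))
print(describe("LangChain", "Hi", "llm", "agents", version="0.2", license="MIT"))
```
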
  • 2
    Langfuse

    Langfuse is an open source LLM engineering platform to help teams collaboratively debug, analyze, and iterate on their LLM applications. Observability: instrument your app and start ingesting traces to Langfuse (a short sketch follows this entry). Langfuse UI: inspect and debug complex logs and user sessions. Prompts: manage, version, and deploy prompts from within Langfuse. Analytics: track metrics (LLM cost, latency, quality) and gain insights from dashboards and data exports. Evals: collect and calculate scores for your LLM completions. Experiments: track and test app behavior before deploying a new version. Why Langfuse? It is open source, model- and framework-agnostic, built for production, and incrementally adoptable: start with a single LLM call or integration, then expand to full tracing of complex chains and agents, and use the GET API to build downstream use cases and export data.
    Starting Price: $29/month
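
    A minimal sketch of the "instrument your app and start ingesting traces" step above, using the Langfuse Python SDK's observe decorator; the import path and environment-variable names are assumptions based on the SDK's documented conventions and may differ between SDK versions.

```python
import os
from langfuse.decorators import observe  # assumed import path (Langfuse Python SDK v2-style)

# Assumed: the SDK reads project credentials from these environment variables.
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-lf-...")  # placeholder
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-lf-...")  # placeholder

@observe()  # records this call (inputs, outputs, timings) as a trace in Langfuse
def answer(question: str) -> str:
    # Call your LLM of choice here; a canned reply keeps the sketch self-contained.
    return f"Echo: {question}"

print(answer("What does Langfuse trace?"))
```
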
  • 3
    Opik

    Comet

    Confidently evaluate, test, and ship LLM applications with a suite of observability tools to calibrate language model outputs across your dev and production lifecycle. Log traces and spans, define and compute evaluation metrics, score LLM outputs, compare performance across app versions, and more. Record, sort, search, and understand each step your LLM app takes to generate a response. Manually annotate, view, and compare LLM responses in a user-friendly table. Log traces during development and in production. Run experiments with different prompts and evaluate against a test set. Choose and run pre-configured evaluation metrics or define your own with our convenient SDK library. Consult built-in LLM judges for complex issues like hallucination detection, factuality, and moderation. Establish reliable performance baselines with Opik's LLM unit tests, built on PyTest. Build comprehensive test suites to evaluate your entire LLM pipeline on every deployment.
    Starting Price: $39 per month
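
    A minimal sketch of logging a trace during development with Opik's Python SDK, as described above; the track decorator is an assumption based on the SDK's documented usage.

```python
from opik import track  # assumed import based on Opik's documented SDK

@track  # assumed decorator: logs this function call as a trace/span in Opik
def summarize(text: str) -> str:
    # Replace with a real LLM call; a stub keeps the sketch runnable offline.
    return text[:80] + "..."

print(summarize("Opik records each step your LLM app takes to generate a response."))
```
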
  • 4
    DeepSeek R1

    DeepSeek

    DeepSeek-R1 is an advanced open-source reasoning model developed by DeepSeek, designed to rival OpenAI's o1 model. Accessible via web, app, and API (a short sketch follows this entry), it excels in complex tasks such as mathematics and coding, demonstrating superior performance on benchmarks like the American Invitational Mathematics Examination (AIME) and MATH. DeepSeek-R1 employs a mixture-of-experts (MoE) architecture with 671 billion total parameters, activating 37 billion parameters per token, enabling efficient and accurate reasoning. This model is part of DeepSeek's commitment to advancing artificial general intelligence (AGI) through open-source innovation.
    Starting Price: Free
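
    A minimal sketch of calling DeepSeek-R1 through its OpenAI-compatible API; the base URL and the deepseek-reasoner model name are assumptions based on DeepSeek's published documentation.

```python
from openai import OpenAI

# Assumptions: OpenAI-compatible endpoint and model identifier for DeepSeek-R1.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY",
                base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model
    messages=[{"role": "user", "content": "What is the sum of the first 10 primes?"}],
)
print(response.choices[0].message.content)
```
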
  • 5
    Pinokio

    There are so many applications that require you to open your terminal and enter commands, not to mention deal with all kinds of complicated environments and installation settings. With Pinokio, all of this can be packaged into a simple JSON script, which can then be run from a browser with just one click. Running a server on a computer is not a trivial task: you need to open a terminal, run a bunch of commands to start the server, and keep the terminal open to keep it running.
  • 6
    Arize AI

    Automatically discover issues, diagnose problems, and improve models with Arize’s machine learning observability platform. Machine learning systems address mission-critical needs for businesses and their customers every day, yet often fail to perform in the real world. Arize is an end-to-end observability platform that accelerates detecting and resolving issues for your AI models at scale. Seamlessly enable observability for any model, from any platform, in any environment, with lightweight SDKs to send training, validation, and production datasets. Link real-time or delayed ground truth to predictions. Gain foresight and confidence that your models will perform as expected once deployed. Proactively catch performance degradation, data/prediction drift, and quality issues before they spiral. Reduce mean time to resolution (MTTR) for even the most complex models with flexible, easy-to-use tools for root cause analysis.
    Starting Price: $50/month
  • 7
    LangGraph

    LangChain

    Gain precision and control with LangGraph to build agents that reliably handle complex tasks. Build and scale agentic applications with LangGraph Platform. LangGraph's flexible framework supports diverse control flows – single agent, multi-agent, hierarchical, sequential – and robustly handles realistic, complex scenarios. Ensure reliability with easy-to-add moderation and quality loops that prevent agents from veering off course. Use LangGraph Platform to templatize your cognitive architecture so that tools, prompts, and models are easily configurable with LangGraph Platform Assistants. With built-in statefulness, LangGraph agents seamlessly collaborate with humans by writing drafts for review and awaiting approval before acting. Easily inspect the agent’s actions and "time-travel" to roll back and take a different action to correct course.
    Starting Price: Free
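
    A minimal sketch of the quality-loop idea above with LangGraph's StateGraph: a draft node, a review node, and a conditional edge that loops back until the review passes. The node logic is a stand-in; the graph calls follow LangGraph's documented interface.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    draft: str
    approved: bool

def write_draft(state: State) -> dict:
    # Stand-in for an LLM call that produces a draft.
    return {"draft": "proposed answer", "approved": False}

def review(state: State) -> dict:
    # Stand-in for a moderation / quality check.
    return {"approved": len(state["draft"]) > 0}

def route(state: State) -> str:
    return "done" if state["approved"] else "retry"

graph = StateGraph(State)
graph.add_node("write_draft", write_draft)
graph.add_node("review", review)
graph.add_edge(START, "write_draft")
graph.add_edge("write_draft", "review")
graph.add_conditional_edges("review", route, {"done": END, "retry": "write_draft"})

app = graph.compile()
print(app.invoke({"draft": "", "approved": False}))
```
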
  • 8
    Metal

    Metal is your production-ready, fully managed ML retrieval platform. Use Metal to find meaning in your unstructured data with embeddings. Metal is a managed service that allows you to build AI products without the hassle of managing infrastructure, with integrations for OpenAI, CLIP, and more. Easily process and chunk your documents, take advantage of the system in production, and plug into the MetalRetriever; a simple /search endpoint runs ANN queries. Get started with a free account and create Metal API keys to use the API and SDKs; with your API key, you authenticate by populating the headers. A TypeScript SDK lets you implement Metal in your application, and although we love TypeScript, you can of course use the library from JavaScript. Metal also provides a mechanism to fine-tune your app programmatically, an indexed vector database of your embeddings, and resources that represent your specific ML use case.
    Starting Price: $25 per month
  • 9
    Langdock

    Native support for ChatGPT and LangChain. Bing, HuggingFace and more coming soon. Add your API documentation manually or import an existing OpenAPI specification. Access the request prompt, parameters, headers, body and more. Inspect detailed live metrics about how your plugin is performing, including latencies, errors, and more. Configure your own dashboards, track funnels and aggregated metrics.
    Starting Price: Free
  • 10
    ZenML

    Simplify your MLOps pipelines. Manage, deploy, and scale on any infrastructure with ZenML. ZenML is completely free and open-source. See the magic with just two simple commands. Set up ZenML in a matter of minutes, and start with all the tools you already use. ZenML standard interfaces ensure that your tools work together seamlessly. Gradually scale up your MLOps stack by switching out components whenever your training or deployment requirements change. Keep up with the latest changes in the MLOps world and easily integrate any new developments. Define simple and clear ML workflows without wasting time on boilerplate tooling or infrastructure code. Write portable ML code and switch from experimentation to production in seconds. Manage all your favorite MLOps tools in one place with ZenML's plug-and-play integrations. Prevent vendor lock-in by writing extensible, tooling-agnostic, and infrastructure-agnostic code.
    Starting Price: Free
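
    A minimal sketch of a ZenML pipeline built from the step and pipeline decorators; the import path assumes a recent ZenML release, and the step bodies are stand-ins for real training code.

```python
from zenml import pipeline, step

@step
def load_data() -> list:
    return [1.0, 2.0, 3.0]

@step
def train(data: list) -> float:
    # Stand-in for real training; returns a "model score".
    return sum(data) / len(data)

@pipeline
def training_pipeline():
    data = load_data()
    train(data)

if __name__ == "__main__":
    training_pipeline()  # runs on the default local stack
```
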
  • 11
    Deep Lake

    activeloop

    Generative AI may be new, but we've been building for this day for the past 5 years. Deep Lake combines the power of data lakes and vector databases to build and fine-tune enterprise-grade, LLM-based solutions and iteratively improve them over time. Vector search alone does not solve retrieval; to solve it, you need serverless queries over multi-modal data, including embeddings and metadata. Filter, search, and more from the cloud or your laptop. Visualize and understand your data as well as the embeddings, and track and compare versions over time to improve your data and your model. Competitive businesses are not built on OpenAI APIs; fine-tune your LLMs on your own data and efficiently stream data from remote storage to the GPUs as models are trained. Deep Lake datasets are visualized right in your browser or Jupyter Notebook. Instantly retrieve different versions of your data, materialize new datasets via queries on the fly, and stream them to PyTorch or TensorFlow (a short LangChain-integration sketch follows this entry).
    Starting Price: $995 per month
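
    Since this listing is about LangChain integrations, a minimal sketch of using Deep Lake as a LangChain vector store; the wrapper class and parameter names follow langchain-community conventions and may differ across versions, and the dataset path is a placeholder.

```python
from langchain_community.vectorstores import DeepLake
from langchain_openai import OpenAIEmbeddings

# Assumptions: the community DeepLake wrapper and an OpenAI embedding model
# (requires OPENAI_API_KEY in the environment).
embeddings = OpenAIEmbeddings()
db = DeepLake(dataset_path="./my_deeplake_dataset", embedding=embeddings)

db.add_texts(["Deep Lake combines a data lake with a vector database."])
docs = db.similarity_search("What does Deep Lake combine?", k=1)
print(docs[0].page_content)
```
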
  • 12
    Flowise

    Flowise AI

    Flowise is an open-source, low-code platform that enables developers to create customized Large Language Model (LLM) applications through a user-friendly drag-and-drop interface. It supports integration with various LLM frameworks, including LangChain and LlamaIndex, and offers over 100 integrations to facilitate the development of AI agents and orchestration flows. Flowise provides APIs, SDKs, and embedded widgets for seamless incorporation into existing systems (a short sketch follows this entry), and is platform-agnostic, allowing deployment in air-gapped environments with local LLMs and vector databases.
    Starting Price: Free
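
    A minimal sketch of calling a deployed chatflow over Flowise's prediction API; the endpoint path and payload follow the documented pattern, while the host and chatflow ID are placeholders.

```python
import requests

# Assumptions: a locally running Flowise instance and an existing chatflow ID.
FLOWISE_URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"

resp = requests.post(FLOWISE_URL, json={"question": "Summarize our refund policy."})
resp.raise_for_status()
print(resp.json())
```
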
  • 13
    Typeblock

    Create shareable AI apps using a simple Notion-like editor. No need to write code or hire expensive developers; we handle the hosting, database, and deployment for you. Whether you're an entrepreneur, agency, or marketing team, Typeblock gives you the power to build AI tools in under 2 minutes. Write SEO-optimized blog posts and instantly publish them to your CMS. Create a tool to generate highly personalized cold emails for your sales team. Build a tool to write high-converting Facebook ads, LinkedIn posts, or Twitter threads. Build an app that writes landing page copy for your marketing team. Harness the power of AI to build tools that write highly engaging newsletters for you and your users.
    Starting Price: $20 per month
  • 14
    Zep

    Zep ensures your assistant remembers past conversations and resurfaces them when relevant. Identify your user's intent, build semantic routers, and trigger events, all in milliseconds. Emails, phone numbers, dates, names, and more, are extracted quickly and accurately. Your assistant will never forget a user. Classify intent, emotion, and more and turn dialog into structured data. Retrieve, analyze, and extract in milliseconds; your users never wait. We don't send your data to third-party LLM services. SDKs for your favorite languages and frameworks. Automagically populate prompts with a summary of relevant past conversations, no matter how distant. Zep summarizes, embeds, and executes retrieval pipelines over your Assistant's chat history. Instantly and accurately classify chat dialog. Understand user intent and emotion. Route chains based on semantic context, and trigger events. Quickly extract business data from chat conversations.
    Starting Price: Free
  • 15
    Runbear

    Runbear is a no-code platform that enables teams to create AI agents integrated with popular communication and productivity tools like Slack, Teams, HubSpot, and more. It allows users to build custom AI assistants quickly, typically within 10 minutes, without requiring technical expertise. Runbear helps automate repetitive tasks, streamline internal communication, and manage AI agents tailored for different teams all from a single interface. The platform supports integrations with AI models like OpenAI, Claude, and Gemini, combined with content management systems such as Google Drive, Notion, and Confluence. Use cases include automating meeting preparation, summarizing Slack threads, analyzing Airtable data through natural language, and enabling AI to suggest answers in Slack channels. Customer testimonials highlight Runbear’s ease of use and the significant efficiency gains achieved by integrating AI directly into workflows.
    Starting Price: $79 per month
  • 16
    CodeQwen

    Alibaba

    CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a transformer-based, decoder-only language model pre-trained on a large corpus of code. It offers strong code generation capabilities and competitive performance across a series of benchmarks, supports long-context understanding and generation with a context length of 64K tokens, covers 92 coding languages, and provides excellent performance in text-to-SQL, bug fixing, and more. You can chat with CodeQwen in just a few lines of code with transformers: build the tokenizer and the model with the from_pretrained methods and use the generate method, with the help of the chat template provided by the tokenizer (a short sketch follows this entry). We apply the ChatML template for chat models following our previous practice. The model completes code snippets according to the given prompts, without any additional formatting.
    Starting Price: Free
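
    The "few lines of code with transformers" mentioned above, sketched; the checkpoint name is an assumption about the chat variant published on the Hugging Face Hub.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/CodeQwen1.5-7B-Chat"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
# apply_chat_template formats the conversation with the ChatML template shipped with the tokenizer.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
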
  • 17
    Agenta

    Collaborate on prompts, evaluate, and monitor LLM apps with confidence. Agenta is a comprehensive platform that enables teams to quickly build robust LLM apps. Create a playground connected to your code where the whole team can experiment and collaborate. Systematically compare different prompts, models, and embeddings before going to production. Share a link to gather human feedback from the rest of the team. Agenta works out of the box with all frameworks (LangChain, LlamaIndex, etc.) and model providers (OpenAI, Cohere, Hugging Face, self-hosted models, etc.). Gain visibility into your LLM app's costs, latency, and chain of calls. You have the option to create simple LLM apps directly from the UI; however, if you would like to write customized applications, you need to write code in Python. Agenta is model-agnostic and works with all model providers and frameworks. The only limitation at present is that our SDK is available only in Python.
    Starting Price: Free
  • 18
    Langtrace

    Langtrace is an open source observability tool that collects and analyzes traces and metrics to help you improve your LLM apps. Langtrace ensures the highest level of security. Our cloud platform is SOC 2 Type II certified, ensuring top-tier protection for your data. Supports popular LLMs, frameworks, and vector databases. Langtrace can be self-hosted and supports OpenTelemetry standard traces, which can be ingested by any observability tool of your choice, resulting in no vendor lock-in. Get visibility and insights into your entire ML pipeline, whether it is a RAG or a fine-tuned model with traces and logs that cut across the framework, vectorDB, and LLM requests. Annotate and create golden datasets with traced LLM interactions, and use them to continuously test and enhance your AI applications. Langtrace includes built-in heuristic, statistical, and model-based evaluations to support this process.
    Starting Price: Free
  • 19
    AgentOps

    Industry-leading developer platform to test and debug AI agents. We built the tools so you don't have to. Visually track events such as LLM calls, tools, and multi-agent interactions. Rewind and replay agent runs with point-in-time precision. Keep a full data trail of logs, errors, and prompt injection attacks from prototype to production. Native integrations with the top agent frameworks. Track, save, and monitor every token your agent sees. Manage and visualize agent spending with up-to-date price monitoring. Fine-tune specialized LLMs up to 25x cheaper on saved completions. Build your next agent with evals, observability, and replays. With just two lines of code, you can free yourself from the chains of the terminal and instead visualize your agents’ behavior in your AgentOps dashboard. After setting up AgentOps, each execution of your program is recorded as a session and the data is automatically recorded for you.
    Starting Price: $40 per month
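
    The "two lines of code" mentioned above, sketched; the init and session-ending calls are assumptions based on the SDK's documented quickstart and may change between releases.

```python
import agentops

# Assumed quickstart: initializing the SDK starts recording this run as a session.
agentops.init(api_key="YOUR_AGENTOPS_API_KEY")

# ... run your agent as usual; LLM calls made through supported frameworks are
# captured automatically and appear in the AgentOps dashboard.

agentops.end_session("Success")  # assumed helper to close the session with a status
```
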
  • 20
    Remind

    Recall your tasks and optimize your workflow. Boost your productivity by using your own artificial memory today. Remind is an advanced application designed to capture, transcribe, and index digital activity from your device, making it easy to recall important information. To get started with Remind, download the repo from our website or GitHub, install it on your device, and follow the setup instructions on GitHub. Effortlessly capture your digital activity and use it as memory, using advanced AI technology. Remind allows you to customize various components to suit your needs. You can modify settings such as the frequency of screenshots, the format of transcriptions, and the organization of indexed data.
    Starting Price: Free
  • 21
    VESSL AI

    Build, train, and deploy models faster at scale with fully managed infrastructure, tools, and workflows. Deploy custom AI & LLMs on any infrastructure in seconds and scale inference with ease. Handle your most demanding tasks with batch job scheduling, paying only for what you use with per-second billing. Optimize costs with GPU usage, spot instances, and built-in automatic failover. Train with a single command and a YAML definition, simplifying complex infrastructure setups. Automatically scale up workers during high traffic and scale down to zero during inactivity. Deploy cutting-edge models with persistent endpoints in a serverless environment, optimizing resource usage. Monitor system and inference metrics in real time, including worker count, GPU utilization, latency, and throughput. Efficiently conduct A/B testing by splitting traffic among multiple models for evaluation.
    Starting Price: $100 + compute/month
  • 22
    ApertureDB

    Build your competitive edge with the power of vector search. Streamline your AI/ML pipeline workflows, reduce infrastructure costs, and stay ahead of the curve with up to 10x faster time-to-market. Break free of data silos with ApertureDB's unified multimodal data management, freeing your AI teams to innovate. Set up and scale complex multimodal data infrastructure for billions of objects across your entire enterprise in days, not months. Unifying multimodal data, advanced vector search, and innovative knowledge graph with a powerful query engine to build AI applications faster at enterprise scale. ApertureDB can enhance the productivity of your AI/ML teams and accelerate returns from AI investment with all your data. Try it for free or schedule a demo to see it in action. Find relevant images based on labels, geolocation, and regions of interest. Prepare large-scale multi-modal medical scans for ML and clinical studies.
    Starting Price: $0.33 per hour
  • 23
    SWE-Kit

    Composio

    SWE-Kit lets you build PR agents to review code, suggest improvements, enforce coding standards, identify potential issues, automate merge approvals, and provide feedback on best practices, streamlining the review process and enhancing code quality. Automate writing new features, debugging complex issues, creating and running tests, optimizing code for performance, refactoring for maintainability, and ensuring best practices across the codebase, accelerating development and efficiency. Use highly optimized code analysis, advanced code indexing, and intelligent file navigation tools to explore and interact with large codebases effortlessly. Ask questions, trace dependencies, uncover logic flows, and gain instant insights, enabling seamless communication with complex code structures. Keep your documentation in sync with your code: automatically update Mintlify documentation whenever changes are made to the codebase, ensuring that your docs stay accurate, up-to-date, and ready for your team and users.
    Starting Price: $49 per month
  • 24
    Arize Phoenix
    Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI engineers and data scientists to quickly visualize their data, evaluate performance, track down issues, and export data to improve. Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors. Phoenix works with OpenTelemetry and OpenInference instrumentation. The main Phoenix package is arize-phoenix, with several helper packages offered for specific use cases; our semantic layer adds LLM telemetry to OpenTelemetry and automatically instruments popular packages. Phoenix's open-source library supports tracing for AI applications, via manual instrumentation or through integrations with LlamaIndex, LangChain, OpenAI, and others (a short sketch follows this entry). LLM tracing records the paths taken by requests as they propagate through multiple steps or components of an LLM application.
    Starting Price: Free
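
    A minimal sketch of launching the local Phoenix app and registering an OpenTelemetry tracer that exports spans to it; the phoenix.otel.register helper is an assumption based on recent arize-phoenix releases.

```python
import phoenix as px
from phoenix.otel import register  # assumed helper in recent releases

# Launch the local Phoenix UI (prints a URL to open in the browser).
session = px.launch_app()

# Register an OpenTelemetry tracer provider that exports spans to Phoenix;
# instrumented libraries (OpenAI, LangChain, LlamaIndex, ...) then send traces here.
tracer_provider = register(project_name="my-llm-app")
```
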
  • 25
    Lunary

    Lunary is an AI developer platform designed to help AI teams manage, improve, and protect Large Language Model (LLM) chatbots. It offers features such as conversation and feedback tracking, analytics on costs and performance, debugging tools, and a prompt directory for versioning and team collaboration. Lunary supports integration with various LLMs and frameworks, including OpenAI and LangChain, and provides SDKs for Python and JavaScript. Guardrails to deflect malicious prompts and sensitive data leaks. Deploy in your VPC with Kubernetes or Docker. Allow your team to judge responses from your LLMs. Understand what languages your users are speaking. Experiment with prompts and LLM models. Search and filter anything in milliseconds. Receive notifications when agents are not performing as expected. Lunary's core platform is 100% open-source. Self-host or in the cloud, get started in minutes.
    Starting Price: $20 per month
  • 26
    Ragas

    Ragas is an open-source framework designed to test and evaluate Large Language Model (LLM) applications. It offers automatic metrics to assess performance and robustness, synthetic test data generation tailored to specific requirements, and workflows to ensure quality during development and production monitoring (a short sketch follows this entry). Ragas integrates seamlessly with existing stacks, providing insights to enhance LLM applications. The platform is maintained by a team of passionate individuals leveraging cutting-edge research and pragmatic engineering practices to empower visionaries redefining LLM possibilities. Synthetically generate high-quality, diverse evaluation data customized for your requirements, evaluate and ensure the quality of your LLM application in production, and use the insights to improve your application. Automatic metrics help you understand the performance and robustness of your LLM application.
    Starting Price: Free
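
    A minimal sketch of scoring a tiny evaluation set with Ragas metrics; the column names and evaluate signature follow the library's documented quickstart and may differ across versions.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# A tiny hand-made evaluation set; in practice this comes from your app's traces
# or from Ragas' synthetic test data generation.
data = {
    "question": ["What is LangChain?"],
    "answer": ["LangChain is a framework for building LLM applications."],
    "contexts": [["LangChain is an open source framework for developing LLM apps."]],
}

results = evaluate(Dataset.from_dict(data), metrics=[faithfulness, answer_relevancy])
print(results)
```
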
  • 27
    DeepEval

    Confident AI

    DeepEval is a simple-to-use, open source LLM evaluation framework for evaluating and testing large language model systems. It is similar to Pytest but specialized for unit testing LLM outputs (a short sketch follows this entry). DeepEval incorporates the latest research to evaluate LLM outputs based on metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., using LLMs and various other NLP models that run locally on your machine for evaluation. Whether your application is implemented via RAG or fine-tuning, with LangChain or LlamaIndex, DeepEval has you covered. With it, you can easily determine the optimal hyperparameters to improve your RAG pipeline, prevent prompt drifting, or even transition from OpenAI to hosting your own Llama2 with confidence. The framework supports synthetic dataset generation with advanced evolution techniques and integrates seamlessly with popular frameworks, allowing for efficient benchmarking and optimization of LLM systems.
    Starting Price: Free
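
    A minimal sketch of the Pytest-style workflow described above; running it requires LLM-judge credentials (e.g. an OpenAI key) to be configured for the metric.

```python
# Save as test_answer.py and run with pytest (or `deepeval test run test_answer.py`).
from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import AnswerRelevancyMetric

def test_answer_relevancy():
    test_case = LLMTestCase(
        input="What are your shipping times?",
        actual_output="Orders ship within 2-3 business days.",  # your app's response
    )
    # The metric uses an LLM judge under the hood; 0.7 is an arbitrary threshold.
    assert_test(test_case, [AnswerRelevancyMetric(threshold=0.7)])
```
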
  • 28
    HumanLayer

    HumanLayer is an API and SDK that enables AI agents to contact humans for feedback, input, and approvals. It guarantees human oversight of high-stakes function calls with approval workflows across Slack, email, and more. By integrating with your preferred Large Language Model (LLM) and framework, HumanLayer empowers AI agents with safe access to the world. The platform supports various frameworks and LLMs, including LangChain, CrewAI, ControlFlow, LlamaIndex, Haystack, OpenAI, Claude, Llama3.1, Mistral, Gemini, and Cohere. HumanLayer offers features such as approval workflows, human-as-tool integration, and custom responses with escalations. Pre-fill response prompts for seamless human-agent interactions. Route to specific individuals or teams, and control which users can approve or respond to LLM requests. Invert the flow of control, from human-initiated to agent-initiated. Add a variety of human contact channels to your agent toolchain.
    Starting Price: $500 per month
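
    A minimal sketch of gating a high-stakes function behind human approval with HumanLayer; the HumanLayer client and require_approval decorator are assumptions based on the SDK's documented pattern.

```python
from humanlayer import HumanLayer

hl = HumanLayer()  # assumed: reads the API key from the environment

@hl.require_approval()  # assumed decorator: contacts a human (e.g. via Slack) before running
def send_refund(order_id: str, amount: float) -> str:
    return f"Refunded {amount} on order {order_id}"

# Expose send_refund as a tool to your agent framework of choice; the call only
# proceeds once an approver responds through the configured channel.
```
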
  • 29
    ScrapFly

    Scrapfly offers a suite of APIs designed to streamline web data collection for developers. Their web scraping API enables efficient extraction of web pages, handling challenges like anti-scraping measures and JavaScript rendering. The Extraction API utilizes AI and large language models to parse documents and extract structured data, while the screenshot API allows for capturing high-quality visuals of web pages. These tools are built to scale, ensuring reliability and performance as data needs grow. Scrapfly also provides comprehensive documentation, SDKs in Python and TypeScript, and integrations with platforms like Zapier and Make to facilitate seamless integration into various workflows.
    Starting Price: $30 per month
  • 30
    ScrapeGraphAI

    ScrapeGraphAI is an AI-powered web scraping platform that transforms unstructured web content into clean, organized JSON data. Designed for AI agents and large language models, it enables users to extract data from various websites, including e-commerce, social media, and dynamic web applications, using natural language instructions. The platform offers a simple API with official SDKs for Python, JavaScript, and TypeScript, facilitating quick setup without complex configurations. ScrapeGraphAI adapts to website changes automatically, ensuring reliable data collection. It is built for scalability, featuring automatic proxy rotation and rate limiting, making it suitable for both startups and enterprises. The platform operates on a transparent, usage-based pricing model, starting with a free tier and scaling according to user needs. Additionally, ScrapeGraphAI provides an open source Python library that utilizes large language models and direct graph logic.
    Starting Price: $20 per month
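
    A minimal sketch of the open source Python library's SmartScraperGraph; the config keys and model string are assumptions based on the project's examples, and the URL is a placeholder.

```python
from scrapegraphai.graphs import SmartScraperGraph

# Assumed config shape: an "llm" section pointing at an OpenAI-hosted model.
graph_config = {
    "llm": {
        "api_key": "YOUR_OPENAI_API_KEY",
        "model": "openai/gpt-4o-mini",
    },
}

scraper = SmartScraperGraph(
    prompt="List the pricing tiers on this page as JSON.",
    source="https://example.com/pricing",
    config=graph_config,
)
print(scraper.run())
```
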