30 Integrations with LangChain

View a list of LangChain integrations and software that integrates with LangChain below. Compare the best LangChain integrations as well as features, ratings, user reviews, and pricing of software that integrates with LangChain. Here are the current LangChain integrations in 2024:

  • 1
    Python

    The core of extensible programming is defining functions. Python allows mandatory and optional arguments, keyword arguments, and even arbitrary argument lists. Python is easy to pick up whether you're a first-time programmer or experienced with other languages, and the following pages are a useful first step to get on your way to writing programs with Python. The community hosts conferences and meetups to collaborate on code, and much more. Python's documentation will help you along the way, and the mailing lists will keep you in touch. The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both Python's standard library and the community-contributed modules allow for endless possibilities.
    Starting Price: Free
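The entry above mentions mandatory and optional arguments, keyword arguments, and arbitrary argument lists. A minimal sketch showing all four styles in a single function (the function and its output format are illustrative):

```python
# One function using every argument style the entry describes:
# a mandatory argument, an optional (defaulted) argument, arbitrary
# positional arguments (*titles), and arbitrary keyword arguments
# (**details).
def describe(name, greeting="Hello", *titles, **details):
    parts = [f"{greeting}, {' '.join(titles)} {name}".replace("  ", " ")]
    for key, value in sorted(details.items()):
        parts.append(f"{key}={value}")
    return "; ".join(parts)

print(describe("Ada"))                                   # mandatory only
print(describe("Ada", "Hi", "Dr.", role="engineer"))     # all four styles
```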
  • 2
    Langfuse

    Langfuse is an open source LLM engineering platform to help teams collaboratively debug, analyze, and iterate on their LLM applications.
    - Observability: instrument your app and start ingesting traces to Langfuse.
    - Langfuse UI: inspect and debug complex logs and user sessions.
    - Prompts: manage, version, and deploy prompts from within Langfuse.
    - Analytics: track metrics (LLM cost, latency, quality) and gain insights from dashboards & data exports.
    - Evals: collect and calculate scores for your LLM completions.
    - Experiments: track and test app behavior before deploying a new version.
    Why Langfuse? It is open source, model- and framework-agnostic, built for production, and incrementally adoptable: start with a single LLM call or integration, then expand to full tracing of complex chains/agents. Use the GET API to build downstream use cases and export data.
    Starting Price: $29/month
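The observability idea above — instrument your app so each LLM call is ingested as a trace with latency and metadata — can be sketched in plain Python. This is an illustrative stand-in, not the actual Langfuse SDK; the `observe` decorator and `traces` store below are hypothetical:

```python
import time

traces = []  # hypothetical in-memory stand-in for a trace backend

def observe(fn):
    """Record one trace (name, latency, output size) per call."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        traces.append({
            "name": fn.__name__,
            "latency_s": time.perf_counter() - start,
            "output_chars": len(str(result)),
        })
        return result
    return wrapper

@observe
def fake_llm_call(prompt):
    # Placeholder for a real model call.
    return f"echo: {prompt}"

fake_llm_call("hello")
print(traces[0]["name"])
```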
  • 3
    ZenML

    Simplify your MLOps pipelines. Manage, deploy, and scale on any infrastructure with ZenML. ZenML is completely free and open-source. See the magic with just two simple commands. Set up ZenML in a matter of minutes, and start with all the tools you already use. ZenML standard interfaces ensure that your tools work together seamlessly. Gradually scale up your MLOps stack by switching out components whenever your training or deployment requirements change. Keep up with the latest changes in the MLOps world and easily integrate any new developments. Define simple and clear ML workflows without wasting time on boilerplate tooling or infrastructure code. Write portable ML code and switch from experimentation to production in seconds. Manage all your favorite MLOps tools in one place with ZenML's plug-and-play integrations. Prevent vendor lock-in by writing extensible, tooling-agnostic, and infrastructure-agnostic code.
    Starting Price: Free
  • 4
    CrateDB

    The enterprise database for time series, documents, and vectors. Store any type of data and combine the simplicity of SQL with the scalability of NoSQL. CrateDB is an open source distributed database running queries in milliseconds, whatever the complexity, volume and velocity of data.
  • 5
    Metal

    Metal is your production-ready, fully managed ML retrieval platform. Use Metal to find meaning in your unstructured data with embeddings. Metal is a managed service that allows you to build AI products without the hassle of managing infrastructure. Integrations with OpenAI, CLIP, and more. Easily process & chunk your documents, and take advantage of our system in production by plugging into the MetalRetriever. Simple /search endpoint for running ANN queries. Get started with a free account. Metal API keys let you use our API & SDKs; with your API key, you can authenticate by populating the headers. Learn how to use our TypeScript SDK to implement Metal into your application. Although we love TypeScript, you can of course utilize this library in JavaScript. Mechanism to fine-tune your app programmatically. Indexed vector database of your embeddings. Resources that represent your specific ML use-case.
    Starting Price: $25 per month
  • 6
    MyScale

    MyScale is an innovative AI database that seamlessly integrates vector search with SQL analytics, delivering a comprehensive, fully managed, and high-performance solution. Key features:
    - Superior data capacity and performance: each MyScale pod supports 5 million 768-dimensional data points with exceptional accuracy, enabling over 150 queries per second (QPS).
    - Rapid data ingestion: import up to 5 million data points in under 30 minutes, reducing waiting time and enabling faster utilization of your vector data.
    - Flexible indexing: MyScale allows you to create multiple tables with unique vector indexes, efficiently managing diverse vector data within a single cluster.
    - Effortless data import and backup: seamlessly import/export data from/to S3 or other compatible storage systems, ensuring smooth data management and backup processes.
    With MyScale, unleash the power of advanced AI database capabilities for efficient and effective data analysis.
  • 7
    Langdock

    Native support for ChatGPT and LangChain. Bing, HuggingFace and more coming soon. Add your API documentation manually or import an existing OpenAPI specification. Access the request prompt, parameters, headers, body and more. Inspect detailed live metrics about how your plugin is performing, including latencies, errors, and more. Configure your own dashboards, track funnels and aggregated metrics.
    Starting Price: Free
  • 8
    Deep Lake

    activeloop

    Generative AI may be new, but we've been building for this day for the past 5 years. Deep Lake combines the power of both data lakes and vector databases to build and fine-tune enterprise-grade, LLM-based solutions, and iteratively improve them over time. Vector search alone does not solve retrieval; for that, you need serverless queries over multi-modal data, including embeddings and metadata. Filter, search, & more from the cloud or your laptop. Visualize and understand your data, as well as the embeddings. Track & compare versions over time to improve your data & your model. Competitive businesses are not built on OpenAI APIs; fine-tune your LLMs on your own data. Efficiently stream data from remote storage to the GPUs as models are trained. Deep Lake datasets are visualized right in your browser or Jupyter Notebook. Instantly retrieve different versions of your data, materialize new datasets via queries on the fly, and stream them to PyTorch or TensorFlow.
    Starting Price: $995 per month
  • 9
    Flowise

    Open source is the core of Flowise, and it will always be free for commercial and personal usage. Build LLM apps easily with Flowise, an open source visual UI tool for building your customized LLM flow using LangchainJS, written in Node.js TypeScript/JavaScript. Open source MIT license, see your LLM apps running live, and manage custom component integrations. Examples include GitHub repo Q&A using a conversational retrieval QA chain, language translation using an LLM chain with a chat prompt template and chat model, and a conversational agent for a chat model which utilizes chat-specific prompts and buffer memory.
    Starting Price: Free
  • 10
    Typeblock

    Create shareable AI apps using a simple Notion-like editor. No need to write code or hire expensive developers. We handle the hosting, database, and deployment for you. Whether you're an entrepreneur, agency, or marketing team, Typeblock gives you the power to build AI tools in under 2 minutes. Write SEO-optimized blog posts and instantly publish them to your CMS. Create a tool to generate highly personalized cold emails for your sales team. Build a tool to write highly converting Facebook ads, LinkedIn posts, or Twitter threads. Build an app that writes landing page copy for your marketing team. Harness the power of AI to build tools that write highly engaging newsletters for you and your users.
    Starting Price: $20 per month
  • 11
    BoilerCode

    BoilerCode is a catalog of SaaS boilerplates to help you ship your next product super fast. These boilerplates are ready to use; just clone one and start building. Stripe, LemonSqueezy, auth, and email integrations come out of the box with BoilerCode.
    Starting Price: $49 one-time payment
  • 12
    Zep

    Zep ensures your assistant remembers past conversations and resurfaces them when relevant. Identify your user's intent, build semantic routers, and trigger events, all in milliseconds. Emails, phone numbers, dates, names, and more, are extracted quickly and accurately. Your assistant will never forget a user. Classify intent, emotion, and more and turn dialog into structured data. Retrieve, analyze, and extract in milliseconds; your users never wait. We don't send your data to third-party LLM services. SDKs for your favorite languages and frameworks. Automagically populate prompts with a summary of relevant past conversations, no matter how distant. Zep summarizes, embeds, and executes retrieval pipelines over your Assistant's chat history. Instantly and accurately classify chat dialog. Understand user intent and emotion. Route chains based on semantic context, and trigger events. Quickly extract business data from chat conversations.
    Starting Price: Free
  • 13
    PlugBear

    Runbear

    PlugBear is a no/low-code solution for connecting communication channels with LLM (Large Language Model) applications. For example, it enables the creation of a Slack bot from an LLM app in just a few clicks. When a trigger event occurs in the integrated channels, PlugBear receives this event. It then transforms the messages to be suitable for LLM applications and initiates generation. Once the apps complete the generation, PlugBear transforms the results to be compatible with each channel. This process allows users of different channels to interact seamlessly with LLM applications.
    Starting Price: $31 per month
  • 14
    CodeQwen

    QwenLM

    CodeQwen is the code version of Qwen, the large language model series developed by the Qwen team at Alibaba Cloud. It is a transformer-based, decoder-only language model pre-trained on a large amount of code data. It offers strong code generation capabilities and competitive performance across a series of benchmarks, and supports long-context understanding and generation with a context length of 64K tokens. CodeQwen supports 92 coding languages and provides excellent performance in text-to-SQL, bug fixing, etc. You can chat with CodeQwen in just a few lines of code with transformers: build the tokenizer and the model with the from_pretrained methods, then use the generate method to chat with the help of the chat template provided by the tokenizer. We apply the ChatML template for chat models following our previous practice. The model completes code snippets according to the given prompts, without any additional formatting.
    Starting Price: Free
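The ChatML template mentioned above has a simple layout. In practice, `tokenizer.apply_chat_template` from the transformers library renders it for you; building it by hand, as below, only illustrates the structure that template produces (the message contents are made up):

```python
def to_chatml(messages):
    """Render a list of {role, content} messages in ChatML form."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model completes it.
    prompt += "<|im_start|>assistant\n"
    return prompt

msgs = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a quicksort in Python."},
]
print(to_chatml(msgs))
```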
  • 15
    Comet LLM

    CometLLM is a tool to log and visualize your LLM prompts and chains. Use CometLLM to identify effective prompt strategies, streamline your troubleshooting, and ensure reproducible workflows. Log your prompts and responses, including prompt template, variables, timestamps and duration, and any metadata that you need. Visualize your prompts and responses in the UI. Log your chain execution down to the level of granularity that you need. Visualize your chain execution in the UI. Automatically tracks your prompts when using the OpenAI chat models. Track and analyze user feedback. Diff your prompts and chain execution in the UI. Comet LLM Projects have been designed to support you in performing smart analysis of your logged prompt engineering workflows. Each column header corresponds to a metadata attribute logged in the LLM project, so the exact list of the displayed default headers can vary across projects.
    Starting Price: Free
  • 16
    Agenta

    Collaborate on prompts, evaluate, and monitor LLM apps with confidence. Agenta is a comprehensive platform that enables teams to quickly build robust LLM apps. Create a playground connected to your code where the whole team can experiment and collaborate. Systematically compare different prompts, models, and embeddings before going to production. Share a link to gather human feedback from the rest of the team. Agenta works out of the box with all frameworks (LangChain, LlamaIndex, etc.) and model providers (OpenAI, Cohere, Hugging Face, self-hosted models, etc.). Gain visibility into your LLM app's costs, latency, and chain of calls. You have the option to create simple LLM apps directly from the UI. However, if you would like to write customized applications, you need to write code with Python. Agenta is model agnostic and works with all model providers and frameworks. The only limitation at present is that our SDK is available only in Python.
    Starting Price: Free
  • 17
    Langtrace

    Langtrace is an open source observability tool that collects and analyzes traces and metrics to help you improve your LLM apps. Langtrace ensures the highest level of security. Our cloud platform is SOC 2 Type II certified, ensuring top-tier protection for your data. Supports popular LLMs, frameworks, and vector databases. Langtrace can be self-hosted and supports OpenTelemetry standard traces, which can be ingested by any observability tool of your choice, resulting in no vendor lock-in. Get visibility and insights into your entire ML pipeline, whether it is a RAG or a fine-tuned model with traces and logs that cut across the framework, vectorDB, and LLM requests. Annotate and create golden datasets with traced LLM interactions, and use them to continuously test and enhance your AI applications. Langtrace includes built-in heuristic, statistical, and model-based evaluations to support this process.
    Starting Price: Free
  • 18
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
  • 19
    Dify

    With Dify, your team can develop AI applications based on models such as GPT-4 and operate them visually. Whether for internal team use or external release, you can deploy your application in as fast as 5 minutes. Using documents, webpages, or Notion content as context for the AI, Dify automatically completes text preprocessing, vectorization, and segmentation. You no longer have to learn embedding techniques, saving you weeks of development time. Dify provides a smooth experience for model access, context embedding, cost control, and data annotation. Whether for internal team use or product development, you can easily create AI applications. Starting from a prompt, but transcending the limitations of the prompt, Dify provides rich functionality for many scenarios, all through graphical user interface operations.
  • 20
    Bruinen

    Bruinen enables your platform to validate and connect your users’ profiles from across the internet. We offer simple integration with a variety of data sources, including Google, GitHub, and many more. Connect to the data you need and take action on one platform. Our API takes care of the auth, permissions, and rate limits - reducing complexity and increasing efficiency, allowing you to iterate quickly and stay focused on your core product. Allow users to confirm an action via email, SMS, or a magic-link before the action occurs. Let your users customize the actions they want to confirm, all with a pre-built permissions UI. Bruinen offers an easy-to-use, consistent interface to access your users’ profiles. You can connect, authenticate, and pull data from those accounts all from Bruinen’s platform.
  • 21
    endoftext

    Take the guesswork out of prompt engineering with suggested edits, prompt rewrites, and automatically generated test cases. We run dozens of analyses over your prompts and data to detect issues and potential improvements, then have AI automatically rewrite your prompts to fix their limitations. Don't waste time writing test cases for your prompts: we generate diverse, high-quality examples to validate changes and guide your updates. Use your optimized prompts across models and tools.
    Starting Price: $20 per month
  • 22
    Browserbase

    Headless browsers that work everywhere, every time. Control fleets of stealth browsers to build reliable browser automation. Focus on your code with autoscaled browser instances, and best-in-class stealth features. Run hundreds of browsers with powerful resources to power uninterrupted long-running sessions. Work with headless browsers as you do with your browser with live access, replay, and full tools featuring logs and networks. Build and run undetectable automation with configurable fingerprinting, automatic captcha solving, and proxies. The best AI agents are built with Browserbase, navigating the most complex web pages, undetected. With a few lines of code, enable your AI agent to interact with any web pages, undetected and at scale. At any time, leverage the live session view feature to let humans help in completing complex tasks. Leverage Browserbase’s infrastructure to power your web scraping, automation, and LLM applications.
    Starting Price: $39 per month
  • 23
    Klee

    Local and secure AI on your desktop, ensuring comprehensive insights with complete data security and privacy. Experience unparalleled efficiency, privacy, and intelligence with our cutting-edge macOS-native app and advanced AI features. RAG can utilize data from a local knowledge base to supplement the large language model (LLM). This means you can keep sensitive data on-premises while leveraging it to enhance the model's response capabilities. To implement RAG locally, you first need to segment documents into smaller chunks and then encode these chunks into vectors, storing them in a vector database. These vectorized data will be used for subsequent retrieval processes. When a user query is received, the system retrieves the most relevant chunks from the local knowledge base and inputs these chunks along with the original query into the LLM to generate the final response. We promise lifetime free access for individual users.
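The local RAG flow described above (chunk, embed, store, retrieve, then prompt the LLM) can be sketched in plain Python. A real implementation would use a neural embedding model and a vector database; the bag-of-words vectors below are a toy stand-in that keeps the sketch self-contained:

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Segment a document into chunks of at most `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Invoices are stored in the finance folder and archived monthly.",
    "The deployment pipeline runs tests before shipping to production.",
]
# "Vector database": a list of (chunk, vector) pairs.
index = [(c, embed(c)) for d in docs for c in chunk(d)]

def retrieve(query, k=1):
    """Return the k chunks most similar to the query."""
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# The retrieved chunks would be passed to the LLM alongside the query.
print(retrieve("where are invoices archived?"))
```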
  • 24
    Toolkit

    Toolkit AI

    Use the Pubmed API to get a list of scholarly articles on a given topic. Download a YouTube video from a URL to a given file on your filesystem (relative to the current path), logging progress, and return the file's path. Use the Alpha Vantage API to return the latest stock information based on the provided ticker. Suggest code improvements for one or more code files that are passed in. Returns the path of the current directory, and a tree structure of the descendant files. Retrieves the contents of a given file on the filesystem.
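One of the tools listed above — returning the current directory's path and a tree of descendant files — is easy to sketch. The function name and output shape here are illustrative, not Toolkit's actual implementation:

```python
import os

def directory_tree(root="."):
    """Return the absolute path of `root` plus an indented file tree."""
    lines = [os.path.abspath(root)]
    for dirpath, dirnames, filenames in os.walk(root):
        depth = dirpath[len(root):].count(os.sep)
        for name in sorted(filenames):
            lines.append("  " * (depth + 1) + name)
    return "\n".join(lines)

print(directory_tree("."))
```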
  • 25
    LangSmith

    LangChain

    Unexpected results happen all the time. With full visibility into the entire chain sequence of calls, you can spot the source of errors and surprises in real time with surgical precision. Software engineering relies on unit testing to build performant, production-ready applications. LangSmith provides that same functionality for LLM applications. Spin up test datasets, run your applications over them, and inspect results without having to leave LangSmith. LangSmith enables mission-critical observability with only a few lines of code. LangSmith is designed to help developers harness the power–and wrangle the complexity–of LLMs. We’re not only building tools. We’re establishing best practices you can rely on. Build and deploy LLM applications with confidence. Application-level usage stats. Feedback collection. Filter traces, cost and performance measurement. Dataset curation, compare chain performance, AI-assisted evaluation, and embrace best practices.
  • 26
    Pinokio

    There are so many applications that require you to open your terminal and enter commands, not to mention deal with all kinds of complicated environments and installation settings. With Pinokio, all of this can be packaged into a simple JSON script, which can then be run in a browser setting with just one click. Running a server on a computer is not a trivial task. You need to open a terminal and run a bunch of commands to start the server and keep the terminal open to keep them running.
  • 27
    Azure AI Studio

    Microsoft

    Your platform for developing generative AI solutions and custom copilots. Build solutions faster, using pre-built and customizable AI models on your data—securely—to innovate at scale. Explore a robust and growing catalog of pre-built and customizable frontier and open-source models. Create AI models with a code-first experience and accessible UI validated by developers with disabilities. Seamlessly integrate all your data from OneLake in Microsoft Fabric. Integrate with GitHub Codespaces, Semantic Kernel, and LangChain. Access prebuilt capabilities to build apps quickly. Personalize content and interactions and reduce wait times. Lower the burden of risk and aid in new discoveries for organizations. Decrease the chance of human error using data and tools. Automate operations to refocus employees on more critical tasks.
  • 28
    TheDevStarter

    Byteoski Developers OPC Pvt Ltd

    TheDevStarter is a boilerplate for building SaaS applications using the Django Ninja and Next.js frameworks. It provides features for authentication, payments integration via Stripe, analytics, content management, customer support, and newsletter capabilities. The documentation emphasizes the performance benefits of combining Django Ninja's asynchronous functionality with Next.js's optimizations. Support is offered to users, and lifetime updates are included.
    Starting Price: $49
  • 29
    Prompt Security

    Prompt Security enables enterprises to benefit from the adoption of Generative AI while protecting from the full range of risks to their applications, employees and customers. At every touchpoint of Generative AI in an organization — from AI tools used by employees to GenAI integrations in customer-facing products — Prompt inspects each prompt and model response to prevent the exposure of sensitive data, block harmful content, and secure against GenAI-specific attacks. The solution also provides leadership of enterprises with complete visibility and governance over the AI tools used within their organization.
  • 30
    Gemma 2

    Google

    A family of state-of-the-art, lightweight open models created from the same research and technology used to create the Gemini models. These models incorporate comprehensive security measures and help ensure responsible and reliable AI solutions through curated data sets and rigorous tuning. Gemma models achieve exceptional benchmark results in their 2B, 7B, 9B, and 27B sizes, even outperforming some larger open models. With Keras 3.0, enjoy seamless compatibility with JAX, TensorFlow, and PyTorch, allowing you to effortlessly choose and change frameworks based on the task. Redesigned to deliver outstanding performance and unmatched efficiency, Gemma 2 is optimized for incredibly fast inference on various hardware. The Gemma family offers different models optimized for specific use cases that adapt to your needs. Gemma models are lightweight, text-to-text, decoder-only language models trained on a huge set of text data, code, and mathematical content.