Showing 189 open source projects for "ocr application python"

  • 1
    Real-Time Voice Cloning

    Clone a voice in 5 seconds to generate arbitrary speech in real-time

    Real-Time Voice Cloning is an influential deep-learning repository that demonstrates how to clone a voice from just a few seconds of audio and then generate arbitrary speech in that voice in near real time. It implements the SV2TTS pipeline (“Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis”) in three stages: a speaker encoder, a synthesizer, and a vocoder. In the first stage, short audio clips are converted into a fixed-dimensional speaker embedding that...
    Downloads: 6 This Week
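    The three-stage pipeline described above maps onto the repository's inference modules roughly as follows. This is a minimal sketch modeled on the project's demo script; the module layout matches the repository, but the checkpoint paths and the reference file name are assumptions that depend on which pretrained models you have downloaded.

    ```python
    from pathlib import Path

    # module layout follows the repository (encoder/, synthesizer/, vocoder/)
    from encoder import inference as encoder
    from synthesizer.inference import Synthesizer
    from vocoder import inference as vocoder

    # checkpoint locations are assumptions; point these at your downloaded models
    encoder.load_model(Path("saved_models/default/encoder.pt"))
    synthesizer = Synthesizer(Path("saved_models/default/synthesizer.pt"))
    vocoder.load_model(Path("saved_models/default/vocoder.pt"))

    # stage 1: embed a few seconds of reference audio into a fixed-dimensional speaker vector
    wav = encoder.preprocess_wav("reference.wav")
    embedding = encoder.embed_utterance(wav)

    # stage 2: synthesize a mel spectrogram for new text, conditioned on that embedding
    specs = synthesizer.synthesize_spectrograms(["Hello from a cloned voice."], [embedding])

    # stage 3: vocode the spectrogram into an audible waveform
    generated_wav = vocoder.infer_waveform(specs[0])
    ```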
  • 2
    Bard API

    The unofficial Python package that returns responses from Google Bard

    The Python package returns responses from Google Bard through the value of a browser cookie. It is designed for use with the Python packages ExceptNotifier and Co-Coder. Please note that bardapi is not a free service, but rather a tool provided to help developers test certain functionality while the development and release of Google Bard's official API were delayed.
    Downloads: 0 This Week
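    Based on the description above, a minimal sketch of how the package is typically called; the cookie value is a placeholder that must come from your own authenticated Bard browser session.

    ```python
    from bardapi import Bard

    # placeholder: the value of the __Secure-1PSID cookie from a logged-in Bard session
    token = "YOUR_COOKIE_VALUE"

    bard = Bard(token=token)
    answer = bard.get_answer("Summarize what a speaker embedding is.")
    print(answer["content"])
    ```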
  • 3
    Eigent

    The Open Source Cowork Desktop to Unlock Your Exceptional Productivity

    Eigent is an open-source cowork desktop application designed to help you build, manage, and deploy a custom AI workforce. It enables multiple specialized AI agents to collaborate in parallel, turning complex workflows into automated, end-to-end tasks. Built on the CAMEL-AI multi-agent framework, Eigent emphasizes productivity, flexibility, and transparent system design. You can run Eigent fully locally for maximum privacy and data control, or choose a cloud-connected experience for quick...
    Downloads: 8 This Week
  • 4
    AI Runner

    Offline inference engine for art, real-time voice conversations

    AI Runner is an offline inference engine designed to run a collection of AI workloads on your own machine, including image generation for art, real-time voice conversations, LLM-powered chatbots and automated workflows. It is implemented as a desktop-oriented Python application and emphasizes privacy and self-hosting, allowing users to work with text-to-speech, speech-to-text, text-to-image and multimodal models without sending data to external services. At the core of its LLM stack is a mode-based architecture with specialized “modes” such as Author, Code, Research, QA and General, and a workflow manager that automatically routes user requests to the right agent based on the task. ...
    Downloads: 2 This Week
  • 5
    GPTCache

    Semantic cache for LLMs. Fully integrated with LangChain

    ChatGPT and various large language models (LLMs) boast incredible versatility, enabling the development of a wide range of applications. However, as your application grows in popularity and encounters higher traffic levels, the expenses related to LLM API calls can become substantial. Additionally, LLM services might exhibit slow response times, especially when dealing with a significant number of requests. To tackle this challenge, we have created GPTCache, a project dedicated to building a...
    Downloads: 0 This Week
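    A minimal sketch of the caching idea from the GPTCache quickstart: initialize the cache, then route OpenAI-style calls through the GPTCache adapter so repeated questions are served from the cache instead of the LLM API. This uses the simplest exact-match configuration and the adapter's pre-1.0 openai-style interface; semantic caching additionally needs an embedding model and a vector store configured in cache.init().

    ```python
    from gptcache import cache
    from gptcache.adapter import openai  # drop-in wrapper around the OpenAI client

    cache.init()            # default configuration: exact-match cache
    cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

    # the second identical call is answered from the cache rather than the LLM API
    for _ in range(2):
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "What is a semantic cache?"}],
        )
        print(response["choices"][0]["message"]["content"])
    ```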
  • 6
    LangGraph

    Build resilient language agents as graphs

    LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures, differentiating it from DAG-based solutions. As a very low-level framework, it provides fine-grained control over both the flow and state of your application,...
    Downloads: 7 This Week
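    To make the graph idea concrete, here is a minimal single-node StateGraph; the node is a placeholder where a real agent would call an LLM, and conditional edges between nodes are what allow the cycles mentioned above.

    ```python
    from typing import TypedDict

    from langgraph.graph import StateGraph, END

    class State(TypedDict):
        question: str
        answer: str

    def respond(state: State) -> dict:
        # placeholder node; a real application would invoke an LLM here
        return {"answer": f"You asked: {state['question']}"}

    builder = StateGraph(State)
    builder.add_node("respond", respond)
    builder.set_entry_point("respond")
    builder.add_edge("respond", END)

    graph = builder.compile()
    print(graph.invoke({"question": "What is LangGraph?"}))
    ```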
  • 7
    Open Interface

    Control Any Computer Using LLMs

    Open Interface is a cross-platform application that allows users to control their computers using large language models (LLMs). By sending user requests to an LLM backend, it determines the necessary steps and executes them by simulating keyboard and mouse inputs. The system can adjust its actions based on real-time feedback, providing a self-driving computer experience.
    Downloads: 2 This Week
  • 8
    Triton Inference Server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution

    ...Model pipelines using Ensembling or Business Logic Scripting (BLS). HTTP/REST and GRPC inference protocols based on the community-developed KServe protocol. A C API and Java API allow Triton to link directly into your application for edge and other in-process use cases.
    Downloads: 3 This Week
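    On the client side, the HTTP/REST protocol mentioned above is typically exercised through the tritonclient package. A minimal sketch, assuming a Triton server is already running on localhost:8000 and serving a hypothetical model named "my_model" with tensors INPUT0/OUTPUT0:

    ```python
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # model name, tensor names, shape and dtype are assumptions about the deployed model
    data = np.random.rand(1, 16).astype(np.float32)
    infer_input = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
    infer_input.set_data_from_numpy(data)

    result = client.infer(model_name="my_model", inputs=[infer_input])
    print(result.as_numpy("OUTPUT0"))
    ```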
  • 9
    OpenLLMetry

    Open-source observability for your LLM application

    The repo contains standard OpenTelemetry instrumentations for LLM providers and Vector DBs, as well as a Traceloop SDK that makes it easy to get started with OpenLLMetry, while still outputting standard OpenTelemetry data that can be connected to your observability stack. If you already have OpenTelemetry instrumented, you can just add any of our instrumentations directly.
    Downloads: 0 This Week
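    A minimal sketch of the Traceloop SDK setup described above; the app name and the decorated function are illustrative, and LLM-provider calls made inside the workflow would be captured as standard OpenTelemetry spans.

    ```python
    from traceloop.sdk import Traceloop
    from traceloop.sdk.decorators import workflow

    # initializes OpenTelemetry exporters; traces can be pointed at any OTLP-compatible collector
    Traceloop.init(app_name="my-llm-app")

    @workflow(name="answer_question")
    def answer_question(question: str) -> str:
        # placeholder; calls to instrumented LLM providers made here are traced automatically
        return f"Echo: {question}"

    answer_question("What does OpenLLMetry record?")
    ```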
  • 10
    MLJAR Studio

    Python package for AutoML on Tabular Data with Feature Engineering

    We are working on a new approach to visual programming and have developed a desktop application called MLJAR Studio. It is a notebook-based development environment with interactive code recipes and a managed Python environment, all running locally on your machine. We are waiting for your feedback. The mljar-supervised package is an Automated Machine Learning (AutoML) Python package that works with tabular data and is designed to save a data scientist's time.
    Downloads: 0 This Week
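    A minimal sketch of the mljar-supervised AutoML workflow on tabular data, using a scikit-learn toy dataset as a stand-in for your own table:

    ```python
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from supervised.automl import AutoML  # mljar-supervised

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # "Explain" mode favors fast, interpretable models; "Perform"/"Compete" trade time for accuracy
    automl = AutoML(mode="Explain", total_time_limit=60)
    automl.fit(X_train, y_train)

    predictions = automl.predict(X_test)
    ```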
  • 11
    OpenaiBot

    Refactoring ChatBot+LLM, GPT-3.5-turbo, ChatGPT Bot/Voice Assistant

    If the instant messaging platform you need is missing, or you want to develop a new application, you are welcome to contribute to this repository; you can develop a new Controller by using Event.py. Compatibility with multiple LLMs and integration with GPT and third-party systems are handled by our llm-kira project on GitHub. It can accurately meter billing, with usage limits and ID binding. It supports asynchronous operation and can handle multiple requests simultaneously. Allows for private and...
    Downloads: 1 This Week
  • 12
    LLM Action

    Technical principles related to large models

    LLM-Action is a knowledge-sharing tutorial repository covering principles, techniques, and real-world experience with large language models (LLMs), focusing on LLM engineering, deployment, optimization, inference, compression, and tooling. It organizes content into domains such as training, inference, compression, alignment, evaluation, pipelines, and applications, with sections covering infrastructure, engineering, and deployment, plus repository templates, sample code, and resource links. Articles/code...
    Downloads: 0 This Week
  • 13
    FastRTC

    The Python library for real-time communication

    FastRTC is a Python library designed to simplify real-time communication (RTC), especially for audio and video streaming applications. It abstracts away much of the complexity that typically comes with implementing WebRTC by providing a simple interface — e.g. a Stream class — that can be mounted within a web backend (for example a FastAPI application).
    Downloads: 0 This Week
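    A minimal sketch of the Stream-on-FastAPI pattern described above; the echo handler simply returns the caller's audio, where a real application would run speech-to-text, an LLM, and text-to-speech.

    ```python
    from fastapi import FastAPI
    from fastrtc import ReplyOnPause, Stream

    def echo(audio):
        # audio arrives as a (sample_rate, numpy array) tuple; yield chunks to stream back
        yield audio

    stream = Stream(handler=ReplyOnPause(echo), modality="audio", mode="send-receive")

    app = FastAPI()
    stream.mount(app)  # adds the WebRTC signaling/streaming routes to the FastAPI app

    # run with: uvicorn module_name:app
    ```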
  • 14
    Superduper

    Superduper: Integrate AI models and machine learning workflows

    ...This allows developers to completely avoid implementing MLOps, ETL pipelines, model deployment, data migration, and synchronization. Using Superduper is simply "CAPE": Connect to your data, apply arbitrary AI to that data, package and reuse the application on arbitrary data, and execute AI-database queries and predictions on the resulting AI outputs and data.
    Downloads: 0 This Week
  • 15
    NVIDIA FLARE

    NVIDIA Federated Learning Application Runtime Environment

    NVIDIA FLARE (Federated Learning Application Runtime Environment) is a domain-agnostic, open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows (PyTorch, TensorFlow, Scikit-learn, XGBoost, etc.) to a federated paradigm. It enables platform developers to build a secure, privacy-preserving offering for distributed multi-party collaboration. NVIDIA FLARE is built on a componentized architecture that allows you to take federated...
    Downloads: 0 This Week
  • 16
    Ragas

    Supercharge Your LLM Application Evaluations

    Objective metrics, intelligent test generation, and data-driven insights for LLM apps. Ragas is your ultimate toolkit for evaluating and optimizing Large Language Model (LLM) applications. Say goodbye to time-consuming, subjective assessments and hello to data-driven, efficient evaluation workflows. Don't have a test dataset ready? We also do production-aligned test set generation.
    Downloads: 0 This Week
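    A minimal sketch of an evaluation run; the sample record is illustrative, the metric set is a small subset, scoring requires an LLM/embeddings backend (for example an OpenAI key) to be configured, and API details vary somewhat between Ragas versions.

    ```python
    from datasets import Dataset
    from ragas import evaluate
    from ragas.metrics import answer_relevancy, faithfulness

    data = {
        "question": ["What is the capital of France?"],
        "answer": ["Paris is the capital of France."],
        "contexts": [["Paris is the capital and most populous city of France."]],
        "ground_truth": ["Paris"],
    }
    dataset = Dataset.from_dict(data)

    result = evaluate(dataset, metrics=[faithfulness, answer_relevancy])
    print(result)  # per-metric scores for the dataset
    ```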
  • 17
    Laminar

    Open-source all-in-one platform for engineering AI products

    Laminar is an open source all-in-one platform for engineering best-in-class LLM products. Data governs the quality of your LLM application. Laminar helps you collect it, understand it, and use it. When you trace your LLM application, you get a clear picture of every step of execution and simultaneously collect invaluable data. You can use it to set up better evaluations, as dynamic few-shot examples, and for fine-tuning. All traces are sent in the background via gRPC with minimal overhead. ...
    Downloads: 2 This Week
  • 18
    Gemini Fullstack LangGraph Quickstart

    Get started w/ building Fullstack Agents using Gemini 2.5 & LangGraph

    gemini-fullstack-langgraph-quickstart is a fullstack reference application from Google DeepMind’s Gemini team that demonstrates how to build a research-augmented conversational AI system using LangGraph and Google Gemini models. The project features a React (Vite) frontend and a LangGraph/FastAPI backend designed to work together seamlessly for real-time research and reasoning tasks. The backend agent dynamically generates search queries based on user input, retrieves information via the...
    Downloads: 4 This Week
  • 19
    MemU

    MemU is an open-source memory framework for AI companions

    MemU is an agentic memory layer for LLM applications, specifically designed for AI companions. Transform your memory into an intelligent file system that automatically organizes, connects, and evolves with your memories. Simple, fast, and reliable memory infrastructure for AI applications. Powerful tools and dedicated support to scale your AI applications with confidence. Full proprietary features, commercial usage rights, and white-labeling options for your enterprise needs. SSO/RBAC...
    Downloads: 0 This Week
  • 20
    AutoClip

    AI-powered video clipping and highlight generation

    AutoClip is an open-source, AI-powered video processing system designed to automate the extraction of “highlight” segments from full-length videos — ideal for creators who want to generate bite-sized clips, compilations, or highlight reels without manually sifting through hours of footage. The system supports downloading videos from major platforms (e.g. YouTube, Bilibili), or accepting local uploads, and then applies AI analysis to identify segments worth clipping based on content (e.g....
    Downloads: 4 This Week
  • 21
    PML

    The easiest way to use deep metric learning in your application

    This library contains 9 modules, each of which can be used independently within your existing codebase, or combined together for a complete train/test workflow. To compute the loss in your training loop, pass in the embeddings computed by your model, and the corresponding labels. The embeddings should have size (N, embedding_size), and the labels should have size (N), where N is the batch size. The TripletMarginLoss computes all possible triplets within the batch, based on the labels you...
    Downloads: 1 This Week
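    The loss call described above looks like this in practice; the random embeddings and labels stand in for the output of your own model and dataset.

    ```python
    import torch
    from pytorch_metric_learning import losses

    loss_func = losses.TripletMarginLoss(margin=0.1)

    # embeddings: (N, embedding_size), labels: (N,) where N is the batch size
    embeddings = torch.randn(32, 128, requires_grad=True)
    labels = torch.randint(0, 10, (32,))

    # all valid triplets within the batch are formed from the labels
    loss = loss_func(embeddings, labels)
    loss.backward()
    ```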
  • 22
    Opik

    Open-source end-to-end LLM Development Platform

    Confidently evaluate, test, and monitor LLM applications. Opik is an open-source platform for evaluating, testing, and monitoring LLM applications. Built by Comet. Record, sort, search, and understand each step your LLM app takes to generate a response. Manually annotate, view, and compare LLM responses in a user-friendly table. Log traces during development and in production. Run experiments with different prompts and evaluate against a test set. Choose and run pre-configured evaluation...
    Downloads: 6 This Week
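    A minimal sketch of how trace logging is typically wired up with the Opik SDK; the decorated function is a placeholder for an LLM call, and configure() can point at a self-hosted Opik deployment or Comet's hosted service.

    ```python
    import opik
    from opik import track

    opik.configure(use_local=True)  # assumes a locally running Opik deployment

    @track
    def answer(question: str) -> str:
        # placeholder; the decorator records inputs, outputs and timing as a trace
        return f"Echo: {question}"

    answer("What does Opik log?")
    ```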
  • 23
    TensorHouse

    A collection of reference Jupyter notebooks and demo AI/ML application

    TensorHouse is a collection of reference Jupyter notebooks and demo AI/ML applications for enterprise use cases such as marketing, pricing, recommendations, and supply chain optimization. The notebooks are intended as templates and starting points that teams can adapt to their own data, making the collection suitable for both research prototyping and applied projects.
    Downloads: 0 This Week
  • 24
    OuteTTS

    Interface for OuteTTS models

    OuteTTS is an interface library for running OuteTTS text-to-speech models across a range of backends, making it easier to deploy the same model on different hardware and runtimes. It provides a high-level Interface API that wraps model configuration, speaker handling, and audio generation so you can focus on integrating speech into your application rather than wiring up low-level engines. The project supports multiple backends including llama.cpp (Python bindings and server), Hugging Face Transformers, ExLlamaV2, VLLM and a JavaScript interface via Transformers.js, allowing it to run on CPUs, NVIDIA CUDA GPUs, AMD ROCm, Vulkan-capable GPUs, and Apple Metal. It also includes a notion of speaker profiles: you can create a speaker from a short audio sample, save it as JSON, and reuse it for consistent voice identity across generations and sessions. ...
    Downloads: 0 This Week
  • 25
    DeepSpeed MII

    MII makes low-latency and high-throughput inference possible

    ...Incredibly powerful text-generation models such as Bloom 176B, or image-generation models such as Stable Diffusion, are now available to anyone with access to a handful of GPUs, or even a single GPU, through platforms such as Hugging Face. While open-sourcing has democratized access to AI capabilities, their application is still restricted by two critical factors: inference latency and cost. DeepSpeed-MII is a new open-source Python library from DeepSpeed, aimed at making low-latency, low-cost inference of powerful models not only feasible but also easily accessible. MII offers access to highly optimized implementations of thousands of widely used DL models. ...
    Downloads: 0 This Week
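    A minimal sketch of the current pipeline-style MII API; the model name is an example, a CUDA-capable GPU is required, and earlier MII releases used a deployment-oriented API instead.

    ```python
    import mii

    # downloads the model from Hugging Face on first use; pick any supported checkpoint
    pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")

    # returns generated-text responses for the batch of prompts
    responses = pipe(["DeepSpeed-MII makes inference"], max_new_tokens=64)
    print(responses)
    ```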