Showing 200 open source projects for "anpr using python"

  • 1
    Python API for JMComic

    Python crawler and API for downloading JMComic albums and images

    JMComic-Crawler-Python is a Python library and crawler framework designed to programmatically access and download comic content from the JMComic platform. It provides a structured API that allows developers to retrieve albums, chapters, and images using simple Python code while handling the necessary network requests and data processing behind the scenes.
    Downloads: 11 This Week
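
    A minimal, hedged sketch of the programmatic use described above, assuming the jmcomic package and the download_album helper from the project README; the album id is a placeholder:

        import jmcomic  # assumed import name for JMComic-Crawler-Python

        # Download one album by its numeric id using the library's default options.
        # The id below is a placeholder, not a real album.
        jmcomic.download_album("123456")
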
  • 2
    Firebase Admin Python SDK

    ...Programmatically send Firebase Cloud Messaging messages using a simple, alternative approach to the Firebase Cloud Messaging server protocols. We currently support Python 3.7+. Firebase Admin Python SDK is also tested on PyPy and Google App Engine environments.
    Downloads: 4 This Week
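
    A minimal sketch of sending one Firebase Cloud Messaging message through the Admin SDK's documented messaging module; the service-account path and device token are placeholders:

        import firebase_admin
        from firebase_admin import credentials, messaging

        # Initialize the SDK with a service account (placeholder path).
        cred = credentials.Certificate("service-account.json")
        firebase_admin.initialize_app(cred)

        # Build and send a single notification to one device token (placeholder).
        message = messaging.Message(
            notification=messaging.Notification(title="Hello", body="Sent via the Admin SDK"),
            token="DEVICE_REGISTRATION_TOKEN",
        )
        message_id = messaging.send(message)
        print("Sent:", message_id)
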
  • 3
    Anna’s Archive

    Comprehensive search engine for books, papers, comics, magazines

    Anna’s Archive is a large-scale open-source search engine and data aggregation platform designed to index and provide access to a vast collection of books, academic papers, comics, magazines, and other digital texts through a unified interface. The project includes all the infrastructure required to run a full instance locally or in production, combining web servers, databases, and search indexing systems into a scalable architecture. It relies heavily on technologies such as Elasticsearch...
    Downloads: 139 This Week
  • 4
    Amazon CodeGuru Profiler Python Agent

    ...Use CodeGuru Profiler to help profile your applications in the cloud from a single, centralized dashboard. CodeGuru Profiler currently supports applications written in all Java virtual machine (JVM) languages and Python.
    Downloads: 0 This Week
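
    A hedged sketch of attaching the agent in application code, assuming the Profiler class and profiling_group_name argument from the AWS documentation; the group name is a placeholder:

        from codeguru_profiler_agent import Profiler

        # Start sampling for a profiling group (placeholder name); AWS credentials
        # are picked up from the usual environment or instance configuration.
        Profiler(profiling_group_name="MyProfilingGroup").start()

        # ... the application's own code runs here while samples are reported ...
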
  • 5
    s3-client

    Sample Python script to work with Amazon S3

    Example Python script to work with S3.
    Downloads: 8 This Week
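
    A short sketch of the kind of S3 operations such a script performs; it uses boto3 (an assumption about the dependency) with placeholder bucket, key, and file names:

        import boto3

        s3 = boto3.client("s3")

        # Upload a local file, then list the bucket's contents (placeholder names).
        s3.upload_file("report.csv", "my-bucket", "reports/report.csv")
        for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
            print(obj["Key"], obj["Size"])
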
  • 6
    Scrapy

    A fast, high-level web crawling and web scraping framework

    Scrapy is a fast, open source, high-level framework for crawling websites and extracting structured data from these websites. Portable and written in Python, it can run on Windows, Linux, macOS and BSD. Scrapy is powerful, fast and simple, and also easily extensible. Simply write the rules to extract the data, and add new functionality if you wish without having to touch the core. Scrapy does the rest, and can be used in a number of applications. It can be used for data mining, monitoring...
    Downloads: 28 This Week
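
    A minimal spider illustrating the rule-writing model described above; the site and CSS selectors are placeholders:

        import scrapy

        class QuotesSpider(scrapy.Spider):
            name = "quotes"
            start_urls = ["https://quotes.toscrape.com"]  # placeholder site

            def parse(self, response):
                # Each rule below extracts one structured item from the page.
                for quote in response.css("div.quote"):
                    yield {
                        "text": quote.css("span.text::text").get(),
                        "author": quote.css("small.author::text").get(),
                    }
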
  • 7
    ScrapeGraphAI

    Python scraper based on AI

    Extract content from websites and local documents using LLMs. ScrapeGraphAI is a web scraping Python library that uses LLMs and direct graph logic to create scraping pipelines for websites and local documents (XML, HTML, JSON, Markdown, etc.). Just say which information you want to extract and the library will do it for you.
    Downloads: 14 This Week
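
    A hedged sketch of the prompt-driven pipeline the description refers to, assuming the SmartScraperGraph class and its prompt/source/config arguments from the project README; the model name, API key, and URL are placeholders:

        from scrapegraphai.graphs import SmartScraperGraph

        graph_config = {
            "llm": {"model": "openai/gpt-4o-mini", "api_key": "YOUR_API_KEY"},  # placeholders
        }

        scraper = SmartScraperGraph(
            prompt="List every article title on the page",
            source="https://example.com/blog",  # placeholder URL
            config=graph_config,
        )
        print(scraper.run())
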
  • 8
    theHarvester

    E-mails, subdomains and names

    theHarvester is a very simple to use, yet powerful and effective tool designed to be used in the early stages of a penetration test or red team engagement. Use it for open source intelligence (OSINT) gathering to help determine a company's external threat landscape on the internet. The tool gathers emails, names, subdomains, IPs and URLs using multiple public data sources.
    Downloads: 43 This Week
  • 9
    Pydoll

    Async Python library for automating Chromium browsers without WebDriver

    Pydoll is a Python library designed for automating Chromium-based web browsers such as Chrome and Edge without relying on a traditional WebDriver layer. Instead of using external drivers, it connects directly to the Chrome DevTools Protocol through WebSocket, allowing scripts to control browser behavior more efficiently and with fewer compatibility issues.
    Downloads: 7 This Week
  • 10
    Termux:X11

    Termux X11 add-on application

    Termux is an Android terminal emulator and Linux environment app that works directly with no rooting or setup required. A minimal base system is installed automatically - additional packages are available using the APT package manager. Termux combines standard packages with accurate terminal emulation in a beautiful open-source solution. Access API endpoints with curl and use rsync to store backups of your contact list on a remote server.
    Downloads: 318 This Week
  • 11
    nginx-proxy

    Automated nginx proxy for Docker containers using docker-gen

    nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped. The containers being proxied must expose the port to be proxied, either by using the EXPOSE directive in their Dockerfile or by using the --expose flag to docker run or docker create and be in the same network. By default, if you don't pass the --net flag when your nginx-proxy container is created, it will only be...
    Downloads: 9 This Week
  • 12
    Checkov

    Prevent cloud misconfigurations during build-time for Terraform

    ...Checkov uses a common command-line interface to manage and analyze infrastructure as code (IaC) scan results across platforms such as Terraform, CloudFormation, Kubernetes, Helm, ARM Templates and the Serverless Framework. Verify changes to hundreds of supported resource types in all major cloud providers. Checkov supports developers using Terraform, Terraform plan, CloudFormation, Kubernetes, ARM Templates, Serverless, Helm, and the AWS CDK. Scan cloud resources at build time for misconfigured attributes with a simple Python policy-as-code framework. Analyze relationships between cloud resources using Checkov’s graph-based YAML policies. Execute, test, and modify runner parameters in the context of a subject repository’s CI/CD and version control integrations.
    Downloads: 17 This Week
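
    A hedged sketch of the Python policy-as-code framework mentioned above, following the custom-check pattern from the Checkov documentation; the check id, resource type, and tag name are illustrative:

        from checkov.common.models.enums import CheckCategories, CheckResult
        from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck

        class S3BucketTagged(BaseResourceCheck):
            def __init__(self):
                super().__init__(
                    name="Ensure S3 buckets carry a 'team' tag",
                    id="CKV_CUSTOM_1",  # illustrative custom check id
                    categories=[CheckCategories.GENERAL_SECURITY],
                    supported_resources=["aws_s3_bucket"],
                )

            def scan_resource_conf(self, conf):
                # Terraform attribute values are wrapped in lists by Checkov's parser.
                tags = conf.get("tags", [{}])[0]
                return CheckResult.PASSED if "team" in tags else CheckResult.FAILED

        # Instantiating the class registers the check with Checkov's registry.
        check = S3BucketTagged()
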
  • 13
    Render Farm Deployment Kit on AWS (RFDK)

    Library for use with the AWS Cloud Development Kit

    The Render Farm Deployment Kit on AWS (RFDK) is an open-source software development kit (SDK) that can be used to deploy, configure, and manage your render farm infrastructure in the cloud. It offers high-level object-oriented abstractions to define render farm infrastructure using the power of Python and TypeScript. The RFDK is built to operate with the AWS Cloud Development Kit (CDK) and provides a library of classes, called constructs, that each deploy and configure a component of your cloud-based render farm. ...
    Downloads: 6 This Week
  • 14
    Selectolax

    Python binding to Modest and Lexbor engines

    A fast HTML5 parser with CSS selectors using the Modest and Lexbor engines. Selectolax supports two backends: Modest and Lexbor. By default, all examples use the Modest backend. Most features are almost identical between backends, but there are still some differences. Currently, the Lexbor backend is in beta and missing some features. To use Lexbor, just import the parser and use it in a similar way to the HTMLParser.
    Downloads: 7 This Week
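
    A minimal sketch using the default Modest backend; the HTML string is inline, and the Lexbor backend (selectolax.lexbor) is used the same way:

        from selectolax.parser import HTMLParser

        html = "<div><p class='title'>Hello</p><p>World</p></div>"
        tree = HTMLParser(html)

        # CSS selectors return nodes whose text can be extracted directly.
        print(tree.css_first("p.title").text())           # -> Hello
        print([node.text() for node in tree.css("p")])    # -> ['Hello', 'World']
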
  • 15
    PyFCM

    Python client for FCM - Firebase Cloud Messaging

    Python client for FCM - Firebase Cloud Messaging (Android, iOS and Web). Firebase Cloud Messaging (FCM) is the new version of GCM. It inherits the reliable and scalable GCM infrastructure, plus new features. GCM users are strongly recommended to upgrade to FCM. Using FCM, you can notify a client app that new email or other data is available to sync.
    Downloads: 0 This Week
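
    A hedged sketch of sending a single notification. PyFCM's constructor arguments have changed across releases, so this follows the older api_key-style interface and should be checked against the installed version; the key and token are placeholders:

        from pyfcm import FCMNotification

        # Older-style initialization (placeholder server key); newer releases
        # authenticate with service-account credentials instead.
        push_service = FCMNotification(api_key="YOUR_SERVER_KEY")

        result = push_service.notify_single_device(
            registration_id="DEVICE_REGISTRATION_TOKEN",  # placeholder token
            message_title="Hello",
            message_body="New data is available to sync",
        )
        print(result)
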
  • 16
    proxy.py

    Utilize all available CPU cores for accepting new client connections

    proxy.py is made with performance in mind. By default, proxy.py will try to utilize all CPU cores available to it for accepting new client connections. This is achieved by starting an AcceptorPool which listens on the configured server port. The AcceptorPool then starts Acceptor processes (--num-acceptors) to accept incoming client connections. Alongside, if --threadless is enabled, a ThreadlessPool is set up, which starts Threadless processes (--num-workers) to handle the incoming client connections....
    Downloads: 7 This Week
  • 17
    spider_collection

    Collection of Python web scraping scripts for data extraction tasks

    spider_collection is a collection of Python web crawler scripts created primarily for experimentation, learning, and practical scraping tasks. It gathers multiple independent spiders designed to collect data from different platforms and services, demonstrating a variety of scraping techniques and workflows. These crawlers make use of common Python scraping tools such as requests, parsel, BeautifulSoup, and the Scrapy framework to extract structured information from web pages. ...
    Downloads: 1 This Week
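
    A generic sketch of the requests + BeautifulSoup pattern these scripts build on; the URL and selector are placeholders, not taken from the repository:

        import requests
        from bs4 import BeautifulSoup

        resp = requests.get("https://example.com/articles", timeout=10)  # placeholder URL
        resp.raise_for_status()

        soup = BeautifulSoup(resp.text, "html.parser")
        for link in soup.select("a.article-title"):        # placeholder selector
            print(link.get_text(strip=True), link["href"])
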
  • 18
    autocrawler

    Multiprocess Selenium crawler for downloading images by keywords

    AutoCrawler is a Python-based image crawling tool designed to automatically download large numbers of images from search engines using automated browser interaction. It uses Selenium and a Chrome browser driver to navigate image search pages and collect image sources based on keywords provided by the user. AutoCrawler supports multiprocess and multithreaded downloading, which allows it to retrieve images faster by running several tasks simultaneously.
    Downloads: 2 This Week
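
    A hedged sketch of the Selenium pattern AutoCrawler builds on (open a Chrome driver, load an image search page, collect image sources); the search URL is a placeholder and this is not the project's actual code:

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        driver = webdriver.Chrome()
        driver.get("https://www.google.com/search?tbm=isch&q=cats")  # placeholder query

        # Collect the src attribute of every image element on the page.
        sources = [img.get_attribute("src") for img in driver.find_elements(By.TAG_NAME, "img")]
        print(len(sources), "image sources collected")
        driver.quit()
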
  • 19
    HTTPie Desktop

    Cross-platform API testing client for humans

    HTTPie Desktop is a graphical API client built on top of the popular HTTPie terminal tool, offering a user-friendly interface for testing and interacting with APIs. It combines the simplicity of HTTPie’s CLI with a modern desktop and web UI for a more visual workflow. Developers can easily build, send, and preview HTTP requests without needing to memorize commands or write scripts. The platform supports organizing work into spaces, collections, and tabs, making it ideal for managing multiple...
    Downloads: 26 This Week
  • 20
    Scrapy-Redis

    Redis-based components for Scrapy

    ...Scheduler + Duplication Filter, Item Pipeline, Base Spiders. The default requests serializer is pickle, but it can be changed to any module with loads and dumps functions. Note that pickle is not compatible between Python versions. Version 0.3 changed the requests serialization from marshal to cPickle, so requests persisted with version 0.2 will not work on 0.3. The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from Redis. The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those requests before fetching another URL from Redis.
    Downloads: 4 This Week
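
    A hedged sketch of a RedisSpider plus the settings the Scrapy-Redis README documents for wiring in its scheduler and duplicate filter; the Redis key and URL are placeholders:

        from scrapy_redis.spiders import RedisSpider

        class MySpider(RedisSpider):
            name = "myspider"
            redis_key = "myspider:start_urls"  # Redis list the spider pops URLs from

            def parse(self, response):
                yield {"url": response.url, "title": response.css("title::text").get()}

        # settings.py (excerpt)
        # SCHEDULER = "scrapy_redis.scheduler.Scheduler"
        # DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
        # REDIS_URL = "redis://localhost:6379"
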
  • 21
    Powerline

    Statusline plugin for vim with prompts for several other applications

    Powerline is a statusline plugin for vim, and provides statuslines and prompts for several other applications, including zsh, bash, tmux, IPython, Awesome, i3 and Qtile. Powerline was completely rewritten in Python to get rid of as much vimscript as possible. This has allowed much better extensibility, leaner and better config files, and a structured, object-oriented codebase with no mandatory third-party dependencies other than a Python interpreter. Using Python has allowed unit testing of all the project code. The code is tested to work in Python 2.6+ and Python 3. ...
    Downloads: 0 This Week
  • 22
    Crawl4AI

    Open-source LLM Friendly Web Crawler & Scraper

    Crawl4AI is a high-performance, AI‑ready web crawler tailored for LLM data ingestion and RAG pipelines. It supports adaptive crawling heuristics (stopping when enough info is gathered), structured markdown output, and high-speed parallel execution. Designed to operate at scale with optional Docker deployment and framework integrations.
    Downloads: 1 This Week
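
    A hedged sketch of the async usage shown in the Crawl4AI README (AsyncWebCrawler and its arun method); the URL is a placeholder:

        import asyncio
        from crawl4ai import AsyncWebCrawler

        async def main():
            async with AsyncWebCrawler() as crawler:
                result = await crawler.arun(url="https://example.com")  # placeholder URL
                print(result.markdown)  # markdown rendering of the page for LLM ingestion

        asyncio.run(main())
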
  • 23
    AWS SAM CLI

    CLI tool to build, test, debug, and deploy Serverless applications

    The AWS Serverless Application Model (SAM) CLI is an open-source CLI tool that helps you develop serverless applications containing Lambda functions, Step Functions, API Gateway, EventBridge, SQS, SNS and more. The AWS Serverless Application Model (SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With just a few lines per resource, you can define the application you want and...
    Downloads: 7 This Week
  • 24
    Linkedin Scraper

    A library that scrapes Linkedin for user data

    Linkedin Scraper is a library that scrapes LinkedIn for user data. Version 2.0.0 and before is called linkedin_user_scraper and can be installed via pip3 install --user linkedin_user_scraper. Because LinkedIn has recently blocked people from viewing certain profiles without having previously signed in, setting scrape=False doesn't automatically scrape the profile, but Chrome will open the LinkedIn page anyway. You can log in and log out, and the cookie will stay in the...
    Downloads: 9 This Week
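
    A hedged sketch following the README's Person/actions usage, including the scrape=False behavior described above; the credentials and profile URL are placeholders:

        from selenium import webdriver
        from linkedin_scraper import Person, actions

        driver = webdriver.Chrome()
        actions.login(driver, "email@example.com", "password")  # placeholder credentials

        # scrape=False opens the profile without scraping, so you can log in first;
        # calling person.scrape() afterwards collects the data.
        person = Person("https://www.linkedin.com/in/some-profile", driver=driver, scrape=False)
        person.scrape(close_on_complete=False)
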
  • 25
    douyin

    Open source Douyin crawler for collecting and downloading public data

    DouyinCrawler is an open source data collection tool designed to gather publicly available information from the Douyin platform. It demonstrates how to build a Python-based web crawler combined with a graphical interface and command line functionality. It allows users to collect data from various types of Douyin content, including user profiles, videos, hashtags, and music pages. DouyinCrawler supports both automated scraping and batch operations to process multiple targets efficiently. It...
    Downloads: 8 This Week