This repository contains Google's robots.txt parser and matcher library, written in C++.
The code is optimized for speed and low memory use, so large crawls can evaluate millions of URLs quickly. It also focuses on correctness: edge cases such as overlapping patterns and longest-match resolution are handled consistently. Consumers integrate it to decide whether a specific URL may be fetched by a particular bot name, and to respect crawl-delay or sitemap hints where applicable. The library serves both search-scale crawlers and smaller tools that need a reliable decision engine for polite crawling.
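
As a minimal sketch of how a consumer might drive that decision, the example below assumes the `googlebot::RobotsMatcher` class and its `OneAgentAllowedByRobots` method from `robots.h`; the caller is responsible for fetching the robots.txt body, and exact build setup and signatures may differ from this sketch.

```cpp
#include <iostream>
#include <string>

#include "robots.h"  // assumed header exposing googlebot::RobotsMatcher

int main() {
  // robots.txt content fetched by the caller; the library only evaluates it.
  const std::string robots_body =
      "user-agent: FooBot\n"
      "disallow: /private/\n";

  const std::string url = "https://example.com/private/page.html";

  // Ask whether a single named user-agent may fetch the URL.
  googlebot::RobotsMatcher matcher;
  const bool allowed =
      matcher.OneAgentAllowedByRobots(robots_body, "FooBot", url);

  std::cout << (allowed ? "allowed" : "disallowed") << std::endl;
  return 0;
}
```

In this sketch the matcher is stateless with respect to the network: a crawler would typically cache the robots.txt body per host and call the matcher once per candidate URL.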