Crawley crawls web pages and prints any link it can find. It uses a fast SAX-style HTML parser (powered by golang.org/x/net/html) and has a small (under 1500 SLOC), idiomatic, 100% test-covered codebase. It grabs most useful resource URLs (images, videos, audio, forms, etc.). Found URLs are streamed to stdout and guaranteed to be unique, with fragments omitted. Scan depth (limited by the starting host and path; 0 by default) is configurable, and Crawley can crawl rules and sitemaps from robots.txt. Brute mode additionally scans HTML comments for URLs (which can lead to bogus results). It honors the HTTP_PROXY / HTTPS_PROXY environment variables, handles proxy authentication, and offers a directory-only scan mode (aka fast scan).
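The "unique, with fragments omitted" guarantee above can be sketched with Go's standard net/url package. This is a minimal illustration of the idea, not crawley's actual code; the dedupe function name is hypothetical:

```go
package main

import (
	"fmt"
	"net/url"
)

// dedupe drops the fragment from each URL, then keeps only the first
// occurrence of each normalized result — so "/a#top" and "/a#bottom"
// collapse into a single "/a" entry.
func dedupe(raw []string) []string {
	seen := make(map[string]struct{})
	var out []string
	for _, r := range raw {
		u, err := url.Parse(r)
		if err != nil {
			continue // skip unparseable URLs
		}
		u.Fragment = "" // omit the fragment before comparing
		s := u.String()
		if _, ok := seen[s]; ok {
			continue
		}
		seen[s] = struct{}{}
		out = append(out, s)
	}
	return out
}

func main() {
	fmt.Println(dedupe([]string{
		"https://example.com/a#top",
		"https://example.com/a#bottom",
		"https://example.com/b",
	}))
	// → [https://example.com/a https://example.com/b]
}
```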

Features

  • Idiomatic, 100% test-covered codebase
  • Under 1500 SLOC
  • Grabs most useful resource URLs
  • Directory-only scan mode
  • Skips URLs that match configured substrings
  • Extracts API endpoints from JS files
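Extracting API endpoints from JS files is commonly done by scanning the script text for quoted root-relative or absolute URLs. A minimal Go sketch under that assumption — the regex and the extractEndpoints helper are hypothetical, not crawley's actual implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// endpointRe matches single- or double-quoted root-relative paths and
// absolute http(s) URLs inside JavaScript source — a simple heuristic
// for surfacing API endpoints.
var endpointRe = regexp.MustCompile(`["'](/[a-zA-Z0-9_./-]*|https?://[^"']+)["']`)

// extractEndpoints returns every quoted path/URL found in the JS text.
func extractEndpoints(js string) []string {
	var out []string
	for _, m := range endpointRe.FindAllStringSubmatch(js, -1) {
		out = append(out, m[1])
	}
	return out
}

func main() {
	js := `fetch("/api/v1/users"); const img = '/static/logo.png';`
	fmt.Println(extractEndpoints(js))
	// → [/api/v1/users /static/logo.png]
}
```

A regex pass like this trades precision for speed: it will also pick up non-API paths (such as the image above), so real tooling usually filters the results further.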


Categories

Web Scrapers

License

MIT License



Additional Project Details

Operating Systems

Mac, Windows

Programming Language

Go

Related Categories

Go Web Scrapers

Registered

2023-04-12