...Save screenshots as crawling evidence, emulate devices and user agents, use a priority queue for crawling efficiency, obey robots.txt, and more. Static crawlers simply issue HTTP requests for HTML files. They are generally fast, but they fail to scrape content that is rendered dynamically in the browser. Dynamic crawlers based on PhantomJS or Selenium handle such dynamic applications well. However, PhantomJS's maintainer has stepped down and recommends switching to Headless Chrome, which is fast and stable. This crawler is dynamic and based on Headless Chrome.
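Two of the features above, the priority queue and robots.txt compliance, can be sketched in a few lines. This is only an illustration, not this project's actual implementation: the `Frontier` class and the priority values are hypothetical, and the robots.txt rules are parsed from an inline string rather than fetched.

```python
import heapq
from urllib import robotparser

class Frontier:
    """Minimal priority-queue URL frontier: lower priority value = crawled sooner."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so heapq never has to compare URLs directly

    def push(self, url, priority=0):
        heapq.heappush(self._heap, (priority, self._counter, url))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

# robots.txt rules can be checked with the standard-library parser;
# here we feed it rules directly instead of fetching them over HTTP.
robots = robotparser.RobotFileParser()
robots.parse("User-agent: *\nDisallow: /private/\n".splitlines())

frontier = Frontier()
for url, prio in [("https://example.com/private/a", 0),  # disallowed by robots.txt
                  ("https://example.com/page", 1),
                  ("https://example.com/", 0)]:
    if robots.can_fetch("*", url):  # obey robots.txt before enqueuing
        frontier.push(url, prio)

print(frontier.pop())  # the lowest-priority-value allowed URL comes out first
```

The tie-breaking counter also keeps ordering stable among URLs with equal priority, so the frontier behaves as a FIFO within each priority level.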