You can start multiple spider instances that share a single Redis queue, which makes it best suited for broad multi-domain crawls. Scraped items get pushed into a Redis queue, meaning you can start as many post-processing processes as needed, all sharing the same items queue. The provided components are a Scheduler with a Duplication Filter, an Item Pipeline, and base spiders.

The default request serializer is pickle, but it can be changed to any module that provides loads and dumps functions. Note that pickle is not compatible between Python versions. Version 0.3 changed request serialization from marshal to cPickle, so requests persisted with version 0.2 will not work with 0.3.

The class scrapy_redis.spiders.RedisSpider enables a spider to read its URLs from Redis. The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from Redis.
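
The description above notes that the request serializer can be any module (or object) exposing loads and dumps functions. A minimal illustrative sketch using the standard json module (this is an example of the required interface, not the library's default, which remains pickle):

```python
import json

# Illustrative stand-in for the default pickle serializer: anything
# exposing compatible loads/dumps functions satisfies the interface.
class JsonSerializer:
    @staticmethod
    def dumps(obj):
        # Serialize to bytes, mirroring what pickle.dumps returns.
        return json.dumps(obj).encode("utf-8")

    @staticmethod
    def loads(data):
        return json.loads(data.decode("utf-8"))

request = {"url": "http://example.com", "method": "GET"}
restored = JsonSerializer.loads(JsonSerializer.dumps(request))
assert restored == request
```

Unlike pickle, a JSON round-trip like this is also stable across Python versions, which sidesteps the compatibility caveat mentioned above (at the cost of only handling JSON-serializable data).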

Features

  • Distributed crawling/scraping
  • Distributed post-processing
  • Scrapy plug-and-play components
  • Python 2.7, 3.4 or 3.5 required
  • Redis >= 2.8 required
  • Scheduler + Duplication Filter, Item Pipeline, Base Spiders
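
The plug-and-play components listed above (scheduler, duplication filter, item pipeline) are enabled through a Scrapy project's settings. A sketch of what that wiring might look like, with class paths following the scrapy_redis conventions and the Redis URL as a placeholder for your own server:

```python
# Sketch of Scrapy settings enabling the scrapy-redis components
# (treat exact keys and class paths as assumptions if your version differs).

# Replace Scrapy's scheduler so requests are queued in Redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Deduplicate requests across all spider instances via Redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Push scraped items into a Redis queue for shared post-processing.
ITEM_PIPELINES = {
    "scrapy_redis.pipelines.RedisPipeline": 300,
}

# Placeholder connection string; point this at your own Redis server.
REDIS_URL = "redis://localhost:6379"
```

Because the scheduler state and duplication filter live in Redis rather than in process memory, any number of spider instances started with these settings share one crawl frontier.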


License

MIT License




Additional Project Details

Programming Language

Python

Related Categories

Python Browsers, Python Web Scrapers

Registered

2021-11-09