You can start multiple spider instances that share a single Redis queue, which makes it best suited to broad, multi-domain crawls. Scraped items are pushed into a Redis queue, so you can run as many post-processing workers as needed, all sharing the items queue.

The default requests serializer is pickle, but it can be swapped for any module that exposes loads and dumps functions. Note that pickle is not compatible across Python versions. Version 0.3 changed the request serialization from marshal to cPickle, so requests persisted with version 0.2 will not work with 0.3.

The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from Redis. The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from Redis.
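The serializer contract above is duck-typed: any module (or object) with loads and dumps functions will do. As an illustrative sketch only, the standard-library json module satisfies the same interface as the default pickle, though unlike pickle it cannot serialize arbitrary Python objects:

```python
import json
import pickle

def roundtrip(serializer, obj):
    # Works with any serializer exposing loads/dumps,
    # which is all scrapy-redis requires of a custom serializer.
    return serializer.loads(serializer.dumps(obj))

# Hypothetical stand-in for serialized request data.
request_stub = {"url": "http://example.com", "priority": 0}

assert roundtrip(pickle, request_stub) == request_stub
assert roundtrip(json, request_stub) == request_stub
```

Because pickled data is not guaranteed to be compatible across Python versions, a persisted queue should be drained or cleared before upgrading the Python interpreter.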

Features

  • Distributed crawling/scraping
  • Distributed post-processing
  • Scrapy plug-and-play components
  • Python 2.7, 3.4 or 3.5 required
  • Redis >= 2.8 required
  • Scheduler + Duplication Filter, Item Pipeline, Base Spiders
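The plug-and-play components are enabled through Scrapy settings. A minimal settings.py sketch, assuming a Redis server on localhost and default component paths:

```python
# settings.py fragment (configuration sketch, not a full project config)

# Route scheduling and duplicate filtering through Redis so that
# multiple spider processes share one queue and one seen-requests set.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Keep the queue and dupefilter in Redis between runs (pause/resume).
SCHEDULER_PERSIST = True

# Push scraped items into a Redis list for external post-processing workers.
ITEM_PIPELINES = {
    "scrapy_redis.pipelines.RedisPipeline": 300,
}

# Redis connection; adjust host/port for your deployment.
REDIS_URL = "redis://localhost:6379"
```

With this in place, each additional spider process started with the same settings joins the shared crawl rather than starting its own.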


License

MIT License


Additional Project Details

Programming Language

Python

Related Categories

Python Browsers, Python Web Scrapers

Registered

2021-11-09