Suggesting new way to schedule requests #119

@botzill

Description

Hi.

This approach, adding new requests when the spider is idle, works well, but I think we can improve it. Here is my idea:

Imagine that we configured our spider to handle a high load, for example:

CONCURRENT_REQUESTS = 100
CONCURRENT_ITEMS = 200
DOWNLOAD_DELAY = 0.15

(I know it's not ideal to make so many requests, but there are cases where we can.)
Now, according to the docs https://doc.scrapy.org/en/latest/topics/signals.html#spider-idle, the idle state means the spider has no further:

  • requests waiting to be downloaded
  • requests scheduled
  • items being processed in the item pipeline

Why do we need to wait until items in the pipeline are processed? There may be DB inserts and other things that slow the pipeline down, but we don't need to wait for them; we can process new requests in the meantime. Currently, though, the spider waits for everything to finish before a new batch of requests is added. My suggestion is a task that runs every x seconds, checks the scheduler queue size, and adds new requests even if some are already queued. Example (prototype code):

from twisted.internet import task


class RedisMixin(object):
    # .... existing code

    def setup_redis(self, crawler=None):
        # .... existing code
        # Periodically check the scheduler queue size; the interval
        # could come from a setting instead of being hard-coded.
        self.task = task.LoopingCall(self.check_scheduler_size, crawler)
        self.task.start(60)  # seconds

    def check_scheduler_size(self, crawler):
        # The scheduler implements __len__, so this is the number of
        # requests currently pending in the queue.
        queue_size = len(crawler.engine.slot.scheduler)

        if queue_size <= crawler.settings.getint('MIN_QUEUE_SIZE'):
            # Queue is running low: fetch the next batch of requests.
            self.schedule_next_requests()
            # Some logs if needed
        # Otherwise do nothing; we already have enough requests queued.

    # .... existing code

This way we always keep some requests in the queue so the spider does not go idle (we can still keep the idle handling as a fallback). The spider stays busy and finishes sooner, while we keep a reasonable number of requests queued and fetch new batches from the DB as needed.

Let me know what you think about this approach. I can contribute with a PR.

Thx.
