Pause scrapy. Can I get a breakdown?

I want to be able to start/pause/resume a spider, and I am trying to use

scrapy crawl somespider -s JOBDIR=crawls/somespider-1

However, on my end it's mostly just copy-and-paste, because there isn't much information about what's actually going on here. Does anyone have more details?

I get the first part, but I have no idea what actually happens with the JOBDIR=crawls/somespider-1 part. I have seen people write the command like this

scrapy crawl somespider -s JOBDIR=crawls/somespider

.. without the -1, and I don't know what difference that makes. One thing I did notice: I have a tendency to hammer CTRL+C to exit, which, from what I've read and from my own experience, is apparently terrible, because if I type the command again

scrapy crawl somespider -s JOBDIR=crawls/somespider-1

.. it just finishes immediately, as if the spider had already completed.

After making that mistake, how do I "reset" it? If I take out the -1 it works again, but I don't know whether I'm losing something by doing that.

As explained in the docs, Scrapy lets you pause and resume crawls, but you need the JOBDIR setting (passed on the command line with the -s option in the examples below).

The JOBDIR value should be the path to a directory on your filesystem, where Scrapy persists the various objects it needs in order to resume the work it still has to do.
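
Concretely, after pausing a run you can peek inside that directory. A minimal sketch of what it typically contains (file names as in current Scrapy versions; treat them as illustrative, not guaranteed):

ls crawls/somespider-1
# requests.queue -> pending (not yet downloaded) requests, serialized to disk
# requests.seen  -> fingerprints of requests already seen (duplicate filter state)
# spider.state   -> pickled contents of the spider's state dict, if used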

Note that separate crawls need to point at different directories:

This directory will be for storing all required data to keep the state of a single job (ie. a spider run). It’s important to note that this directory must not be shared by different spiders, or even different jobs/runs of the same spider, as it’s meant to be used for storing the state of a single job.

Copying from that documentation page:

scrapy crawl somespider -s JOBDIR=crawls/somespider-1
             ----------           -------------------
                 |                         |       
         name of your spider               |        
                                           |
                               relative path where to save stuff

Another example of a scrapy crawl command using JOBDIR could be:

scrapy crawl myspider -s JOBDIR=/home/myuser/crawldata/myspider_run_32

Example timeline:

scrapy crawl myspider -s JOBDIR=/home/myuser/crawldata/myspider_run_001
# pause using Ctrl-C ...
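# (press Ctrl-C only ONCE and let the graceful shutdown finish;
#  a second Ctrl-C forces an unclean stop and the saved state may be incomplete)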

# ...let's continue where it left off
scrapy crawl myspider -s JOBDIR=/home/myuser/crawldata/myspider_run_001
# crawl finished properly.
# (and /home/myuser/crawldata/myspider_run_001 should not contain anything now)

# now you want to crawl a 2nd time, from the beginning
scrapy crawl myspider -s JOBDIR=/home/myuser/crawldata/myspider_run_002
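
As for "resetting" after an unclean exit: the job state lives entirely in the JOBDIR, so to start from scratch you can delete that directory, or simply point JOBDIR at a fresh one (which is what the run_001 / run_002 suffixes above are for). A minimal sketch, reusing the paths from the question:

# start over: throw away the saved job state...
rm -rf crawls/somespider-1
scrapy crawl somespider -s JOBDIR=crawls/somespider-1

# ...or just use a brand-new directory for the fresh run
scrapy crawl somespider -s JOBDIR=crawls/somespider-2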