Scrapy return/pass data to another module

Hi, I would like to know how to pass the crawl result (a pandas DataFrame) back to the module that creates the spider.

from scrapy.crawler import CrawlerProcess

import mySpider as mspider


def main():
    spider1 = mspider.MySpider()
    process = CrawlerProcess()
    process.crawl(spider1)
    process.start()
    print(len(spider1.result))  # prints 0

The spider:

import pandas as pd
import scrapy
from scrapy import Request

import config


class MySpider(scrapy.Spider):
    name = 'MySpider'
    allowed_domains = config.ALLOWED_DOMAINS
    result = pd.DataFrame(columns=...)

    def start_requests(self):
        yield Request(url=..., headers=config.HEADERS, callback=self.parse)

    def parse(self, response):
        # ... code that adds the scraped values to self.result ...
        print("size: " + str(len(self.result)))

The value printed in the main method is 0, while in the parse method it is 1005. Can you tell me how I should pass the value back?

I want to do this because I am running multiple spiders. After they finish crawling, I will merge the results and save them to a file.

Solution

from scrapy import signals
from scrapy.crawler import CrawlerProcess

import mySpider as mspider


def spider_closed(spider, reason):
    print("Size: " + str(len(spider.result)))


def main():
    crawler_process = CrawlerProcess()
    # create_crawler() takes the spider class; Scrapy builds its own instance
    crawler = crawler_process.create_crawler(mspider.MySpider)
    crawler.signals.connect(spider_closed, signals.spider_closed)
    # pass the same crawler to crawl() so the connected handler actually fires
    crawler_process.crawl(crawler)
    crawler_process.start()

The main reason for this behavior is the asynchronous nature of Scrapy itself. The print(len(spider1.result)) line will be executed before the .parse() method is called.
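One way to wait for the crawl, sketched here on the assumption that the same mySpider module is importable, is to chain a callback on the Deferred that crawl() returns, since that Deferred only fires once the crawl has finished; the report() helper below is purely illustrative:

from scrapy.crawler import CrawlerProcess

import mySpider as mspider


def report(_result, crawler):
    # crawler.spider is the instance Scrapy actually ran,
    # not the one constructed by hand in main()
    print(len(crawler.spider.result))


process = CrawlerProcess()
crawler = process.create_crawler(mspider.MySpider)

# crawl() returns a Deferred that fires when this crawl is done,
# so the callback runs only after parse() has filled result
deferred = process.crawl(crawler)
deferred.addCallback(report, crawler)

process.start()  # blocks until all scheduled crawls have finished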

There are multiple ways to wait for the spider to finish. I would use the spider_closed signal:

from scrapy import signals
from scrapy.crawler import CrawlerProcess

import mySpider as mspider


def spider_closed(spider, reason):
    print(len(spider.result))


crawler_process = CrawlerProcess(settings)  # settings: your Scrapy Settings object

# build the crawler from the spider class and connect the signal handler to it
crawler = crawler_process.create_crawler(mspider.MySpider)
crawler.signals.connect(spider_closed, signals.spider_closed)

# schedule that same crawler and block until it has finished
crawler_process.crawl(crawler)
crawler_process.start()
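
Since the question mentions running several spiders and merging their output afterwards, here is a rough sketch of how the same signal can collect every spider's DataFrame and write one file at the end. The second spider class and the output path are placeholders, not part of the original code.

from scrapy import signals
from scrapy.crawler import CrawlerProcess

import pandas as pd

import mySpider as mspider

collected = []  # one result DataFrame per finished spider


def collect_result(spider, reason):
    collected.append(spider.result)


crawler_process = CrawlerProcess()

# AnotherSpider stands in for whatever other spiders are being run
for spider_cls in (mspider.MySpider, mspider.AnotherSpider):
    crawler = crawler_process.create_crawler(spider_cls)
    crawler.signals.connect(collect_result, signals.spider_closed)
    crawler_process.crawl(crawler)

crawler_process.start()  # blocks until every spider has finished

# merge and save once everything is done
merged = pd.concat(collected, ignore_index=True)
merged.to_csv("merged_results.csv", index=False)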