How to get the url trail followed by spider for each item extracted?

I am currently working on a spider that crawls an e-commerce website and extracts product data. For each extracted item I also need to save the URL trail, e.g.:

{
    'product_name': 'apple iphone 12',
    'trail': ['https://www.apple.com/', 'https://www.apple.com/iphone/', 'https://www.apple.com/iphone-12/']
}

That is, the same path a user would follow from the start page to the product.

I am using Scrapy 2.4.1.

So far I pass the previous URL to the callback as a keyword argument, following the example from the docs:

Source: https://docs.scrapy.org/en/latest/topics/request-response.html#topics-request-response-ref-request-callback-arguments

def parse(self, response):
    request = scrapy.Request('http://www.example.com/index.html',
                             callback=self.parse_page2,
                             cb_kwargs=dict(main_url=response.url))
    request.cb_kwargs['foo'] = 'bar'  # add more arguments for the callback
    yield request

def parse_page2(self, response, main_url, foo):
    yield dict(
        main_url=main_url,
        other_url=response.url,
        foo=foo,
    )