Scrapy parse list of urls, open one by one and parse additional data

I am trying to scrape a website, an e-commerce store. I parse a page of products (which are loaded via AJAX), get the URLs of those products, and then follow each of the parsed URLs to scrape additional data for every product.

My script gets the list of the first 4 items on the page and their URLs, makes the requests, and parses the additional info, but then it never returns to the loop, so the spider closes.

Could someone help me with this? I am very new to this kind of thing and completely stuck, so please ask here if you need more details.

Here is my code:

from scrapy import Spider
from scrapy.selector import Selector
from scrapy.http.request import Request
from scrapy_sokos.items import SokosItem


class SokosSpider(Spider):
    name = "sokos"
    allowed_domains = ["sokos.fi"]
    base_url = "http://www.sokos.fi/fi/SearchDisplay?searchTermScope=&searchType=&filterTerm=&orderBy=8&maxPrice=&showResultsPage=true&beginIndex=%s&langId=-11&sType=SimpleSearch&metaData=&pageSize=4&manufacturer=&resultCatEntryType=&catalogId=10051&pageView=image&searchTerm=&minPrice=&urlLangId=-11&categoryId=295401&storeId=10151"
    start_urls = [
        "http://www.sokos.fi/fi/SearchDisplay?searchTermScope=&searchType=&filterTerm=&orderBy=8&maxPrice=&showResultsPage=true&beginIndex=0&langId=-11&sType=SimpleSearch&metaData=&pageSize=4&manufacturer=&resultCatEntryType=&catalogId=10051&pageView=image&searchTerm=&minPrice=&urlLangId=-11&categoryId=295401&storeId=10151",
    ]

    for i in range(0, 8, 4):
        start_urls.append((base_url) % str(i))


    def parse(self, response):
        products = Selector(response).xpath('//div[@class="product-listing product-grid"]/article[@class="product product-thumbnail"]')
        for product in products:
            item = SokosItem()
            item['url'] = product.xpath('//div[@class="content"]/a[@class="image"]/@href').extract()[0]

            yield Request(url = item['url'], meta = {'item': item}, callback=self.parse_additional_info) 

    def parse_additional_info(self, response):
        item = response.meta['item']
        item['name'] = Selector(response).xpath('//h1[@class="productTitle"]/text()').extract()[0].strip()
        item['description'] = Selector(response).xpath('//div[@id="kuvaus"]/p/text()').extract()[0]
        euro = Selector(response).xpath('//strong[@class="special-price"]/span[@class="euros"]/text()').extract()[0]
        cent = Selector(response).xpath('//strong[@class="special-price"]/span[@class="cents"]/text()').extract()[0]
        item['price'] = '.'.join(euro + cent)
        item['number'] = Selector(response).xpath('//@data-productid').extract()[0]
        yield item

The AJAX requests you are simulating are being caught by Scrapy's duplicate URL filter.

Set dont_filter to True when yielding the Request:

yield Request(url=item['url'], 
              meta={'item': item},    
              callback=self.parse_additional_info, 
              dont_filter=True)
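
dont_filter=True tells the scheduler not to discard the request even if its fingerprint has already been seen, so repeated URLs are downloaded again.

As a side note, the XPath inside the parse loop begins with //, which in Scrapy selectors always searches the whole document rather than the current product node, so every iteration extracts the same first href and all but the first of those requests are treated as duplicates. If the intent is one URL per product, making the XPath relative with a leading dot fixes this at the source. A minimal sketch of the rewritten parse method, reusing the imports and SokosItem from the question:

def parse(self, response):
    products = Selector(response).xpath('//div[@class="product-listing product-grid"]/article[@class="product product-thumbnail"]')
    for product in products:
        item = SokosItem()
        # ".//" keeps the query relative to the current <article> node
        item['url'] = product.xpath('.//div[@class="content"]/a[@class="image"]/@href').extract()[0]
        yield Request(url=item['url'], meta={'item': item},
                      callback=self.parse_additional_info)

Prefixing the expression with a dot is what the Scrapy documentation recommends for relative XPaths; once each item yields a distinct product URL, the duplicate filter no longer discards the follow-up requests.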