Scrapy cannot reach start_urls: DEBUG: Crawled (200) and ERROR

I am trying to use Scrapy to scrape information from a sneaker website for a university project. The idea is to have Scrapy follow each link to each shoe and scrape four data points (name, release_date, retail_price, resell_price), then go back to the previous page, click the next link, and scrape again. At the end of the page, click through to the next page and repeat until there are no more links.

However, whenever Scrapy tries to reach the given start_url, I run into the following DEBUG and ERROR messages:
2020-04-06 11:59:56 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stockx.com/sneakers/release-date?page=1> (referer: None)
2020-04-06 11:59:56 [scrapy.core.scraper] ERROR: Spider error processing <GET https://stockx.com/sneakers/release-date?page=1> (referer: None)

Here is the code:

import scrapy

class Spider200406Item(scrapy.Item):
    link = scrapy.Field()
    name = scrapy.Field()
    release_date = scrapy.Field()
    retail_price = scrapy.Field()
    resell_price = scrapy.Field()


class Spider200406Spider(scrapy.Spider):
    name = 'spider_200406'
    allowed_domains = ['www.stockx.com']
    start_urls = ['https://stockx.com/sneakers/release-date?page=1']

    BASE_URL = 'https://stockx.com/sneakers/release-date'

    def parse(self, response):
        links = response.xpath('//a[@class="TileBody-sc-1d2ws1l-0 bKAXcS"/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_info)

    def parse_info(self, response):
        item = Spider200406Item()
        item["link"] = response.url
        item["name"] = "".join(response.xpath("//h1[@class='name']//text()").extract())
        item["release_date"] = "".join(response.xpath("//span[@data-testid='product-detail-release date']//text()").extract())
        item["retail_price"] = "".join(response.xpath("//span[@data-testid='product-detail-retail price']//text()").extract())
        item["resell_price"] = "".join(response.xpath("//div[@class='gauge-value']//text()").extract())
        return item

I have also tried the same code structure on a much simpler website, but I received the same error message, which leads me to conclude that there must be something wrong with the code.

The full traceback:

2020-04-06 14:33:02 [scrapy.core.engine] INFO: Spider opened
2020-04-06 14:33:02 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-04-06 14:33:02 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-04-06 14:33:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://stockx.com/sneakers/release-date?page=1> (referer: None)
2020-04-06 14:33:03 [scrapy.core.scraper] ERROR: Spider error processing <GET https://stockx.com/sneakers/release-date?page=1> (referer: None)
Traceback (most recent call last):
  File "/Applications/anaconda3/lib/python3.7/site-packages/parsel/selector.py", line 238, in xpath
    **kwargs)
  File "src/lxml/etree.pyx", line 1581, in lxml.etree._Element.xpath
  File "src/lxml/xpath.pxi", line 305, in lxml.etree.XPathElementEvaluator.__call__
  File "src/lxml/xpath.pxi", line 225, in lxml.etree._XPathEvaluatorBase._handle_result
lxml.etree.XPathEvalError: Invalid predicate

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/utils/defer.py", line 117, in iter_errback
    yield next(it)
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/utils/python.py", line 345, in __next__
    return next(self.data)
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/utils/python.py", line 345, in __next__
    return next(self.data)
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/spidermiddlewares/referer.py", line 338, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/core/spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "/Users/ritterm/Desktop/Data2Dollar_Coding/Group_project/stockx_200406/stockx_200406/spiders/spider_200406.py", line 20, in parse
    links = response.xpath('//a[@class="TileBody-sc-1d2ws1l-0 bKAXcS"/@href').extract()
  File "/Applications/anaconda3/lib/python3.7/site-packages/scrapy/http/response/text.py", line 117, in xpath
    return self.selector.xpath(query, **kwargs)
  File "/Applications/anaconda3/lib/python3.7/site-packages/parsel/selector.py", line 242, in xpath
    six.reraise(ValueError, ValueError(msg), sys.exc_info()[2])
  File "/Applications/anaconda3/lib/python3.7/site-packages/six.py", line 692, in reraise
    raise value.with_traceback(tb)
  File "/Applications/anaconda3/lib/python3.7/site-packages/parsel/selector.py", line 238, in xpath
    **kwargs)
  File "src/lxml/etree.pyx", line 1581, in lxml.etree._Element.xpath
  File "src/lxml/xpath.pxi", line 305, in lxml.etree.XPathElementEvaluator.__call__
  File "src/lxml/xpath.pxi", line 225, in lxml.etree._XPathEvaluatorBase._handle_result
ValueError: XPath error: Invalid predicate in //a[@class="TileBody-sc-1d2ws1l-0 bKAXcS"/@href
2020-04-06 14:33:03 [scrapy.core.engine] INFO: Closing spider (finished)

Many thanks for any suggestions and ideas.

allowed_domains = ['stockx.com']

There are several errors in your code that keep Scrapy from succeeding.

First, as already pointed out, correct your allowed_domains to allowed_domains = ['stockx.com'] or remove the line entirely. With 'www.stockx.com' in that list, Scrapy's offsite middleware drops every request to https://stockx.com/..., because that host neither equals www.stockx.com nor is a subdomain of it.
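If you want to verify that matching rule yourself, the same domain check is available as a helper in scrapy.utils.url; a minimal sketch (the product path here is just a placeholder):

from scrapy.utils.url import url_is_from_any_domain

# A URL passes the offsite check only if its host equals an allowed
# domain or is a subdomain of one.
url_is_from_any_domain('https://stockx.com/some-shoe', ['www.stockx.com'])  # False -> request filtered
url_is_from_any_domain('https://stockx.com/some-shoe', ['stockx.com'])      # True  -> request crawled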

Your BASE_URL is also wrong. Change it to BASE_URL = 'https://stockx.com'. The hrefs you extract are paths relative to the site root, so prefixing them with the /sneakers/release-date path produces URLs that do not exist.
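As an aside, rather than hard-coding a base URL you can let Scrapy resolve each href against the page it was found on; response.urljoin handles both relative paths and absolute URLs, so a variant of parse could look like this:

def parse(self, response):
    for link in response.css('.browse-grid a::attr(href)').extract():
        # urljoin resolves the href against response.url, so this keeps
        # working even if the site ever switches to absolute links
        yield scrapy.Request(response.urljoin(link), callback=self.parse_info)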

Furthermore, as the stack trace shows, there is an error in your xpath: the predicate [@class="..." is never closed, and the /@href step belongs outside the predicate, which is why lxml raises Invalid predicate. I solved it by using a much simpler css selector to get the link to each shoe page: response.css('.browse-grid a::attr(href)').extract()
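For reference, the syntactically corrected version of your original expression would be the line below, with the closing ] added and @href moved outside the predicate. Whether that auto-generated class name still matches the live page is a separate question, which is why the css selector is the safer choice:

links = response.xpath('//a[@class="TileBody-sc-1d2ws1l-0 bKAXcS"]/@href').extract()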

Putting it all together, the following code should do exactly what you want:

import scrapy

class Spider200406Item(scrapy.Item):
    link = scrapy.Field()
    name = scrapy.Field()
    release_date = scrapy.Field()
    retail_price = scrapy.Field()
    resell_price = scrapy.Field()


class Spider200406Spider(scrapy.Spider):
    name = 'spider_200406'
    start_urls = ['https://stockx.com/sneakers/release-date?page=1']
    allowed_domains = ['stockx.com']  # bare registered domain, so requests to stockx.com pass the offsite filter
    BASE_URL = 'https://stockx.com'   # site root; the extracted hrefs are root-relative paths

    def parse(self, response):
        # collect the relative link to every shoe tile on the overview page
        links = response.css('.browse-grid a::attr(href)').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_info)

    def parse_info(self, response):
        # scrape the four data points from a single shoe detail page;
        # "".join concatenates text that is split across child elements
        item = Spider200406Item()
        item["link"] = response.url
        item["name"] = "".join(response.xpath("//h1[@class='name']//text()").extract())
        item["release_date"] = "".join(response.xpath("//span[@data-testid='product-detail-release date']//text()").extract())
        item["retail_price"] = "".join(response.xpath("//span[@data-testid='product-detail-retail price']//text()").extract())
        item["resell_price"] = "".join(response.xpath("//div[@class='gauge-value']//text()").extract())
        return item

Make sure you set a user agent in your settings, e.g. USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'.
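For completeness, that setting can live project-wide in settings.py or per spider via custom_settings; both are standard Scrapy mechanisms:

# settings.py (project-wide)
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'

# or directly on the spider:
class Spider200406Spider(scrapy.Spider):
    name = 'spider_200406'
    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36',
    }

You can then run the spider and export the scraped items with, for example, scrapy crawl spider_200406 -o sneakers.csv (the output filename is just an example).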