My scrapy spider gives an error that I cannot understand

I just finished a course on DataCamp and wanted to try out what I learned, but I'm running into errors that I can't make sense of. My code is:

import scrapy
from scrapy.crawler import CrawlerProcess

class myspider(scrapy.Spider):
    name = 'my_spider'

    name_loc = '//div[@id="divListView"]//h4/a/text()'
    price_loc = 'div.price > span::text'
    def start_requests(self):
        yield scrapy.Request('url=https://www.czone.com.pk/mouse-pakistan-ppt.95.aspx', callback=self.parse)

    def parse(self, reponse):
        page1_prices = response.css(myspider.price_loc).extract()
        page1_names = response.xpath(myspider.name_loc).extract()
        for price, name in zip(page1_prices, page1_names):
            mice_names.append(name)
            mice_prices.append(price)
        links = response.css('a.PageNumber::attr(href)').extract()
        for link in links:
            yield response.follow(url=link, callback=self.parse_pages)

    def parse_pages(self, response):
        page_prices = response.css(myspider.price_loc).extract()
        page_names = response.xpath(myspider.name_loc).extract()
        for price, name in zip(page_prices, page_names):
            mice_names.append(name)
            mice_prices.append(price)
mice_names = []
mice_prices = []

process = CrawlerProcess()
process.crawl(myspider)
process.start()
print(mice_names)
print(mice_prices)

I'm trying to scrape the names and prices of mice from this website: https://www.czone.com.pk/mouse-pakistan-ppt.95.aspx, and I want to go through all of the pages listing mice. Even if I comment out the parts related to visiting the other pages, it still doesn't work and gives the same error. I tested the XPath and CSS selectors separately and they seem to work fine. I even tried comparing my code with the DataCamp example code, but I still can't find the mistake.

The error is:

2020-05-21 01:47:50 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: scrapybot)
2020-05-21 01:47:50 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.2 (tags/v3.8.
2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 2.9.2, Platform Windows-10-10.0.18362-SP0
2020-05-21 01:47:50 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-05-21 01:47:50 [scrapy.crawler] INFO: Overridden settings:
{}
2020-05-21 01:47:50 [scrapy.extensions.telnet] INFO: Telnet Password: 2ebba07815410a9b
2020-05-21 01:47:50 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2020-05-21 01:47:50 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',

 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-05-21 01:47:50 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'downloader/request_method_count/GET': 1,
 'elapsed_time_seconds': 0.32308,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 5, 20, 20, 47, 51, 102851),
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 5, 20, 20, 47, 50, 779771)}
2020-05-21 01:47:51 [scrapy.core.engine] INFO: Spider closed (finished)
[]
[]

I'm completely lost. I don't know what this output means. How can I fix this? Please try to walk me through what the error section means.

As far as I can tell, your error is in start_requests.

def start_requests(self):
        yield scrapy.Request('url=https://www.czone.com.pk/mouse-pakistan-ppt.95.aspx', callback=self.parse)

You have put url= inside the quotes: 'url=https://www.czone.com.pk/mouse-pakistan-ppt.95.aspx'. I think what you meant to do is:

def start_requests(self):
        yield scrapy.Request(url='https://www.czone.com.pk/mouse-pakistan-ppt.95.aspx', callback=self.parse)

Your code is still syntactically valid, which is why you don't get an error message pointing directly at the problem. These are some of the hardest errors to debug...
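
For reference, here is a minimal corrected sketch of the whole spider, keeping the URL, selectors and overall structure from the question and only fixing the request URL plus the reponse/response name mismatch in parse (the parameter is spelled reponse in the signature while the body uses response, which would raise a NameError once the callback runs):

import scrapy
from scrapy.crawler import CrawlerProcess

# Lists that the spider fills in as it parses each page
mice_names = []
mice_prices = []

class myspider(scrapy.Spider):
    name = 'my_spider'

    name_loc = '//div[@id="divListView"]//h4/a/text()'
    price_loc = 'div.price > span::text'

    def start_requests(self):
        # Pass the URL via the url keyword argument instead of baking "url=" into the string
        yield scrapy.Request(url='https://www.czone.com.pk/mouse-pakistan-ppt.95.aspx', callback=self.parse)

    def parse(self, response):  # renamed from "reponse" so it matches the name used in the body
        page1_prices = response.css(myspider.price_loc).extract()
        page1_names = response.xpath(myspider.name_loc).extract()
        for price, name in zip(page1_prices, page1_names):
            mice_names.append(name)
            mice_prices.append(price)
        # Follow the pagination links and parse them with parse_pages
        links = response.css('a.PageNumber::attr(href)').extract()
        for link in links:
            yield response.follow(url=link, callback=self.parse_pages)

    def parse_pages(self, response):
        page_prices = response.css(myspider.price_loc).extract()
        page_names = response.xpath(myspider.name_loc).extract()
        for price, name in zip(page_prices, page_names):
            mice_names.append(name)
            mice_prices.append(price)

process = CrawlerProcess()
process.crawl(myspider)
process.start()
print(mice_names)
print(mice_prices)

Whether those selectors still match the site's current markup is a separate question, but with these two fixes the request at least goes to the intended URL and the parse callback no longer refers to an undefined name.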