Scrapy crawl not crawling any URLs
This is my first spider. When I run it from cmd, the log shows that no URLs are crawled and there are no DEBUG messages at all.
I couldn't find a solution to this anywhere and can't figure out what went wrong. Can someone help me with this?
My code:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes_spider"

    def start_request(self):
        urls = [
            "http://quotes.toscrape.com/page/1/",
            "http://quotes.toscrape.com/page/2/",
            "http://quotes.toscrape.com/page/3/",
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
Log:
2021-06-19 23:19:01 [scrapy.utils.log] INFO: Scrapy 2.5.0 started (bot: my_scrapy)
2021-06-19 23:19:01 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.9.5, cssselect
1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 21.2.0, Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5
2020, 15:34:40) [MSC v.1927 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1k 25 Mar 2021),
cryptography 3.4.7, Platform Windows-10-10.0.19041-SP0
2021-06-19 23:19:01 [scrapy.utils.log] DEBUG: Using reactor:
twisted.internet.selectreactor.SelectReactor
2021-06-19 23:19:01 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'my_scrapy',
'NEWSPIDER_MODULE': 'my_scrapy.spiders',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['my_scrapy.spiders']}
2021-06-19 23:19:01 [scrapy.extensions.telnet] INFO: Telnet Password: 1a9440bbf933d074
2021-06-19 23:19:01 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2021-06-19 23:19:02 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-06-19 23:19:02 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-06-19 23:19:02 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-06-19 23:19:02 [scrapy.core.engine] INFO: Spider opened
2021-06-19 23:19:02 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min),
scraped 0 items (at 0 items/min)
2021-06-19 23:19:02 [scrapy.extensions.telnet] INFO: Telnet console listening on
127.0.0.1:6023
2021-06-19 23:19:02 [scrapy.core.engine] INFO: Closing spider (finished)
2021-06-19 23:19:02 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.008228,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2021, 6, 19, 17, 49, 2, 99933),
'log_count/INFO': 10,
'start_time': datetime.datetime(2021, 6, 19, 17, 49, 2, 91705)}
2021-06-19 23:19:02 [scrapy.core.engine] INFO: Spider closed (finished)
Note: Since I don't have the 50 reputation needed to comment, I'm answering here.
The problem is the method name: it should be def start_requests(self), not def start_request(self).
The first requests to perform are obtained by calling the start_requests() method, which (by default) generates the requests for your URLs. In your case, however, Scrapy never enters your function because of the misspelling, so no request is ever made for those URLs.
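For context, the base Spider's start_requests() behaves roughly like the following (a simplified sketch of Scrapy's default; the real method also contains deprecation handling):

    def start_requests(self):
        # Default behaviour: one request per URL in self.start_urls,
        # dispatched to self.parse as the implicit callback.
        for url in self.start_urls:
            yield scrapy.Request(url, dont_filter=True)

Because your spider defines start_request (no trailing s), it does not override this method, and since start_urls is also empty, the default produces zero requests and the spider closes immediately, which matches the log above (Crawled 0 pages, no DEBUG lines).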
Your code with the small change:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes_spider"

    def start_requests(self):
        urls = [
            "http://quotes.toscrape.com/page/1/",
            "http://quotes.toscrape.com/page/2/",
            "http://quotes.toscrape.com/page/3/",
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
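As an aside, once the typo is fixed you don't strictly need to override start_requests() here at all: a minimal sketch of the same spider can declare the URLs in the start_urls class attribute and let the default implementation generate the requests:

    import scrapy

    class QuotesSpider(scrapy.Spider):
        name = "quotes_spider"
        # The default start_requests() yields a Request for each of these,
        # using self.parse as the callback.
        start_urls = [
            "http://quotes.toscrape.com/page/1/",
            "http://quotes.toscrape.com/page/2/",
            "http://quotes.toscrape.com/page/3/",
        ]

        def parse(self, response):
            # Save each page's HTML locally, e.g. quotes-1.html.
            page = response.url.split("/")[-2]
            filename = 'quotes-%s.html' % page
            with open(filename, 'wb') as f:
                f.write(response.body)
            self.log('Saved file %s' % filename)

Either way, run it from the project directory with scrapy crawl quotes_spider.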