Scrapy: how to yield a request from a 3rd-party (helper) function?
My code looks like this:
def parse(self, response):
    param = {}
    self.send_request(self, param)

def send_request(self, param):
    url = "www.sample.com/auto/"
    yield FormRequest(url, callback=self.parse_auto, formdata=param, method="POST")

def parse_auto(self, response):
    ...
Why doesn't the yield work in this code? I want to reuse send_request in other parts of the spider.
Log:
2017-02-26 23:43:16 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-02-26 23:43:16 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-02-26 23:43:16 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-02-26 23:43:16 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-02-26 23:43:16 [scrapy.core.engine] INFO: Spider opened
2017-02-26 23:43:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-02-26 23:43:16 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-02-26 23:43:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.compreseguros.com/> (referer: None)
2017-02-26 23:43:18 [scrapy.core.engine] INFO: Closing spider (finished)
2017-02-26 23:43:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 291,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 7561,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 2, 26, 15, 43, 18, 32000),
'log_count/DEBUG': 2,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 2, 26, 15, 43, 16, 355000)}
2017-02-26 23:43:18 [scrapy.core.engine] INFO: Spider closed (finished)
Here is the resulting log. Look at the send_request function body; it has been changed.
Debug the spider yourself. Use Python's logging module to record some useful messages:
import logging

from scrapy.http import FormRequest

def parse(self, response):
    logging.info("I am called from parse method")
    param = {}
    url = "www.sample.com/auto/"
    yield FormRequest(url, callback=self.parse_auto, formdata=param, method="POST")

def parse_auto(self, response):
    logging.info("I am called from parse_auto method")
Do this and update your question with the new Scrapy log. I am sure your code never enters the send_request method.
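The reason is basic Python generator semantics: a function whose body contains yield returns a generator object when called, and its body does not run until that generator is iterated. Since parse neither iterates the result nor returns it to Scrapy, the FormRequest is never even created. A minimal sketch of this behavior (the names here are hypothetical, not part of your spider):

def make_requests():
    # this line runs only once the generator is iterated
    print("generator body entered")
    yield "pretend-request"

gen = make_requests()   # prints nothing: the body has not run yet
item = next(gen)        # now "generator body entered" is printed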
Your code has url = "www.sample.com/auto/", so make sure the domain www.sample.com is listed in your allowed_domains variable; otherwise Scrapy's OffsiteMiddleware will silently filter the request.
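A sketch of what that could look like (the spider and domain names are assumptions based on your sample URL):

import scrapy

class AutoSpider(scrapy.Spider):
    name = "auto_spider"
    # allowed_domains takes bare domain names, not URLs with paths;
    # OffsiteMiddleware drops requests to any domain not listed here
    allowed_domains = ["www.sample.com"]
    start_urls = ["http://www.sample.com/"]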
Here is the complete working code. All you need is to return the call to the method from parse instead of just calling it:
# -*- coding: utf-8 -*-
import scrapy, logging
from scrapy.http import FormRequest, Request


class Test1SpiderSpider(scrapy.Spider):
    name = "test1_spider"
    start_urls = ["http://whosebug.com"]

    def parse(self, response):
        param = {}
        # send_request is a generator function; returning its generator
        # hands the yielded requests back to Scrapy's engine
        return self.send_request(param)

    def send_request(self, param):
        logging.info("send_request is called")
        url = "http://whosebug.com"
        yield FormRequest(url, callback=self.parse_auto, formdata=param, method="POST")

    def parse_auto(self, response):
        logging.info("parse_auto is called")