Issue getting past an html form with Scrapy
The url I'm trying to scrape: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched
There are 3 pages in total: the first selects a term, the second selects a subject, and the last contains the actual course information.
The problem I'm running into is that once subject() triggers the courses() callback, the response.body written to file is the html of the subject page rather than the courses page. How can I tell whether I'm sending the right form data so that I get the right response back?
# term():
# Selects the school term to use. Clicks submit.
def term(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"p_term": "201705"},
        clickdata={"type": "submit"},
        callback=self.subject
    )

# subject():
# Selects the subject to query. Clicks submit.
def subject(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        formdata={"sel_subj": "ART"},
        clickdata={"type": "submit"},
        callback=self.courses
    )

# courses():
# Currently just saves all the html on the page.
def courses(self, response):
    page = response.url.split("/")[-1]
    filename = 'uvic-%s.html' % page
    with open(filename, 'wb') as f:
        f.write(response.body)
    self.log('Saved file %s' % filename)
Debug output
2017-04-02 01:15:28 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: scrapy4uvic)
2017-04-02 01:15:28 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scrapy4uvic.spiders', 'SPIDER_MODULES': ['scrapy4uvic.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'scrapy4uvic'}
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-02 01:15:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-02 01:15:28 [scrapy.core.engine] INFO: Spider opened
2017-04-02 01:15:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-02 01:15:28 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/robots.txt> (referer: None)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched> (referer: None)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date> (referer: https://www.uvic.ca/BAN1P/bwckschd.p_disp_dyn_sched)
2017-04-02 01:15:29 [scrapy.core.engine] DEBUG: Crawled (200) <POST https://www.uvic.ca/BAN1P/bwckschd.p_get_crse_unsec> (referer: https://www.uvic.ca/BAN1P/bwckgens.p_proc_term_date)
2017-04-02 01:15:30 [uvic] DEBUG: Saved file uvic-bwckschd.p_get_crse_unsec.html
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-02 01:15:30 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 2335,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 2,
'downloader/request_method_count/POST': 2,
'downloader/response_bytes': 105499,
'downloader/response_count': 4,
'downloader/response_status_count/200': 4,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 4, 2, 8, 15, 30, 103536),
'log_count/DEBUG': 6,
'log_count/INFO': 7,
'request_depth_max': 2,
'response_received_count': 4,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2017, 4, 2, 8, 15, 28, 987034)}
2017-04-02 01:15:30 [scrapy.core.engine] INFO: Spider closed (finished)
It looks like you're missing some form data in subject().
I managed to get it working with:
formdata={
    "sel_subj": ["dummy", "ART"],
}
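For reference, here is the subject() callback from the question with just that one change applied. Nothing else is altered; this is a minimal sketch of the fix, not a rework:

def subject(self, response):
    return scrapy.FormRequest.from_response(
        response,
        formxpath="/html/body/div[3]/form",
        # Banner expects the hidden "dummy" entry *and* the chosen subject,
        # so sel_subj must be posted twice; passing a list does that in Scrapy.
        formdata={"sel_subj": ["dummy", "ART"]},
        clickdata={"type": "submit"},
        callback=self.courses
    )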
How I debugged it.
First, you don't have to save to a file; you can call inspect_response during the crawl:
def courses(self, response):
    from scrapy.shell import inspect_response
    inspect_response(response, self)
This opens a shell with the response and request objects in scope, and you can even call view(response) to open the html in a browser. It will also use the ipython or bpython shells if they are available; in the examples below I use ipython for its convenient formatting.
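For example, a few things worth running once that shell opens (response, request, and view are the names Scrapy itself binds in the shell):

view(response)    # open the html scrapy received in your browser
request.body      # the exact form body scrapy posted
request.headers   # the headers that went with it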
Second, I checked what form body my browser (firefox) sends when I click the button, copied it into the shell under the variable bar, and compared it with the request body sent by scrapy:
bar = '''term_in=201705&sel_subj=dummy&sel_day=dummy&sel_schd=dummy&sel_insm=dummy&
sel_camp=dummy&sel_levl=dummy
&sel_sess=dummy&sel_instr=dummy&sel_ptrm=dummy&sel_attr=dummy&sel_subj=ART&sel_crse
=&sel_title=&sel_schd
=%25&sel_insm=%25&sel_from_cred=&sel_to_cred=&sel_camp=%25&sel_levl=%25&sel_ptrm=%2
5&sel_instr=%25&begin_hh
=0&begin_mi=0&begin_ap=a&end_hh=0&end_mi=0&end_ap=a'''
# split into arguments
bar = sorted(bar.split('&'))
# do the same with the request body that was sent by scrapy
foo = sorted(request.body.split('&'))
# now join these together and find the differences!
zip(foo, bar)
[('begin_ap=a', 'begin_ap=a'),
('begin_hh=0', 'begin_hh\n=0'),
('begin_mi=0', 'begin_mi=0'),
('end_ap=a', 'end_ap=a'),
('end_hh=0', 'end_hh=0'),
('end_mi=0', 'end_mi=0'),
('sel_attr=dummy', 'sel_attr=dummy'),
('sel_camp=%25', 'sel_camp=%25'),
('sel_camp=dummy', 'sel_camp=dummy'),
('sel_crse=', 'sel_crse='),
('sel_day=dummy', 'sel_day=dummy'),
('sel_from_cred=', 'sel_from_cred='),
('sel_insm=%25', 'sel_insm=%25'),
('sel_insm=dummy', 'sel_insm=dummy'),
('sel_instr=%25', 'sel_instr=%25'),
('sel_instr=dummy', 'sel_instr=dummy'),
('sel_levl=%25', 'sel_levl=%25'),
('sel_levl=dummy', 'sel_levl=dummy\n'),
('sel_ptrm=%25', 'sel_ptrm=%25'),
('sel_ptrm=dummy', 'sel_ptrm=dummy'),
('sel_schd=%25', 'sel_schd\n=%25'),
('sel_schd=dummy', 'sel_schd=dummy'),
('sel_sess=dummy', 'sel_sess=dummy'),
('sel_subj=ART', 'sel_subj=ART'),
('sel_title=', 'sel_subj=dummy'),
('sel_to_cred=', 'sel_title='),
('term_in=201705', 'sel_to_cred=')]
As you can see, you were missing the "dummy" value in sel_subj, and 'term_in' shows up where it shouldn't, though that seems to have no effect :)
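If you'd rather not eyeball a zipped list, a small helper along these lines can diff two form bodies directly. This is a sketch that was not part of the session above; parse_qsl (from Python 3's standard library) handles repeated keys like sel_subj:

from collections import Counter
from urllib.parse import parse_qsl

def diff_form_bodies(sent, expected):
    # Diff two urlencoded bodies as multisets of (key, value) pairs,
    # so a key posted twice (like sel_subj) is counted correctly.
    a = Counter(parse_qsl(sent, keep_blank_values=True))
    b = Counter(parse_qsl(expected, keep_blank_values=True))
    missing = sorted((b - a).elements())  # pairs the browser sent but scrapy didn't
    extra = sorted((a - b).elements())    # pairs scrapy sent but the browser didn't
    return missing, extra

# Tiny illustration with made-up bodies, not the full Banner form:
sent = "sel_subj=ART&sel_title=&term_in=201705"
expected = "sel_subj=dummy&sel_subj=ART&sel_title="
print(diff_form_bodies(sent, expected))
# ([('sel_subj', 'dummy')], [('term_in', '201705')])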