Why does Scrapy return an iframe?
I want to crawl this site with Python-Scrapy.
I tried this:
import scrapy

class Parik(scrapy.Spider):
    name = "ooshop"
    # allowed_domains expects domain names, not full URLs
    allowed_domains = ["www.ooshop.com"]

    def __init__(self, idcrawl=None, proxy=None, *args, **kwargs):
        super(Parik, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.ooshop.com/courses-en-ligne/Home.aspx']

    def parse(self, response):
        print(response.css('body').extract_first())
But instead of the first page, I get an empty iframe:
2016-09-06 19:09:24 [scrapy] DEBUG: Crawled (200) <GET http://www.ooshop.com/courses-en-ligne/Home.aspx> (referer: None)
<body>
<iframe style="display:none;visibility:hidden;" src="//content.incapsula.com/jsTest.html" id="gaIframe"></iframe>
</body>
2016-09-06 19:09:24 [scrapy] INFO: Closing spider (finished)
The site is protected by Incapsula, a web security service. It serves your "browser" a challenge that must be completed before it hands out a special cookie granting access to the site itself.
Fortunately, it is not hard to bypass. Install incapsula-cracker and enable its downloader middleware in your settings:
DOWNLOADER_MIDDLEWARES = {
'incapsula.IncapsulaMiddleware': 900
}
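With the middleware enabled, it can still be useful to sanity-check whether a given response is the real page or the Incapsula challenge. A minimal standard-library sketch, using the telltale hidden iframe from the log above (the `IncapsulaDetector` class name is illustrative, not part of any library):

```python
# Minimal sketch: detect an Incapsula challenge page by its hidden iframe
# pointing at content.incapsula.com, using only the standard library.
from html.parser import HTMLParser

class IncapsulaDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.is_challenge = False

    def handle_starttag(self, tag, attrs):
        # The challenge page embeds <iframe src="//content.incapsula.com/...">
        if tag == "iframe":
            src = dict(attrs).get("src") or ""
            if "content.incapsula.com" in src:
                self.is_challenge = True

# The body returned in the crawl log above
body = '''<body>
<iframe style="display:none;visibility:hidden;" src="//content.incapsula.com/jsTest.html" id="gaIframe"></iframe>
</body>'''

detector = IncapsulaDetector()
detector.feed(body)
print(detector.is_challenge)  # True for the challenge page shown above
```

If this flags a response inside your spider's `parse`, the cookie handshake has not succeeded yet and the request should be retried.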