Scrapy - Getting the data from the filter box (Python)
I'm having a problem with Scrapy. I need to get all the city names from the red-circled part of the image I linked below, but my code returns nothing. I've tried many selectors without success. How can I fix this and get those city names? The image link and source code are below.
import scrapy
from scrapy.spiders import CrawlSpider
#from city_crawl.items import CityCrawlItem

class details(CrawlSpider):
    name = "city_crawling"
    start_urls = ['https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0']

    def parse(self, response):
        for content in response.xpath('//a[contains(@data-name, "uf")]'):
            yield {
                'text': content.css('span.filter_label::text').extract()
            }
Image of the page whose data I need to parse; the red-circled part on the left is what I need to get.
Your for loop was selecting <a> elements whose class contains "uf", which returns nothing. If you instead select the elements whose data-name attribute contains "uf", you can change your code like this:
def parse(self, response):
    for content in response.xpath('//a[contains(@data-name, "uf")]'):
        yield {
            'text': content.css('span.filter_label::text').extract()
        }
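If you prefer CSS selectors, the same substring match on data-name can be written with the *= attribute operator; a minimal equivalent sketch (this variant is mine, not from the original answer):

def parse(self, response):
    # *= matches attributes containing the substring "uf",
    # just like the XPath contains() above
    for content in response.css('a[data-name*="uf"]'):
        yield {
            'text': content.css('span.filter_label::text').extract()
        }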
Update:
I have tested your URL and you are right: it returns nothing. The root cause is that Scrapy gets redirected three times and finally lands on the wrong page, "https://www.booking.com/country/se.tr.html", which is not the page in your image. The log is as follows:
2017-04-30 15:18:47 [scrapy] DEBUG: Redirecting (301) to <GET https://www.booking.com/searchresults.tr.html?ss=isve%25C3%25A7> from <GET https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0>
2017-04-30 15:18:48 [scrapy] DEBUG: Redirecting (301) to <GET https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7> from <GET https://www.booking.com/searchresults.tr.html?ss=isve%25C3%25A7>
2017-04-30 15:18:48 [scrapy] DEBUG: Redirecting (302) to <GET https://www.booking.com/country/se.tr.html> from <GET https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7>
2017-04-30 15:18:49 [scrapy] DEBUG: Crawled (200) <GET https://www.booking.com/country/se.tr.html> (referer: None)
2017-04-30 15:18:49 [scrapy] INFO: Closing spider (finished)
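If you want to see where each hop goes instead of letting Scrapy follow the redirects silently, you can turn off redirect handling per request with the standard dont_redirect / handle_httpstatus_list meta keys. A minimal debugging sketch (the spider name and log format are mine, not from the original answer):

import scrapy

class RedirectDebugSpider(scrapy.Spider):
    name = "redirect_debug"

    def start_requests(self):
        # the intermediate URL from the log above
        url = 'https://www.booking.com/searchresults.tr.html?ss=isve%C3%A7'
        # dont_redirect disables RedirectMiddleware for this request;
        # handle_httpstatus_list lets parse() receive the raw 301/302
        yield scrapy.Request(
            url,
            meta={'dont_redirect': True, 'handle_httpstatus_list': [301, 302]},
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info('%s %s -> Location: %s',
                         response.status, response.url,
                         response.headers.get('Location'))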
Solution:
You can try what I did: save the HTML page to your local PC as "Booking.html", then change your code to:
import scrapy

class CitiesSpider(scrapy.Spider):
    name = "city_crawling"
    start_urls = [
        'file:///F:/algorithm%20study/python/Whosebug/Booking.html',  # put the saved html file directory here
        # 'https://www.booking.com/searchresults.tr.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=cfc09bd0db4d07c7b55902c6d0ae81a5&track_lsso=1&sb=1&src=index&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Findex.tr.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaOQBiAEBmAEowgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3Dcfc09bd0db4d07c7b55902c6d0ae81a5%3Bsb_price_type%3Dtotal%26%3B&ss=isve%C3%A7&checkin_monthday=&checkin_month=&checkin_year=&checkout_monthday=&checkout_month=&checkout_year=&room1=A%2CA&no_rooms=1&group_adults=2&group_children=0',
    ]

    def parse(self, response):
        #self.logger.info('A response from %s just arrived!', response.url)
        for content in response.xpath('//a[contains(@data-name, "uf")]'):
            #self.logger.info('TEST %s TEST', content.css('span.filter_label::text').extract())
            yield {
                'text': content.css('span.filter_label::text').extract()
            }
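As the log below shows, each extracted label still carries the surrounding '\n' characters. If you want clean city names, a small variant of the same parse method strips them (extract_first() with a default is standard parsel; everything else mirrors the answer's code):

    def parse(self, response):
        for content in response.xpath('//a[contains(@data-name, "uf")]'):
            # take the first matching label and drop the surrounding whitespace
            city = content.css('span.filter_label::text').extract_first(default='')
            yield {'text': city.strip()}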
Run the crawl command in your Scrapy project: scrapy crawl city_crawling. It will start crawling the information you want; check the log and output below:
2017-04-30 15:33:31 [scrapy] DEBUG: Crawled (200) <GET file:///F:/algorithm%20study/python/Whosebug/Booking.html> (referer: None)
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nStockholm\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nG\xf6teborg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nVisby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nFalkenberg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nMalm\xf6\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nLysekil\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nSimrishamn\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nLund\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nK\xf6pingsvik\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nBorgholm\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nJ\xf6nk\xf6ping\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nUppsala\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nF\xe4rjestaden\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nHelsingborg\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nRonneby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nYstad\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nHalmstad\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nKivik\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nBorrby\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nFj\xe4llbacka\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nKarlskrona\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nGr\xe4nna\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nL\xf6ttorp\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\nNorrk\xf6ping\n']}
2017-04-30 15:33:31 [scrapy] DEBUG: Scraped from <200 file:///F:/algorithm%20study/python/Whosebug/Booking.html>
{'text': [u'\n\xd6rebro\n']}
2017-04-30 15:33:31 [scrapy] INFO: Closing spider (finished)
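If you would rather collect the items in a file than read them out of the debug log, Scrapy's built-in feed export works with the same command, for example:

scrapy crawl city_crawling -o cities.json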
def parse(self, response):
    for content in response.xpath('//a[contains(@class, "uf")]'):
        yield {
            'text': content.css('span.filter_label::text').extract(),
        }

You need to keep the comma at the end of "'text': content.css('span.filter_label::text').extract()".
def parse(self, response):
    for content in response.css('a[data-name=uf]'):
        yield {
            'text': content.css('span.filter_label::text').extract(),
        }
Just checked it; it works.
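A quick way to verify any of these selectors before running a full crawl is scrapy shell against the saved page (the file path here is the example one from the answer above):

scrapy shell file:///F:/algorithm%20study/python/Whosebug/Booking.html
>>> response.css('a[data-name=uf] span.filter_label::text').extract()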