Scrapy: collect data from first element and post's title
I need Scrapy to collect the data from this tag and retrieve all three parts in one piece. The output would be something like:
Tonka double shock boys bike - (Denver).
<span class="postingtitletext">Tonka double shock boys bike - <span class="price"></span><small> (Denver)</small></span>
Second, I need to collect the data from the first span tag, so the result would only be:
2016 2004 Pontiac Grand Prix gt.
<p class="attrgroup"><span><b>2016 2004 Pontiac Grand Prix gt</b></span> <span>odometer: <b>164</b></span> <span>fuel : <b>gas</b></span> <span>transmission : <b>automatic</b></span> <span>title status : <b>clean</b></span></p>
Here is my code so far:
# -*- coding: utf-8 -*-
# scrapy crawl dmoz -o items.csv -t csv
import re

import scrapy
from scrapy.http import Request


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://jxn.craigslist.org/search/cto?"
    ]

    BASE_URL = 'http://jxn.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/nos/vgm/" + item_id

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            item["tag"] = response.xpath("//p[@class='attrgroup']/span/b/text()").extract()

            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item
For the posting title, get all the text nodes from the span tag and join them:
$ scrapy shell http://denver.craigslist.org/bik/5042090428.html
In [1]: "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
Out[1]: u'Tonka double shock boys bike - (Denver)'
Note that the "Scrapy way" to do this would be to use an ItemLoader with the Join() processor.
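To illustrate what that processor does, here is a pure-Python stand-in (not Scrapy's actual class): Join(separator) simply concatenates the list of extracted strings with the separator, which defaults to a single space.

```python
# Pure-Python stand-in illustrating scrapy.loader.processors.Join:
# it joins the list of extracted text nodes with a separator.
def join_processor(values, separator=u' '):
    return separator.join(values)

# Text nodes as returned by
# response.xpath("//span[@class='postingtitletext']//text()").extract()
title_parts = [u'Tonka double shock boys bike -', u' (Denver)']
print(join_processor(title_parts, separator=u''))
```

In a real spider you would declare the processor on the loader, e.g. `title_out = Join('')`, instead of calling `"".join()` by hand in the callback.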
The second task is to collect data from the first span tag.
Since you didn't provide sample input data, here is an educated guess:
response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0]
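One caveat with that indexing: extract() returns a list, so [0] raises IndexError when the XPath matches nothing. A small defensive sketch of the list handling (newer Scrapy exposes extract_first() for exactly this):

```python
def first_or_default(extracted, default=None):
    # extract() returns a list of strings; guard against an empty
    # match the way Selector.extract_first() does.
    return extracted[0] if extracted else default

# Simulated extract() results for //p[@class='attrgroup']/span/b/text()
print(first_or_default([u'2016 2004 Pontiac Grand Prix gt', u'164', u'gas']))
print(first_or_default([]))  # no match -> None
```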