I want Scrapy to run through each item once

I want Scrapy to run through each item once so that the related data stays grouped together. Instead, it lumps all of the links, headers, dates, etc. together, and it also writes everything to the file more than once. I'm new to both Scrapy and Python, so any advice would be appreciated.

Here is my spider code:

from scrapy.spiders import Spider
from scrapy.selector import Selector
from fashioBlog.functions import extract_data
from fashioBlog.items import Fashioblog


class firstSpider(Spider):
    name = "first"
    allowed_domains = ["stopitrightnow.com"]
    start_urls = ["http://www.stopitrightnow.com"]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[@class="post-outer"]')

        for site in sites:
            item = Fashioblog()
            item['title'] = extract_data(site.xpath('//h3[normalize-space(@class)="post-title entry-title"]//text()').extract())
            item['url'] = extract_data(site.xpath('//div[normalize-space(@class)="post-body entry-content"]//@href').extract())
            item['date'] = extract_data(site.xpath('//h2[normalize-space(@class)="date-header"]/span/text()').extract())
            #item['body'] = site.xpath('//div[@class="post-body entry-content"]/i/text()').extract()
            item['labelLink'] = extract_data(site.xpath('//span[normalize-space(@class)="post-labels"]//@href').extract())
            item['comment'] = extract_data(site.xpath('//span[normalize-space(@class)="post-comment-link"]//text()').extract())
            item['picUrl'] = extract_data(site.xpath('//div[normalize-space(@class)="separator"]//@href').extract())
            #item['labelText'] = extract_data(site.xpath('(//i//text()').extract())
            #item['labelLink2'] = extract_data(site.xpath('(//i//@href').extract())
            yield item

Make your expressions context-specific by prepending a dot. Without the dot, an XPath that starts with // searches the entire document rather than the current site node, so every item collects the same combined data from the whole page:

item['title'] = extract_data(site.xpath('.//h3[normalize-space(@class)="post-title entry-title"]//text()').extract())
                                         ^ HERE
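To illustrate the difference, here is a minimal sketch using the standard library's xml.etree module in place of Scrapy's Selector (the post/title element names are made up for illustration; in ElementTree, a path starting with ./ is always evaluated relative to the node it is called on, which is the behavior the leading dot gives you in Scrapy):

```python
import xml.etree.ElementTree as ET

# A tiny stand-in document with two "posts", mimicking the
# repeated post-outer divs on the real page.
doc = ET.fromstring(
    "<root>"
    "<post><title>First</title></post>"
    "<post><title>Second</title></post>"
    "</root>"
)

posts = doc.findall(".//post")

# Because './/' is relative to each post node, every iteration
# sees only that post's own title, not every title in the document.
titles = [p.find(".//title").text for p in posts]
print(titles)  # ['First', 'Second']
```

With Scrapy's selectors the same rule applies: site.xpath('.//h3/...') restricts the search to the matched post-outer div, so each yielded item carries only that post's data.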