Stuck Scraping Multiple Domains sequentially - Python Scrapy

I'm fairly new to both Python and web scraping. My first project scrapes a handful of Craigslist cities (5 in total) under the transportation section (i.e. https://dallas.craigslist.org), but right now I have to run the script separately for each city, manually editing the constants in the script (start_urls = and absolute_next_url =) to point at each city's respective domain. Is there a way to tweak the script so it runs sequentially through the cities I define (i.e. miami, new york, houston, chicago, etc.) and auto-populates the constants (start_urls = and absolute_next_url =) with each respective city?

Also, is there a way to tweak the script so each city's output goes to its own .csv (i.e. miami.csv, houston.csv, chicago.csv, etc.)?

Thanks in advance.

# -*- coding: utf-8 -*-
import scrapy
from scrapy import Request

class JobsSpider(scrapy.Spider):
    name = "jobs"
    allowed_domains = ["craigslist.org"]
    start_urls = ['https://dallas.craigslist.org/d/transportation/search/trp']

    def parse(self, response):
        jobs = response.xpath('//p[@class="result-info"]')

        for job in jobs:
            listing_title = job.xpath('a/text()').extract_first()
            city = job.xpath('span[@class="result-meta"]/span[@class="result-hood"]/text()').extract_first("")[2:-1]
            job_posting_date = job.xpath('time/@datetime').extract_first()
            job_posting_url = job.xpath('a/@href').extract_first()
            data_id = job.xpath('a/@data-id').extract_first()

            yield Request(job_posting_url, callback=self.parse_page,
                          meta={'job_posting_url': job_posting_url, 'listing_title': listing_title,
                                'city': city, 'job_posting_date': job_posting_date, 'data_id': data_id})

        relative_next_url = response.xpath('//a[@class="button next"]/@href').extract_first()
        if relative_next_url:
            # guard: the last results page has no "next" link
            absolute_next_url = "https://dallas.craigslist.org" + relative_next_url
            yield Request(absolute_next_url, callback=self.parse)

    def parse_page(self, response):
        job_posting_url = response.meta.get('job_posting_url')
        listing_title = response.meta.get('listing_title')
        city = response.meta.get('city')
        job_posting_date = response.meta.get('job_posting_date')
        data_id = response.meta.get('data_id')

        description = "".join(line for line in response.xpath('//*[@id="postingbody"]/text()').extract()).strip()

        compensation = response.xpath('//p[@class="attrgroup"]/span[1]/b/text()').extract_first()
        employment_type = response.xpath('//p[@class="attrgroup"]/span[2]/b/text()').extract_first()
        latitude = response.xpath('//div/@data-latitude').extract_first()
        longitude = response.xpath('//div/@data-longitude').extract_first()
        posting_id = response.xpath('//p[@class="postinginfo"]/text()').extract()

        yield {
            'job_posting_url': job_posting_url,
            'data_id': data_id,
            'listing_title': listing_title,
            'city': city,
            'description': description,
            'compensation': compensation,
            'employment_type': employment_type,
            'latitude': latitude,
            'longitude': longitude,
            'job_posting_date': job_posting_date,
            'posting_id': posting_id,
        }

There may be a cleaner way, but take a look at https://docs.scrapy.org/en/latest/topics/practices.html?highlight=multiple%20spiders; you can basically group multiple instances of the spider together, so you could create a separate 'class' for each city. There are probably ways to consolidate some of the code so it isn't all duplicated.
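To make that concrete, here is a minimal, untested sketch of the idea: rather than one class per city, the spider takes the city as a constructor argument and builds its URLs from it, and the CrawlerRunner pattern from the docs page above chains the crawls so the cities run one after another. The CITIES list and the city argument are illustrative names, not part of the original script.

import scrapy
from scrapy import Request
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from twisted.internet import reactor, defer

CITIES = ["miami", "newyork", "houston", "chicago", "dallas"]  # illustrative list

class JobsSpider(scrapy.Spider):
    name = "jobs"
    allowed_domains = ["craigslist.org"]

    def __init__(self, city="dallas", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.city = city
        # start_urls is derived from the city, so nothing is hardcoded
        self.start_urls = ["https://%s.craigslist.org/d/transportation/search/trp" % city]

    def parse(self, response):
        # ... same listing/detail parsing as in the original spider ...
        relative_next_url = response.xpath('//a[@class="button next"]/@href').extract_first()
        if relative_next_url:
            # urljoin resolves the link against the current city's domain,
            # replacing the hardcoded absolute_next_url
            yield Request(response.urljoin(relative_next_url), callback=self.parse)

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    # chain the crawls so each city finishes before the next one starts
    for city in CITIES:
        yield runner.crawl(JobsSpider, city=city)
    reactor.stop()

crawl()
reactor.run()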

As for writing to csv, are you doing that through the command line right now? I would add the code to the spider itself: https://realpython.com/python-csv/
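If you'd rather not hand-roll the csv writing, a Scrapy-native alternative (swapping feed exports in for the csv-module approach from that link) is to let the feed URI name the output file after the spider's city attribute, since %(...)s placeholders in feed URIs are filled from spider attributes of the same name. A minimal sketch, assuming the parameterized spider above:

import scrapy

class JobsSpider(scrapy.Spider):
    name = "jobs"
    custom_settings = {
        # %(city)s is filled in from the spider's `city` attribute at
        # export time, so each run writes miami.csv, houston.csv, etc.
        "FEED_URI": "%(city)s.csv",
        "FEED_FORMAT": "csv",
        # on Scrapy 2.1+ the equivalent is:
        # "FEEDS": {"%(city)s.csv": {"format": "csv"}},
    }

Each crawl then exports its items to its own file without any extra code in the callbacks.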