Scrapy Spider not Following Links

I am writing a Scrapy spider to scrape today's New York Times articles from the homepage, but for some reason it isn't following any links. When I instantiate the link extractor in scrapy shell http://www.nytimes.com, it successfully extracts a list of article URLs with le.extract_links(response), but I can't get my crawl command (scrapy crawl nyt -o out.json) to scrape anything other than the homepage. I'm at my wits' end. Is it because the homepage does not yield an article item from the parse function? Any help is greatly appreciated.
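For reference, the shell session where extraction does work looks roughly like this (le is just my local name for the extractor, and the allow pattern is the same one the spider below uses):

$ scrapy shell http://www.nytimes.com
>>> from datetime import date
>>> from scrapy.contrib.linkextractors import LinkExtractor
>>> today = date.today().strftime('%Y/%m/%d')
>>> le = LinkExtractor(allow=(r'/%s/[a-z]+/.*\.html' % today, ))
>>> le.extract_links(response)  # returns a non-empty list of Link objects

And here is the spider: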

from datetime import date                                                       

import scrapy                                                                   
from scrapy.contrib.spiders import Rule                                         
from scrapy.contrib.linkextractors import LinkExtractor                         


from ..items import NewsArticle                                                 

with open('urls/debug/nyt.txt') as debug_urls:                                  
    debug_urls = debug_urls.readlines()                                         

with open('urls/release/nyt.txt') as release_urls:                              
    release_urls = release_urls.readlines() # ["http://www.nytimes.com"]                                 

today = date.today().strftime('%Y/%m/%d')                                       
print today                                                                     


class NytSpider(scrapy.Spider):                                                 
    name = "nyt"                                                                
    allowed_domains = ["nytimes.com"]                                           
    start_urls = release_urls                                                      
    rules = (                                                                      
            Rule(LinkExtractor(allow=(r'/%s/[a-z]+/.*\.html' % today, )),          
                 callback='parse', follow=True),                                   
    )                                                                              

    def parse(self, response):                                                     
        article = NewsArticle()                                                                         
        for story in response.xpath('//article[@id="story"]'):                     
            article['url'] = response.url                                          
            article['title'] = story.xpath(                                        
                    '//h1[@id="story-heading"]/text()').extract()                  
            article['author'] = story.xpath(                                       
                    '//span[@class="byline-author"]/@data-byline-name'             
            ).extract()                                                         
            article['published'] = story.xpath(                                 
                    '//time[@class="dateline"]/@datetime').extract()            
            article['content'] = story.xpath(                                   
                    '//div[@id="story-body"]/p//text()').extract()              
            yield article  

I found the solution to my problem. I was doing two things wrong:

  1. If I want the spider to crawl sub-links automatically, I need to subclass CrawlSpider instead of Spider.
  2. When using CrawlSpider, I need to point the rule at a custom callback instead of overriding parse. According to the docs, overriding parse breaks CrawlSpider's functionality (see the sketch below).
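
For completeness, here is roughly what the fixed spider looks like. parse_article is just my name for the renamed callback; the url-list files are dropped in favour of the literal homepage URL for brevity, and on newer Scrapy versions the two contrib imports live in scrapy.spiders and scrapy.linkextractors instead:

from datetime import date

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

from ..items import NewsArticle

today = date.today().strftime('%Y/%m/%d')


class NytSpider(CrawlSpider):  # 1. subclass CrawlSpider, not scrapy.Spider
    name = "nyt"
    allowed_domains = ["nytimes.com"]
    start_urls = ["http://www.nytimes.com"]
    rules = (
            Rule(LinkExtractor(allow=(r'/%s/[a-z]+/.*\.html' % today, )),
                 # 2. a custom callback: CrawlSpider defines its own parse()
                 # to do the link following, so overriding parse disables
                 # the rules entirely
                 callback='parse_article', follow=True),
    )

    def parse_article(self, response):
        # same body as the old parse(); only the name has changed
        article = NewsArticle()
        for story in response.xpath('//article[@id="story"]'):
            article['url'] = response.url
            article['title'] = story.xpath(
                    '//h1[@id="story-heading"]/text()').extract()
            article['author'] = story.xpath(
                    '//span[@class="byline-author"]/@data-byline-name'
            ).extract()
            article['published'] = story.xpath(
                    '//time[@class="dateline"]/@datetime').extract()
            article['content'] = story.xpath(
                    '//div[@id="story-body"]/p//text()').extract()
            yield article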