How to use Rules in Scrapy for following some links?

I want to crawl a website with Python Scrapy and follow all links that contain "catalogue".

I think the clean way to do this is with Scrapy rules, so I tried the spider below, but it does not follow the links:

import io

from scrapy import Request
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from selenium import webdriver


class Houra(CrawlSpider):
    pageNumber = 0
    name = 'houra'
    allowed_domains = ["houra.fr"]
    driver = webdriver.Chrome()
    rules = [
        Rule(LinkExtractor(allow=r'catalogue/'), callback='parse_page', follow=True),
    ]

    def __init__(self, idcrawl=None, iddrive=None, idrobot=None, proxy=None, *args, **kwargs):
        super(Houra, self).__init__(*args, **kwargs)

    def start_requests(self):
        yield Request("http://www.houra.fr", callback=self.parse_page1)

    def parse_page1(self, response):
        # Fill in the postcode form through Selenium and submit it.
        self.driver.get(response.url)
        inputElement = self.driver.find_element_by_css_selector("#CPProspect")
        inputElement.send_keys("75001")
        inputElement.submit()

    def parse_page(self, response):
        # Dump the page body to a numbered file.
        body = response.css('body').extract_first()
        with io.open('./houra/page%s' % str(self.pageNumber), 'w+', encoding='utf-8') as f:
            f.write(body)
        self.pageNumber += 1

restrict_xpaths defines one or more regions of the page in which to look for links. But to filter on the link's href value itself, you need allow:

Rule(LinkExtractor(allow=r'catalogue/'), callback='parse_page', follow=True)
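For context, here is a minimal, self-contained sketch of how such a rule behaves on its own, without the Selenium step. The spider name, start URL handling, and logging callback are illustrative assumptions, not part of the original question:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CatalogueSpider(CrawlSpider):
    # Hypothetical minimal spider: CrawlSpider applies its rules to every
    # response handled by its built-in parse callback, so start_urls alone
    # is enough to kick off the crawl.
    name = 'catalogue_demo'
    allowed_domains = ['houra.fr']
    start_urls = ['http://www.houra.fr']

    rules = [
        # allow matches against the link's URL: only hrefs matching
        # r'catalogue/' are followed and passed to parse_page.
        Rule(LinkExtractor(allow=r'catalogue/'), callback='parse_page', follow=True),
    ]

    def parse_page(self, response):
        # Illustrative callback: just log which catalogue URL was reached.
        self.logger.info('Visited %s', response.url)

One caveat worth noting: CrawlSpider only applies its rules to responses that go through its built-in parse callback, so a request yielded with a custom callback (like parse_page1 in the question) bypasses the rule machinery entirely.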