Scrapy spider crawls infinitely
Task
My spider should crawl every link of the whole domain and recognise whether it is a product link or, for example, a category link, but it should write only product links to items.
I set up a rule that allows URLs containing "a-", because that string appears in every product link.
My if-condition is meant as a simple double check: if a product EAN is listed on the page, it must definitely be a product link.
After that check, the spider should save the link to my items.
Problem
The spider collects all links instead of yielding only the links that contain "a-".
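Note that the allow pattern of a LinkExtractor only controls which links the rule extracts and follows; it does not filter what the callback yields from a matched page. A quick sketch to check what the extractor actually matches (the URLs here are made up for illustration):

from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

html = b'<a href="/a-123-rose">product</a> <a href="/category/flowers">category</a>'
response = HtmlResponse(url='https://www.topart-online.com/', body=html, encoding='utf-8')

# only the URL containing '/a-' is extracted
links = LinkExtractor(allow='/a-').extract_links(response)
print([link.url for link in links])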
Edit: added my code
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from ..items import LinkextractorItem


class TopArtSpider(CrawlSpider):
    name = "topart"
    allow_domains = ['topart-online.com']  # typo: Scrapy expects `allowed_domains`
    start_urls = [
        'https://www.topart-online.com'
    ]
    custom_settings = {'FEED_EXPORT_FIELDS': ['Link']}

    rules = (
        Rule(LinkExtractor(allow='/a-'), callback='parse_filter_item', follow=True),
    )

    def parse_filter_item(self, response):
        # marker div that should exist only on product pages
        exists = response.xpath('.//div[@class="producteant"]').get()
        link = response.xpath('//a/@href')
        if exists:
            # the Request returned by response.follow is never yielded, so it is discarded
            response.follow(url=link.get(), callback=self.parse)
        # this loop is not inside the if-block, so every link on every
        # page matched by the rule is yielded as an item
        for a in link:
            items = LinkextractorItem()
            items['Link'] = a.get()
            yield items
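The likely cause of the problem: the yield loop above is not gated by the if-block, so every page matched by the rule emits all of its links, and the Request created by response.follow is thrown away because it is never yielded. A minimal corrected callback (assuming the "producteant" div really is present only on product pages):

    def parse_filter_item(self, response):
        # only emit an item when the product marker is present
        if response.xpath('.//div[@class="producteant"]').get():
            items = LinkextractorItem()
            # the matched page itself is the product link
            items['Link'] = response.url
            yield items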
The spider can also be trimmed down so that the rule alone does the filtering and the callback only records the URL of each matched page:

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class TopartSpider(CrawlSpider):
    name = 'topart'
    allowed_domains = ['topart-online.com']
    start_urls = ['http://topart-online.com/']

    rules = (
        # only links whose URL contains '/a-' are followed and handed to the callback
        Rule(LinkExtractor(allow=r'/a-'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        return {'Link': response.url}
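With either spider, the collected links can be exported from the command line (the filename is just an example):

scrapy crawl topart -o links.csv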