How to stop a Scrapy spider from yielding within a for loop if a certain condition is met?

I want to get a JSON response from a link using proxies. I collect all the proxies and loop over them, and usually after 2 to 4 attempts I get a valid JSON file; once that condition is met I want to exit.

But even when my condition is met, i.e. after I get a response 200 with valid data and try to close the spider, it keeps running. I have tried sys.exit() and raise CloseSpider(reason), but nothing works for me. Here is my code:

import scrapy
from scrapy.crawler import CrawlerProcess
import json
from scrapy.exceptions import CloseSpider
import sys

class ScrapyProxy(scrapy.Spider):
    name = 'scrapy_proxy'
    start_urls = ['https://free-proxy-list.net']
    
    headers = {
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'accept-encoding': 'gzip, deflate, br',
        'accept-language': 'en-US,en;q=0.9',
        'cache-control': 'no-cache',
        'pragma': 'no-cache',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-site': 'none',
        'sec-fetch-user': '?1',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
    }
    
    def parse(self, response):
        table = response.css('table')
        rows = table.css('tr')
        cols = [row.css('td::text').getall() for row in rows]
        
        proxies = []
        
        for col in cols:
            if col and col[4] == 'elite proxy' and col[6] == 'yes':
                proxies.append('https://' + col[0] + ':' + col[1])
            
        print('proxies:', len(proxies))
        
        for proxy in proxies[0:5]:
            print(proxy)
            
            url = 'https://shopee.com.my/api/v2/search_items/?by=sales&limit=50&match_id=2426&newest=0&order=desc&page_type=search&version=2'
            
            yield scrapy.Request(url, dont_filter=True, headers=self.headers, meta={'proxy': proxy}, callback=self.check_response)
            
         
    def check_response(self, response):
        print('\n\nRESPONSE:', response.status)
        try:
            data = json.loads(response.body)
            if data['items']:
                print(f'Received data with: {len(data["items"])} items.')
                # HERE I WANT TO CLOSE MY SPIDER
                # self.close(reason='Closing spider')
                # sys.exit('Exiting from the spider')
                # raise CloseSpider(reason='Closing the spider')
        except:
            print(f'got error in url {response.url}')

# run spider
process = CrawlerProcess()
process.crawl(ScrapyProxy)
process.start()

This is a standalone spider. Please help me terminate it. Thanks in advance.

I would like to know why raise CloseSpider does not work. According to the docs it should. See Georgiy's comment.
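
For reference, the documented way to use CloseSpider is simply to raise it from a callback. A minimal sketch of that pattern is below; the spider name, URL and condition are only placeholders, and note that requests already scheduled or in flight may still be processed before the spider actually closes:

import scrapy
from scrapy.exceptions import CloseSpider

class ExampleSpider(scrapy.Spider):
    name = 'example'
    start_urls = ['https://example.com']

    def parse(self, response):
        # per the Scrapy docs, raising CloseSpider asks the engine to close the spider
        if response.status != 200:
            raise CloseSpider(reason='unexpected response status')
        yield {'url': response.url}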

sys.exit() probably does not work because the exception it raises is caught by Twisted. You can try getting the reactor and stopping it:

from twisted.internet import reactor
...
reactor.stop() 

If that does not work, try reactor.crash().
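
Applied to the spider above, the suggestion would look roughly like this. This is an untested sketch; everything except the reactor calls mirrors the question's check_response, and catching ReactorNotRunning is my own addition so that a second response arriving after the reactor has already been asked to stop does not raise an error:

import json
import scrapy
from twisted.internet import reactor
from twisted.internet.error import ReactorNotRunning

class ScrapyProxy(scrapy.Spider):
    # name, start_urls, headers and parse() stay exactly as in the question

    def check_response(self, response):
        print('\n\nRESPONSE:', response.status)
        try:
            data = json.loads(response.body)
            if data['items']:
                print(f'Received data with: {len(data["items"])} items.')
                try:
                    # stop the Twisted reactor that CrawlerProcess started
                    reactor.stop()
                except ReactorNotRunning:
                    # another proxy may answer after the reactor was already stopped
                    pass
        except Exception:
            print(f'got error in url {response.url}')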