Scrapy NotSupported and TimeoutError

My goal is to find every link that contains daraz.com.bd/shop/.

What I have tried so far is below:

import scrapy

class LinksSpider(scrapy.Spider):

    name = 'links'
    allowed_domains = ['daraz.com.bd']

    extracted_links = []
    shop_list = []

    def start_requests(self):
        start_urls = 'https://www.daraz.com.bd'
        yield scrapy.Request(url=start_urls, callback=self.extract_link)

    def extract_link(self, response):

        # Headers are bytes, so compare bytes rather than a stringified repr.
        content_type = response.headers.get('content-type', b'')
        if content_type.startswith(b'text/html'):
            links = response.xpath("//a/@href").extract()

            for link in links:
                # ("https://" or "http://") evaluates to just "https://", so the
                # original check missed http:// links; urljoin also resolves
                # relative links like "/kettles/" against the page URL.
                link = response.urljoin(link)
                
                split_link = link.split('.')

                if "daraz.com.bd" in link and link not in self.extracted_links:
                    self.extracted_links.append(link)
                    if len(split_link) > 1:
                        if "www" in link and "daraz" in split_link[1]:
                            yield scrapy.Request(url=link, callback=self.extract_link, dont_filter=True)
                        elif "www" not in link and "daraz" in split_link[0]:
                            yield scrapy.Request(url=link, callback=self.extract_link, dont_filter=True)

                        if "daraz.com.bd/shop/" in link and link not in self.shop_list:
                            yield {
                                "links" : link
                            }

Here is my settings.py file:

BOT_NAME = 'chotosite'

SPIDER_MODULES = ['chotosite.spiders']
NEWSPIDER_MODULE = 'chotosite.spiders'

ROBOTSTXT_OBEY = False
REDIRECT_ENABLED = False
DOWNLOAD_DELAY = 0.25
USER_AGENT = 'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36'
AUTOTHROTTLE_ENABLED = True

What problems am I facing?

  1. It collects only 6-7 links that contain daraz.com.bd/shop/.
  2. It stops automatically after a while.
  3. User timeout caused connection failure: Getting https://www.daraz.com.bd/kettles/ took longer than 180.0 seconds.
  4. INFO: Ignoring response <301 https://www.daraz.com.bd/toner-and-mists/>: HTTP status code is not handled or not allowed

How do I solve these problems? Please help me.

If you have any other ideas to achieve my goal, I would be very happy. Thank you...

Here are some console logs:

2020-12-04 22:21:23 [scrapy.extensions.logstats] INFO: Crawled 891 pages (at 33 pages/min), scraped 6 items (at 0 items/min)
2020-12-04 22:22:05 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.daraz.com.bd/kettles/> (failed 1 times): User timeout caused connection failure: Getting https://www.daraz.com.bd/kettles/ took longer than 180.0 seconds..
2020-12-04 22:22:11 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.daraz.com.bd/kettles/> (referer: https://www.daraz.com.bd)
2020-12-04 22:22:11 [scrapy.core.engine] INFO: Closing spider (finished)
2020-12-04 22:22:11 [scrapy.extensions.feedexport] INFO: Stored csv feed (6 items) in: dara.csv
2020-12-04 22:22:11 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 4,
 'downloader/exception_type_count/scrapy.exceptions.NotSupported': 1,
 'downloader/exception_type_count/twisted.internet.error.TimeoutError': 3,
 'downloader/request_bytes': 565004,
 'downloader/request_count': 896,
 'downloader/request_method_count/GET': 896,
 'downloader/response_bytes': 39063472,
 'downloader/response_count': 892,
 'downloader/response_status_count/200': 838,
 'downloader/response_status_count/301': 45,
 'downloader/response_status_count/302': 4,
 'downloader/response_status_count/404': 5,
 'elapsed_time_seconds': 828.333752,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 12, 4, 16, 22, 11, 864492),
 'httperror/response_ignored_count': 54,
 'httperror/response_ignored_status_count/301': 45,
 'httperror/response_ignored_status_count/302': 4,
 'httperror/response_ignored_status_count/404': 5,
 'item_scraped_count': 6,
 'log_count/DEBUG': 901,
 'log_count/ERROR': 1,
 'log_count/INFO': 78,
 'memusage/max': 112971776,
 'memusage/startup': 53370880,
 'request_depth_max': 5,
 'response_received_count': 892,
 'retry/count': 3,
 'retry/reason_count/twisted.internet.error.TimeoutError': 3,
 'scheduler/dequeued': 896,
 'scheduler/dequeued/memory': 896,
 'scheduler/enqueued': 896,
 'scheduler/enqueued/memory': 896,
 'start_time': datetime.datetime(2020, 12, 4, 16, 8, 23, 530740)}
2020-12-04 22:22:11 [scrapy.core.engine] INFO: Spider closed (finished)

You can extract all of the links with a LinkExtractor object and then filter for the links you want.

In your Scrapy shell:

scrapy shell https://www.daraz.com.bd
from scrapy.linkextractors import LinkExtractor
l = LinkExtractor()
links = l.extract_links(response)
for link in links:
    print(link.url)

# keep only the links you are after
shop_links = [link.url for link in links if "daraz.com.bd/shop/" in link.url]
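
Building on that, here is a minimal sketch of the same idea as a complete spider: a CrawlSpider follows every in-domain link and yields only the /shop/ URLs. The spider name and the allow pattern below are illustrative choices of mine, not something from your code.

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ShopLinksSpider(CrawlSpider):
    name = 'shop_links'  # hypothetical name for this sketch
    allowed_domains = ['daraz.com.bd']
    start_urls = ['https://www.daraz.com.bd']

    rules = (
        # Follow every in-domain link; Scrapy's built-in duplicate filter
        # replaces the manual extracted_links bookkeeping.
        Rule(LinkExtractor(allow_domains=['daraz.com.bd']),
             callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        # Yield only links whose URL contains the /shop/ path segment.
        for link in LinkExtractor(allow=r'daraz\.com\.bd/shop/').extract_links(response):
            yield {'links': link.url}

Two notes on your log output: the "Ignoring response <301 ...>" lines come from REDIRECT_ENABLED = False in settings.py. With the redirect middleware disabled, 3xx responses fall through to the HttpError middleware and are dropped, so removing that line (redirects are on by default) should let those category pages be crawled. The "took longer than 180.0 seconds" errors are requests hitting the default DOWNLOAD_TIMEOUT of 180 seconds; the retry middleware already retries them, as your log shows.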