Scrapy not following links in domain

I can't understand why the crawlspider finishes after crawling only the start_urls link on the site cvvp.nva.gov.lv/#/pub/. The parse_item code is there purely to test whether the spider follows other links within the allowed_domain, and it doesn't appear to follow any of them. I tried the exact same code with allowed_domains = ['books.toscrape.com'] and start_urls = ['https://books.toscrape.com'], and it works fine.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CvspiderSpider(CrawlSpider):
    name = 'cvspider'
    allowed_domains = ['cvvp.nva.gov.lv']
    start_urls = ['https://cvvp.nva.gov.lv/#/pub/']

    # Follow every link found on each page and send it to parse_item
    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Placeholder callback, only used to confirm that links are followed
        item = {}
        print('success')
        return item

I'm not getting any errors either. Here is the console output:

2021-09-02 16:11:38 [scrapy.core.engine] INFO: Spider opened
2021-09-02 16:11:38 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-09-02 16:11:38 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-09-02 16:11:38 [scrapy.core.engine] DEBUG: Crawled (404) <GET https://cvvp.nva.gov.lv/robots.txt> (referer: None)
2021-09-02 16:11:43 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://cvvp.nva.gov.lv/#/pub/> (referer: None)
2021-09-02 16:11:43 [scrapy.core.engine] INFO: Closing spider (finished)

There is no robots.txt file (hence the 404), and the request headers are set with 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36 Edg/92.0.902.84'.
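The question doesn't show where those two things were configured; a minimal sketch, assuming they were set through custom_settings on the spider rather than in settings.py, could look like this:

    custom_settings = {
        # Don't fetch/obey robots.txt, since the site returns 404 for it anyway
        'ROBOTSTXT_OBEY': False,
        # Send a regular browser User-Agent with every request
        'USER_AGENT': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                       'AppleWebKit/537.36 (KHTML, like Gecko) '
                       'Chrome/92.0.4515.159 Safari/537.36 Edg/92.0.902.84'),
    }

As the log shows, though, the crawl still stops after the first page either way.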

Any ideas?

It looks like the problem is that the whole site is rendered with JavaScript: with JavaScript disabled there is no content at all, so the HTML that Scrapy downloads contains no links for the LinkExtractor to find.
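If that is the case, one way forward (not part of the original post) is to have a real browser render the page before Scrapy parses it, for example with the scrapy-playwright plugin. The following is a minimal, untested sketch assuming scrapy-playwright and the Playwright browsers are installed; the download handler paths and the 'playwright' meta key follow that plugin's documented usage:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CvspiderSpider(CrawlSpider):
    name = 'cvspider'
    allowed_domains = ['cvvp.nva.gov.lv']
    start_urls = ['https://cvvp.nva.gov.lv/#/pub/']

    custom_settings = {
        # Route requests through Playwright so the page's JavaScript is executed
        'DOWNLOAD_HANDLERS': {
            'http': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
            'https': 'scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler',
        },
        'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
    }

    rules = (
        # _set_playwright makes the rule-generated requests render in the browser too
        Rule(LinkExtractor(), callback='parse_item', follow=True,
             process_request='_set_playwright'),
    )

    def start_requests(self):
        for url in self.start_urls:
            # Render the start page in a browser before link extraction runs
            yield scrapy.Request(url, meta={'playwright': True})

    def _set_playwright(self, request, response):
        request.meta['playwright'] = True
        return request

    def parse_item(self, response):
        print('success')
        return {}

Whether the LinkExtractor then finds anything still depends on the rendered page containing ordinary <a href> links; a single-page application that navigates purely through hash routes or JavaScript click handlers may need the target URLs built by hand, or its backend API called directly, instead.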