Scrapy crawler will not crawl any webpages

I have been trying to get this crawler to work, but I keep getting errors.
Can anyone suggest a way to get it running?

The main spider code is:

import scrapy

# GameItem lives in your project's items module; adjust the path to match
# your project name, e.g.:
# from myproject.items import GameItem


class GameSpider(scrapy.Spider):
    # The spider name is an identifier, not a filename.
    name = "game_spider"
    # allowed_domains must list real domains; "*" is not a valid wildcard.
    allowed_domains = ["www.game.co.uk"]
    start_urls = [
        "http://www.game.co.uk/en/grand-theft-auto-v-with-gta-online-3-500-000-1085837?categoryIdentifier=706209&catGroupId="
    ]

    def parse(self, response):
        # The response object supports .xpath() directly;
        # no separate Selector is needed.
        sites = response.xpath('//ul[@class="directory-url"]/li')
        items = []

        for site in sites:
            # The item class is GameItem, not Website.
            item = GameItem()
            item['name'] = site.xpath('//*[@id="details301149"]/div/div/h2/text()').extract()
            # item['link'] = site.xpath('//a/@href').extract()
            # item['description'] = site.xpath('//*[@id="overview"]/div[3]').re('-\s[^\n]*\r')
            items.append(item)

        print(items)
        return items

The item code is:

import scrapy


class GameItem(scrapy.Item):
    # Item and Field must be qualified (scrapy.Item, scrapy.Field)
    # when only the top-level package is imported.
    name = scrapy.Field()

Your start_urls link returns an HTTP 500 error, so there are no items to extract:

In [7]: sites = response.xpath('//ul[@class="directory-url"]/li')

In [8]: sites
Out[8]: []
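The empty result above can be reproduced without any network access: when the fetched markup contains no `<ul class="directory-url">`, the XPath simply matches nothing and `parse()` builds zero items. A minimal sketch using only the standard library's `xml.etree.ElementTree` (its XPath support is more limited than Scrapy's, but the same query works here, and the sample HTML is an invented stand-in for the real page):

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in page that lacks the <ul class="directory-url">
# the spider's XPath expects.
html = """<html><body>
  <ul class="other-list"><li>some entry</li></ul>
</body></html>"""

root = ET.fromstring(html)
# Same query shape the spider uses: every <li> under a <ul>
# whose class attribute is "directory-url".
sites = root.findall('.//ul[@class="directory-url"]/li')
print(sites)  # -> [] : nothing matches, so the loop body never runs
```

So before tuning the XPath, confirm in `scrapy shell` that the response status is 200 and that the expected element actually exists in `response.text`; with a 500 response, no selector will ever match.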