exceptions.TypeError: cannot convert dictionary update sequence element #1 to a sequence?
I'm using an open Scrapy project to crawl comments from Tencent Video, and it fails with the error below. I can't figure out what's wrong.
2015-10-22 18:33:58 [scrapy] INFO: Scrapy 1.0.1 started (bot: qqtvurl)
2015-10-22 18:33:58 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-10-22 18:33:58 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'qqtvurl.spiders', 'SPIDER_MODULES': ['qqtvurl.spiders'], 'SCHEDULER': 'scrapy_redis.scheduler.Scheduler', 'BOT_NAME': 'qqtvurl'}
2015-10-22 18:33:58 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-10-22 18:33:58 [qqtvspider] DEBUG: Reading URLs from redis list 'qqtvspider:star_urls'
Unhandled error in Deferred:
2015-10-22 18:33:58 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
cmd.run(args, opts)
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\crawler.py", line 153, in crawl
d = crawler.crawl(*args, **kwargs)
File "D:\anzhuang\Anaconda\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "D:\anzhuang\Anaconda\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\crawler.py", line 71, in crawl
self.engine = self._create_engine()
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\crawler.py", line 83, in _create_engine
return ExecutionEngine(self, lambda _: self.stop())
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\core\engine.py", line 66, in __init__
self.downloader = downloader_cls(crawler)
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\core\downloader\__init__.py", line 65, in __init__
self.handlers = DownloadHandlers(crawler)
File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 17, in __init__
handlers.update(crawler.settings.get('DOWNLOAD_HANDLERS', {}))
exceptions.TypeError: cannot convert dictionary update sequence element #1 to a sequence
2015-10-22 18:33:58 [twisted] CRITICAL:
This started after I added the following line to settings.py:

DOWNLOAD_HANDLERS = {'S3', None,}

When I run the project, I get the error above. Thanks a lot!
That's because you are putting sequence elements into the dictionary. You should write:

DOWNLOAD_HANDLERS = {'S3': None,}

or something like that. You can read more about how to set values for DOWNLOAD_HANDLERS, with examples, here: http://doc.scrapy.org/en/latest/topics/settings.html#download-handlers-base
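One detail worth checking against the linked docs: the built-in keys in DOWNLOAD_HANDLERS_BASE are lowercase URI schemes ('file', 'http', 'https', 's3', 'ftp'), so to actually disable the S3 handler the key is usually written 's3'. A minimal settings.py sketch:

DOWNLOAD_HANDLERS = {
    # a dict entry (key: value), not a set element; 's3' is the URI
    # scheme the built-in S3 handler is registered under
    's3': None,
}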
{'S3', None,} is a set, while the code expects DOWNLOAD_HANDLERS to be a dict or a sequence of (key, value) tuples. In other words, replace {'S3', None,} with {'S3': None} and you should no longer get this error.
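For reference, the failing call in the traceback is a plain dict.update(), so the difference can be reproduced outside Scrapy. A minimal sketch in plain Python:

handlers = {}

# With a set literal, update() iterates the elements and tries to unpack
# each one as a (key, value) pair. The string 'S3' happens to unpack into
# ('S', '3'), but None is not a sequence, so update() raises TypeError;
# which element number the message reports depends on set iteration order.
try:
    handlers.update({'S3', None})
except TypeError as e:
    print(e)  # cannot convert dictionary update sequence element #N to a sequence

# With a dict literal, update() merges the mapping directly.
handlers.update({'S3': None})
print(handlers)  # {'S3': None}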