Work-horse process was terminated unexpectedly (RQ and Scrapy)
I'm trying to retrieve a function from Redis (RQ) that spawns a CrawlerProcess, but I'm getting:
Work-horse process was terminated unexpectedly (waitpid returned 11)
Console log:
Moving job to 'failed' queue (work-horse terminated unexpectedly;
waitpid returned 11)
It crashes at the line I've marked with the comment THIS LINE KILL THE PROGRAM.
What am I doing wrong, and how can I fix it?
This is the function I retrieve from RQ:
def custom_executor(url):
    process = CrawlerProcess({
        'USER_AGENT': "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.75 Safari/537.36",
        'DOWNLOAD_TIMEOUT': 20000,  # 100
        'ROBOTSTXT_OBEY': False,
        'HTTPCACHE_ENABLED': False,
        'REDIRECT_ENABLED': False,
        'SPLASH_URL': 'http://localhost:8050/',
        'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
        'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage',

        'DOWNLOADER_MIDDLEWARES': {
            'scrapy_splash.SplashCookiesMiddleware': 723,
            'scrapy_splash.SplashMiddleware': 725,
            'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        },

        'SPIDER_MIDDLEWARES': {
            'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': True,
            'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': True,
            'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': True,
            'scrapy.extensions.closespider.CloseSpider': True,
            'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
        }
    })

    ### THIS LINE KILL THE PROGRAM
    process.crawl(ExtractorSpider,
                  start_urls=[url, ], es_client=es_get_connection(),
                  redis_conn=redis_get_connection())
    process.start()
And this is my ExtractorSpider:
class ExtractorSpider(Spider):
    name = "Extractor Spider"
    handle_httpstatus_list = [301, 302, 303]

    def parse(self, response):
        yield SplashRequest(url=url, callback=process_screenshot,
                            endpoint='execute', args=SPLASH_ARGS)
Thanks.
The process crashed because the heavy computation ran it out of memory. Increasing the available memory solved the problem.
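If you want to confirm that memory is the culprit, one option is to log the job's peak resident set size just before it returns; here is a minimal sketch using Python's standard-library resource module (calling it at the end of custom_executor is my assumption, not part of the original code):

import resource

def log_peak_memory():
    # Peak resident set size of this process; Linux reports
    # ru_maxrss in kilobytes, macOS in bytes.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"peak RSS: {peak} kB")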
In my case the process was timing out, and I had to change the default timeout.
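For context, RQ kills any job that runs longer than its timeout (180 seconds by default), which a long crawl can easily exceed. A minimal sketch of raising the limit at enqueue time; the queue name and the 600-second value are illustrative, not from the original post:

from redis import Redis
from rq import Queue

q = Queue('crawls', connection=Redis())  # queue name is illustrative

# job_timeout overrides RQ's 180-second default for this job;
# 600 seconds here is an arbitrary example value
q.enqueue(custom_executor, 'https://example.com', job_timeout=600)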