Integrating Celery with Flask using the application factory pattern: maximum recursion depth error
I'm working from the cookiecutter Flask template, which uses the application factory pattern. I had Celery working for tasks that don't use the application context, but one of my tasks does need to know about it: it makes a database query and updates a database object. Right now, rather than a circular import error (though I've had those in other attempts), I get a maximum recursion depth error.
I carefully consulted this blog post about how to use Celery with the application factory pattern, and I'm trying to follow this Stack Overflow answer, since its structure evidently derives from cookiecutter Flask as well.
The relevant part of my project structure:
cookiecutter_mbam
│   celeryconfig.py
│
└───cookiecutter_mbam
    │   __init__.py
    │   app.py
    │   run_celery.py
    │
    └───utility
    │       celery_utils.py
    │
    └───derivation
    │       tasks.py
    │
    └───storage
    │       tasks.py
    │
    └───xnat
            tasks.py
__init__.py:
"""Main application package."""
from celery import Celery
celery = Celery('cookiecutter_mbam', config_source='cookiecutter_mbam.celeryconfig')
The relevant part of app.py:
from cookiecutter_mbam import celery

def create_app(config_object='cookiecutter_mbam.settings'):
    """An application factory, as explained here: http://flask.pocoo.org/docs/patterns/appfactories/.

    :param config_object: The configuration object to use.
    """
    app = Flask(__name__.split('.')[0])
    app.config.from_object(config_object)
    init_celery(app, celery=celery)
    register_extensions(app)
    # ...
    return app
run_celery.py:
from cookiecutter_mbam.app import create_app
from cookiecutter_mbam import celery
from cookiecutter_mbam.utility.celery_utils import init_celery
app = create_app(config_object='cookiecutter_mbam.settings')
init_celery(app, celery)
celeryconfig.py:
broker_url = 'redis://localhost:6379'
result_backend = 'redis://localhost:6379'
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
enable_utc = True
imports = {'cookiecutter_mbam.xnat.tasks', 'cookiecutter_mbam.storage.tasks', 'cookiecutter_mbam.derivation.tasks'}
The relevant part of celery_utils.py:
def init_celery(app, celery):
    """Add flask app context to celery.Task"""
    class ContextTask(celery.Task):
        def __call__(self, *args, **kwargs):
            with app.app_context():
                return self.run(*args, **kwargs)
    celery.Task = ContextTask
    return celery
When I try to start a worker with celery -A cookiecutter_mbam.run_celery:celery worker, I get RecursionError: maximum recursion depth exceeded while calling a Python object. (I also tried several other ways of invoking the worker, all with the same error.) Here's an excerpt from the stack trace:
Traceback (most recent call last):
File "/Users/katie/anaconda/bin/celery", line 11, in <module>
sys.exit(main())
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/__main__.py", line 16, in main
_main()
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 322, in main
cmd.execute_from_commandline(argv)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 496, in execute_from_commandline
super(CeleryCommand, self).execute_from_commandline(argv)))
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 275, in execute_from_commandline
return self.handle_argv(self.prog_name, argv[1:])
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 488, in handle_argv
return self.execute(command, argv)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/celery.py", line 420, in execute
).run_from_argv(self.prog_name, argv[1:], command=argv[0])
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 221, in run_from_argv
*self.parse_options(prog_name, argv, command))
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 398, in parse_options
self.parser = self.create_parser(prog_name, command)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/base.py", line 414, in create_parser
self.add_arguments(parser)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/bin/worker.py", line 277, in add_arguments
default=conf.worker_state_db,
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
return self[k]
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 429, in __getitem__
return getitem(k)
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 278, in __getitem__
return mapping[_key]
File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
if key in self.data:
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
return self[k]
File "/Users/katie/anaconda/lib/python3.6/collections/__init__.py", line 989, in __getitem__
if key in self.data:
File "/Users/katie/anaconda/lib/python3.6/site-packages/celery/utils/collections.py", line 126, in __getattr__
return self[k]
I understand the basic sense of this error: something is calling itself endlessly, perhaps create_app. But I can't see why, and I don't know how to go about debugging it.
I also get this error when I try to load my site:
File "~/cookiecutter_mbam/cookiecutter_mbam/xnat/tasks.py", line 14, in <module>
@celery.task
AttributeError: module 'cookiecutter_mbam.celery' has no attribute 'task'
I didn't have this problem when I was using the make_celery method described here, but that method creates circular import problems when you need your tasks to have access to the application context. Pointers on how to do this correctly with the Cookiecutter Flask template would be much appreciated.
I'm suspicious of that bit of code that makes the Flask application available to celery. It skips some essential code by going straight to run() (see https://github.com/celery/celery/blob/master/celery/app/task.py#L387). Try invoking the inherited __call__ instead. Here's a snippet from one of my (working) applications.
# Arrange for tasks to have access to the Flask app
TaskBase = celery.Task
class ContextTask(TaskBase):
    def __call__(self, *args, **kwargs):
        with app.app_context():
            return TaskBase.__call__(self, *args, **kwargs)  ## << here
celery.Task = ContextTask
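To see why delegating to the inherited __call__ matters, here's a plain-Python sketch. BaseTask is a hypothetical stand-in for celery.Task, not Celery's real code: its __call__ performs per-call setup before dispatching to run(), and a subclass that jumps straight to self.run() would silently skip that setup.

```python
class BaseTask:
    """Stand-in for celery.Task: __call__ does bookkeeping, then dispatches to run()."""
    def __call__(self, *args, **kwargs):
        self.called_through_base = True  # stands in for Celery's per-call setup
        return self.run(*args, **kwargs)

    def run(self, *args, **kwargs):
        raise NotImplementedError

class ContextTask(BaseTask):
    def __call__(self, *args, **kwargs):
        # Pretend this wraps the call in `with app.app_context():`.
        # Crucially, it delegates to the base __call__, not self.run().
        return BaseTask.__call__(self, *args, **kwargs)

class AddTask(ContextTask):
    def run(self, x, y):
        return x + y

task = AddTask()
print(task(2, 3))                # 5
print(task.called_through_base)  # True: the base class's setup still ran
```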
I also don't see where you create an instance of Celery and configure it. I assume you have
celery = Celery(__name__)
and then need
celery.config_from_object(...)
somewhere in init_celery().
Solved. My celeryconfig.py was misplaced: I needed to move it into the package directory, not the parent repo directory. It is incredibly unintuitive/uninformative that a misplaced config file doesn't cause an "I can't find that file" kind of error, but instead causes infinite recursion. But at least I finally saw it and corrected it.
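The traceback hints at why the failure shows up as recursion rather than a missing-file error: it loops between a __getattr__ and a __getitem__, which is the shape of a settings object that falls back to item lookup when an attribute is missing. If the config was never loaded, the fallback itself triggers another failed lookup. A minimal illustration with a hypothetical Settings class (not Celery's actual implementation):

```python
class Settings:
    """Attribute access falls back to item lookup in self.data."""
    def __init__(self, data=None):
        if data is not None:
            self.data = data

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails -- including for
        # `data` itself if it was never set, which re-enters this method.
        return self.data[name]

ok = Settings({"broker_url": "redis://localhost:6379"})
print(ok.broker_url)  # redis://localhost:6379

broken = Settings()   # config never loaded: no `data` attribute at all
try:
    broken.broker_url
except RecursionError:
    print("lookup of missing config recursed instead of raising cleanly")
```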