Celery + Redis - Django blocks when triggering a task with delay
I have installed Celery with Redis on my Django project.
Scheduled tasks run without problems.
The problem appears when triggering an asynchronous task with delay(): execution stalls, as if it were blocked in the loop inside kombu.utils.retry_over_time.
I have checked that Redis is up and running. I honestly don't know how to debug this.
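For context, the call that hangs is an ordinary delay() on a shared task. A minimal sketch of that pattern (the task and its module are hypothetical, not taken from the project above, and running it requires a reachable broker):

```python
# tasks.py -- minimal shared task; names here are hypothetical
from celery import shared_task

@shared_task
def add(x, y):
    return x + y

# Somewhere in a Django view: this call is what never returns,
# apparently stuck retrying the broker connection inside
# kombu.utils.retry_over_time
result = add.delay(2, 3)
```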
Here are some package versions:
Django==2.1.2
celery==4.2.1
django-celery-beat==1.4.0
django-celery-results==1.0.4
redis==3.2.0
kombu==4.4.0
Settings
CELERY_REDIS_HOST = 'localhost'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 1 # Redis DB number; if not provided the default will be 0
CELERY_REDIS_PASSWORD = ''
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
CELERY_BROKER_URL = 'redis://{host}:{port}/{db}'.format(host=CELERY_REDIS_HOST, port=CELERY_REDIS_PORT, db=CELERY_REDIS_DB)
CELERY_RESULT_BACKEND = 'django-db'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json' # Result serialization format
CELERY_TASK_SERIALIZER = 'json' # String identifying the serializer to be used
CELERY_BROKER_TRANSPORT_OPTIONS = {
'visibility_timeout': 3600, # 1 hour, default Redis visibility timeout
}
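With the values above, the format() call resolves to a standard Redis broker URL. A quick sanity check in pure Python, using the same values as the settings:

```python
# Reproduce the broker URL built in settings.py from the same values
CELERY_REDIS_HOST = 'localhost'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 1

CELERY_BROKER_URL = 'redis://{host}:{port}/{db}'.format(
    host=CELERY_REDIS_HOST, port=CELERY_REDIS_PORT, db=CELERY_REDIS_DB
)
print(CELERY_BROKER_URL)  # redis://localhost:6379/1
```

If the URL printed here does not match what redis-cli can actually connect to (host, port, or DB number), delay() will block retrying the connection.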
How Celery and Celery Beat are started
Shell script that registers Celery and Celery Beat with Supervisor:
#!/usr/bin/env bash
# Create required directories
sudo mkdir -p /var/log/celery/
sudo mkdir -p /var/run/celery/
# Create group called 'celery'
sudo groupadd -f celery
# add the user 'celery' if it doesn't exist and add it to the group with same name
id -u celery &>/dev/null || sudo useradd -g celery celery
# add permissions to the celery user for r+w to the folders just created
sudo chown -R celery:celery /var/log/celery/
sudo chown -R celery:celery /var/run/celery/
# Get django environment variables
celeryenv=`cat ./env_vars | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
celeryenv=${celeryenv%?}
# Create CELERY configuration script
celeryconf="[program:celeryd]
directory=/home/ubuntu/splityou/splityou
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery worker -A config.celery.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/%%n%%I.log\" --pidfile=\"/var/run/celery/%%n.pid\"
user=celery
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
environment=$celeryenv"
# Create CELERY BEAT configuration script
celerybeatconf="[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery beat -A config.celery.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/celery-beat.log\" --pidfile=\"/var/run/celery/celery-beat.pid\"
directory=/home/ubuntu/splityou/splityou
user=celery
numprocs=1
stdout_logfile=/var/log/celerybeat.log
stderr_logfile=/var/log/celerybeat.log
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999
environment=$celeryenv"
# Create the celery supervisord conf script
echo "$celeryconf" | tee /etc/supervisor/conf.d/celery.conf
echo "$celerybeatconf" | tee /etc/supervisor/conf.d/celerybeat.conf
# Enable supervisor to listen for HTTP/XML-RPC requests.
# supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
# Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
if ! grep -Fxq "[inet_http_server]" /etc/supervisor/supervisord.conf
then
echo "[inet_http_server]" | tee -a /etc/supervisor/supervisord.conf
echo "port = 127.0.0.1:9001" | tee -a /etc/supervisor/supervisord.conf
fi
# Reread the supervisord config
sudo supervisorctl reread
# Update supervisord in cache without restarting all services
sudo supervisorctl update
# Sleep for 15 seconds to give enough time to previous supervisor instance to shutdown
# Source:
sleep 15
# Start/Restart celeryd through supervisord
sudo supervisorctl restart celeryd
sudo supervisorctl restart celerybeat
As pointed out in the Celery "First steps with Django" tutorial, the app object must be imported in the proj/__init__.py module.
This ensures the app is always loaded when Django starts, so that shared_task will use it.
I had completely forgotten to do this, so I fixed the problem by putting the following in __init__.py:
from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app
__all__ = ('celery_app',)
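For completeness, the app object imported above typically lives in a sibling celery.py module. A minimal sketch of what that module usually looks like in Celery 4.x (the module path and settings-module name are assumptions, not taken from the project above):

```python
# config/celery.py -- minimal Celery app bootstrap (paths assumed)
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# Point Celery at the Django settings module (name is an assumption)
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

app = Celery('config')
# Read all CELERY_* settings from Django's settings
app.config_from_object('django.conf:settings', namespace='CELERY')
# Auto-discover tasks.py modules in installed Django apps
app.autodiscover_tasks()
```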