Flask+Celery as a Daemon
I'm working with Python Flask and want to use Celery. The distributed tasks work fine, but now I want to configure everything to run as a daemon as described in the Celery documentation. However, I get a celery_worker_1 exited with code 0 error.
Project structure:
celery
|-- flask-app
| `-- app.py
|-- worker
| |-- celeryd
| |-- celeryd.conf
| |-- Dockerfile
| |-- start.sh
| `-- tasks.py
`-- docker-compose.yml
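The docker-compose.yml itself is not shown here. Purely as a sketch of the assumed setup (service names are inferred from the redis broker host and the celery_worker_1 container name, not taken from the actual file):
version: "3"
services:
  redis:
    # broker and result backend, reachable as redis://redis:6379
    image: redis:alpine
  flask-app:
    # how the Flask app image is built is not shown in the tree above
    build: ./flask-app
    ports:
      - "5000:5000"
    depends_on:
      - redis
  worker:
    build: ./worker
    depends_on:
      - redis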
flask-app/app.py:
from flask import Flask
from flask_restful import Api, Resource
from celery import Celery
celery = Celery(
    'tasks',
    broker='redis://redis:6379',
    backend='redis://redis:6379'
)

app = Flask(__name__)
api = Api(app)

class add_zahl(Resource):
    def get(self):
        zahl = 54
        task = celery.send_task('mytasks.add', args=[zahl])
        return {'message': f"Prozess {task.id} gestartet, input {zahl}"}, 200

api.add_resource(add_zahl, "/add")

if __name__ == '__main__':
    app.run(host="0.0.0.0", debug=True)
worker/tasks.py:
from celery import Celery
import requests
import time
import os
from dotenv import load_dotenv
basedir = os.path.abspath(os.path.dirname(__file__))
load_dotenv(os.path.join(basedir, '.env'))
celery = Celery(
    'tasks',
    broker='redis://redis:6379',
    backend='redis://redis:6379'
)

@celery.task(name='mytasks.add')
def send_simple_message(zahl):
    time.sleep(5)
    result = zahl * zahl
    return result

if __name__ == '__main__':
    celery.start()
Dockerfile:
FROM python:3.6-slim
RUN mkdir /worker
COPY requirements.txt /worker/
RUN pip install --no-cache-dir -r /worker/requirements.txt
COPY . /worker/
COPY celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd
COPY celeryd.conf /etc/default/celeryd
RUN chown root:root /etc/default/celeryd
RUN useradd -N -M --system -s /bin/bash celery
RUN addgroup celery
RUN adduser celery celery
RUN mkdir -p /var/run/celery
RUN mkdir -p /var/log/celery
RUN chown -R celery:celery /var/run/celery
RUN chown -R celery:celery /var/log/celery
RUN chmod u+x /worker/start.sh
ENTRYPOINT /worker/start.sh
celeryd.conf:
CELERYD_NODES="worker1"
CELERY_BIN="/worker/tasks"
CELERY_APP="worker.tasks:celery"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1
start.sh
#!/bin/sh
exec celery multi start worker1 -A worker --app=worker.tasks:celery
celeryd (init script):
https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd
Docker inspect output:
Docker inspect 50fbe00fdc5de56dafaf4268f24baed3b47c8519a689f0733e41ec7fdbc86765
[
    {
        "Id": "50fbe00fdc5de56dafaf4268f24baed3b47c8519a689f0733e41ec7fdbc86765",
        "Created": "2019-02-21T23:20:15.017156266Z",
        "Path": "/bin/sh",
        "Args": [
            "-c",
            "/worker/start.sh"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-02-21T23:20:40.375566345Z",
            "FinishedAt": "2019-02-21T23:20:41.162618701Z"
        },
Sorry for the "spam", but I just can't solve this problem.
EDIT EDIT EDIT
I added the CMD line mentioned in the answer, and now the worker does not start at all. I'm struggling to find a solution for this. Any hints? Thank you all.
FROM python:3.6-slim
RUN mkdir /worker
COPY requirements.txt /worker/
RUN pip install --no-cache-dir -r /worker/requirements.txt
COPY . /worker/
COPY celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd
COPY celeryd.conf /etc/default/celeryd
RUN chown -R root:root /etc/default/celeryd
RUN useradd -N -M --system -s /bin/bash celery
RUN addgroup celery
RUN adduser celery celery
RUN mkdir -p /var/run/celery
RUN mkdir -p /var/log/celery
RUN chown -R celery:celery /var/run/celery
RUN chown -R celery:celery /var/log/celery
CMD ["celery", "worker", "--app=worker.tasks:celery"]
Whenever a Docker container's entrypoint exits (or, if you have no entrypoint, its main command), the container exits. A corollary of this is that the main process in a container can't be a command like celery multi that spawns some background work and immediately returns; you need something like celery worker that runs in the foreground.
I'd probably replace the last lines of your Dockerfile with:
CMD ["celery", "worker", "--app=worker.tasks:celery"]
Keeping the entrypoint script and changing it to an equivalent foreground celery worker command should also do the job.
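For instance, a start.sh along these lines (a sketch, not taken from the answer; it reuses the --app value from the celeryd.conf above and assumes worker.tasks is importable inside the image):
#!/bin/sh
# Run the worker in the foreground; exec makes it the container's main
# process, so the container stays up and stop signals reach the worker directly.
exec celery worker \
    --app=worker.tasks:celery \
    --loglevel=INFO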
You can also use supervisord to run your celery worker. One benefit is that supervisord will also monitor the worker and restart it if something goes wrong. Below is an example extracted from a working image, adapted to your case...
File supervisord.conf
[supervisord]
nodaemon=true
[program:celery]
command=celery worker -A proj --loglevel=INFO
directory=/path/to/project
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
stopasgroup=true
priority=1000
File start.sh
#!/bin/bash
set -e
exec /usr/bin/supervisord -c /etc/supervisor/supervisord.conf
File Dockerfile
# Your other Dockerfile content here
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/start.sh"]