Docker&Celery - ERROR: Pidfile (celerybeat.pid) already exists

The application consists of:
- Django
- Redis
- Celery
- Docker
- Postgres

Everything ran smoothly before I moved the project into Docker, but once it was running in containers the problems started. At first everything works fine, but after a while I get the following error:

celery-beat_1  | ERROR: Pidfile (celerybeat.pid) already exists.

I have been struggling with this for a while and I am about ready to give up. I can't figure out what is wrong with it.

Dockerfile:

FROM python:3.7

ENV PYTHONUNBUFFERED 1
RUN mkdir -p /opt/services/djangoapp/src


COPY /scripts/startup/entrypoint.sh entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

COPY Pipfile Pipfile.lock /opt/services/djangoapp/src/
WORKDIR /opt/services/djangoapp/src
RUN pip install pipenv && pipenv install --system

COPY . /opt/services/djangoapp/src

RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \;

RUN sed -i "s|django.core.urlresolvers|django.urls |g" /usr/local/lib/python3.7/site-packages/vanilla/views.py
RUN cp /usr/local/lib/python3.7/site-packages/celery/backends/async.py /usr/local/lib/python3.7/site-packages/celery/backends/asynchronous.py
RUN rm /usr/local/lib/python3.7/site-packages/celery/backends/async.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/redis.py
RUN sed -i "s|async|asynchronous|g" /usr/local/lib/python3.7/site-packages/celery/backends/rpc.py

RUN cd app && python manage.py collectstatic --no-input



EXPOSE 8000
CMD ["gunicorn", "-c", "config/gunicorn/conf.py", "--bind", ":8000", "--chdir", "app", "example.wsgi:application", "--reload"]

docker-compose.yml:

version: '3'

services:

  djangoapp:
    build: .
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
      - .:/code
    restart: always
    networks:
      - nginx_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
      - redis_network
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - migration
      - redis

  # base redis server
  redis:
    image: "redis:alpine"
    restart: always
    ports: 
      - "6379:6379"
    networks:
      - redis_network
    volumes:
      - redis_data:/data

  # celery worker
  celery:
    build: .
    command: >
      bash -c "cd app && celery -A example worker --without-gossip --without-mingle --without-heartbeat -Ofair"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume    
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
    restart: always
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - redis
    links:
      - redis

  celery-beat:
    build: .
    command: >
      bash -c "cd app && celery -A example beat"
    volumes:
      - .:/opt/services/djangoapp/src
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media
    networks:
      - redis_network
      - database1_network # comment when testing
      # - test_database1_network # uncomment when testing
    restart: always
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
      - redis
    links:
      - redis

  # migrations needed for proper db functioning
  migration:
    build: .
    command: >
      bash -c "cd app && python3 manage.py makemigrations && python3 manage.py migrate"
    depends_on:
      - database1 # comment when testing
      # - test_database1 # uncomment when testing
    networks:
     - database1_network # comment when testing
     # - test_database1_network # uncomment when testing

  # reverse proxy container (nginx)
  nginx:
    image: nginx:1.13
    ports:
      - 80:80
    volumes:
      - ./config/nginx/conf.d:/etc/nginx/conf.d
      - static_volume:/opt/services/djangoapp/static  # <-- bind the static volume
      - media_volume:/opt/services/djangoapp/media  # <-- bind the media volume
      - static_local_volume:/opt/services/djangoapp/src/app/static
      - media_local_volume:/opt/services/djangoapp/src/app/media 
    restart: always
    depends_on:
      - djangoapp
    networks:
      - nginx_network

  database1: # comment when testing
    image: postgres:10 # comment when testing
    env_file: # comment when testing
      - config/db/database1_env # comment when testing
    networks: # comment when testing
      - database1_network # comment when testing
    volumes: # comment when testing
      - database1_volume:/var/lib/postgresql/data # comment when testing

  # test_database1: # uncomment when testing
    # image: postgres:10 # uncomment when testing
    # env_file: # uncomment when testing
      # - config/db/test_database1_env # uncomment when testing
    # networks: # uncomment when testing
      # - test_database1_network # uncomment when testing
    # volumes: # uncomment when testing
      # - test_database1_volume:/var/lib/postgresql/data # uncomment when testing


networks:
  nginx_network:
    driver: bridge
  database1_network: # comment when testing
    driver: bridge # comment when testing
  # test_database1_network: # uncomment when testing
    # driver: bridge # uncomment when testing
  redis_network:
    driver: bridge
volumes:
  database1_volume: # comment when testing
  # test_database1_volume: # uncomment when testing
  static_volume:  # <-- declare the static volume
  media_volume:  # <-- declare the media volume
  static_local_volume:
  media_local_volume:
  redis_data:

Please ignore "test_database1_volume", it exists only for testing purposes.

I believe there is a pidfile in your project directory ./ which then gets mounted into the container when you run it (which is why RUN find . -type f -name "celerybeat.pid" -exec rm -f {} \; has no effect).
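A quick way to confirm this is to look for the file on the host, since the bind mounts (.:/opt/services/djangoapp/src and .:/code) make the project directory and the container share the same files. For example, run from the project root on the host:

find . -name "celerybeat.pid"   # beat is started from app/, so the file typically shows up as ./app/celerybeat.pid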

You can point Celery at a non-mounted path with celery --pidfile=/opt/celeryd.pid so that the file is not mirrored on the host.
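Applied to the compose file above, that just means adding the option to the beat command. The /tmp path below is only an illustration; any path not covered by a volume mount works:

  celery-beat:
    build: .
    command: >
      bash -c "cd app && celery -A example beat --pidfile=/tmp/celerybeat.pid"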

Alternatively, create a Django management command celery_kill.py:

import shlex
import subprocess

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    """Force-kill any running Celery processes (worker and beat)."""

    def handle(self, *args, **options):
        kill_worker_cmd = 'pkill -9 celery'
        subprocess.call(shlex.split(kill_worker_cmd))

docker-compose.yml:

  celery:
    build: ./src
    restart: always
    command: celery -A project worker -l info
    volumes:
      - ./src:/var/lib/celery/data/
    depends_on:
      - db
      - redis
      - app

  celery-beat:
    build: ./src
    restart: always
    command: celery -A project beat -l info --pidfile=/tmp/celeryd.pid
    volumes:
      - ./src:/var/lib/beat/data/
    depends_on:
      - db
      - redis
      - app

and the Makefile:

run:
    docker-compose up -d --force-recreate
    docker-compose exec app python manage.py celery_kill
    docker-compose restart
    docker-compose exec app python manage.py migrate

Another solution is to pass --pidfile= (with no path) so that no pidfile is created at all. It has the same effect as the answer above.
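With the example project above, the beat command would then look something like this (just a sketch of the same command with an empty --pidfile):

celery -A example beat --pidfile=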

It is not elegant at all, but I found that adding:

celerybeat.pid

to my .dockerignore file fixed the issue described above.
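For reference, the entry is literally just the file name (a minimal sketch). Note that .dockerignore only keeps the file out of the image build context; it does not stop a bind-mounted host directory from bringing the file back into the container:

# .dockerignore
celerybeat.pid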

The cause of this error is that the Docker container was stopped without a clean Celery shutdown. The fix is simple: stop Celery before starting it.

Solution 1. Write the Celery startup command (e.g. in docker-entrypoint.sh, ...) like this:

# stop any previous worker, remove the stale pidfile, then start again
celery multi stopwait w1 -A myproject \
  && rm -f /var/run/celery/w1.pid \
  && celery multi start w1 -A myproject -l info --pidfile=/var/run/celery/w1.pid

Solution 2 (not recommended)

Always run docker-compose down before docker-compose up.
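In other words, something along these lines:

docker-compose down   # stop and remove the old containers first
docker-compose up     # then start fresh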

I got this error with Airflow when running it via docker-compose.

If you don't care about the current state of your Airflow instance, you can simply delete the Airflow container:

docker rm containerId
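(If you don't know the container ID, the standard Docker CLI can list it, including stopped containers:)

docker ps -a   # find the Airflow container ID here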

After that, start Airflow again:

docker-compose up