Celery in Docker container: ERROR/MainProcess consumer: Cannot connect to redis

A lot of frustration with this; I've been trying to get it working for days. Please help.

This is a Django project with Postgres, Celery and Docker. I first tried RabbitMQ and got the same error I'm now getting with Redis; after many attempts I switched to Redis and the error is still identical, so I think the problem is in Celery, not in RabbitMQ/Redis.

Dockerfile:

FROM python:3.8.5-alpine

ENV PYTHONUNBUFFERED 1

RUN apk update \
    # psycopg2 dependencies
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    # Pillow dependencies
    && apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
    # Translation dependencies
    && apk add gettext \
    # CFFI dependencies
    && apk add libffi-dev py-cffi \
    && apk add --no-cache openssl-dev libffi-dev \
    && apk add --no-cache --virtual .pynacl_deps build-base python3-dev libffi-dev

RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY . /app/

docker-compose.yml:

version: '3'

volumes:
  local_postgres_data: {}

services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - local_postgres_data:/var/lib/postgresql/data
    env_file:
      - ./.envs/.postgres

  django: &django
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app/
    ports:
      - "8000:8000"
    depends_on:
      - postgres

  redis:
    image: redis:6.0.8

  celeryworker:
    <<: *django
    image: pyrty_celeryworker
    depends_on:
      - redis
      - postgres
    ports: []
    command: celery -A pyrty worker -l INFO

  celerybeat:
    <<: *django
    image: pyrty_celerybeat
    depends_on:
      - redis
      - postgres
    ports: []
    command: celery -A pyrty beat -l INFO

pyrty/pyrty/celery.py:

from __future__ import absolute_import, unicode_literals

import os

from celery import Celery


os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'pyrty.settings')

app = Celery('pyrty')

app.config_from_object('django.conf:settings', namespace='CELERY')

app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))

pyrty/pyrty/settings.py:

# Celery conf
CELERY_BROKER_URL = 'redis://127.0.0.1:6379/0' #also tried localhost and
CELERY_RESULT_BACKEND = 'redis://127.0.0.1:6379/0' #also tried without the '/0'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'America/Argentina/Buenos_Aires'

pyrty/pyrty/__init__.py:

from __future__ import absolute_import, unicode_literals

from .celery import app as celery_app


__all__ = ('celery_app',)

requirements.txt:

Django==3.1
psycopg2==2.8.3
djangorestframework==3.11.0
celery==4.4.7
redis==3.5.3
Pillow==7.1.2
django-extensions==2.2.9
amqp==2.6.1
billiard==3.6.3
kombu==4.6.11
vine==1.3.0
pytz==2020.1

That's all the configuration. When I run docker-compose up, this is what I get in the terminal (the parts about Celery and Redis):

redis_1         | 1:M 19 Sep 2020 18:09:08.117 # Server initialized
redis_1         | 1:M 19 Sep 2020 18:09:08.117 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1         | 1:M 19 Sep 2020 18:09:08.118 * Loading RDB produced by version 6.0.8
redis_1         | 1:M 19 Sep 2020 18:09:08.118 * RDB age 16 seconds
redis_1         | 1:M 19 Sep 2020 18:09:08.118 * RDB memory usage when created 0.77 Mb
redis_1         | 1:M 19 Sep 2020 18:09:08.118 * DB loaded from disk: 0.000 seconds
redis_1         | 1:M 19 Sep 2020 18:09:08.118 * Ready to accept connections

celeryworker_1  |  
celeryworker_1  |  -------------- celery@f334b468b079 v4.4.7 (cliffs)
celeryworker_1  | --- ***** ----- 
celeryworker_1  | -- ******* ---- Linux-5.4.0-47-generic-x86_64-with 2020-09-19 18:09:16
celeryworker_1  | - *** --- * --- 
celeryworker_1  | - ** ---------- [config]
celeryworker_1  | - ** ---------- .> app:         pyrty:0x7fd280ac7640
celeryworker_1  | - ** ---------- .> transport:   redis://127.0.0.1:6379/0
celeryworker_1  | - ** ---------- .> results:     redis://127.0.0.1:6379/0
celeryworker_1  | - *** --- * --- .> concurrency: 6 (prefork)
celeryworker_1  | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
celeryworker_1  | --- ***** ----- 
celeryworker_1  |  -------------- [queues]
celeryworker_1  |                 .> celery           exchange=celery(direct) key=celery
celeryworker_1  |                 
celeryworker_1  | 
celeryworker_1  | [tasks]
celeryworker_1  |   . pyrty.celery.debug_task
celeryworker_1  | 
celeryworker_1  | [2020-09-19 18:09:16,865: ERROR/MainProcess] consumer: Cannot connect to redis://127.0.0.1:6379/0: Error 111 connecting to 127.0.0.1:6379. Connection refused..
celeryworker_1  | Trying again in 2.00 seconds... (1/100)
celeryworker_1  | 
celeryworker_1  | [2020-09-19 18:09:18,871: ERROR/MainProcess] consumer: Cannot connect to redis://127.0.0.1:6379/0: Error 111 connecting to 127.0.0.1:6379. Connection refused..
celeryworker_1  | Trying again in 4.00 seconds... (2/100)

I really don't understand what I'm missing. I've been reading the docs but can't figure it out. Please help!

Try updating your app settings to use the hostname redis instead of 127.0.0.1. Each Compose service runs in its own container with its own network namespace, so 127.0.0.1 inside the celeryworker container refers to that container itself, not to the redis container; services reach each other by service name on the Compose network.

# Celery conf
CELERY_BROKER_URL = 'redis://redis:6379/0'
CELERY_RESULT_BACKEND = 'redis://redis:6379/0'

Reference:

Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.

https://docs.docker.com/compose/networking/
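If the same settings.py is also used outside Docker (e.g. running manage.py directly on the host), a common pattern is to read the broker URL from an environment variable and fall back to the Compose service name. This is a minimal sketch; the CELERY_BROKER_URL variable name and the helper function are assumptions for illustration, not part of the original project:

```python
import os

# Hypothetical helper: resolve the Celery broker URL from the environment,
# defaulting to the Compose service name "redis" rather than 127.0.0.1.
def broker_url(default="redis://redis:6379/0"):
    return os.environ.get("CELERY_BROKER_URL", default)

# In settings.py one could then write:
# CELERY_BROKER_URL = broker_url()
# CELERY_RESULT_BACKEND = broker_url()
print(broker_url())
```

On the host you could export CELERY_BROKER_URL=redis://localhost:6379/0, while inside Compose the default resolves to the redis service automatically.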