Celery 5.0.1 and using only ForkPoolWorker-31

It seems strange to me that my celery worker only ever logs ForkPoolWorker-31, as if it were handling everything with a single process.

Even running top shows that only one core is busy while the others are mostly idle.

I start celery with:

celery -A my_service.celery_tasks:celery_app worker --loglevel=INFO -n ${CELERY_INSTANCE} -E

[2020-11-07 00:16:32,677: INFO/MainProcess] celery@grid12 ready.
[2020-11-07 00:16:36,416: WARNING/ForkPoolWorker-31] 19889
[2020-11-07 00:16:36,427: WARNING/ForkPoolWorker-31] 19934
[2020-11-07 00:16:36,427: WARNING/ForkPoolWorker-31] 19882
[2020-11-07 00:16:36,432: WARNING/ForkPoolWorker-31] 20282
[2020-11-07 00:16:36,441: WARNING/ForkPoolWorker-31] 20031
[2020-11-07 00:16:36,446: WARNING/ForkPoolWorker-31] 19884
[2020-11-07 00:16:36,452: WARNING/ForkPoolWorker-31] 20124
[2020-11-07 00:16:36,456: WARNING/ForkPoolWorker-31] 20030
[2020-11-07 00:17:53,313: WARNING/ForkPoolWorker-31] 19897
[2020-11-07 00:17:53,446: INFO/ForkPoolWorker-31] POST Some logs... [status:200 request:11.930s]
[2020-11-07 00:17:54,099: INFO/ForkPoolWorker-31] Some logs...
[2020-11-07 00:17:55,771: INFO/ForkPoolWorker-31] POST Some logs... [status:200 request:15.501s]
[2020-11-07 00:17:56,307: INFO/ForkPoolWorker-31] 
 -------------- celery@XXXXX v5.0.1 (singularity)
--- ***** ----- 
-- ******* ---- Linux-4.14.13-1.el7.elrepo.x86_64-x86_64-with-glibc2.10 2020-11-07 00:22:33
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         my_service.celery_tasks:0x7fffed3beaf0
- ** ---------- .> transport:   redis://:**@grid12:6385/0
- ** ---------- .> results:     redis://:**@grid12:6385/0
- *** --- * --- .> concurrency: 48 (prefork)
-- ******* ---- .> task events: ON
--- ***** ----- 
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . myTask

[2020-11-07 00:22:34,002: INFO/MainProcess] Connected to redis://:**@grid12:6385/0
[2020-11-07 00:22:34,041: INFO/MainProcess] mingle: searching for neighbors
[2020-11-07 00:22:35,942: INFO/MainProcess] mingle: sync with 30 nodes
[2020-11-07 00:22:36,164: INFO/MainProcess] mingle: sync complete
[2020-11-07 00:22:37,733: INFO/MainProcess] pidbox: Connected to redis://:**@grid12:6385/0.

The machine has 48 cores, and average CPU usage is below 2%.

There are plenty of pending tasks. Any suggestions?

I recently ran into the same problem and was able to solve it by adding the -O fair flag to the celery worker command.

My full command is as follows:

# "-O fair" is a key component for simultaneous task execution by prefork workers
# celery app is module name in my program with Celery instance inside
# cel_app_worker is the name of Celery worker 
# -P prefork - is not necessary since it is default value, but I decided to keep it
celery -A celery_app worker --loglevel=INFO --concurrency=8 -O fair -P prefork -n cel_app_worker
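
For reference, here is a minimal sketch of what the celery_app module referenced by -A might look like. The Redis URLs, the app variable name and the task body are placeholders for illustration, not actual code from the question:

# celery_app.py - hypothetical minimal module containing the Celery instance
from celery import Celery

app = Celery(
    "celery_app",
    broker="redis://localhost:6379/0",   # placeholder broker URL
    backend="redis://localhost:6379/0",  # placeholder result backend
)

@app.task(name="myTask")
def my_task(n):
    # trivial placeholder work; the real task body goes here
    return n * n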

Please give it a try and let me know if it works for you.

I run my Celery app in Docker; here is the Dockerfile:

FROM python:3.7-alpine

WORKDIR /usr/src/app

RUN apk add --no-cache tzdata

ENV TZ=Europe/Moscow

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Create a group and user
RUN addgroup -S appgroup && adduser -S celery_user -G appgroup

# Tell docker that all subsequent commands should run as the unprivileged celery_user
USER celery_user

# !! "-O fair" is a key component for simultaneous task execution by on worker !!
CMD celery -A celery_app worker --loglevel=INFO --concurrency=8 -O fair -P prefork -n cel_app_worker
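
Once the container is up, a quick way to check that tasks are actually being spread over several pool processes is to enqueue a batch from the host and watch the worker log; the ForkPoolWorker-N names should then vary between entries. This is only a hypothetical smoke test: it assumes the celery_app module sketched above and that the host points at the same Redis broker as the worker:

# smoke_test.py - hypothetical check: enqueue a batch of tasks and wait for the results
from celery_app import app  # same module the worker loads

if __name__ == "__main__":
    results = [app.send_task("myTask", args=(i,)) for i in range(16)]
    print([r.get(timeout=30) for r in results])  # blocks until each task finishes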

I ran into this problem recently. It turned out that a celery task was stuck in a (recursive) infinite, never-terminating for loop. I had to kill the celery worker, fix the infinite 'for' loop, and start the worker again. The problem went away.
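
A related safeguard, independent of fixing the loop itself, is to give tasks a time limit so a runaway task cannot occupy a pool process indefinitely. The sketch below uses Celery's per-task soft_time_limit/time_limit options; the task name, the limits and the loop body are assumptions made for illustration:

# Hypothetical guard: cap task runtime so a stuck loop cannot block a ForkPoolWorker forever
from celery.exceptions import SoftTimeLimitExceeded

from celery_app import app  # the app module sketched earlier

@app.task(soft_time_limit=300, time_limit=360)
def guarded_task(items):
    done = []
    try:
        for item in items:            # if this loop were to run forever...
            done.append(item)
    except SoftTimeLimitExceeded:
        # ...Celery raises this inside the task after 300s instead of hanging
        pass
    return done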