Why does the Celery worker leave a task in the PENDING state for so long?
I have a Celery worker running tasks.py, as follows:
from celery import Celery
from kombu import Queue

app = Celery('tasks', backend='redis://', broker='pyamqp://guest:guest@localhost/')
app.conf.task_default_queue = 'default'
app.conf.task_queues = (
    Queue('queue1', routing_key='tasks.add'),
    Queue('queueA', routing_key='tasks.task_1'),
    Queue('queueB', routing_key='tasks.task_2'),
    Queue('queueC', routing_key='tasks.task_3'),
    Queue('queueD', routing_key='tasks.task_4')
)

@app.task
def add(x, y):
    print("add(" + str(x) + "+" + str(y) + ")")
    return x + y
And a tasks_canvas.py that creates a chain of tasks, as follows:
from celery import chain
from tasks import add

result = chain(add.s(2, 2), add.s(4), add.s(8)).apply_async(queue='queue1')
print(result.status)
print(result.get())
However, when running tasks_canvas.py, result.status always stays PENDING and the worker never runs the whole chain. This is the output of running tasks_canvas.py:

C:\Users\user_\Desktop\Aida>tasks_canvas.py
PENDING
And this is the worker's output:
C:\Users\user_\Desktop\Aida>celery -A tasks worker -l info -P eventlet
-------------- celery@User-RazerBlade v4.2.0 (windowlicker)
---- **** -----
--- * *** * -- Windows-10-10.0.17134-SP0 2018-07-16 12:04:20
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: tasks:0x41d5390
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: redis://
- *** --- * --- .> concurrency: 4 (eventlet)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> queue1 exchange=(direct) key=tasks.add
.> queueA exchange=(direct) key=tasks.task_1
.> queueB exchange=(direct) key=tasks.task_2
.> queueC exchange=(direct) key=tasks.task_3
.> queueD exchange=(direct) key=tasks.task_4
[tasks]
. tasks.add
. tasks.task_1
. tasks.task_2
. tasks.task_3
. tasks.task_4
. tasks.task_5
[2018-07-16 12:04:20,334: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2018-07-16 12:04:20,351: INFO/MainProcess] mingle: searching for neighbors
[2018-07-16 12:04:21,394: INFO/MainProcess] mingle: all alone
[2018-07-16 12:04:21,443: INFO/MainProcess] celery@User-RazerBlade ready.
[2018-07-16 12:04:21,448: INFO/MainProcess] pidbox: Connected to amqp://guest:**@127.0.0.1:5672//.
[2018-07-16 12:04:23,101: INFO/MainProcess] Received task: tasks.add[e6306b5b-211f-4015-b57e-05e2d0ac2df2]
[2018-07-16 12:04:23,102: WARNING/MainProcess] add(2+2)
[2018-07-16 12:04:23,128: INFO/MainProcess] Task tasks.add[e6306b5b-211f-4015-b57e-05e2d0ac2df2] succeeded in 0.031000000000858563s: 4
I'm new to Celery, so I'd like to know why this happens, and how I can get the worker to run the whole chain.
I solved the problem. There is a guide here that explains why a task always stays in the PENDING state; however, it doesn't cover every case. In my case it was a task routing issue: the chain ran to completion once I ran all of its tasks on the same (default) queue.