Celery and RabbitMQ eventually stopping due to memory exhaustion
I have a Celery-based task queue with RabbitMQ as the broker. I process about 100 messages per day. I have no backend set up.
I start the task master like this:
import os
from celery import Celery

broker = os.environ.get('AMQP_HOST', None)
app = Celery(broker=broker)
server = QueueServer((default_http_host, default_http_port), app)
...and then I start the worker like this:
import os
from celery import Celery

broker = os.environ.get('AMQP_HOST', None)
app = Celery('worker', broker=broker)
app.conf.update(
    CELERYD_CONCURRENCY = 1,
    CELERYD_PREFETCH_MULTIPLIER = 1,
    CELERY_ACKS_LATE = True,
)
The server runs fine for a while, but after about two weeks it suddenly stops. I have traced the stoppage to RabbitMQ no longer accepting messages because it has run out of memory:
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: vm_memory_high_watermark set. Memory used:252239992 allowed:249239961
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: =WARNING REPORT==== 25-Feb-2016::02:01:39 ===
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: memory resource limit alarm set on node rabbit@e654ac167b10.
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: **********************************************************
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: *** Publishers will be blocked until this alarm clears ***
Feb 25 02:01:39 render-mq-1 docker/e654ac167b10[2189]: **********************************************************
The problem is that I cannot figure out what I need to configure differently to prevent this exhaustion. Clearly something is not being cleaned up, but I don't understand what.
For example, after about 8 days of running, rabbitmqctl status reports this:
{memory,[{total,138588744},
{connection_readers,1081984},
{connection_writers,353792},
{connection_channels,1103992},
{connection_other,2249320},
{queue_procs,428528},
{queue_slave_procs,0},
{plugins,0},
{other_proc,13555000},
{mnesia,74832},
{mgmt_db,0},
{msg_index,43243768},
{other_ets,7874864},
{binary,42401472},
{code,16699615},
{atom,654217},
{other_system,8867360}]},
...whereas shortly after startup it is much lower:
{memory,[{total,51076896},
{connection_readers,205816},
{connection_writers,86624},
{connection_channels,314512},
{connection_other,371808},
{queue_procs,318032},
{queue_slave_procs,0},
{plugins,0},
{other_proc,14315600},
{mnesia,74832},
{mgmt_db,0},
{msg_index,2115976},
{other_ets,1057008},
{binary,6284328},
{code,16699615},
{atom,654217},
{other_system,8578528}]},
...even though all of the queues are empty (apart from the one job currently being processed):
root@dba9f095a160:/# rabbitmqctl list_queues -q name memory messages messages_ready messages_unacknowledged
celery 61152 1 0 1
celery@render-worker-lg3pi.celery.pidbox 117632 0 0 0
celery@render-worker-lkec7.celery.pidbox 70448 0 0 0
celeryev.17c02213-ecb2-4419-8e5a-f5ff682ea4b4 76240 0 0 0
celeryev.5f59e936-44d7-4098-aa72-45555f846f83 27088 0 0 0
celeryev.d63dbc9e-c769-4a75-a533-a06bc4fe08d7 50184 0 0 0
I am at a loss as to how to track down what is consuming the memory. Any help would be greatly appreciated.
You don't seem to be generating a high volume of messages, so this level of memory consumption seems surprisingly high. Still, you can try letting rabbitmq discard old messages - in your celery configuration set
CELERY_DEFAULT_DELIVERY_MODE = 'transient'
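For example, it could be added to the worker configuration shown above - a minimal sketch, assuming the rest of the setup stays the same (CELERY_DEFAULT_DELIVERY_MODE is the pre-4.0 setting name; 'transient' makes task messages non-persistent rather than deleting anything already queued):

import os
from celery import Celery

broker = os.environ.get('AMQP_HOST', None)
app = Celery('worker', broker=broker)
app.conf.update(
    CELERYD_CONCURRENCY = 1,
    CELERYD_PREFETCH_MULTIPLIER = 1,
    CELERY_ACKS_LATE = True,
    # mark task messages as non-persistent so they are not written to RabbitMQ's message store
    CELERY_DEFAULT_DELIVERY_MODE = 'transient',
)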
The log says you are using 252239992 bytes, which is about 250 MB - that is not particularly high.
How much memory does this machine have, and what is rabbitmq's vm_memory_high_watermark value? (You can check it by running rabbitmqctl eval "vm_memory_monitor:get_vm_memory_high_watermark().")
Perhaps you should increase the watermark.
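For example, to raise the limit to 60% of system RAM at runtime (0.6 is purely an illustrative value):

rabbitmqctl set_vm_memory_high_watermark 0.6

The runtime change does not survive a broker restart; the equivalent permanent entry in rabbitmq.config would be:

[{rabbit, [{vm_memory_high_watermark, 0.6}]}].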
Another option is to make all of your queues lazy:
https://www.rabbitmq.com/lazy-queues.html
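Lazy queues require RabbitMQ 3.6.0 or newer. One way to try this without changing any Celery code is a policy that matches the celery queues - a sketch only, with an illustrative policy name and pattern:

rabbitmqctl set_policy lazy-celery "^celery" '{"queue-mode":"lazy"}' --apply-to queues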