PyTorch model + HTTP API = very slow execution

I have:

- a PyTorch model wrapped in a predictor; a single prediction takes only a few milliseconds (measured below), and
- a FastAPI app that exposes the predictor over HTTP.

But when I put them together, the median response time jumps to almost 200 ms.

What causes this degradation?


Note:


Here's how I measure the model's performance (I measured the median time separately; it is almost identical to the mean):

def predict_all(predictor, data):
    # Run the predictor over every record, one at a time
    for i in range(len(data)):
        predictor(data[i])

data = load_random_data()
predictor = load_predictor()
%timeit predict_all(predictor, data)  # IPython magic: reports the time per call of predict_all
# manually divide total time by number of records in data to get per-prediction time

Here's the FastAPI version:

from fastapi import FastAPI
from starlette.requests import Request
from my_code import load_predictor

app = FastAPI()

app.predictor = load_predictor()  # load the model once, at import time, and share it across requests


@app.post("/")
async def root(request: Request):
    predictor = request.app.predictor
    data = await request.json()
    return predictor(data)
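
The app is presumably served with uvicorn (its access-log lines show up below); assuming the module is named main (my assumption, not stated above), the launch command would be something like:

uvicorn main:app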

HTTP load test:

wrk2 -t2 -c50 -d30s -R100 --latency -s post.lua http://localhost:8000/
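
To separate HTTP and serialization overhead from the model itself, it also helps to time single, sequential requests. A minimal sketch, assuming the service listens on localhost:8000, a hypothetical payload shape, and the third-party requests library:

import time

import requests  # third-party HTTP client

payload = {"user_id": "12345678-1234-1234-1234-123456789123"}  # hypothetical payload shape

latencies = []
for _ in range(100):
    start = time.perf_counter()
    requests.post("http://localhost:8000/", json=payload)
    latencies.append((time.perf_counter() - start) * 1000)

latencies.sort()
print(f"median: {latencies[len(latencies) // 2]:.2f} ms")  # sequential requests, so pure per-request latency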

EDIT:

Here's a slightly modified version that I tried both with and without async:

import logging
import time

@app.post("/")
# async def root(request: Request, user_dict: dict):
def root(request: Request, user_dict: dict):
    predictor = request.app.predictor
    start_time = time.time()
    y = predictor(user_dict)
    finish_time = time.time()
    logging.info(f"user {user_dict['user_id']}: "
                 f"prediction made in {(finish_time - start_time) * 1000:.2f}ms")
    return y

So all I added is logging of the prediction time.

Logs from the async version:

2021-02-03 11:14:31,822: user 12345678-1234-1234-1234-123456789123: prediction made in 2.87ms
INFO:     127.0.0.1:49284 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,329: user 12345678-1234-1234-1234-123456789123: prediction made in 3.93ms
INFO:     127.0.0.1:49286 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,345: user 12345678-1234-1234-1234-123456789123: prediction made in 15.06ms
INFO:     127.0.0.1:49287 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,351: user 12345678-1234-1234-1234-123456789123: prediction made in 4.78ms
INFO:     127.0.0.1:49288 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,358: user 12345678-1234-1234-1234-123456789123: prediction made in 6.85ms
INFO:     127.0.0.1:49289 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,363: user 12345678-1234-1234-1234-123456789123: prediction made in 3.71ms
INFO:     127.0.0.1:49290 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,369: user 12345678-1234-1234-1234-123456789123: prediction made in 5.49ms
INFO:     127.0.0.1:49291 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:14:56,374: user 12345678-1234-1234-1234-123456789123: prediction made in 5.00ms

So the prediction itself is fast, under 10 ms on average, yet the whole request takes about 200 ms.

Logs from the sync version:

2021-02-03 11:17:58,332: user 12345678-1234-1234-1234-123456789123: prediction made in 65.49ms
2021-02-03 11:17:58,334: user 12345678-1234-1234-1234-123456789123: prediction made in 23.05ms
INFO:     127.0.0.1:49481 - "POST / HTTP/1.1" 200 OK
INFO:     127.0.0.1:49482 - "POST / HTTP/1.1" 200 OK
2021-02-03 11:17:58,338: user 12345678-1234-1234-1234-123456789123: prediction made in 72.39ms
2021-02-03 11:17:58,341: user 12345678-1234-1234-1234-123456789123: prediction made in 78.66ms
2021-02-03 11:17:58,341: user 12345678-1234-1234-1234-123456789123: prediction made in 85.74ms

Now the predictions take much longer! For whatever reason, the exact same call, made in a sync context, starts taking roughly 30 times as long. Yet the request as a whole still takes about the same 160-200 ms.

For an endpoint that performs highly CPU-intensive computation, and which will likely take longer than other endpoints, use a non-coroutine handler.

When you use def instead of async def, FastAPI will by default wrap the handler with run_in_threadpool from Starlette, which in turn uses loop.run_in_executor underneath.
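
Roughly, that wrapping boils down to something like this (a simplified sketch, not Starlette's actual implementation):

import asyncio
import functools

async def run_in_threadpool(func, *args, **kwargs):
    loop = asyncio.get_running_loop()
    # None selects the event loop's default ThreadPoolExecutor
    return await loop.run_in_executor(None, functools.partial(func, *args, **kwargs))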

run_in_executor executes the function in the event loop's default executor, i.e. in a separate thread. If you are doing CPU-intensive work, you may also want to look at options like ProcessPoolExecutor and ThreadPoolExecutor.
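
For example, here is a sketch (hypothetical, not from the question) of offloading the prediction to a process pool; it assumes the predictor can be loaded inside each worker process, since PyTorch models don't always pickle cleanly across processes:

import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import FastAPI
from my_code import load_predictor

app = FastAPI()
pool = ProcessPoolExecutor(max_workers=2)

_predictor = None  # one lazily-loaded model per worker process

def _predict(data):
    global _predictor
    if _predictor is None:
        _predictor = load_predictor()
    return _predictor(data)

@app.post("/")
async def root(data: dict):
    loop = asyncio.get_running_loop()
    # Runs _predict in a separate process, keeping the event loop free
    return await loop.run_in_executor(pool, _predict, data)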

This simple rule of thumb can help when deciding between the two:

function
   if function_takes ≥ 500ms
       use `def`
   else
       use `async def`

Making your handler a non-coroutine should help. Note that a plain def handler cannot use await, so declare the body as a parameter and let FastAPI parse the JSON for you:

@app.post("/")
def root(data: dict, request: Request):
    predictor = request.app.predictor
    return predictor(data)  # FastAPI has already parsed the JSON body into `data`