How to pass a background task function into html template from get.post FastAPI?
Actually I have two questions. The first is that a background task in my API takes an image and runs a prediction on it. For some reason I can't store the background task's return value in a variable, and I need to do that for the second part of my question.
API code:
from starlette.responses import RedirectResponse
from fastapi.templating import Jinja2Templates
from fastapi import FastAPI, File, UploadFile, BackgroundTasks
from tensorflow.keras import preprocessing
from fastapi.staticfiles import StaticFiles
from keras.models import load_model
from PIL import Image
import numpy as np
import uvicorn

app = FastAPI()
app.mount("/Templates", StaticFiles(directory="Templates"), name="Templates")
templates = Jinja2Templates(directory="Templates")

# Raw string so the backslashes in the Windows path aren't treated as escapes
model_dir = r'F:\Saved-Models\Dog-Cat-Models\json_function_test_dog_cat_optuna.h5'
model = load_model(model_dir)


def predict_image(image):
    pp_dogcat_image = Image.open(image.file).resize((150, 150), Image.NEAREST).convert("RGB")
    pp_dogcat_image_arr = preprocessing.image.img_to_array(pp_dogcat_image)
    input_arr = np.array([pp_dogcat_image_arr])
    prediction = np.argmax(model.predict(input_arr), axis=-1)
    if str(prediction) == '[1]':
        answer = "It's a Dog"
    else:
        answer = "It's a Cat"
    return answer


@app.get('/')
async def index():
    return RedirectResponse(url="/Templates/index.html")


# Background tasks are so that we can return a response regardless of how long it takes to process the image data
@app.post('/prediction_page')
async def prediction_form(background_tasks: BackgroundTasks, dogcat_img: UploadFile = File(...)):
    answer = background_tasks.add_task(predict_image, image=dogcat_img)
    return answer


if __name__ == '__main__':
    uvicorn.run(app, host='localhost', port=8000)
The second question is that I'm trying to pass the result back to my HTML file as a Jinja tag. If I could store my background task's result, I would like to return it to the actual HTML template. I've searched far and wide and found no useful information on doing this with FastAPI's get/post.
HTML code:
<div class="prediction_box"><p>Select an image of a dog or a cat and the AI will print out a prediction of what he
thinks
it is.....</p><br>
<!--enctype="multipart/form-data" enables UploadFile data to pass through-->
<form action="/prediction_page" enctype="multipart/form-data" method="post">
<label for="image-upload" class="custom-file-upload">Select Image:</label>
<input type="file" id="image-upload" name="dogcat_img"><br>
<input class="custom-submit-button" type="submit">
</form>
<p>{{answer}}</p>
</div>
I agree with @MatsLindh's comment that you probably need a task queue system like Celery to schedule your image prediction tasks. This will also help with separation of concerns, so the application serving HTTP won't have to deal with ML tasks.
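As a minimal stdlib-only sketch of the task-queue idea (Celery would replace the thread and dict below with broker-backed worker processes and a result backend; `predict_image` here is a hypothetical stand-in for the real inference):

```python
import queue
import threading
import uuid

jobs = queue.Queue()
results = {}


def predict_image(image_name):
    # stand-in for the real, slow model inference
    return "It's a Dog"


def worker():
    # a Celery worker process would play this role
    while True:
        job_id, image = jobs.get()
        if job_id is None:
            break
        results[job_id] = predict_image(image)
        jobs.task_done()


threading.Thread(target=worker, daemon=True).start()

# What the POST handler would do: enqueue a job and return immediately
job_id = str(uuid.uuid4())
jobs.put((job_id, "dog.jpg"))
jobs.join()  # a /result page would poll for the result instead of blocking
print(results[job_id])  # It's a Dog
```

The point is the separation: the HTTP handler only hands off a job id, and something else does the CPU-heavy work and stores the result for later retrieval.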
To be clear, a background task runs after the response has first been returned:
...
from fastapi import BackgroundTasks
...

def predict_image(image):
    ...

@app.post('/prediction_page')
async def prediction_form(background_tasks: BackgroundTasks, dogcat_img: UploadFile = File(...)):
    background_tasks.add_task(predict_image, image=dogcat_img)
    return {"message": "Processing image in the background"}
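This is also why the `answer = background_tasks.add_task(...)` line in the question can't work: `add_task` only schedules the call and returns `None`. A tiny pure-Python mock (no FastAPI; the class below is a simplified illustration of the ordering, not Starlette's actual implementation) makes that explicit:

```python
class BackgroundTasksSketch:
    """Simplified stand-in for FastAPI/Starlette BackgroundTasks."""

    def __init__(self):
        self.tasks = []

    def add_task(self, func, *args, **kwargs):
        # Only records the call for later; nothing is returned
        self.tasks.append((func, args, kwargs))

    def run_all(self):
        # In FastAPI/Starlette this happens *after* the response is sent
        return [func(*args, **kwargs) for func, args, kwargs in self.tasks]


def predict_image(image):
    return "It's a Dog"


tasks = BackgroundTasksSketch()
answer = tasks.add_task(predict_image, image="photo.jpg")
print(answer)           # None — the result is not available in the handler
print(tasks.run_all())  # ["It's a Dog"] — only after the response went out
```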
But in your case you want to return the prediction result to the user right away, and since your task is CPU-bound it should ideally run in another process, so that it doesn't block other requests.
There are two approaches you can take here:
1. Implement the PRG (Post/Redirect/Get) design pattern, so you have an index page / with the HTML form, the image is sent to the /prediction_page endpoint, and the request is redirected to a /result page that displays the result. You would probably have to store the results in a database, or find some other way to pass them along and display them on the /result page.
2. Build a JavaScript SPA (Single Page Application) that uses JS to make HTTP requests to the backend.
That being said, here is a simple example that uses a ProcessPoolExecutor to make the prediction and the JS Fetch API to get and display the result:
main.py:
import asyncio
import random
import uvicorn
from concurrent.futures import ProcessPoolExecutor
from fastapi import FastAPI, File, Request, UploadFile
from fastapi.concurrency import run_in_threadpool
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory=".")
executor = None


@app.on_event("startup")
async def startup():
    global executor
    executor = ProcessPoolExecutor()


@app.on_event("shutdown")
async def shutdown():
    global executor
    executor.shutdown()


def predict_image(image):
    return random.choice(("It's a Dog", "It's a Cat"))


@app.get("/", response_class=HTMLResponse)
def index(request: Request):
    return templates.TemplateResponse("index.html", {"request": request})


@app.post("/prediction_page")
async def prediction_form(dogcat_img: UploadFile = File(...)):
    loop = asyncio.get_running_loop()
    # For images larger than 1MB the contents are written to disk and
    # a true file-like object is returned that can't be pickled;
    # use run_in_threadpool instead in that case
    result = await loop.run_in_executor(executor, predict_image, dogcat_img)
    # result = await run_in_threadpool(predict_image, dogcat_img)
    return {"prediction": result}


if __name__ == "__main__":
    uvicorn.run("main:app", host="localhost", port=8000, reload=True)
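The key offloading call in main.py can be exercised on its own. The snippet below passes `None` so asyncio uses its default thread pool (which lets it run anywhere without pickling concerns), whereas the app above passes the `ProcessPoolExecutor`, since threads don't help with truly CPU-bound work; `cpu_bound_predict` is a hypothetical stand-in for the real inference:

```python
import asyncio


def cpu_bound_predict(values):
    # stand-in for the real, slow model.predict(...) call
    return max(values)


async def handle_request():
    loop = asyncio.get_running_loop()
    # None selects the default ThreadPoolExecutor; pass a
    # ProcessPoolExecutor here for CPU-bound work, as main.py does
    return await loop.run_in_executor(None, cpu_bound_predict, [0.2, 0.9, 0.4])


result = asyncio.run(handle_request())
print(result)  # 0.9
```

Either way, `await loop.run_in_executor(...)` suspends only this request's coroutine while the work runs elsewhere, so the event loop keeps serving other requests.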
index.html:
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Test</title>
</head>
<body>
<div class="prediction_box">
<p>Select an image of a dog or a cat and the AI will print out a prediction of what he thinks it is.....
</p>
<br>
<!--enctype="multipart/form-data" enables UploadFile data to pass through-->
<form id="uploadform" action="/prediction_page" enctype="multipart/form-data" method="post">
<label for="image-upload" class="custom-file-upload">Select Image:</label>
<input type="file" id="image-upload" name="dogcat_img"><br>
<input class="custom-submit-button" type="submit">
</form>
<p id="result"></p>
</div>
<script>
const p = document.getElementById("result");
uploadform.onsubmit = async (e) => {
e.preventDefault();
p.innerHTML = "Processing image...";
let res = await fetch("/prediction_page", {
method: "POST",
body: new FormData(uploadform),
});
if (res.ok) {
let result = await res.json();
p.innerHTML = result["prediction"];
} else {
p.innerHTML = `Response error: ${res.status}`;
};
};
</script>
</body>
</html>