Unable to start FastAPI server with PostgreSQL using docker-compose
I am building a FastAPI server with simple CRUD functionality, using PostgreSQL as the database. Everything works fine in my local environment, but when I try to run it in containers with docker-compose up, it fails with this error:
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1 | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
rest_api_1 | Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1 | TCP/IP connections on port 5432?
rest_api_1 |
rest_api_1 | (Background on this error at: https://sqlalche.me/e/14/e3q8)
networks_lab2_rest_api_1 exited with code 1
Directory structure:
├── Dockerfile
├── README.md
├── __init__.py
├── app
│ ├── __init__.py
│ ├── __pycache__
│ ├── crud.py
│ ├── database.py
│ ├── main.py
│ ├── models.py
│ ├── object_store
│ └── schemas.py
├── docker-compose.yaml
├── requirements.txt
├── tests
│ ├── __init__.py
│ ├── __pycache__
│ ├── assets
│ ├── test_create.py
│ ├── test_delete.py
│ ├── test_file.py
│ ├── test_get.py
│ ├── test_heartbeat.py
│ └── test_put.py
└── venv
├── bin
├── include
├── lib
└── pyvenv.cfg
My docker-compose.yaml
version: "3"
services:
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      POSTGRES_USER: ${DATABASE_TYPE}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
      POSTGRES_DB: ${DATABASE_NAME}
    ports:
      - "5432:5432"
  rest_api:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0
    env_file:
      - ./.env
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
volumes:
  postgres_data:
My Dockerfile for the FastAPI server (under ./app)
FROM python:3.8-slim-buster
RUN apt-get update \
&& apt-get -y install libpq-dev gcc \
&& pip install psycopg2
WORKDIR /app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
My database.py
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from dotenv import load_dotenv
import os
# def create_connection_string():
# load_dotenv()
# db_type = os.getenv("DATABASE_TYPE")
# username = os.getenv("DATABASE_USERNAME")
# password = os.getenv("DATABASE_PASSWORD")
# host = os.getenv("DATABASE_HOST")
# port = os.getenv("DATABASE_PORT")
# name = os.getenv("DATABASE_NAME")
#
# return "{0}://{1}:{2}@{3}/{4}".format(db_type, username, password, host, name)
SQLALCHEMY_DATABASE_URI = "postgresql://postgres:postgres@db:5432/postgres"
engine = create_engine(
SQLALCHEMY_DATABASE_URI
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
My main.py
from typing import List, Optional
import os, base64, shutil
from functools import wraps
from fastapi import Depends, FastAPI, HTTPException, UploadFile, File, Request, Header
from fastapi.responses import FileResponse
from sqlalchemy.orm import Session
from . import crud, models, schemas
from .database import SessionLocal, engine
models.Base.metadata.create_all(bind=engine)
app = FastAPI()
SECRET_KEY = os.getenv("SECRET")
# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

def check_request_header(x_token: str = Header(...)):
    if x_token != SECRET_KEY:
        raise HTTPException(status_code=401, detail="Unauthorized")

# endpoints
@app.get("/heartbeat", dependencies=[Depends(check_request_header)], status_code=200)
def heartbeat():
    return "The connection is up"
A more complete log:
Creating db_1 ... done
Creating rest_api_1 ... done
Attaching to db_1, rest_api_1
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
db_1 |
...
db_1 | selecting dynamic shared memory implementation ... posix
db_1 | selecting default max_connections ... 100
db_1 | selecting default shared_buffers ... 128MB
db_1 | selecting default time zone ... UTC
db_1 | creating configuration files ... ok
db_1 | running bootstrap script ... ok
db_1 | performing post-bootstrap initialization ... sh: locale: not found
db_1 | 2021-09-29 18:13:35.027 UTC [31] WARNING: no usable system locales were found
rest_api_1 | Traceback (most recent call last):
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 3240, in _wrap_pool_connect
rest_api_1 | return fn()
...
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 584, in connect
rest_api_1 | return self.dbapi.connect(*cargs, **cparams)
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1 | psycopg2.OperationalError: could not connect to server: Connection refused
rest_api_1 | Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1 | TCP/IP connections on port 5432?
rest_api_1 |
rest_api_1 |
rest_api_1 | The above exception was the direct cause of the following exception:
rest_api_1 |
rest_api_1 | Traceback (most recent call last):
rest_api_1 | File "/usr/local/bin/uvicorn", line 8, in <module>
rest_api_1 | sys.exit(main())
...
est_api_1 | File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 584, in connect
rest_api_1 | return self.dbapi.connect(*cargs, **cparams)
rest_api_1 | File "/usr/local/lib/python3.8/site-packages/psycopg2/__init__.py", line 122, in connect
rest_api_1 | conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
rest_api_1 | sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
rest_api_1 | Is the server running on host "db" (172.29.0.2) and accepting
rest_api_1 | TCP/IP connections on port 5432?
rest_api_1 |
rest_api_1 | (Background on this error at: https://sqlalche.me/e/14/e3q8)
rest_api_1 exited with code 1
db_1 | ok
db_1 | syncing data to disk ... ok
db_1 |
db_1 |
db_1 | Success. You can now start the database server using:
...
db_1 | 2021-09-29 18:13:36.325 UTC [1] LOG: starting PostgreSQL 13.4 on x86_64-pc-linux-musl, compiled by gcc (Alpine 10.3.1_git20210424) 10.3.1 20210424, 64-bit
db_1 | 2021-09-29 18:13:36.325 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2021-09-29 18:13:36.325 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2021-09-29 18:13:36.328 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2021-09-29 18:13:36.332 UTC [48] LOG: database system was shut down at 2021-09-29 18:13:36 UTC
db_1 | 2021-09-29 18:13:36.336 UTC [1] LOG: database system is ready to accept connections
I have searched extensively and read docs/tutorials about running a FastAPI server with PostgreSQL under docker-compose, for example:
https://testdriven.io/blog/fastapi-docker-traefik/
https://github.com/AmishaChordia/FastAPI-PostgreSQL-Docker/blob/master/FastAPI/docker-compose.yml
https://www.jeffastor.com/blog/pairing-a-postgresql-db-with-your-dockerized-fastapi-app
Their approach is the same as mine, but it keeps failing with the same Connection refused Is the server running on host "db" (172.29.0.2) and accepting TCP/IP connections on port 5432? error message...
Can someone help me? Any help would be greatly appreciated!
First, the SQLALCHEMY_DATABASE_URI in database.py should match the user, password, and database name provided in your docker-compose.yaml. Make sure you run docker-compose up with the correct environment; in your case it should be:
DATABASE_TYPE=postgres
DATABASE_PASSWORD=postgres
DATABASE_NAME=postgres
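To keep the two from drifting apart, you could also build the URI from those same variables instead of hard-coding it, much like the helper you already have commented out. A minimal sketch (untested; it assumes the variables above are available in the container environment via env_file):
import os
from dotenv import load_dotenv

load_dotenv()

# Read the same variables docker-compose passes to the postgres container,
# so the URI always matches what the db service was initialised with.
SQLALCHEMY_DATABASE_URI = "postgresql://{user}:{pwd}@db:5432/{name}".format(
    user=os.getenv("DATABASE_TYPE", "postgres"),
    pwd=os.getenv("DATABASE_PASSWORD", "postgres"),
    name=os.getenv("DATABASE_NAME", "postgres"),
)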
But I think your real problem is elsewhere. Even though you declare the API service with depends_on: - db, the postgres server may not be ready yet. depends_on only guarantees that the db container is started before the service that references it; it guarantees nothing more. The postgres server inside the running container needs some time to initialise and start accepting connections, and if your API tries to connect before that, it will fail.
The common and simplest solution is to add a bit of code that repeatedly checks whether the database is up and running before the real connection is made. Since you did not post the whole traceback (you actually replaced the most important part with ...), I can only guess which line of your code triggers the connection attempt. I would suggest modifying your database.py to something like this (not tested, may need some tweaking):
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from dotenv import load_dotenv
import os
import time
def wait_for_db(db_uri):
    """checks if database connection is established"""
    _local_engine = create_engine(db_uri)
    _LocalSessionLocal = sessionmaker(
        autocommit=False, autoflush=False, bind=_local_engine
    )
    up = False
    while not up:
        try:
            # Try to create session to check if DB is awake
            db_session = _LocalSessionLocal()
            # try some basic query
            db_session.execute("SELECT 1")
            db_session.commit()
        except Exception as err:
            print(f"Connection error: {err}")
            up = False
        else:
            up = True
        time.sleep(2)
SQLALCHEMY_DATABASE_URI = "postgresql://postgres:postgres@db:5432/postgres"
wait_for_db(SQLALCHEMY_DATABASE_URI)
engine = create_engine(
SQLALCHEMY_DATABASE_URI
)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
A more elaborate solution is to use docker-compose healthchecks. Gating startup on a healthcheck (depends_on with a condition) is only supported in the v2 file format and the newer Compose Specification; for the plain v3 format the docs recommend handling the wait yourself, much like the solution above.
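For reference, a minimal sketch of that approach, assuming your Compose version supports the condition form of depends_on (values are illustrative):
services:
  db:
    image: postgres:13-alpine
    healthcheck:
      # pg_isready ships with the postgres image and exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 5s
      timeout: 5s
      retries: 5
  rest_api:
    build: .
    depends_on:
      db:
        condition: service_healthy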
To improve on this solution, wrap wait_for_db in a small Python command-line script and run it as a pre-start step in some kind of image entrypoint. You will need a pre-start stage in the entrypoint for running migrations anyway (you do have migrations in your project, right?).
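A rough sketch of that idea (the file name and wiring are hypothetical, and it assumes you remove the module-level wait_for_db(...) call from database.py):
# app/prestart.py -- run once before the server starts
from app.database import SQLALCHEMY_DATABASE_URI, wait_for_db

if __name__ == "__main__":
    # Block until the database accepts connections, then exit
    wait_for_db(SQLALCHEMY_DATABASE_URI)
The compose command for the rest_api service then becomes something like sh -c "python -m app.prestart && uvicorn app.main:app --host 0.0.0.0".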
You can also handle the retries with Docker's restart mechanism. With a maximum number of attempts and a suitable delay, the database will most likely be ready by the second attempt, while still preventing endless restarts:
rest_api:
  ...
  deploy:
    restart_policy:
      condition: on-failure
      delay: 5s # default
      max_attempts: 5
  ...
Note that I am not a Docker expert, but this seems more in line with the "containers as cattle, not pets" paradigm. Why add complexity to the application when the problem can be handled by existing functionality in a higher layer?