I have deployed a FastAPI application on DigitalOcean droplets and Azure App Service, with MySQL databases hosted on DigitalOcean and Azure respectively. Despite several attempts to mitigate the issue, I am consistently facing database connection problems, resulting in errors like:
```
2024-06-06T07:27:13.243707191Z: [ERROR] File "/usr/local/lib/python3.11/site-packages/pymysql/connections.py", line 759, in _write_bytes
2024-06-06T07:27:13.243710391Z: [ERROR] raise err.OperationalError(
2024-06-06T07:27:13.243713391Z: [ERROR] sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2006, "MySQL server has gone away (ConnectionResetError(104, 'Connection reset by peer'))")

2024-05-21T12:03:39.909286838Z: [ERROR] File "/usr/local/lib/python3.11/site-packages/pymysql/connections.py", line 692, in _read_packet
2024-05-21T12:03:39.909290038Z: [ERROR] packet_header = self._read_bytes(4)
2024-05-21T12:03:39.909293038Z: [ERROR] ^^^^^^^^^^^^^^^^^^^
2024-05-21T12:03:39.909296338Z: [ERROR] File "/usr/local/lib/python3.11/site-packages/pymysql/connections.py", line 738, in _read_bytes
2024-05-21T12:03:39.909299638Z: [ERROR] raise err.OperationalError(
2024-05-21T12:03:39.909303838Z: [ERROR] pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query ([Errno 110] Connection timed out)')
```
The issue occurs intermittently, causing disruptions to my application. Here are the steps I’ve taken to address it:
- Added middleware to handle database connections.
- Configured the Azure networking settings to "Open to all hosts (*)".
- Set `pool_pre_ping=True` in the `create_engine()` call.
- Appended `?connect_timeout=10&read_timeout=30&write_timeout=30` to the database URL.
- Implemented `db_session.close()` in a `finally` block to ensure connections are properly closed.
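As an aside, the same URL (including the timeout query parameters from the step above) can also be built with SQLAlchemy's `URL.create()`, which handles escaping of special characters in the password. This is just a sketch; the credential values below are placeholders:

```python
from sqlalchemy.engine import URL

# Placeholder credentials for illustration; in the real app these would
# come from configuration.
url = URL.create(
    "mysql+pymysql",
    username="app_user",
    password="s3cret",
    host="db-host.example.com",
    port=3306,
    database="app_db",
    # Driver-level timeouts, equivalent to appending
    # ?connect_timeout=10&read_timeout=30&write_timeout=30 to the URL.
    query={"connect_timeout": "10", "read_timeout": "30", "write_timeout": "30"},
)
print(url.render_as_string(hide_password=False))
```

The resulting URL string can be passed to `create_engine()` exactly like the hand-built f-string version.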
Despite these efforts, the problem persists. I previously encountered similar issues with a Flask application on DigitalOcean, which were mitigated by explicitly closing database connections at the end of each API route. However, this approach is not feasible for my current FastAPI setup.
Details:
- FastAPI version: 0.90.0
- SQLAlchemy version: 2.0.2
- Python version: 3.11
Any insights or suggestions on how to resolve these MySQL connection issues in FastAPI deployments on DigitalOcean and Azure would be greatly appreciated. Thank you!
Code Structure:
main.py

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from routers import user  # explicit import instead of `from routers import *`

app = FastAPI()

origins = ["*"]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

app.include_router(user.router)
```
app/database.py

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, declarative_base

# user, pwrd, host, port, and name are loaded from configuration elsewhere.
SQLALCHEMY_DATABASE_URL = (
    f"mysql+pymysql://{user}:{pwrd}@{host}:{port}/{name}"
    "?connect_timeout=10&read_timeout=30&write_timeout=30"
)

engine = create_engine(SQLALCHEMY_DATABASE_URL, pool_pre_ping=True)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()


def get_db():
    # Create the session outside the try block so the finally clause never
    # references an unbound name if SessionLocal() itself raises.
    db_session = SessionLocal()
    try:
        yield db_session
    finally:
        db_session.close()
```
db_models/SQL_Models.py (contains all SQLAlchemy classes defining tables)

```python
import sqlalchemy as db

from app.database import Base, engine


class Table_1(Base):
    __tablename__ = "Table_1"

    row_id = db.Column(db.Integer, primary_key=True, index=True, autoincrement=True)
    ...


Base.metadata.create_all(engine)
```
routers/routes.py

```python
from typing import Union

from fastapi import APIRouter, Depends, Header
from sqlalchemy.orm import Session

from app.database import get_db
from db_models.SQL_Models import Students

router = APIRouter()


@router.get("/{student_id}")
async def get_student_details(
    student_id: int,
    db_session: Session = Depends(get_db),  # Here I am handling the database connection
    token: Union[str, None] = Header(default=None),
):
    # Parentheses keep the chained query valid across multiple lines.
    students = (
        db_session
        .query(Students)
        .filter(Students.student_id == student_id)
        .first()
    )
    return students
```
Note that I am using MySQL RDBMS.
Answers:
The error you are getting is a timeout error: you have set the read and write timeouts to 30 seconds, and the SQL statement may simply be taking longer than that. You could also force SQLAlchemy to discard and reconnect pooled connections after an hour with the `pool_recycle` argument to `create_engine()`:
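A minimal sketch of that configuration (the `sqlite://` URL is used here only so the snippet runs anywhere; in this deployment it would be the `mysql+pymysql` URL from the question):

```python
from sqlalchemy import create_engine

# pool_recycle=3600 discards any pooled connection older than one hour, so
# the application never reuses a connection that MySQL's wait_timeout has
# already closed on the server side -- the usual cause of error 2006
# ("MySQL server has gone away").
engine = create_engine(
    "sqlite://",              # substitute the real mysql+pymysql URL here
    pool_pre_ping=True,       # cheap liveness check on every checkout
    pool_recycle=3600,        # proactively recycle connections after an hour
)
```

Setting `pool_recycle` below the server's `wait_timeout` (often 8 hours by default on MySQL, but frequently lowered by managed-database providers) is the standard way to stop the pool from handing out connections the server has silently dropped.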