I've created a few migrations already using Alembic; it's a really interesting tool.
Using it on a new database would be straightforward enough. However, our use case is to apply these migrations to existing databases, not only to bring them "up to speed" but also to correct drift: for example, if a table or column has been altered through the phpMyAdmin front-end, or if a table already exists, the migration should correct or create it as needed.
This would be perfect, as the revision ID created for that script could then be applied to any database we have, correcting any inconsistencies without creating duplicates, since Alembic would skip over whatever it deems to be correct.
As mentioned, a fresh database built using Alembic could be fixed via downgrade and reapplication, but I'm wondering whether the same can be done on an existing database where Alembic is introduced after the fact.
For reference, here's my first migration script in case it's useful:
"""create account table
Revision ID: bd4ec9e8afe8
Revises:
Create Date: 2019-10-29 15:25:39.736234
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects.mysql import TINYINT
# revision identifiers, used by Alembic.
revision = 'bd4ec9e8afe8'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
    op.create_table(
        'account',
        sa.Column('id', sa.Integer, primary_key=True, nullable=False),
        sa.Column('title', sa.String(150), nullable=False),
        sa.Column('description', sa.Text, nullable=False),
        sa.Column('uploads_allowed', TINYINT(), nullable=True),
        sa.Column('viewers_contact_details', TINYINT(), default=0),
    )
    # op.alter_column() takes the table name and column name plus keyword
    # arguments, not a Column object:
    op.alter_column('account', 'id', existing_type=sa.Integer(),
                    existing_nullable=False, autoincrement=True)

    # op.execute("""
    #     CREATE TABLE `account` (
    #         `id` int(11) NOT NULL,
    #         `title` varchar(150) NOT NULL,
    #         `description` text,
    #         `uploads_allowed` tinyint(1) DEFAULT '0',
    #         `viewers_contact_details` tinyint(1) DEFAULT '0'
    #     );
    # """)


def downgrade():
    op.drop_table('account')
2 Answers
So my problem was that I couldn't detect database changes without using if/else and try/except blocks in every single migration. While this worked, it could get confused and break entirely if the query I was using to check the database wasn't correct or didn't return exactly what was expected.
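For illustration, this is the kind of per-migration guard I mean, sketched with SQLAlchemy's schema inspector against an in-memory SQLite stand-in for the database (in a real migration the connection would come from `op.get_bind()` instead):

```python
import sqlalchemy as sa

# Stand-in for an existing database: in-memory SQLite with one table.
engine = sa.create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(sa.text("CREATE TABLE account (id INTEGER PRIMARY KEY)"))

# The per-migration guard: inspect the live schema and branch on it.
inspector = sa.inspect(engine)
already_exists = "account" in inspector.get_table_names()

if not already_exists:
    pass  # a real migration would call op.create_table(...) here
```

Writing one of these checks for every table and column in every migration is exactly the brittle bookkeeping the rest of this answer avoids.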
The solution was to create a model of my database and use that as the metadata from which Alembic runs --autogenerate.
@Supershoot pointed me toward sqlacodegen. It generates a model of whatever database you point it at. Using this, I was able to then modify my env.py like so:
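If it helps, here is a minimal sketch of the relevant env.py change, assuming sqlacodegen wrote the model to name_of_file.py with a declarative Base (the file name comes from the command below; everything else is an assumption about your layout):

```python
# env.py (excerpt) -- hypothetical sketch, not my exact file.
# Import the model that sqlacodegen generated from the live database.
from name_of_file import Base

# Alembic compares this metadata against the database it is pointed at
# whenever `alembic revision --autogenerate` runs.
target_metadata = Base.metadata
```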
My dir structure looks like this:
Using the command
sqlacodegen mysql://u:p@host/name --outfile name_of_file.py
And then:
alembic revision --autogenerate -m "base"
generated an automatic migration script, after I had checked that the model was correct. Alembic then compared the metadata it had to the database it was pointed at and correctly detected missing or altered columns. You are strongly encouraged to check that the autogenerated migration is correct, however, as there may be bugs or other discrepancies that Alembic does not pick up on.
You can create a new revision using:
$ alembic revision -m "some description for the revision"
A revision file with a new revision ID will then be created.
Populate the upgrade and downgrade functions with your changes, then run:
$ alembic upgrade head
This will make changes to the existing database.