
I’ve created a few migrations already using Alembic; it’s a really interesting tool.

Using it on a new DB would be straightforward enough. However, our use case is to apply these migrations to existing DBs, not only to bring them “up to speed” but also to handle inconsistencies: for example, if a table or column has been altered through the phpMyAdmin front-end it should be corrected, and if a table already exists it should be left alone rather than recreated.

This would be perfect, as the migration ID created for that script could then be applied to any DB we have, correcting any inconsistencies without creating duplicates, since it would skip over whatever it deems to be correct.

As mentioned, a fresh database built using Alembic could be fixed via downgrade and reapplication, but I’m wondering whether the same can be done on an existing DB where Alembic is introduced after the fact.

For reference, here’s the first migration code example if it’s somehow useful.

"""create account table

Revision ID: bd4ec9e8afe8
Revises: 
Create Date: 2019-10-29 15:25:39.736234

"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects.mysql import TINYINT


# revision identifiers, used by Alembic.
revision = 'bd4ec9e8afe8'
down_revision = None
branch_labels = None
depends_on = None


def upgrade():

    op.create_table(
        'account',
        sa.Column('id', sa.Integer, primary_key=True, nullable=False),
        sa.Column('title', sa.String(150), nullable=False),
        sa.Column('description', sa.Text, nullable=False),
        sa.Column('uploads_allowed', TINYINT(), nullable=True),
        sa.Column('viewers_contact_details', TINYINT(), default=0),
    )

    op.alter_column('account', 'id', existing_type=sa.Integer(), nullable=False, autoincrement=True)



    # op.execute("""
    # CREATE TABLE `account` (
    # `id` int(11) NOT NULL,
    # `title` varchar(150) NOT NULL,
    # `description` text,
    # `uploads_allowed` tinyint(1) DEFAULT '0',
    # `viewers_contact_details` tinyint(1) DEFAULT '0'
    # );
    # """)




def downgrade():
    op.drop_table('account')

2 Answers


  1. Chosen as BEST ANSWER

    So my problem was that I couldn't detect db changes without using if/else and try/except in every single migration. Whilst this worked, it could get confused and break entirely if the query I was using to check the db wasn't correct or didn't return exactly what was expected.
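    To give an idea of what those guards looked like, here is a rough sketch (assuming SQLAlchemy 1.4+; the `table_missing` helper is invented for illustration, not Alembic API, and the demo runs against a throwaway SQLite database rather than a real migration):

```python
import sqlalchemy as sa
from sqlalchemy import inspect

def table_missing(conn, name):
    # True when table `name` is absent from the connected database.
    # Inside a real migration the connection would come from op.get_bind().
    return not inspect(conn).has_table(name)

# Demonstrated against a throwaway in-memory SQLite database:
engine = sa.create_engine("sqlite://")
with engine.connect() as conn:
    assert table_missing(conn, "account")
    conn.execute(sa.text("CREATE TABLE account (id INTEGER PRIMARY KEY)"))
    assert not table_missing(conn, "account")
```

    Every migration needed a check like this, and the check itself could be wrong, which is exactly the fragility described above.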

    The solution was to create a model of my db, and use that as the target model that Alembic compares against when running --autogenerate.

    @Supershoot pointed me toward sqlacodegen. It creates a model of whatever db you point it at. Using this, I was then able to modify my env.py like so:

    from logging.config import fileConfig
    
    from sqlalchemy import engine_from_config
    from sqlalchemy import pool
    
    from alembic import context
    
    import sys
    from os.path import abspath, dirname
    sys.path.insert(0, "/home/user/gitprojects/myproject/alembic/alembic/models")
    import base
    
    
    # this is the Alembic Config object, which provides
    # access to the values within the .ini file in use.
    config = context.config
    
    # Interpret the config file for Python logging.
    # This line sets up loggers basically.
    fileConfig(config.config_file_name)
    
    # add your model's MetaData object here
    # for 'autogenerate' support
    # from myapp import mymodel
    # target_metadata = mymodel.Base.metadata
    target_metadata = base.Base.metadata
    # target_metadata = None
    

    My dir structure looks like this:

    ├── alembic
    │   ├── env_manualmig.py
    │   ├── env.py
    │   ├── models
    │   │   ├── base.py
    │   │   └── __pycache__
    │   │       └── base.cpython-37.pyc
    │   ├── __pycache__
    │   │   └── env.cpython-37.pyc
    │   ├── README
    │   ├── script.py.mako
    │   └── versions
    │       ├── bd4ec9e8afe8_create_account_table.py
    │       ├── f05b82b25f44_base.py
    │       ├── fada1a35f790_add_a_column.py
    │       └── __pycache__
    │           ├── bd4ec9e8afe8_create_account_table.cpython-37.pyc
    │           ├── f05b82b25f44_base.cpython-37.pyc
    │           └── fada1a35f790_add_a_column.cpython-37.pyc
    ├── alembic.ini
    ├── Pipfile
    ├── Pipfile.lock
    └── sa

    I generated the model using the command sqlacodegen mysql://u:p@host/name --outfile name_of_file.py

    Then, after checking the model was correct, running alembic revision --autogenerate -m "base" generated an automatic migration script.
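    For reference, the generated base.py is an ordinary declarative model module. For the account table above it would look roughly like this (a sketch only; sqlacodegen's actual output differs in imports, typing, and naming):

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Account(Base):
    __tablename__ = "account"

    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String(150), nullable=False)
    description = sa.Column(sa.Text, nullable=False)
    uploads_allowed = sa.Column(sa.SmallInteger, server_default=sa.text("0"))
    viewers_contact_details = sa.Column(sa.SmallInteger, server_default=sa.text("0"))

# env.py then points autogenerate at this metadata:
# target_metadata = Base.metadata
```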

    This then compared the metadata it had to the db it was pointed at and correctly detected missing or altered columns. However, you are strongly encouraged to check that the autogenerated migration is correct, as there may be discrepancies that Alembic has not picked up on.


  2. You can create a new revision using:

    $ alembic revision -m "some description for the revision"

    A revision file with a new revision ID will then be created.

    Populate the upgrade and downgrade functions as per your changes, then run the command below:

    $ alembic upgrade head

    This will make changes to the existing database.
