
I have a CDK Pipelines pipeline that handles the self-mutation and deployment of my application on ECS, and I am having a tough time figuring out how to implement database migrations.

My migration files, as well as the migration command, reside inside the Docker container that is built and deployed in the pipeline. Below are the two things I’ve tried so far:


My first thought was to add a pre step on the stage, but I believe there is a chicken-and-egg situation: the migration command requires the database to exist (and needs its endpoint and credentials), yet because the step runs pre-deployment, the stack doesn’t exist yet when this command would run…

    const pipeline = new CodePipeline(this, "CdkCodePipeline", {
      // ...
      // ...
    });

    pipeline.addStage(applicationStage).addPre(new CodeBuildStep("MigrateDatabase", {
      input: pipeline.cloudAssemblyFileSet,
      buildEnvironment: {
        environmentVariables: {
          DB_HOST: { value: databaseProxyEndpoint },
          // ...
          // ...
        },
        privileged: true,
        buildImage: LinuxBuildImage.fromAsset(this, 'Image', {
          directory: path.join(__dirname, '../../docker/php'),
        }),
      },
      commands: [
        'cd /var/www/html',
        'php artisan migrate --force',
      ],
    }))

In the above code, databaseProxyEndpoint has been everything from a CfnOutput and an SSM parameter to a plain old TypeScript reference, but all of them failed because the value was empty, missing, or not yet generated.

I felt this was close, since it works perfectly fine until I try to reference databaseProxyEndpoint.


My second attempt was to create an init container in ECS.

    const migrationContainer = webApplicationLoadBalancer.taskDefinition.addContainer('init', {
      image: ecs.ContainerImage.fromDockerImageAsset(webPhpDockerImageAsset),
      essential: false,
      logging: logger,
      environment: {
        DB_HOST: databaseProxy.endpoint,
        // ...
        // ...
      },
      secrets: {
        DB_PASSWORD: ecs.Secret.fromSecretsManager(databaseSecret, 'password')
      },
      command: [
        "sh",
        "-c",
        [
          "php artisan migrate --force",
        ].join(" && "),
      ]
    });

    // Make sure migrations run and our init container returns success
    serviceContainer.addContainerDependencies({
      container: migrationContainer,
      condition: ecs.ContainerDependencyCondition.SUCCESS,
    });

This worked, but I am not a fan of it at all. The migration command should run once in the CI/CD pipeline per deploy, not every time the ECS service starts, restarts, or scales… My migrations failed once and it locked up CloudFormation, because the health check failed on the deploy and then, naturally, on the rollback as well, causing a completely broken loop of pain.

Any ideas or suggestions on how to pull this off would save me from losing the remaining hair I have left!

2 Answers


  1. I wouldn’t solve this within a build step of a CDK Pipeline.

    Rather, I’d go for the Custom Resource approach.
    With Custom Resources, especially in CDK, you are always aware of the dependencies and of when they need to run.
    That awareness gets completely lost within a CDK Pipeline context, where you have to figure it out and implement it yourself.

    So, what does a Custom Resource look like?

    
    // Provider comes from 'aws-cdk-lib/custom-resources'; CustomResource from 'aws-cdk-lib'
    // this lambda function is an example definition, where you would run your actual migration commands
    const migrationFunction = new lambda.Function(this, 'MigrationFunction', {
      runtime: lambda.Runtime.PROVIDED_AL2,
      handler: 'index.handler', // adjust to your bundled entrypoint
      code: lambda.Code.fromAsset('path/to/migration'), // directory (or .zip) containing the bundled handler
      layers: [
        // find the layers here:
        // https://bref.sh/docs/runtimes/#lambda-layers-in-details
        // https://bref.sh/docs/runtimes/#layer-version-
        lambda.LayerVersion.fromLayerVersionArn(this, 'BrefPHPLayer', 'arn:aws:lambda:us-east-1:209497400698:layer:php-80:21'),
      ],
      timeout: cdk.Duration.seconds(30),
      memorySize: 256,
    });

    const migrationFunctionProvider = new Provider(this, 'MigrationProvider', {
      onEventHandler: migrationFunction,
    });

    new CustomResource(this, 'MigrationCustomResource', {
      serviceToken: migrationFunctionProvider.serviceToken,
      properties: {
        date: new Date(Date.now()).toUTCString(),
      },
    });

    // grant your migration lambda the policies to read secrets for your DB connection etc.
    
    // migration.ts
    import { promisify } from 'util';
    import child_process from 'child_process';
    import AWS from 'aws-sdk';

    const exec = promisify(child_process.exec);
    const sm = new AWS.SecretsManager();

    export const handler = async (event: any) => {
      // Custom Resource properties provide more flexibility than env vars;
      // pass dbName and secretName via the CustomResource `properties` above
      const { dbName, secretName } = event.ResourceProperties;

      // Retrieve the database credentials from AWS Secrets Manager
      const secret = await sm.getSecretValue({ SecretId: secretName }).promise();
      const { username, password } = JSON.parse(secret.SecretString);

      // Run the migration; Laravel reads its connection settings from environment variables.
      // Awaiting the promisified exec means a failed migration fails the deployment.
      const { stdout, stderr } = await exec('php artisan migrate --force', {
        env: {
          ...process.env,
          DB_HOST: 'your-database-host',
          DB_DATABASE: dbName,
          DB_USERNAME: username,
          DB_PASSWORD: password,
        },
      });
      console.log(`stdout: ${stdout}`);
      console.error(`stderr: ${stderr}`);

      return { PhysicalResourceId: 'database-migration' };
    };
    

    The Custom Resource takes your migration Lambda function, and the Lambda runs the actual command that performs your database migration.
    The Custom Resource is applied on every deployment; that is what the date property achieves, since its value changes each time.
    You can control the execution by altering any property within the CustomResource.
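
    For example (a sketch, not part of the original answer): instead of a timestamp, you could pass a hash of your migrations directory as the property, so the Custom Resource only re-runs when a migration file actually changes. The MigrationsAsset id and the path are illustrative:

    // assumes: import { Asset } from 'aws-cdk-lib/aws-s3-assets';
    const migrationsAsset = new Asset(this, 'MigrationsAsset', {
      path: path.join(__dirname, '../../database/migrations'), // illustrative path
    });

    new CustomResource(this, 'MigrationCustomResource', {
      serviceToken: migrationFunctionProvider.serviceToken,
      properties: {
        // the asset hash only changes when a migration file changes
        migrationsHash: migrationsAsset.assetHash,
      },
    });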

  2. You can run your migrations (1) within a stack’s deployment with a Custom Resource construct, (2) after a stack’s or stage’s deployment with a post Step, or (3) after the pipeline has run with an EventBridge rule.

    1. Within a stack: Migrations as a Custom Resource

    One option is to define your migrations as a CustomResource. It’s a CloudFormation feature for executing user-defined code (typically in a Lambda) during the stack deployment lifecycle. See @mchlfchr’s answer for an example. Also consider the CDK Trigger construct, a higher-level Custom Resource implementation.
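
    A minimal sketch of the Trigger approach (assuming a migration handler like the one in answer 1; the ids, handler path, and databaseProxy reference are illustrative):

    // assumes: import * as triggers from 'aws-cdk-lib/triggers';
    //          import * as lambda from 'aws-cdk-lib/aws-lambda';
    new triggers.TriggerFunction(this, 'MigrationTrigger', {
      runtime: lambda.Runtime.PROVIDED_AL2,
      handler: 'index.handler',          // illustrative entrypoint
      code: lambda.Code.fromAsset('path/to/migration'),
      executeAfter: [databaseProxy],     // only run once the database resources exist
      executeOnHandlerChange: true,      // re-run whenever the handler code changes
    });

    TriggerFunction wraps the Lambda plus the Custom Resource plumbing that answer 1 wires up by hand.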

    2. After a stack or stage: "post" Step

    If you split your application into, say, a StatefulStack (database) and StatelessStack (application containers), you can run your migrations code as a post Step between the two. This is the approach attempted in the OP.

    In your StatefulStack, the variable producer, expose a CfnOutput instance variable for each value you need, e.g. readonly databaseProxyEndpoint: CfnOutput (a sketch of the producer side follows the example below). Then consume the variables in the pipeline migration action by passing them to a post step as envFromCfnOutputs. The CDK will synthesize them into CodePipeline variables:

    pipeline.addStage(myStage, { // myStage includes the StatefulStack and StatelessStack instances
        stackSteps: [
            {
                stack: statefulStack,
                post: [
                    new pipelines.CodeBuildStep("Migrate", {
                        commands: ['cd /var/www/html', 'php artisan migrate --force'],
                        envFromCfnOutputs: { DB_HOST: statefulStack.databaseProxyEndpoint },
                        // ... other step config
                    }),
                ],
            },
        ],
        // post: [...] steps to run after the stage
    });
    

    The addStage method’s stackSteps option runs post steps after a specific stack in a stage. The post option works similarly, but its steps run after the entire stage.
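
    For reference, a sketch of the producer side (assuming the StatefulStack creates the RDS proxy; the construct and property names are illustrative):

    // assumes: import { CfnOutput, Stack, StackProps } from 'aws-cdk-lib';
    //          import { Construct } from 'constructs';
    export class StatefulStack extends Stack {
      // exposed so the pipeline can pass it to a CodeBuildStep via envFromCfnOutputs
      readonly databaseProxyEndpoint: CfnOutput;

      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        // ... create the database, proxy, secret, etc.

        this.databaseProxyEndpoint = new CfnOutput(this, 'DatabaseProxyEndpoint', {
          value: databaseProxy.endpoint, // the proxy construct created above
        });
      }
    }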

    3. After the Pipeline execution: EventBridge rule

    Although it’s likely not the best option, you could run migrations after the pipeline executes. CodePipeline emits events during pipeline execution. With an EventBridge rule, listen for CodePipeline Pipeline Execution State Change events where "state": "SUCCEEDED".
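
    A sketch of that wiring (the event pattern fields come from the CodePipeline event format; the migrationFunction target is illustrative):

    // assumes: import * as events from 'aws-cdk-lib/aws-events';
    //          import * as targets from 'aws-cdk-lib/aws-events-targets';
    new events.Rule(this, 'MigrateOnPipelineSuccess', {
      eventPattern: {
        source: ['aws.codepipeline'],
        detailType: ['CodePipeline Pipeline Execution State Change'],
        detail: {
          state: ['SUCCEEDED'],
          pipeline: [pipeline.pipeline.pipelineName], // underlying CodePipeline (available after buildPipeline())
        },
      },
      targets: [new targets.LambdaFunction(migrationFunction)],
    });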


    Note on failure modes: The three options have different failure modes. If the migrations fail as a Custom Resource, the StatefulStack deployment fails (with changes rolled back) and the pipeline execution fails. If the migrations are implemented as a step, the pipeline execution fails but the StatefulStack does not roll back. Finally, if the migrations are event-triggered, a failed migration affects neither the stack nor the pipeline execution, since both will already have finished by the time the migrations run.
