
I have a CDK Pipeline stack that synths and deploys some infrastructure. After the infrastructure is created, I want to build a frontend react app that knows the URL to the newly constructed API Gateway. Once the app is built, I want to move the built files to a newly created S3 bucket.

I have the first two steps working no problem. I use a CfnOutput to get the API URL and the bucket name. I then use envFromCfnOutputs in my shell step to build the react app with the right env variable set up.

I can’t figure out how to move my files to an S3 bucket. I’ve tried for days to figure something out using s3deploy, but run into various permission issues. I thought I could just use the AWS CLI and move the files manually, but I don’t know how to give the CLI command permission to add and delete objects. To make things a bit more complicated, my infrastructure is deployed to a separate account from the one where my pipeline lives.

Any idea how I can use the CLI or another thought on how I can move the built files to a bucket?

// set up pipeline
const pipeline = new CodePipeline(this, id, {
  crossAccountKeys: true,
  pipelineName: id,
  synth: mySynthStep
});

// add a stage with all my constructs
const pipelineStage = pipeline.addStage(myStage);

// create a shellstep that builds and moves the frontend assets
const frontend = new ShellStep('FrontendBuild', {
  input: source,
  commands: [
    'npm install -g aws-cli',
    'cd frontend',
    'npm ci',
    'VITE_API_BASE_URL="$AWS_API_BASE_URL" npm run build',
    'aws s3 sync ./dist/ s3://$AWS_FRONTEND_BUCKET_NAME/ --delete'
  ],
  envFromCfnOutputs: {
    AWS_API_BASE_URL: myStage.apiURL,
    AWS_FRONTEND_BUCKET_NAME: myStage.bucketName
  }
});

// add my step as a post step to my stage.
pipelineStage.addPost(frontend);

2 Answers


  1. The ShellStep is likely running under the IAM permissions/role of the pipeline. Add the additional permissions to the pipeline’s role and that should trickle down to the AWS CLI call.

    You’ll probably also need to call buildPipeline before you try to do this:

    pipeline.buildPipeline();
    pipeline.pipeline.addToRolePolicy(...)
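
    A fuller sketch of that idea, assuming a same-account bucket (the bucket name and the exact S3 actions below are placeholders, not from the question; the cross-account case is covered in the next answer):

```typescript
import * as iam from 'aws-cdk-lib/aws-iam';

// Sketch only: force the pipeline to be synthesized, then attach an S3
// policy to its role. 'my-frontend-bucket' is an assumed placeholder name.
pipeline.buildPipeline();
pipeline.pipeline.role.addToPrincipalPolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['s3:GetObject', 's3:PutObject', 's3:DeleteObject', 's3:ListBucket'],
    resources: [
      'arn:aws:s3:::my-frontend-bucket',   // bucket itself (for ListBucket)
      'arn:aws:s3:::my-frontend-bucket/*', // objects in the bucket
    ],
  }),
);
```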
    
  2. I want to give this a shot and also suggest a solution for cross-account pipelines.

    You have figured out the first half: building the web app works by passing CloudFormation outputs into the environment of a shell step, so the app is built with the correct values (e.g. the API endpoint URL).

    You could now attach a policy to a CodeBuildStep to allow the step to call certain actions. That should work if your pipeline and your bucket are in the same account (and also cross-account, with a lot more fiddling). But there is a problem with scoping those permissions:

    The pipeline and the bucket are created in an order where the pipeline is created (or self-updated) first, so the bucket name is not known at that point. The pipeline then deploys the resources to its own account or to another account. You therefore need to assign a name which is known beforehand. This is a general problem, and it grows if you e.g. also need to create a CloudFront invalidation, and so on.
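
    The workaround for that ordering problem is a name both sides can compute independently. A minimal sketch (the role name and account ID here are placeholders):

```typescript
// Both stacks agree on a fixed role name up front, so the pipeline can
// build the role's ARN before the target stack has ever been deployed.
const webappDeploymentRoleName = 'WebappDeploymentRole'; // predefined name
const targetAccountId = '111111111111';                  // placeholder account

// Deterministic ARN the CodeBuildStep can reference at synth time.
const deploymentRoleArn =
  `arn:aws:iam::${targetAccountId}:role/${webappDeploymentRoleName}`;
console.log(deploymentRoleArn);
// → arn:aws:iam::111111111111:role/WebappDeploymentRole
```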

    My approach is the following (in my case for a cross account deployment):

    1. Create a role alongside the resources, with a predefined name, that is allowed to do what the deployment needs (e.g. read/write the S3 bucket, create a CloudFront invalidation, …), and allow a matching principal to assume that role (in my case an account principal):

    Code snippet

    const deploymentRole = new IAM.Role(this, "DeploymentRole", {
      roleName: "WebappDeploymentRole",
      assumedBy: new IAM.AccountPrincipal(pipelineAccountId),
    });
    // Grant permissions
    bucket.grantReadWrite(deploymentRole);
    

    2. Create a `CodeBuildStep` which has permissions to assume that role (by a pre-defined name)

    Code snippet

    new CodeBuildStep("Deploy Webapp", {
      rolePolicyStatements: [
        new PolicyStatement({
          actions: ["sts:AssumeRole"],
          resources: [
            `arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName}`,
          ],
          effect: Effect.ALLOW,
        }),
      ],
      ...
    });
    

    3. In the `commands` I call `aws sts assume-role` with the predefined role name and save the credentials to the environment for the following calls to use:
    Code snippet

      envFromCfnOutputs: {
        bucketName: devStage.webappBucketName,
        cloudfrontDistributionID: devStage.webbappCloudfrontDistributionId,
      },
      commands: [
        "yarn run build-webapp",
        // Assume role, see https://stackoverflow.com/questions/63241009/aws-sts-assume-role-in-one-command
        `export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $(aws sts assume-role --role-arn arn:aws:iam::${devStage.account}:role/${webappDeploymentRoleName} --role-session-name WebappDeploySession --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text))`,
        `aws s3 sync ${webappPath}/build s3://$bucketName`,
        `aws cloudfront create-invalidation --distribution-id $cloudfrontDistributionID --paths "/*"`,
      ],
    

    4. I then call other AWS CLI actions like `aws s3 sync …` with the credentials from step 3, which are now correctly scoped to exactly the actions needed.
