
I’m running into an issue in Azure DevOps, and I have two questions about it. The issue is that I have an Azure Bicep template that deploys a set of resources into a resource group within my Azure subscription.

One of these resources is an Azure Container Registry (ACR), to which I want to push a certain image whenever the image’s code is updated. What I am essentially trying to achieve is a single multi-stage Azure DevOps build pipeline in which

  1. The resources are deployed via Azure Bicep, after which
  2. I build and push the image to the ACR automatically

The issue here is that pushing an image to ACR requires a service connection in Azure DevOps, which can only be created through the portal after the Bicep pipeline has run. I have found that the Azure CLI command `az devops service-endpoint create` can create a service connection from a .json file on the command line, which means I could perhaps add such a file to the repository. However, I would not have the right credentials until after the Bicep deployment, and I would probably have to expose sensitive Azure account information in my .json file to create the connection (if that is even possible).
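
For reference, the CLI-based approach mentioned above would be invoked roughly like this (a sketch; the organization URL, project name, and configuration file name are placeholders, and the command requires the Azure DevOps CLI extension):

```shell
# Install the extension once: az extension add --name azure-devops
# service-connection.json describes the endpoint (type, authorization, URL);
# its schema is documented with `az devops service-endpoint create --help`.
az devops service-endpoint create \
  --service-endpoint-configuration ./service-connection.json \
  --organization https://dev.azure.com/<your-org> \
  --project <your-project>
```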

This leaves me with two questions:

  1. In practice, is this something one would do, or does it make more sense to have two pipelines: one for the infrastructure as code and one for the application code? I would think it is preferable to be able to deploy everything in one go, but I am quite new to DevOps and can’t really find an answer to this question.
  2. Is there any way to achieve this securely in a single Azure DevOps pipeline?

2 Answers


  1. Answer to Q1.

    From my experience, infrastructure and application code have always been kept separate. We generally want to split the two so that each is easier to manage. For example, you might want to test a new feature of the ACR separately, such as new requirements for adding firewall rules or changed replication settings, without rebuilding and pushing a new image every time.

    On the other hand, the business-as-usual (BAU) pipeline involves building new images daily or weekly. One action is a one-off; the other is ongoing. You usually just want to deploy the ACR once and forget about it, only referencing it when required.

    In addition, the ACR could eventually hold images for many other application pipelines you add in the future, so you don’t really want to tie it to a specific application pipeline. If you want a future-proof solution, I’d suggest keeping them separate and having different pipelines for different application builds.

    It’s generally best to keep core infrastructure code separate from the BAU stuff.

    Answer to Q2.

    I don’t know the specifics of how you’re running your pipeline, but regarding exposing the sensitive content, there are two best-practice ways I would handle this:

    1. Keep the file with the sensitive content as a secure file in the pipeline library and retrieve it when required.
    2. Keep the content, or any secrets, in an Azure Key Vault and read them during your pipeline run.
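
    As a sketch of option 2, the AzureKeyVault task can pull secrets into pipeline variables at runtime; the service connection, vault name, and secret name below are placeholders:

    ```yaml
    steps:
    - task: AzureKeyVault@2
      displayName: Read secrets from Key Vault
      inputs:
        azureSubscription: <service connection name>
        KeyVaultName: <key vault name>
        SecretsFilter: 'MySecret'   # comma-separated secret names, or '*' for all
        RunAsPreJob: false

    # Each secret becomes a pipeline variable, masked in logs
    - script: echo "Secret is available as an environment variable"
      env:
        MY_SECRET: $(MySecret)
    ```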
  2. I completely agree with the accepted answer about not doing everything in the same pipeline.

    However, ACR supports RBAC, and you could grant the service principal running your pipeline the AcrPush role. This removes the need to create another service connection:

    // container registry name
    param registryName string
    
    // role to assign
    param roleId string = '8311e382-0749-4cb8-b61a-304f252e45ec' // AcrPush role
    
    // objectid of the service principal
    param principalId string
    
    resource registry 'Microsoft.ContainerRegistry/registries@2021-12-01-preview' existing = {
      name: registryName
    }
    
    // Create role assignment
    resource registryRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
      name: guid(subscription().subscriptionId, resourceGroup().name, registryName, roleId, principalId)
      scope: registry
      properties: {
        roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleId)
        principalId: principalId
      }
    }
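
    Assuming the Bicep above is saved as acr-role.bicep (a hypothetical file name), the role assignment could then be deployed from the infrastructure pipeline roughly like this; the resource group, registry name, and service principal object ID are placeholders:

    ```shell
    # Deploy the role assignment at resource-group scope, passing the
    # object ID of the service principal behind the pipeline's service connection
    az deployment group create \
      --resource-group <resource group name> \
      --template-file acr-role.bicep \
      --parameters registryName=<registry name> principalId=<service principal object id>
    ```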
    

    In subsequent pipelines, you can then log in and buildAndPush to the container registry without manually creating a service connection or storing any other secrets:

    steps:
    ...
    - task: AzureCLI@2
      displayName: Connect to container registry
      inputs:
        azureSubscription: <service connection name>
        scriptType: pscore
        scriptLocation: inlineScript
        inlineScript: |
          az acr login --name <azure container registry name>
    
    - task: Docker@2
      displayName: Build and push image
      inputs:
        command: buildAndPush
        repository: <azure container registry name>.azurecr.io/<repository name>
        ...
    
    

    My answer is really about not having to create an extra set of credentials that you would also have to maintain separately.
