I'm running into an issue in Azure DevOps and have two questions about it. I have an Azure Bicep template that deploys a number of resources into a resource group in my Azure subscription.
One of these resources is an Azure Container Registry (ACR), to which I want to push a certain image whenever the image code is updated. What I am essentially trying to achieve is a single multi-stage Azure build pipeline in which:
- The resources are deployed via Azure Bicep, after which
- I build and push the image to the ACR automatically
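Roughly, the pipeline I have in mind would look something like the sketch below (the service connection name azure-sub, the resource group, and the template path are just placeholders):

```yaml
# Rough sketch of the single multi-stage pipeline I have in mind.
# 'azure-sub' is an ARM service connection; resource group, template path
# and stage names are placeholders.
trigger:
  - main

stages:
  - stage: DeployInfra
    jobs:
      - job: Bicep
        steps:
          - task: AzureCLI@2
            inputs:
              azureSubscription: azure-sub
              scriptType: bash
              scriptLocation: inlineScript
              inlineScript: |
                az deployment group create \
                  --resource-group my-rg \
                  --template-file infra/main.bicep

  - stage: BuildAndPushImage
    dependsOn: DeployInfra
    jobs:
      - job: Image
        steps:
          # This is where I need to push to the ACR that was just deployed,
          # which is where the service connection problem comes in.
          - script: echo "docker build and push to the new ACR would go here"
```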
The issue is that pushing an image to the ACR requires a service connection in Azure DevOps, which can only be created through the portal after the Bicep deployment has run. I have found that I can use the Azure CLI command az devops service-endpoint create
to create a connection from a .json file on the command line. That means I could perhaps add such a .json file, but I would not have the right credentials until after the Bicep deployment, and I would probably have to expose sensitive Azure account information in the .json file to create the connection (if that is even possible).
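For illustration, what I have in mind would look roughly like the step below; connection.json, the organization and project values, and the PAT variable are placeholders, and the .json file is exactly where the credentials would have to end up:

```yaml
# Hypothetical sketch of the workaround - not something I actually want to do,
# because connection.json would have to contain the registry credentials.
- script: |
    az devops service-endpoint create \
      --service-endpoint-configuration connection.json \
      --organization https://dev.azure.com/<my-org> \
      --project <my-project>
  env:
    AZURE_DEVOPS_EXT_PAT: $(devopsPat)  # az devops commands need a PAT to authenticate
  displayName: Create ACR service connection from JSON
```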
This leaves me with two questions:
- In practice, is this something one would do, or does it make more sense to have two pipelines: one for the infrastructure-as-code and one for the application code? I would think it is preferable to be able to deploy everything in one go, but I am quite new to DevOps and can't really find an answer to this question.
- Is there any way to still achieve this securely in a single Azure DevOps pipeline?
2 Answers
Answer to Q1.
From my experience, infrastructure and application code have always been kept separate. We generally want to split the two so that they are easier to manage. For example, you might want to test a new feature of the ACR separately, such as adding firewall rules or changing replication settings, without rebuilding and pushing a new image every time.
On the other hand, the application pipeline involves building new images daily or weekly. One action is a one-off; the other is business as usual (BAU). You usually just want to create the ACR once and forget about it, only referencing it when required.
In addition, the ACR could eventually hold images for many other application pipelines you might have in the future, so you don't really want to tie it to one specific application pipeline. If you want a future-proof solution, I'd suggest keeping them separate and having different pipelines for the different application builds.
It's generally best to keep the code for core infrastructure resources separate from the BAU work.
Answer to Q2.
I don't know the specifics of how you're running your pipeline, but from what I understand, regarding exposing the sensitive content, there are two things I would do as best practice.
First, I completely agree with the accepted answer about not doing everything in the same pipeline.
Second, ACR supports Azure RBAC, and you could grant the service principal running your pipeline the AcrPush role. This removes the need to create another service connection.
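As a rough sketch (assuming an AzureCLI task with your existing ARM service connection, and placeholder names for the registry and the service principal), the role assignment could be done once from the infrastructure pipeline:

```yaml
# Sketch: grant the pipeline's service principal AcrPush on the registry.
# 'azure-sub', 'myregistry' and the assignee app ID are placeholders.
- task: AzureCLI@2
  inputs:
    azureSubscription: azure-sub
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      ACR_ID=$(az acr show --name myregistry --query id --output tsv)
      az role assignment create \
        --assignee <service-principal-app-id> \
        --role AcrPush \
        --scope "$ACR_ID"
```

You could equally declare the role assignment in the Bicep template itself, which keeps it alongside the rest of the infrastructure code.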
In subsequent pipelines, you could then log in and build and push to the container registry without manually creating a service connection or storing any other secrets. My answer is really about not having to create an extra set of credentials that you would also have to maintain separately.
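One way to sketch this, using the Azure CLI and plain docker commands behind the existing ARM service connection rather than the Docker task's login/buildAndPush commands (registry and image names are placeholders):

```yaml
# Sketch: build and push using only the existing ARM service connection -
# no Docker registry service connection and no stored registry password.
- task: AzureCLI@2
  inputs:
    azureSubscription: azure-sub
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az acr login --name myregistry
      docker build -t myregistry.azurecr.io/myapp:$(Build.BuildId) .
      docker push myregistry.azurecr.io/myapp:$(Build.BuildId)
```

Here az acr login authenticates with the identity behind the service connection, so there are no extra credentials to manage.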