As the title suggests, I am trying to set up Vault agent sidecars on services deployed on AWS ECS. I want to store the services' environment variables in Vault, as well as use Vault to generate certificates. I am also using Terraform to deploy this.
My Vault cluster is a Dedicated cluster on the HashiCorp Cloud Platform.
I am unable to figure out how to write the task definition or the config file for the Vault agent.
Here is what I currently have, though I have not tried to implement the PKI engine for certificates yet.
The ECS task definition resource:
resource "aws_ecs_task_definition" "task" {
family = var.service_name
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = var.cpu
memory = var.memory
task_role_arn = var.ecs_task_role_arn
execution_role_arn = var.ecs_task_execution_role_arn
container_definitions = <<DEFINITION
[
{
"image": "${var.container_image}",
"name": "${var.service_name}",
"readonlyRootFilesystem": false,
"networkMode": "awsvpc",
"environmentFiles": [
{
"value": "/etc/secrets/env_vars",
"type" : "s3"
}
],
"portMappings": [
{
"name": "http",
"containerPort": ${var.port1},
"hostPort": ${var.port1},
"protocol": "tcp",
"appProtocol": "http"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${var.service_name}",
"awslogs-region": "${var.region}",
"awslogs-stream-prefix": "${var.environment}",
"awslogs-create-group": "true"
}
},
"mountPoints": [
{
"sourceVolume": "vault-secrets",
"containerPath": "/etc/secrets"
}
]
},
{
"name" : "${var.service_name}-datadog-agent",
"image" : "public.ecr.aws/datadog/agent:latest",
"cpu" : 100,
"memory" : 512,
"essential" : true,
"portMappings" : [
{
"hostPort" : 8126,
"protocol" : "tcp",
"containerPort" : 8126
}
],
"environment" : [
{
"name" : "ECS_FARGATE",
"value" : "true"
}
}
]
},
  {
    "name": "vault-agent",
    "image": "hashicorp/vault-agent:latest",
    "cpu": 50,
    "memory": 128,
    "essential": false,
    "command": ["vault", "agent", "-config=/vault/config/${var.service_name}-vault-agent-config.hcl"],
    "mountPoints": [
      {
        "sourceVolume": "vault-secrets",
        "containerPath": "/vault/secrets"
      },
      {
        "sourceVolume": "vault-config",
        "containerPath": "/vault/config"
      }
    ],
    "environment": [
      {
        "name": "VAULT_ADDR",
        "value": "${var.vault_address}"
      }
    ]
  }
]
DEFINITION

  volume {
    name = "vault-secrets"
    docker_volume_configuration {
      scope = "shared"
    }
  }

  volume {
    name = "vault-config"
    docker_volume_configuration {
      scope = "shared"
    }
  }
}
And this is my vault agent config template:
pid_file = "/tmp/vault-agent-pid"

auto_auth {
  method "aws" {
    mount_path = "auth/aws"
    config = {
      type = "iam"
      role = "${iam_role}"
    }
  }

  sink "file" {
    config = {
      path = "/vault/secrets/env_vars"
    }
  }
}

vault {
  address = "${vault_address}"
}

template {
  source      = "/vault/templates/env.ctmpl"
  destination = "/vault/secrets/env_vars"
}
The error I get when trying to deploy via terraform is:
Error: creating ECS Task Definition (test): operation error ECS: RegisterTaskDefinition, https response error StatusCode: 400, RequestID: f612cedb-f65a-4a9c-adb3-998a1fbc5f54, ClientException: Invalid arn syntax.
with module.ecs_service.aws_ecs_task_definition.task
on ecs/main.tf line 2, in resource "aws_ecs_task_definition" "task":
resource "aws_ecs_task_definition" "task" {
2 Answers
The "Invalid ARN syntax" points to a problem with the ARNs for
task_role_arn
orexecution_role_arn
.Make sure that these ARNs follow the correct format (
arn:aws:iam::<account-id>:role/<role-name>
) and that the variablesvar.ecs_task_role_arn
andvar.ecs_task_execution_role_arn
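For instance, the variable assignments in terraform.tfvars might look something like this; the account ID and role names are made up for illustration, substitute your own:
# Hypothetical values in terraform.tfvars, replace with your real role ARNs.
ecs_task_role_arn           = "arn:aws:iam::123456789012:role/my-service-task-role"
ecs_task_execution_role_arn = "arn:aws:iam::123456789012:role/ecsTaskExecutionRole"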
You should also get more insight from running terraform plan. Use it to check whether the ARNs are interpolated correctly, or enable debugging with TF_LOG=DEBUG terraform apply for more detailed output. I hope this helps you get to a solution.
Reviewing the problem, I see that it may be rooted in invalid ARN syntax in the Terraform configuration for your ECS task definition. It is possible that a variable that should contain a valid ARN is either poorly defined or empty. To correct the ARNs and make sure all the variables, and therefore the syntax, are correct, let's go step by step.
Can you check the following:
1. Invalid ARNs in task_role_arn and execution_role_arn:
Verify the Variables: Ensure that var.ecs_task_role_arn and var.ecs_task_execution_role_arn contain valid ARNs and are correctly assigned.
Correct Format: ARNs should follow this format: arn:aws:iam::<account-id>:role/<role-name>. For example:
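(The account ID and role name below are placeholders for illustration, not values from your setup.)
arn:aws:iam::123456789012:role/ecsTaskExecutionRole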
Empty or Null Values: Check very carefully that these variables are not empty or undefined. Terraform will pass an empty or null value straight through, and that is exactly what produces this ARN syntax error. Look at your variables.tf and terraform.tfvars files to confirm that these variables have values assigned.
2. Syntax Errors in the container_definitions JSON:
JSON Validation: You could use a tool like JSONLint to validate the syntax. I know this step is very obvious, but it doesn't hurt to do it and verify that the JSON in container_definitions is well formed.
Incorrect Properties: For instance, within your container definitions you are using "networkMode": "awsvpc". This is a task-level property, so it should not appear on an individual container; in my opinion it should be removed from your container definition.
3. Incorrect Configuration of environmentFiles:
Using environmentFiles: This parameter expects you to reference an environment file stored in S3, and the value must be the ARN of the S3 object, not a container path.
Correction: To load environment variables from a file this way, you need to provide the ARN of that S3 object. I'll give you an example:
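A sketch of what that could look like; the bucket and object names are made up, and note that ECS requires the object to have a .env extension:
"environmentFiles": [
  {
    "value": "arn:aws:s3:::my-config-bucket/my-service/env_vars.env",
    "type": "s3"
  }
]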
Alternative: If you are instead mounting the file from a volume (as you are doing with the Vault agent), consider using the secrets parameter or loading the variables inside the application; a short sketch of the secrets parameter follows.
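Purely for illustration, this is the shape of the secrets parameter in a container definition when pulling a value from AWS Secrets Manager; the variable name and secret ARN are invented, and with HCP Vault you may prefer to keep reading the file the agent renders:
"secrets": [
  {
    "name": "DB_PASSWORD",
    "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-service/db-password"
  }
]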
4. Using Volumes with Fargate:
Fargate Restrictions: Remember that Fargate does not support docker_volume_configuration. You should use EFS (Elastic File System) volumes instead.
EFS Volume Configuration:
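A rough sketch of an EFS-backed volume block in the task definition; the file system variable (var.vault_secrets_efs_id) is invented here, so point it at your own EFS resource:
volume {
  name = "vault-secrets"

  efs_volume_configuration {
    # Hypothetical EFS file system ID variable, replace with your own.
    file_system_id     = var.vault_secrets_efs_id
    root_directory     = "/"
    transit_encryption = "ENABLED"
  }
}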
5. Mount Points in Containers:
Carefully review that the mountPoints in the containers reference the defined volumes by name and are configured in a way Fargate supports.
6. Reviewing IAM Roles:
Necessary Permissions: It doesn't hurt to review the permissions attached to the referenced IAM roles; ECS tasks need them, and missing permissions can block access to other resources such as EFS and S3. I know it's obvious, but check it anyway.
Existence of Roles: The same goes for the roles themselves; make sure they actually exist in the AWS account.
7. Undefined or Misreferenced Variables:
Confirm Variables: Check that the variables used in the configuration (${var.*}) are defined, and if they are, make sure they are referenced correctly.
8. Output Variables for Debugging:
For more precise debugging, you could add something like the following to the Terraform configuration to see the values the variables actually hold:
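A minimal sketch of debug outputs; the output names are arbitrary:
# Temporary outputs to confirm what the role variables actually contain.
output "debug_task_role_arn" {
  value = var.ecs_task_role_arn
}

output "debug_task_execution_role_arn" {
  value = var.ecs_task_execution_role_arn
}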
Then run terraform plan to see what values they hold and to confirm that those values are correct.
9. Simplify for Debugging:
If the error persists, temporarily strip the task definition down to a single minimal container with no sidecars or volumes, confirm it registers, and then add the Vault agent and Datadog containers back one at a time, for example:
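A stripped-down container_definitions block, purely illustrative, just to confirm the roles and the basic definition register cleanly:
container_definitions = <<DEFINITION
[
  {
    "name": "${var.service_name}",
    "image": "${var.container_image}",
    "essential": true
  }
]
DEFINITION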
Summary and Recommendations:
In summary, the "Invalid arn syntax" error is almost always caused by a malformed ARN or by null values being passed in variables that should hold ARNs. My recommendation is to first verify the variables ecs_task_role_arn and ecs_task_execution_role_arn and ensure they hold valid, well-formatted ARNs. Beyond that, carefully review your container_definitions configuration to correct syntax errors and properties placed where they don't belong. Lastly, adjust your volumes and mount points so that they are fully compatible with Fargate.
I hope this brings you to a solution or closer to one. If you have more questions or need more details, please ask. Best regards.