I am trying to create an EBS volume and then copy some data from S3 onto it. I am getting the error "fatal error: Unable to locate credentials". I assume the EC2 instance that Packer launches to build the volume needs AWS credentials and/or access to S3. How do I set that up?

My code looks like this:

source "amazon-ebsvolume" "data-volume" {
  region        = "us-east-1"
  ssh_username  = "ubuntu"
  instance_type = "t2.micro"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }

  ebs_volumes {
    volume_type           = "gp2"
    device_name           = "/dev/xvdf"
    delete_on_termination = false
    tags = {
      zpool = "data"
      Name  = "development-data"
    }
    volume_size = 20
  }
}

build {
  name = "development-data-volume"
  sources = [
    "source.amazon-ebsvolume.data-volume"
  ]

  provisioner "shell" {
    script = "scripts/install_awscli.sh"
  }

  provisioner "shell" {
    environment_vars = [
      "DATA_PATH=data",
    ]
    inline = [
      "rm -rf $DATA_PATH",
      "mkdir $DATA_PATH",
      "aws s3 cp s3://path/to/kaiju/viruses/names.dmp $DATA_PATH/names.dmp"
    ]
  }

}

2 Answers


  1. Chosen as BEST ANSWER

    I was able to download the data from S3 once I added a temporary_iam_instance_profile_policy_document granting access to S3.

    source "amazon-ebsvolume" "data-volume" {
      ...
      temporary_iam_instance_profile_policy_document {
        Version = "2012-10-17"
        Statement {
          Effect = "Allow"
          Action = [
            "s3:PutObject",
            "s3:GetObject",
            "s3:ListBucket",
            "s3:GetBucketLocation",
            "s3:PutObjectAcl"
          ]
          Resource = ["*"]
        }
      }
    }
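
    The Resource = ["*"] above grants those actions on every bucket. To scope it down to what the s3 cp download actually needs, the same block should accept bucket-level ARNs; a sketch, assuming a hypothetical bucket named your-bucket (s3:GetObject applies to the objects, while s3:ListBucket and s3:GetBucketLocation apply to the bucket ARN itself):

    temporary_iam_instance_profile_policy_document {
      Version = "2012-10-17"
      Statement {
        Effect   = "Allow"
        Action   = ["s3:GetObject"]
        Resource = ["arn:aws:s3:::your-bucket/*"]
      }
      Statement {
        Effect   = "Allow"
        Action   = ["s3:ListBucket", "s3:GetBucketLocation"]
        Resource = ["arn:aws:s3:::your-bucket"]
      }
    }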
    

  2. It looks like you’re trying to access an S3 bucket from an EC2 instance that Packer creates, and you’re getting an error because the EC2 instance doesn’t have the necessary AWS credentials to access the S3 bucket.

    There are a couple of ways you can provide AWS credentials to your EC2 instance:

    1. Instance Profiles: This is the recommended way to provide AWS credentials to an EC2 instance. An instance profile is a container for an IAM role that you attach to your EC2 instance at launch; the instance can then make AWS API requests with that role's permissions. Attach an instance profile whose role has the permissions needed to access your S3 bucket. In the Packer template, you specify the profile with the iam_instance_profile field:

      source "amazon-ebsvolume" "data-volume" {
        ...
        iam_instance_profile = "name-of-your-instance-profile"
        ...
      }
      

      You will need to create an IAM role with a policy that allows access to the necessary S3 actions (like s3:GetObject) on the required resources (like your S3 bucket and the objects in it), and then create an instance profile for that IAM role.
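
      For example, with the AWS CLI the role and instance profile can be created along these lines; a sketch, where the name packer-s3-read and the trust.json file are assumptions, and the AWS-managed AmazonS3ReadOnlyAccess policy stands in for a more tightly scoped one:

      # trust.json: trust policy that lets EC2 assume the role
      # {
      #   "Version": "2012-10-17",
      #   "Statement": [{
      #     "Effect": "Allow",
      #     "Principal": { "Service": "ec2.amazonaws.com" },
      #     "Action": "sts:AssumeRole"
      #   }]
      # }

      # Create the role with the EC2 trust policy
      aws iam create-role \
        --role-name packer-s3-read \
        --assume-role-policy-document file://trust.json

      # Attach read-only S3 access (broad; scope it down for production)
      aws iam attach-role-policy \
        --role-name packer-s3-read \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

      # Wrap the role in an instance profile Packer can reference
      aws iam create-instance-profile \
        --instance-profile-name packer-s3-read
      aws iam add-role-to-instance-profile \
        --instance-profile-name packer-s3-read \
        --role-name packer-s3-read

      The value you pass to iam_instance_profile is the instance profile name (packer-s3-read in this sketch).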

    2. Environment Variables: If you don’t want to use instance profiles, another way is to pass the AWS credentials as environment variables in your Packer provisioner. This is generally less secure and not recommended for production environments. If you choose to do this, your Packer template might look something like this:

      provisioner "shell" {
        environment_vars = [
          "AWS_ACCESS_KEY_ID=your-access-key-id",
          "AWS_SECRET_ACCESS_KEY=your-secret-access-key",
          "AWS_SESSION_TOKEN=your-session-token", # if you're using temporary credentials
          "DATA_PATH=data",
        ]
        inline = [
          "rm -rf $DATA_PATH",
          "mkdir $DATA_PATH",
          "aws s3 cp s3://path/to/kaiju/viruses/names.dmp $DATA_PATH/names.dmp"
        ]
      }
      

      Note: You should never hard-code sensitive information like access keys directly in your scripts or source code. Always use secure ways to store and retrieve sensitive information, like environment variables, secret management services, etc.

    Remember, the instance needs to have the necessary permissions to perform the s3 cp command, so you’ll need to ensure that the IAM role or user associated with the provided AWS credentials has at least s3:GetObject and s3:ListBucket permissions for the relevant S3 bucket.
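
    If you would rather not use the broad AmazonS3ReadOnlyAccess policy, a minimal inline policy scoped to just those two actions might look like this; a sketch, continuing the hypothetical packer-s3-read role and your-bucket names from above:

    # Write the scoped policy document and attach it to the role inline
    cat > s3-read-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::your-bucket/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::your-bucket"
        }
      ]
    }
    EOF

    aws iam put-role-policy \
      --role-name packer-s3-read \
      --policy-name s3-read \
      --policy-document file://s3-read-policy.json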
