
I have a Lambda function that runs via an HTTP trigger in API Gateway.

My goal is to:

  1. Copy files from input-bucket (where the user uploads their files)
  2. Create a UUID as a new location/folder in a separate bucket (output-bucket)
  3. Paste those objects into that new location
  4. Delete all objects from input-bucket and return a 200 HTTP status with the new location name in the body

However, I keep getting this:

[ERROR] ClientError: An error occurred (AccessDenied) when calling the CopyObject operation: Access Denied

This is my Lambda function:

import json
import logging
import uuid

import boto3

LOGGER = logging.getLogger(__name__)
logging.basicConfig(level=logging.ERROR)
LOGGER.setLevel(logging.DEBUG)

session = boto3.Session()
client = session.client('s3')

s3 = session.resource('s3')
src_bucket = s3.Bucket('input-bucket')


def lambda_handler(event, context):
    LOGGER.info('Reading files in {}'.format(src_bucket))

    # Create a zero-byte "folder" marker under a fresh UUID prefix.
    location = str(uuid.uuid4())
    client.put_object(Bucket='output-bucket', Body='', Key=location + '/')

    LOGGER.info('Generated folder location /{}/ in output-bucket'.format(location))

    # Copy every object from the source bucket into the new prefix.
    for obj in src_bucket.objects.all():
        copy_source = {
            'Bucket': 'input-bucket',
            'Key': obj.key
        }
        client.copy_object(Bucket='output-bucket',
                           Key=location + '/' + obj.key,
                           CopySource=copy_source)  ### ERROR OCCURS HERE
        LOGGER.info('Copied: {} from {} to {} folder in {}'.format(
            obj.key, 'input-bucket', location, 'output-bucket'))

    # Empty the source bucket once everything has been copied.
    src_bucket.objects.all().delete()
    LOGGER.info('Deleted all objects from {}'.format(src_bucket))
    return {
        "statusCode": 200,
        "body": json.dumps({
            'folder_name': location
        })
    }

As far as I know, I have set up the S3 bucket policies correctly (this is the policy for output-bucket; I have an identical policy for input-bucket):

resource "aws_s3_bucket" "file_output" {
  bucket = "output-bucket"
  acl    = "private"
  policy = <<EOF
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"ModifySrcBucket",
         "Effect":"Allow",
         "Principal":{
            "AWS":[
               "<XXXXXX>"
            ]
         },
         "Action":[
            "s3:PutObject",
            "s3:PutObjectTagging",
            "s3:GetObject",
            "s3:GetObjectTagging",
            "s3:DeleteObject"
         ],
         "Resource":["arn:aws:s3:::output-bucket/*","arn:aws:s3:::output-bucket/*/*"]
      },
      {
         "Effect":"Allow",
         "Principal":{
            "AWS":"<XXXXXXX>"
         },
         "Action":[
            "s3:ListBucket"
         ],
         "Resource":"arn:aws:s3:::output-bucket"
      }
   ]
}
    EOF
}

resource "aws_s3_bucket_ownership_controls" "disable_acl_output" {
  bucket = aws_s3_bucket.file_output.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

2 Answers


  1. I believe setting the permissions in the S3 bucket policy might not be enough. Your Lambda execution role needs permission to perform these operations as well, as explained in the AWS docs:

    • on the source bucket: s3:ListBucket and s3:GetObject
    • on the destination bucket: s3:ListBucket and s3:PutObject

    There might be more permissions required based on your exact use case (server-side encryption with KMS, versioned objects, etc.).
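
    For reference, a minimal execution-role policy covering just those four permissions might look like the following sketch (the bucket names are taken from the question and are placeholders for your own; extend it for KMS, versioning, etc. as needed):

    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect": "Allow",
             "Action": "s3:ListBucket",
             "Resource": [
                "arn:aws:s3:::input-bucket",
                "arn:aws:s3:::output-bucket"
             ]
          },
          {
             "Effect": "Allow",
             "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::input-bucket/*"
          },
          {
             "Effect": "Allow",
             "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::output-bucket/*"
          }
       ]
    }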

  2. Since your source and destination S3 buckets are in the same account as the AWS Lambda function:

    • Do not create a Bucket Policy
    • Instead, create an IAM Role and assign it to the Lambda function (see the Terraform sketch at the end of this answer)
    • Add these permissions to the IAM Role:
    
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Effect": "Allow",
             "Action": [
                "s3:GetObject",
                "s3:DeleteObject"
             ],
             "Resource": "arn:aws:s3:::input-bucket/*"
          },
          {
             "Effect": "Allow",
             "Action": "s3:ListBucket",
             "Resource": "arn:aws:s3:::input-bucket"
          },
          {
             "Effect": "Allow",
             "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::output-bucket/*"
          }
       ]
    }
    

    If the buckets were in different AWS Accounts, then you would need to put a Bucket Policy on the bucket in the ‘other’ account to grant permission for this IAM Role to be able to access it. (There are plenty of Answers on StackOverflow demonstrating how to do this.)
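
    Since the buckets in the question are already defined in Terraform, one way to wire this up is sketched below. Treat it as a minimal sketch, not a drop-in configuration: the resource names (lambda_copy, copy_files), function name, runtime, and deployment package are all placeholder assumptions, not taken from the question.

    resource "aws_iam_role" "lambda_copy" {
      name = "lambda-s3-copy-role"   # placeholder name

      # Allow the Lambda service to assume this role.
      assume_role_policy = jsonencode({
        Version = "2012-10-17"
        Statement = [{
          Effect    = "Allow"
          Principal = { Service = "lambda.amazonaws.com" }
          Action    = "sts:AssumeRole"
        }]
      })
    }

    resource "aws_iam_role_policy" "lambda_copy_s3" {
      name = "lambda-s3-copy"        # placeholder name
      role = aws_iam_role.lambda_copy.id

      # Same statements as the policy shown above.
      policy = jsonencode({
        Version = "2012-10-17"
        Statement = [
          {
            Effect   = "Allow"
            Action   = ["s3:GetObject", "s3:DeleteObject"]
            Resource = "arn:aws:s3:::input-bucket/*"
          },
          {
            Effect   = "Allow"
            Action   = "s3:ListBucket"
            Resource = "arn:aws:s3:::input-bucket"
          },
          {
            Effect   = "Allow"
            Action   = "s3:PutObject"
            Resource = "arn:aws:s3:::output-bucket/*"
          }
        ]
      })
    }

    resource "aws_lambda_function" "copy_files" {
      function_name = "copy-files"                       # placeholder
      role          = aws_iam_role.lambda_copy.arn       # attach the role above
      handler       = "lambda_function.lambda_handler"   # assumed module/handler names
      runtime       = "python3.9"                        # assumed runtime
      filename      = "lambda.zip"                       # placeholder package
    }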
