
I have a presigned URL for a file in a vendor’s S3 bucket. I want to copy that file into my own bucket, and I’d rather not route it through the machine I’m running the copy from. My thought was to use the AWS CLI’s s3 sync or s3 cp commands to copy the file from one bucket to the other, but those commands require s3:// URLs, not https://.

I tried converting the HTTP URL by replacing "https://bucketname.s3.region.amazonaws.com" with "s3://bucketname", but that gives an Access Denied error with s3 sync and a Bad Request with s3 cp. Is there any way to do this, or do I need to download it locally with HTTP, then upload to my bucket with the CLI?

2 Answers


  1. The problem here is that you need to authenticate against two different accounts: the source to read and the destination to write. If you had access to both, i.e. if the credentials you use to read could also write to your own bucket, you would be able to bypass the middle-man.

    That’s not the case here, so your best bet is to download the file first, then authenticate with your own account and put the object in your bucket.
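
    A minimal sketch of that download-then-upload flow, assuming Python with the requests and boto3 libraries (the URL, bucket, and key below are placeholders), could look like this; streaming keeps the file off disk, although the bytes still pass through your machine:

    ```python
    import boto3
    import requests

    # Placeholders: substitute the vendor's presigned URL and your own bucket/key.
    presigned_url = "https://vendor-bucket.s3.us-east-1.amazonaws.com/report.csv?X-Amz-Signature=..."
    dest_bucket = "my-bucket"
    dest_key = "report.csv"

    s3 = boto3.client("s3")

    # Stream the object from the presigned URL straight into an upload, so no
    # temporary file is written; upload_fileobj reads the body in chunks rather
    # than loading the whole object into memory.
    with requests.get(presigned_url, stream=True) as resp:
        resp.raise_for_status()
        s3.upload_fileobj(resp.raw, dest_bucket, dest_key)
    ```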

  2. Amazon S3 has a built-in CopyObject command that can read from one S3 bucket and write to another without downloading the data. To use it, you need credentials that have GetObject permission on the source bucket and PutObject permission on the destination bucket. Those credentials can be issued by either the AWS Account that owns the source bucket or the AWS Account that owns the destination bucket, so you would need to work with the admins who control the ‘other’ AWS Account.

    If that is not possible and your only way of accessing the source object is via a pre-signed URL, then you cannot use CopyObject. Instead, you would need to download the source file and then separately upload it to Amazon S3.
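
    If you can arrange such credentials, a minimal boto3 sketch of the server-side copy (bucket and key names below are placeholders) might look like:

    ```python
    import boto3

    # Placeholders: the credentials in use must allow GetObject on the source
    # bucket and PutObject on the destination bucket.
    source_bucket = "vendor-bucket"
    source_key = "report.csv"
    dest_bucket = "my-bucket"
    dest_key = "report.csv"

    s3 = boto3.client("s3")

    # Server-side copy: S3 reads the source object and writes the destination
    # object itself, so the data never travels through your machine.
    s3.copy_object(
        CopySource={"Bucket": source_bucket, "Key": source_key},
        Bucket=dest_bucket,
        Key=dest_key,
    )
    ```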
