
I have an EC2 instance in eu-west-2. I use Python (boto3) to generate a pre-signed URL that I return to the web browser to enable temporary access to files so that they can be displayed.

I’ve set up replicated S3 buckets in a number of regions, and a multi-region access point.

The code I am using to generate the pre-signed URL is as follows:

import boto3
from botocore.config import Config

s3Config = Config(
    signature_version='v4'
)

s3Client_MultiRegion = boto3.client(
    's3',
    aws_access_key_id=appConfig.S3_ACCESS_KEY,
    aws_secret_access_key=appConfig.S3_SECRET_KEY,
    config=s3Config
)

protectedFileUrl = s3Client_MultiRegion.generate_presigned_url(
    HttpMethod='GET',
    ClientMethod='get_object',
    Params={
        'Bucket': appConfig.AMAZON_S3_BUCKET_MULTIREGIONACCESSPOINT_ARN,
        'Key': folderPath
    },
    ExpiresIn=60
)

When I make a request from a web browser via a VPN location outside eu-west-2, for example in ap-southeast-1, I get the following error:

Error parsing the X-Amz-Credential parameter; the region 'us-east-1' is wrong; expecting 'ap-southeast-1'

I assume that it uses us-east-1 by default if no region is specified, but my understanding is that the pre-signed URL should be location-agnostic and that the access point should route the request.

If it DOES require a region to be specified, how would I correctly do that given that the code is executed on the web server?

If not, what am I doing wrong here?

Incidentally, I have public files set up behind a CloudFront distribution that uses a Lambda@Edge function to modify the headers to add SigV4 headers. I am assuming that this isn't necessary for protected files, since the pre-signed URL should include SigV4 parameters anyway.

Thank you.

2 Answers


  1. Chosen as BEST ANSWER

    Thanks to Fedi's response I was able to more deeply understand the requirement to have a region in boto3 pre-sign requests. I ended up ascertaining the closest region by latency via a ping check against all the regions I am using, based on this example.

    Once the closest of my regions has been ascertained (basically the first to return a 200), it is set for the duration of the session, and the region and bucket name for that selection are passed in the headers of each request made to the server.

    I can then use Flask's request.headers.get() method to read the headers and inject them into the boto3 pre-signing request. The URL returned to the browser now correctly references the user's closest regional S3 bucket.


  2. The issue you’re facing is due to a mismatch between the region used when signing the pre-signed URL and the region where the request is routed. Even though you’re using an S3 Multi-Region Access Point, pre-signed URLs still need to include a valid region-specific signature.

    When a pre-signed URL is created, it includes a region as part of the signature. If the request ends up in a different region (as in your example, where the request was routed to ap-southeast-1 but the URL was signed for us-east-1), the signature will be rejected.
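You can see that embedded region for yourself: SigV4 pre-signed URLs carry it inside the X-Amz-Credential query parameter (access-key/date/region/service/aws4_request). A stdlib-only sketch, using a made-up URL:

```python
# Extract the signing region from a SigV4 pre-signed URL's
# X-Amz-Credential parameter (format: key/date/region/service/aws4_request).
from urllib.parse import urlparse, parse_qs

def signing_region(presigned_url):
    credential = parse_qs(urlparse(presigned_url).query)['X-Amz-Credential'][0]
    return credential.split('/')[2]

# Made-up example URL for illustration:
url = ('https://example-bucket.s3.amazonaws.com/file.txt'
       '?X-Amz-Credential=AKIAEXAMPLE%2F20240101%2Fus-east-1%2Fs3%2Faws4_request'
       '&X-Amz-Signature=abc')
print(signing_region(url))  # → us-east-1
```

This is why the error message names the region it found in the URL versus the region that received the request.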

    You have a few options: use Lambda@Edge to add an attribute to the headers for multi-region offload; add region_name to the boto3 client Config (e.g. region_name='eu-west-2'), which works well if your clients and your server are generally in the same region; or write a function that dynamically determines region_name based on the client's location:

    from botocore.config import Config

    def get_region_based_on_client():
        # determine the client's nearest region (implementation elided)
        ...
        return 'ap-southeast-1'

    s3_config = Config(
        signature_version='v4',
        region_name=get_region_based_on_client()
    )
    