
I want to upload files to S3 from a browser accessing a web application running on Fargate.

However, this S3 bucket must have Block Public Access set to ON.

So it is impossible to upload directly from the browser.

I have some alternative ideas:

  • 1. Upload files into the container in Fargate and copy them to S3 with Lambda.

    Q1) Is it possible to access a directory inside the container from Lambda?

    Q2) How can the Lambda be triggered?

  • 2. Upload files to EBS and copy them to S3 with Lambda.

    Q1) Is it possible to access EBS from Lambda? (Maybe yes?)

    Q2) Is it possible to trigger the Lambda when a new file is created in EBS?

Or is there a standard, practical method for this case?

The web application is Python Django.

Any hint is appreciated.

Thank you very much.

2 Answers


  1. One approach would be to use pre-signed URLs to allow your users to upload files directly to S3 without exposing the bucket. Pre-signed URLs are generated using your AWS credentials and are only valid for a limited period of time. This way, the S3 bucket can keep Block Public Access enabled, and your users can still upload files from the browser.

    See this code (copied from the official AWS documentation). It generates a presigned URL that can perform an S3 action for a limited time, then uses the Requests package to make a request with the URL.

    import argparse
    import logging
    import boto3
    from botocore.exceptions import ClientError
    import requests
    
    logger = logging.getLogger(__name__)
    
    
    def generate_presigned_url(s3_client, client_method, method_parameters, expires_in):
        """
        Generate a presigned Amazon S3 URL that can be used to perform an action.
    
        :param s3_client: A Boto3 Amazon S3 client.
        :param client_method: The name of the client method that the URL performs.
        :param method_parameters: The parameters of the specified client method.
        :param expires_in: The number of seconds the presigned URL is valid for.
        :return: The presigned URL.
        """
        try:
            url = s3_client.generate_presigned_url(
                ClientMethod=client_method,
                Params=method_parameters,
                ExpiresIn=expires_in
            )
            logger.info("Got presigned URL: %s", url)
        except ClientError:
            logger.exception(
                "Couldn't get a presigned URL for client method '%s'.", client_method)
            raise
        return url
    
    
    def usage_demo():
        logging.basicConfig(level=logging.INFO, format='%(levelname)s: %(message)s')
    
        print('-'*88)
        print("Welcome to the Amazon S3 presigned URL demo.")
        print('-'*88)
    
        parser = argparse.ArgumentParser()
        parser.add_argument('bucket', help="The name of the bucket.")
        parser.add_argument(
            'key', help="For a GET operation, the key of the object in Amazon S3. For a "
                        "PUT operation, the name of a file to upload.")
        parser.add_argument(
            'action', choices=('get', 'put'), help="The action to perform.")
        args = parser.parse_args()
    
        s3_client = boto3.client('s3')
        client_action = 'get_object' if args.action == 'get' else 'put_object'
        url = generate_presigned_url(
            s3_client, client_action, {'Bucket': args.bucket, 'Key': args.key}, 1000)
    
        print("Using the Requests package to send a request to the URL.")
        response = None
        if args.action == 'get':
            response = requests.get(url)
        elif args.action == 'put':
            print("Putting data to the URL.")
            try:
                with open(args.key, 'r') as object_file:
                    object_text = object_file.read()
                response = requests.put(url, data=object_text)
            except FileNotFoundError:
                print(f"Couldn't find {args.key}. For a PUT operation, the key must be the "
                      f"name of a file that exists on your computer.")
    
        if response is not None:
            print("Got response:")
            print(f"Status: {response.status_code}")
            print(response.text)
    
        print('-'*88)
    
    
    if __name__ == '__main__':
        usage_demo()
    

    Generate a presigned POST request to upload a file.

    import logging

    from botocore.exceptions import ClientError

    logger = logging.getLogger(__name__)


    class BucketWrapper:
        """Encapsulates S3 bucket actions."""
        def __init__(self, bucket):
            """
            :param bucket: A Boto3 Bucket resource. This is a high-level resource in Boto3
                           that wraps bucket actions in a class-like structure.
            """
            self.bucket = bucket
            self.name = bucket.name
    
        def generate_presigned_post(self, object_key, expires_in):
            """
            Generate a presigned Amazon S3 POST request to upload a file.
            A presigned POST can be used for a limited time to let someone without an AWS
            account upload a file to a bucket.
    
            :param object_key: The object key to identify the uploaded object.
            :param expires_in: The number of seconds the presigned POST is valid.
            :return: A dictionary that contains the URL and form fields that contain
                     required access data.
            """
            try:
                response = self.bucket.meta.client.generate_presigned_post(
                    Bucket=self.bucket.name, Key=object_key, ExpiresIn=expires_in)
                logger.info("Got presigned POST URL: %s", response['url'])
            except ClientError:
                logger.exception(
                    "Couldn't get a presigned POST URL for bucket '%s' and object '%s'",
                    self.bucket.name, object_key)
                raise
            return response
    

    Source: https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html#generating-presigned-url
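    For completeness, here is a minimal sketch of how a client could consume the presigned POST response; the bucket and key names below are placeholders, not from the docs excerpt above. The form fields returned by S3 must be sent along with the file:

    import boto3
    import requests

    # Construct the wrapper around a bucket resource. The bucket name is a
    # placeholder; Block Public Access can stay enabled on the bucket.
    bucket = boto3.resource('s3').Bucket('my-private-bucket')
    wrapper = BucketWrapper(bucket)

    # Ask S3 for a presigned POST that is valid for five minutes.
    post = wrapper.generate_presigned_post('uploads/example.txt', expires_in=300)

    # Send the returned form fields together with the file.
    with open('example.txt', 'rb') as f:
        http_response = requests.post(
            post['url'], data=post['fields'], files={'file': ('example.txt', f)})

    print(http_response.status_code)  # S3 returns 204 No Content on success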

  2. 1. Upload files into the container in Fargate and copy them to S3 with Lambda.

    Q1) Is it possible to access a directory inside the container from Lambda?

    Not from Fargate's default ephemeral file system, but you could share an EFS volume between Fargate and Lambda (see the sketch after these answers).

    Q2) How can the Lambda be triggered?

    There’s not a clean way to do this. EFS doesn’t emit file-creation events, so you would have to invoke the Lambda on a schedule or directly from your application code.

    2. Upload files to EBS and copy them to S3 with Lambda.

    Q1) Is it possible to access EBS from Lambda? (Maybe yes?)

    Not only is it not possible to access EBS from Lambda, but it is also not possible to access EBS from Fargate.

    Q2) Is it possible to trigger the Lambda when a new file is created in EBS?

    No.
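
    If you do go the EFS route, the Lambda function must be attached to the same VPC and EFS access point as the Fargate task. A minimal sketch of the copy step is below; the mount path, bucket name, and environment variable are all assumptions, and since there is no file-created trigger you would invoke this on a schedule or directly from your application:

    import os

    import boto3

    s3 = boto3.client('s3')
    MOUNT_PATH = '/mnt/uploads'  # hypothetical: the EFS mount path configured on the function
    BUCKET = os.environ['UPLOAD_BUCKET']  # hypothetical environment variable

    def handler(event, context):
        # Copy every regular file on the shared EFS volume to S3, then remove it.
        for name in os.listdir(MOUNT_PATH):
            path = os.path.join(MOUNT_PATH, name)
            if os.path.isfile(path):
                s3.upload_file(path, BUCKET, name)
                os.remove(path)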


    My first question would be why you even want to involve Lambda in the process. Why do you think including Lambda would be helpful here at all?

    You could simply upload directly to your Django application running in Fargate, and then have the Python code copy the file to S3 after it receives the upload. There’s even an S3 storage backend for Django’s static and media files (django-storages). This looks like a good tutorial for doing that.

    It doesn’t sound like you want to serve the files from S3, though, only upload them to S3, so that tutorial may be more than you need. You probably just need to use boto3 to copy the file to S3 after it is uploaded.
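
    A minimal sketch of that approach is below; the bucket name, view wiring, and form field name are assumptions, and the Fargate task role needs s3:PutObject on the bucket:

    # views.py
    import boto3
    from django.http import JsonResponse
    from django.views.decorators.http import require_POST

    s3 = boto3.client('s3')
    BUCKET = 'my-private-bucket'  # hypothetical; Block Public Access can stay ON

    @require_POST
    def upload(request):
        uploaded = request.FILES['file']  # Django's UploadedFile is file-like
        # upload_fileobj streams the object to S3 without writing it to disk first
        s3.upload_fileobj(uploaded, BUCKET, uploaded.name)
        return JsonResponse({'key': uploaded.name})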


    Alternatively, you could use S3 pre-signed URLs: the server generates a URL and passes it to the browser, and the browser uses it to upload directly to S3, even though the bucket has Block Public Access set to ON.
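
    For example, here is a sketch of a Django endpoint that hands the browser a presigned PUT URL; the names are placeholders:

    # views.py
    import boto3
    from django.http import JsonResponse

    s3 = boto3.client('s3')
    BUCKET = 'my-private-bucket'  # hypothetical

    def presign_upload(request):
        key = request.GET['key']  # e.g. ?key=uploads/photo.jpg
        url = s3.generate_presigned_url(
            'put_object',
            Params={'Bucket': BUCKET, 'Key': key},
            ExpiresIn=300,  # URL valid for five minutes
        )
        return JsonResponse({'url': url})

    The browser can then upload with fetch(url, { method: 'PUT', body: file }); note that the bucket will also need a CORS configuration allowing PUT from your origin.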
