
Would love to hear your ideas.

In this project, multiple users (let’s say 1000 users) will upload files into the same bucket.

The app already has user authentication via a Web API.

Questions

  1. Should each user have their own bucket, or just their own objects within a shared bucket?
  2. How do I set permissions for different users so that each user's files are private and only the individuals a file is shared with can access it?

How can I achieve this without creating an AWS account or IAM user for each user and then granting them access?

Tried:

  • I tried creating 3 IAM users with different roles and managing the permissions from the backend by storing the names of each user's allowed folders in the database.

Expected:

Features to achieve:

  • Users:
    • User-A
    • User-B
    • User-C
  • User-A, User-B, and User-C each have a folder named [userId:email] inside the S3 bucket, and each user can only read the folders that are allowed to them.


2 Answers


    1. Each user can have their own bucket if files are not duplicated across users. If the same files are reused, it is better to control access by permission levels, e.g. Admin, User, Read.
    2. The Admin level can access the User and Read levels and everything below them, and so on.
    3. Store each file in a location that is accessible to a particular role and to the roles above it. That way you avoid keeping unnecessary duplicate copies of the same file in your S3 bucket.
    4. Add a role column to your user table to enable this permission system (see the sketch below).
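
    A minimal sketch of that role-level idea, assuming a simple hierarchy (Admin > User > Read) and hypothetical S3 key prefixes per level; none of these names come from the answer itself:

```python
# Hypothetical sketch of the role-level check described above.
# Role names, prefixes and the hierarchy are assumptions, not part of the answer.

ROLE_RANK = {"Read": 0, "User": 1, "Admin": 2}

# Each S3 key prefix is tagged with the minimum role needed to read it.
PREFIX_MIN_ROLE = {
    "shared/read-only/": "Read",
    "shared/user-files/": "User",
    "admin/reports/": "Admin",
}

def can_access(user_role: str, key: str) -> bool:
    """Return True if a user with `user_role` may read the object at `key`."""
    for prefix, min_role in PREFIX_MIN_ROLE.items():
        if key.startswith(prefix):
            return ROLE_RANK[user_role] >= ROLE_RANK[min_role]
    return False  # unknown prefix: deny by default

# Example: an Admin can read everything, a Read-level user only shared/read-only/
assert can_access("Admin", "shared/user-files/report.pdf")
assert not can_access("Read", "admin/reports/summary.xlsx")
```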
  1. You should definitely not create one bucket per user. There are limits to the number of buckets you can have in your AWS Account.

    You should also not provide users with AWS credentials.

    Instead, it will be the responsibility of your application to manage the upload process and to control which users have access to which objects.

    Uploading

    The upload process would be:

    • The user logs in to your web app (using your existing Web API authentication).
    • The web app either accepts the file and uploads it to Amazon S3 itself, or generates an Amazon S3 pre-signed URL that lets the browser upload directly to a specific key in the bucket.
    • In either case, the user never receives AWS credentials.

    You then have a choice about how to associate the uploaded file with the user. You can either:

    • Use separate folders in the S3 bucket for each user, or
    • The web app can use a database to maintain a record of all uploaded files and their ‘owners’

    If users might upload thousands of files each, then the database will provide faster listing of objects. Without a database, the app would need to list the contents of the user’s folder, which can be relatively slow since each API call only returns a maximum of 1000 objects.
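
    A rough sketch of that upload flow using boto3, assuming a hypothetical bucket name (my-app-uploads) and a per-user key prefix; the exact naming scheme is an assumption, not part of the answer:

```python
import uuid
import boto3

s3 = boto3.client("s3")

BUCKET = "my-app-uploads"  # hypothetical bucket name

def create_upload_url(user_id: str, filename: str, expires_in: int = 300) -> dict:
    """Generate a pre-signed URL the browser can use to PUT a file
    directly to S3, scoped to the user's own prefix."""
    key = f"{user_id}/{uuid.uuid4()}-{filename}"
    url = s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires_in,  # link stops working after this many seconds
    )
    # Record the key against the user in your own database here,
    # so later listings do not have to call S3 at all.
    return {"upload_url": url, "key": key}
```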

    Downloading

    Your web app would provide a list of the objects to the user. When generating the HTML for this page, the list of each object would include an Amazon S3 pre-signed URL that provides time-limited access to private objects in Amazon S3. Basically, they can use the link for a given time period (eg 5 minutes), after which the link will no longer work.

    Your app would be responsible for making sure that the HTML page only lists objects that the user is entitled to access. As mentioned above, this list would either come from listing the user’s folder or by referencing the database that tracks uploaded objects.
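
    A matching sketch for the download side, again assuming boto3, the same hypothetical bucket and per-user prefixes (a database lookup could replace the list_objects_v2 call):

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "my-app-uploads"  # hypothetical bucket name

def list_user_files_with_links(user_id: str, expires_in: int = 300) -> list[dict]:
    """List a user's objects and attach a time-limited pre-signed GET URL to each."""
    results = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=f"{user_id}/"):
        for obj in page.get("Contents", []):
            url = s3.generate_presigned_url(
                ClientMethod="get_object",
                Params={"Bucket": BUCKET, "Key": obj["Key"]},
                ExpiresIn=expires_in,  # e.g. 300 seconds = 5 minutes
            )
            results.append({"key": obj["Key"], "download_url": url})
    return results
```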

    It’s like a photo-sharing website

    The above processes are similar to how you would interact with a photo-sharing website. Users log on via a web page, upload files via the web page, and then get a listing of their photos on a web page. They can download by clicking a link on the web page. They never use AWS credentials to upload or download; the fact that the photos are stored in S3 is irrelevant to the users.

    If you are seeking more advanced features such as the ability to share files between users (such as sharing photos with family members), then the web app would need to maintain a database of permissions to know which files can be shared with which users.
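
    If sharing is added, that permissions database could be as simple as one table for ownership and one for grants. A hypothetical sketch using sqlite3 (table and column names are assumptions):

```python
import sqlite3

# Hypothetical schema: one row per object with its owner,
# plus one row per (object, user) share grant.
conn = sqlite3.connect("permissions.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS files (
        s3_key   TEXT PRIMARY KEY,
        owner_id TEXT NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE IF NOT EXISTS shares (
        s3_key         TEXT NOT NULL,
        shared_with_id TEXT NOT NULL,
        PRIMARY KEY (s3_key, shared_with_id)
    )
""")

def user_can_download(user_id: str, s3_key: str) -> bool:
    """A user may download an object they own or one explicitly shared with them."""
    owns = conn.execute(
        "SELECT 1 FROM files WHERE s3_key = ? AND owner_id = ?", (s3_key, user_id)
    ).fetchone()
    shared = conn.execute(
        "SELECT 1 FROM shares WHERE s3_key = ? AND shared_with_id = ?", (s3_key, user_id)
    ).fetchone()
    return owns is not None or shared is not None
```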
