I’m using the boto3 client to get a download link for the zip file of a Lambda function on AWS.
I want to transfer this zip file from that download link directly to an S3 bucket, without storing or piping anything on my machine.
Is it possible to do this with the available AWS APIs?
Edit: Can datasync perhaps help with this?
2 Answers
The only way to retrieve your function package is to call `GetFunction`, which returns a link, valid for 10 minutes, to download the zip file.
I guess if you really want to keep it serverless, you will want to build a Lambda function that calls `GetFunction`, downloads the package, and then uploads it to S3 with `PutObject`.
Then you can invoke the function programmatically or use the CLI.
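A minimal sketch of such a function in Python; the event fields are hypothetical, and it uses `upload_fileobj` rather than a plain `PutObject` so the zip streams through memory instead of landing on disk:

```python
import urllib.request

import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

def handler(event, context):
    # Presigned URL for the target function's deployment package;
    # the execution role needs lambda:GetFunction and s3:PutObject.
    url = lambda_client.get_function(
        FunctionName=event["function_name"]
    )["Code"]["Location"]

    # urlopen returns a file-like object, so the zip is streamed
    # straight into S3 without being written to local storage.
    with urllib.request.urlopen(url) as body:
        s3.upload_fileobj(body, event["bucket"], event["key"])

    return {"copied_to": f"s3://{event['bucket']}/{event['key']}"}
```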
You could use the `aws s3 cp -` command, which can stream from standard input to a specified bucket and key, and combine it with `aws lambda get-function`.
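For example, this will stream your function’s package directly to S3 (with `my-function`, `my-bucket`, and the key as placeholders):

```sh
# Grab the presigned URL for the package, fetch it with curl,
# and pipe the bytes straight into S3; no local file is written.
curl -s "$(aws lambda get-function --function-name my-function \
             --query 'Code.Location' --output text)" \
  | aws s3 cp - s3://my-bucket/my-function.zip
```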
`curl` in this context does not save the file locally. Instead, it streams the data directly to stdout, which is then piped as stdin to the `aws s3 cp -` command.

Or, if you’re using `boto3`, you could combine it with `requests` with `stream` set to `True`.
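Sample code; a minimal sketch where the function name, bucket, and key are placeholders:

```python
import boto3
import requests

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Presigned URL for the function's deployment package (valid for 10 minutes)
url = lambda_client.get_function(FunctionName="my-function")["Code"]["Location"]

# stream=True keeps the response body as a file-like object
# instead of loading the whole zip into memory.
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()
    # upload_fileobj reads resp.raw in chunks and uploads them to S3.
    s3.upload_fileobj(resp.raw, "my-bucket", "my-function.zip")
```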
Or you could use the Go SDK’s Upload Manager, which provides concurrent uploads to S3 by taking advantage of S3’s multipart APIs.
The code has been adapted from this answer.
I’ve added an `http.Get` call so the uploader reads from `resp.Body` instead of from a local file.
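A rough sketch of what the adapted program might look like, assuming the AWS SDK for Go v1; the flag names are illustrative:

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	url := flag.String("url", "", "presigned URL from aws lambda get-function")
	bucket := flag.String("bucket", "", "destination S3 bucket")
	key := flag.String("key", "", "destination S3 key")
	region := flag.String("region", "eu-central-1", "AWS region")
	flag.Parse()

	// Download the package; resp.Body is an io.Reader, so nothing touches disk.
	resp, err := http.Get(*url)
	if err != nil {
		log.Fatalf("download failed: %v", err)
	}
	defer resp.Body.Close()

	sess := session.Must(session.NewSession(&aws.Config{Region: region}))

	// The upload manager splits the stream into parts and uploads them
	// concurrently via S3's multipart API.
	uploader := s3manager.NewUploader(sess)
	result, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: bucket,
		Key:    key,
		Body:   resp.Body,
	})
	if err != nil {
		log.Fatalf("upload failed: %v", err)
	}
	fmt.Println("uploaded to", result.Location)
}
```

To build this, just run:

```sh
go build -o lambda2s3 main.go   # the binary name is arbitrary
```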
And then, use it like this:
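Assuming the binary and flags from the sketch above:

```sh
./lambda2s3 \
  -url "$(aws lambda get-function --function-name my-function \
            --query 'Code.Location' --output text)" \
  -bucket my-bucket \
  -key my-function.zip
```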
There’s an additional `--region` flag, but this one defaults to `eu-central-1`, so there’s no need to supply it. However, if your region is different, feel free to change it.

Note on `datasync`: no, it can’t do what you want, per the DataSync FAQ.