I have several Lambda functions and Elastic Beanstalk instances in a project that all use the same helper functions and constants.
I am trying to follow the DRY principle and not hard-code these into each Lambda/EB application, but instead keep them in shared modules that each Lambda/EB application imports.
Ideally, I was hoping to:
- put all these modules in a separate GitHub repo
- create a CodePipeline that deploys them to an S3 bucket
- import them into EB/Lambdas wherever needed
I have the first 2 steps done, but can’t figure out how to import the modules from S3.
Does anyone have any suggestions on how to do this?
Answers
There are a few ways I would consider. One is to pull the shared package straight from GitHub by adding a line like
git+https://github.com/path/to/package-two@41b95ec#egg=package-two
to the requirements.txt of both services.
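For context, a complete requirements.txt for one of the applications might look something like this (the repo path, commit hash, and package name are placeholders, not from the original answer):

```
# requirements.txt of a Lambda / Elastic Beanstalk app
boto3>=1.34                     # normal PyPI dependencies as usual
# shared helpers installed straight from GitHub, pinned to a commit
git+https://github.com/your-org/shared-helpers@41b95ec#egg=shared_helpers
```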
The best way to track changes in code is a repository, but if you need to use S3 as the source, you can enable versioning on the S3 bucket and define an S3 event source to trigger your pipeline.
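A minimal boto3 sketch of that setup, assuming a placeholder bucket name (the original answer does not include code):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-shared-modules-bucket"  # placeholder bucket name

# Keep a retrievable history of every published module version.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Send object-level events to EventBridge, where a rule can start the
# pipeline (or invoke a Lambda) whenever a new version of the modules lands.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)
```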
For consuming those dependencies, I think it's best to use layers for the Lambda functions, or a shared EFS volume mounted on the Beanstalk instances if the dependencies are large.
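If you go the layer route, the artifact the pipeline already writes to S3 can be published as a layer roughly like this (layer name, bucket, and key are hypothetical; the zip is assumed to keep the modules under a python/ directory so the runtime adds them to sys.path):

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish a new layer version from the zip the pipeline uploaded to S3.
response = lambda_client.publish_layer_version(
    LayerName="shared-helpers",                       # hypothetical layer name
    Content={
        "S3Bucket": "my-shared-modules-bucket",       # hypothetical bucket
        "S3Key": "shared_helpers_layer.zip",          # hypothetical key
    },
    CompatibleRuntimes=["python3.12"],
)

# Attach this ARN to each Lambda function that needs the shared code.
print(response["LayerVersionArn"])
```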
You can use the Python package manager, pip, to install packages from an S3 bucket. To do so, you need to add the following to your requirements.txt file:
--extra-index-url https://s3.amazonaws.com/[bucket_name]/[package_name]
You can then run the pip install command to install the package from the S3 bucket. Once the package is installed, you can import it into your Lambda/EB application.
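As a sketch, the requirements.txt would pair that flag with the package to install (bucket path and package name are placeholders; note that pip treats --extra-index-url as a package index, so the bucket objects generally need to be laid out like a PEP 503 "simple" index for pip to resolve them):

```
# requirements.txt (placeholder bucket and package names)
--extra-index-url https://s3.amazonaws.com/my-bucket/simple/
shared-helpers==1.0.0
```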
For sharing code across Lambdas you can use Lambda layers.
Official doc
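For illustration, consuming a layer from inside a function looks like a normal import, because Lambda unpacks layer contents under /opt and puts /opt/python on the Python path (module and helper names below are made up):

```python
# lambda_function.py -- hypothetical handler using the shared layer
from shared_helpers import constants, utils  # provided by the layer, not the deployment package

def handler(event, context):
    # The shared code is imported exactly as if it were bundled with the function.
    payload = utils.normalize(event)               # hypothetical helper
    return {"region": constants.DEFAULT_REGION, "payload": payload}
```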