
I have multiple Python scripts (test1.py, test2.py, test3.py) that I have already uploaded to an S3 bucket (test_s3_bucket) inside the scripts directory.

scripts/test1.py

def abc():
  return "function abc"

scripts/test2.py

from test1 import abc
def xyz():
  t1 = abc()
  return "function xyz > {}".format(t1)

Now I need to run that Python code (stored in the S3 bucket) from a Lambda function.

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
  test2_obj = s3.download_file("test_s3_bucket", "scripts/test2.py", "/tmp/test2.py")
  return test2_obj.xyz()

But I am facing an issue when I run the Lambda function:

{
  "errorMessage": "'NoneType' object has no attribute 'xyz'",
  "errorType": "AttributeError",
  "requestId": "exxxxxxx-bxxe-4xxd-bxx1-xxxxxxxxxxxx",
  "stackTrace": [
    "  File \"/var/task/lambda_function.py\", line 4, in lambda_handler\n    test2_obj.xyz()\n"
  ]
}
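
The AttributeError follows directly from the boto3 API: `S3.Client.download_file` saves the object to the given path and returns `None`, so the handler ends up calling `.xyz()` on `None`, not on a module. A minimal reproduction that needs no AWS access:

```python
# download_file writes the file to disk and returns None on success,
# so test2_obj in the handler above is None, not an imported module.
test2_obj = None  # stand-in for the return value of s3.download_file(...)

try:
    test2_obj.xyz()
except AttributeError as err:
    message = str(err)

print(message)  # 'NoneType' object has no attribute 'xyz'
```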

If this is not a correct approach, could you please suggest which approach would be better?

Thanks in advance.

2 Answers


  1. Chosen as BEST ANSWER

    I got the answer; we can achieve it in two different ways.

    First, using "upload from" S3 or a .zip file, where we put all of our dependencies together with the Lambda code in a single package, so the code can access every dependency directly.

    Second, using layers with versioning.

    But in my scenario I will create multiple Lambda functions for different operations, and those dependency libraries are used by all of the Lambda functions, so in that case layers are the better choice.
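
With test1.py and test2.py bundled into the deployment package (or a layer) next to lambda_function.py, the handler imports them like any other module. A minimal sketch of that layout, where a temporary directory stands in for the unpacked package and the file contents come from the question:

```python
import sys
import tempfile
from pathlib import Path

# Stand-in for the unpacked deployment package: the handler plus
# test1.py and test2.py sitting side by side on the import path.
pkg = Path(tempfile.mkdtemp())
(pkg / "test1.py").write_text('def abc():\n    return "function abc"\n')
(pkg / "test2.py").write_text(
    "from test1 import abc\n\n"
    "def xyz():\n"
    "    t1 = abc()\n"
    '    return "function xyz > {}".format(t1)\n'
)

sys.path.insert(0, str(pkg))  # in Lambda, /var/task is already on sys.path
from test2 import xyz  # a plain import, no S3 call at runtime

def lambda_handler(event, context):
    return xyz()

print(lambda_handler({}, None))  # function xyz > function abc
```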


  2. The Lambda developer guide shows you how to package your code as a zip file.

    https://docs.aws.amazon.com/lambda/latest/dg/python-package.html

    Individual files aren’t loaded from S3.
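
    If one did want to execute a file fetched into /tmp at runtime (rather than packaging it, which is the recommended route), the downloaded file would have to be imported explicitly, since `download_file` only writes it to disk. A hedged sketch using `importlib`, with the S3 calls replaced by writing the files locally:

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())  # stands in for Lambda's writable /tmp
# In a real handler these two writes would be s3.download_file(...) calls.
(tmp / "test1.py").write_text('def abc():\n    return "function abc"\n')
(tmp / "test2.py").write_text(
    "from test1 import abc\n\n"
    "def xyz():\n"
    '    return "function xyz > {}".format(abc())\n'
)

sys.path.insert(0, str(tmp))  # lets test2's "from test1 import abc" resolve

# Load the downloaded file as a module and call into it.
spec = importlib.util.spec_from_file_location("test2", tmp / "test2.py")
test2 = importlib.util.module_from_spec(spec)
spec.loader.exec_module(test2)

print(test2.xyz())  # function xyz > function abc
```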
