
I have been having a hard time deploying my locally trained scikit-learn model (a pipeline with custom code plus a logistic regression model) to a SageMaker endpoint.
My Pipeline is as follows:

[screenshot of the sklearn Pipeline]

All this custom code (RecodeCategorias) does is normalize and recode some categorical columns into an "other" value for some features:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin


class RecodeCategorias(BaseEstimator, TransformerMixin):

    def __init__(self, feature, categs, exclude=True):
        self.feature = feature
        self.categs = categs
        self.exclude = exclude

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        # Normalize the column before comparing against the category list.
        X[self.feature] = X[self.feature].str.lower().str.strip()
        if self.exclude is True:
            # Recode the listed categories to "outro", leaving NaNs untouched.
            X[self.feature] = np.where(
                (X[self.feature].isin(self.categs)) & (~X[self.feature].isna()),
                "outro",
                X[self.feature],
            )
        elif self.exclude is False:
            # Keep the listed categories (and NaNs); recode everything else.
            X[self.feature] = np.where(
                (X[self.feature].isin(self.categs)) | (X[self.feature].isna()),
                X[self.feature],
                "outro",
            )
        else:
            raise ValueError(
                """Please set exclude to True (to change the categs to 'others')
                or False (to keep the categs and change the remaining to 'others')"""
            )
        return X
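
For context, this is how the transformer behaves on a toy frame (the column and category names here are made up):

import pandas as pd

df = pd.DataFrame({"cor": ["Azul ", "verde", "ROXO", None]})

# exclude=True recodes the listed categories to "outro" and keeps the rest
rec = RecodeCategorias(feature="cor", categs=["azul", "verde"], exclude=True)
rec.transform(df)
print(df["cor"].tolist())  # ['outro', 'outro', 'roxo', nan]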

My model data is saved in an S3 bucket in a tar.gz file containing: inference.py, model.joblib and pipeline.joblib. My deploy script is:

from sagemaker.serverless import ServerlessInferenceConfig
from sagemaker.sklearn.model import SKLearnModel

modelo = SKLearnModel(
    model_data="s3://" + s3_bucket + "/" + prefix + "/" + model_path,
    role=role,
    entry_point="inference.py",
    framework_version="1.0-1",
    py_version="py3",
    sagemaker_session=sagemaker_session,
    name="testesdk3",
    source_dir="custom_transformers",
    dependencies=["custom_transformers/recodefeat.py"],
)
try:
    r = modelo.deploy(
        endpoint_name="testesdkendpoint3",
        serverless_inference_config=ServerlessInferenceConfig(
            memory_size_in_mb=4096, max_concurrency=100
        ),
    )
    print(f"Model deployed with name: {modelo.name} and endpoint {modelo.endpoint_name}")
except Exception as e:
    print(e)
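
For reference, this is roughly how I build the archive before uploading it to S3 (a minimal sketch with Python's tarfile; the file list matches the contents described above):

import tarfile

# Pack the inference script and the fitted artifacts at the archive root.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    for name in ("inference.py", "model.joblib", "pipeline.joblib"):
        tar.add(name, arcname=name)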

The thing is, I have tried:

  • adding the class definitions to a file in the root of model.tar.gz and passing it to dependencies (it should pick up the same classes from the local file as well, since it is the same files/folder);
  • adding it to a "custom_transformers" folder in the same directory as inference.py and passing that folder to dependencies or source_dir.

I have tried the solutions from AWS Sagemaker SKlearn entry point allow multiple script and from https://github.com/aws/amazon-sagemaker-examples/issues/725, but none of them seems to work, and I always get:

sagemaker_containers._errors.ClientError: Can't get attribute 'RecodeCategorias' on <module '__main__' from '/miniconda3/bin/gunicorn'>

How exactly should I pass my class dependencies for them to be loaded correctly?

Thanks

2 Answers


  1. Chosen as BEST ANSWER

    It turns out that the problem was simply that I had defined my classes inside the training script instead of importing them from a separate module, so joblib pickled them under __main__, which does not exist in the inference container (hence the "Can't get attribute ... on module '__main__'" error). After moving the classes into their own module, importing them in the training script, and following the same folder hierarchy in the inference script, everything worked fine.
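
    For illustration, a minimal sketch of the layout that made it work (file names match my deploy script; the class body is the one from the question):

    # custom_transformers/recodefeat.py -- the transformer lives in its own module
    from sklearn.base import BaseEstimator, TransformerMixin

    class RecodeCategorias(BaseEstimator, TransformerMixin):
        ...  # same definition as in the question

    # training script -- import the class instead of defining it inline, so that
    # joblib records it as recodefeat.RecodeCategorias, not __main__.RecodeCategorias
    from recodefeat import RecodeCategorias

    # inference.py -- the same import must resolve inside the container, which is
    # why source_dir/dependencies need to recreate the same folder hierarchy
    from recodefeat import RecodeCategorias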


  2. It is better to use Boto3 (the low-level AWS SDK for Python) to conduct this operation, as it gives you more control. In your model.tar.gz you want to capture any joblib artifacts. It seems your issue is that your inference script is not reading these artifacts properly. For SKLearn there are four default handler functions that you need to abide by (MMS, the model server, implements these handlers). An example of an inference script is as follows:

    import joblib
    import os
    import json
    
    """
    Deserialize fitted model
    """
    def model_fn(model_dir):
        model = joblib.load(os.path.join(model_dir, "model.joblib"))
        return model
    
    """
    input_fn
        request_body: The body of the request sent to the model.
        request_content_type: (string) specifies the format/variable type of the request
    """
    def input_fn(request_body, request_content_type):
        if request_content_type == 'application/json':
            request_body = json.loads(request_body)
            inpVar = request_body['Input']
            return inpVar
        else:
            raise ValueError("This model only supports application/json input")
    
    """
    predict_fn
        input_data: returned array from input_fn above
        model (sklearn model) returned model loaded from model_fn above
    """
    def predict_fn(input_data, model):
        return model.predict(input_data)
    
    """
    output_fn
        prediction: the returned value from predict_fn above
        content_type: the content type the endpoint expects to be returned. Ex: JSON, string
    """
    
    def output_fn(prediction, content_type):
        res = int(prediction[0])
        respJSON = {'Output': res}
        return respJSON
    

    Specifically, in your model_fn you want to load your joblib files. The model_fn loads your trained artifacts, which you can then utilize in the predict_fn. Please restructure your inference script to this format and let me know if you face the same issue.
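
    Once the endpoint is up, you can invoke it with Boto3. A minimal sketch (the endpoint name comes from your deploy script, the "Input" key matches the input_fn above, and the feature values are made up):

    import boto3
    import json

    runtime = boto3.client("sagemaker-runtime")

    # Payload shape follows input_fn above: a JSON body with an "Input" key.
    response = runtime.invoke_endpoint(
        EndpointName="testesdkendpoint3",
        ContentType="application/json",
        Body=json.dumps({"Input": [[0.5, 1.2, 3.4]]}),
    )
    print(json.loads(response["Body"].read()))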

    Blog on pre-trained sklearn deployment on SageMaker: https://towardsdatascience.com/deploying-a-pre-trained-sklearn-model-on-amazon-sagemaker-826a2b5ac0b6
