
I am developing an AWS Lambda function that uses the Python 3.8 runtime. The source code is packaged into a custom Docker image and deployed to the Lambda service.

In the Python program itself, I am executing various Terraform commands, including "plan" and "show", via subprocess. I write the saved plan to the /tmp directory using the "-out" flag: "terraform plan -out=plan.txt". Then I convert the plan to JSON for processing with "terraform show -json plan.txt".
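For reference, the current flow looks roughly like this; the working directory is an assumption for illustration:

    import json
    import subprocess

    WORKDIR = "/var/task/terraform"          # assumed location of the Terraform configuration
    PLAN_PATH = "/tmp/plan.txt"              # saved plan written to the Lambda /tmp directory

    # Write the saved plan to /tmp
    subprocess.run(
        ["terraform", "plan", f"-out={PLAN_PATH}"],
        cwd=WORKDIR, check=True,
    )

    # Convert the saved plan to JSON for processing
    show = subprocess.run(
        ["terraform", "show", "-json", PLAN_PATH],
        cwd=WORKDIR, check=True, capture_output=True, text=True,
    )
    plan_json = json.loads(show.stdout)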

Since the plan file could contain sensitive data, I do not want to write it to the /tmp directory; rather, I want to keep it in memory for better security. I have explored mounting a tmpfs at /tmp, which is not possible in this context. How can I override the behavior of Terraform's "-out=" flag or create an in-memory filesystem in the container?

2 Answers


  1. @msel, you could take a look at "terrahelp". Though I have not used it personally, I believe it could be a handy tool for handling sensitive data at the plan level.

    You would need to do something similar to the following:

    terraform plan -out=plan.txt | terrahelp mask
    

    You can read more about terrahelp here: https://github.com/opencredo/terrahelp

    Reference: https://github.com/runatlantis/atlantis/issues/163
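
    If you go this route, the pipeline could be wired up from the Lambda's Python code roughly as below. Note that this masks the human-readable plan output on stdout; it assumes terrahelp is installed in the image and that its "mask" command reads from stdin as shown in the terrahelp README, and the working directory is purely illustrative.

    import subprocess

    WORKDIR = "/var/task/terraform"          # assumed Terraform configuration directory

    # Stream the plan output through terrahelp so sensitive values are masked
    plan = subprocess.Popen(
        ["terraform", "plan", "-no-color"],
        stdout=subprocess.PIPE, cwd=WORKDIR,
    )
    masked = subprocess.run(
        ["terrahelp", "mask"],
        stdin=plan.stdout, capture_output=True, text=True, check=True,
    )
    plan.stdout.close()
    plan.wait()
    print(masked.stdout)                     # plan text with sensitive values masked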

  2. Terraform itself can only write a saved plan file to something that behaves like a file. If you are on a Unix system, you may be able to exploit the "everything is a file" principle to trick Terraform into writing to something that isn't a file on disk, such as a pipe passed in from the parent process as an additional inherited file descriptor. However, there is no built-in mechanism for doing so, and Terraform may require the file handle to support system calls beyond plain write that are available for normal files, such as seek.
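
    As an illustration of that idea (not an officially supported Terraform mechanism), a rough sketch on a Linux runtime with Python 3.8+ could use os.memfd_create to obtain an anonymous in-memory file, which, unlike a plain pipe, does support seek. Whether Terraform accepts a /proc/self/fd/... path for -out and for show is an assumption you would need to verify, and the working directory below is purely illustrative.

    import json
    import os
    import subprocess

    WORKDIR = "/var/task/terraform"          # assumed Terraform configuration directory

    # Anonymous in-memory file; its contents never touch /tmp or any on-disk path
    plan_fd = os.memfd_create("tfplan")
    plan_path = f"/proc/self/fd/{plan_fd}"   # resolves to the same memfd inside the child

    # Ask Terraform to write the saved plan to the in-memory file
    subprocess.run(
        ["terraform", "plan", f"-out={plan_path}"],
        cwd=WORKDIR, check=True, pass_fds=(plan_fd,),
    )

    # Read the saved plan back as JSON, still without touching disk
    show = subprocess.run(
        ["terraform", "show", "-json", plan_path],
        cwd=WORKDIR, check=True, capture_output=True, text=True, pass_fds=(plan_fd,),
    )
    plan_json = json.loads(show.stdout)

    os.close(plan_fd)                        # the plan contents vanish with the descriptor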
