I’m trying to understand why storing parameters in AWS Parameter Store (SSM), encrypted with KMS, is a better solution than saving them in a .env file on an nginx server.
As far as the nginx setup goes, the .env file is not exposed to the public, so it can’t be viewed unless someone breaks into the server.
The nginx config has the public folder set as the root:
root /home/user/app/public;
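For reference, the application would typically read that file from outside the web root, something like this (a minimal sketch using PHP's parse_ini_file; the file path is assumed from the root directive above, and a dotenv library would work similarly):

<?php
// Minimal sketch: load key=value pairs from the .env file that sits
// outside the public web root (path assumed for illustration).
$env = parse_ini_file('/home/user/app/.env');

$key    = $env['MY_KEY'];
$secret = $env['MY_SECRET'];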
The consensus seemed to be that if someone manages to break into the server, they will be able to read all the parameters in the .env file in plain text, which makes it less secure than Parameter Store.
But isn’t that the same for AWS Parameter Store? (Main question)
In the PHP file, I load parameters from Parameter Store using the SsmClient, e.g.:
require 'vendor/autoload.php';

use Aws\Ssm\SsmClient;

// Create the SSM client and fetch the encrypted parameters,
// asking Parameter Store to decrypt them with KMS on read.
$client = new SsmClient([
    'version' => 'latest',
    'region'  => 'us-west-2',
]);

$credentials = $client->getParameters([
    'Names'          => ['MY_KEY', 'MY_SECRET'],
    'WithDecryption' => true,
]);

$key    = $credentials['Parameters'][0]['Value'];
$secret = $credentials['Parameters'][1]['Value'];
If someone breaks into the server, they will be able to run these same calls and retrieve any parameters.
So what makes SSM more secure than .env?
2 Answers
SSM makes it easy to coordinate values across multiple machines. How does that .env file get onto your servers? What happens if you have to change a value in it? SSM helps make that process easier. And when it’s easy to replace secrets, it’s more likely you will rotate them on a regular basis. AWS Secrets Manager makes that process even simpler by automating rotation. It runs a Lambda function that modifies both the secret value and whatever uses it (for example, it can change the database password for you). So even if your secret does get leaked somehow, it’s only good for a limited time.
Another reason having secrets on a separate server can be more secure is that breaking into a server doesn’t always mean full control. Sometimes hackers can only access files through a misconfigured service. If there are no files containing secret information, there is nothing for them to see. Other times hackers can reach other servers in your network through a misconfigured proxy. If all your internal servers require some sort of authentication, the hackers won’t be able to compromise those too.
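As a rough illustration of that workflow, here is a minimal sketch (assuming the AWS SDK for PHP and a hypothetical secret named my-app/db) of reading the current secret value at runtime, so a rotation performed by Secrets Manager is picked up automatically without touching any file on the server:

<?php
require 'vendor/autoload.php';

use Aws\SecretsManager\SecretsManagerClient;

// Minimal sketch: fetch the current version of a secret at request time,
// so a password rotated by Secrets Manager is picked up automatically.
// 'my-app/db' is a hypothetical secret name used for illustration.
$client = new SecretsManagerClient([
    'version' => 'latest',
    'region'  => 'us-west-2',
]);

$result = $client->getSecretValue([
    'SecretId' => 'my-app/db',
]);

// Rotated database credentials are usually stored as a JSON string.
$secret   = json_decode($result['SecretString'], true);
$username = $secret['username'];
$password = $secret['password'];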
This is where the concept of defense in depth comes in. You need multiple layers of security. If you just assume "once they’re in, they’re in", you are actually making it easier for hackers. They can exploit any small opening and get complete control of your system. This becomes even more important when you factor in the weakest link in your security: people. You, your developers, your operators, and everyone else in your company will make mistakes eventually. If every little mistake gave complete access to the system, you’d be in pretty bad shape.
Adding to the above answer: