
I have started an EC2 instance (with standard monitoring).

From my understanding, the EC2 service will publish 1 datapoint every 5 minutes for CPUUtilization to CloudWatch.

Hence my question is: why are the graphs different for a 5-minute visualization with different statistics (Min, Max, Avg, …)?

Since there is only 1 datapoint per 5 minutes, the Min, Max or Average of a single datapoint should be the same, right?

Example:
[Screenshot: CPUUtilization graph with the Average statistic selected]

Just by changing the statistic from "Average" to "Maximum", the graph changes (I don’t understand why).

[Screenshot: the same CPUUtilization graph with the Maximum statistic selected]

Thanks

2 Answers


  1. Honestly, I have never thought about this carefully, but from my understanding the following is going on.

    Amazon EC2 sends metric data to CloudWatch for the configured period, five minutes in this case, unless you enable detailed monitoring for the instance.

    This metric data does not consist only of the average; it also includes the maximum and minimum CPU utilization observed during that period. In other words, it tells CloudWatch: during this period the CPU utilization was 40% on average, with a maximum of 90% and a minimum of 20%. I hope you get the idea.

    That explains why your graphs look different depending on the statistic chosen.

    Please consider reading this entry in the AWS documentation, which explains how the CloudWatch statistics definitions work.
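
    If you want to see this for yourself, you can request several statistics for the same metric and period and compare what comes back. A minimal sketch with the AWS CLI (the instance ID and time window below are just placeholders):

    # Fetch Average, Maximum and Minimum for the same 5-minute periods;
    # the values will differ whenever the underlying data is a statistic set
    aws cloudwatch get-metric-statistics \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=InstanceId,Value=i-123456789 \
        --start-time 2023-01-01T00:00:00Z \
        --end-time 2023-01-01T01:00:00Z \
        --period 300 \
        --statistics Average Maximum Minimum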

  2. Just to add on to @jccampanero’s answer, I’d like to explain it in a bit more detail.

    From my understanding, the EC2 service will publish 1 datapoint every 5 minutes for the CPUUtilization to CloudWatch.

    Yes, your understanding is correct, but there are two types of datapoint. One type is called "raw data", and the other type is called "statistic set". Both types use the same PutMetricData API to publish metrics to CloudWatch, but they use different options.

    Since there is only 1 datapoint per 5 minutes, the Min, Max or Average of a single datapoint should be the same right?

    Not quite. That is only true when all datapoints are of type "raw data"; a raw datapoint is basically just a single number. If you have statistic sets, then the Min, Max and Average of a single datapoint can be different, which is exactly what happens here.
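
    For contrast, publishing a single "raw data" datapoint with the AWS CLI would look something like the sketch below (the namespace, value and instance ID are just placeholders for illustration; custom namespaces cannot start with "AWS/"):

    # Publish one raw datapoint: a single number, so its Min, Max and
    # Average are all the same value (40%)
    aws cloudwatch put-metric-data \
        --metric-name CPUUtilization \
        --namespace Custom/Example \
        --unit Percent \
        --value 40 \
        --dimensions InstanceId=i-123456789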

    If you choose the SampleCount statistic, you can see that one datapoint here is an aggregation of 5 samples. For a concrete example, let’s take the one in @jccampanero’s answer.

    During this period the CPU utilization was 40% on average, with a maximum of 90% and a minimum of 20%. I hope you get the idea.

    Translated to code (e.g. the AWS CLI), it’s something like:

    # Publish one statistic-set datapoint: 5 samples with Sum=200
    # (so an average of 40%), a minimum of 20% and a maximum of 90%
    aws cloudwatch put-metric-data \
        --metric-name CPUUtilization \
        --namespace AWS/EC2 \
        --unit Percent \
        --statistic-values Sum=200,Minimum=20,Maximum=90,SampleCount=5 \
        --dimensions InstanceId=i-123456789
    

    If EC2 were using the AWS CLI to push its metrics to CloudWatch, this is roughly what it would look like. I think you get the idea now, and it’s quite common to aggregate data like this to save some money on the CloudWatch bill.
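
    To make the arithmetic explicit for that single statistic-set datapoint: the Average statistic is computed as Sum / SampleCount = 200 / 5 = 40%, while the Maximum statistic shows 90% and the Minimum shows 20%. That is why switching the graph from "Average" to "Maximum" changes it, even though there is still only one datapoint per 5 minutes.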
