I have an AKS cluster with 4 node pools (Windows and Linux) and a total of 700 namespaces. The node count sits between 50 and 60 at all times. I cleared down more than 200 namespaces that were utilizing the cluster, but the cluster still runs at 50-60 nodes, and average CPU and memory usage of the cluster is very low, below 50% at all times. I'm still not sure why scale-down is not happening properly after clearing the namespaces; VMSS autoscaling is all in place and working, but it only scales in the 50-60 node range.
2 Answers
I followed the steps below to scale down the AKS node pool.
I created an AKS cluster named aks-clusterz. Scale-up operations are performed by the cluster autoscaler; on scale-down, Scale-down Mode decides whether nodes are deleted or deallocated.
I installed the aks-preview extension:
az extension add --name aks-preview
I created the node pool with 20 nodes and set Scale-down Mode to Deallocate, so that nodes are deallocated rather than deleted on scale-down.
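For reference, creating such a pool might look like the following (the resource group and pool name here are assumed placeholders, not from the original answer; aks-clusterz is the cluster name used above):

```shell
# Create a 20-node pool whose scale-down behaviour deallocates nodes
# instead of deleting them.
# myResourceGroup / nodepool2 are assumed names -- adjust to your environment.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name aks-clusterz \
  --name nodepool2 \
  --node-count 20 \
  --scale-down-mode Deallocate
```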
By changing the node count to 5 and scaling the node pool, the remaining nodes are deallocated.
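That scale operation might be performed like this (same assumed resource group and pool name as the cluster setup; adjust to your environment):

```shell
# Scale the pool from 20 down to 5; with Scale-down Mode = Deallocate,
# the 15 removed nodes are stopped (deallocated), not deleted.
az aks nodepool scale \
  --resource-group myResourceGroup \
  --cluster-name aks-clusterz \
  --name nodepool2 \
  --node-count 5
```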
I then deleted the deallocated nodes using this command:
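The command itself does not appear in the answer; a likely candidate, assuming the flow above, is switching the pool's scale-down mode to Delete, which removes the currently deallocated nodes:

```shell
# Change the pool's scale-down mode to Delete so deallocated nodes
# are removed rather than kept in a stopped state.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name aks-clusterz \
  --name nodepool2 \
  --scale-down-mode Delete
```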
The default behaviour of the cluster, without Scale-down Mode, is to delete nodes when scaling down; the same behaviour can be requested explicitly by setting Scale-down Mode to Delete.
NOTE:
No more than 30 nodes can be deleted at a time; attempting to delete more than that will not scale down properly.
The node utilization level is defined as the sum of requested resources divided by the node's capacity; the cluster autoscaler monitors nodes based on this utilization.
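As a rough sketch of that formula (illustrative numbers, not taken from the cluster above):

```shell
# Node utilization = sum of requested resources / node capacity.
# Illustrative example: 1200m CPU requested on a node with 1900m allocatable.
requested_mcpu=1200
allocatable_mcpu=1900
util=$(( requested_mcpu * 100 / allocatable_mcpu ))   # integer percent
echo "utilization: ${util}%"                          # prints "utilization: 63%"
```

The autoscaler compares this level against its scale-down threshold when deciding whether a node is a removal candidate.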
So how many pods per node are you running? Is your subnet full? Work out 60 nodes × pods per node.
Example (say 30 pods per node is the configured value):
60 nodes × 30 pods per node = 1,800 IPs reserved. The minimum subnet size for a single node pool with this configuration is a /21, but really a /20, to allow side-by-side upgrade of the node pool without needing to scale down to half the subnet usage beforehand.
Check that your subnet has enough room to add more nodes; otherwise the cluster will simply cap out at the maximum number of nodes it can deploy given the pods-per-node setting, regardless of what you set the max to.
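The arithmetic above can be checked quickly (the /21 and /20 figures are raw address counts; Azure also reserves a handful of addresses per subnet, so usable space is slightly smaller):

```shell
# IP demand at the current node ceiling, per the example above
nodes=60
pods_per_node=30
echo "IPs reserved: $(( nodes * pods_per_node ))"   # prints "IPs reserved: 1800"

# Raw address counts for the candidate subnet sizes
echo "/21 addresses: $(( 2 ** (32 - 21) ))"         # prints "/21 addresses: 2048"
echo "/20 addresses: $(( 2 ** (32 - 20) ))"         # prints "/20 addresses: 4096"
```

A /21 (2,048 addresses) barely clears the 1,800-IP demand, which is why a /20 is the safer choice for side-by-side upgrades.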