
I installed kube-prometheus-0.9.0 and want to deploy a sample application on which to test autoscaling on Prometheus metrics, using the following resource manifest file (hpa-prome-demo.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-prom-demo
spec:
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      labels:
        app: nginx-server
    spec:
      containers:
      - name: nginx-demo
        image: cnych/nginx-vts:v1.0
        resources:
          limits:
            cpu: 50m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-prom-demo
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "80"
    prometheus.io/path: "/status/format/prometheus"
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: NodePort

For testing purposes I used a NodePort Service, and luckily I can get an HTTP response after applying the deployment.
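
For example, something like this (<node-ip> and <node-port> are placeholders for my cluster):

$ kubectl get svc hpa-prom-demo   # note the assigned NodePort
$ curl http://<node-ip>:<node-port>/status/format/prometheus
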
Then I installed Prometheus Adapter via Helm chart, creating a new hpa-prome-adapter-values.yaml file to override the default chart values, as follows.

rules:
  default: false
  custom:
  - seriesQuery: 'nginx_vts_server_requests_total'
    resources:
      overrides:
        kubernetes_namespace:
          resource: namespace
        kubernetes_pod_name:
          resource: pod
    name:
      matches: "^(.*)_total"
      as: "${1}_per_second"
    metricsQuery: (sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))

prometheus:
  url: http://prometheus-k8s.monitoring.svc
  port: 9090
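
For reference, with this rule the adapter expands the metricsQuery template into a PromQL query along these lines (the concrete label matchers here are only an illustration):

sum(rate(nginx_vts_server_requests_total{kubernetes_namespace="default",kubernetes_pod_name=~"hpa-prom-demo-.*"}[1m])) by (kubernetes_pod_name)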

This adds a custom rule and specifies the address of Prometheus. Install Prometheus Adapter with the following command:

$ helm install prometheus-adapter prometheus-community/prometheus-adapter -n monitoring -f hpa-prome-adapter-values.yaml
NAME: prometheus-adapter
LAST DEPLOYED: Fri Jan 28 09:16:06 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
prometheus-adapter has been deployed.
In a few minutes you should be able to list metrics using the following command(s):

  kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

The adapter was installed successfully, and I can get an HTTP response from the custom metrics API, as follows.

$ kubectl get po -n monitoring | grep adapter
prometheus-adapter-665dc5f76c-k2lnl    1/1     Running   0          133m

$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}


But it was supposed to look like this:

$ kubectl get --raw="/apis/custom.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    },
    {
      "name": "pods/nginx_vts_server_requests_per_second",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": [
        "get"
      ]
    }
  ]
}

Why can't I get the pods/nginx_vts_server_requests_per_second metric? As a result, the query below also failed.

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
Error from server (NotFound): the server could not find the metric nginx_vts_server_requests_per_second for pods

Could anybody please help? Many thanks.

2 Answers


  1. It is worth knowing that using the kube-prometheus repository, you can also install components such as Prometheus Adapter for Kubernetes Metrics APIs, so there is no need to install it separately with Helm.

    I will use your hpa-prome-demo.yaml manifest file to demonstrate how to monitor nginx_vts_server_requests_total metrics.


    First of all, we need to install Prometheus and Prometheus Adapter with appropriate configuration as described step by step below.

    Clone the kube-prometheus repository and refer to the Kubernetes compatibility matrix in order to choose a compatible branch:

    $ git clone https://github.com/prometheus-operator/kube-prometheus.git 
    $ cd kube-prometheus
    $ git checkout release-0.9
    

    Install the jb, jsonnet and gojsontoyaml tools:

    $ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
    $ go install github.com/google/go-jsonnet/cmd/jsonnet@latest
    $ go install github.com/brancz/gojsontoyaml@latest 
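
    If the tools are not found after installation, make sure Go's bin directory is on your PATH, for example:

    $ export PATH="$PATH:$(go env GOPATH)/bin"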
    

    Uncomment the (import 'kube-prometheus/addons/custom-metrics.libsonnet') + line from the example.jsonnet file:

    $ cat example.jsonnet
    local kp =
      (import 'kube-prometheus/main.libsonnet') +
      // Uncomment the following imports to enable its patches
      // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
      // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
      // (import 'kube-prometheus/addons/node-ports.libsonnet') +
      // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
      (import 'kube-prometheus/addons/custom-metrics.libsonnet') +          <--- This line
      // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
    ...
    

    Add the following rule to the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file in the rules+ section:

          {
            seriesQuery: "nginx_vts_server_requests_total",
            resources: {
              overrides: {
                namespace: { resource: 'namespace' },
                pod: { resource: 'pod' },
              },
            },
            name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
            metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
          },
    

    After this update, the ./jsonnet/kube-prometheus/addons/custom-metrics.libsonnet file should look like this:
    NOTE: This is not the entire file, just an important part of it.

    $ cat custom-metrics.libsonnet
    // Custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
    // For more details on usage visit https://github.com/DirectXMan12/k8s-prometheus-adapter#quick-links
    
    {
      values+:: {
        prometheusAdapter+: {
          namespace: $.values.common.namespace,
          // Rules for custom-metrics
          config+:: {
            rules+: [
              {
                seriesQuery: "nginx_vts_server_requests_total",
                resources: {
                  overrides: {
                    namespace: { resource: 'namespace' },
                    pod: { resource: 'pod' },
                  },
                },
                name: { "matches": "^(.*)_total", "as": "${1}_per_second" },
                metricsQuery: "(sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>))",
              },
    ...
    

    Use the jsonnet-bundler update functionality to update the kube-prometheus dependency:

    $ jb update
    

    Compile the manifests:

    $ ./build.sh example.jsonnet
    

    Now simply use kubectl to install Prometheus and other components as per your configuration:

    $ kubectl apply --server-side -f manifests/setup
    $ kubectl apply -f manifests/
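
    It may take a few minutes for all the components to start; you can watch them come up with:

    $ kubectl get pods -n monitoring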
    

    After configuring Prometheus, we can deploy a sample hpa-prom-demo Deployment:
    NOTE: I’ve deleted the annotations because I’m going to use a ServiceMonitor to describe the set of targets to be monitored by Prometheus.

    $ cat hpa-prome-demo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hpa-prom-demo
    spec:
      selector:
        matchLabels:
          app: nginx-server
      template:
        metadata:
          labels:
            app: nginx-server
        spec:
          containers:
          - name: nginx-demo
            image: cnych/nginx-vts:v1.0
            resources:
              limits:
                cpu: 50m
              requests:
                cpu: 50m
            ports:
            - containerPort: 80
              name: http
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: hpa-prom-demo
      labels:
        app: nginx-server
    spec:
      ports:
      - port: 80
        targetPort: 80
        name: http
      selector:
        app: nginx-server
      type: LoadBalancer
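
    Apply it with kubectl:

    $ kubectl apply -f hpa-prome-demo.yaml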
    

    Next, create a ServiceMonitor that describes how to monitor our NGINX:

    $ cat servicemonitor.yaml
    kind: ServiceMonitor
    apiVersion: monitoring.coreos.com/v1
    metadata:
      name: hpa-prom-demo
      labels:
        app: nginx-server
    spec:
      selector:
        matchLabels:
          app: nginx-server
      endpoints:
      - interval: 15s
        path: "/status/format/prometheus"
        port: http
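
    Apply it in the same namespace as the hpa-prom-demo Service (the default namespace here):

    $ kubectl apply -f servicemonitor.yaml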
    

    After waiting some time, let's check the hpa-prom-demo logs to make sure that it is being scraped correctly:

    $ kubectl get pods
    NAME                            READY   STATUS    RESTARTS   AGE
    hpa-prom-demo-bbb6c65bb-49jsh   1/1     Running   0          35m
    
    $ kubectl logs -f hpa-prom-demo-bbb6c65bb-49jsh
    ...
    10.4.0.9 - - [04/Feb/2022:09:29:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
    10.4.0.9 - - [04/Feb/2022:09:29:32 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3771 "-" "Prometheus/2.29.1" "-"
    10.4.0.9 - - [04/Feb/2022:09:29:47 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
    10.4.0.9 - - [04/Feb/2022:09:30:02 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
    10.4.0.9 - - [04/Feb/2022:09:30:17 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
    10.4.2.12 - - [04/Feb/2022:09:30:23 +0000] "GET /status/format/prometheus HTTP/1.1" 200 3773 "-" "Prometheus/2.29.1" "-"
    ...
    

    Finally, we can check if our metrics work as expected:

    $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq . | grep -A 7 "nginx_vts_server_requests_per_second"
          "name": "pods/nginx_vts_server_requests_per_second",
          "singularName": "",
          "namespaced": true,
          "kind": "MetricValueList",
          "verbs": [
            "get"
          ]
        },
    --
          "name": "namespaces/nginx_vts_server_requests_per_second",
          "singularName": "",
          "namespaced": false,
          "kind": "MetricValueList",
          "verbs": [
            "get"
          ]
        },
    
    $ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/nginx_vts_server_requests_per_second" | jq .
    {
      "kind": "MetricValueList",
      "apiVersion": "custom.metrics.k8s.io/v1beta1",
      "metadata": {
        "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/nginx_vts_server_requests_per_second"
      },
      "items": [
        {
          "describedObject": {
            "kind": "Pod",
            "namespace": "default",
            "name": "hpa-prom-demo-bbb6c65bb-49jsh",
            "apiVersion": "/v1"
          },
          "metricName": "nginx_vts_server_requests_per_second",
          "timestamp": "2022-02-04T09:32:59Z",
          "value": "533m",
          "selector": null
        }
      ]
    }
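
    From here, with the metric exposed through the custom metrics API, you can reference it from an HPA. Below is a minimal sketch; the target of 10 requests per second is an assumed value to be tuned for your workload (on clusters older than v1.23, use the autoscaling/v2beta2 API instead):

    $ cat hpa.yaml
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: hpa-prom-demo
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: hpa-prom-demo
      minReplicas: 1
      maxReplicas: 5
      metrics:
      - type: Pods
        pods:
          metric:
            name: nginx_vts_server_requests_per_second
          target:
            type: AverageValue
            averageValue: "10"   # assumed target: 10 requests/second per pod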
    
  2. ENV:

    1. All Prometheus charts installed via Helm from prometheus-community (https://prometheus-community.github.io/helm-charts)
    2. Kubernetes cluster enabled by Docker Desktop for Mac

    Solution:
    I met the same problem. In the Prometheus UI, I found that the metric had a namespace label but no pod label, as below:

    nginx_vts_server_requests_total{code="1xx", host="*", instance="10.1.0.19:80", job="kubernetes-service-endpoints", namespace="default", node="docker-desktop", service="hpa-prom-demo"}
    

    I thought Prometheus might NOT be using pod as a label, so I checked the Prometheus config and found:

    - action: replace
      source_labels:
      - __meta_kubernetes_pod_node_name
      target_label: node
    
    

    Then I searched https://prometheus.io/docs/prometheus/latest/configuration/configuration/ and added a similar rule, as below, under every __meta_kubernetes_pod_node_name occurrence I found (i.e. in 2 places):

    - action: replace
      source_labels:
      - __meta_kubernetes_pod_name
      target_label: pod
    
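
    With a Helm-installed Prometheus, this scrape configuration typically lives in the server ConfigMap; assuming a release named prometheus, editing it could look like this:

    $ kubectl edit configmap prometheus-server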

    After a while, the ConfigMap was reloaded, and both the UI and the API could see the pod label:

    $ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq                                    
    {
      "kind": "APIResourceList",
      "apiVersion": "v1",
      "groupVersion": "custom.metrics.k8s.io/v1beta1",
      "resources": [
        {
          "name": "pods/nginx_vts_server_requests_per_second",
          "singularName": "",
          "namespaced": true,
          "kind": "MetricValueList",
          "verbs": [
            "get"
          ]
        },
        {
          "name": "namespaces/nginx_vts_server_requests_per_second",
          "singularName": "",
          "namespaced": false,
          "kind": "MetricValueList",
          "verbs": [
            "get"
          ]
        }
      ]
    }
    