
I am trying to configure a Prometheus instance on a Compute Engine VM on GCP to scrape metrics from several other Compute Engine instances. Everything else should be standard, but how should I configure Prometheus to automatically detect new Compute Engine instances?
For the moment I am not using K8s.

For instance:
I have 2 nginx instances monitored with Prometheus. If I add a new nginx instance, I would like its metrics to appear in Prometheus automatically.

Thanks

2 Answers


  1. There's an important distinction between "scrape metrics from several Compute Engine instances" and "if I add a new nginx instance I would like its metrics to appear in Prometheus automatically".

    Automatically adding targets to Prometheus requires some form of service discovery, and Prometheus includes built-in service discovery for GCE. Generally (!) the expectation with this approach is that your instances will be running Prometheus' Node Exporter and you'll configure the discovery to find the Node Exporters running on your instances.
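
    A minimal sketch of that (the project and zone are placeholders, and it assumes node_exporter is listening on its default port 9100 on every instance) might look like:

      scrape_configs:
        - job_name: 'gce_node_exporter'
          gce_sd_configs:
            - project: "your-project-here"   # placeholder GCP project ID
              zone: "europe-west1-b"         # one gce_sd_configs entry per zone
              port: 9100                     # node_exporter's default port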

    Discovering metrics for the servers/services running on your instances requires a different solution. Somehow Prometheus needs to be able to programmatically determine that your VMs are running servers/services, e.g. (multiple) NGINX instances, and that these services are exporting (Prometheus) metrics. You don't get this with the GCE SD solution.

    You’ll need another solution.

    Kubernetes ‘blurs’ (removes) the distinction between individual VMs and allows its users to focus more on the services (e.g. NGINX) running on the platform. With Kubernetes, your NGINX deployments would likely be represented by Kubernetes Services and you can then configure Prometheus to discover Kubernetes Services (perhaps specifically those labeled nginx) as targets (automatically).
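
    Purely as an illustration (you're not on Kubernetes yet), that kind of discovery might look roughly like this; the app=nginx label is an assumed convention, not something Kubernetes applies for you:

      scrape_configs:
        - job_name: 'kubernetes-nginx'
          kubernetes_sd_configs:
            - role: endpoints            # discover the endpoints backing each Service
          relabel_configs:
            # keep only endpoints whose Service carries the (assumed) label app=nginx
            - source_labels: [__meta_kubernetes_service_label_app]
              regex: nginx
              action: keep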

    In summary, you have (at least) the following choices:

    1. Manually configure Prometheus with a list of NGINX endpoints as targets as you create them.
    2. Programmatically configure Prometheus with a list of NGINX endpoints as targets as you create them. File-based service discovery is a frequently recommended (I've not used it) solution in this scenario; see the sketch after this list.
    3. Use another form of service discovery (Consul is a good option and can be used for service discovery by Prometheus). NOTE: You'll still need to configure Consul to find NGINX instances, so this may just punt your problem.
    4. There may be better alternatives.
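
    As a rough sketch of option 2, file-based discovery just points Prometheus at target files that your own tooling (a script, a deployment pipeline) keeps up to date; the file path, addresses and port 9113 (something like nginx-prometheus-exporter's default) are placeholders/assumptions:

      scrape_configs:
        - job_name: 'nginx'
          file_sd_configs:
            - files:
                # hypothetical path; your provisioning tooling rewrites these files
                - /etc/prometheus/targets/nginx-*.yml
              refresh_interval: 1m

    A matching target file (e.g. /etc/prometheus/targets/nginx-prod.yml, a made-up name) would then contain something like:

      - targets:
          - "10.0.0.11:9113"
          - "10.0.0.12:9113"
        labels:
          env: "prod"
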
  2. I don't know if this will solve your problem.

    I have node exporter installed on all my instances and the automatic discovery service configured for it. On top of that, a few instances have their own app_exporter that publishes app-specific metrics on another port (9854).

    What I did:
    I added an extra GCP service discovery scrape job for that port:

      scrape_configs:
        # Dynamic GCP service discovery for extra app metrics
        - job_name: 'gcp_solr_discovery'
          metrics_path: "/admin/metrics"
          gce_sd_configs:
            # Europe West 1
            - project: "your-project-here"
              zone: "europe-west1-b"
              port: 9854
            - project: "your-project-here"
              zone: "europe-west1-c"
              port: 9854
    

    This will make the GCP service discovery scrape that port on every discovered VM in those zones, regardless of whether the VM is running node exporter or not, and new instances will be picked up dynamically.
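
    If you only want a subset of the discovered VMs scraped on that port, one option (a sketch, assuming your Solr VMs follow a naming convention like solr-*, which may not match your setup) is to add relabel_configs to the same job:

          relabel_configs:
            # keep only instances whose name matches the assumed solr-* naming convention
            - source_labels: [__meta_gce_instance_name]
              regex: 'solr-.*'
              action: keep
            # report the GCE instance name, rather than IP:port, as the instance label
            - source_labels: [__meta_gce_instance_name]
              target_label: instance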
