I’m using the repo here: https://github.com/deviantony/docker-elk

CentOS 8

ELK version 7.4.0

docker-compose version 1.24.1

Docker version 18.06.3-ce

When I bring up the containers, Elasticsearch loads up fine.
After it loads, the Kibana and Logstash containers start up, but they are not able to reach the Elasticsearch container, producing these messages:

Logstash:

[2019-10-22T18:32:57,321][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}

Kibana:

{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["license","warning","xpack"],"pid":6,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}

When I check the elasticsearch container's hostname, I get the one auto-generated by Docker.

# docker-compose exec elasticsearch hostname
7d50d6a75028

Isn't it the case that if the containers are on the same network, Docker should resolve elasticsearch:9200 to the proper IP address of the container?
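
As far as I understand, Compose's embedded DNS resolves the service name on the shared elk network even without an explicit hostname:. One way to check whether the name at least resolves from another container (assuming getent is present in the image, which it should be for these CentOS-based images):

# docker-compose exec kibana getent hosts elasticsearch   # assumes getent exists in the image

If that prints an IP, name resolution works and the problem is connectivity rather than DNS.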

I tried setting the hostname in the docker-compose file like this:

...

services:
  elasticsearch:
    hostname: elasticsearch

...

And the change is reflected in the container:

# docker-compose exec elasticsearch hostname
elasticsearch

But Kibana and Logstash still don’t see it.

I can’t reach that host from the Kibana container:

# docker-compose exec kibana curl http://elasticsearch:9200
curl: (7) Failed connect to elasticsearch:9200; No route to host
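
The "No route to host" part means the name elasticsearch did resolve to an IP (otherwise curl would say "Could not resolve host"), so the packets are being dropped somewhere rather than DNS failing. It can also be confirmed that all the containers are attached to the same Compose network (the network name below is the default <project>_elk name and may differ on your host):

# docker network inspect docker-elk_elk   # network name is an assumption; check with: docker network ls

If all three containers show up in the Containers section of that output, they share the network and the blockage is somewhere else.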

The logs of the ES container show that it is running fine:

# docker logs 9ef8

{"type": "server", "timestamp": "2019-10-22T18:50:55,870Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-7-2019.10.22][0]]]).", "cluster.uuid": "O7t3UC1tSFibbjkwjqbX6A", "node.id": "qbzcFdQpR2KHBad-s8U1Vw"  }

I must be missing something, but I cannot figure out what it is.
I searched for this error, and it seems that the way I have it set up should work.

Can anyone help?

My docker-compose file looks like this:

version: '3.7'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:

2 Answers


  1. Chosen as BEST ANSWER

    I was able to get it to run on CentOS 7.7 without an issue. Seems to be CentOS 8 related.


  2. I have the same problem running Elasticsearch and Kibana inside Docker on CentOS 8.

    It looks like Kibana cannot talk to Elasticsearch inside the Docker network.

    A curl from the Kibana container to Elasticsearch (curl -X GET http://elasticsearch:9200) throws: Failed connect to elasticsearch:9200; No route to host.

    The same docker-compose file works with Docker on Windows, Docker on Ubuntu, etc.

    Edit:

    After some searching I was able to come up with a solution.
    The reason is that on CentOS 8, firewalld blocks the traffic on the Docker container network, and you need to either bypass it or disable it entirely.
    Instead of fully disabling firewalld, I found a workaround by letting the Docker network traffic through the firewall using the steps shown in the post linked here. After doing those steps, Kibana is able to connect to Elasticsearch inside Docker. Thanks 🙂
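
    The exact steps from that post aren't reproduced here, but the commonly reported workaround for the CentOS 8 firewalld (nftables backend) behaviour is to trust the Docker bridge traffic instead of turning the firewall off, roughly like this:

    # firewall-cmd --permanent --zone=trusted --add-source=172.18.0.0/16   # example subnet; check yours with: docker network inspect
    # firewall-cmd --reload
    # docker-compose restart

    Adding the Compose network's subnet (or its br-... bridge interface) to firewalld's trusted zone lets the containers reach each other while the rest of the firewall stays in place.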
