
I’m using the fluentd-kubernetes-daemonset Docker image, and sending logs to Elasticsearch with Fluentd works perfectly using the following snippet:

  containers:
    - name: fluentd
      image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
      env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "my-aws-es-endpoint"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "443"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "https"
        - name: FLUENT_ELASTICSEARCH_USER
          value: null
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: null

The problem is that for DR/HA we’re about to save logs into S3 as well. My question: is there any way to add multiple outputs to fluentd-kubernetes-daemonset in Kubernetes, such as S3, Kinesis, and so on?

2 Answers


  1. It depends on how you are deploying Fluentd to the cluster. Do you use a templating engine like Helm or Skaffold?

    If so, these should have a ConfigMap / configuration option to customize the deployment and provide your own inputs. For example, the Helm fluentd chart lets you define outputs here:

    https://github.com/helm/charts/blob/master/stable/fluentd/values.yaml#L97

    This should allow you to create multiple streams so that the Fluentd data is output to numerous locations.

    I notice that the specific Docker image you provided has some templated items in Ruby (ERB). The config specifically allows you to mount a volume at conf.d/ in the Fluentd folder: https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/templates/conf/fluent.conf.erb#L9

    The folder may be /etc/fluentd, but I’d recommend running the image locally and checking for yourself.

    As long as your config files end in .conf, you should be able to add anything you want.
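
    As a rough sketch, mounting an extra output file could look like the following. The ConfigMap name, the file name, and the mount path /fluentd/etc/conf.d are assumptions on my part – verify the actual include path inside the image first:

    # Hypothetical ConfigMap holding an extra Fluentd output file
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd-extra-conf   # hypothetical name
    data:
      s3-output.conf: |
        <match **>
          @type s3
          # s3 output settings go here
        </match>
    
    # Fragment to add to the DaemonSet pod spec:
    #
    #   containers:
    #     - name: fluentd
    #       volumeMounts:
    #         - name: extra-conf
    #           mountPath: /fluentd/etc/conf.d   # assumed include path
    #   volumes:
    #     - name: extra-conf
    #       configMap:
    #         name: fluentd-extra-conf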

  2. As mentioned in the first answer, you need to override the whole config.
    You’re looking for the output type "copy":

    <match **>
      @type copy
      <store>
        @type elasticsearch
        ...
      </store>
      <store>
        @type s3
        ...
      </store>
      <store>
        @type kinesis_streams
        ...
      </store>
    </match>
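
    For reference, a filled-in S3 store might look roughly like this. The bucket, region, and buffer settings below are placeholders of mine, not values from your setup – adjust them to your log volume:

    <store>
      @type s3
      s3_bucket my-log-bucket        # placeholder
      s3_region us-east-1            # placeholder
      path logs/
      <buffer time>
        @type file
        path /var/log/fluentd-buffers/s3.buffer
        timekey 3600        # flush a chunk to S3 every hour
        timekey_wait 10m
        chunk_limit_size 256m
      </buffer>
    </store>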
    

    TIP: Because every <store> section is going to be long, the config becomes hard to read with a larger number of stores. I usually wrap every store in a label to improve readability:

    <match **>
      @type copy
      <store>
        @type relabel
        @label @es
      </store>
      <store>
        @type relabel
        @label @s3
      </store>
      <store>
        @type relabel
        @label @stream
      </store>
    </match>
    
    <label @es>
      <match **>
        @type elasticsearch
        ...
      </match>
    </label>
    
    <label @s3>
      <match **>
        @type s3
        ...
      </match>
    </label>
    
    <label @stream>
      <match **>
        @type kinesis_streams
        ...
      </match>
    </label>
    

    Now you can move each label into a separate config file.
    Besides readability, this has more benefits:

    • Every label is an independent event stream inside Fluentd, so it can have its own set of filters without affecting other labels. This is very useful when you want to filter what you’re sending to different stores, e.g. only INFO to the stream, but all levels to ES.
    • A label is reusable; you can call it from multiple places. Say you have two sources – send both to the same label.
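
    For example, keeping only INFO records in the @stream label could look like this, using Fluentd’s built-in grep filter (assuming the log level is stored in a field named level – adjust the key to your record format):

    <label @stream>
      <filter **>
        @type grep
        <regexp>
          key level
          pattern /^INFO$/
        </regexp>
      </filter>
      <match **>
        @type kinesis_streams
        ...
      </match>
    </label>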