
I’m using AWS EKS 1.21 with Fargate (serverless). I’m trying to run Fluentd as a DaemonSet, but the DaemonSet never schedules any pods.

All the other objects (Role, RoleBinding, ServiceAccount, ConfigMap) are already in place in the cluster.

NAME                 DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
aws-node             0         0         0       0            0           <none>          8d
fluentd-cloudwatch   0         0         0       0            0           <none>          3m36s
kube-proxy           0         0         0       0            0           <none>          8d

This is my DaemonSet:

apiVersion: apps/v1 # latest API version supported by AWS EKS 1.21
kind: DaemonSet
metadata:
  labels:
    k8s-app: fluentd-cloudwatch
  name: fluentd-cloudwatch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-cloudwatch
  template:
    metadata:
      labels:
        k8s-app: fluentd-cloudwatch
    spec:
      containers:
      - env:
        - name: REGION
          value: us-east-1 # verify this matches your cluster's AWS region before applying
        - name: CLUSTER_NAME
          value: eks-fargate-alb-demo # verify this matches your EKS cluster name before applying
        image: fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch
        imagePullPolicy: IfNotPresent
        name: fluentd-cloudwatch
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /config-volume
          name: config-volume
        - mountPath: /fluentd/etc
          name: fluentdconf
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /run/log/journal
          name: runlogjournal
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - cp /config-volume/..data/* /fluentd/etc
        image: busybox
        imagePullPolicy: Always
        name: copy-fluentd-config
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /config-volume
          name: config-volume
        - mountPath: /fluentd/etc
          name: fluentdconf
      serviceAccount: fluentd
      serviceAccountName: fluentd
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: fluentd-config
        name: config-volume
      - emptyDir: {}
        name: fluentdconf
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /run/log/journal
          type: ""
        name: runlogjournal

When I describe it, I do not see any events either. I can run other pods such as Nginx on this cluster, but this DaemonSet does not run at all.

kubectl describe ds fluentd-cloudwatch -n kube-system



Name:           fluentd-cloudwatch
Selector:       k8s-app=fluentd-cloudwatch
Node-Selector:  <none>
Labels:         k8s-app=fluentd-cloudwatch
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           k8s-app=fluentd-cloudwatch
  Service Account:  fluentd
  Init Containers:
   copy-fluentd-config:
    Image:      busybox
    Port:       <none>
    Host Port:  <none>
    Command:
      sh
      -c
      cp /config-volume/..data/* /fluentd/etc
    Environment:  <none>
    Mounts:
      /config-volume from config-volume (rw)
      /fluentd/etc from fluentdconf (rw)
  Containers:
   fluentd-cloudwatch:
    Image:      fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch
    Port:       <none>
    Host Port:  <none>
    Limits:
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  200Mi
    Environment:
      REGION:        us-east-1
      CLUSTER_NAME:  eks-fargate-alb-demo
    Mounts:
      /config-volume from config-volume (rw)
      /fluentd/etc from fluentdconf (rw)
      /run/log/journal from runlogjournal (ro)
      /var/lib/docker/containers from varlibdockercontainers (ro)
      /var/log from varlog (rw)
  Volumes:
   config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      fluentd-config
    Optional:  false
   fluentdconf:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
   varlog:
    Type:          HostPath (bare host directory volume)
    Path:          /var/log
    HostPathType:
   varlibdockercontainers:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker/containers
    HostPathType:
   runlogjournal:
    Type:          HostPath (bare host directory volume)
    Path:          /run/log/journal
    HostPathType:
Events:            <none>

This is the ConfigMap:

apiVersion: v1
data:
  containers.conf: |
    <source>
      @type tail
      @id in_tail_container_logs
      @label @containers
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag *
      read_from_head true
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>

    <label @containers>
      <filter **>
        @type kubernetes_metadata
        @id filter_kube_metadata
      </filter>

      <filter **>
        @type record_transformer
        @id filter_containers_stream_transformer
        <record>
          stream_name ${tag_parts[3]}
        </record>
      </filter>

      <match **>
        @type cloudwatch_logs
        @id out_cloudwatch_logs_containers
        region "#{ENV.fetch('REGION')}"
        log_group_name "/k8s-nest/#{ENV.fetch('CLUSTER_NAME')}/containers"
        log_stream_name_key stream_name
        remove_log_stream_name_key true
        auto_create_stream true
        <buffer>
          flush_interval 5
          chunk_limit_size 2m
          queued_chunks_limit_size 32
          retry_forever true
        </buffer>
      </match>
    </label>
  fluent.conf: |
    @include containers.conf
    @include systemd.conf

    <match fluent.**>
      @type null
    </match>
  systemd.conf: |
    <source>
      @type systemd
      @id in_systemd_kubelet
      @label @systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <entry>
        field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
        field_map_strict true
      </entry>
      path /run/log/journal
      pos_file /var/log/fluentd-journald-kubelet.pos
      read_from_head true
      tag kubelet.service
    </source>

    <source>
      @type systemd
      @id in_systemd_kubeproxy
      @label @systemd
      filters [{ "_SYSTEMD_UNIT": "kubeproxy.service" }]
      <entry>
        field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
        field_map_strict true
      </entry>
      path /run/log/journal
      pos_file /var/log/fluentd-journald-kubeproxy.pos
      read_from_head true
      tag kubeproxy.service
    </source>

    <source>
      @type systemd
      @id in_systemd_docker
      @label @systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      <entry>
        field_map {"MESSAGE": "message", "_HOSTNAME": "hostname", "_SYSTEMD_UNIT": "systemd_unit"}
        field_map_strict true
      </entry>
      path /run/log/journal
      pos_file /var/log/fluentd-journald-docker.pos
      read_from_head true
      tag docker.service
    </source>

    <label @systemd>
      <filter **>
        @type record_transformer
        @id filter_systemd_stream_transformer
        <record>
          stream_name ${tag}-${record["hostname"]}
        </record>
      </filter>

      <match **>
        @type cloudwatch_logs
        @id out_cloudwatch_logs_systemd
        region "#{ENV.fetch('REGION')}"
        log_group_name "/k8s-nest/#{ENV.fetch('CLUSTER_NAME')}/systemd"
        log_stream_name_key stream_name
        auto_create_stream true
        remove_log_stream_name_key true
        <buffer>
          flush_interval 5
          chunk_limit_size 2m
          queued_chunks_limit_size 32
          retry_forever true
        </buffer>
      </match>
    </label>
kind: ConfigMap
metadata:
  labels:
    k8s-app: fluentd-cloudwatch
  name: fluentd-config
  namespace: kube-system

Please let me know where the problem is. Thanks!

2 Answers


  1. Chosen as BEST ANSWER

    After doing some research, I found that AWS Fargate does not yet support the Kubernetes DaemonSet object. That leaves two options: A) run Fluentd as a sidecar container alongside the other containers in each pod, or B) move the cluster from Fargate to node-group-based (EC2) compute. A minimal sketch of option A is shown below.
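    A rough sketch of the sidecar pattern, assuming a hypothetical application container named "app" that writes its log files into a shared emptyDir volume. The Fluentd configuration (here reusing the fluentd-config ConfigMap from the question) would need to be adapted to tail /var/log/app, since hostPath volumes are not available on Fargate:

        apiVersion: v1
        kind: Pod
        metadata:
          name: app-with-fluentd
        spec:
          containers:
          - name: app                         # hypothetical application container
            image: nginx
            volumeMounts:
            - name: app-logs
              mountPath: /var/log/nginx       # app writes its logs here
          - name: fluentd                     # sidecar shipping logs to CloudWatch
            image: fluent/fluentd-kubernetes-daemonset:v1.1-debian-cloudwatch
            env:
            - name: REGION
              value: us-east-1
            - name: CLUSTER_NAME
              value: eks-fargate-alb-demo
            volumeMounts:
            - name: app-logs
              mountPath: /var/log/app         # Fluentd tails files from here
              readOnly: true
            - name: fluentd-config
              mountPath: /fluentd/etc         # adapted Fluentd config
          volumes:
          - name: app-logs
            emptyDir: {}                      # shared between app and sidecar
          - name: fluentd-config
            configMap:
              name: fluentd-config            # must tail /var/log/app, not node-level paths

    This means one Fluentd container per pod, which is why option B (node groups) is usually cheaper if you have many pods.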


  2. As you figured, EKS/Fargate does not support DaemonSets (because there are no [real] nodes). However, you don't actually need to run Fluent Bit as a sidecar in every pod either. EKS/Fargate supports a logging feature called FireLens that lets you just configure where you want to log (the destination), and Fargate runs a hidden sidecar in the back end (not visible to the user) to do it. Please see this page of the documentation for the details.

    Snippet:

    Amazon EKS on Fargate offers a built-in log router based on Fluent Bit. This means that you don't explicitly run a Fluent Bit container as a sidecar, but Amazon runs it for you. All that you have to do is configure the log router. The configuration happens through a dedicated ConfigMap....
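    For reference, a minimal sketch of that log-router configuration, following the AWS documentation: the namespace must be named aws-observability and carry the aws-observability: enabled label, and the ConfigMap must be named aws-logging. The log group name below is illustrative; adjust it and the region to your setup.

        kind: Namespace
        apiVersion: v1
        metadata:
          name: aws-observability
          labels:
            aws-observability: enabled        # required; enables the built-in log router
        ---
        kind: ConfigMap
        apiVersion: v1
        metadata:
          name: aws-logging                   # required name
          namespace: aws-observability
        data:
          output.conf: |
            [OUTPUT]
                Name cloudwatch_logs
                Match *
                region us-east-1
                log_group_name /k8s-nest/eks-fargate-alb-demo/containers
                log_stream_prefix fargate-
                auto_create_group true

    Note that the Fargate pod execution role also needs IAM permissions to write to CloudWatch Logs for this to work.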
