
I have a task to automate uploading AKS logs (control plane and workload) to an Azure storage account so that they can be viewed later, and possibly to send an alert notification to an email/Teams channel in case of any failure. This would have been an easy task with a Log Analytics workspace, but to save cost we have kept it disabled.

I tried the CronJob below, which uploads the pod logs to the storage account on a regular basis, but it was throwing the errors[1] shown further down.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: log-uploader
spec:
  schedule: "0 0 * * *" # Run every day at midnight
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: log-uploader
              image: mcr.microsoft.com/azure-cli:latest
              command:
                - bash
                - "-c"
                - |
                  # Install kubectl (the azure-cli image does not ship with it)
                  az aks install-cli
                  # Set environment variables for the Azure Storage account and container.
                  # Note: `az storage blob upload` also needs credentials, e.g. an
                  # AZURE_STORAGE_KEY or AZURE_STORAGE_SAS_TOKEN injected from a Secret.
                  export AZURE_STORAGE_ACCOUNT=test-101
                  export AZURE_STORAGE_CONTAINER=logs-101
                  # Iterate over all pods in the cluster and upload their logs to
                  # Azure Blob Storage. Read one "name namespace" pair per line; a
                  # plain for-loop over the command substitution would split on every
                  # whitespace, leaving $namespace empty and `kubectl logs` without
                  # a pod name.
                  kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.namespace}{"\n"}{end}' |
                  while read -r pod_name namespace; do
                    # Use the Kubernetes logs API to retrieve the logs for the pod
                    kubectl logs -n "$namespace" "$pod_name" > /tmp/pod.log
                    # Use the Azure CLI to upload the logs to Azure Blob Storage
                    az storage blob upload --file /tmp/pod.log --account-name "$AZURE_STORAGE_ACCOUNT" --container-name "$AZURE_STORAGE_CONTAINER" --name "$namespace/${pod_name}_$(date +%Y-%m-%d).log"
                  done
          # The pod's service account also needs RBAC permission to list pods
          # and read their logs cluster-wide.
          restartPolicy: OnFailure

Errors[1]

error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples

The same commands run fine outside the container.

Any thoughts/suggestions would be highly appreciated.

Regards,

Piyush

2 Answers


  1. Chosen as BEST ANSWER

    So, I have found a better approach to automate the export of AKS logs to the Azure storage account. I used a tool called Vector (by Datadog). It is much easier to implement and more lightweight than Fluentd. Vector not only exports the data in near real time, but also lets you apply transformations to the data before it is transported to the destination. I have created an end-to-end video tutorial implementing this: Link to the video.
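
    For reference, a minimal sketch of the Vector configuration is below. It assumes Vector's kubernetes_logs source and azure_blob sink (verify the option names against the version you deploy); the container name and the connection-string variable are placeholders:

    # vector.yaml -- minimal sketch; Vector itself runs as a DaemonSet
    sources:
      k8s_logs:
        # Collect container logs from the node Vector is running on
        type: kubernetes_logs
    sinks:
      blob_storage:
        type: azure_blob
        inputs: ["k8s_logs"]
        # Placeholder: inject the real connection string from a Secret
        connection_string: "${AZURE_STORAGE_CONNECTION_STRING}"
        container_name: "logs-101"
        blob_prefix: "aks/"
        encoding:
          codec: json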


  2. A better approach for achieving this would be to deploy a Fluentd DaemonSet in your cluster and use the Azure Storage output plugin to upload logs to a storage account.

    This tool was built for exactly this purpose and will probably serve you better; a rough sketch of the plugin configuration follows.
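
    As an illustration only, an output block might look like the sketch below. This assumes the fluent-plugin-azurestorage output plugin; the parameter names are taken from its README and should be checked against the plugin version you install:

    # fluent.conf -- hypothetical output section assuming fluent-plugin-azurestorage
    <match kubernetes.**>
      @type azurestorage
      azure_storage_account    test-101
      azure_storage_access_key "#{ENV['AZURE_STORAGE_KEY']}"
      azure_container          logs-101
      auto_create_container    true
      store_as                 gzip
      path                     logs/
      time_slice_format        %Y%m%d-%H
    </match>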
