I’m trying to deploy a local image to my raspberry pi kubernetes cluster:

$ sudo kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
oren2         Ready    <none>                 40m    v1.28.4+k3s2
oren1         Ready    <none>                 39m    v1.28.4+k3s2
oren4         Ready    <none>                 42m    v1.28.4+k3s2
raspberrypi   Ready    control-plane,master   167m   v1.28.4+k3s2

My docker image is ready:

$ sudo docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
translator   latest    fd8afed3a99b   27 minutes ago   1.04GB

And I upload it:

$ sudo docker save translator | sudo k3s ctr images import -
$ sudo k3s ctr images ls | grep translator
docker.io/library/translator:latest
# ... omitted ...

I’m using the following YAML (adapted from here):

apiVersion: v1
kind: Pod
metadata:
  name: translator
  labels:
    component: web
spec:
  containers:
    - name: translator
      image: translator
      imagePullPolicy: Never
      ports:
        - containerPort: 3000
  restartPolicy: Never

When I try to get my pods it doesn’t work:

$ sudo kubectl create -f config.yml 
pod/translator created
$ sudo kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
translator-g9rlh   0/1     ErrImageNeverPull   0          20m
translator         0/1     ErrImageNeverPull   0          12s

My question is similar to this post, but my setup does not use Minikube.

2 Answers


  1. Looks like you are missing the image tag in your container spec.
    Change the image definition in your pod config to docker.io/library/translator:latest.
    You should also verify that the image is present on your worker nodes, since your image pull policy is set to Never; a quick check is sketched below.
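
    For instance, a minimal check you can run on each node (via SSH or a local shell). This assumes k3s's bundled containerd store, which is where your k3s ctr images import command placed the image:

    # List the images in k3s's containerd store and look for translator
    sudo k3s ctr images ls | grep translator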

  2. You have:

    +----------------+             +---------------------+           
    |                |             |                     |           
    |   Docker Host  |             | Kubernetes Cluster  |           
    |                |             | (Raspberry Pi, k3s) |          
    +-------+--------+             +------+----------+---+          
            |                             |          |             
            |                             |          |             
            |  docker save                | kubectl  |             
            +---------------------------> | create   |             
            |                             |          |             
            |  Docker Image: translator   |          |             
            +---------------------------> |          |             
                                          +----------+             
    

    Since you are using a local image, you need to make sure the image name in the Kubernetes YAML matches the name under which containerd stored it when you imported it (as shown by your k3s ctr images ls output): docker.io/library/translator:latest.

    apiVersion: v1
    kind: Pod
    metadata:
      name: translator
      labels:
        component: web
    spec:
      containers:
        - name: translator
          image: docker.io/library/translator:latest  # Use the full image name
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
      restartPolicy: Never
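
    After fixing the name, delete the failed pod and recreate it, since a pod created with kubectl create will not pick up the corrected spec on its own:

    kubectl delete pod translator
    kubectl create -f config.yml
    kubectl get pods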
    

    And check that the image is available on each node:

    #!/bin/bash
    
    # Fetch the list of node names from Kubernetes
    NODES=$(kubectl get nodes -o jsonpath='{.items[*].metadata.name}')
    
    # Loop through each node
    for NODE in $NODES; do
        echo "Checking image on $NODE..."
    
        # Check k3s's containerd image store rather than `docker images`,
        # since k3s does not use the Docker daemon as its runtime.
        # Replace 'username' with the appropriate username for SSH access.
        IMAGE_CHECK=$(ssh username@"$NODE" 'sudo k3s ctr images ls | grep translator')
    
        # Check if the output contains the image name
        if [[ -z "$IMAGE_CHECK" ]]; then
            echo "Image 'translator' is MISSING on node $NODE"
        else
            echo "Image 'translator' is PRESENT on node $NODE"
        fi
    
        echo "-----------------------------------"
    done
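
    To run it (the filename check_image.sh is arbitrary):

    chmod +x check_image.sh
    ./check_image.sh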
    

    Finally, if your local Docker image translator is only available on a specific node (or a subset of nodes) in your Kubernetes cluster, you need to make sure your pod is scheduled on a node where the image is present. That is particularly important since you have multiple nodes (oren2, oren1, oren4, raspberrypi) in your cluster.

    To achieve this, you can use either node affinity or taints and tolerations in your Kubernetes configuration:

    • Node affinity allows you to constrain which nodes your pod can be scheduled on, based on labels on nodes.
      For example, if the translator image is only available on oren2, you can add a label to oren2 and update your pod specification to include node affinity for this label.

    • Taints and tolerations work in the opposite direction: you apply a taint to the nodes that do not have the translator image, and Kubernetes will not schedule the translator pod on them because it carries no matching toleration (a sketch follows the affinity example below).

    For instance, here is how to add node affinity to your Kubernetes configuration, assuming oren2 is labeled appropriately (e.g., has-translator-image=true):

    apiVersion: v1
    kind: Pod
    metadata:
      name: translator
      labels:
        component: web
    spec:
      containers:
        - name: translator
          image: docker.io/library/translator:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
      restartPolicy: Never
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: has-translator-image
                operator: In
                values:
                - "true"
    

    By using node affinity or taints and tolerations, you can make sure your pod is scheduled on a node where the necessary image is available, which prevents image-availability issues across the different nodes in your cluster.


    Other than the name fix, I think your answer made me realize something I missed: I have to run sudo docker save translator | sudo k3s ctr images import - for every worker node, right?

    I was somehow thinking my worker nodes only need to have docker installed, and the actual image (translator) would be "automagically" sent to them.

    In a Kubernetes cluster, especially one without a centralized container registry, each worker node needs the required image available locally when imagePullPolicy is set to Never: that policy tells Kubernetes never to pull the image from a remote registry.

    Since you are using k3s, which has its own container runtime (containerd), you need to import the Docker image into k3s’s image store on each node. That is not automatically handled by Kubernetes or k3s. The command sudo docker save translator | sudo k3s ctr images import - does this for the node it is executed on, but it does not distribute the image to other nodes in the cluster.
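
    You can confirm this is the failure mode by inspecting the pod's events; the message below is roughly what kubelet reports (exact wording varies by version):

    kubectl describe pod translator
    # Expected event (approximate):
    #   Warning  ErrImageNeverPull  ...  Container image "translator" is not
    #   present with pull policy of Never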

    If you have labelled the nodes that should receive the image (e.g., has-translator-image=true), you can script the distribution (save it as deploy_image.sh and run chmod +x deploy_image.sh):

    #!/bin/bash
    
    # Label that identifies the nodes where the image should be deployed
    LABEL_SELECTOR="has-translator-image=true"
    
    # Docker image to be saved and transferred
    DOCKER_IMAGE="translator"
    
    # Save the Docker image as a tarball
    sudo docker save "$DOCKER_IMAGE" > "$DOCKER_IMAGE.tar"
    
    # Fetch the list of node names that have the specified label
    NODES=$(kubectl get nodes -l "$LABEL_SELECTOR" -o jsonpath='{.items[*].metadata.name}')
    
    # Loop through each node
    for NODE in $NODES; do
        echo "Transferring image to $NODE"
    
        # Transfer the tarball to the node
        scp "$DOCKER_IMAGE.tar" username@"$NODE":~
    
        echo "Importing image on $NODE"
        # SSH into the node, import the image, and then optionally remove the tarball
        ssh username@"$NODE" "sudo k3s ctr images import ~/$DOCKER_IMAGE.tar && rm ~/$DOCKER_IMAGE.tar"
    
        echo "Image imported successfully on $NODE"
        echo "------------------------------------"
    done
    
    # Optionally remove the tarball from the local machine
    rm "$DOCKER_IMAGE.tar"
    

    Make sure you have SSH keys set up for passwordless access to each node.
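
    For example (again assuming the hypothetical username):

    # Generate a key once on the machine running the script, then copy it
    # to each node so ssh/scp work without password prompts:
    ssh-keygen -t ed25519
    ssh-copy-id username@oren1
    ssh-copy-id username@oren2
    ssh-copy-id username@oren4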

    That would transfer the image tarball over the network to each node and requires sufficient disk space on both the source machine and the target nodes.
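
    To skip the intermediate tarball entirely, you can also stream the image straight into a node's k3s image store over SSH (this assumes passwordless sudo on the node, since ssh without -t cannot prompt for a sudo password):

    # Stream the image without writing a .tar file on either machine:
    sudo docker save translator | ssh username@oren2 'sudo k3s ctr images import -'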
