
I have a single-node k3s Kubernetes cluster and am trying to create a deployment on it via Ansible.
I know, of course, that I am using the k8s collection against k3s, but maybe it can be made to work anyway.

The relevant part of my playbook is:

---
- hosts: k3s_cluster
  become: yes
  tasks:
  - name: Create a Deployment by reading the definition from a local file
    kubernetes.core.k8s:
      api_key: mytoken
      state: present
      src: deployment.yml

The deployment file is simple.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80

If I apply it directly via kubectl, it works.
The kubeconfig exists on the target host (and also on my local machine) in ~/.kube/config. Specifying that location doesn’t help.
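
For reference, specifying that location looks roughly like this (using the module’s kubeconfig parameter):

- hosts: k3s_cluster
  become: yes
  tasks:
  - name: Create a Deployment by reading the definition from a local file
    kubernetes.core.k8s:
      kubeconfig: ~/.kube/config   # with become: yes this likely resolves to root's home on the target
      state: present
      src: deployment.yml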

2 Answers


  1. Provide the host to access the API server. If you are not using a TLS cert chain that can be validated by your system certificates, you may also need to set validate_certs to false.
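
    For example (a minimal sketch; the host, port, and token are placeholders, and validate_certs: false is only needed when the API server’s certificate cannot be validated):

    - name: Create a Deployment by reading the definition from a local file
      kubernetes.core.k8s:
        host: https://127.0.0.1:6443   # k3s API server on the single node (placeholder)
        api_key: mytoken               # a valid token, e.g. from a service account
        validate_certs: false          # only if the cert chain cannot be validated
        state: present
        src: deployment.yml

    Alternatively, since this is k3s, pointing the module at the node’s own kubeconfig (kubeconfig: /etc/rancher/k3s/k3s.yaml) should also work.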

  2. Ansible+k3s on Raspberry Pi CM4 cluster

    Here is a video tutorial:

    How to install k3s on Raspberry Pi CM4 cluster using Ansible

    Requirements:

    For Raspberry Pi / CM4
    • cgroup_memory=1 cgroup_enable=memory
      added to /boot/cmdline.txt

    For Ansible
    • ansible user added to each remote node
    • ansible user added to sudo/wheel/admins group
    • sudo/wheel/admins group set in /etc/sudoers so it can run commands with elevated privileges (a shell sketch of these steps follows this list)
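
    A minimal sketch of doing these steps by hand on each node (the sudo group name and the NOPASSWD rule are assumptions; adjust for your distro and security policy):

    # Raspberry Pi / CM4: enable memory cgroups (appends to the single kernel command line; reboot afterwards)
    sudo sed -i 's/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt

    # Ansible: create the ansible user and give it elevated privileges
    sudo useradd -m -s /bin/bash ansible
    sudo usermod -aG sudo ansible
    echo 'ansible ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ansible
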
    1. Create ansible inventory
    sudo vim /etc/ansible/hosts
    
    all:
      children:
        master:
          hosts:
            master-node:
              ansible_host: 10.10.0.112
              ansible_user: ansible
        workers:
          hosts:
            worker-node-1:
              ansible_host: 10.10.0.102
              ansible_user: ansible
            worker-node-2:
              ansible_host: 10.10.0.104
              ansible_user: ansible
    
    2. Create the ansible playbook. See the video for the explanation.
    vim k3s-raspberry-cluster.yml 
    
    3. Put the content below into this file.
    ---
    - name: Install K3s on Master Node
      hosts: master
      become: yes
      tasks:
        - name: Install K3s
          shell: curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik,servicelb" K3S_KUBECONFIG_MODE="644" sh -
    
        - name: Install NGINX as ingress controller
          shell: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
        - name: Create a patch file for NGINX ingress controller 
          shell:
            cmd: |
              cat > ingress.yaml << EOF
                spec:
                  template:
                    spec:
                      hostNetwork: true
              EOF
          args:
            executable: /bin/bash
    
        - name: patch NGINX ingress controller
          shell: kubectl patch deployment ingress-nginx-controller -n ingress-nginx --patch "$(cat ingress.yaml)"
    
        - name: Get K3s node token
          shell: cat /var/lib/rancher/k3s/server/node-token
          register: k3s_token
          delegate_to: "{{ inventory_hostname }}"
    
    
    
    - name: Install K3s on Worker Nodes
      hosts: workers
      become: yes
      vars:
        k3s_url: "https://{{ hostvars['master-node']['ansible_host'] }}:6443"
        k3s_token: "{{ hostvars['master-node'].k3s_token.stdout }}"
      tasks:
        - name: Join worker nodes to the cluster
          shell: "curl -sfL https://get.k3s.io | K3S_URL={{ k3s_url }} K3S_TOKEN={{ k3s_token }} sh -"
    
    
    - name: Label K3s workers on Master Node
      hosts: master
      become: yes
      tasks:
        - name: Label worker 1
          shell: kubectl label nodes worker1 kubernetes.io/role=worker
        - name: Label worker 2
          shell: kubectl label nodes worker2 kubernetes.io/role=worker
    
    4. Run the playbook as below:
    ansible-playbook k3s-raspberry-cluster.yml
    
    5. Check the configuration on the remote k3s master node using the commands below:
    kubectl get nodes
    kubectl get pods -A
    kubectl get svc -A
    kubectl get all -A
    

    Scripts and configuration files are available here:

    If needed, add the k3s configuration into the environment on the master node after the deployment (this is usually not necessary).

    Edit the file

    sudo vim /etc/environment
    

    Add the below entry:

    KUBECONFIG=/etc/rancher/k3s/k3s.yaml
    
    source /etc/environment
    

    Explanation:
    source is a Bash shell built-in command that executes the content of the file passed as an argument in the current shell. Its synonym is . (a period).
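
    For example, these two invocations are equivalent:

    source /etc/environment
    . /etc/environment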
