
I am attempting to have playbooks that run once to set up a new user and disable root ssh access.

For now, I am doing that by declaring all of my inventory twice. Each host needs one entry that connects as the root user, which is used to create a new user, configure ssh, and then disable root access.

Then each host needs another entry with the new user that gets created.

My current inventory looks like this. It’s only one host for now, but with a larger inventory, the repetition would just take up a ton of unnecessary space:

---
# ./hosts.yaml
---
all:
  children:
    master_roots:
      hosts:
        demo_master_root:
          ansible_host: a.b.c.d  # same ip as below
          ansible_user: root
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    masters:
      hosts:
        demo_master:
          ansible_host: a.b.c.d  # same ip as above
          ansible_user: infraops
          ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops

Is there a cleaner way to do this?

Is this an anti-pattern in any way? It is not idempotent. It would be nice if running the same playbook twice always gave the same result – either "success" or "no change".

I am using DigitalOcean, and they provide a way to do this via a bash script that runs before the VM comes up for the first time, but I would prefer a platform-independent solution.

Here is the playbook that sets up the user and ssh settings and disables root access:

---
# ./initial-host-setup.yaml
---
# References

# Digital Ocean recommended droplet setup script:
# - https://docs.digitalocean.com/droplets/tutorials/recommended-setup
# Digital Ocean tutorial on installing kubernetes with Ansible:
#  - https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-debian-9
# Ansible Galaxy (Community) recipe for securing ssh:
# - https://github.com/vitalk/ansible-secure-ssh
---
- hosts: master_roots
  become: 'yes'
  tasks:
    - name: create the 'infraops' user
      user:
        state: present
        name: infraops
        password_lock: 'yes'
        groups: sudo
        append: 'yes'
        createhome: 'yes'
        shell: /bin/bash

    - name: add authorized keys for the infraops user
      authorized_key:
        user: infraops
        key: '{{ item }}'
      with_file:
        - '{{ hostvars[inventory_hostname].ansible_ssh_private_key_file }}.pub'

    - name: allow infraops user to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'infraops ALL=(ALL) NOPASSWD: ALL'
        validate: visudo -cf %s

    - name: disable empty password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitEmptyPasswords'
        line: PermitEmptyPasswords no
      notify: restart sshd

    - name: disable password login for all users
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^(#\s*)?PasswordAuthentication '
        line: PasswordAuthentication no
      notify: restart sshd

    - name: Disable remote root user login
      lineinfile:
        dest: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd

  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted

Everything after this would use the masters inventory.

EDIT

After some research I have found that "init scripts"/"startup scripts"/"user data" scripts are supported across AWS, GCP, and DigitalOcean, potentially via cloud-init (this is what DigitalOcean uses; I didn't research the others), which is cross-provider enough for me to just stick with a bash init script solution.
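For illustration, here is a minimal cloud-config sketch of the same setup as user data. This is an assumption on my part that it matches the playbook's intent; the user name and key are placeholders, and it presumes the provider hands the file to cloud-init:

```yaml
#cloud-config
# Sketch only: creates the user, installs a key, and locks down ssh.
users:
  - name: infraops            # placeholder user name
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...   # contents of your .pub file
disable_root: true            # neuters root's authorized_keys entry
ssh_pwauth: false             # PasswordAuthentication no
```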

I would still be interested & curious if someone had a killer Ansible-only solution for this, although I am not sure there is a great way to make this happen without a pre-init script.

Regardless of any Ansible limitations, it seems that without a cloud-init script you can't have this. Either the server starts with root or a similarly privileged user available to perform these actions, or it starts without such a user, in which case these actions can't be performed at all.

Further, I have seen Ansible playbooks and bash scripts that try to achieve the desired "idempotence" (completing without errors even if root is already disabled) by testing root ssh access and falling back to another user. But "I can't ssh with root" is a poor test for "the root user is disabled", because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh.
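If a direct check is wanted instead of inferring from a failed login, the effective sshd configuration can be queried with sshd -T (which prints keywords lowercased). A hedged sketch of such tasks, assuming the play already connects as the new user:

```yaml
    - name: read the effective sshd configuration
      command: sshd -T
      become: 'yes'
      register: effective_sshd
      changed_when: false

    - name: assert that remote root login is disabled
      assert:
        that:
          - "'permitrootlogin no' in effective_sshd.stdout"
```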

EDIT 2 placing this here, since I can’t use newlines in a response to a comment:

β.εηοιτ.βε responded to my assertion that

"I can’t ssh with root" is a poor test for "is the root user disabled" because there are plenty of ways your ssh access could fail even though the server is still configured to allow root to ssh

with:

"then, try to ssh with infraops and assert that PermitRootLogin no is in the ssh daemon config file?"

It sounds like the suggestion is:

- attempt ssh with root 
  - if success, we know user/ssh setup tasks have not completed, so run those tasks
  - if failure, attempt ssh with infraops
    - if success, go ahead and run everything except the user creation again to ensure ssh config is as desired
    - if failure... ? something else is probably wrong, since I can't ssh with either user

I am not sure what this sort of if-then failure recovery actually looks like in an Ansible script
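For what it's worth, one possible shape for that fallback (a sketch only, untested, using the ping module's ignore_unreachable option and swapping the connection user mid-play with set_fact) might look like:

```yaml
- hosts: masters
  gather_facts: false
  vars:
    ansible_user: root
  tasks:
    - name: probe ssh access as root
      ping:
      ignore_unreachable: true
      register: root_probe

    - name: fall back to the infraops user when root is locked out
      set_fact:
        ansible_user: infraops
      when: root_probe.unreachable | default(false)

    # ...user-creation tasks guarded with
    #    when: not (root_probe.unreachable | default(false))
    # while the ssh-hardening tasks run unconditionally.
```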

2 Answers


  1. You could define only the masters group and alter ansible_user and ansible_ssh_private_key_file at run time, using the command-line flags --user and --private-key.

    So with a hosts.yaml containing

    all:
      children:
        masters:
          hosts:
            demo_master:
              ansible_host: a.b.c.d
              ansible_user: infraops
              ansible_ssh_private_key_file: ~/.ssh/id_rsa_infra_ops
    

    And with the play targeting - hosts: masters, the first run would, for example, be

    ansible-playbook initial-host-setup.yaml \
      --user root \
      --private-key ~/.ssh/id_rsa_root
    

    Subsequent runs would then simply be

    ansible-playbook subsequent-host-setup.yaml
    

    Since all the required values are in the inventory already.

  2. You can override host variables for a given play by using vars.

    - hosts: masters
      become: 'yes'
      
      vars:
        ansible_ssh_user: "root"
        ansible_ssh_private_key_file: "~/.ssh/id_rsa_infra_ops"
    
      tasks:
    