I am running Packer with the Ansible provisioner from a Bitbucket pipeline, but Ansible does not become root even though become: true is set. Packer is used to create an Amazon Linux AMI, and the Ansible provisioner runs some server-hardening scripts and configuration.
Output from a simple id command:

When run from the pipeline:

```
TASK [aws-basic : debug] *****************************************
ok: [default] => {
    "command_output.stdout_lines": [
        "uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal)"
    ]
}
```
When run locally:

```
TASK [aws-basic : debug] *****************************************
ok: [default] => {
    "command_output.stdout_lines": [
        "uid=0(root) gid=0(root) groups=0(root)"
    ]
}
```
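For reference, the output above comes from a task in the aws-basic role along these lines (a minimal sketch; the exact task is an assumption reconstructed from the debug output):

```yaml
# Hypothetical reconstruction of the aws-basic role task behind the output above
- name: Check which user Ansible runs as
  ansible.builtin.command: id
  register: command_output

- name: debug
  ansible.builtin.debug:
    var: command_output.stdout_lines
```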
Following is my Ansible playbook with its two roles:
```yaml
- name: AWS EC2 AMLinux Configuration playbook
  hosts: default
  remote_user: ec2-user
  connection: ssh
  become: true
  vars:
    _date: "{{ ansible_date_time.iso8601 }}"
    reop_path: /usr/tmp/
  roles:
    - role: role-1
    - role: role-2
```
Packer Ansible provisioner config:
provisioner "ansible" {
playbook_file = "../ansible/aws-ec2-base.yml"
extra_arguments = ["--extra-vars", "api_key=${var.api_key}"]
galaxy_file = "../ansible/requirements.yml"
ansible_ssh_extra_args = ["-oHostKeyAlgorithms=+ssh-rsa -oPubkeyAcceptedKeyTypes=+ssh-rsa"]
}
Even setting become_user: root in the playbook does not work.
Is there any reason this only happens in the Bitbucket pipeline? I am using an Ubuntu Docker image with Ansible and Packer installed.
2 Answers
This issue was caused by the use of an older version of the Packer Ansible plugin; upgrading the plugin fixed it. You can also work around the issue by using a Bitbucket runner.
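If you hit the same problem, pinning a newer Ansible plugin in the Packer template and re-running packer init should pull in the fix (a sketch; the minimum version shown is an assumption, use whatever current release works for you):

```hcl
packer {
  required_plugins {
    ansible = {
      # Assumed minimum version -- any recent release of the plugin should do
      version = ">= 1.1.0"
      source  = "github.com/hashicorp/ansible"
    }
  }
}
```

After packer init, running packer plugins installed shows which plugin version is actually in use.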
My gut feeling is that some configuration on each system triggers the different behaviour. I'd experiment on both your local workstation and the CI system and try to spot any difference that might be causing this.
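One concrete way to run that comparison (a sketch; -vvv is standard Ansible verbosity, and the rest mirrors the question's config) is to make the provisioner log the exact connection and privilege-escalation commands it issues, then diff that output between a pipeline run and a local run:

```hcl
provisioner "ansible" {
  playbook_file   = "../ansible/aws-ec2-base.yml"
  # -vvv makes Ansible print the full SSH and become command lines,
  # so differences in privilege escalation show up in the build log
  extra_arguments = ["-vvv", "--extra-vars", "api_key=${var.api_key}"]
}
```

It is also worth comparing packer version, ansible --version, and the installed Packer plugin versions on both systems.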