I am new to Ansible.
I have installed Ansible on an EC2 instance (as the master/control VM), and now I want to set up a GCP VM as my target node.
For that I created a GCP VM and updated the inventory file:
ansible-target ansible_host=gcp_vm_ip ansible_connection=ssh ansible_user=apigeehybrid
But when I run ansible ansible-target -m ping, I get this error:
<35.184.210.81> ESTABLISH SSH CONNECTION FOR USER: apigeehybrid
<35.184.210.81> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="apigeehybrid"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ef74ba60db 35.184.210.80 '/bin/sh -c '"'"'echo ~apigeehybrid && sleep 0'"'"''
<35.184.210.81> (255, '', 'Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n')
target-aio | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).",
"unreachable": true
}
Now I know I have to use an SSH key of some kind, but I am a bit confused, because I have tried several methods of creating a key and placing it somewhere, and none of them worked in my case.
Can anyone please elaborate the proper setup to establish a connection between the master VM (EC2) and the target VM (GCP instance)? That would be great.
Terraform X Apigee integration
The architecture is the same, but I have created a node.tf file on the master VM (EC2):
resource "google_compute_network" "default" {
name = "my-network"
}
resource "google_compute_subnetwork" "default" {
name = "my-subnet"
ip_cidr_range = "10.0.0.0/16"
region = "us-central1"
network = google_compute_network.default.id
}
resource "google_compute_address" "internal_ip" {
name = "my-internal-address"
project = var.projectname
subnetwork = google_compute_subnetwork.default.id
address_type = "INTERNAL"
address = "10.0.1.0"
region = "us-central1"
purpose = "GCE_ENDPOINT"
}
resource "google_compute_address" "static" {
name = "vm-public-address"
project = var.projectname
region = "us-central1"
}
resource "google_compute_firewall" "firewall2" {
name = "gritfy-firewall-externalssh2"
network = google_compute_network.default.name
allow {
protocol = "tcp"
ports = ["22"]
}
source_ranges = ["0.0.0.0/0"]
}
output "ip1" {
value = google_compute_address.static.address
}
resource "google_compute_instance" "node1" {
project = var.projectname
name = "node1"
machine_type = "custom-8-16384" //10 core and 20GB of ram custom-10-20480
zone = "us-central1-a"
can_ip_forward = true
boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
}
}
network_interface {
subnetwork = google_compute_subnetwork.default.id
network_ip = google_compute_address.internal_ip.address
access_config {
nat_ip = google_compute_address.static.address
}
}
metadata = {
ssh-keys = "${var.user}:${file(var.publickeypath)}"
}
lifecycle {
ignore_changes = [attached_disk]
}
provisioner "file" {
source = "perquisites.sh"
destination = "/tmp/perquisites.sh"
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
}
provisioner "local-exec" {
command = "echo ${google_compute_address.static.address} > inventory "
}
provisioner "local-exec" {
command = "ssh-keygen -R ${google_compute_address.static.address}"
}
provisioner "local-exec" {
command = "ansible-playbook /root/playbooks/aio.yaml"
}
}
resource "google_compute_attached_disk" "default3" {
disk = google_compute_disk.default2.id
instance = google_compute_instance.node1.id
}
resource "google_compute_disk" "default2" {
name = "disk1"
type = "pd-balanced"
zone = "us-central1-a"
image = "centos-7-v20210609"
size = 100 //in GBs 300
}
And here is the ansible.cfg file:
[defaults]
inventory = ./inventory
deprecation_warnings=False
remote_user = rohan
host_key_checking = False
private_key_file = ./lastkey
[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = False
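With this config, an ad-hoc ping is roughly equivalent to the following (a sketch; TARGET_IP stands for the address that the local-exec provisioner writes into the inventory file):
# what the ansible.cfg settings above resolve to for an ad-hoc ping
ansible all -i ./inventory -m ping -u rohan --private-key ./lastkey --become
# the raw SSH equivalent of the connection Ansible attempts
ssh -i ./lastkey rohan@TARGET_IP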
If anyone notices, in the Terraform code I have used remote-exec and that is working (I can see a log showing it connected), but in Ansible it is still showing the error above.
2 Answers
ssh-keygen -o -C "mykey" -f ~/.ssh/id_rsa
Note: UFW should not be installed
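That only creates the key pair on the control node; a hedged follow-up sketch, assuming the key was created as ~/.ssh/id_rsa on the EC2 instance and that the target user is apigeehybrid:
# show the public half; this is what must end up on the target VM
cat ~/.ssh/id_rsa.pub
# once the public key is in the target user's ~/.ssh/authorized_keys (see the next answer),
# verify raw SSH first, then the Ansible ping with the same key
ssh -i ~/.ssh/id_rsa apigeehybrid@35.184.210.81 'echo connected'
ansible ansible-target -m ping --private-key ~/.ssh/id_rsa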
Based on the information you gave us and the error message you provided (Permission denied (publickey,gssapi-keyex,gssapi-with-mic)), you need to create an SSH key pair, e.g.:
ssh-keygen -t ed25519 -f ~/.ssh/<KEY_FILENAME> -C <USER>
(for a full list of options see: https://man.openbsd.org/ssh-keygen)
and then add the PUBLIC KEY (not the private key) for the apigeehybrid user on your GCE instance, either manually by putting it into the ~apigeehybrid/.ssh/authorized_keys file, or in one of the GCP-native ways: OS Login or instance metadata.
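For the manual route, a minimal sketch (run on the target VM through a session that already works, e.g. the GCP console's browser SSH; the key filename is a placeholder):
# on the target VM, append the control node's public key and keep permissions strict
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo '<contents of ~/.ssh/<KEY_FILENAME>.pub from the control node>' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys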
For the metadata route:
Run the gcloud compute instances describe command to get the existing metadata for the VM, replacing VM_NAME with the name of the VM for which you need to add or remove public SSH keys, and copy the ssh-keys metadata value from the output (see the sketch after these steps).
Create and open a new text file on your workstation, paste the list of keys that you just copied into it, and add your new key at the end of the list, with or without an expiration time (both formats are shown in the sketch below), where:
KEY_VALUE: the public SSH key value
USERNAME: the username for the SSH key, specified when the key was created
EXPIRE_TIME: the time the key expires, in ISO 8601 format, for example 2021-12-04T20:12:00+0000
Then save and close the file.
Run the gcloud compute instances add-metadata command to set the ssh-keys value, replacing:
VM_NAME: the VM you want to add the SSH key for
KEY_FILE: either the path to the file you created in the previous step (if the VM had existing SSH keys) or the path to your new public SSH key file (if the VM didn't have existing SSH keys)
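Pulling those steps together, a hedged sketch of the metadata route (VM name, zone, file names and the keys themselves are placeholders; the two commented lines show the usual USERNAME:KEY_VALUE format, with the optional google-ssh JSON suffix carrying expireOn):
# 1. dump the current metadata for the VM, including any existing ssh-keys value
gcloud compute instances describe VM_NAME --zone us-central1-a
# 2. put the existing keys plus the new one into a local file (ssh-keys.txt), one key per line, e.g.:
#    apigeehybrid:ssh-ed25519 AAAA... apigeehybrid
#    apigeehybrid:ssh-ed25519 AAAA... google-ssh {"userName":"apigeehybrid","expireOn":"2021-12-04T20:12:00+0000"}
# 3. write the full list back to the instance metadata
gcloud compute instances add-metadata VM_NAME --zone us-central1-a \
  --metadata-from-file ssh-keys=ssh-keys.txt
If you prefer the OS Login route mentioned above, gcloud compute os-login ssh-keys add --key-file ~/.ssh/<KEY_FILENAME>.pub registers the key against your Google account instead of the instance metadata.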