
I am new to Ansible.
I have installed Ansible on an EC2 instance (as the master VM),
and now I want to set up my target node as a GCP VM.
So for that I created a GCP VM and updated the inventory file:

ansible-target ansible_host=gcp_vm_ip ansible_connection=ssh ansible_user=apigeehybrid

But when I run ansible ansible-target -m ping,
I get this error:

<35.184.210.81> ESTABLISH SSH CONNECTION FOR USER: apigeehybrid
<35.184.210.81> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="apigeehybrid"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ef74ba60db 35.184.210.80 '/bin/sh -c '"'"'echo ~apigeehybrid && sleep 0'"'"''
<35.184.210.81> (255, '', 'Permission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n')
target-aio | UNREACHABLE! => {
    "changed": false, 
    "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).", 
    "unreachable": true
}

Now I know I have to use an SSH key, but I am a bit confused, because I tried a lot of methods of creating a key and putting it somewhere, but none of them worked in my case.

Can anyone please elaborate the correct setup to establish a connection between the master VM (EC2) and the target VM (GCP instance)? That would be great.

Terraform X Apigee integration
The architecture is the same, but I have created a node.tf file in the master VM on EC2:

resource "google_compute_network" "default" {
  name = "my-network"
}

resource "google_compute_subnetwork" "default" {
  name          = "my-subnet"
  ip_cidr_range = "10.0.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.default.id
}

resource "google_compute_address" "internal_ip" {
  name         = "my-internal-address"
  project      = var.projectname
  subnetwork   = google_compute_subnetwork.default.id
  address_type = "INTERNAL"
  address      = "10.0.1.0"
  region       = "us-central1"
  purpose      = "GCE_ENDPOINT"
}

resource "google_compute_address" "static" {
  name    = "vm-public-address"
  project = var.projectname
  region  = "us-central1"
}

resource "google_compute_firewall" "firewall2" {
  name    = "gritfy-firewall-externalssh2"
  network = google_compute_network.default.name
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  source_ranges = ["0.0.0.0/0"]
}

output "ip1" {
  value = google_compute_address.static.address
}

resource "google_compute_instance" "node1" {
  project        = var.projectname
  name           = "node1"
  machine_type   = "custom-8-16384" // 8 vCPUs and 16 GB of RAM (custom-10-20480 for 10 vCPUs / 20 GB)
  zone           = "us-central1-a"
  can_ip_forward = true
  boot_disk {
    initialize_params {
      image = "centos-cloud/centos-7"
    }
  }
  network_interface {
    subnetwork = google_compute_subnetwork.default.id
    network_ip = google_compute_address.internal_ip.address
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }
  metadata = {
    ssh-keys = "${var.user}:${file(var.publickeypath)}"
  }
  lifecycle {
    ignore_changes = [attached_disk]
  }
  provisioner "file" {
    source      = "perquisites.sh"
    destination = "/tmp/perquisites.sh"
    connection {
      host        = google_compute_address.static.address
      type        = "ssh"
      user        = var.user
      private_key = file(var.privatekeypath)
    }
  }
  provisioner "local-exec" {
    command = "echo ${google_compute_address.static.address} > inventory "
  }
  provisioner "local-exec" {
    command = "ssh-keygen -R ${google_compute_address.static.address}"
  }
  provisioner "local-exec" {
    command = "ansible-playbook /root/playbooks/aio.yaml"
  }
}

resource "google_compute_attached_disk" "default3" {
  disk     = google_compute_disk.default2.id
  instance = google_compute_instance.node1.id
}

resource "google_compute_disk" "default2" {
  name  = "disk1"
  type  = "pd-balanced"
  zone  = "us-central1-a"
  image = "centos-7-v20210609"
  size  = 100 // in GB (alternative: 300)
}

And here is the ansible.cfg file:

[defaults]
inventory = ./inventory
deprecation_warnings=False
remote_user = rohan
host_key_checking = False
private_key_file = ./lastkey

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = False

If anyone notices, in the Terraform code I have used a provisioner that connects over SSH, and that is working; I can see a log showing it connected.

But in Ansible, it is showing:

logs
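Since both the Terraform provisioner and Ansible use the same SSH layer, a useful first check is whether the private key named in ansible.cfg (./lastkey) actually matches the public key installed on the target. The sketch below demonstrates the fingerprint comparison with a throwaway key pair; in practice you would run `ssh-keygen -lf` on ./lastkey and compare against the key in the instance metadata.

```shell
# Demonstration with a throwaway key; in practice run ssh-keygen -lf on
# ./lastkey (the private_key_file from ansible.cfg) and on the installed
# public key -- matching fingerprints mean the pair is consistent.
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$tmp/demo" -C demo >/dev/null

# Both halves of a pair report the same fingerprint:
ssh-keygen -lf "$tmp/demo"
ssh-keygen -lf "$tmp/demo.pub"

# Then re-run the ping with verbosity to see which identity files are offered:
# ansible ansible-target -m ping -vvvv
```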

2 Answers


  1. Chosen as BEST ANSWER
    1. Generate an SSH key on the master VM: ssh-keygen -o -C "mykey" -f ~/.ssh/id_rsa
    2. Then copy the public key into the SSH keys metadata section of the GCP VM.
    3. Make sure that when you connect to that instance, you use the matching private key.

    Note: UFW should not be installed
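    A minimal sketch of the three steps above, assuming the key lives at ~/.ssh/ansible_gcp (a hypothetical path) and the target user is apigeehybrid:

```shell
# Sketch of the steps above; the key path is a hypothetical choice.
mkdir -p "$HOME/.ssh"
KEY="$HOME/.ssh/ansible_gcp"

# 1. Generate the key pair on the master VM (empty passphrase for automation):
ssh-keygen -o -t rsa -C "mykey" -N "" -f "$KEY"

# 2. GCE instance metadata expects "USERNAME:KEY", so prefix the public key
#    and paste the resulting line into the VM's SSH keys metadata:
sed 's/^/apigeehybrid:/' "$KEY.pub"

# 3. Connect with the matching private key (uncomment with the real IP):
# ssh -i "$KEY" apigeehybrid@GCP_VM_IP 'echo ok'
```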


  2. Based on the information you gave us and the error message (Permission denied (publickey,gssapi-keyex,gssapi-with-mic)) you provided, you need to create an SSH key pair, e.g.:

    ssh-keygen -t ed25519 -f ~/.ssh/<KEY_FILENAME> -C <USER>

    (for a full list of options see: https://man.openbsd.org/ssh-keygen)

    and then add the PUBLIC KEY (not the private key) for the apigeehybrid user on your GCE instance, either manually, by putting it into the ~apigeehybrid/.ssh/authorized_keys file, or in one of the GCP-native ways:
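    For the manual route, a sketch of the conventional steps, run as the apigeehybrid user on the GCE instance (the PUBKEY value below is placeholder key material, not a real key):

```shell
# PUBKEY is placeholder key material -- substitute the contents of your
# public key file (e.g. the .pub file generated on the master VM).
PUBKEY='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLACEHOLDER mykey'

# Append to authorized_keys with the permissions sshd insists on:
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
printf '%s\n' "$PUBKEY" >> "$HOME/.ssh/authorized_keys"
chmod 600 "$HOME/.ssh/authorized_keys"
```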

    OS Login

    gcloud compute os-login ssh-keys add \
        --key-file=KEY_FILE_PATH \
        --project=PROJECT \
        --ttl=EXPIRE_TIME
    

    Metadata

    Run the gcloud compute instances describe command to get the metadata for the VM:

    gcloud compute instances describe VM_NAME
    

    Replace VM_NAME with the name of the VM for which you need to add or remove public SSH keys.

    The output is similar to the following:

    ...
    metadata:
    ...
    - key: ssh-keys
      value: |-
        cloudysanfrancisco:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAu5kKQCPF...
        baklavainthebalkans:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQDx3FNVC8... google-ssh {"userName":"baklavainthebalkans","expireOn":"2021-06-14T16:59:03+0000"}
    ...
    

    Copy the ssh-keys metadata value.

    Create and open a new text file on your workstation.

    In the file, paste the list of keys that you just copied.

    Add your new key at the end of the list, in one of the following formats:

    Format for a key without an expiration time:

    USERNAME:KEY_VALUE
    

    Format for a key with an expiration time:

    USERNAME:KEY_VALUE google-ssh {"userName":"USERNAME","expireOn":"EXPIRE_TIME"}
    

    Replace the following:

    KEY_VALUE: the public SSH key value

    USERNAME: the username for the SSH key, specified when the key was created

    EXPIRE_TIME: the time the key expires, in ISO 8601 format. For example: 2021-12-04T20:12:00+0000

    Save and close the file.

    Run the gcloud compute instances add-metadata command to set the ssh-keys value:

    gcloud compute instances add-metadata VM_NAME --metadata-from-file ssh-keys=KEY_FILE
    

    Replace the following:

    VM_NAME: the VM you want to add the SSH key for

    KEY_FILE with one of the following:

    • The path to the file you created in the previous step, if the VM had existing SSH keys

    • The path to your new public SSH key file, if the VM didn’t have existing SSH keys
