I have a Terraform config that creates a Kubernetes (GKE) cluster on GCP and installs an ingress controller and cert-manager using Helm.
The only part missing is the Let's Encrypt ClusterIssuer (when I deploy letsencrypt.yaml manually, everything works fine).
My Terraform config:
# provider
provider "kubernetes" {
  host                   = google_container_cluster.runners.endpoint
  cluster_ca_certificate = base64decode(google_container_cluster.runners.master_auth.0.cluster_ca_certificate)
  token                  = data.google_client_config.current.access_token
}

provider "helm" {
  kubernetes {
    host                   = google_container_cluster.runners.endpoint
    cluster_ca_certificate = base64decode(google_container_cluster.runners.master_auth.0.cluster_ca_certificate)
    token                  = data.google_client_config.current.access_token
  }
}
# create namespace for ingress controller
resource "kubernetes_namespace" "ingress" {
  metadata {
    name = "ingress"
  }
}

# deploy ingress controller
resource "helm_release" "ingress" {
  name       = "ingress"
  namespace  = kubernetes_namespace.ingress.metadata[0].name
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  values = [
    file("./helm_values/ingress.yaml")
  ]
  set {
    name  = "controller.service.loadBalancerIP"
    value = google_compute_address.net_runner.address
  }
}
# create namespace for cert-manager
resource "kubernetes_namespace" "cert" {
  metadata {
    name = "cert-manager"
  }
}

# deploy cert-manager
resource "helm_release" "cert" {
  name       = "cert-manager"
  namespace  = kubernetes_namespace.cert.metadata[0].name
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  version    = "v1.4.0"
  depends_on = [helm_release.ingress]
  set {
    name  = "installCRDs"
    value = "true"
  }
}
my letsencrypt.yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
Any idea how to deploy the ClusterIssuer using Terraform?

2 Answers
You can apply the YAML file to the cluster directly, or you can use the kubectl Terraform provider to apply it:
https://registry.terraform.io/providers/gavinbunney/kubectl/latest/docs#installation
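A minimal sketch of what that looks like, assuming the kubectl provider is declared in required_providers and configured with the same cluster credentials as the kubernetes provider in the question:

```hcl
# Sketch: applies the existing letsencrypt.yaml as-is via the
# gavinbunney/kubectl provider. The depends_on ensures cert-manager
# (and its CRDs) exist before the ClusterIssuer is created.
resource "kubectl_manifest" "letsencrypt" {
  yaml_body  = file("./letsencrypt.yaml")
  depends_on = [helm_release.cert]
}
```

The advantage over kubernetes_manifest is that kubectl_manifest does not need the CRD to exist at plan time, so it can go in the same apply as the cert-manager Helm release.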
Update: if you have not yet set up the Kubernetes provider to authenticate, you can follow the configuration examples at:
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
I recently did this most successfully using the tool tfk8s to migrate YAML files to Terraform. (You can also use Terraform's yamldecode.)

tfk8s cluster-issuer.yaml -o cluster-issuer.tf

This will create a working kubernetes_manifest resource. Here is an example of my entire Terraform script that installs cert-manager along with its CRDs and the ClusterIssuer.

NOTE: This tool creates resources of type kubernetes_manifest, and the Terraform docs state that it is not a stable resource to use in an initial apply, because the resource needs cluster access at plan time. In other words, create the cluster etc. first, then add the manifest files and apply again. Otherwise, you need to manually migrate each kubernetes_manifest into its own dedicated resource type (deployment, service, etc.).
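For the ClusterIssuer in the question, the conversion is mechanical; a rough sketch of what tfk8s emits (field names mirror the YAML one-to-one):

```hcl
# Sketch of the kubernetes_manifest resource tfk8s produces from the
# letsencrypt.yaml above. Requires the cert-manager CRDs to already be
# installed in the cluster at plan time.
resource "kubernetes_manifest" "letsencrypt_prod" {
  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "letsencrypt-prod"
    }
    spec = {
      acme = {
        email  = "[email protected]"
        server = "https://acme-v02.api.letsencrypt.org/directory"
        privateKeySecretRef = {
          name = "letsencrypt-prod"
        }
        solvers = [
          {
            http01 = {
              ingress = {
                class = "nginx"
              }
            }
          }
        ]
      }
    }
  }
}
```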