I have a deployment that needs the IP address of the Cloud Redis instance.
I'm creating the Cloud Redis instance via Config Connector:
apiVersion: redis.cnrm.cloud.google.com/v1beta1
kind: RedisInstance
metadata:
  name: redis-name
  annotations:
    cnrm.cloud.google.com/project-id: project-id
spec:
  region: region
  displayName: Cloud Redis
  tier: BASIC
  memorySizeGb: 1
  authorizedNetworkRef:
    external: projects/project-id/global/networks/network-name
I have a deployment where I want to add this via an env var:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
spec:
  template:
    spec:
      containers:
        - name: web
          env:
            - name: REDIS_HOST
              value: "needs to be replaced"
I have tried to replace it a few ways, with no success:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
bases:
  - ../../base
replacements:
  - source:
      kind: RedisInstance
      name: redis-name
      namespace: my-namespace
      fieldPath: status.host
I get the error
fieldPath `status.host` is missing for replacement source RedisInstance.[noVer].[noGrp]/redis-name.my-namespace
I’ve also tried with
vars:
  - name: REDIS_HOST
    objref:
      kind: RedisInstance
      name: redis-name
      apiVersion: redis.cnrm.cloud.google.com/v1beta1
    fieldref:
      fieldpath: status.host
I'm assuming this can't be done because status doesn't exist until the resource is "live". Is there a better way to do this?
In Terraform I would be able to reference the existing resource. It seems like in Kustomize this isn't possible?
2 Answers
Here is another way to reference the Redis host/IP created by Config Connector in GKE from a Kustomization.
1. Create a kustomization with a REDIS_HOST env var (a sketch follows below). Note: this only works when cat base/redis.yaml already shows status.host.
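The kustomization from the original answer is not reproduced here; a minimal sketch of what it might look like, reusing the replacements approach from the question and assuming base/redis.yaml already carries status.host, is:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:          # "bases:" in older kustomize versions
  - ../../base
replacements:
  - source:
      kind: RedisInstance
      name: redis-name
      fieldPath: status.host
    targets:
      - select:
          kind: Deployment
          name: deployment-name
        fieldPaths:
          - spec.template.spec.containers.[name=web].env.[name=REDIS_HOST].value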
2. This is the Redis instance yaml (a sketch of what it looks like once exported from the live cluster follows). Your yaml is failing because the RedisInstance manifest originally doesn't have status.host, nor can you define it while creating the RedisInstance manifest.
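The instance yaml itself is not reproduced either; assuming it was exported from the live cluster (kubectl get redisinstance redis-name -o yaml > ../../base/redis.yaml), it would look roughly like this once trimmed, with the status block the replacement needs:

apiVersion: redis.cnrm.cloud.google.com/v1beta1
kind: RedisInstance
metadata:
  name: redis-name
spec:
  region: region
  tier: BASIC
  memorySizeGb: 1
status:
  host: 10.0.0.3   # example value; the real IP comes from the live resource
  port: 6379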
3. Deployment file (sketched below):
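A sketch of that deployment, assuming it mirrors the one in the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
spec:
  template:
    spec:
      containers:
        - name: web
          env:
            - name: REDIS_HOST
              value: "needs to be replaced"   # overwritten by the replacement above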
4. At last, I can see the REDIS_HOST env var inside the pod (one way to check is sketched below).
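One way to verify (the deployment and container names are assumed to match the sketches above):

kubectl exec deploy/deployment-name -c web -- printenv REDIS_HOST
# should print the value taken from status.host, e.g. 10.0.0.3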
I don't see any way you can supply the REDIS_HOST env var unless you create the Redis resource first and then update redis.yaml (kubectl get redisinstance -o yaml > ../../base/redis.yaml); only then does it look possible to supply it as an env var.

I recently had to do something like this. You can create a PostSync Job that runs
kubectl get redisinstance <your redisinstance name> -o json | jq -r .status.host
and inject it into your live deployment manifest via a ConfigMap. After all, the Redis instance is a K8s resource in Config Connector. The Job will need to be run with a K8s service account bound to a GCP service account via Workload Identity, and given get permission for redisinstances in the APIGroup redis.cnrm.cloud.google.com, and create and update permission for configmaps.
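A minimal sketch of what that could look like (the names, namespace, and image are assumptions, the PostSync hook annotation shown is the Argo CD convention implied by the wording above, and jsonpath is used in place of jq so a plain kubectl image suffices):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis-host-publisher
  namespace: my-namespace
  # annotate with iam.gke.io/gcp-service-account to bind this KSA to a GCP SA via Workload Identity
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: redis-host-publisher
  namespace: my-namespace
rules:
  - apiGroups: ["redis.cnrm.cloud.google.com"]
    resources: ["redisinstances"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    # create/update as described above; get/patch added because kubectl apply uses them
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: redis-host-publisher
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: redis-host-publisher
    namespace: my-namespace
roleRef:
  kind: Role
  name: redis-host-publisher
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: publish-redis-host
  namespace: my-namespace
  annotations:
    argocd.argoproj.io/hook: PostSync   # assumed: Argo CD PostSync hook
spec:
  template:
    spec:
      serviceAccountName: redis-host-publisher
      restartPolicy: Never
      containers:
        - name: publish
          image: bitnami/kubectl   # any image that ships kubectl
          command:
            - /bin/sh
            - -c
            - |
              HOST=$(kubectl get redisinstance redis-name -o jsonpath='{.status.host}')
              kubectl create configmap redis-host --from-literal=REDIS_HOST="$HOST" \
                --dry-run=client -o yaml | kubectl apply -f -

The deployment can then read REDIS_HOST from the redis-host ConfigMap (for example via configMapKeyRef) instead of a hard-coded value.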