
I have a Flask web application packaged as a Docker image and deployed to a Kubernetes pod running on GKE. The application needs a few environment variables, which are included in the docker-compose.yaml like so:

...
services:
  my-app:
    build: 
      ...
    environment:
      VAR_1: foo
      VAR_2: bar
...

I want to keep these environment variables in the docker-compose.yaml so I can run the application locally if necessary. However, when I go to deploy this using a Kubernetes deployment, these variables are missing from the pod and it throws an error. The only way I have found to resolve this is to add the following to my deployment.yaml:

containers:
      - name: my-app
        ...
        env:
          - name: VAR_1
            value: foo
          - name: VAR_2
            value: bar
...

Is there a way to migrate the values of these environment variables directly from the Docker container image into the Kubernetes pod?

I have tried researching this in the Kubernetes and Docker documentation and through Google searches, and the only solutions I can find say to just include the environment variables in the deployment.yaml, but I’d like to keep them in the docker-compose.yaml so I can still run the container locally. I couldn’t find anything that explains how Docker container environment variables and Kubernetes environment variables interact.

3 Answers


  1. Kompose can translate Docker Compose files into Kubernetes resources:

    https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
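
    For example, a rough sketch of how that might look, run from the directory that holds the Compose file (the names of the generated manifests depend on your service names):

      kompose convert -f docker-compose.yaml
      # writes Kubernetes manifests such as my-app-deployment.yaml,
      # with the environment: entries translated into env: on the container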

  2. The docker-compose.yml file and the Kubernetes YAML file serve similar purposes; both explain how to create a container from a Docker image. The Compose file is only read when you’re running docker-compose commands, though; the configuration there isn’t read when deploying to Kubernetes and doesn’t make any permanent changes to the image.

    If something needs to be set as an environment variable but really is independent of any particular deployment system, set it as an ENV in your image’s Dockerfile.

    ENV VAR_1=foo
    ENV VAR_2=bar
    # and don't mention either variable in either Compose or Kubernetes config
    

    If you can’t specify it this way (e.g., database host names and credentials), then you need to include it in both files, as you’ve shown. Note that some of the configuration might look very different: a password might come from a host environment variable in Compose but from a Kubernetes Secret in Kubernetes.
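
    As a sketch of that difference (the DATABASE_PASSWORD variable and the my-app-secrets Secret are made-up names here), the same setting might be wired up like this in each file:

      # docker-compose.yaml: interpolate from the host shell's environment
      services:
        my-app:
          environment:
            DATABASE_PASSWORD: "${DATABASE_PASSWORD}"

      # deployment.yaml: read from a Kubernetes Secret
      env:
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-app-secrets      # hypothetical Secret name
              key: database-password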

  3. Let us assume that docker-compose and Kubernetes work the same way:
    both take a ready-to-use image and schedule a new container or pod based on it.

    The image accepts a set of environment variables; docker-compose passes those variables in one way and Kubernetes in another. It is only a matter of syntax.

    So you can use the same image with both Compose and Kubernetes, but the syntax for passing the environment variables will differ.

    If you want the variables to persist regardless of the deployment tool, you can always hardcode them in the image itself, in other words, in the Dockerfile you used to build the image.

    I don't recommend this approach, of course, and it might not work for you if you are using pre-built official images, but below is an example of a Dockerfile with an environment variable baked in.

    FROM alpine:latest
    
    # this is how you hardcode it: the value is baked into every container
    # created from this image
    ENV VAR_1=foo
    
    COPY helloworld.sh .
    
    RUN chmod +x /helloworld.sh
    
    CMD ["/helloworld.sh"]
    

    If you want to manage this in a better way, you can use an .env file with your docker-compose so that all of the variables can be updated in one place, especially when your Compose file has several apps that share the same variables.

    services:
      app1:
        image: ACRHOST/app1:latest
        env_file:
          - .env
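
    The .env file itself is just KEY=value lines that every service referencing it shares; a minimal sketch using the variables from the question:

      # .env (kept next to docker-compose.yaml)
      VAR_1=foo
      VAR_2=bar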
    

    And on the Kubernetes side, you can create a ConfigMap, link your pods to that ConfigMap, and then update the values in the ConfigMap only.

    https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

    kubectl create configmap <map-name> <data-source>
    

    Also note that you can populate the ConfigMap's values directly from the same .env file that you use with Docker (via --from-env-file); check the link above.
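
    For example (the my-app-env ConfigMap name is arbitrary), you could create the ConfigMap from the .env file and then inject all of its entries into the pod with envFrom:

      kubectl create configmap my-app-env --from-env-file=.env

      # deployment.yaml
      containers:
        - name: my-app
          ...
          envFrom:
            - configMapRef:
                name: my-app-env      # arbitrary ConfigMap name from above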
