
I have the following project:
https://github.com/ably/kafka-connect-ably

Running the dockerfile locally works perfectly well.
I have tried a few methods to get it working in k8s…

I have tried Kompose. It created the expected .yaml files, as well as a persistent volume correctly linked with the mount path "config", but at runtime I get the error:

java.nio.file.NoSuchFileException: /config/docker-compose-worker-distributed.properties

Is there a way I can add the properties file to the persistent volume?
I have tried

kubectl cp config/docker-compose-worker-distributed.properties connector:/

but I get:

error: unable to upgrade connection: container not found ("connector")

I have also tried tagging the image with docker tag mycontainerreg.azure.io/name and then pushing it with docker push, but that also fails, perhaps because not enough resources are allocated to it?

Mainly I just need the connector to pipe data from Apache Kafka running in AKS to Ably. I understand I can add the connector to Confluent Cloud if I purchase Enterprise, but that's expensive!

2 Answers


  1. You need to create a ConfigMap that contains the properties and reference it in your deployment.yml.

    This is an example of a ConfigMap named kafka-properties, defined in a file such as properties.yml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kafka-properties
    data:
      propertyA: valueA

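    Alternatively, since the properties file already exists in the repo, you can generate the ConfigMap directly from it. A sketch, assuming you run it from the project root (the ConfigMap name is illustrative):

    ```shell
    # Creates a ConfigMap whose key is the file name and whose value is the file content
    kubectl create configmap kafka-properties \
      --from-file=config/docker-compose-worker-distributed.properties
    ```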

    In your deployment YAML file you need to add a reference to the ConfigMap you created:

     volumes:
       - name: kafka-properties-volume
         configMap:
           name: kafka-properties

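    The volume also has to be mounted into the container so the worker can find the file. A minimal sketch of the matching volumeMounts entry, assuming the container name is connector and the file should appear under /config:

    ```yaml
    containers:
      - name: connector        # assumed container name
        volumeMounts:
          - name: kafka-properties-volume   # must match the volume name above
            mountPath: /config              # file appears as /config/<key>
    ```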

    Check the official documentation about ConfigMaps.

    Hope this helps.

  2. You don’t need a ConfigMap. The file may be getting overridden by the container’s entrypoint script.

    Install the linked plugin into a container such as the Confluent Connect image (that’s what the project’s Dockerfile already does, though it uses Maven rather than the Confluent Hub client; both work with Apache Kafka). Alternatively, use Strimzi (rather than Kompose): you can then configure the worker properties file either with environment variables like CONNECT_BOOTSTRAP_SERVERS, or natively with the Strimzi KafkaConnect resource spec in k8s resource files. Either way, the distributed worker properties file is templated as part of the container startup process.
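    As a rough sketch, a Strimzi KafkaConnect resource might look like this (the bootstrap address, image name, and topic names here are assumptions to adapt to your AKS cluster):

    ```yaml
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaConnect
    metadata:
      name: ably-connect
      annotations:
        strimzi.io/use-connector-resources: "true"  # manage connectors as k8s resources
    spec:
      replicas: 1
      bootstrapServers: my-cluster-kafka-bootstrap:9092   # assumed Kafka service address
      image: mycontainerreg.azure.io/kafka-connect-ably   # assumed image with the plugin installed
      config:
        group.id: connect-cluster
        offset.storage.topic: connect-offsets
        config.storage.topic: connect-configs
        status.storage.topic: connect-status
    ```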

    Connectors themselves are configured with a JSON payload to the Kafka Connect REST API, not with files.
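    For example, a sketch of such a payload, POSTed to the worker’s REST API (by default on port 8083, at /connectors). The connector class and config keys here are assumptions — check the kafka-connect-ably README for the exact names:

    ```json
    {
      "name": "ably-sink",
      "config": {
        "connector.class": "com.ably.kafka.connect.ChannelSinkConnector",
        "topics": "my-topic",
        "channel": "my-ably-channel",
        "client.key": "<ABLY_API_KEY>"
      }
    }
    ```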
