
I'm having authentication issues when I try to connect a producer following the instructions from the Helm chart installation notes.

I'm installing this Kafka Helm chart:

https://artifacthub.io/packages/helm/bitnami/kafka?modal=install

It installs fine with helm install, and this is the output from the installation:

NAME: kafka
LAST DEPLOYED: Mon Dec  4 14:55:43 2023
NAMESPACE: develop
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 26.4.3
APP VERSION: 3.6.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.develop.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-controller-0.kafka-controller-headless.develop.svc.cluster.local:9092
    kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local:9092
    kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092

The CLIENT listener for Kafka client connections from within your cluster have been configured with the following security settings:
    - SASL authentication

To connect a client to your Kafka, you need to create the 'client.properties' configuration files with the content below:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace develop -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.6.0-debian-11-r2 --namespace develop --command -- sleep infinity
    kubectl cp --namespace develop /path/to/client.properties kafka-client:/tmp/client.properties
    kubectl exec --tty -i kafka-client --namespace develop -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --producer.config /tmp/client.properties \
            --broker-list kafka-controller-0.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --consumer.config /tmp/client.properties \
            --bootstrap-server kafka.develop.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
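
Note that the password line in client.properties relies on shell command substitution, so I assume the file is meant to be generated on a machine where kubectl has access to the cluster, not inside the Kafka pod. A minimal sketch of that step as I understand it, reusing only the commands from the notes above:

    # On a workstation with cluster access: resolve the password first...
    PASSWORD="$(kubectl get secret kafka-user-passwords --namespace develop \
        -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)"

    # ...then write client.properties with the literal value inlined
    printf '%s\n%s\n%s\n' \
        'security.protocol=SASL_PLAINTEXT' \
        'sasl.mechanism=SCRAM-SHA-256' \
        "sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=\"user1\" password=\"${PASSWORD}\";" \
        > client.properties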

The pods are correctly created and running:

k get pods

NAME                                                 READY   STATUS      RESTARTS        AGE
kafka-controller-0                                   2/2     Running     1 (2m42s ago)   3m33s
kafka-controller-2                                   2/2     Running     1 (2m42s ago)   3m33s
kafka-controller-1                                   2/2     Running     1 (2m36s ago)   3m33s

Then I follow the instructions from the output to create a kafka-client pod and connect to it:

k exec --tty -i kafka-client --namespace develop -- bash

I already copied client.properties into the pod as described above; as you can see, the file is there:

@kafka-client:/$ cat /tmp/client.properties 
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="user1" \
    password="$(kubectl get secret kafka-user-passwords --namespace develop -o jsonpath='{.data.client-passwords}' | base64 -d | cut -d , -f 1)";
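
One thing I'm unsure about: as shown above, the copied file still contains the literal $(kubectl ...) text rather than the decoded password, and as far as I know the JAAS parser does not perform shell expansion. If the password has to be inlined, I assume the line should end up looking like this (value illustrative):

    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="user1" \
        password="<decoded-client-password>";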

Finally, I try to start the producer, but I get an authentication error:

@kafka-client:/$ kafka-console-producer.sh \
                --producer.config /tmp/client.properties \
                --broker-list kafka-controller-0.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local:9092,kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092 \
                --topic test

These are the producer logs:

[2023-12-04 14:06:37,028] ERROR [Producer clientId=console-producer] Connection to node -3 (kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local/10.1.63.93:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2023-12-04 14:06:37,028] WARN [Producer clientId=console-producer] Bootstrap broker kafka-controller-2.kafka-controller-headless.develop.svc.cluster.local:9092 (id: -3 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2023-12-04 14:06:37,571] ERROR [Producer clientId=console-producer] Connection to node -2 (kafka-controller-1.kafka-controller-headless.develop.svc.cluster.local/10.1.63.89:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)

Checking the logs of the first Kafka pod, I can see some of the configuration applied at startup:

sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism.controller.protocol = PLAIN
sasl.mechanism.inter.broker.protocol = PLAIN
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
sasl.server.callback.handler.class = null
sasl.server.max.receive.size = 524288

And this is the specific error in the Kafka logs regarding the authentication failure:

[2023-12-04 14:06:41,955] INFO [SocketServer listenerType=BROKER, nodeId=0] Failed authentication with /10.1.63.92 (channelId=10.1.63.90:9092-10.1.63.92:55104-1) (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
[2023-12-04 14:07:09,798] INFO [RaftManager id=0] Node 2 disconnected. (org.apache.kafka.clients.NetworkClient)
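
I also don't know how to confirm that the SCRAM credentials for user1 were actually created on the brokers. If I understand the stock Kafka tooling correctly, something like this from inside a broker pod should list them (it needs a client config that can authenticate, e.g. with the inter-broker user; /tmp/admin.properties here is a hypothetical file):

    kafka-configs.sh --bootstrap-server localhost:9092 \
        --command-config /tmp/admin.properties \
        --describe --entity-type users --entity-name user1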

Any ideas? I tried a few different things, such as overriding some of the chart's Helm values for the SASL configuration, but without any progress.
I believe this should work as is, since the chart should already be configured to accept Kafka clients with the settings provided in the installation output, but clearly I'm missing a step somewhere.
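
For reference, the SASL override I experimented with looked roughly like this in my values file (key names as I read them from the chart's values; password illustrative), passed via helm install -f values.yaml:

    sasl:
      client:
        users:
          - user1
        passwords: "changeme"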

2 Answers


  1. Have you tried disabling authentication? You can always apply the security rules at the point where you process the messages in your listener.

  2. I'm facing the same issue. Did you find a solution?
