
Up to the cert-manager step, every pod was running fine, following the Aerospike docs. But after installing the operator, the operator pods go into CrashLoopBackOff.

Installing the operator using:

git clone https://github.com/aerospike/aerospike-kubernetes-operator.git
git checkout 2.5.0
cd aerospike-kubernetes-operator/helm-charts
helm install aerospike-kubernetes-operator ./aerospike-kubernetes-operator --set replicas=3

Pods running:

PS C:\Users\B.Jimmy\aerospike-kubernetes-operator-1.0.0> kubectl get pods -A
NAMESPACE      NAME                                             READY   STATUS             RESTARTS         AGE
cert-manager   cert-manager-576c79cb45-xkr88                    1/1     Running            0                4h41m
cert-manager   cert-manager-cainjector-664f76bc59-4b5kz         1/1     Running            0                4h41m
cert-manager   cert-manager-webhook-5d4fd5cb7f-f96qx            1/1     Running            0                4h41m
default        aerospike-kubernetes-operator-7bbb8745c8-86884   1/2     CrashLoopBackOff   36 (59s ago)     159m
default        aerospike-kubernetes-operator-7bbb8745c8-jzkww   1/2     Error              36 (5m14s ago)   159m
kube-system    aws-node-7b4nb                                   1/1     Running            0                21h
kube-system    aws-node-llnzh                                   1/1     Running            0                21h
kube-system    coredns-6c97f4f789-fhnq6                         1/1     Running            0                21h
kube-system    coredns-6c97f4f789-wmcdm                         1/1     Running            0                21h
kube-system    kube-proxy-5gwld                                 1/1     Running            0                21h
kube-system    kube-proxy-z2nwk                                 1/1     Running            0                21h
olm            catalog-operator-56db4cd676-hln6h                1/1     Running            0                21h
olm            olm-operator-5b8f867598-7h9z6                    1/1     Running            0                21h
olm            operatorhubio-catalog-bd8rq                      1/1     Running            0                178m
olm            packageserver-7cbbc9c85f-jms5f                   1/1     Running            0                21h
olm            packageserver-7cbbc9c85f-z45jg                   1/1     Running            0                21h

Crashing Pod Log:

PS C:\Users\B.Jimmy\aerospike-kubernetes-operator-1.0.0> kubectl logs -f aerospike-kubernetes-operator-7bbb8745c8-86884
Defaulted container "manager" out of: manager, kube-rbac-proxy
flag provided but not defined: -config
Usage of /manager:
  -health-probe-bind-address string
        The address the probe endpoint binds to. (default ":8081")
  -kubeconfig string
        Paths to a kubeconfig. Only required if out-of-cluster.
  -leader-elect
        Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.
  -metrics-bind-address string
        The address the metric endpoint binds to. (default ":8080")
  -zap-devel
        Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error) (default true)
  -zap-encoder value
        Zap log encoding (one of 'json' or 'console')
  -zap-log-level value
        Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
  -zap-stacktrace-level value
        Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').

Do I need to configure nginx ingress after installing cert-manager?

2 Answers


  1. Based on the provided logs, it seems the Aerospike Kubernetes operator pod is hitting a command-line flag error: a `-config` flag is being passed that the "manager" container's binary does not define.

    The error message indicates that the flag is not recognized:

    flag provided but not defined: -config

    This suggests that there might be a mismatch between the operator version and the Helm chart configuration. The Aerospike Kubernetes operator might have undergone some changes since version 2.5.0, and the Helm chart you are using might not be compatible with this specific version.

    To troubleshoot this issue, here are some suggestions:

    Check Compatibility: Verify if the Helm chart version is compatible with the Aerospike operator version you are trying to install. Check the official documentation and release notes of the operator and the Helm chart for any version-specific requirements or changes.

    Review Command: Ensure that the Helm install command is correct. Verify if there are any specific options or configurations required for the Helm chart during installation.

    Update Helm Chart: If there is a newer version of the Helm chart available that matches your Aerospike operator version, try using that version instead.

    Operator Configuration: Review any configuration options required for the Aerospike operator itself. It’s possible that some parameters need to be set in the YAML file during installation.

    Check Dependencies: Confirm if there are any dependencies required for the Aerospike operator to function properly, such as cert-manager, nginx ingress, or DNS configurations. Make sure they are correctly set up.

    Debugging: If the issue persists, you can enable more detailed logging or debugging options for the Aerospike operator to get more insights into the problem. Check the operator’s documentation for information on how to enable additional logging.
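    The "Check Compatibility" suggestion above can be made concrete: a Helm chart's `Chart.yaml` carries `version` and `appVersion` fields, and those should line up with the operator release you checked out. A minimal offline sketch of the check — the directory layout mirrors the repo's `helm-charts/` folder, but the `Chart.yaml` here is a fabricated stand-in with illustrative values, not the real chart:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Fabricated stand-in for helm-charts/aerospike-kubernetes-operator/Chart.yaml;
# the field names are standard Helm, the values are illustrative only.
mkdir -p aerospike-kubernetes-operator
cat > aerospike-kubernetes-operator/Chart.yaml <<'EOF'
apiVersion: v2
name: aerospike-kubernetes-operator
version: 2.5.0
appVersion: 2.5.0
EOF

# Run this same grep inside your actual clone after checking out the tag;
# if it reports a different version, the chart and operator are mismatched.
grep -E '^(version|appVersion):' aerospike-kubernetes-operator/Chart.yaml
```

    If the grep inside your real clone shows something other than the release you intended (e.g. a master-branch version), that is the mismatch the error points at.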

    As for the specific mention of nginx ingress or automatic TLS certificate verification, they might not be directly related to the current error. However, it’s possible that they could be necessary for the proper functioning of Aerospike, depending on your use case and requirements.

    Remember, always refer to the official documentation of the Aerospike operator and Helm chart for accurate installation and configuration instructions. Additionally, check for any updates or community discussions related to the specific version you are using, as issues and solutions might have been addressed in subsequent releases or forum posts.

    If you encounter any other issues or need further assistance, feel free to provide more details, and I’ll do my best to help!

  2. I can recreate similar behavior by following the steps you provided. I think there is an accidental mix-up in those steps regarding the branch checkout, so it's attempting to use the master branch instead of 2.5.0: `git checkout 2.5.0` is run before `cd`-ing into the cloned repository, so the checkout never takes effect.

    The steps should be:

    git clone https://github.com/aerospike/aerospike-kubernetes-operator.git
    cd aerospike-kubernetes-operator/helm-charts
    git checkout 2.5.0
    helm install aerospike-kubernetes-operator ./aerospike-kubernetes-operator --set replicas=3
    

    Notice that the `cd` and `git checkout` commands are flipped.
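    The order matters because `git clone` creates a subdirectory, so a `git checkout` issued from the parent directory never runs inside the clone, and the clone silently stays on its default branch. This can be reproduced offline with a throwaway local repo standing in for the Aerospike one (a sketch, not the real repo):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in "upstream" repo with a 2.5.0 tag on an older commit and a
# newer commit at the tip of the default branch (mirrors the real repo's shape).
git init -q upstream
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m "tagged release"
git -C upstream tag 2.5.0
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m "tip of default branch"

git clone -q upstream clone           # like `git clone https://github.com/...`
git checkout 2.5.0 2>/dev/null \
  && echo "unexpected" \
  || echo "checkout from the parent dir fails"   # still outside the clone

cd clone                              # cd into the clone FIRST...
git checkout -q 2.5.0                 # ...then the checkout takes effect
git describe --tags                   # prints: 2.5.0
```

    With the original ordering, the checkout error scrolls by and the subsequent `helm install` quietly uses the master-branch chart against the 2.5.0 expectations.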

    **NOTE: You may need to uninstall the current Helm chart before reinstalling.**

    Example:

    helm uninstall aerospike-kubernetes-operator
    

    As a side note: I see you also have OLM namespaces already, so you may benefit from using the OLM installation for AKO, found here: https://docs.aerospike.com/cloud/kubernetes/operator/install-operator-operatorhub

    Hopefully this helps!
