
I’m running docker-registry inside a deployment with an ingress setup (nginx-ingress), and I use Cloudflare. I started getting issues when trying to push images larger than 1GB: if a layer is a bit larger than that, I just get "Retrying in x", and the push starts again from 0. Strangely enough, pushing any layer below that threshold passes without issue and the push succeeds.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
    name: {{ .Values.name }}
    annotations:
        kubernetes.io/ingress.class: nginx
        cert-manager.io/cluster-issuer: {{ .Values.certManager.name }}
        nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
        nginx.org/client-max-body-size: "0"
        nginx.ingress.kubernetes.io/proxy-buffering: "off"
        nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
        nginx.ingress.kubernetes.io/proxy_ignore_headers: "X-Accel-Buffering"
        nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
        nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "600"
        nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"
        nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
        nginx.ingress.kubernetes.io/proxy-body-size: "8192m"
        kubernetes.io/tls-acme: 'true'
        nginx.ingress.kubernetes.io/configuration-snippet: |
            more_set_headers "proxy_http_version 1.1";
            more_set_headers "X-Forwarded-For $proxy_add_x_forwarded_for";
            more_set_headers "Host $http_host";
            more_set_headers "Upgrade $http_upgrade";
            more_set_headers "Connection keep-alive";
            more_set_headers "X-Real-IP $remote_addr";
            more_set_headers "X-Forwarded-For $proxy_add_x_forwarded_for";
            more_set_headers "X-Forwarded-Proto: https";
            more_set_headers "X-Forwarded-Ssl on";
           
 
    labels:
        app: {{ .Values.name }}
spec:
    tls:
        - hosts: {{- range  .Values.certificate.dnsNames }}
               - {{ . }}
            {{- end}}
          secretName: {{ .Values.certificate.secretName }}
    rules:
        - host: {{ .Values.certManager.mainHost }}
          http:
              paths:
                  - path: /
                    pathType: Prefix
                    backend:
                        service:
                            name: {{ .Values.service.name }}
                            port:
                                number: {{ .Values.service.port }}

I want to be able to push an image of any size, as long as storage is available.

2 Answers


  1. Blob unknown Error: This error may be returned when a blob is unknown to the registry in a specified repository. This can be returned with a standard get or if a manifest references an unknown layer during upload.

    On Failure: Authentication Required

    401 Unauthorized
    WWW-Authenticate: <scheme> realm="<realm>", ...
    Content-Length: <length>
    Content-Type: application/json
    
    {
        "errors": [
            {
                "code": <error code>,
                "message": "<error message>",
                "detail": ...
            },
            ...
        ]
    }
    

    The client is not authenticated.

    How to authenticate: Registry V1 clients first contact the index to initiate a push or pull. Under the Registry V2 workflow, clients should contact the registry first. If the registry server requires authentication, it will return a 401 Unauthorized response with a WWW-Authenticate header detailing how to authenticate to this registry.
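    As a quick check, you can send an anonymous probe to the registry's /v2/ endpoint and read the WWW-Authenticate challenge it returns. The host names below are placeholders, and the live curl call is shown commented out:

    ```shell
    # A protected registry answers an anonymous probe with 401 and a challenge header,
    # e.g. (registry.example.com is a placeholder for your registry host):
    #   curl -sI https://registry.example.com/v2/
    #   HTTP/1.1 401 Unauthorized
    #   WWW-Authenticate: Bearer realm="https://auth.example.com/token",service="registry"

    # The realm to authenticate against can be pulled out of that header with sed:
    hdr='WWW-Authenticate: Bearer realm="https://auth.example.com/token",service="registry"'
    printf '%s\n' "$hdr" | sed -n 's/.*realm="\([^"]*\)".*/\1/p'
    # prints: https://auth.example.com/token
    ```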

    Error authorizing context: this can occur while pushing an image over a slow broadband connection. In the linked report the image was only 200MB, nowhere near the 5GB mentioned above, which suggests this is a timeout problem and not a size problem.

    Check the Docker Context and Large Docker Images Cause Authorization Errors #1944 for more information.

    Cloudflare side: also check the output of $ time docker push {your-image}

    It appears Cloudflare successfully connects to the origin web server, but the origin does not provide an HTTP response before the default 100-second connection times out. See the troubleshooting steps for the Cloudflare 524 timeout error.

    Edit :

    Check Cloudflare status:
    There were multiple "DNS delays" and "Cloudflare API service issues" in the past few hours, which might have an effect on your installation.

    Also check Downloads fail for files more than 1Gb if download speed is less than 8MB/sec; it may help resolve your issue.
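    The numbers in that report line up with the 100-second cutoff; a quick back-of-the-envelope check (the 8 MB/s rate is taken from the linked issue):

    ```shell
    # Cloudflare's non-Enterprise plans drop a proxied connection after ~100 s
    # without a response. At 8 MB/s, a 1 GiB blob needs 1024/8 = 128 s to
    # transfer, so the upload is cut off before the registry can answer.
    secs=$(( 1024 / 8 ))
    echo "upload time: ${secs}s (Cloudflare limit: ~100s)"
    # prints: upload time: 128s (Cloudflare limit: ~100s)
    ```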

    Note that Cloudflare only offers raising this 100-second timeout on Enterprise plans.

  2. First, verify you are using nginx-ingress and not ingress-nginx, which uses a different configuration for the body size:

    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    

    Next, track down where the connection is being dropped by checking the proxy and registry logs: Cloudflare, the nginx ingress pod, and the registry pod. There’s no need to debug all three simultaneously; figure out which one of them is rejecting the large PUT requests. If the issue is Cloudflare, e.g. if you don’t see any logs in your nginx ingress or registry containers, then consider pushing directly to your nginx ingress, bypassing Cloudflare. The logs may also indicate whether the failure is based on time rather than size, which would be a different setting to adjust.
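    For pulling those logs, something along these lines works; the deployment and namespace names are assumptions for a default install, so adjust them to your cluster:

    ```shell
    # Live-cluster commands (names are assumptions, adjust to your install):
    #   kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=100
    #   kubectl logs deploy/docker-registry --tail=100
    # In the ingress access log, a 413 on the blob-upload PATCH/PUT means nginx
    # itself rejected the request body. A sample line in the default log format:
    line='203.0.113.7 - - [01/Jan/2024:12:00:00 +0000] "PATCH /v2/app/blobs/uploads/x HTTP/1.1" 413 0'
    # the status code is the second-to-last whitespace-separated field
    printf '%s\n' "$line" | awk '{print "status:", $(NF-1)}'
    # prints: status: 413
    ```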

    And finally, as a workaround if you can’t push one large blob, there is an option to do a chunked blob put to the registry, which breaks the upload into smaller requests, each of which should be below the proxy limits. Docker does a chunked upload by default, but with only a single chunk, and I’m not aware of any way to change its settings. My own project is regclient, and it can copy images from an OCI layout, or exported from the docker engine, to a registry. With regclient/regctl working from an exported OCI layout or docker save output, that could be implemented with the following:

    # export the image from the local docker engine first (skip if you already have a tar)
    docker save $registry/$repo:$tag -o $file_tar
    regctl registry login $registry
    # when a blob exceeds 500M, push it as 50M chunks; note each chunk is held in RAM
    regctl registry set --blob-max 500000000 --blob-chunk 50000000 $registry
    regctl image import $registry/$repo:$tag $file_tar
    
    