I am using GitLab's CI/CD pipeline to build an image (2,080 GB); its artifacts are saved in an S3 object storage. When trying to push the artifacts, the GitLab Runner throws the following error:
ERROR: Uploading artifacts as "archive" to coordinator | 413 Request Entity Too Large | id=96757 responseStatus=413 Request Entity Too Large
Additional information:
GitLab is self-hosted and deployed in a Kubernetes cluster.
- GitLab v15.9.1
- GitLab Workhorse v15.9.1
- GitLab Runner v15.4.2
After some research, I made two changes:
I increased the "Maximum artifacts size" setting in GitLab at the instance, group, and project levels:
(screenshot of the GitLab configuration)
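For reference, the instance-level value can also be changed through GitLab's application settings API; a minimal sketch, where the host and admin token are placeholders and the value is given in megabytes:

# Hypothetical host and admin token; max_artifacts_size is in MB.
curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
  "https://gitlab.example.com/api/v4/application/settings?max_artifacts_size=5120"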
and I changed the client_max_body_size of ingress-nginx:
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  client-body-timeout: "600"
  client_max_body_size: "0"
  enable-vts-status: "false"
  proxy-body-size: "0"
  proxy-buffer-size: 128k
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  use-forwarded-headers: "true"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
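As far as I know, ingress-nginx reads proxy-body-size (which maps to nginx's client_max_body_size directive) rather than a client_max_body_size key, and the same limit can also be raised per Ingress with an annotation; a sketch with placeholder names for my setup:

# Hypothetical namespace and Ingress name; "0" disables the size check.
kubectl -n gitlab annotate ingress gitlab-webservice-default \
  nginx.ingress.kubernetes.io/proxy-body-size="0" --overwrite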
I also opened a ticket with my S3 service provider and asked whether they have a limit on uploads, but they told me it's unlimited.
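To double-check that on my side, I could also upload a large object directly to the bucket, bypassing GitLab entirely; the endpoint and bucket names below are placeholders:

# Create a multi-GB test object and push it straight to the bucket.
dd if=/dev/zero of=big-test.bin bs=1M count=3000
aws --endpoint-url https://s3.example.com s3 cp big-test.bin s3://gitlab-artifacts-bucket/big-test.bin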
The error message is still the same, so my question is: does anyone know what is causing this?
Every little hint is appreciated.
Answers
An nginx limit in front of the object storage would be the most likely issue, as illustrated here, for instance:
The error you are seeing is returned by nginx, not MinIO. There is no limitation related to object size imposed by MinIO.
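One way to narrow this down is to send an oversized request straight at the GitLab host and check which layer answers the 413; if the response headers come from nginx, a proxy in the path is enforcing the limit. A rough sketch, with gitlab.example.com standing in for the instance:

# Build a payload larger than the suspected limit (~1 GiB here),
# stream it with a PUT, and dump only the response headers.
dd if=/dev/zero of=payload.bin bs=1M count=1024
curl -sk -D - -o /dev/null -T payload.bin https://gitlab.example.com/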