When I try to upload files to S3 from the application deployed on our remote stand (test environment), any file of 1 MB or larger fails with this error:

com.amazonaws.SdkClientException: Unable to execute HTTP request: Connection reset by peer

Everything works locally; all files upload without errors.
Without the config

request.getRequestClientOptions().setReadLimit(customBufferSize);

I get an error like this instead:

The request to the service failed with a retryable reason, but resetting the request input stream has failed. See exception.getExtraInfo or debug-level logging for the original failure that caused this retry.; If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)

Yet locally, without any of these settings, everything is fine.
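For reference, here is how the read limit gets wired up (a minimal sketch; the +1 sizing follows the SDK v1 guidance that the mark/reset buffer should be one byte larger than the stream you expect to send, and the int cast is my own simplification assuming files stay well below 2 GB):

    // Make the mark/reset buffer large enough to replay the whole body on a retry.
    // AWS recommends setting the read limit to the expected stream size + 1.
    PutObjectRequest request = new PutObjectRequest(bucketName, filePath, file.getInputStream(), meta);
    request.getRequestClientOptions().setReadLimit((int) file.getSize() + 1);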
Earlier I caught a different error: the server address for the bucket was specified with https instead of http. Files of a couple of KB uploaded fine, but at around 100 KB I got an HTTP error <Remote end is closed>.
Method code:

public ResponseEntity uploadFile(String catalogId, MultipartFile file) {
    if (file == null || file.isEmpty()) {
        return ResponseEntity.status(400).body(new ErrorResponse("No file was provided"));
    }
    if (file.getSize() > maxFileSize * 1024 * 1024) {
        return ResponseEntity.status(PAYLOAD_TOO_LARGE)
                .contentType(MediaType.APPLICATION_JSON)
                .body(new ErrorResponse("File size must not exceed " + maxFileSize + " MB"));
    }
    log.info("Starting upload of file: {}", file.getOriginalFilename());
    String filePath = PathGenerator.generateFileUri(catalogId);
    try {
        // if (checkExists(filePath)) {
        //     log.info("File already exists, deleting it before creating a new one");
        //     deleteFile(filePath);
        // }
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentType(file.getContentType());
        meta.setContentLength(file.getSize());
        meta.addUserMetadata("originalFileName",
                URLEncoder.encode(requireNonNull(file.getOriginalFilename()), StandardCharsets.UTF_8));
        // Upload straight from the multipart input stream
        PutObjectRequest request = new PutObjectRequest(bucketName, filePath, file.getInputStream(), meta);
        if (isCustomBufferSize) {
            request.getRequestClientOptions().setReadLimit(customBufferSize);
        }
        awsConfig.getAwsClient().putObject(request);
        log.info("File uploaded successfully!");
        HashMap<String, String> response = new HashMap<>();
        response.put("fileUuid", filePath);
        return ResponseEntity.ok().body(response);
    } catch (Exception e) {
        log.error(e.getMessage(), e);
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .contentType(MediaType.APPLICATION_JSON)
                .body(new ErrorResponse("Write error: " + e.getMessage()));
    }
}
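For completeness, an alternative that avoids the stream-reset problem altogether (a sketch of my own, not the deployed code; it relies on the standard Spring MultipartFile.transferTo and the File-based PutObjectRequest constructor): spool the multipart body to a temporary file and upload the File. A File-backed request is replayable, so the SDK can retry without any read-limit tuning, and the content length is known up front:

    File tmp = File.createTempFile("upload-", ".tmp");
    try {
        file.transferTo(tmp); // spool the multipart body to disk
        PutObjectRequest request = new PutObjectRequest(bucketName, filePath, tmp)
                .withMetadata(meta); // same metadata as the stream-based upload
        awsConfig.getAwsClient().putObject(request);
    } finally {
        if (!tmp.delete()) {
            tmp.deleteOnExit(); // simplified cleanup
        }
    }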
AwsConfig code:
public AmazonS3 getAwsClient() {
    String str = s3endpoint;
    String[] arr = str.split("://");
    String endpoint;
    Protocol protocol;
    if (arr.length == 1) {
        protocol = Protocol.HTTP;
        endpoint = str;
    } else {
        protocol = (arr[0].equalsIgnoreCase(Protocol.HTTPS.name())) ? Protocol.HTTPS : Protocol.HTTP;
        endpoint = arr[1];
    }
    ClientConfiguration clientConfig = new ClientConfiguration();
    clientConfig.setProtocol(protocol);
    clientConfig.setSocketTimeout(60000);
    if (endpoint.contains("10.241.34.61")) {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
                .withClientConfiguration(clientConfig)
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, Region.EU_Ireland.getFirstRegionId()))
                .withPathStyleAccessEnabled(true)
                .build();
    } else {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
                .withClientConfiguration(clientConfig)
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, Region.US_Standard.getFirstRegionId()))
                .withPathStyleAccessEnabled(true)
                .build();
    }
}
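(Aside: the two branches above differ only in the region id, so the builder could be collapsed; a behavior-equivalent sketch:)

    String regionId = endpoint.contains("10.241.34.61")
            ? Region.EU_Ireland.getFirstRegionId()
            : Region.US_Standard.getFirstRegionId();
    return AmazonS3ClientBuilder.standard()
            .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
            .withClientConfiguration(clientConfig)
            .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, regionId))
            .withPathStyleAccessEnabled(true)
            .build();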
The AWS configs are fully identical in the local and the remote application.
2 Answers
The problem was in the network: the nginx in front of the remote environment was causing timeouts. DevOps recreated the application's k8s pods, and nginx stopped closing the connection between my app and the remote storage.
You have several issues at play here. First, you are using v1 of the AWS SDK for Java. This is not best practice, and you should update to the AWS SDK for Java v2 ASAP. See:
https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-aws-sdk-for-java-v1-x-on-december-31-2025/
Now, when using v2 to upload larger files, the best practice is to use the Transfer Manager API, i.e. software.amazon.awssdk.transfer.s3.S3TransferManager. The Amazon S3 Transfer Manager can certainly handle larger files.
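Here is a minimal sketch of such an upload (assuming the software.amazon.awssdk:s3-transfer-manager dependency is on the classpath; the bucket, key, and file path are placeholders):

import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedFileUpload;
import software.amazon.awssdk.transfer.s3.model.FileUpload;
import software.amazon.awssdk.transfer.s3.model.UploadFileRequest;
import java.nio.file.Paths;

public class TransferManagerUploadSketch {
    public static void main(String[] args) {
        // The Transfer Manager switches to multipart uploads for large objects automatically
        try (S3TransferManager transferManager = S3TransferManager.create()) {
            UploadFileRequest uploadFileRequest = UploadFileRequest.builder()
                    .putObjectRequest(req -> req.bucket("myBucket").key("myKey"))
                    .source(Paths.get("/path/to/myFile"))
                    .build();

            FileUpload fileUpload = transferManager.uploadFile(uploadFileRequest);

            // Block until the upload completes (or rethrows the failure)
            CompletedFileUpload result = fileUpload.completionFuture().join();
            System.out.println("ETag: " + result.response().eTag());
        }
    }
}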
See an example of this API in the AWS code examples GitHub repository:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/UploadFile.java
It is all explained in the AWS SDK for Java v2 Developer Guide:
https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html