
I’m trying to upload a file to a blob container via HTTP.

The function receives the file from the request like this:

public class UploadFileFunction
{
    // Own wrapper around BlobContainerClient
    private readonly IBlobFileStorageClient _blobFileStorageClient;

    public UploadFileFunction(IBlobFileStorageClient blobFileStorageClient)
    {
        _blobFileStorageClient = blobFileStorageClient;
    }

    [FunctionName("UploadFile")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "equipments/{route}")] // route templates must not start with "/"
        HttpRequest request,
        [FromRoute] string route)
    {
        IFormFile file = request.Form.Files["File"];
        if (file is null)
        {
            return new BadRequestResult();
        }

        string fileNamePath = $"{route}_{request.Query["fileName"]}_{request.Query["fileType"]}";
        BlobClient blob = _blobFileStorageClient.Container.GetBlobClient(fileNamePath);

        try
        {
            await blob.UploadAsync(file.OpenReadStream(), new BlobHttpHeaders { ContentType = file.ContentType });
        }
        catch (Exception)
        {
            return new ConflictResult();
        }

        return new OkResult();
    }
}

Then I make the request with the file (screenshot of the request omitted).

On UploadAsync, the whole file stream is buffered in process memory (memory profiler screenshot omitted).

Is there a way to upload directly to the blob without buffering the whole file in process memory?
Thank you in advance.

2 Answers


  1. That’s the default way UploadAsync works, and it is fine for small files. I ran into an out-of-memory issue with large files; the solution here is to use an append blob and AppendBlockAsync.

    You will need to create the blob as an append blob so you can keep appending to the end of it. The basic gist is:

    1. Create an append blob
    2. Go through the existing file and grab chunks of x MB (say 2 MB) at a time
    3. Append these chunks to the append blob until the end of file

    Pseudocode, something like below:

    var appendBlobClient = _blobFileStorageClient.Container.GetAppendBlobClient(fileNamePath);
    await appendBlobClient.CreateIfNotExistsAsync();

    int maxBlockSize = appendBlobClient.AppendBlobMaxAppendBlockBytes;

    using (Stream input = file.OpenReadStream())
    {
        int bytesRead;
        var buffer = new byte[maxBlockSize];
        while ((bytesRead = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // Wrap only the bytes read this iteration; no extra copy is needed.
            using var block = new MemoryStream(buffer, 0, bytesRead, writable: false);
            await appendBlobClient.AppendBlockAsync(block);
        }
    }
    
  2. The best way to avoid this is to not upload the file via your own HTTP endpoint at all. Asking how to keep the uploaded data out of process memory while still receiving it through your own HTTP endpoint makes no sense.

    Simply use the Azure Blob Storage REST API to upload the file directly to Blob Storage. Your own HTTP endpoint only needs to issue a shared access signature (SAS) token for the upload, and the client can then upload the file directly to Blob Storage.
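
    A minimal sketch of issuing such a SAS with the Azure.Storage.Blobs SDK (assuming the underlying client was constructed with a StorageSharedKeyCredential so it can sign; the 15-minute expiry and names here are illustrative):

    var blobClient = _blobFileStorageClient.Container.GetBlobClient(fileNamePath);

    var sasBuilder = new BlobSasBuilder
    {
        BlobContainerName = blobClient.BlobContainerName,
        BlobName = blobClient.Name,
        Resource = "b", // "b" = blob
        ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)
    };
    sasBuilder.SetPermissions(BlobSasPermissions.Create | BlobSasPermissions.Write);

    // Return this URI to the client; it can PUT the file bytes straight to storage.
    Uri sasUri = blobClient.GenerateSasUri(sasBuilder);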

    This pattern should be used for file uploads unless you have a very good reason not to. Your function is only invoked after the Functions runtime has finished reading the HTTP request, so the trigger’s HttpRequest object (including the file) is already allocated in process memory by the time it is passed to your code.

    I also suggest using block blobs if you want to upload in multiple stages.
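
    For the multi-stage route, a hedged sketch using StageBlockAsync/CommitBlockListAsync from Azure.Storage.Blobs.Specialized (the 4 MB block size and variable names are illustrative, not prescriptive):

    var blockBlobClient = _blobFileStorageClient.Container.GetBlockBlobClient(fileNamePath);
    var blockIds = new List<string>();
    var buffer = new byte[4 * 1024 * 1024]; // 4 MB per block; block blobs allow much larger
    int bytesRead, index = 0;

    using (Stream input = file.OpenReadStream())
    {
        while ((bytesRead = await input.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // Block IDs must be Base64 strings of equal length within a blob.
            string blockId = Convert.ToBase64String(BitConverter.GetBytes(index++));
            using var block = new MemoryStream(buffer, 0, bytesRead, writable: false);
            await blockBlobClient.StageBlockAsync(blockId, block);
            blockIds.Add(blockId);
        }
    }

    // Nothing becomes visible until the block list is committed.
    await blockBlobClient.CommitBlockListAsync(blockIds);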
