
In a Remix app, I have a form I’m using to select & upload an image, just a simple <input name="image" type="file" />

The form POSTs to a Remix handler, and in the handler (see code below) I’m trying to upload this image to Amazon S3 using the v3 SDK. I’ve tried with both the PutObjectCommand from client-s3 and with Upload from lib-storage – same result…

The upload "works" – It successfully creates the file in the bucket. I can see the filesize matches exactly what I expect when I upload the same file directly into the S3 bucket via the web UI.

However, the file doesn’t open as an image – when I view the uploaded file from the S3 bucket, the browser renders the standard "broken image" icon (and I hit a similar issue if I download the image and try to open it in Preview on macOS). I have no other information about what’s wrong: the file seems to be either corrupted, or created in a way that isn’t recognized as an image.


In the handler, I receive the file data as an AsyncIterable&lt;Uint8Array&gt;, and the only real processing I do is to convert that into a single Uint8Array for the Upload (I have already tried setting the body to a Buffer and to a Blob – same result).

Is there something wrong with my conversion to a Uint8Array, or is there a more correct way to convert the data for the upload, maybe a different type?

Or is there something else I need to configure in how I’m setting up the upload to S3?

async ({ name, contentType, data, filename }) => {
  const arrayOfUInt8Array: Uint8Array[] = [];

  let length = 0;
  for await (const x of data) {
    arrayOfUInt8Array.push(x);
    length += x.length;
  }

  const uInt8Array = new Uint8Array(length);

  for (const x of arrayOfUInt8Array) {
    uInt8Array.set(x);
  }

  // Tried creating a buffer from the Uint8Array...
  const buff = Buffer.from(uInt8Array);
  // Tried creating a blob from the Uint8Array...
  const blob = new Blob([uInt8Array], { type: contentType });
  const uploadCommand = new Upload({
    client: s3Client,
    params: {
      Bucket: s3Bucket,
      Key: filename,
      Body: uInt8Array,
      ContentType: contentType,
    }
  });

  await uploadCommand.done();

  return '';
},

2 Answers


  1. Chosen as BEST ANSWER

    As noted by @Botje in a comment, the issue was in the construction of the Uint8Array: every chunk was written at offset 0, so each chunk overwrote the previous one at the beginning of the array and the rest of the array stayed empty.

    So instead of:

      for (const x of arrayOfUInt8Array) {
        uInt8Array.set(x);
      }
    

    I needed:

      let currentIndex = 0;
      for (const x of arrayOfUInt8Array) {
        uInt8Array.set(x, currentIndex);
        currentIndex += x.length;
      }
    

  2. I usually use a two-part strategy to handle this:

    First, parse the upload as a NodeOnDiskFile using Remix’s built-in functionality:

    import {
      unstable_composeUploadHandlers,
      unstable_createFileUploadHandler,
      unstable_createMemoryUploadHandler,
      unstable_parseMultipartFormData,
    } from "@remix-run/node";

    const uploadHandler = unstable_composeUploadHandlers(
      unstable_createFileUploadHandler({
        maxPartSize: 10_000_000,
        file: ({ filename }) => filename,
      }),
      // parse everything else into memory
      unstable_createMemoryUploadHandler()
    );

    const formData = await unstable_parseMultipartFormData(
      request,
      uploadHandler
    );
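
    The loop below needs the parsed files pulled out of formData. One way to gather them – a sketch, assuming every file input went through the file upload handler, with NodeOnDiskFile imported from @remix-run/node as in the module further down:

    // Hypothetical glue: keep only the entries the upload handler wrote to disk.
    const files = [...formData.values()].filter(
      (value): value is NodeOnDiskFile => value instanceof NodeOnDiskFile
    );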
    

    Then I upload the files using a presigned URL:

    for (const file of files) {
      const uploadedURL = await uploadFileWithPresignedUrl(file);
      dataWithAWSUrls.uploadedFiles.push(uploadedURL);
    }
    

    Here’s how the presignedUrl functions are structured:

    import {
      GetObjectCommand,
      PutObjectCommand,
      S3Client,
    } from "@aws-sdk/client-s3";
    import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
    import { NodeOnDiskFile } from "@remix-run/node";
    import axios from "axios";
    
    import slugify from "slugify";
    import { slugifyOptions } from "~/constants/constants";
    import { env } from "~/env";
    
    const s3Client = new S3Client({
      region: env.AWS_REGION,
      credentials: {
        accessKeyId: env.AWS_ACCESS_KEY_ID,
        secretAccessKey: env.AWS_SECRET_ACCESS_KEY,
      },
    });
    
    async function streamToBuffer(
      readable: ReadableStream<Uint8Array>
    ): Promise<Buffer> {
      const data: Uint8Array[] = [];
      const reader = readable.getReader();
    
      try {
        while (true) {
          const { value, done } = await reader.read();
          if (done) break;
          data.push(value);
        }
      } finally {
        reader.releaseLock();
      }
    
      return Buffer.concat(data.map((chunk) => Buffer.from(chunk)));
    }
    
    export async function uploadFileWithPresignedUrl(file: File | NodeOnDiskFile) {
      const buffer = await streamToBuffer(file.stream());
    
      const slugifiedFileName = slugify(file.name, slugifyOptions);
    
    const command = new PutObjectCommand({
        Bucket: env.AWS_BUCKET_NAME,
        Key: slugifiedFileName,
        Body: buffer,
        ContentType: file.type,
      });
    
      const url = await getSignedUrl(s3Client, command, { expiresIn: 3600 });
    
      // Upload file with presigned url
      await axios.put(url, buffer, {
        headers: {
          "Content-Type": file.type,
          "Content-Length": buffer.length,
        },
      });
    
      const publicUrl = `https://${env.AWS_BUCKET_NAME}.s3.amazonaws.com/${slugifiedFileName}`;
    
      // Return the url of the uploaded file
      return publicUrl;
    }
    
    export async function getFileWithPresignedUrl(key: string) {
      const command = new GetObjectCommand({
        Bucket: env.AWS_BUCKET_NAME,
        Key: key,
      });
      const signedUrl = await getSignedUrl(s3Client, command, { expiresIn: 3600 });
      return signedUrl;
    }
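
    And here’s how getFileWithPresignedUrl might be wired into a route – a sketch, where the "~/utils/s3.server" import path and the $key route param are placeholders rather than part of the code above:

    import { json, type LoaderFunctionArgs } from "@remix-run/node";
    // Placeholder path – wherever the functions above live in your app.
    import { getFileWithPresignedUrl } from "~/utils/s3.server";

    export async function loader({ params }: LoaderFunctionArgs) {
      // Hand the browser a short-lived signed URL instead of proxying the object.
      const url = await getFileWithPresignedUrl(params.key!);
      return json({ url });
    }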
    