import { GetObjectCommand } from '@aws-sdk/client-s3';
import type { Readable } from 'stream';

// `s3Client` and `bucket` are configured elsewhere in the module.
export async function* initiateObjectStream(
  Key: string,
  start: number,
  end: number,
): AsyncGenerator<Buffer, void, unknown> {
  // Request only the desired byte range of the object.
  const streamRange = `bytes=${start}-${end}`;

  const getObjectCommand = new GetObjectCommand({
    Bucket: bucket,
    Key,
    Range: streamRange,
  });

  // In the Node.js runtime, Body is a Readable stream.
  const { Body } = await s3Client.send(getObjectCommand);
  const chunks = Body as Readable;

  for await (const chunk of chunks) {
    yield chunk;
  }
}

I’m using this function to fetch an MP4 video file from S3 that is later streamed to an HTML video player.
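
For context, the generator is consumed in a route handler roughly like this (a simplified sketch; the Express route and the Range header parsing here are assumptions, not my exact code):

// Simplified sketch of the route that feeds the <video> element.
import express from 'express';

const app = express();

app.get('/video/:key', async (req, res) => {
  // Parse the browser's Range header (e.g. "bytes=1000-").
  const range = req.headers.range ?? 'bytes=0-';
  const [startStr, endStr] = range.replace('bytes=', '').split('-');
  const start = Number(startStr) || 0;
  const end = endStr ? Number(endStr) : start + 1_000_000; // ~1 MB window

  res.status(206).set({
    'Content-Type': 'video/mp4',
    'Accept-Ranges': 'bytes',
    'Content-Range': `bytes ${start}-${end}/*`,
  });

  for await (const chunk of initiateObjectStream(req.params.key, start, end)) {
    res.write(chunk);
  }
  res.end();
});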

When start = 0 is passed to the function, everything works, because the chunks of the video file are streamed from the beginning, and the first chunks contain the metadata that tells the HTML video player that this is a video and that it should be displayed.

When I want to display the video not from the beginning but from the middle, the start value is different, and from AWS I receive a fragment of the file that no longer contains the first chunks with the metadata. Because of this, the HTML video player closes the stream after receiving the first 3 chunks, in which it expects to find the metadata.

How can I fetch the first chunks with the metadata of my file from AWS and glue them together with the chunks of the desired fragment?

Or how can I create the first metadata chunks myself? And what values should I specify in the metadata?
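
For reference, this is roughly how I imagine locating the ftyp/moov boxes with small ranged GETs (every top-level MP4 box starts with a 4-byte big-endian size followed by a 4-byte type); I realize that even with moov in hand, gluing it onto an arbitrary byte range is probably not enough, because the sample offsets inside moov point to absolute positions in the original file:

// Sketch only: walk the top-level MP4 boxes to find where ftyp/moov/mdat
// start and how big they are. `s3Client` and `bucket` are the same as above;
// `fileSize` would come from a HeadObject call (ContentLength).
import { GetObjectCommand } from '@aws-sdk/client-s3';
import type { Readable } from 'stream';

async function readBytes(Key: string, start: number, end: number): Promise<Buffer> {
  const { Body } = await s3Client.send(
    new GetObjectCommand({ Bucket: bucket, Key, Range: `bytes=${start}-${end}` }),
  );
  const parts: Buffer[] = [];
  for await (const part of Body as Readable) parts.push(part);
  return Buffer.concat(parts);
}

async function listTopLevelBoxes(Key: string, fileSize: number) {
  const boxes: { type: string; offset: number; size: number }[] = [];
  let offset = 0;
  while (offset + 8 <= fileSize) {
    // Each top-level box starts with an 8-byte header: size (uint32 BE) + type.
    const header = await readBytes(Key, offset, offset + 7);
    const size = header.readUInt32BE(0);
    const type = header.toString('ascii', 4, 8);
    boxes.push({ type, offset, size });
    if (size < 8) break; // size 0 / 1 (64-bit boxes) not handled in this sketch
    offset += size;
  }
  return boxes; // e.g. [{ type: 'ftyp', ... }, { type: 'moov', ... }, { type: 'mdat', ... }]
}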

I tried to add the metadata myself as extra chunks, but it didn’t help:

import { GetObjectCommand } from '@aws-sdk/client-s3';
import { PassThrough } from 'stream';
import type { Readable } from 'stream';

export async function* initiateObjectStream(
  Key: string,
  start: number,
  end: number,
): AsyncGenerator<Buffer, void, unknown> {
  const streamRange = `bytes=${start}-${end}`;

  const getObjectCommand = new GetObjectCommand({
    Bucket: bucket,
    Key,
    Range: streamRange,
  });

  const { Body } = await s3Client.send(getObjectCommand);
  const chunks = Body as Readable;

  const passThroughStream = new PassThrough();

  // Hand-built ftyp box: 4-byte big-endian size, then the box type and brands.
  const ftypChunk = Buffer.alloc(28);
  ftypChunk.writeUInt32BE(28, 0);
  ftypChunk.write('ftyp', 4);
  ftypChunk.write('mmp4', 8);
  ftypChunk.write('isom', 12);
  ftypChunk.write('iso2', 16);
  ftypChunk.write('mp41', 20);
  ftypChunk.write('mp42', 24);

  // Empty mdat box header.
  const mdatChunk1 = Buffer.alloc(8);
  mdatChunk1.writeUInt32BE(8, 0);
  mdatChunk1.write('mdat', 4);

  const mdatChunk2Size = 303739;
  const mdatChunk2 = Buffer.alloc(8 + mdatChunk2Size, 0x01); // I don't know what I need to put in this part
  mdatChunk2.writeUInt32BE(8 + mdatChunk2Size, 0);
  mdatChunk2.write('mdat', 4);

  const moovChunkSize = 6202;
  const moovChunk = Buffer.alloc(8 + moovChunkSize, 0x02); // I don't know what I need to put in this part
  moovChunk.writeUInt32BE(8 + moovChunkSize, 0);
  moovChunk.write('moov', 4);

  passThroughStream.write(ftypChunk);
  passThroughStream.write(mdatChunk1);
  passThroughStream.write(mdatChunk2);
  passThroughStream.write(moovChunk);
  passThroughStream.end();

  // Yield the hand-built "metadata" first...
  for await (const chunk of passThroughStream) {
    yield chunk;
  }

  // ...then the requested fragment from S3.
  for await (const chunk of chunks) {
    yield chunk;
  }
}

I also tried to use the fluent-ffmpeg library in the following way, to remux the chunks of the video fragment itself, but that didn’t help either:

import ffmpeg from 'fluent-ffmpeg';

// `chunks` is the Readable Body of the ranged GetObject response from above.
const ffmpegStream = ffmpeg()
  .input(chunks)
  .format('mp4')
  .addOutputOptions(
    '-movflags +frag_keyframe+separate_moof+omit_tfhd_offset+empty_moov',
  )
  .on('error', function (err) {
    console.log('An error occurred: ' + err.message);
  })
  .on('end', function () {
    console.log('Processing finished!');
  });

const ffstream = ffmpegStream.pipe().on('data', function (chunk) {
  console.log('ffmpeg just wrote ' + chunk.length + ' bytes');
});

2 Answers

  1. Amazon S3 does not modify video content. It stores videos. Use the AWS Elemental MediaConvert service to perform the video modifications. MediaConvert is a file-based video transcoding service that can convert media files from one format to another, resize, add overlays, and perform other video processing tasks.

    There is a related question here: AWS Elemental MediaConvert Video File trimming start and end duration
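
    As a rough illustration only (the endpoint, role ARN, bucket names and the whole output group below are placeholders you would need to fill in), a time-clipping job submitted with the AWS SDK v3 looks roughly like this:

    import { MediaConvertClient, CreateJobCommand } from '@aws-sdk/client-mediaconvert';

    // MediaConvert uses an account-specific endpoint (see DescribeEndpoints).
    const mc = new MediaConvertClient({
      region: 'us-east-1',
      endpoint: 'https://abcd1234.mediaconvert.us-east-1.amazonaws.com', // placeholder
    });

    await mc.send(
      new CreateJobCommand({
        Role: 'arn:aws:iam::123456789012:role/MediaConvertRole', // placeholder
        Settings: {
          Inputs: [
            {
              FileInput: 's3://my-bucket/source.mp4', // placeholder
              // Clip from 00:00:30 to 00:01:00 (timecodes are HH:MM:SS:FF).
              InputClippings: [
                { StartTimecode: '00:00:30:00', EndTimecode: '00:01:00:00' },
              ],
            },
          ],
          // A File Group output group that writes the trimmed MP4 back to S3
          // goes here; omitted for brevity.
          OutputGroups: [],
        },
      }),
    );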

  2. You have several options for playing a sub-clip from an asset:

    [a] For a VOD source, you could extract a sub-clip from the original asset into a new asset using AWS Elemental MediaConvert. The MediaConvert job format supports time clipping. Other open source video utilities can also do time clipping if you care to do the necessary integration & ongoing maintenance. MediaConvert is probably the least work to integrate.

    [b] For a live source such as the webinar you mentioned, you could package the live stream with AWS Elemental MediaPackage and simply request the timespan you want from the endpoint cache (DVR startover window) by passing start and end times on the playback URL (see the URL sketch after this list). This has several advantages: it’s instantaneous, it supports DRM, and no separate asset creation is required. The startover cache holds up to 14 days of content; during that time you can choose to harvest a timespan into a permanent VOD asset.

    [c] Some players accept a start offset and a play duration as parameters; ffplay supports this. If your target player has this support, you can pass the parameters to the player and have it present just the desired timespan.
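
    To illustrate option [b]: the window is requested purely with query parameters on the MediaPackage playback URL (the endpoint below is a placeholder; start/end accept ISO 8601 timestamps, and POSIX epoch seconds also work):

    // Build a startover/DVR playback URL for a MediaPackage endpoint.
    const endpoint =
      'https://example.mediapackage.us-east-1.amazonaws.com/out/v1/abc123/index.m3u8'; // placeholder

    function startoverUrl(start: Date, end: Date): string {
      const url = new URL(endpoint);
      url.searchParams.set('start', start.toISOString());
      url.searchParams.set('end', end.toISOString());
      return url.toString();
    }

    // e.g. replay one hour of the live event:
    console.log(
      startoverUrl(new Date('2024-01-01T15:00:00Z'), new Date('2024-01-01T16:00:00Z')),
    );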
