I’m working on a project where I need to apply a TFLite model to each frame of the local user’s video during a video call using the Agora Flutter SDK.
I came across the VideoFrameObserver in the documentation, which seems to be a promising tool for this task. Here’s the code snippet I found:
onCaptureVideoFrame: (VideoSourceType sourceType, VideoFrame videoFrame) async {
Uint8List yData = videoFrame.yBuffer!;
Uint8List uData = videoFrame.uBuffer!;
Uint8List vData = videoFrame.vBuffer!;
int height = videoFrame.height!;
int width = videoFrame.width!;
// how to create JPEG image
// ???
},
I’ve been trying to figure out how to convert the videoFrame to a JPEG image, but I haven’t been successful so far. I’ve also checked this issue, but the solution provided there didn’t work for me.
My question is: How can I create a JPEG image from yBuffer, uBuffer, and vBuffer?
2 Answers
Solution provided by brendan-duncan, the owner of the image package, and it’s working fine!
You can use the image package to handle the conversion. Here’s how to do this:

1. Add the image package to your pubspec.yaml file.
2. In the onCaptureVideoFrame callback, convert the yBuffer, uBuffer, and vBuffer into a JPEG image: