
I have an existing program that uses gst-launch-1.0 and passes it this pipeline:

-e udpsrc port=3003 buffer-size=200000 ! h264parse ! queue ! mux.video_0 alsasrc device=plughw:1,0 ! "audio/x-raw,channels=1,depth=16,width=16,rate=44100" ! voaacenc bitrate=128000 ! aacparse ! queue ! mux.audio_0 qtmux name=mux ! filesink location="$RECPATH/record-`date +%Y%m%d-%H%M%S`.mp4" sync=true

This takes the video, which is already H.264-encoded, from a UDP source, and the audio directly from the microphone. It works, but since the video and the audio are not encoded at the same point, the audio ends up slightly out of sync whenever the video stream has latency (due to higher quality settings).

So as a quick-fix I was thinking about adding a delay on the audio recording to compensate. I would calculate that delay by hand depending on the video quality.

Constraint: gst-launch-1.0 version 1.10.4 (on a Raspberry Pi, Debian Stretch). use-driver-timestamps doesn't seem to be available; I get the error 'WARNING: erroneous pipeline: no property "use-driver-timestamps" in element "alsasrc0"'.

So my question is: is there an easy way to add delay to the audio?

2 Answers


  1. The queue element has a min-threshold-time property, which lets you hold on to data for a given amount of time.

    https://gstreamer.freedesktop.org/documentation/coreelements/queue.html?gi-language=c#queue:min-threshold-time

    Alternatively, I found this, which might be useful for your pipeline: Gstreamer video streaming with delay
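
    As a rough, untested sketch of that idea, the delay would go on the queue in the audio branch of your existing pipeline; the 500000000 value (500 ms, since the property is in nanoseconds) is only a placeholder you would tune by hand:

    -e udpsrc port=3003 buffer-size=200000 ! h264parse ! queue ! mux.video_0 alsasrc device=plughw:1,0 ! "audio/x-raw,channels=1,depth=16,width=16,rate=44100" ! voaacenc bitrate=128000 ! aacparse ! queue min-threshold-time=500000000 ! mux.audio_0 qtmux name=mux ! filesink location="$RECPATH/record-`date +%Y%m%d-%H%M%S`.mp4" sync=true

    Keep the threshold below the queue's max-size-time (1 second by default), otherwise the queue fills up before the threshold can be reached.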

  2. Try ! autoaudiosink ts-offset=100000000

    ts-offset is documented under GstBaseSink.

    You can also experiment with latency compensation in your pipelines:

    https://gstreamer.freedesktop.org/documentation/additional/design/latency.html#latency-compensation
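
    A minimal sketch of that suggestion, as a separate test pipeline with placeholder values (ts-offset is in nanoseconds, so 100000000 is 100 ms), reusing the device from your question:

    gst-launch-1.0 -v alsasrc device=plughw:1,0 ! "audio/x-raw,channels=1,rate=44100" ! audioconvert ! autoaudiosink ts-offset=100000000

    Note that ts-offset shifts the sink's synchronisation against the clock rather than rewriting buffer timestamps, so this variant is mainly useful for monitoring the delay live.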
