audio passthrough

ciphered1
2014-05-26
2014-05-28
  • ciphered1

    ciphered1 - 2014-05-26

    Hi Peter,

    I would like to feed an audio source into Snowmix and get it back out of the Snowmix mixer without altering anything, just passing the audio through alongside the mixed video coming out of Snowmix.
    What configuration is required in Snowmix to pass the audio channel through?

    I understand that I need 2 feeds as input and 2 feeds as output, each dedicated to audio or video.

    Thanks

     
    • Peter Maersk-Moller

      Hi Ciphered.

      Why send audio into Snowmix in the first place if you do not plan to alter it? If you don't need to alter the audio, then bypass Snowmix using some GStreamer pipelines.

      If you insist on sending audio through Snowmix, you need to define an audio feed and an audio sink, and optionally define an audio mixer. You need to set the format, channels and rate for these, and connect or set the source for the sink, and for the mixer if defined. You also need to start the mixer.

      To do anything with audio in Snowmix, you need to read and understand this page: https://sourceforge.net/p/snowmix/wiki/Audio/

       
  • ciphered1

    ciphered1 - 2014-05-26

    Thanks Peter, and excuse my bad English, as I find it hard to express myself.

    OK, so now from GStreamer I would go like this?

    V4L2-> Gstreamer (Video)->>Snowmix->Gstreamer (Mux'ed)
    OSSSRC (Audio)->>>>>>>>>>>Gstreamer (Mux'ed)

    Will the audio and video be synchronized this way? I mean, the video has the overhead of going into Snowmix to be mixed and then coming back out of Snowmix; wouldn't that introduce some delay relative to the audio that is muxed with the video coming out of Snowmix?

    Please note that the audio and video sources are both live feeds from a camera.

    Thank you

     
    Last edit: ciphered1 2014-05-26
    • Peter Maersk-Moller

      No problem. I'll try to answer what I understand :-)

      Your audio and video will not automatically be synchronized this way, as Snowmix introduces a delay of one or two frame periods for the video, depending on the configuration of Snowmix and of your pipelines.

      What is your source of audio and video?

      Assuming your source is a muxed audio and video stream, you would have something like this:

      gst-launch-1.0 udpsrc caps='...' ! decodebin name=decoder !\
          videoconvert ! $MIXERFORMAT ! shmsink ... decoder. !\
          'audio/x-raw, ....' ! tcpserversink host=127.0.0.1 port=9000
      
      gst-launch-1.0 shmsrc ... ! $MIXERFORMAT ! queue ! x264enc !\
          h264parse ! queue ! YOURMUXERCHOICE name=muxer ! queue !\
          tcpserversink host=0.0.0.0 port=$SERVER_PORT sync-method=2 recover-policy=keyframe \
          tcpclientsrc host=127.0.0.1 port=9000 !\
          'audio/x-raw, ....' !\
          queue min-threshold-time=80000000 !\
          faac bitrate=128000 !\
          aacparse ! queue ! muxer.
      

      You have to choose YOURMUXERCHOICE, MIXERFORMAT, SERVER_PORT and the audio format. I have assumed a frame period of 40 ms when determining the min-threshold-time value. You may need to experiment with the settings for that queue to make it work.
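
      For what it is worth, the 80000000 figure is just the assumed extra Snowmix video latency expressed in nanoseconds. A rough sketch of the arithmetic, assuming roughly two frame periods of added delay as described above (the exact value still has to be tuned by experiment):

          # queue min-threshold-time is specified in nanoseconds.
          # Assumption: delay the audio by about the extra latency Snowmix adds to the video,
          # i.e. roughly two frame periods.
          #   25 fps -> 40 ms frame period  -> 2 x 40 ms = 80 ms   -> min-threshold-time=80000000
          #   30 fps -> ~33 ms frame period -> 2 x 33 ms ~= 66 ms  -> min-threshold-time=66000000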

      You can get inspiration from the scripts input2feed and av_output2tcpserver.

      The downside is that you have to learn to set up GStreamer pipelines correctly, but you get most of it here.

      Regards
      Peter

       
      Last edit: Peter Maersk-Moller 2014-05-26
  • ciphered1

    ciphered1 - 2014-05-26

    My source is a camera with a built-in mic. The video is captured using v4l2 and the audio using an osssrc; both are raw and are muxed in GStreamer. So it goes like:
    V4l2->GST->H264->FLAC
    OOSSRC->GST-AAC->FLAC

    So mainly I want to have the mixed video, as well as the audio, encoded and muxed together.

    Thank you

     
    • Peter Maersk-Moller

      Are the v4l2src and the osssrc on the same machine as Snowmix? If not, how do you get them from the source machine to the Snowmix machine?

      If they are on the same machine as Snowmix, you simply skip the separate step of getting the audio from the osssrc, and instead fetch the audio from the osssrc when muxing it together with the output of Snowmix, of course adding a delay with the queue as described in the previous mail.

      Got it?

      P

       
    • Peter Maersk-Moller

      Not sure what 'GST-AAC' is?

      Please note that not all muxers support FLAC. Why not use AAC or MP3?

      P

       
    • Peter Maersk-Moller

      Not sure what 'H264->FLAC' is?

      Assuming the sources are on the Snowmix machine, what you could have is something like:

      gst-launch v4l2src ... ! videoconvert ! shmsink
      snowmix
      gst-launch shmsrc ! queue ! x264enc ! h264parse ! queue ! YOURMUXER name=muxer ! tcpserversink osssrc ! queue min-threshold-time=.. ! AUDIOENCODER ! queue ! muxer.
      

      Simple?

       
  • ciphered1

    ciphered1 - 2014-05-26

    The audio and video come over a network; I am using udpsrc and udpsink to get the video, and I now want to get the audio in the same way.
    I think this might work as you explained? Is this the best way to do it? I don't think a two-frame difference between audio and video would be noticed, right? Thank you a lot, Peter.

    From networkcam:
    v4l2src ! x264enc ! udpsink(snowmixip port1)
    osssrc ! aacenc ! udpsink(snowmixip port1)

    Snowmix side:
    udpsrc caps='...' ! decodebin name=decoder !\
        videoconvert ! $MIXERFORMAT ! shmsink ... decoder. !\
        'audio/x-raw, ....' ! tcpserversink host=127.0.0.1 port=9000

    shmsrc ... ! $MIXERFORMAT ! queue ! x264enc !\
        h264parse ! queue ! YOURMUXERCHOICE name=muxer ! queue !\
        tcpserversink host=0.0.0.0 port=$SERVER_PORT sync-method=2 recover-policy=keyframe \
        tcpclientsrc host=127.0.0.1 port=9000 !\
        'audio/x-raw, ....' !\
        queue min-threshold-time=80000000 !\
        faac bitrate=128000 !\
        aacparse ! queue ! muxer.
    muxer. ! filesink ...

     
    • Peter Maersk-Moller

      Forgot rtpmp4adepay. It's in there now.

       
  • Peter Maersk-Moller

    Almost.

    On your source PC:

    gst-launch-1.0 v4l2src ... ! \
        'video/x-raw,width=...,height=...,framerate=25/1' !\
        queue ! x264enc ..... ! queue ! rtph264pay ! udpsink port=9000 ....
    gst-launch-1.0 osssrc do-timestamp=true ... ! \
        'audio/x-raw, ....' !\
        aacenc .... ! aacparse ! rtpmp4apay ! udpsink port=9002 ...
    

    On Snowmix PC:

    gst-launch-1.0 udpsrc caps=... port=9000 ! \
        decodebin ! videoconvert ! $MIXERFORMAT ! shmsink ...
    
    gst-launch-1.0 shmsrc ... ! $MIXERFORMAT ! videoconvert !\
        queue ! x264enc ... ! queue ! \
        YOURMUXERCHOICE name=muxer ! queue !\
        tcpserversink host=0.0.0.0 port=$SERVER_PORT sync-method=2 recover-policy=keyframe \
        udpsrc port=9002 caps=... ! rtpmp4adepay ! \
        queue min-threshold-time=.... ! muxer.
    

    Got it?

    It is important to get the '!' and the '\' right.
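
    About the caps=... placeholders (my note, not Peter's): the receiving udpsrc needs RTP caps matching the payloader on the sending side. A typical H.264 video caps string might look like the sketch below (an assumption to adapt to your stream); for the rtpmp4apay audio branch the caps include a stream-specific config field, so the usual trick is to run the sender pipeline with -v and copy the application/x-rtp caps it prints, verbatim.

        # Hypothetical video caps for the udpsrc on port 9000 (adjust payload etc. to your sender):
        caps='application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96'
        # For the audio udpsrc on port 9002, copy the caps printed by the sender run with -v,
        # typically 'application/x-rtp, media=audio, encoding-name=MP4A-LATM, ...' including config=.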

    P

     
    Last edit: Peter Maersk-Moller 2014-05-26
  • ciphered1

    ciphered1 - 2014-05-26

    Got it :) I will try it out. I hope the delay between audio and video is not noticeable.

     
    Last edit: ciphered1 2014-05-26
    • Peter Maersk-Moller

      If it is, you can add delay with the queue and its threshold value before muxing.

      You may or may not need a 'do-timestamp=true' here and there.

       
  • Ciphered1

    Ciphered1 - 2014-05-27

    Hi Peter,

    Should I run the video and audio pipelines at the same time on the source PC? Does it matter for keeping them synchronized?

    Thanks

     
    • Peter Maersk-Moller

      It should not matter if pipelines are done correctly.

       
  • Ciphered1

    Ciphered1 - 2014-05-28

    Hi Peter,

    I noticed that when the received audio and video are fed directly into autovideosink/autoaudiosink (without going through Snowmix), they are perfectly in sync. When the video goes through Snowmix and then into autovideosink, and is played along with the audio that does not go through Snowmix, the audio is delayed by about 1 second.

    Can you please advise how to solve this?

    Thanks

     
    Last edit: Ciphered1 2014-05-28
    • Peter Maersk-Moller

      Hi Ciphered.

      As always, post your ini file and your scripts with all the pipelines if you want me to say anything qualified about your approach.

      When you produce your audio and video stream at the source, you need to grab the audio and video in one process, timestamp them correctly, encode them, mux them and stream them. If you use two processes, it is not guaranteed that they will be in sync, and a process at the other end may not know how to sync them if they do not share the same timestamp base. Depending on the mux and the streaming protocol, the timestamps might be relative, and how do you sync two streams, audio and video, if the timestamps are relative? That said, it is possible to make them appear almost in sync if done correctly. And that is the keyword: correctly.
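
      As an illustration of the one-process approach, a minimal sketch of a combined sender might look like the following. It assumes a 25 fps camera, OSS audio, the faac encoder mentioned earlier in this thread, mpegtsmux as the muxer and port 5000; all of these choices, and the caps values, are assumptions to adapt, not something Peter specified.

          # One process grabs, timestamps, encodes, muxes and streams both audio and video,
          # so both branches share the same clock and timestamp base.
          gst-launch-1.0 -v mpegtsmux name=mux ! queue ! tcpserversink host=0.0.0.0 port=5000 \
              v4l2src do-timestamp=true ! 'video/x-raw, width=1280, height=720, framerate=25/1' ! \
                  queue ! videoconvert ! x264enc tune=zerolatency ! h264parse ! queue ! mux. \
              osssrc do-timestamp=true ! 'audio/x-raw, rate=44100, channels=2' ! \
                  queue ! audioconvert ! faac bitrate=128000 ! aacparse ! queue ! mux.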

      Now assume your muxed stream arrives at the Snowmix PC with the timestamps for audio and video in sync, relative or not. Then you can play them and they will appear in sync, but only if done correctly, with the sync flag set and timestamps enabled. Otherwise they may appear to be in sync but are not, and you may simply not discover it to begin with.

      So the way you have to see it is this: Snowmix is an application you have to play your audio and video stream to, in sync, just like GStreamer's video sink and audio sink. But here is a difference: Snowmix will read your video frames and your audio frames as fast as possible. In that way it differs from an audio device, which will not play faster than what it is configured for. So make sure you have set the sync flag and have enabled timestamping.
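
      Reading 'the sync flag' as the GStreamer sink's sync property, a hypothetical fragment of a feed pipeline might look like this (the socket path and the elided elements are placeholders of mine, not something from this thread):

          # sync=true makes the pipeline pace buffers by their timestamps instead of
          # pushing them into Snowmix as fast as possible; do-timestamp=true on the
          # live sources gives the buffers timestamps in the first place.
          ... v4l2src do-timestamp=true ! ... ! videoconvert ! $MIXERFORMAT ! \
              shmsink socket-path=/tmp/feed1 wait-for-connection=1 sync=true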

      Now for Snowmix. Snowmix reads video frames and audio samples once every frame period. If you have 25 fps, that is every 40 ms.

      Now, when you play or send a video frame to Snowmix, you may be just a nanosecond too late, so to begin with Snowmix will introduce between 0 and 40 ms of delay; 0 ms is the best case and 40 ms the worst. The same applies to audio samples.

      Now, when Snowmix reads a video frame, it is mixed (if used) and queued for output, but the mixed frame will not be sent out until there is at least one extra frame. This can be configured to a value larger than one if needed. So Snowmix adds at least one extra frame period of delay. For 25 fps, we now have between 40 and 80 ms of delay. When a frame is sent out, the receiving shmsrc pipeline should timestamp it.

      Now for the audio. When your input pipeline sends audio data to Snowmix in sync, Snowmix will, in the same frame period, read the samples on its audio feed, mix the data in its audio mixer and send the data out on its audio sink. That way no extra delay is introduced, and the audio will as a minimum be delayed between 0 and 40 ms. So audio sent out of Snowmix is ahead of the video if they were in sync when entering Snowmix. However, Snowmix has ways to add delay to audio. You can add a one-time delay to the audio feed that is applied when a connection to the audio feed is made AND the samples are received for the first time. That delay may get consumed if your audio source generates audio a little more slowly than Snowmix expects. The other way is to tell the Snowmix audio mixer that its source must have a minimum delay. If needed, silence is added to compensate for the missing delay. That is useful to ensure any minimum delay you may need, and it also helps if the audio source generates samples a little too slowly. If the source generates them a little too fast, you can set a maximum delay too, to compensate.
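
      To put rough numbers on the two paragraphs above (my own arithmetic, assuming 25 fps and the default single extra output frame):

          #   video: 0-40 ms waiting for the next mixer cycle + 40 ms output buffering = 40-80 ms
          #   audio: 0-40 ms waiting for the next mixer cycle                          =  0-40 ms
          # So the audio typically leaves Snowmix up to about one frame period ahead of the video,
          # which is what the extra audio delay (the queue min-threshold-time, or the Snowmix
          # audio feed/mixer delay settings described above) has to compensate for.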

      So, now to you: if your audio and video are not in sync, it is because you have configured them that way. So back to the pipelines and the ini file.

      Regards
      Peter

       

