Frame 13511 : CVideoOutput out of chm buffers. Setting m_got_no_free_buffers flag.

Evgeny
2014-03-17
2014-03-30
  • Evgeny

    Evgeny - 2014-03-17

    I often get this error from Snowmix:

    Frame 13511 : CVideoOutput out of chm buffers. Setting m_got_no_free_buffers flag.

    Apparently the script av_o2s cannot keep up with the stream rate and locks up.
    http://sagatov.net/temp/mixer/av_o2s

    (echo audio sink ctr isaudio 1 ; sleep 100000000000 & echo $! > /tmp/gstproc.pid ; wait ; rm /tmp/gstproc.pid) | \
      nc 127.0.0.1 9999 | \
    ( head -1
      gst-launch-1.0 shmsrc socket-path=$ctrsocket do-timestamp=true is-live=false ! \
      $VIDEOFORMAT          ! \
      tee name=tv               ! \
      multiqueue name=q1            ! \
      videocrop top=0 left=0 right=350 bottom=198   ! \
      shmsink socket-path=/tmp/video-control-pipe shm-size=40000000 wait-for-connection=0 sync=true   \
      tv.                   ! \
      multiqueue name=q2            ! \
      videoconvert              ! \
      autovideosink               \
      fdsrc fd=0 do-timestamp=true    ! \
      $AUDIOFORMAT          ! \
      tee name=ta ! q1.           \
      q1. ! shmsink socket-path=/tmp/audio-control-pipe shm-size=40000000 wait-for-connection=0 sync=true     \
      ta. ! q2.               \
      q2. ! autoaudiosink sync=false
      if [[ -e /tmp/gstproc.pid ]]; then kill `cat /tmp/gstproc.pid`; fi
    )
    

    See flow diagram in the attachment.

    Maybe you could recommend something to remedy the situation?

     
    Last edit: Evgeny 2014-03-17
    • Peter Maersk-Moller

      Hi E.

      When Snowmix runs out of shared memory for outputting frames, it will print the message "CVideoOutput out of cshm buffers" and it will suspend outputting more frames until at least half of the configured shm buffers are available again.

      It is very important that the script that reads mixed frames from Snowmix runs flawlessly and has sufficient CPU and memory bandwidth available. The script may:
      1) perhaps encode them and
      2) perhaps read audio and
      3) perhaps encode it and
      4) perhaps mux encoded audio and video

      In your case, the first step is to investigate whether you are running low on CPU or you just have poorly designed pipelines. So check your CPU usage; if it is low, then your problem is in the design of your pipelines.
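
      For a quick check you can watch per-process CPU while Snowmix and your output script are running. A minimal sketch (pidstat comes from the sysstat package; the process names are assumptions based on this setup):

      # overall load and the busiest processes
      top -b -n 1 | head -20
      # CPU per process for Snowmix and the GStreamer pipeline, sampled 5 times at 1 second intervals
      pidstat -u 1 5 -C 'snowmix|gst-launch'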

      Whether or not you have enough CPU, I strongly recommend that you use a simple and proven script like the one shown below until you have a better solution.

      AUDIOSRC='fdsrc fd=0 do-timestamp=true ! queue '
      VIDEOSRC='shmsrc socket-path=/tmp/mixer1 do-timestamp=true ! queue '
      AUDIOFORMAT='audio/x-raw, format=(string)S16LE, rate=(int)44100, channels=(int)2'
      AUDIOFORMATOUT='audio/mpeg,mpegversion=4, stream-format=raw'
      VIDEOFORMAT='video/x-raw, format=(string)BGRA, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, width=(int)1280, height=(int)720, framerate=(fraction)25/1'
      VIDEOFORMATOUT='video/x-h264, alignment=au, stream-format=byte-stream, profile=(string)main'
      
      ( echo 'audio sink ctr isaudio 1' ; sleep 100000000 ) | \
        /bin/nc 127.0.0.1 9999      | \
       (head -1
        /usr/bin/gst-launch-1.0 -v \
          $AUDIOSRC !\
          $AUDIOFORMAT !\
          audioparse !\
          audioconvert !\
          faac bitrate=128000 !\
          $AUDIOFORMATOUT !\
          aacparse !\
          queue !\
          muxer. $VIDEOSRC !\
          $VIDEOFORMAT !\
          videoconvert !\
          x264enc bitrate=3000 tune=zerolatency speed-preset=5 key-int-max=50 bframes=0 !\
          $VIDEOFORMATOUT !\
          h264parse !\
          queue !\
          matroskamux name=muxer streamable=true !\
          queue !\
          tcpserversink host=0.0.0.0 port=5010 sync-method=2 recover-policy=keyframe
      )
      

      Then you can read audio and video from port 5010 as many times as you like, and if one of the clients reading data from the port fails, it should not impact the other clients. One of the clients reading data can be a pipeline that reads the encoded and muxed data and saves it to disk.
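
      As an example, a client that simply saves the encoded and muxed stream to disk could look like this (the file name is only an example):

      gst-launch-1.0 tcpclientsrc host=127.0.0.1 port=5010 ! queue ! filesink location=/tmp/broadcast.mkv

      And a sketch of a local playback client, assuming avdec_h264 and avdec_aac from gst-libav are installed:

      gst-launch-1.0 tcpclientsrc host=127.0.0.1 port=5010 ! matroskademux name=d \
        d. ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink \
        d. ! queue ! aacparse ! avdec_aac ! audioconvert ! autoaudiosink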

      If you have too little CPU, change the speed-preset of the x264enc to use less CPU.
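
      For example, switching the encoder from the preset used above (speed-preset=5, i.e. "fast") to a faster one lowers the CPU cost at the expense of quality per bit. A sketch with otherwise unchanged settings:

      x264enc bitrate=3000 tune=zerolatency speed-preset=superfast key-int-max=50 bframes=0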

      If you insist on using tee, you have to ensure that all your pipelines are flowing flawlessly, and you have to drop frames on queues that are not flowing fast enough. Otherwise they will lock up the pipelines.

      Now you have introduced a tee on both video and audio to monitor locally. While that is fine and possible, you don't have to. You can use the Snowmix command monitor on to get a window with the video. This saves CPU and memory bandwidth. You can also create an extra audio sink and have a pipeline read the data and send it to your audio device. Or, if you use my script, you can have a local pipeline read data from port 5010 and play it locally on demand.
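
      As a sketch, the monitor command can be sent to the Snowmix control port in the same way the scripts here send the audio sink command (assuming the default control port 9999):

      echo 'monitor on' | nc 127.0.0.1 9999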

      Then there is the videocrop. Why do you want to videocrop? Why not do that in Snowmix and save both CPU and memory bandwidth? Having a videocrop between a shmsrc and a shmsink like yours removes some of the benefit of shm. And with tee and multiple levels of shmsrc and shmsink, you are not optimizing CPU usage and memory bandwidth.

      Regards
      Peter

       
    • Peter Maersk-Moller

      If you remove your tees, you can try adding drops to the queue right after the shmsrc that reads mixed frames from Snowmix. It's better to drop frames in a controlled fashion. But first try the things I wrote in the other post.
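
      A sketch of what that could look like, reusing the VIDEOSRC definition from the other post (the buffer limit is an assumption and may need tuning):

      VIDEOSRC='shmsrc socket-path=/tmp/mixer1 do-timestamp=true ! queue leaky=2 max-size-buffers=4 '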

      Peter

       
  • Evgeny

    Evgeny - 2014-03-17

    I have no problem with CPU, but I do with pipeline design.
    I tried to modify your pipeline to use a lossless video codec,
    for example avenc_huffyuv. But I get

    ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: Internal data flow error.
    

    every time I try to open this stream.

    gst-launch-1.0 shmsrc socket-path=$ctrsocket do-timestamp=true is-live=false ! \
    $VIDEOFORMAT                           ! \
    videoconvert ! \
    avenc_huffyuv ! \
    queue ! \
    matroskamux name=muxer streamable=true !\
    queue ! \
    tcpserversink host=0.0.0.0 port=5010 sync-method=2 recover-policy=keyframe  \
    fdsrc fd=0 do-timestamp=true          ! \
    $AUDIOFORMAT                          ! \
    muxer.
    
    gst-launch-1.0 -v tcpclientsrc host=127.0.0.1 port=5010 ! decodebin name=decoder ! autovideosink
    

    I think I need an analog of h264parse for the avenc_huffyuv codec.
    But apparently this does not exist?
    Can I not transfer this video without compression?

     
    • Peter Maersk-Moller

      Hi E.

      Why do you want to transfer uncompressed video ?
      What is the purpose?
      What do you want to be able to do?

      1280x720@25fps = 1280x720x25x4x8 = 738 Mbps. That is not insignificant.
      Okay, remove the alpha channel and convert to I420/YUV and you get
      1280x720x25x1.5x8 = 276 Mbps, but that is still way over what you can get with a USB 2.0 based Gigabit Ethernet adapter, and many Ethernet adapters sit on the USB controller.

      P

       
      Last edit: Peter Maersk-Moller 2014-03-17
  • Evgeny

    Evgeny - 2014-03-18

    I am building an A/V mixer for online broadcasts.
    http://sagatov.net/temp/mixer/pic1.png

    I want to display the uncompressed video on a monitor. If it is compressed, it will take a lot of CPU time and there will be a long delay and image distortion.

    Another GStreamer process will crop only the main screen, compress it and send the compressed video over the network. See the attachment, please.

     
    • Peter Maersk-Moller

      Hi E.

      Thanks for the answers. Why do you want to mix a larger video frame of 1046x590 and monitor that frame, but only encode/save/broadcast a much smaller 696x392 frame? What is the purpose of monitoring all that and then just dropping/cropping it? I am asking because with limited resources, it might be better to run two instances of Snowmix, one for broadcast and one for monitoring. The monitoring part can even be reduced in size and framerate to save system resources such as CPU and memory bandwidth, and to enable remote monitoring through very fast encoding.

      Anyway, is using the monitor on command a viable solution? If it is, then the following should work. Please note the subtle changes compared to the earlier solutions: there is a videocrop and a leaky queue.

      WIDTH=1046
      HEIGHT=590
      VIDEOCROP='videocrop top=0 left=0 right=350 bottom=198 '
      AUDIOSRC='fdsrc fd=0 do-timestamp=true ! queue '
      VIDEOSRC='shmsrc socket-path=/tmp/mixer1 do-timestamp=true ! queue leaky=2 '
      AUDIOFORMAT='audio/x-raw, format=(string)S16LE, rate=(int)44100, channels=(int)2'
      AUDIOFORMATOUT='audio/mpeg,mpegversion=4, stream-format=raw'
      VIDEOFORMAT='video/x-raw, format=(string)BGRA, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, width=(int)'$WIDTH', height=(int)'$HEIGHT', framerate=(fraction)25/1'
      VIDEOFORMATOUT='video/x-h264, alignment=au, stream-format=byte-stream, profile=(string)main'
      
      ( echo 'audio sink ctr isaudio 1' ; sleep 100000000 ) | \
        /bin/nc 127.0.0.1 9999      | \
       (head -1
        /usr/bin/gst-launch-1.0 -v \
          $AUDIOSRC !\
          $AUDIOFORMAT !\
          audioparse !\
          audioconvert !\
          faac bitrate=128000 !\
          $AUDIOFORMATOUT !\
          aacparse !\
          queue !\
          muxer. $VIDEOSRC !\
          $VIDEOFORMAT !\
          videoconvert !\
          $VIDEOCROP !\
          x264enc bitrate=3000 tune=zerolatency speed-preset=5 key-int-max=50 bframes=0 !\
          $VIDEOFORMATOUT !\
          h264parse !\
          queue !\
          matroskamux name=muxer streamable=true !\
          queue !\
          tcpserversink host=0.0.0.0 port=5010 sync-method=2 recover-policy=keyframe
      )
      

      Does this script work for you without Snowmix running out of shm buffers?

      Regards
      Peter

       
    • Peter Maersk-Moller

      The following script works well on my very old Thinkpad T61 with only a dual core CPU. It displays the video in a window and plays the audio.

          WIDTH=1046
          HEIGHT=590
          VIDEOCROP='videocrop top=0 left=0 right=350 bottom=198 '
          AUDIORATE=48000
          AUDIOSRC='fdsrc fd=0 do-timestamp=true ! queue '
          VIDEOSRC='shmsrc socket-path=/tmp/mixer1 do-timestamp=true ! queue leaky=2 '
          AUDIOFORMAT='audio/x-raw, format=S16LE, rate='$AUDIORATE', channels=2'
          AUDIOFORMATOUT='audio/mpeg,mpegversion=4, stream-format=raw'
          VIDEOFORMAT='video/x-raw, format=BGRA,pixel-aspect-ratio=1/1,interlace-mode=progressive'
          VIDEOFORMAT=$VIDEOFORMAT',width='$WIDTH',height='$HEIGHT',framerate=25/1'
          VIDEOFORMATOUT='video/x-h264, alignment=au, stream-format=byte-stream, profile=(string)main'
      
          ( echo 'audio sink ctr isaudio 1' ; sleep 100000000 ) | \
            /bin/nc 127.0.0.1 9999      | \
           (head -1
            gst-launch-1.0 -v \
              $AUDIOSRC !\
              $AUDIOFORMAT !\
              audioparse rate=$AUDIORATE !\
              tee name=t2 !\
                queue leaky=2 !\
                audioconvert !\
                autoaudiosink t2. !\
              audioconvert !\
              faac bitrate=128000 !\
              $AUDIOFORMATOUT !\
              aacparse !\
              queue !\
              muxer. $VIDEOSRC !\
              $VIDEOFORMAT !\
              videoconvert !\
              tee name=t1 !\
                queue leaky=2 !\
                autovideosink t1. !\
              queue leaky=2 !\
              $VIDEOCROP !\
              x264enc bitrate=3000 tune=zerolatency speed-preset=5 key-int-max=50 bframes=0 !\
              $VIDEOFORMATOUT !\
              h264parse !\
              queue !\
              matroskamux name=muxer streamable=true !\
              queue !\
              tcpserversink host=0.0.0.0 port=5010 sync-method=2 recover-policy=keyframe
          )
      

      Does this work for you?

      You may also add the following to your ini file to make your life easier:

      audio sink add 0 dummy
      audio sink channels 0 2
      audio sink rate 0 48000
      audio sink format 0 16 signed
      audio sink source mixer 0 1
      #audio sink mute off 0
      audio sink start 0
      
       
      Last edit: Peter Maersk-Moller 2014-03-18
    • Peter Maersk-Moller

      Forgot to mention that the audio sink 0 should be started before the audio mixer1.

      Out of curiosity, as I have very little knowledge of what Snowmix is used for by others, what are you using Snowmix for? In what context?

      Regards
      Peter

       
  • Anonymous - 2014-03-18

    Hi Peter!

    View the picture http://sagatov.net/temp/mixer/pic1.png
    Cameras 1-4 and Projector are input SnowMix streams from real USB cameras.
    SnowMix switches these inputs to the main screen (the top-left feed) by user commands from the left panel.

    I use SnowMix for visual switching of video camera streams.
    On the Monitor I show all feeds from SnowMix for the admin.
    But for broadcasting I use only the main feed.

    I use SnowMix as the software mixer for video broadcasting of lectures at my university.

    For this reason, I need to display uncompressed video on the monitor and broadcast compressed video.

    The first variant of the solution is not suitable for my application. I will try the second variant and let you know the result.

    Thank you!

     
  • Evgeny

    Evgeny - 2014-03-18

    I tried the solution you suggested.
    The sound on the monitor is about seven seconds late. This is unacceptable, because the administrator must see and hear nearly in real time (~1 s behind).
    I tried to change the parameters of the queues (drops and max values), but did not find a solution.
    When I change

     x264enc bitrate=3000 tune=zerolatency speed-preset=5 key-int-max=50 bframes=0 !\
    

    to

     x264enc bitrate=448 speed-preset=slower tune=film
    

    the sound from the monitor goes mute, and I could not fix it.

     
    • Peter Maersk-Moller

      Hi E.

      If I add the following to your ini file:

      audio sink add 2 Line-Out 2
      audio sink channels 2 2
      audio sink rate 2 48000
      audio sink format 2 16 signed
      audio sink source mixer 2 1
      audio sink mute off 2
      

      and if I run the following script

      ( echo 'audio sink ctr isaudio 2' ; sleep 100000000 ) |\
       /bin/nc 127.0.0.1 9999 | \
      (head -1
       gst-launch-1.0 -v fdsrc fd=0 do-timestamp=true !\
          'audio/x-raw, format=S16LE, rate=48000, channels=2' !\
          audioparse rate=48000 !\
          queue !\
          audioconvert !\
          autoaudiosink
      )
      

      and if I run a script to read frames from Snowmix (like output2dummy)

      and if I feed audio and video into Snowmix (like input2feed 1)

      and if I turn on the monitor, then the images I see are at most 1 frame late (around 40ms) and the audio I hear is in sync with the video. So when you say 7 seconds, that has to be with another setup.

      Now if I replace the output2dummy and leave the audio and video monitor as is, and then run the script listed here below, I see another video window pop up where the video is 1 frame late, which is around 40 ms, and the audio is exactly the same as the audio monitor I left running. So again, no 7 seconds of delay but rather a sub-100 ms delay.

          WIDTH=1046
          HEIGHT=590
          VIDEOCROP='videocrop top=0 left=0 right=350 bottom=198 '
          AUDIORATE=48000
          AUDIOSRC='fdsrc fd=0 do-timestamp=true ! queue '
          VIDEOSRC='shmsrc socket-path=/tmp/mixer1 do-timestamp=true ! queue leaky=2 '
          AUDIOFORMAT='audio/x-raw, format=S16LE, rate='$AUDIORATE', channels=2'
          AUDIOFORMATOUT='audio/mpeg,mpegversion=4, stream-format=raw'
          VIDEOFORMAT='video/x-raw, format=BGRA,pixel-aspect-ratio=1/1,interlace-mode=progressive'
          VIDEOFORMAT=$VIDEOFORMAT',width='$WIDTH',height='$HEIGHT',framerate=25/1'
          VIDEOFORMATOUT='video/x-h264, alignment=au, stream-format=byte-stream, profile=(string)main'
      
          ( echo 'audio sink ctr isaudio 1' ; sleep 100000000 ) | \
            /bin/nc 127.0.0.1 9999      | \
           (head -1
            gst-launch-1.0 -v \
              $AUDIOSRC !\
              $AUDIOFORMAT !\
              audioparse rate=$AUDIORATE channels=2 !\
              tee name=t2 !\
                queue leaky=2 !\
                audioconvert !\
                autoaudiosink t2. !\
              audioconvert !\
              faac bitrate=128000 !\
              $AUDIOFORMATOUT !\
              aacparse !\
              queue !\
              muxer. $VIDEOSRC !\
              $VIDEOFORMAT !\
              videoconvert !\
              tee name=t1 !\
                queue leaky=2 !\
                autovideosink t1. !\
              queue leaky=2 !\
              $VIDEOCROP !\
              x264enc bitrate=3000 tune=zerolatency speed-preset=5 key-int-max=50 bframes=0 !\
              $VIDEOFORMATOUT !\
              h264parse !\
              queue !\
              matroskamux name=muxer streamable=true !\
              queue !\
              tcpserversink host=0.0.0.0 port=5010 sync-method=2 recover-policy=keyframe
          )
      

      And when I play the generated stream with VLC, the result is 2-3 seconds later compared to the monitor setup, but that is to be expected and due to internal buffering in VLC.

      So what are you doing precisely?

       
      Last edit: Peter Maersk-Moller 2014-03-24

