
ASIO Buffer Latency Issue

2019-09-07
2020-03-28
  • Manuel Skorupka

    Manuel Skorupka - 2019-09-07

    I did some latency measurements with different ASIO buffer sizes and found that there is no latency improvement between 128 and 64 buffers. If I'm right, there should be an improvement of 2.67 ms. Does anybody know why there is no lower latency at 64 buffers?
    From 64 to 32 buffers there is an improvement of 1.32 ms.
    And from 32 to 16 buffers there is also an improvement of 0.65 ms.
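    (For reference, these expected figures follow from the 48 kHz sample rate if the ASIO buffer is counted once on the capture path and once on the playback path of the round trip. A small sketch of that arithmetic:)

    ```cpp
    #include <cstdio>

    // Expected round-trip latency saving at 48 kHz when shrinking the ASIO
    // buffer: the buffer is traversed once on capture and once on playback,
    // so the saving is twice the per-buffer sample difference.
    double RoundTripSavingMs ( int iOldBuf, int iNewBuf )
    {
        return 2.0 * ( iOldBuf - iNewBuf ) / 48000.0 * 1000.0;
    }

    int main()
    {
        printf ( "128 -> 64: %.2f ms\n", RoundTripSavingMs ( 128, 64 ) ); // 2.67
        printf ( " 64 -> 32: %.2f ms\n", RoundTripSavingMs ( 64,  32 ) ); // 1.33
        printf ( " 32 -> 16: %.2f ms\n", RoundTripSavingMs ( 32,  16 ) ); // 0.67
        return 0;
    }
    ```

    (The 1.32 ms and 0.65 ms measured above match the predicted 1.33 ms and 0.67 ms within measurement error; only the 128 -> 64 step does not.)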

    Is there any possibility to fix this?

    Regards Manuel Skorupka

     
  • Volker Fischer

    Volker Fischer - 2019-09-08

    Hi Manuel, what you measured is expected behavior in Jamulus. Jamulus is designed internally to work on buffers of 128 samples length. So if your audio card works on buffers greater than 128, you will notice a big improvement by shrinking the buffer size. But if you shrink the buffers below 128, the only latency improvement will be in your audio driver. If the audio card buffers are smaller than 128, Jamulus simply waits until the 128-sample block is filled before it starts processing it.
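    (To make the waiting concrete, here is a tiny illustrative sketch of a conversion buffer that only releases data in whole 128-sample blocks; the class and names are invented for illustration, not the actual Jamulus code:)

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Illustrative conversion-buffer sketch: driver callbacks of any size
    // are accumulated, and complete 128-sample blocks are only counted as
    // ready once enough samples have arrived.
    class CConvBufSketch
    {
    public:
        explicit CConvBufSketch ( size_t iBlockSize = 128 ) : iBlock ( iBlockSize ) {}

        // feed one driver callback; returns the number of complete blocks
        // that became available (a real implementation would hand them to
        // the processing chain instead of discarding them)
        size_t Put ( const std::vector<float>& vecCallback )
        {
            vecStore.insert ( vecStore.end(), vecCallback.begin(), vecCallback.end() );
            const size_t iNumBlocks = vecStore.size() / iBlock;
            vecStore.erase ( vecStore.begin(),
                             vecStore.begin() + static_cast<std::ptrdiff_t> ( iNumBlocks * iBlock ) );
            return iNumBlocks;
        }

    private:
        size_t             iBlock;
        std::vector<float> vecStore;
    };

    int main()
    {
        CConvBufSketch     conv;                 // 128-sample internal block
        std::vector<float> vec64 ( 64, 0.0f );   // a 64-sample ASIO callback

        assert ( conv.Put ( vec64 ) == 0 );      // first 64 samples: still waiting
        assert ( conv.Put ( vec64 ) == 1 );      // second callback completes a block
        return 0;
    }
    ```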

     
  • Manuel Skorupka

    Manuel Skorupka - 2019-09-09

    Hi Volker, so if the ASIO buffer size is smaller than (or not a multiple of) 128 samples, Jamulus waits for full 128-sample blocks, which equals 128 / 48000 Hz = 2.67 ms. That is the conversion buffer latency of Jamulus? So the ASIO buffer advantage from 128 to 64 buffers (2.67 ms) brings no profit because of the Jamulus conversion buffer. If I'm right, the minimum buffer size the OPUS codec can deal with is 128? Many thanks so far for this great piece of software.

     
  • Volker Fischer

    Volker Fischer - 2019-09-09

    Yes, exactly. This is how the conversion buffer in Jamulus works. As far as I know, OPUS is capable of smaller buffer sizes. But if the buffer is smaller than 128, the protocol overhead compared to the actual audio data is very high. So I decided to use a basic block size of 128 samples in Jamulus as a trade-off between the amount of audio data compared to the protocol data needed for the UDP transmission.
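    (That trade-off can be illustrated with rough numbers. Assuming about 28 bytes of IPv4 + UDP header per packet, and payload sizes chosen purely for illustration, 71 OPUS bytes per 64-sample frame as in the patch later in this thread, and double that, 142 bytes, per 128-sample frame, the same audio bitrate costs more total bandwidth at the smaller frame size because twice as many headers are sent:)

    ```cpp
    #include <cstdio>

    // Rough per-packet header cost: 20 bytes IPv4 + 8 bytes UDP. This
    // ignores Ethernet framing and Jamulus's own protocol bytes, so the
    // real overhead is higher; the numbers only illustrate the trend.
    const int iHeaderBytes = 28;

    // total stream rate in kbps at 48 kHz, given the frame size in samples
    // and the coded payload bytes per frame
    double StreamRateKbps ( int iFrameSamples, int iPayloadBytes )
    {
        const double dPacketsPerSec = 48000.0 / iFrameSamples;
        return dPacketsPerSec * ( iHeaderBytes + iPayloadBytes ) * 8.0 / 1000.0;
    }

    int main()
    {
        // hypothetical payloads: same audio bitrate, half the frame size
        printf ( "128 samples: %.0f kbps\n", StreamRateKbps ( 128, 142 ) ); // 510
        printf ( " 64 samples: %.0f kbps\n", StreamRateKbps ( 64,  71 ) ); // 594
        return 0;
    }
    ```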

     
  • Manuel Skorupka

    Manuel Skorupka - 2019-09-20

    Hi Volker again. I found out that the minimal usable OPUS buffer size is 64 samples. I made some changes and compiled a Jamulus version with 64-sample buffers. After that I measured the latency improvement between the original Jamulus (128 samples) and Jamulus with 64 samples.
    All measurements were made with a MOTU 1296 audio interface on a local area network with the following settings: Jitter Buffer Sizes (Auto off, Local Size 2, Server Size 2), Misc (Audio Channels Stereo, Audio Quality High). I measured the round-trip latency with an external oscilloscope.

    Results in ms:

    ASIO buffer:   Jamulus 128:   Jamulus 64:   Improvement:
    16             15.83           9.17          6.67
    32             16.50           9.83          6.67
    64             17.83           9.83          8.00
    128            17.83          12.50          5.33

    I also have an Ubuntu server up and running. It's working fine. With a ping time of 32 ms I'm able to play with 43 ms round-trip latency at ASIO buffer size 64. For me, or us, that's a big improvement.
    ;)
    Regards, Manuel Skorupka

     

    Last edit: Manuel Skorupka 2019-09-20
    • Volker Fischer

      Volker Fischer - 2020-03-28

      I did some measurements today, too. Using the Central Server hardware, I have a ping of about 13 ms. I use macOS with a Lexicon Omega audio interface and the jitter buffers both set to 2. Then I get:
      Jamulus 128: 35 ms
      Jamulus 64: 26 ms
      So I get a difference of about 9 ms which is great :-).

       
  • Volker Fischer

    Volker Fischer - 2019-09-21

    Hi Manuel,
    thank you for sharing your results here. These are very interesting numbers. Have you also checked the network bandwidth of the Jamulus stream when comparing the 128 and 64 samples block sizes?
    Regards, Volker

     
  • Manuel Skorupka

    Manuel Skorupka - 2019-09-21

    Hi Volker,

    Network bandwidth in kbps (the Audio Stream Rate shown in the Settings dialog):

    ASIO buffer:   Jamulus 128:   Jamulus 64:
    16             657            888
    32             657            888
    64             657            888
    128            657            657

    I also quickly monitored the network bandwidth with the Windows Resource Monitor for up- and downstream at 64 SYSTEM_FRAME_SIZE_SAMPLES. For Jamulus 128 it was 552 kbit/s and for Jamulus 64 it was 680 kbit/s.
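    (Working backwards from the displayed rates, an inference rather than a measurement: at 64-sample frames Jamulus sends 48000 / 64 = 750 packets per second, so 888 kbps corresponds to 888000 / 8 / 750 = 148 bytes per packet, i.e. 77 bytes of header/protocol data on top of the 71 OPUS bytes; at 128-sample frames, 657 kbps over 375 packets per second is 219 bytes per packet. A quick check:)

    ```cpp
    #include <cstdio>

    // Back-of-envelope: implied bytes per packet from the displayed stream
    // rate, assuming one packet per system frame at 48 kHz. Everything
    // beyond the coded OPUS bytes is header/protocol overhead.
    int BytesPerPacket ( double dRateKbps, int iFrameSamples )
    {
        const double dPacketsPerSec = 48000.0 / iFrameSamples;
        return static_cast<int> ( dRateKbps * 1000.0 / 8.0 / dPacketsPerSec );
    }

    int main()
    {
        printf ( "Jamulus 64:  %d bytes/packet\n", BytesPerPacket ( 888.0, 64 ) );  // 148
        printf ( "Jamulus 128: %d bytes/packet\n", BytesPerPacket ( 657.0, 128 ) ); // 219
        return 0;
    }
    ```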
    Regards
    Manuel Skorupka

     
  • Manuel Skorupka

    Manuel Skorupka - 2019-09-21

    Necessary code modifications for a quick-and-dirty Jamulus 64 version:
    Only the STEREO_HIGH_QUALITY mode is usable!
    For the other modes I don't know exactly what has to be changed.

    client.h:

        #define OPUS_NUM_BYTES_STEREO_HIGH_QUALITY 71

    client.cpp:

        iCeltNumCodedBytes ( OPUS_NUM_BYTES_STEREO_HIGH_QUALITY ),

    global.h:

        #define SYSTEM_FRAME_SIZE_SAMPLES   64
        #define FRAME_SIZE_FACTOR_PREFERRED 1 // 64  (for frame size 64)
        #define FRAME_SIZE_FACTOR_DEFAULT   2 // 128 (for frame size 64)
        #define FRAME_SIZE_FACTOR_SAFE      4 // 256 (for frame size 64)

    server.cpp:

        #if ( SYSTEM_FRAME_SIZE_SAMPLES != 64 )
        # error "Only system frame size of 64 samples is supported by this module"
        #endif

        void CHighPrecisionTimer::Start()
        {
            // reset position pointer and counter
            iCurPosInVector  = 0;
            iIntervalCounter = 0;

            // start internal timer with 1 ms resolution
            Timer.start ( 1 ); // changed from 2 --> 1
        }
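    (The timer change follows from the frame period: 64 samples at 48 kHz is 64 / 48000 ≈ 1.33 ms, so the 2 ms base interval that suits 128-sample frames (2.67 ms period) is too coarse, and a 1 ms base interval is needed. A quick check of those periods:)

    ```cpp
    #include <cstdio>

    // frame period in ms for a given system frame size at 48 kHz; the
    // high-precision timer's base interval must not exceed this period
    double FramePeriodMs ( int iFrameSamples )
    {
        return iFrameSamples / 48000.0 * 1000.0;
    }

    int main()
    {
        printf ( "128 samples: %.2f ms frame period\n", FramePeriodMs ( 128 ) ); // 2.67
        printf ( " 64 samples: %.2f ms frame period\n", FramePeriodMs ( 64 ) );  // 1.33
        return 0;
    }
    ```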

     

    Last edit: Manuel Skorupka 2019-09-21
  • Volker Fischer

    Volker Fischer - 2019-09-21

    Thank you for your measurements and your patches to the source code.
    680 kbps vs 552 kbps does not seem to be a big issue. I thought it would be more overhead. Very interesting tests you did. I hope you have a lot of fun with Jamulus, now that it works ok for you.

     
  • EmlynB

    EmlynB - 2020-03-27

    Thanks for the info and the code changes - I'm enjoying this lower latency too!