Contemplating a change to a different cloud host, I was puzzling over whether a reduction from a monthly network quota of 5000GB to 1000GB was going to have a cost - would traffic exceed the smaller quota? (A back-of-envelope check is sketched after the questions below.) So I started by looking into the Jamulus documentation and found this table at https://jamulus.io/wiki/Network-Requirements. I imported it into a spreadsheet and colour-scaled it.
The first surprise was that the Jamulus client Settings window Audio Stream Rate does not seem to correspond with the numbers on the sheet (I haven't yet tabulated that). So I used Windows Resource Monitor to see the rate at which data was being sent and received (not the bit rate but the Byte rate, which is what the cloud host counts). Those results are in the second colour-scaled table. Comparing the two produced more surprises:
- Mono In/Stereo Out is the same Byte rate as Stereo. I thought a single channel was sent under Mono In but, no, it's two channels bearing the same audio.
- There's something of an inverse correlation between the stated bit rate and the measured Byte rate.
Which raises questions:
1. Is there something wrong with this analysis?
2. Is the published table accurate for version 3.6.2?
3. Is Windows Resource Monitor accurate?
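To make the quota question concrete, here is a minimal back-of-envelope sketch in Python. The client count, hours and rate are illustrative assumptions, not measurements; the ~27,000 Bytes/s figure anticipates the ~215kbps measured later in this thread.

```python
# Back-of-envelope quota check. All inputs here are assumptions for
# illustration, not measured values.

BYTES_PER_GB = 1000**3  # cloud hosts typically bill in decimal GB

def monthly_gb(bytes_per_sec: float, hours_per_month: float) -> float:
    """Total traffic in GB for one stream at a constant Byte rate."""
    return bytes_per_sec * 3600 * hours_per_month / BYTES_PER_GB

rate = 27_000   # Bytes/s one way per client (~215kbps, see below)
clients = 10    # assumed session size
hours = 40      # assumed server hours per month at that session size

# The server both receives each client's stream and sends a mix back.
total = monthly_gb(rate, hours) * clients * 2
print(f"~{total:.0f} GB/month")  # ~78 GB/month: well inside 1000GB
```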
Interesting - although I think the Network Requirements page was only a rough guide (and designed to give people a sense of how things worked).
I think the tables also don't account for the small packet feature introduced a few months ago.
I checked the numbers on the network requirements page once and they seemed to be correct, but the page has changed and the numbers now look suspect to me.
I think the numbers shown in the Settings window when Jamulus is running are correct, but will check again.
So, I just measured, both at the PC and at the server. The values, as expected, are the same.
They are, however, very different from what is in the documentation and displayed in the Settings window:
Buffer (samples)    Stereo Hi  Stereo N  Stereo Lo  Mono Hi  Mono N  Mono Lo
64 (small buffers)  705        480       405        480      400     355
64                  630        350       280        380      280     210
128                 630        350       280        380      280     210
256                 570        285       215        320      215     150
(all in kbps)
Last edit: DonC 2021-02-17
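As a unit check for comparing this table with Byte-counting tools like Resource Monitor: the table is in kilobits per second, while the cloud host counts Bytes. A one-line conversion, taking the 215 figure from the table above:

```python
# 1 kbps = 1000 bits/s = 125 Bytes/s; tools like Windows Resource
# Monitor report Bytes/sec, while the table above is in kbps.
def kbps_to_bytes_per_sec(kbps: float) -> float:
    return kbps * 125

print(kbps_to_bytes_per_sec(215))  # 26875.0 Bytes/s for Stereo Lo at 256
```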
Thanks, Don. What do you use to measure?
And does Mono In/Stereo Out differ from Stereo?
Stereo and Mono In/Stereo Out are identical.
I used the Performance window of the Win10 Task Manager on the PC and cbm on the Ubuntu 20.04 server to measure.
DonC, I don't see how you get those numbers directly from the Win10 Task Manager or cbm, but I concur that the documentation and Audio Stream Rate values are not valid. Based on one example, Mono In/Stereo Out at Low quality with a 256-sample buffer, the Settings figure of 259kbps Audio Stream Rate is not supported by my observation either.
cbm shows data varying both ways even when there is no client on the server, and a varying transmit total of ~218kbps with one client plus whatever else is going on. The non-Jamulus receive data is higher, so I'm seeing over 280kbps, with no way to filter out just the Jamulus data. Taking a guess based on the bouncing numbers when my client is not connected, I'd say the rate for one client alone was ~210kbps each way - not far off your 215kbps.
Likewise in Task Manager, the Performance page does not filter on Jamulus and the traffic is bouncing around. The Processes page does filter, but it totals send and receive to 0.4Mbps, which somewhat corroborates 210-215kbps each way, where I would have expected 2 x 259kbps to sum to 0.5Mbps.
The only filter for Jamulus alone that I've found to provide high precision is in the Win10 Resource Monitor, which gives Bytes/sec; that stabilises to within +/- ~20 Bytes/sec a minute or so after a Jamulus setting has been changed. (One possible source of the discrepancy is sketched below.)
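One plausible contributor to the gap between the stated Audio Stream Rate and the measured Byte rate is per-packet protocol overhead, which a raw Opus bit rate figure would not include. A rough sketch, assuming one UDP/IPv4 packet per audio frame and ignoring any Jamulus framing bytes (both assumptions about the internals, not confirmed):

```python
# Rough overhead estimate. Assumptions, not confirmed Jamulus internals:
# one UDP/IPv4 packet per audio frame and no extra protocol framing.

SAMPLE_RATE = 48_000    # Hz
IP_UDP_HEADER = 20 + 8  # Bytes of IPv4 + UDP header per packet

def on_wire_kbps(audio_kbps: float, buffer_samples: int) -> float:
    """Stated audio rate plus estimated header overhead, in kbps."""
    packets_per_sec = SAMPLE_RATE / buffer_samples
    return audio_kbps + packets_per_sec * IP_UDP_HEADER * 8 / 1000

for buf in (64, 128, 256):
    # 200 kbps is a placeholder audio rate, just to show the trend.
    print(buf, round(on_wire_kbps(200, buf)))  # 368, 284 and 242 kbps
```

Whatever the real numbers, smaller buffers mean more packets per second, so more of the measured Byte rate is headers rather than audio.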
See this discussion.
It's probably misleading to say "it's two channels bearing the same audio", since a stereo Opus stream uses M/S encoding and will not spend significant bandwidth on the empty side channel (in fact, Opus has no problem decoding a stereo stream with a mono decoder and vice versa). But the problem remains that the Mono In/Stereo Out setting uses excessive upstream bandwidth (which also causes more decoding work, since Jamulus does not use variable bit rates) and wastes processing power juggling the data.
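A toy illustration of the M/S point in plain numpy (not actual Opus code): with identical left and right channels, the side channel is exactly zero, so a mid/side coder has essentially nothing to spend bits on there.

```python
import numpy as np

# Toy mid/side transform, not actual Opus code: identical L and R
# channels leave the side channel at exactly zero.
t = np.linspace(0, 1, 48_000, endpoint=False)
left = np.sin(2 * np.pi * 440 * t)  # 440 Hz test tone
right = left.copy()                 # Mono In/Stereo Out: same audio twice

mid = (left + right) / 2
side = (left - right) / 2

print(np.max(np.abs(side)))  # 0.0 - the side channel carries no information
```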
"a stereo Opus stream uses M/S encoding and will not spend significant bandwidth on the empty side channel"
Thanks, David. OPUS is new to me.
My Byte measurement was unchanged whether Mute Myself was on or off, so it seems to me that the data rate is constant regardless. So either one of these is correct, but not both:
1. Mute Myself does not stop audio data (Bytes) from being transmitted to the server at a constant rate.
2. Jamulus does not spend significant network bandwidth on the empty side channel.
Both are correct. "Mute Myself" sends the same volume of audio data (since Jamulus does not use variable bitrate transmission), but it corresponds to a zero signal. While you do hear yourself when "Mute Myself" is active, that is because the Jamulus client keeps the actual local signal and mixes it in itself.
This means that when you use "Mute Myself", the signal you continue to hear from yourself is perfect regardless of the audio quality selected.
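A minimal sketch of why both statements can hold at a constant bitrate (illustrative framing arithmetic, not the actual Jamulus/Opus packet format): a CBR encoder emits the same number of Bytes per frame whether the input is a signal or digital silence, so muting changes the content of the packets, not their size.

```python
# Illustrative CBR framing, not the actual Jamulus/Opus packet format:
# at a constant bitrate every frame occupies the same number of Bytes,
# whether it encodes music or pure silence (all zeroes).

def cbr_frame_bytes(bitrate_bps: int, frame_ms: float) -> int:
    """Size of one encoded frame at a constant bitrate."""
    return int(bitrate_bps * frame_ms / 1000 / 8)

print(cbr_frame_bytes(200_000, 2.67))  # 66 Bytes, muted or not
# (2.67 ms is roughly a 128-sample buffer at 48 kHz.)
```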
It's not logical to say Jamulus spends insignificant network bandwidth on a side channel containing no information, because 62kB/s are being sent regardless of the audio being input. Zeroes in the side channel are being encoded and sent. So the two statements cannot be simultaneously correct, though that may be a quibble over semantics. We are agreed that it is wasteful of network bandwidth that the Mono In/Stereo Out option uses the bandwidth of a stereo channel upstream when it should use that of a mono channel.
It is perfectly logical, since the rest of the bandwidth is then invested in the mid channel, making the resulting mono of higher quality than it would be if one were sending two different channels.
So sending two different signals in Stereo mode at "High Quality" results in worse per-channel quality than sending a mono signal in Mono In/Stereo Out.
Lossy compression is tricky...
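A toy bit-budget illustration of that argument (assumed numbers, not Opus internals): with a fixed total rate, two genuinely different channels each get roughly half the bits, while a mono signal under M/S leaves nearly the whole budget for the mid channel.

```python
# Toy bit-budget split under a fixed total rate. The 200 kbps figure is
# an assumption for illustration, not an Opus internal.
TOTAL_KBPS = 200

per_channel_stereo = TOTAL_KBPS / 2  # two different signals share the budget
per_channel_mono_in = TOTAL_KBPS     # side ~ 0, so mid gets nearly everything

print(per_channel_stereo, per_channel_mono_in)  # 100.0 vs 200 kbps
```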