Replies: 25 comments
-
Yes, this has been mentioned before on the forums and I think it's an interesting idea. I sometimes hang out as a listener on servers to see what people are talking about, and there's often discussion about the complexity of how the buffers interact, which can confuse people. But the trade-off between latency and signal integrity seems to be a constant. One might also imagine a "simple" and an "advanced" UI, where the current UI is the advanced one and the slider is the simple one (is it "quick --|-- safe" or some other vocabulary? "more response --|-- more integrity"?)
-
The issue I see with the request is that the auto jitter buffer functionality is not only located at the client but also at the server. So if I introduced such a setting, the server would also have to support it, since it does not make sense to auto-adjust the local and server jitter buffers differently. We would have to make changes in the client, introduce a new protocol message to tell the server about the setting, and do the implementation in the server as well. So there is quite a lot to do. But the bigger concern is that we have a lot of old Jamulus server versions registered on the server lists, so the new feature would not work on most of the registered servers.
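To give an idea of the scope, here is a minimal sketch of what the client side of such a new protocol message could look like. The message ID, function name, and payload layout are hypothetical illustrations, not Jamulus's actual protocol definitions.

```cpp
// Hypothetical client-to-server message carrying an auto jitter buffer
// "aggressiveness" preference. The ID value and payload layout are
// assumptions for illustration only.
#include <cstdint>
#include <vector>

constexpr uint16_t PROTMESSID_AUTO_JITT_BUF_SETTING = 40; // hypothetical ID

// Payload: one byte, 0 = most responsive ... 10 = safest.
std::vector<uint8_t> buildAutoJitBufSettingPayload(uint8_t aggressiveness)
{
    if (aggressiveness > 10)
    {
        aggressiveness = 10; // clamp to the assumed valid range
    }
    return {aggressiveness}; // would then be wrapped in the usual frame
}
```

The real work, of course, would be in the server-side handling and in falling back gracefully when the connected server is too old to understand the message.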
-
Thanks for thinking along; as gilgongo said, it is a frequently discussed subject in different sessions and after-talks. Of course there are different criteria and circumstances. Recently we finished our Jamulus Choir project with a small streamed festival, and of course just that evening I had an unreliable internet connection. This prompted me to switch off auto and finally add about 10 ms on both sides... Once you've figured out tweaking your local buffers, the auto jitter buffer is still the best way to deal with this, if only I could make it stricter :-) Replying to corrados: the combined server/client side makes it more complex, yes, I didn't realize that. But we got the pan feature, which is also not supported on the older servers. If I am a critical musician I will update both client and server, or choose an updated server, so I think for the users it would not really be a problem. Realizing it in programming is of course another thing. I am not a programmer, but I could imagine a default value, which the current auto function already has, and an adjustable value only if the versions of server and client allow it, just like the pan wheel disappears when I connect to an outdated server; see the sketch below.
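A version-gated fallback along those lines might look like this minimal Qt sketch. The version constant, helper name, and threshold are assumptions, not Jamulus's actual API.

```cpp
// Hide the hypothetical aggressiveness slider when the connected
// server is too old, mirroring how the pan wheel disappears on
// outdated servers. All names and the threshold are illustrative.
#include <QSlider>

constexpr int MIN_VERSION_ADJUSTABLE_AUTO_JITT_BUF = 3510; // assumption

void updateAutoJitBufControls(int serverProtocolVersion, QSlider* slider)
{
    const bool supported =
        serverProtocolVersion >= MIN_VERSION_ADJUSTABLE_AUTO_JITT_BUF;

    // On old servers, fall back to the current fixed auto behaviour
    // and simply hide the new control.
    slider->setVisible(supported);
}
```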
-
Maybe the simplest solution would be to adjust the auto jitter buffer parameters so that you get a bit fewer dropouts. Those who want the lowest possible latency will most probably disable the auto jitter buffer anyway (which I do).
-
Somewhat related to this topic: are there any statistics in the program showing the number of packets dropped, arrived late, discarded, ... that could serve as troubleshooting for bad connections?
-
[Reply not preserved in this export; it apparently pointed to a hidden feature showing a jitter buffer statistics table, referenced in the next reply.]
-
That is an interesting hidden feature, but it's still not quite clear to me what is shown. Are those calculated error rates versus buffer size? I get a slightly different table.
-
Auto off for lowest latency: I suppose you have a stable enough internet connection that you can keep it at one value. However, it perhaps depends on what you're playing as well as on how much network variability you have. Today I was playing staccato chords in Baggy Trousers, and the jitter bothered me less than it does in, for instance, string playing or singing.
-
No, I have a pretty bad internet connection. But as long as no other device in my house is active, the jitter is pretty stable. The issue is that if your son sends a WhatsApp video, you will get many dropouts for a period of, let's say, 10 seconds. During that time the auto jitter buffer turns the buffers up, and after a while it goes back to the old values. But you do not actually benefit from the increased buffer sizes, since during the 10 seconds of video upload you cannot use Jamulus no matter what jitter buffer setting you use. After the 10 seconds, though, you can immediately play at a low latency if the auto jitter buffer was disabled.
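For intuition, here is a simplified model of the grow-fast/shrink-slowly behaviour described above. The smoothing factor, thresholds, and hold time are illustrative assumptions, not the actual Jamulus algorithm.

```cpp
// Illustrative model of an adaptive jitter buffer reacting to a loss
// burst: it grows quickly on errors and shrinks back only slowly,
// which is why the buffer stays large for a while after a 10 s
// upload burst has ended. All constants are assumptions.
#include <algorithm>

class AutoJitterBufferModel
{
public:
    void onBlockArrived(bool wasLateOrLost)
    {
        // Exponentially smoothed error rate estimate.
        errorRate = 0.99 * errorRate + (wasLateOrLost ? 0.01 : 0.0);

        if (errorRate > growThreshold)
        {
            bufferBlocks = std::min(bufferBlocks + 1, maxBlocks); // react fast
        }
        else if (errorRate < shrinkThreshold)
        {
            // Shrink slowly: only after a long run of clean blocks.
            if (++cleanBlocks > 2000)
            {
                bufferBlocks = std::max(bufferBlocks - 1, minBlocks);
                cleanBlocks  = 0;
            }
        }
        if (wasLateOrLost)
        {
            cleanBlocks = 0;
        }
    }

    int size() const { return bufferBlocks; }

private:
    double errorRate    = 0.0;
    int    bufferBlocks = 4;
    int    cleanBlocks  = 0;
    static constexpr double growThreshold   = 0.02;  // assumption
    static constexpr double shrinkThreshold = 0.005; // assumption
    static constexpr int    minBlocks = 2, maxBlocks = 20;
};
```

After a burst, the smoothed error rate decays only slowly and the clean-block hold delays the shrink further, which matches the observation that the buffers stay large after the upload has finished.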
-
My question is still open. What are your opinions?
-
I suggest making the auto jitter buffer adjustment a little less aggressive, refreshing the jitter buffer status more frequently, or both. I suspect it will not take a big adjustment to resolve the auto buffer's under-buffering issue. The problem is at the lower margin.
-
To me that would give a more reliable auto jitter buffer. Of course I can't speak for others, as said. When I want to be on the safe side (like performing on WorldJam) I add about 2 or 3 ms on both; when rehearsing I add less. Should we do a poll in one of the Facebook groups on how players feel about this and how much to add? Making it adjustable would still be my first choice; we had more items, like panning, that were situated on both the client and server side and were wonderfully updated with backward compatibility. Of course, thinking about this with more programmers could help; that's the power of open source!
-
OK, I have just lowered the dropout probability of the auto jitter buffer algorithm. The change will go into the next release, 3.5.10. But it will take some time until the servers update, so at the beginning only the "local" jitter buffer setting will be changed. Let's see what the feedback on this change is like.
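For readers wondering what "lowering the dropout probability" means in practice, here is a hedged sketch of one common approach: pick the smallest buffer size whose predicted dropout rate, derived from a histogram of observed packet delays, stays below a target. The names and the histogram representation are assumptions, not the code actually changed in 3.5.10.

```cpp
// Pick the smallest jitter buffer size whose predicted dropout
// probability stays below a target. Lowering the target biases the
// algorithm toward larger, safer buffers at the cost of latency.
// The histogram representation and names are assumptions.
#include <vector>

int chooseBufferSize(const std::vector<double>& delayHistogram, // P(delay == i blocks)
                     double targetDropoutProb,
                     int minBlocks, int maxBlocks)
{
    double tailProb = 1.0; // P(delay >= 0 blocks)
    for (int size = 0; size <= maxBlocks; ++size)
    {
        // A packet is dropped if its delay exceeds the buffer size,
        // so the dropout probability for this size is P(delay >= size).
        if (size > 0 && size - 1 < static_cast<int>(delayHistogram.size()))
        {
            tailProb -= delayHistogram[size - 1];
        }
        if (size >= minBlocks && tailProb <= targetDropoutProb)
        {
            return size; // smallest size meeting the target
        }
    }
    return maxBlocks;
}
```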
-
I'm not sure if the statement from 19 Jul, "...it does not make sense to auto adjust the local/server jitter buffers differently...", is correct. My experience is that people often have problems only with upload, especially the cable provider customers. Does someone agree? BTW, I noticed yesterday evening that just one attendee with a bad connection will disturb everyone else's packet ping. I normally have a ping of 6 ms to the central server, but yesterday it jumped up to 30 ms for as long as this Spanish musician was in the session. Has anyone had the same experience? In other words, I really recommend the auto functions!
-
What I can confirm is that when playing together (3 musicians) we encounter high overall delays (50 ms+) while the ping times displayed for the chosen server are reasonably low (15..20 ms), most likely because one musician is with a cable provider, see
-
I find the report by @nico-dr very curious. What can a user send that causes the server to become busy? Can a user send packets much faster or larger than the other users to clog the buffer? Can the user send some packet content that takes a lot more computing than the packets from the other users? Very puzzling. Regarding @drummer1154's report, what is the data path inside the server for the overall delay? Speedtest makes its measurements using TCP; I am assuming Jamulus traffic is (only) UDP. Is your ISP allowed to prioritize some traffic over others?
-
@gene96817 If you ask me questions related to "the server" (which one?) or "my ISP", I cannot give an answer. I am not a carrier-grade network expert, but from my PoV: why should a UDP packet traverse different buffers or network paths than a TCP packet? AFAIK, TCP is just another "safeguarding" layer on top of the "normal" network transport mechanism, giving the possibility to repeat a packet if it was damaged, while a UDP packet is dropped (lost) if damaged. Therefore I think the "buffer bloat" effect revealed by the dslreports tests
-
@drummer1154 We don't have enough data to know some details. However, here are some of the issues. [The list itself was not preserved in this export.]
-
@gene96817 Thanks for these many aspects; I completely agree. I was especially thinking of [the quoted point, not preserved here] as being the root cause for our intermittent high overall delay times. I still suspect the cable network ISP being the culprit, but I do not have any evidence. So, clearly, "Speedtest is not useful for our thinking about UDP", and up to now (i.e. prior to discovering the dslreports.com speedtest) I only saw speed tests indicate the maximum up/download rates without any jitter indication, which of course doesn't help us. Do you think you could invest some time into finding out how the dslreports.com speedtest works and whether it could help us find the bottleneck(s)? Regarding the (Jamulus) server assessment: the public servers are completely out of my influence, and I cannot investigate "the data path inside the server for the overall delay". I also cannot answer the question "Is your ISP allowed to prioritize some traffic over others?" (who would have the authority to allow or disallow this except the very ISP itself?), but I strongly believe there is a QoS distinction, because after the death of ISDN we are forced to use VoIP for telephony, and most likely the ISPs prioritize VoIP traffic. Writing this makes me wonder: could Jamulus somehow exploit this?
-
@drummer1154 I will find some time over the next few days to investigate. I am currently very busy evaluating the test suite by Measurement Labs (M-Labs) for another project. Perhaps the M-Labs test will be useful for us. In the USA, QoS was not used when network neutrality was required. The ISPs do not have any incentive to prioritize VoIP traffic, especially in the USA, where they have a competing voice service. Also, if VoIP were prioritized, Zoom and other video conferencing solutions would work much better. Most of the blemishes in video conferencing are due to jitter delays on UDP. More later...
-
@drummer1154 I reviewed dslreports.com. Essentially the Ookla test. The test is best at discovering the best-case speed for your service. My ISP cheats on the ping test. Running my own ping and traceroute to confirm the reported ping data revealed that the ping test (ICMP) packets are delivered directly to the ping server at layer 2, bypassing the routers. :P I almost always get 1-2 ms delay times even when the router is completely overloaded. :P
-
Thanks for pointing me to the M-Lab tests. I just tried out the speed test; my Win7 laptop (Firefox 84), Win8.1 PC (Chrome 87), and iMac (Safari 13.1) show roughly the same results (55/10 Mbps down/up, 20...30 ms latency Munich-Prague), but on the Mac there are some retransmissions (0.2%), no idea why... Same results also with Ookla when I choose a Prague server.
-
I'll eventually look at that difference. However, for our purposes the speed test (TCP-centric) is not sufficient. I want to know more about the UDP transport time and its variance. It seems to me our performance would improve with better flushing of late UDP packets. Of course, this only matters when the network is congested. I don't have time right now to study this. I am working on another project that could be improved by the M-Lab test suite. Similar to our needs, UDP latency is much more important there than TCP latency. (N.B. The speed test embedded in the M-Lab suite is important for comparing different measurements. It is the least useful measurement for usability.)
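For anyone who wants to measure exactly that, here is a small POSIX sketch that estimates UDP round-trip time and jitter by timing probes against a UDP echo service. The server address, port, probe count, and spacing are all assumptions; nothing here is part of Jamulus.

```cpp
// Estimate UDP round-trip time and jitter (what TCP speed tests do
// not show) by timing probes against a UDP echo service. Address,
// port, and timing constants are illustrative assumptions.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>
#include <chrono>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const char* SERVER_IP = "192.0.2.1"; // placeholder (TEST-NET address)
    const int   PORT      = 7;           // classic UDP echo port, if enabled

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    timeval tv{1, 0}; // 1 s receive timeout so lost packets don't block us
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(PORT);
    inet_pton(AF_INET, SERVER_IP, &addr.sin_addr);

    std::vector<double> rttMs;
    char buf[32] = "probe";

    for (int i = 0; i < 100; ++i)
    {
        const auto t0 = std::chrono::steady_clock::now();
        sendto(sock, buf, sizeof(buf), 0, (sockaddr*)&addr, sizeof(addr));
        if (recv(sock, buf, sizeof(buf), 0) > 0) // timeout => packet lost
        {
            const auto t1 = std::chrono::steady_clock::now();
            rttMs.push_back(
                std::chrono::duration<double, std::milli>(t1 - t0).count());
        }
        usleep(20000); // 20 ms spacing, roughly an audio packet rate
    }

    if (rttMs.empty())
    {
        std::puts("no replies received");
        return 1;
    }

    // Mean and standard deviation; the deviation is the jitter that
    // decides how large a jitter buffer has to be.
    double mean = 0.0, var = 0.0;
    for (double r : rttMs) mean += r;
    mean /= rttMs.size();
    for (double r : rttMs) var += (r - mean) * (r - mean);
    var /= rttMs.size();

    std::printf("RTT mean %.2f ms, jitter (stddev) %.2f ms, %zu/100 replies\n",
                mean, std::sqrt(var), rttMs.size());
    close(sock);
    return 0;
}
```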
-
@drummer1154 I took a closer look at the dslreports speedtest. Definitely not the Ookla test. There are two details that I like: (1) the test uses multiple servers, and (2) the target servers are outside my ISP. My measured (effective) speed is 10% of the marketing speed. :P
-
Hi all, so that we can keep the Issues list as a collection of actionable work tickets, I'm moving this to a discussion until such time as we can agree what needs to be done for the backlog. Thanks!
-
Auto jitter buffer adjustable? It might have been said earlier, but I wouldn't mind a variable auto jitter buffer setting. For me, dealing mainly with vocals, lack of jitter is more important than some extra ms of delay. So usually I put the jitter buffer on auto, take it off again, and put both one up, to get even less jitter. After a while the network sometimes gets in a bad mood, and I have to adjust again. So for changing network conditions I prefer auto, but auto is not clean enough... On the other hand, I have also read about people doing the opposite (faders one down, like in the help popup). I don't know whether the auto function of the buffer reacts to data that has already been lost, or whether it intervenes earlier, but a horizontal slider for auto between quick---|---safe might be practical for these two kinds of people...