
#6 RFE: send many, wait for few

Milestone: v1.0 (example)
Status: closed
Labels: None
Priority: 5
Updated: 2023-02-13
Created: 2013-09-07
Private: No

Most of the time, I use -w1 since I hate it when I have to wait just because one of the packets is lost. Also, with my current DSL connection, about every 2nd run has at least one query without a response. I'm impatient, I know. My DSL connection sucks, I know.

Now I can adjust the number of queries with -q. However, -q10 means that 10 queries are sent, and traceroute also waits until replies to all 10 have arrived.

Assuming that packets will be lost, wouldn't it be clever to send many (say 15 per hop) but only wait for a few responses (say 10 per hop) to arrive? I'm not asking for this to be the default, but I'd like to be able to specify the number of queries sent as well as the number of responses to wait for per hop.

What do you think?

Discussion

  • Dmitry Butskoy

    Dmitry Butskoy - 2013-09-07

    Now I can adjust the number of queries with -q

    Are you sure you want to change the number of queries "per hop" (-q), rather than the number of simultaneous queries overall (-N)?

    wouldn't it be clever to send many (say 15 per hop) but only wait for a few responses (say 10 per hop) to arrive?

    Note again: not "15 per hop", but 15 at a time (in other words, with the default -q value, 15/3 = 5 hops are active at any moment).

    Well, actually, traceroute shows the results as they appear, step by step. Certainly, if some hop does not respond but the next hop(s) have already responded, traceroute has to wait for that hop first (since it cannot show the next hops until the situation with the silent hop is clear).

    When a hop's info is shown, traceroute counts the number of probes still in flight and tops that number up to 15 (or the -N value). This way nothing waits for "all 15 to complete"; everything is sent step by step, at the pace at which responses are received.
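
    (A rough sketch of this scheduling idea, written in Python rather than traceroute's actual C; all names and numbers below are purely illustrative: keep up to -N probes in flight, refill the window as probes complete, and print hops strictly in order.)

        # Illustrative sketch only -- NOT traceroute's real code.
        import collections, random

        SIM_QUERIES = 15        # -N: max simultaneous probes (traceroute's default is 16)
        QUERIES_PER_HOP = 3     # -q
        MAX_HOPS = 10

        pending = collections.deque((hop, q)
                                    for hop in range(1, MAX_HOPS + 1)
                                    for q in range(QUERIES_PER_HOP))
        in_flight = collections.deque()
        replies = collections.defaultdict(int)

        while pending or in_flight:
            # Top the window up: nobody waits for "all 15 to complete".
            while pending and len(in_flight) < SIM_QUERIES:
                in_flight.append(pending.popleft())
            # Pretend the oldest outstanding probe finishes (reply or timeout).
            hop, _ = in_flight.popleft()
            if random.random() > 0.05:          # fake ~5% loss
                replies[hop] += 1
            # A hop is printed once all its probes are done and all earlier
            # hops have already been printed, so the output stays in order.

        print(dict(sorted(replies.items())))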

    Another problem is DNS resolution. On a slow connection (such as DSL), resolving the DNS name of each hop can slow things down significantly. Try the "-n" flag to skip DNS resolution -- sometimes it helps.

    Besides that, on a slow connection it might be useful to send just 1 probe at a time (i.e. use "-N 1"), since routers on a slow path may simply drop probes that come "too often" or in "too large" numbers. If it helps, just add "alias traceroute='traceroute -N 1'" to your shell init script (e.g. .bash_profile).

     
    • Sven Köhler

      Sven Köhler - 2013-09-07

      I don't think you understood what I mean. Let me try to explain again.

      Assume that some packets will be dropped. If I use the option "-q X", then traceroute will send X queries per hop. But it will also wait until replies for all X queries have arrived. Now since we expect some packets to be dropped, why would we wait for all X replies to arrive? Why doesn't traceroute say "Y replies have arrived for this hop. That's enough for me. I won't wait for the other X-Y replies." for some Y <= X?

      I would like to suggest introducing an option such that traceroute does not wait for all X replies per hop, but only for Y replies per hop, where Y <= X.

      Setting -N 1 does not help in my case; packets are still dropped. Also, I usually see the first three to six hops immediately. Packets start dropping after hop 8, somewhere on the connection between Israel and Germany.

       
  • Dmitry Butskoy

    Dmitry Butskoy - 2013-09-07
    • status: open --> closed
     
  • Dmitry Butskoy

    Dmitry Butskoy - 2013-09-07

    Now since we expect some packets to be dropped, why would we wait for all X replies to arrive?

    But how would we know to expect that?

    In general, we know nothing in advance and send 3 probes per hop. When (after some experimenting) we see that something is slow, we can just use "-q 1" for that.

    The algorithm you describe requires some "adaptive mechanism" (i.e. changing the "-q" value depending on how many results we receive). That is not a goal for such a "dumb, ancient, robust, base-system" utility as traceroute. Perhaps it is a goal for some GUI tool. And yes, from the implementation point of view, it is too hard to implement, especially considering traceroute's current simple/clean code (traceroute is a very basic system utility; we cannot complicate it too much).

    Besides that, in the decades since traceroute appeared, the need for such an option (or something similar) has not come up (otherwise it would already have been implemented in some way). It seems the world is satisfied either with the default (-q 3) or with some combination of "-q", "-w", "-N", "-z", etc. (You can also try "-f" to skip the first slow hops in a DSL path.)

     
  • Sven Köhler

    Sven Köhler - 2013-09-07

    On 08.09.2013 00:43, Dmitry Butskoy wrote:

    The algorithm you describe requires some "adaptive mechanism" (i.e.
    changing the "-q" value depending on how many results we receive).

    I'm not proposing an adaptive algorithm. I'm proposing a new command
    line option that allows you to tune the number of replies that
    traceroute waits for per hop.

    That is not a goal for such a "dumb, ancient, robust, base-system"
    utility as traceroute.

    I'm not proposing anything adaptive. I'm proposing a very dumb and
    robust extension of the current algorithm that is easy to implement:
    send X queries per hop, expect Y replies per hop, where X and Y can be
    set on the command line. A command line option for X already exists
    (-q), but one for Y does not, as the current implementation assumes Y = X.
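
    (A minimal, hypothetical sketch of that per-hop logic, in Python rather than C; the function name, timings and simulated loss are made up, it only illustrates "send X, stop waiting after Y":)

        # Hypothetical sketch only -- not actual traceroute code.
        import random

        def probe_hop(x, y, timeout=5.0, loss=0.05):
            """Send x probes for one hop; stop waiting once y replies are in."""
            # Simulate the probes: each is either lost or answered after a delay.
            arrivals = sorted(random.uniform(0.05, 1.0)
                              for _ in range(x) if random.random() > loss)
            replies = []
            for t in arrivals:
                if t > timeout:
                    break               # the usual -w deadline still applies
                replies.append(t)
                if len(replies) >= y:
                    break               # Y replies are "enough": skip the rest
            return replies

        # "-q 4" plus a hypothetical "expect 3": send 4, be satisfied with 3.
        print(len(probe_hop(x=4, y=3)))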

     
  • Dmitry Butskoy

    Dmitry Butskoy - 2013-09-07

    Now I understand.

    Send X queries and cancel the remaining probes once Y replies have been received. Formally, this introduces a redundancy of probes (we send more than the Y actually displayed).

    This redundancy looks useful on slow links and in other environments where a lot of probes disappear. OTOH, in most cases the reason for such disappearance is "too many probes". IOW, "routers filter out our probe packets because we send too many of them, too often", and to avoid this we would be sending even more packets... (I mean the case where we do not decrease X but only decrease Y. "Not sending too many packets" implies decreasing X (-q) anyway, regardless of Y.)

    But for the initial problem you describe, maybe the better way is not to use Y < X, but a "common per-hop timeout"? Or reduced timeouts for the remaining probes of a hop once the first reply has been received?

    IOW, once we have at least one successful reply from a hop, do not wait very long for the additional replies; move on instead. The hop as a whole can use the "-w" timeout, but the additional replies can get another, shorter timeout (derived from the reply time of the successful probe)...
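
    (A rough illustration of this two-timeout idea, with made-up numbers and a hypothetical helper, not traceroute code:)

        # Illustration only: wait the full -w timeout until a hop's first
        # reply, then give the remaining probes of that hop a much shorter
        # deadline derived from that reply's RTT.
        def hop_deadlines(first_rtt, w_timeout=5.0, factor=3.0):
            if first_rtt is None:
                # Nothing has answered yet: keep waiting the full -w timeout.
                return w_timeout, w_timeout
            # One probe answered in first_rtt seconds; don't wait much longer
            # than a small multiple of that for its sibling probes.
            return w_timeout, min(w_timeout, factor * first_rtt)

        print(hop_deadlines(first_rtt=0.040))  # (5.0, 0.12): move on quickly
        print(hop_deadlines(None))             # (5.0, 5.0): still waiting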

     
  • Dmitry Butskoy

    Dmitry Butskoy - 2013-09-07
    • status: closed --> open
     
  • Sven Köhler

    Sven Köhler - 2013-09-08

    I did some math:

    The probability that at least Y out of X packets arrive is equal to
    sum( (1-p)^i * p^(X-i) * binomial(X, i), i = Y..X ),
    where p is the probability that a single packet is lost.

    For X = Y = 3 and p = 5%, we obtain that the probability that all 3 of
    3 packets arrive is just 85%. However, for X = 4, Y = 3, and p = 5%, the
    probability that at least 3 out of 4 packets arrive is 98%!
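
    (These numbers can be checked with a few lines of Python implementing the formula above; the helper name is made up for illustration:)

        # Binomial tail: probability that at least y of x probes get a reply,
        # given per-packet loss probability p_loss.
        from math import comb

        def p_at_least(x, y, p_loss):
            return sum(comb(x, i) * (1 - p_loss) ** i * p_loss ** (x - i)
                       for i in range(y, x + 1))

        print(p_at_least(3, 3, 0.05))   # ~0.857: all 3 of 3 arrive
        print(p_at_least(4, 3, 0.05))   # ~0.986: at least 3 of 4 arrive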

    I believe having X > Y can dramatically increase the probability that
    a traceroute run goes smoothly!

     
  • Sven Köhler

    Sven Köhler - 2013-09-08

    I see that you would rather decrease Y than increase X. Well, by the formula above, the probability that at least 1 out of 3 packets arrives (with p = 5% probability of packet loss) is 99.98%. For at least 2 out of 3, the probability is still 99.27%. Much better than 85%, IMHO.

     
  • Dmitry Butskoy

    Dmitry Butskoy - 2014-11-12
    • assigned_to: Dmitry Butskoy
     
  • Dmitry Butskoy

    Dmitry Butskoy - 2023-02-13
    • status: open --> closed
     
  • Dmitry Butskoy

    Dmitry Butskoy - 2023-02-13

    I believe that since version 2.1.0, this (or a similar) feature is implemented. See the "-w" option description in the manual. (IOW, there is no more long waiting for replies when some fast replies have already been received.) So closing now.

     

