From: Bill P. <bpr...@sy...> - 2006-10-24 17:04:22
Lloyd Bryant wrote:

>> My node refuses to stay as an ultrapeer for an extended period of
>> time. When I first connect, I wait out the initial hour, and my
>> node is promoted. Then, typically 1 to 1 1/2 hours later, it's
>> demoted to leaf again. In "Preferences" -> "GnutellaNet", it reports
>> "Sufficient Bandwidth" as "No".

On 24 Oct 2006, dre...@co... wrote:

> I also see this very frequently, but I also see min number of
> connections is set to no. Could the stderr output tell us the
> reason for the demotion?

I think this issue was discussed before. In any case, I noticed this
after version 0.95. It may have occurred with 0.94, but I don't know.

I looked at the code, and it appears that the socket connections are
monitored for traffic, and this determines the bandwidth for your node.
If many people are downloading from you, you will often be promoted to
an ultrapeer. At least, that is how I believe it works.

That said, you can go to the preferences and click on "ultra" so that
you are always an ultra node. I have done this, and there are several
problems. The major one is that you can easily get stuck in a Gnet
group/island; i.e., there is not good connectivity to *all* Gnutella
nodes.

With the current setup, it appears that finding rare files is easier.
As you keep tearing down and setting up ultra connections, your pool of
nodes is much larger, and you start to hop from island to island [at
least that is my hypothesis]. Alternatively, it may be that new clients
are connecting with the desired file. Once a file is found, you don't
keep searching for it.

If your goal is to create a high-quality ultra node, then this strategy
might not be the best. Having a long-uptime ultra can also help the
network. However, with the current hop limit of four, it seems that
quite a few triangles/squares could be formed (i.e., cycles in the
Gnet). Any cycles will start to bias the host caches. I don't think
that Gtkg does anything with queries it has received that are its own.
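To illustrate the promotion logic I'm describing, here is a minimal
sketch of bandwidth-based ultrapeer preferencing, assuming promotion
requires sustained upstream traffic over a sliding window. The names
(BandwidthMonitor, MIN_ULTRA_BPS, WINDOW_SECONDS) and the threshold
values are my own illustration, not gtk-gnutella's actual code:

```python
# Hypothetical sketch: promote a node only while its measured upstream
# bandwidth over a recent window stays above a threshold. Names and
# thresholds are illustrative, not taken from the gtk-gnutella sources.
from collections import deque

MIN_ULTRA_BPS = 10_000    # assumed "sufficient bandwidth" threshold
WINDOW_SECONDS = 60       # assumed measurement window

class BandwidthMonitor:
    def __init__(self):
        self.samples = deque()  # (timestamp, bytes_sent) pairs

    def record(self, timestamp, bytes_sent):
        self.samples.append((timestamp, bytes_sent))
        # Drop samples that have aged out of the window.
        while self.samples and self.samples[0][0] < timestamp - WINDOW_SECONDS:
            self.samples.popleft()

    def sufficient_bandwidth(self):
        # "Sufficient Bandwidth: Yes" only with enough recent traffic.
        if len(self.samples) < 2:
            return False
        span = self.samples[-1][0] - self.samples[0][0]
        total = sum(b for _, b in self.samples)
        return span > 0 and total / span >= MIN_ULTRA_BPS
```

Under this model, a node with many active uploads keeps recording large
byte counts and stays promoted, while an idle node's samples age out and
it reports "Sufficient Bandwidth: No", matching the demotion behaviour
Lloyd describes.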
If this is true, I am not sure that a long-uptime ultra is "high
quality". Examples:

    Three hop ultra cycle       Four hop ultra cycle

     U1 <-------> U2             U1 <---------> U2
       \         /                ^              ^
        \       /                 |              |
         \     /                  |              |
          \   /                   |              |
           \ /                    v              v
           U3                    U4 <---------> U3

If U1 is a gtkg ultra, any query could be routed back to itself. If
this is detected and the ultra relaying the query is disconnected, then
these cycles don't happen... but I think they do. It is even possible
to detect larger cycles by looking at duplicate queries arriving from
different ultras. In this case, a cycle of up to seven could be
detected. There is nothing to act on in these cases, though; a cycle of
more than five is effectively disconnected, as queries don't route that
far.

A long-uptime ultra would have better connectivity if this query
tracking were done [or maybe there is a better method?]. Otherwise, I
don't think it is advantageous to run constantly as an ultra. It
certainly doesn't help searches.

fwiw,
Bill Pringlemeir.
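The kind of query tracking I have in mind could look roughly like the
sketch below: remember the GUIDs of queries we originate, and note which
peer each foreign GUID arrived from. If our own GUID comes back, the
relaying peer closes a cycle through us; if the same foreign GUID
arrives via two different ultras, a larger cycle exists. The class and
method names are mine, purely for illustration:

```python
# Hedged sketch of cycle detection from query GUIDs, as discussed above.
# Not gtk-gnutella code; names and return values are illustrative.

class CycleDetector:
    def __init__(self):
        self.own_guids = set()  # GUIDs of queries this node originated
        self.seen_from = {}     # guid -> set of peer ids it arrived from

    def originate(self, guid):
        self.own_guids.add(guid)

    def on_query(self, guid, peer_id):
        """Return a description of any cycle evidence, else None."""
        if guid in self.own_guids:
            # Our own query was relayed back: the peer closes a cycle.
            return f"own query returned via {peer_id}"
        peers = self.seen_from.setdefault(guid, set())
        if peers and peer_id not in peers:
            # Same foreign query via two distinct ultras: a larger cycle.
            return f"duplicate query from {peer_id}"
        peers.add(peer_id)
        return None
```

In the triangle above, U1's query goes out via U2 and U3 and one copy
returns, so `on_query` would flag the relaying peer; the duplicate-query
case corresponds to the "cycle of up to seven" observation, since each
of the two paths can be several hops long.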