Trying to play with the SharedJobCluster, I discovered an unpleasant problem with it that does not exist in JobCluster: whereas the latter can be passed a list of IP addresses, the former accepts only a single such address.
This makes life inconvenient not only on truly multihomed machines -- those with multiple real network interfaces, each with its own IP address -- it also cripples the use of the loopback interface (lo on Linux, lo0 on BSD) with its 127.0.0.1 address.
In our case, we use psutil to enumerate all of the machine's interfaces (including the loopback one). The resulting list -- something like ['127.0.0.1', '10.10.11.11'] -- is accepted by JobCluster's constructor, but SharedJobCluster chokes on it -- at least in dispy 4.10.6.
If this problem is solved in more recent releases, please close this ticket.
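The enumeration described above might look roughly like the following sketch. It uses psutil's `net_if_addrs()` to collect every IPv4 address on the machine; the function name `local_ipv4_addresses` is just an illustrative choice, and the parameter the list is ultimately passed to (`ip_addr` in dispy 4.10.x, as far as I recall) should be checked against your dispy version:

```python
import socket

import psutil  # third-party; pip install psutil


def local_ipv4_addresses():
    """Collect every IPv4 address on this machine, including loopback."""
    addrs = []
    for iface, snics in psutil.net_if_addrs().items():
        for snic in snics:
            # Keep only IPv4 entries; snic.family is a socket address family.
            if snic.family == socket.AF_INET:
                addrs.append(snic.address)
    return addrs
```

A list like `['127.0.0.1', '10.10.11.11']` produced this way is what JobCluster accepts but SharedJobCluster rejects.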
More specifically, this is our code using JobCluster. I wish the same worked for SharedJobCluster, but it does not...

Last edit: Mikhail T. 2022-08-30
Hmm, a client using SharedJobCluster only connects to the scheduler; it doesn't communicate with nodes, so there is no need to support more than one address for the client. You may want to use the -i option more than once when starting dispyscheduler.py to make the scheduler use multiple addresses.

Ok, then I've misunderstood the usage/purpose of SharedJobCluster -- I only attempted to use it when hitting bug 19... Never mind then -- we don't even have the scheduler running. Sorry for the noise.
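For reference, the repeated -i invocation suggested above might look like this (a sketch based on dispyscheduler.py's documented -i/--ip_addr option; the exact flag spelling should be verified against your installed dispy version):

```shell
# Sketch: bind the shared scheduler to both the loopback and the
# external interface, so clients can reach it on either address.
dispyscheduler.py -i 127.0.0.1 -i 10.10.11.11
```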
Last edit: Mikhail T. 2022-08-31