The openorb.iiop.clientPortMin/Max properties set the
range of local port numbers a client-side ORB should
use when creating the socket connection to a
server-side ORB. They are implemented in
org.openorb.orb.net.ConfiguredSocketFactory. The
current implementation has a bug: it calculates the
difference between the max and min values and then
selects a port randomly between 0 and that
difference, rather than within the configured range
(e.g. if min=5000 and max=5050, the port is randomly
chosen between 0 and 50).
To fix this, we just need to add the min port value
to the randomly selected value, which gives a port in
the required range.
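A minimal sketch of the corrected calculation (the helper name `selectPort` is hypothetical, not the actual ConfiguredSocketFactory method):

```java
import java.util.Random;

public class PortRange {
    // The buggy version effectively returned random.nextInt(max - min),
    // i.e. a value in [0, max - min). Adding min shifts the result into
    // the configured [min, max] range.
    static int selectPort(Random random, int min, int max) {
        return min + random.nextInt(max - min + 1);
    }

    public static void main(String[] args) {
        Random random = new Random();
        int port = selectPort(random, 5000, 5050);
        // port is now guaranteed to fall in 5000..5050
        System.out.println(port >= 5000 && port <= 5050); // prints true
    }
}
```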
In addition, when creating the socket, the patch
catches and ignores BindException and terminates the
loop for all other IOExceptions. BindException is
thrown when the selected local port is already in use,
so we ignore it and try the next port in sequence.
Other IOExceptions are likely to affect every port we
select (e.g. if the process we are trying to connect
to does not exist, a ConnectException would be thrown
for all the ports in the requested range). If we
ignored all IOExceptions (as the current version
does), not only could looping through all ports in
the range take a long time, but a misleading
exception would be thrown at the end ("No port
available in specified range").
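The exception-handling logic above can be sketched as follows. This is an illustrative standalone class, not the actual ConfiguredSocketFactory code; the class name, method name, and the demo port range 40000-40010 are assumptions:

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class RangeConnector {
    // Try each local port in [minPort, maxPort]. A BindException means only
    // the chosen local port is busy, so we move on to the next one. Any other
    // IOException (e.g. ConnectException) would recur for every port, so it
    // propagates immediately and terminates the loop.
    static Socket connect(InetAddress host, int remotePort,
                          int minPort, int maxPort) throws IOException {
        for (int local = minPort; local <= maxPort; local++) {
            try {
                return new Socket(host, remotePort, null, local);
            } catch (BindException e) {
                // local port already in use: try the next port in sequence
            }
        }
        throw new IOException("No port available in specified range");
    }

    public static void main(String[] args) throws IOException {
        // Demo against a local listener; assumes ports 40000-40010 are free.
        ServerSocket server = new ServerSocket(0);
        Socket s = connect(InetAddress.getLoopbackAddress(),
                           server.getLocalPort(), 40000, 40010);
        System.out.println("bound local port: " + s.getLocalPort());
        s.close();
        server.close();
    }
}
```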
I have attached a diff file containing the changes
outlined above.