From: ahmadk72 <reg...@in...> - 2007-09-26 17:35:29
I have a problem that has perplexed me to no end. Even our UNIX admin is stumped, so this mailing list is a last resort.

We have a third-party application that uses the Java Service Wrapper (v3.2.1). The application is installed on two different Zones in Solaris 10. One zone is called rhdam-dev and the other is called rhdam-tst.

If I start up the rhdam-dev (DEV box) application, the Java Service Wrapper uses ports 31000 and 32000 for communicating with and listening to the JVM. Everything works fine, and I have no problems running the application as expected.

Then recently we installed the application on the rhdam-tst (TEST box) zone. When the Wrapper starts up, I see the following error (logging output is set to DEBUG level) in the attached log file.

From what I have been able to decipher, it is complaining about port 32000 already being in use. However, the port is not in use on the rhdam-tst zone. I can run a netstat -an command and see that there is nothing running on port 32000.

The really weird part is that if I stop the Wrapper service on the rhdam-dev zone and then try to start up the rhdam-tst instance, the Wrapper service starts successfully.

For now I have been able to start both up by adding wrapper.port.min and wrapper.port.max settings so the ports don't conflict between rhdam-dev and rhdam-tst. However, I need to know why this is happening. Has anyone seen this behavior? The ports are supposed to be independent per Solaris zone. Why is the Java Service Wrapper socket complaining about the port being used in another zone?

If anyone can help me solve this mystery, that would be just great.

Thanks,
Kashif

http://www.nabble.com/file/p12904287/artesia-service-wrapper.log artesia-service-wrapper.log

--
View this message in context: http://www.nabble.com/Wrapper-behavior-on-Solaris-10-zone-tf4523339.html#a12904287
Sent from the Java Service Wrapper mailing list archive at Nabble.com.
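For reference, the workaround Kashif describes (moving one instance's Wrapper control ports out of the default range) goes in that instance's wrapper.conf; the property names appear in the post above, but the values below are illustrative, not taken from his setup:

```properties
# rhdam-tst instance: keep the Wrapper's port search range away from the
# default 31000-32000 range still used by the rhdam-dev instance.
wrapper.port.min=33000
wrapper.port.max=33999
```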
From: Santo74 <gds...@de...> - 2009-05-15 19:10:41
First of all I want to apologise for waking up an old thread, but the problem described in this thread still applies. I say this because I am experiencing exactly the same behaviour as ahmadk72 with our own Java application, which uses the Java Service Wrapper.

These are the remarkable things that I experienced:

1) The Service Wrapper is reserving all ports between 31000 and 32000 by default. I would expect it to look for the first free port in this range and allocate/reserve that one, but apparently it reserves the whole range!

2) On most systems it's not a huge problem, in that this port range is probably available most of the time and therefore doesn't cause much trouble. However, a lot of companies don't like a single application occupying a range of 1000 ports.

3) On Solaris 10 zones it's even worse, because the port range appears to be in use on ALL zones as soon as ONE zone is running a Service Wrapper based application (cf. the explanation of ahmadk72 below). The strangest part is that we can run multiple instances of other applications that use a particular port on multiple zones without any trouble at all. E.g. a Solaris host with 5 zones, all 5 running an IBM Tivoli Policy Server instance using the same ports, is not a problem. So why can other applications allocate a particular port for one particular zone (i.e. independently), while the Service Wrapper is allocating (reserving) its ports on all zones at once (i.e. globally)?

4) I know it's possible to make the port range smaller AND let each server or zone use another port range, but this makes it very hard to package an application for deployment.

We are using Java Service Wrapper v3.2.3. Can someone please look into this and take the necessary actions where required?

Thanks in advance,

gds

ahmadk72 wrote:
> [...]
From: Leif M. <lei...@ta...> - 2009-05-16 04:25:27
Santo,
Sorry for this trouble with the Java Service Wrapper. We have designed it to work automatically when more than one copy of the Wrapper is run on the same machine. It does not intentionally allocate all 1000 ports. Rather, it starts by attempting to allocate the first port, then moves on to the next if that first one is already allocated. Once it finds an open port it should never even attempt to access the rest of the range.

How exactly are you determining that all 1000 ports are being reserved/allocated? When I run netstat on our test system with one copy of the Wrapper running, I get this:
---
# netstat

TCP: IPv4
   Local Address        Remote Address      Swind Send-Q Rwind Recv-Q    State
-------------------- -------------------- ----- ------ ----- ------ -----------
solx86.ssh           10.24.115.41.63503   49640     47 49640      0 ESTABLISHED
localhost.63651      localhost.63650      49152      0 49152      0 TIME_WAIT
localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED

Active UNIX domain sockets
Address          Type       Vnode            Conn     Local Addr           Remote Addr
fffffe853d5cdac0 stream-ord fffffe855a118a80 00000000 /var/run/.inetd.uds
---

Leaving the first Wrapper up, I ran a couple of other tests and then started a second Wrapper. In that state, here is what I get from netstat:
---
# netstat

TCP: IPv4
   Local Address        Remote Address      Swind Send-Q Rwind Recv-Q    State
-------------------- -------------------- ----- ------ ----- ------ -----------
localhost.63661      localhost.63660      49152      0 49152      0 TIME_WAIT
localhost.31001      localhost.32001      49170      0 49152      0 TIME_WAIT
localhost.63667      localhost.63666      49152      0 49152      0 TIME_WAIT
localhost.31002      localhost.32001      49170      0 49152      0 TIME_WAIT
localhost.63670      localhost.63669      49152      0 49152      0 TIME_WAIT
localhost.31003      localhost.32001      49152      0 49152      0 ESTABLISHED
localhost.32001      localhost.31003      49152      0 49170      0 ESTABLISHED
solx86.ssh           10.24.115.41.63503   49640     47 49640      0 ESTABLISHED
localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED

Active UNIX domain sockets
Address          Type       Vnode            Conn     Local Addr           Remote Addr
fffffe853d5cdac0 stream-ord fffffe855a118a80 00000000 /var/run/.inetd.uds
---

The first Wrapper is using the socket 31000 -> 32000 (JVM to Wrapper) and the reverse of the socket. The second Wrapper is using the socket 31003 -> 32001 (JVM to Wrapper) and the reverse of the socket. Neither of those is a range; they each use the two halves of the socket as expected. The TIME_WAIT ports are from the other tests I mentioned and will remain in that state for 2 minutes, until the system decides that no more data could come in and closes the ports.

netstat then reports this:
---
# netstat

TCP: IPv4
   Local Address        Remote Address      Swind Send-Q Rwind Recv-Q    State
-------------------- -------------------- ----- ------ ----- ------ -----------
localhost.31003      localhost.32001      49152      0 49152      0 ESTABLISHED
localhost.32001      localhost.31003      49152      0 49170      0 ESTABLISHED
solx86.ssh           10.24.115.41.63503   49640     47 49640      0 ESTABLISHED
localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED

Active UNIX domain sockets
Address          Type       Vnode            Conn     Local Addr           Remote Addr
fffffe853d5cdac0 stream-ord fffffe855a118a80 00000000 /var/run/.inetd.uds
---

This all worked as I expected, but it is also within a single Zone.

When I stop the two Wrappers and immediately run netstat again, I see:
---
# netstat

TCP: IPv4
   Local Address        Remote Address      Swind Send-Q Rwind Recv-Q    State
-------------------- -------------------- ----- ------ ----- ------ -----------
localhost.31003      localhost.32001      49170      0 49152      0 TIME_WAIT
solx86.ssh           10.24.115.41.63503   49640     47 49640      0 ESTABLISHED
localhost.31000      localhost.32000      49170      0 49152      0 TIME_WAIT

Active UNIX domain sockets
Address          Type       Vnode            Conn     Local Addr           Remote Addr
fffffe853d5cdac0 stream-ord fffffe855a118a80 00000000 /var/run/.inetd.uds
---

So the JVM-to-Wrapper half of the socket remains in a locked state for the TIME_WAIT period. This is expected because the JVM was the connecting process and the Wrapper was the listener.

I admit that I have little experience with Solaris Zones but will definitely look into this further. Any additional information you could provide would be helpful in getting this resolved.

Cheers,
Leif

On Sat, May 16, 2009 at 4:10 AM, Santo74 <gds...@de...> wrote:
> [...]
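Leif's "first free port in the range" strategy can be made concrete with plain java.net sockets. This is an illustrative reimplementation under my own class and method names, not the Wrapper's actual code:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortScan {
    // Bind to the first free port in [min, max]. Ports after the first
    // successful bind are never touched, matching Leif's description.
    static ServerSocket bindFirstFree(int min, int max) throws IOException {
        for (int port = min; port <= max; port++) {
            try {
                return new ServerSocket(port, 50, InetAddress.getByName("127.0.0.1"));
            } catch (IOException alreadyInUse) {
                // Port is taken (or lingering in TIME_WAIT); try the next one.
            }
        }
        throw new IOException("no free port in " + min + "-" + max);
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket first = bindFirstFree(31000, 32000);
             ServerSocket second = bindFirstFree(31000, 32000)) {
            // With the range initially free, two instances land on adjacent
            // ports; the remaining ports in the range are never accessed.
            System.out.println(first.getLocalPort() + " " + second.getLocalPort());
        }
    }
}
```

Under this model, a second instance failing to bind anywhere in 31000-32000 would mean every single port in the range reported in use, which is what makes the Zone behaviour in this thread so strange.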
From: Santo74 <gds...@de...> - 2009-05-18 08:53:49
Hi Leif,

Thanks for the quick answer.

Actually, the issue was reported in our Mantis system by several of our consultants, and what I understood from their reports was that the port range was reserved on all the different platforms our product runs on. But apparently there is no real evidence of that, and therefore it isn't necessarily true.

The reason it was assumed that the whole range was reserved/allocated is that they were never able to restart our application with the same port range after a crash or forced stop. This happened multiple times (for several different reasons, but that's not important here), and each time they were forced to use a port range that didn't overlap with the default range (i.e. 31000-32000).

On Solaris, on the other hand (and that's a situation I tested and verified myself), it's very clear that a second instance of our application can't start with the default port range on any other zone on that same server.

I'm not sure what the result of netstat was on such a Solaris zone, and unfortunately I can't check it at the moment because we are having issues with the RAID controller of our Solaris system, which prevents it from booting :-(

As soon as we get it up and running again, if it would be of any help to you, I would be glad to run netstat on one of the zones and post the output here.

Regards,

gds

Leif Mortenson-3 wrote:
> [...]
From: Leif M. <le...@ta...> - 2009-05-18 10:32:09
Santo,

We are in the process of setting up a Solaris 10 server to do some testing with Zones in house. We have a Solaris 9 server, but our Solaris 10 testing has been done IN a zone on Sun's EZqual loaner server. I will let you know what we find.

As I explained, the Wrapper never actually attempts to allocate all 1000 ports unless they are already blocked. If the first instance of your application uses ports 32000 and 31000 and it crashes, it is possible that the 32000 port will be locked for 2 minutes, so the second invocation of the JVM would use 32001 and 31000. But the other 999 ports would never have been accessed, so I can imagine no reason why they would be locked.

In your case with Solaris Zones, you say that the Wrapper cannot start on the second Zone when one is running on the first. Are you able to verify that the Wrapper on the second Zone works if the first has not been running for at least 2 minutes? I am wondering if it is a configuration issue. We will be able to test this shortly ourselves, and it doesn't sound like you will be able to test it until your system is back up and running.

Sorry for this next question, as it may show my lack of knowledge of Solaris Zones: with your system, are both Zones sharing the same IP address? If so, they should not be able to share ports on that IP. In this case, however, we are only binding to localhost, so it should not matter.

I will post back as soon as we have gotten this tested out.

Cheers,
Leif

On Mon, May 18, 2009 at 5:53 PM, Santo74 <gds...@de...> wrote:
> [...]
>>>> >>>> For now I have been able to start both up by changing adding the >>>> wrapper.port.min and wrappper.port.max settings so the ports don't >>>> conflict between rhdam-dev and rhdam-tst. >>>> >>>> However, I need to know why this is happening. Has anyone seen this >>>> behavior? The ports are suppose to be independent on Solaris zone. Why >>>> is the Java Service wrapper socket complaining about the port being used >>>> in another zone. >>>> >>>> If anyone can help me solve this mystery that would be just great. >>>> >>>> Thanks >>>> >>>> Kashif http://www.nabble.com/file/p12904287/artesia-service-wrapper.log >>>> artesia-service-wrapper.log |
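The wrapper.port.min / wrapper.port.max workaround mentioned in the quoted message amounts to a small wrapper.conf change on one of the two installations. A sketch for the rhdam-tst side (the property names come from this thread; the values are illustrative, so check them against the documentation for your Wrapper version):

```properties
# rhdam-dev keeps the Wrapper defaults (31000 and up).
# rhdam-tst is moved to its own range so the two zones cannot collide,
# even while the cross-zone binding behaviour remains unexplained.
wrapper.port.min=32100
wrapper.port.max=32199
```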
|
From: Santo74 <gds...@de...> - 2009-05-18 11:14:45
|
Leif,

Regarding the issue of restarting the application after it crashed or was forcibly killed, I will keep an eye on it and report back with more info whenever it happens again. It is indeed not the behaviour that I would expect from the wrapper, especially now that you have confirmed that it isn't allocating the whole range.

As you already mentioned, I won't yet be able to verify whether I can start the app on a second zone after having stopped the app on the first zone for at least 2 min.

Concerning your last question: our dev/test system is currently configured with 5 zones, all using their own IP address. I have no idea about the configuration of the Solaris systems at our customers.

regards,
gds

Leif Mortenson-2 wrote:
>
> Santo,
> We are in the process of setting up a Solaris 10 server to do some
> testing with Zones in house. We have a Solaris 9 server, but our
> Solaris 10 testing has been done IN a zone on Sun's EZqual loaner
> server. I will let you know what we find.
>
> As I explained, the Wrapper never actually attempts to allocate all
> 1000 ports unless they are already blocked. If the first instance of
> your application uses ports 32000 and 31000 and that crashes, it is
> possible that the 32000 port will be locked for 2 minutes, so the
> second invocation of the JVM would use 32001 and 31000. But the
> other 999 ports would never have been accessed, so I can imagine no
> reason why they would be locked.
>
> In your case with Solaris Zones: you say that the Wrapper cannot
> start on the second Zone when one is running on the first. Are you
> able to verify that the wrapper on the second Zone works if the first
> has not been running for at least 2 minutes? I am wondering if it is
> a configuration issue.
>
> We will be able to test this shortly ourselves. And it doesn't sound
> like you will be able to test it until your system is back up and
> running.
>
> Sorry for this next question as it may show my lack of knowledge with
> Solaris Zones:
> With your system, are both Zones sharing the same IP address? If so,
> they should not be able to share ports on that IP. In this case,
> however, we are only binding to localhost, so it should not matter.
>
> I will post back as soon as we have gotten this tested out.
>
> Cheers,
> Leif

--
View this message in context: http://www.nabble.com/Wrapper-behavior-on-Solaris-10-zone-tp12904287p23595451.html
Sent from the Java Service Wrapper mailing list archive at Nabble.com.
|
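The 2-minute lock Leif describes above is ordinary TCP behaviour: a bind() on a port that is still held open, or parked in TIME_WAIT after an unclean close, fails with "Address already in use" — the same error reported in the attached log. A minimal probe sketching this (illustrative code, not part of the Wrapper itself):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortProbe {
    // Try to bind a listener on localhost:port; true means the port is usable.
    static boolean isBindable(int port) {
        try (ServerSocket s = new ServerSocket(port, 1,
                InetAddress.getByName("127.0.0.1"))) {
            return true;
        } catch (IOException e) {
            // BindException ("Address already in use") lands here while the
            // port is held by another process -- or parked in TIME_WAIT.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Hold a port open the way a running Wrapper would, then probe it.
        try (ServerSocket held = new ServerSocket(0, 1,
                InetAddress.getByName("127.0.0.1"))) {
            System.out.println(isBindable(held.getLocalPort())); // false
        }
    }
}
```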
|
From: Leif M. <lei...@ta...> - 2009-05-20 06:41:57
|
Santo,
We have done some tests with a server configured with 3 Zones, as well as done some more research.

It does not appear to be possible to have multiple Zones "share" an IP address, so they will each have their own IP. For that reason, there should be no reason why any of the Zones would ever have any conflict with bound ports, as I understand it.

Below you will find the netstat output from 3 Zones on the same machine, each running a copy of the Wrapper. Each has an SSH connection to the Zone as well as the two sockets between the Wrapper and its JVM. In all cases, the port numbers are the same.

Because you have had reports from a few customers, I am sure that "something" is happening. But from the information to date, I am not sure what the cause might be. Is it possible that there are some security configurations set up on one or more of the Zones that would prevent the Wrapper from starting?

The Wrapper will loop over its 1000 possible ports looking for the first one that it is able to bind to. If all 1000 fail to bind, then it reports that fact to the user. Rather than all 1000 ports actually being already bound, it may be that the OS is refusing to allow the Wrapper to bind to those ports for security reasons?

Anyway, here is the netstat output from our 3 Zones.
---
jupiter
TCP: IPv4
   Local Address        Remote Address    Swind Send-Q Rwind Recv-Q   State
-------------------- -------------------- ----- ------ ----- ------ -----------
jupiter.22           192.168.0.128.59013  18816      0 49232      0 ESTABLISHED
localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED

Active UNIX domain sockets
Address          Type       Vnode            Conn             Local Addr                          Remote Addr
ffffffff889688f8 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
ffffffff88968ac0 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
ffffffff87a38728 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
ffffffff88968730 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
ffffffff87a38560 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
ffffffff87a38008 stream-ord ffffffff8852b780 00000000         /var/run/zones/kore.console_sock
ffffffff88968c88 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
ffffffff87a38398 stream-ord ffffffff89c8bac0 00000000         /tmp/.X11-unix/X0
ffffffff87a38ab8 stream-ord ffffffff882b5880 00000000         /var/run/zones/europa.console_sock
ffffffff87a38c80 stream-ord ffffffff87a3d740 00000000         /var/run/.inetd.uds

europa:
TCP: IPv4
   Local Address        Remote Address    Swind Send-Q Rwind Recv-Q   State
-------------------- -------------------- ----- ------ ----- ------ -----------
europa.22            192.168.0.128.55040  13440      0 49232      0 ESTABLISHED
localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED

Active UNIX domain sockets
Address          Type       Vnode            Conn             Local Addr                          Remote Addr
ffffffff87a381d0 stream-ord ffffffff87de6740 00000000         /var/run/.inetd.uds

kore:
TCP: IPv4
   Local Address        Remote Address    Swind Send-Q Rwind Recv-Q   State
-------------------- -------------------- ----- ------ ----- ------ -----------
kore.22              192.168.0.128.56248  17664      0 49232      0 ESTABLISHED
localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED

Active UNIX domain sockets
Address
Type Vnode Conn Local Addr Remote Addr ffffffff87a388f0 stream-ord ffffffff8aab4600 00000000 /var/run/.inetd.uds --- We will keep poking around, but please let me know if you are able to collect any more information. Cheers, Leif On Mon, May 18, 2009 at 8:14 PM, Santo74 <gds...@de...> wrote: > > Leif, > > Regarding the issue of restarting the application after it crashed or was > forcedly killed I will > keep an eye on it and report back with more info whenever it should happen > again. > It is indeed not the behaviour that I would expect from the wrapper > especialy now that you confirmed that it isn't allocating the whole range. > > As you already mentioned correctly I won't be able yet to verify if I can > start the app on a second zone after having stopped the app on the first > zone for at least 2 min. > > Concerning your last question: our dev/test system is currently configured > with 5 zones, all using their own ip address. > I have no idea about the configuration of the solaris systems at our > customers. > > regards, > > gds > > > Leif Mortenson-2 wrote: >> >> Santo, >> We are in the process of setting up a Solaris 10 server to do some >> testing with Zones in house. We have a Solaris 9 server, but out >> Solaris 10 testing has been done IN a zone on Sun's EZqual loaner >> server. I will let you know what we found. >> >> As I explained, the Wrapper never actually attempts to allocate all >> 1000 ports unless they are already blocked. If the first instance of >> your application uses ports 32000 and 31000 and that crashes, it is >> possible that the 32000 port will be locked for 2 minutes so the >> second invocation of the JVM would use 32001 and 31000. But the >> other 999 ports would have never been accessed so I can imagine no >> reason why they would be locked. >> >> In your case with Solaris Zones. You say that the Wrapper can not >> start on these second Zone when one is running on the first. 
Are you >> able to verify that the wrapper on the second Zone works if the first >> had not been running for at least 2 minutes? I am wondering if it is >> a configuration issue. >> >> We will be able to test this shortly ourselves. And it doesn't sound >> like you will be able to test it until your system is back up and >> running. >> >> Sorry for this next question as it may show my lack of knowledge with >> Solaris Zones: >> With your system, are both Zones sharing the same IP address? If so, >> they should not be able to share ports on that IP. In this case >> however, we are only binding to localhost, so it should not matter. >> >> I will post back as soon as we have gotten this tested out. >> >> Cheers, >> Leif |
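The scan Leif describes in the message above — walk the configured range and keep the first port that binds, touching nothing beyond it — can be sketched like this (an illustration of that behaviour, not the Wrapper's actual source; the 31000–31999 range mirrors the defaults discussed in the thread):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class FirstFreePort {
    // Walk [min, max] and return a listener on the first port that binds on
    // localhost. Only ports that fail to bind are skipped; the rest of the
    // range is never touched, so nothing beyond the chosen port is "reserved".
    static ServerSocket bindFirstFree(int min, int max) throws IOException {
        InetAddress loopback = InetAddress.getByName("127.0.0.1");
        for (int port = min; port <= max; port++) {
            try {
                return new ServerSocket(port, 5, loopback);
            } catch (IOException inUse) {
                // already bound (or refused by the OS) -- try the next port
            }
        }
        throw new IOException("all ports in " + min + "-" + max + " failed to bind");
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket first = bindFirstFree(31000, 31999);
             ServerSocket second = bindFirstFree(31000, 31999)) {
            // The second scan skips the port the first listener holds,
            // so it always lands on a higher port number.
            System.out.println(second.getLocalPort() > first.getLocalPort());
        }
    }
}
```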
|
From: Santo74 <gds...@de...> - 2009-05-20 08:51:02
|
Leif, This is very strange, because we haven't come across any solaris 10 environment (with zones) not having this issue. Therefore it indeed looks like some configuration differences (or something) in comparison with your system. However, I still find it strange that other applications (not using the service wrapper) are not having this issue on the same zones. As for the security, it's true that our application runs under a dedicated user account (and therefore doesn't have full (root) privileges), but the IBM Tivoli Policy Server (which I mentioned before) is also running under a dedicated account (with limited privileges) as far as I know. This morning I heard that most of the problems with our solaris server should be solved later today, which means that I can hopefully start testing again next monday (long weekend over here). Thanks, gds Leif Mortenson-3 wrote: > > Santo, > We have done some tests with a server configured with 3 Zones as well > as done some more research. > > It does not appear to be possible to have multiple Zones "share" an IP > address. So they will each have their own IP. For that reason, there > should be no reason why any of the Zones would ever have any conflict > with bound ports. As I understand it. > > Below you will find the netstat output from 3 Zones on the same > machine each running a copy of the Wrapper. Each has an SSH > connection to the Zone as well as the two between the Wrapper and its > JVM. In all cases, the port number are the same. > > Because you have had reports from a few of customers, I am sure that > "something" is happening. But from the information to date, I am not > sure what the cause might be. Is it possible that there are some > security configurations setup on one or more of the Zones that would > prevent the Wrapper from starting? > > The Wrapper will loop over its 1000 possible ports looking for the > first one that it is able to bind to. If all 1000 fail to bind then > it reports that fact to the user. 
Rather than all 1000 ports > actually being already bound, it may be that the OS is refusing to > allow the Wrapper to bind to those ports for security reasons? > > Anyway, here is the netstat output from our 3 Zones. > > --- > jupiter > TCP: IPv4 > Local Address Remote Address Swind Send-Q Rwind Recv-Q > State > -------------------- -------------------- ----- ------ ----- ------ > ----------- > jupiter.22 192.168.0.128.59013 18816 0 49232 0 > ESTABLISHED > localhost.31000 localhost.32000 49152 0 49152 0 > ESTABLISHED > localhost.32000 localhost.31000 49152 0 49170 0 > ESTABLISHED > > Active UNIX domain sockets > Address Type Vnode Conn Local Addr Remote Addr > ffffffff889688f8 stream-ord 00000000 > ffffffff89c8bac0 /tmp/.X11-unix/X0 > ffffffff88968ac0 stream-ord 00000000 > 00000000 /tmp/.X11-unix/X0 > ffffffff87a38728 stream-ord 00000000 > 00000000 /tmp/.X11-unix/X0 > ffffffff88968730 stream-ord 00000000 > ffffffff89c8bac0 /tmp/.X11-unix/X0 > ffffffff87a38560 stream-ord 00000000 > ffffffff89c8bac0 /tmp/.X11-unix/X0 > ffffffff87a38008 stream-ord ffffffff8852b780 > 00000000 /var/run/zones/kore.console_sock > ffffffff88968c88 stream-ord 00000000 > 00000000 /tmp/.X11-unix/X0 > ffffffff87a38398 stream-ord ffffffff89c8bac0 > 00000000 /tmp/.X11-unix/X0 > ffffffff87a38ab8 stream-ord ffffffff882b5880 > 00000000 /var/run/zones/europa.console_sock > ffffffff87a38c80 stream-ord ffffffff87a3d740 > 00000000 /var/run/.inetd.uds > > europa: > TCP: IPv4 > Local Address Remote Address Swind Send-Q Rwind Recv-Q > State > -------------------- -------------------- ----- ------ ----- ------ > ----------- > europa.22 192.168.0.128.55040 13440 0 49232 0 > ESTABLISHED > localhost.31000 localhost.32000 49152 0 49152 0 > ESTABLISHED > localhost.32000 localhost.31000 49152 0 49170 0 > ESTABLISHED > > Active UNIX domain sockets > Address Type Vnode Conn Local Addr Remote Addr > ffffffff87a381d0 stream-ord ffffffff87de6740 > 00000000 /var/run/.inetd.uds > > kore: > TCP: IPv4 > Local Address 
Remote Address Swind Send-Q Rwind Recv-Q > State > -------------------- -------------------- ----- ------ ----- ------ > ----------- > kore.22 192.168.0.128.56248 17664 0 49232 0 > ESTABLISHED > localhost.31000 localhost.32000 49152 0 49152 0 > ESTABLISHED > localhost.32000 localhost.31000 49152 0 49170 0 > ESTABLISHED > > Active UNIX domain sockets > Address Type Vnode Conn Local Addr Remote Addr > ffffffff87a388f0 stream-ord ffffffff8aab4600 > 00000000 /var/run/.inetd.uds > --- > > We will keep poking around, but please let me know if you are able to > collect any more information. > > Cheers, > Leif > > > On Mon, May 18, 2009 at 8:14 PM, Santo74 > <gds...@de...> wrote: >> >> Leif, >> >> Regarding the issue of restarting the application after it crashed or was >> forcedly killed I will >> keep an eye on it and report back with more info whenever it should >> happen >> again. >> It is indeed not the behaviour that I would expect from the wrapper >> especialy now that you confirmed that it isn't allocating the whole >> range. >> >> As you already mentioned correctly I won't be able yet to verify if I can >> start the app on a second zone after having stopped the app on the first >> zone for at least 2 min. >> >> Concerning your last question: our dev/test system is currently >> configured >> with 5 zones, all using their own ip address. >> I have no idea about the configuration of the solaris systems at our >> customers. >> >> regards, >> >> gds >> >> >> Leif Mortenson-2 wrote: >>> >>> Santo, >>> We are in the process of setting up a Solaris 10 server to do some >>> testing with Zones in house. We have a Solaris 9 server, but out >>> Solaris 10 testing has been done IN a zone on Sun's EZqual loaner >>> server. I will let you know what we found. >>> >>> As I explained, the Wrapper never actually attempts to allocate all >>> 1000 ports unless they are already blocked. 
If the first instance of >>> your application uses ports 32000 and 31000 and that crashes, it is >>> possible that the 32000 port will be locked for 2 minutes so the >>> second invocation of the JVM would use 32001 and 31000. But the >>> other 999 ports would have never been accessed so I can imagine no >>> reason why they would be locked. >>> >>> In your case with Solaris Zones. You say that the Wrapper can not >>> start on these second Zone when one is running on the first. Are you >>> able to verify that the wrapper on the second Zone works if the first >>> had not been running for at least 2 minutes? I am wondering if it is >>> a configuration issue. >>> >>> We will be able to test this shortly ourselves. And it doesn't sound >>> like you will be able to test it until your system is back up and >>> running. >>> >>> Sorry for this next question as it may show my lack of knowledge with >>> Solaris Zones: >>> With your system, are both Zones sharing the same IP address? If so, >>> they should not be able to share ports on that IP. In this case >>> however, we are only binding to localhost, so it should not matter. >>> >>> I will post back as soon as we have gotten this tested out. >>> >>> Cheers, >>> Leif > > _______________________________________________ > Wrapper-user mailing list > Wra...@li... > https://lists.sourceforge.net/lists/listinfo/wrapper-user > > -- View this message in context: http://www.nabble.com/Wrapper-behavior-on-Solaris-10-zone-tp12904287p23631453.html Sent from the Java Service Wrapper mailing list archive at Nabble.com. |
|
From: Leif M. <lei...@ta...> - 2009-05-20 09:14:57
|
Santo,
We created "sparse root" zones rather than "whole root" zones because they share much of the file system with the underlying OS. Our thinking was that this would be more likely to show any resource conflicts.

I agree that there are likely some configuration differences between your systems and ours. We have been actively attempting to locate the cause of this problem, but any information that you could provide would be very helpful in narrowing this down.

Our tests are being done on an x86 server within a virtual machine. We also have one Sparc server, but that is running Solaris 9 natively and is being used for our build process. If possible, we would like to avoid reinstalling that with Solaris 10, as we would need to restore it later. I would be very surprised if a problem like this behaved differently on x86 vs Sparc, however.

Cheers,
Leif

On Wed, May 20, 2009 at 5:50 PM, Santo74 <gds...@de...> wrote:
>
> Leif,
>
> This is very strange, because we haven't come across any solaris 10
> environment (with zones) not having this issue.
> Therefore it indeed looks like some configuration differences (or something)
> in comparison with your system.
> However, I still find it strange that other applications (not using the
> service wrapper) are not having this issue on the same zones.
> As for the security, it's true that our application runs under a dedicated
> user account (and therefore doesn't have full (root) privileges), but the
> IBM Tivoli Policy Server (which I mentioned before) is also running under a
> dedicated account (with limited privileges) as far as I know.
>
> This morning I heard that most of the problems with our solaris server
> should be solved later today, which means that I can hopefully start
> testing again next monday (long weekend over here).
>
> Thanks,
>
> gds
>
>
> Leif Mortenson-3 wrote:
>>
>> Santo,
>> We have done some tests with a server configured with 3 Zones as well
>> as done some more research.
>> >> It does not appear to be possible to have multiple Zones "share" an IP >> address. So they will each have their own IP. For that reason, there >> should be no reason why any of the Zones would ever have any conflict >> with bound ports. As I understand it. >> >> Below you will find the netstat output from 3 Zones on the same >> machine each running a copy of the Wrapper. Each has an SSH >> connection to the Zone as well as the two between the Wrapper and its >> JVM. In all cases, the port number are the same. >> >> Because you have had reports from a few of customers, I am sure that >> "something" is happening. But from the information to date, I am not >> sure what the cause might be. Is it possible that there are some >> security configurations setup on one or more of the Zones that would >> prevent the Wrapper from starting? >> >> The Wrapper will loop over its 1000 possible ports looking for the >> first one that it is able to bind to. If all 1000 fail to bind then >> it reports that fact to the user. Rather than all 1000 ports >> actually being already bound, it may be that the OS is refusing to >> allow the Wrapper to bind to those ports for security reasons? >> >> Anyway, here is the netstat output from our 3 Zones. 
>> >> --- >> jupiter >> TCP: IPv4 >> Local Address Remote Address Swind Send-Q Rwind Recv-Q >> State >> -------------------- -------------------- ----- ------ ----- ------ >> ----------- >> jupiter.22 192.168.0.128.59013 18816 0 49232 0 >> ESTABLISHED >> localhost.31000 localhost.32000 49152 0 49152 0 >> ESTABLISHED >> localhost.32000 localhost.31000 49152 0 49170 0 >> ESTABLISHED >> >> Active UNIX domain sockets >> Address Type Vnode Conn Local Addr Remote Addr >> ffffffff889688f8 stream-ord 00000000 >> ffffffff89c8bac0 /tmp/.X11-unix/X0 >> ffffffff88968ac0 stream-ord 00000000 >> 00000000 /tmp/.X11-unix/X0 >> ffffffff87a38728 stream-ord 00000000 >> 00000000 /tmp/.X11-unix/X0 >> ffffffff88968730 stream-ord 00000000 >> ffffffff89c8bac0 /tmp/.X11-unix/X0 >> ffffffff87a38560 stream-ord 00000000 >> ffffffff89c8bac0 /tmp/.X11-unix/X0 >> ffffffff87a38008 stream-ord ffffffff8852b780 >> 00000000 /var/run/zones/kore.console_sock >> ffffffff88968c88 stream-ord 00000000 >> 00000000 /tmp/.X11-unix/X0 >> ffffffff87a38398 stream-ord ffffffff89c8bac0 >> 00000000 /tmp/.X11-unix/X0 >> ffffffff87a38ab8 stream-ord ffffffff882b5880 >> 00000000 /var/run/zones/europa.console_sock >> ffffffff87a38c80 stream-ord ffffffff87a3d740 >> 00000000 /var/run/.inetd.uds >> >> europa: >> TCP: IPv4 >> Local Address Remote Address Swind Send-Q Rwind Recv-Q >> State >> -------------------- -------------------- ----- ------ ----- ------ >> ----------- >> europa.22 192.168.0.128.55040 13440 0 49232 0 >> ESTABLISHED >> localhost.31000 localhost.32000 49152 0 49152 0 >> ESTABLISHED >> localhost.32000 localhost.31000 49152 0 49170 0 >> ESTABLISHED >> >> Active UNIX domain sockets >> Address Type Vnode Conn Local Addr Remote Addr >> ffffffff87a381d0 stream-ord ffffffff87de6740 >> 00000000 /var/run/.inetd.uds >> >> kore: >> TCP: IPv4 >> Local Address Remote Address Swind Send-Q Rwind Recv-Q >> State >> -------------------- -------------------- ----- ------ ----- ------ >> ----------- >> kore.22 
192.168.0.128.56248 17664 0 49232 0 >> ESTABLISHED >> localhost.31000 localhost.32000 49152 0 49152 0 >> ESTABLISHED >> localhost.32000 localhost.31000 49152 0 49170 0 >> ESTABLISHED >> >> Active UNIX domain sockets >> Address Type Vnode Conn Local Addr Remote Addr >> ffffffff87a388f0 stream-ord ffffffff8aab4600 >> 00000000 /var/run/.inetd.uds >> --- >> >> We will keep poking around, but please let me know if you are able to >> collect any more information. >> >> Cheers, >> Leif >> >> >> On Mon, May 18, 2009 at 8:14 PM, Santo74 >> <gds...@de...> wrote: >>> >>> Leif, >>> >>> Regarding the issue of restarting the application after it crashed or was >>> forcedly killed I will >>> keep an eye on it and report back with more info whenever it should >>> happen >>> again. >>> It is indeed not the behaviour that I would expect from the wrapper >>> especialy now that you confirmed that it isn't allocating the whole >>> range. >>> >>> As you already mentioned correctly I won't be able yet to verify if I can >>> start the app on a second zone after having stopped the app on the first >>> zone for at least 2 min. >>> >>> Concerning your last question: our dev/test system is currently >>> configured >>> with 5 zones, all using their own ip address. >>> I have no idea about the configuration of the solaris systems at our >>> customers. >>> >>> regards, >>> >>> gds >>> >>> >>> Leif Mortenson-2 wrote: >>>> >>>> Santo, >>>> We are in the process of setting up a Solaris 10 server to do some >>>> testing with Zones in house. We have a Solaris 9 server, but out >>>> Solaris 10 testing has been done IN a zone on Sun's EZqual loaner >>>> server. I will let you know what we found. >>>> >>>> As I explained, the Wrapper never actually attempts to allocate all >>>> 1000 ports unless they are already blocked. 
If the first instance of >>>> your application uses ports 32000 and 31000 and that crashes, it is >>>> possible that the 32000 port will be locked for 2 minutes so the >>>> second invocation of the JVM would use 32001 and 31000. But the >>>> other 999 ports would have never been accessed so I can imagine no >>>> reason why they would be locked. >>>> >>>> In your case with Solaris Zones. You say that the Wrapper can not >>>> start on these second Zone when one is running on the first. Are you >>>> able to verify that the wrapper on the second Zone works if the first >>>> had not been running for at least 2 minutes? I am wondering if it is >>>> a configuration issue. >>>> >>>> We will be able to test this shortly ourselves. And it doesn't sound >>>> like you will be able to test it until your system is back up and >>>> running. >>>> >>>> Sorry for this next question as it may show my lack of knowledge with >>>> Solaris Zones: >>>> With your system, are both Zones sharing the same IP address? If so, >>>> they should not be able to share ports on that IP. In this case >>>> however, we are only binding to localhost, so it should not matter. >>>> >>>> I will post back as soon as we have gotten this tested out. >>>> >>>> Cheers, >>>> Leif |
|
From: Santo74 <gds...@de...> - 2009-05-20 13:25:24
|
Leif,

In the meantime our Solaris system is up and running again (a Sparc system, by the way) and we already did some new tests with the wrapper. We could reproduce the following:

zone1 runs 2 wrappers (our application consists of 2 separate components and on some systems both need to run, hence the 2 wrapper instances). The default port ranges are used and everything runs as expected: 2 connections between wrapper and jvm, 32000 - 31000 and 32001 - 31001:

localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED
localhost.31001      localhost.32001      49152      0 49152      0 ESTABLISHED
localhost.32001      localhost.31001      49152      0 49170      0 ESTABLISHED

We will keep this zone1 running as is and test some scenarios on zone2:

1) define an explicit port range for the wrapper (something outside the 32000 range) and start it. This works:

localhost.31000      localhost.42700      49152      0 49152      0 ESTABLISHED
localhost.42700      localhost.31000      49152      0 49170      0 ESTABLISHED

2) define an explicit port range for the wrapper (within the 32000 range, but excluding the 2 ports in use on zone1, i.e. 32000 and 32001). This works:

localhost.31000      localhost.32002      49152      0 49152      0 ESTABLISHED
localhost.32002      localhost.31000      49152      0 49170      0 ESTABLISHED

3) remove the explicit port range and restart with the default settings. This surprisingly also works (apparently because the previous jvm port is in TIME_WAIT, which causes the jvm to use another port (which is, however, also in use on zone1)):

localhost.31000      localhost.42700      49170      0 49152      0 TIME_WAIT
localhost.31001      localhost.32000      49152      0 49152      0 ESTABLISHED
localhost.32000      localhost.31001      49152      0 49170      0 ESTABLISHED

4) restart again without changing anything (i.e. again use all the defaults).
Doesn't work this time (the jvm tries to use port 31000 again):

INFO | jvm 2 | 2009/05/20 12:12:34 | java.net.SocketException: Address already in use

5) define an explicit port range for the jvm and start it. Doesn't work either.

-> There is only 1 situation where we were able to start a wrapper on a second zone using the same ports as on the first zone. Strangely enough, this is caused by the fact that the previously allocated jvm port is in a TIME_WAIT state. At least, that's the only explanation we have for this.

We also verified the type of zones configured on our test server, and it appears that our server is using "root" zones. Therefore I asked one of our consultants to verify the type of zones used at one of our customers. If they also use "root" zones, it might be that this has something to do with the issue. Unfortunately, because of the long weekend it will take at least until Monday before we have any news on this.

Another interesting piece of info that we found is the following:
-----
On a Solaris system with zones installed, the zones can communicate with each other over the network. The zones all have separate bindings, or connections, and the zones can all run their own server daemons. These daemons can listen on the same port numbers without any conflict. The IP stack resolves conflicts by considering the IP addresses for incoming connections. The IP addresses identify the zone.
-----
Which would mean that Solaris should take care of the port conflicts if they arise. And it seems to do this in case the initial port is in TIME_WAIT, but not in "normal" cases.

regards,
gds

Leif Mortenson-3 wrote:
>
> Santo,
> We created "sparse" zones rather than "root" zones because they share
> much of the file system with the underlying OS. Our thinking was that
> this would be more likely to show any resource conflicts.
>
> I agree that there are likely some configuration differences between
> your systems and ours.
> We have been actively attempting to locate the
> cause of this problem, but any information that you could provide
> would be very helpful in narrowing this down.
>
> Our tests are being done on an x86 server within a virtual machine. We
> also have one Sparc server. But that is running Solaris 9 natively
> and is being used for our build process. If possible, we would like
> to avoid reinstalling that with Solaris 10 as we would need to restore
> it later. I would be very surprised if a problem like this would work
> differently on x86 vs Sparc, however.
>
> Cheers,
> Leif
>
> On Wed, May 20, 2009 at 5:50 PM, Santo74
> <gds...@de...> wrote:
>>
>> Leif,
>>
>> This is very strange, because we haven't come across any Solaris 10
>> environment (with zones) not having this issue.
>> Therefore it indeed looks like some configuration differences (or
>> something) in comparison with your system.
>> However, I still find it strange that other applications (not using the
>> service wrapper) are not having this issue on the same zones.
>> As for the security, it's true that our application runs under a
>> dedicated user account (and therefore doesn't have full (root)
>> privileges), but the IBM Tivoli Policy Server (which I mentioned before)
>> is also running under a dedicated account (with limited privileges) as
>> far as I know.
>>
>> This morning I heard that most of the problems with our Solaris server
>> should be solved later today, which means that I can hopefully start
>> testing again next Monday (long weekend over here).
>>
>> Thanks,
>>
>> gds
>>
>> Leif Mortenson-3 wrote:
>>>
>>> Santo,
>>> We have done some tests with a server configured with 3 Zones as well
>>> as done some more research.
>>>
>>> It does not appear to be possible to have multiple Zones "share" an IP
>>> address. So they will each have their own IP. For that reason, there
>>> should be no reason why any of the Zones would ever have any conflict
>>> with bound ports.
>>> As I understand it.
>>>
>>> Below you will find the netstat output from 3 Zones on the same
>>> machine, each running a copy of the Wrapper. Each has an SSH
>>> connection to the Zone as well as the two between the Wrapper and its
>>> JVM. In all cases, the port numbers are the same.
>>>
>>> Because you have had reports from a few customers, I am sure that
>>> "something" is happening. But from the information to date, I am not
>>> sure what the cause might be. Is it possible that there are some
>>> security configurations set up on one or more of the Zones that would
>>> prevent the Wrapper from starting?
>>>
>>> The Wrapper will loop over its 1000 possible ports looking for the
>>> first one that it is able to bind to. If all 1000 fail to bind, then
>>> it reports that fact to the user. Rather than all 1000 ports
>>> actually being already bound, it may be that the OS is refusing to
>>> allow the Wrapper to bind to those ports for security reasons?
>>>
>>> Anyway, here is the netstat output from our 3 Zones.
>>>
>>> ---
>>> jupiter:
>>> TCP: IPv4
>>>    Local Address        Remote Address    Swind Send-Q Rwind Recv-Q     State
>>> -------------------- -------------------- ----- ------ ----- ------ -----------
>>> jupiter.22           192.168.0.128.59013  18816      0 49232      0 ESTABLISHED
>>> localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
>>> localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED
>>>
>>> Active UNIX domain sockets
>>> Address          Type       Vnode            Conn             Local Addr / Remote Addr
>>> ffffffff889688f8 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
>>> ffffffff88968ac0 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
>>> ffffffff87a38728 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
>>> ffffffff88968730 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
>>> ffffffff87a38560 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
>>> ffffffff87a38008 stream-ord ffffffff8852b780 00000000         /var/run/zones/kore.console_sock
>>> ffffffff88968c88 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
>>> ffffffff87a38398 stream-ord ffffffff89c8bac0 00000000         /tmp/.X11-unix/X0
>>> ffffffff87a38ab8 stream-ord ffffffff882b5880 00000000         /var/run/zones/europa.console_sock
>>> ffffffff87a38c80 stream-ord ffffffff87a3d740 00000000         /var/run/.inetd.uds
>>>
>>> europa:
>>> TCP: IPv4
>>>    Local Address        Remote Address    Swind Send-Q Rwind Recv-Q     State
>>> -------------------- -------------------- ----- ------ ----- ------ -----------
>>> europa.22            192.168.0.128.55040  13440      0 49232      0 ESTABLISHED
>>> localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
>>> localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED
>>>
>>> Active UNIX domain sockets
>>> Address          Type       Vnode            Conn             Local Addr / Remote Addr
>>> ffffffff87a381d0 stream-ord ffffffff87de6740 00000000         /var/run/.inetd.uds
>>>
>>> kore:
>>> TCP: IPv4
>>>    Local Address        Remote Address    Swind Send-Q Rwind Recv-Q     State
>>> -------------------- -------------------- ----- ------ ----- ------ -----------
>>> kore.22              192.168.0.128.56248  17664      0 49232      0 ESTABLISHED
>>> localhost.31000      localhost.32000      49152      0 49152      0 ESTABLISHED
>>> localhost.32000      localhost.31000      49152      0 49170      0 ESTABLISHED
>>>
>>> Active UNIX domain sockets
>>> Address          Type       Vnode            Conn             Local Addr / Remote Addr
>>> ffffffff87a388f0 stream-ord ffffffff8aab4600 00000000         /var/run/.inetd.uds
>>> ---
>>>
>>> We will keep poking around, but please let me know if you are able to
>>> collect any more information.
>>>
>>> Cheers,
>>> Leif
>>>
>>> On Mon, May 18, 2009 at 8:14 PM, Santo74
>>> <gds...@de...> wrote:
>>>>
>>>> Leif,
>>>>
>>>> Regarding the issue of restarting the application after it crashed or
>>>> was forcibly killed, I will keep an eye on it and report back with
>>>> more info whenever it should happen again.
>>>> It is indeed not the behaviour that I would expect from the wrapper,
>>>> especially now that you confirmed that it isn't allocating the whole
>>>> range.
>>>>
>>>> As you already mentioned correctly, I won't be able yet to verify if
>>>> I can start the app on a second zone after having stopped the app on
>>>> the first zone for at least 2 min.
>>>>
>>>> Concerning your last question: our dev/test system is currently
>>>> configured with 5 zones, all using their own IP address.
>>>> I have no idea about the configuration of the Solaris systems at our
>>>> customers.
>>>>
>>>> regards,
>>>>
>>>> gds
>>>>
>>>> Leif Mortenson-2 wrote:
>>>>>
>>>>> Santo,
>>>>> We are in the process of setting up a Solaris 10 server to do some
>>>>> testing with Zones in house. We have a Solaris 9 server, but our
>>>>> Solaris 10 testing has been done IN a zone on Sun's EZqual loaner
>>>>> server. I will let you know what we found.
>>>>>
>>>>> As I explained, the Wrapper never actually attempts to allocate all
>>>>> 1000 ports unless they are already blocked.
>>>>> If the first instance of
>>>>> your application uses ports 32000 and 31000 and that crashes, it is
>>>>> possible that the 32000 port will be locked for 2 minutes, so the
>>>>> second invocation of the JVM would use 32001 and 31000. But the
>>>>> other 999 ports would have never been accessed, so I can imagine no
>>>>> reason why they would be locked.
>>>>>
>>>>> In your case with Solaris Zones: you say that the Wrapper cannot
>>>>> start on the second Zone when one is running on the first. Are you
>>>>> able to verify that the wrapper on the second Zone works if the
>>>>> first had not been running for at least 2 minutes? I am wondering
>>>>> if it is a configuration issue.
>>>>>
>>>>> We will be able to test this shortly ourselves. And it doesn't
>>>>> sound like you will be able to test it until your system is back up
>>>>> and running.
>>>>>
>>>>> Sorry for this next question as it may show my lack of knowledge
>>>>> with Solaris Zones:
>>>>> With your system, are both Zones sharing the same IP address? If
>>>>> so, they should not be able to share ports on that IP. In this case
>>>>> however, we are only binding to localhost, so it should not matter.
>>>>>
>>>>> I will post back as soon as we have gotten this tested out.
>>>>>
>>>>> Cheers,
>>>>> Leif
>
> _______________________________________________
> Wrapper-user mailing list
> Wra...@li...
> https://lists.sourceforge.net/lists/listinfo/wrapper-user

--
View this message in context: http://www.nabble.com/Wrapper-behavior-on-Solaris-10-zone-tp12904287p23635418.html
Sent from the Java Service Wrapper mailing list archive at Nabble.com. |
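As described in the exchange above, the Wrapper scans its configured range (wrapper.port.min to wrapper.port.max) and uses the first port it can bind on localhost, falling back to the next one when a port is taken (for example while it sits in TIME_WAIT). The sketch below shows that general technique in plain Java; the class and method names are illustrative only and are not taken from the Wrapper's source.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortScan {
    // Returns the first port in [min, max] that binds on 127.0.0.1,
    // or -1 if every port in the range fails to bind.
    static int findFreePort(int min, int max) {
        for (int port = min; port <= max; port++) {
            try (ServerSocket s =
                     new ServerSocket(port, 50, InetAddress.getByName("127.0.0.1"))) {
                return port; // bound successfully; socket is closed on return
            } catch (IOException e) {
                // Port already in use (or refused by the OS); try the next one.
            }
        }
        return -1;
    }

    public static void main(String[] args) throws IOException {
        // Occupy one port, then confirm the scan skips past it.
        try (ServerSocket busy =
                 new ServerSocket(0, 50, InetAddress.getByName("127.0.0.1"))) {
            int taken = busy.getLocalPort();
            int found = findFreePort(taken, taken + 10);
            // Prints true unless the next 10 ports all happen to be busy too.
            System.out.println(found != taken && found != -1);
        }
    }
}
```

Run on the two zones in the scenarios above, each zone's loopback stack should let this bind the same port numbers independently, which is why the observed conflict pointed at something other than the OS.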
|
From: Leif M. <lei...@ta...> - 2009-05-25 04:57:32
|
Santo,

We have been doing some more testing, following all of your steps with "root" zones, but everything is working as expected.

Rereading your email today, however, I think I may know what the problem is. The error you show in #4:
---
INFO   | jvm 2    | 2009/05/20 12:12:34 | java.net.SocketException: Address already in use
---
This is a bug in the Solaris versions of the Wrapper that was fixed in release 3.3.0. It actually has nothing to do with zones and should happen on any Solaris system if you start one copy of the Wrapper, stop it, and then immediately start a new copy. The first copy will bind to port 31000, and then the system puts that port into the TIME_WAIT state for two minutes. During that time, the second instance of the Wrapper was not correctly recognizing the cause of the SocketException that was thrown. On many platforms a BindException is thrown, but Solaris throws a SocketException, which could mean anything. It is necessary to check the text of the message to see if it is a bind problem. That text starts with "errno: 48" for older JVMs, but is "Address already in use" on newer ones.

The bug can be found here. It also starts out thinking it is a zone problem:
https://sourceforge.net/tracker/?func=detail&aid=1594073&group_id=39428&atid=425187

Could you please give this a try with the 3.3.5 release?
Thanks,
Leif

On Wed, May 20, 2009 at 10:25 PM, Santo74 <gds...@de...> wrote:
> [...]
|
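Leif's diagnosis above boils down to inspecting the text of the generic SocketException that Solaris JVMs throw on a bind conflict, since a BindException cannot be relied on there. The sketch below shows that message check in isolation; the class and method name are illustrative and not taken from the Wrapper's actual source.

```java
import java.net.BindException;
import java.net.SocketException;

public class BindErrorCheck {
    // Decide whether an exception from a socket bind means the port is taken.
    static boolean isBindConflict(Exception e) {
        if (e instanceof BindException) {
            return true; // most platforms report a conflict this way
        }
        if (e instanceof SocketException) {
            String msg = e.getMessage();
            // Per the explanation above: older Solaris JVMs report
            // "errno: 48", newer ones "Address already in use".
            return msg != null
                && (msg.startsWith("errno: 48")
                    || msg.contains("Address already in use"));
        }
        return false;
    }

    public static void main(String[] args) {
        // A Solaris-style bind failure: generic exception, telling message.
        System.out.println(isBindConflict(
            new SocketException("Address already in use"))); // true
        // An unrelated socket error must not be treated as a port conflict.
        System.out.println(isBindConflict(
            new SocketException("Connection reset"))); // false
    }
}
```

A launcher using this check can treat a conflict as "retry on the next port" while still surfacing genuinely unexpected socket errors, which is the behavior the 3.3.0 fix restored on Solaris.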
|
From: Santo74 <gds...@de...> - 2009-05-25 19:23:30
|
Leif,

We did some new tests with the latest version of the Wrapper (v3.3.5) and it seems that you are right. The first tests were successful, so we expect that the issue is indeed resolved in the latest version of the Wrapper.

It remains unclear to me, however, why this bug would prevent us from starting a Wrapper instance on a second zone. I know the bug shouldn't have anything to do with zones in particular, but at least we couldn't start a second Wrapper on a separate zone (unless the port was in TIME_WAIT). In some way that does make it zone related as well, in my opinion.

Anyway, we are now able to start multiple Wrapper instances without having to modify the port ranges, and that's what matters most to us. So thanks for the great support, and I'm glad we could solve the problem with your help.

regards,

gds

Leif Mortenson-3 wrote:
> [...]
> The bug can be found here. It also starts out thinking it is a zone > problem: > https://sourceforge.net/tracker/?func=detail&aid=1594073&group_id=39428&atid=425187 > > Could you please give this a try with the 3.3.5 release? > > Thanks, > Leif > > On Wed, May 20, 2009 at 10:25 PM, Santo74 > <gds...@de...> wrote: >> >> Leif, >> >> In the meantime our solaris system is up and running again (a sparc >> system >> by the way) >> and we already did some new tests with the wrapper and could reproduce >> the >> following: >> >> zone1 runs 2 wrappers (our application consists of 2 separate components >> and >> on some systems they need to run both, hence the 2 wrapper instances) >> The default port ranges are used and everything runs as expected: >> 2 connections between wrapper en jvm, 32000 - 31000 and 32001 - 31001 >> >> localhost.31000 localhost.32000 49152 0 49152 0 >> ESTABLISHED >> localhost.32000 localhost.31000 49152 0 49170 0 >> ESTABLISHED >> localhost.31001 localhost.32001 49152 0 49152 0 >> ESTABLISHED >> localhost.32001 localhost.31001 49152 0 49170 0 >> ESTABLISHED >> >> We will keep this zone1 running as is and test some scenarios on zone2: >> >> 1) define an explicit port range for the wrapper (something outside 32000 >> range) and start it. >> This works: >> >> localhost.31000 localhost.42700 49152 0 49152 0 >> ESTABLISHED >> localhost.42700 localhost.31000 49152 0 49170 0 >> ESTABLISHED >> >> 2) define an explicit port range for the wrapper (within the 32000 range, >> but exclusive the 2 ports in use on zone 1 (i.e. 32000 and 32001)). >> This works: >> >> localhost.31000 localhost.32002 49152 0 49152 0 >> ESTABLISHED >> localhost.32002 localhost.31000 49152 0 49170 0 >> ESTABLISHED >> >> 3) remove the explicit port range and restart with the default settings. 
>> This surprisingly also works >> (apparently because the previous jvm port is in TIME_WAIT, which causes >> the jvm to use another port (which is however also in use on zone1)): >> >> localhost.31000 localhost.42700 49170 0 49152 0 >> TIME_WAIT >> localhost.31001 localhost.32000 49152 0 49152 0 >> ESTABLISHED >> localhost.32000 localhost.31001 49152 0 49170 0 >> ESTABLISHED >> >> 4) restart again without changing anything (i.e. again use all the >> defaults). >> Doesn't work this time (jvm tries to use port 31000 again): >> >> INFO | jvm 2 | 2009/05/20 12:12:34 | java.net.SocketException: >> Address >> already in use >> >> 5) define an explicit port range for the jvm and start it >> Doesn't work either >> >> -> There is only 1 situation where we were able to start a wrapper on a >> second zone, using the same ports as on the first zone. >> Strangely enough this is caused by the fact that the previously allocated >> jvm port is in a TIME_WAIT state. >> At least, that's the only explanation we have for this. >> >> We also verified the type of zones configured on our testserver and it >> appears that our server is using "root" zones. >> Therefore I asked one of our consultants to verify the type of zones used >> at >> one of our customers. >> If they also use "root" zones, it might be that this has something to do >> with the issue. >> Unfortunately because of the long weekend it will take at least until >> monday >> before we have any news on this. >> >> Another interesting piece of info that we found is the following: >> >> ----- >> On a Solaris system with zones installed, the zones can communicate with >> each other over the >> network. The zones all have separate bindings, or connections, and the >> zones >> can all run their >> own server daemons. These daemons can listen on the same port numbers >> without any conflict. >> The IP stack resolves conflicts by considering the IP addresses for >> incoming >> connections. The >> IP addresses identify the zone. 
>> ----- >> >> Which would mean that solaris should take care of the port conflicts if >> they >> arise. >> And it seams to do this in case the initial port is in TIME_WAIT, but not >> in >> "normal" cases. >> >> regards, >> >> gds >> >> >> Leif Mortenson-3 wrote: >>> >>> Santo, >>> We created "sparse" zones rather than "root" zones because they share >>> much of the file system with the underlying OS. Our thinking was that >>> this would be more likely to show any resource conflicts. >>> >>> I agree that there are likely some configuration differences between >>> your systems and ours. We have been actively attempting to locate the >>> cause of this problem, but any information that you could provide >>> would be very helpful in narrowing this down. >>> >>> Our tests are being on an x86 server within a virtual machine. We >>> also have one Sparc server. But that is running Solaris 9 natively >>> and is being used for our build process. If possible, we would like >>> to avoid reinstalling that with Solaris 10 as we would need to restore >>> it later. I would be very surprised if a problem like this would work >>> differently on x86 vs Sparc however. >>> >>> Cheers, >>> Leif >>> >>> On Wed, May 20, 2009 at 5:50 PM, Santo74 >>> <gds...@de...> wrote: >>>> >>>> Leif, >>>> >>>> This is very strange, because we haven't come across any solaris 10 >>>> environment (with zones) >>>> not having this issue. >>>> Therefore it indeed looks like some configuration differences (or >>>> something) >>>> in comparison with your system. >>>> However, I still find it strange that other applications (not using the >>>> service wrapper) are >>>> not having this issue on the same zones. 
>>>> As for the security, it's true that our application runs under a >>>> dedicated >>>> user account (and therefore doesn't have full (root) privileges), but >>>> the >>>> IBM Tivoli Policy Server (which I mentioned before) is also running >>>> under >>>> a >>>> dedicated account (with limited privileges) as far as I know. >>>> >>>> This morning I heard that most of the problems with our solaris server >>>> should be solved later today, which >>>> means that I can hopefully start testing again next monday (long >>>> weekend >>>> over here). >>>> >>>> Thanks, >>>> >>>> gds >>>> >>>> >>>> >>>> Leif Mortenson-3 wrote: >>>>> >>>>> Santo, >>>>> We have done some tests with a server configured with 3 Zones as well >>>>> as done some more research. >>>>> >>>>> It does not appear to be possible to have multiple Zones "share" an IP >>>>> address. So they will each have their own IP. For that reason, there >>>>> should be no reason why any of the Zones would ever have any conflict >>>>> with bound ports. As I understand it. >>>>> >>>>> Below you will find the netstat output from 3 Zones on the same >>>>> machine each running a copy of the Wrapper. Each has an SSH >>>>> connection to the Zone as well as the two between the Wrapper and its >>>>> JVM. In all cases, the port number are the same. >>>>> >>>>> Because you have had reports from a few of customers, I am sure that >>>>> "something" is happening. But from the information to date, I am not >>>>> sure what the cause might be. Is it possible that there are some >>>>> security configurations setup on one or more of the Zones that would >>>>> prevent the Wrapper from starting? >>>>> >>>>> The Wrapper will loop over its 1000 possible ports looking for the >>>>> first one that it is able to bind to. If all 1000 fail to bind then >>>>> it reports that fact to the user. 
>>>>> Rather than all 1000 ports actually being bound already, it may be
>>>>> that the OS is refusing to allow the Wrapper to bind to those ports
>>>>> for security reasons?
>>>>>
>>>>> Anyway, here is the netstat output from our 3 Zones.
>>>>>
>>>>> ---
>>>>> jupiter:
>>>>> TCP: IPv4
>>>>>    Local Address        Remote Address       Swind  Send-Q  Rwind  Recv-Q  State
>>>>>    -------------------- -------------------- ------ ------- ------ ------- -----------
>>>>>    jupiter.22           192.168.0.128.59013  18816  0       49232  0       ESTABLISHED
>>>>>    localhost.31000      localhost.32000      49152  0       49152  0       ESTABLISHED
>>>>>    localhost.32000      localhost.31000      49152  0       49170  0       ESTABLISHED
>>>>>
>>>>> Active UNIX domain sockets
>>>>>    Address          Type       Vnode            Conn             Local Addr / Remote Addr
>>>>>    ffffffff889688f8 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
>>>>>    ffffffff88968ac0 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
>>>>>    ffffffff87a38728 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
>>>>>    ffffffff88968730 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
>>>>>    ffffffff87a38560 stream-ord 00000000         ffffffff89c8bac0 /tmp/.X11-unix/X0
>>>>>    ffffffff87a38008 stream-ord ffffffff8852b780 00000000         /var/run/zones/kore.console_sock
>>>>>    ffffffff88968c88 stream-ord 00000000         00000000         /tmp/.X11-unix/X0
>>>>>    ffffffff87a38398 stream-ord ffffffff89c8bac0 00000000         /tmp/.X11-unix/X0
>>>>>    ffffffff87a38ab8 stream-ord ffffffff882b5880 00000000         /var/run/zones/europa.console_sock
>>>>>    ffffffff87a38c80 stream-ord ffffffff87a3d740 00000000         /var/run/.inetd.uds
>>>>>
>>>>> europa:
>>>>> TCP: IPv4
>>>>>    Local Address        Remote Address       Swind  Send-Q  Rwind  Recv-Q  State
>>>>>    -------------------- -------------------- ------ ------- ------ ------- -----------
>>>>>    europa.22            192.168.0.128.55040  13440  0       49232  0       ESTABLISHED
>>>>>    localhost.31000      localhost.32000      49152  0       49152  0       ESTABLISHED
>>>>>    localhost.32000      localhost.31000      49152  0       49170  0       ESTABLISHED
>>>>>
>>>>> Active UNIX domain sockets
>>>>>    Address          Type       Vnode            Conn             Local Addr / Remote Addr
>>>>>    ffffffff87a381d0 stream-ord ffffffff87de6740 00000000         /var/run/.inetd.uds
>>>>>
>>>>> kore:
>>>>> TCP: IPv4
>>>>>    Local Address        Remote Address       Swind  Send-Q  Rwind  Recv-Q  State
>>>>>    -------------------- -------------------- ------ ------- ------ ------- -----------
>>>>>    kore.22              192.168.0.128.56248  17664  0       49232  0       ESTABLISHED
>>>>>    localhost.31000      localhost.32000      49152  0       49152  0       ESTABLISHED
>>>>>    localhost.32000      localhost.31000      49152  0       49170  0       ESTABLISHED
>>>>>
>>>>> Active UNIX domain sockets
>>>>>    Address          Type       Vnode            Conn             Local Addr / Remote Addr
>>>>>    ffffffff87a388f0 stream-ord ffffffff8aab4600 00000000         /var/run/.inetd.uds
>>>>> ---
>>>>>
>>>>> We will keep poking around, but please let me know if you are able
>>>>> to collect any more information.
>>>>>
>>>>> Cheers,
>>>>> Leif
>>>>>
>>>>>
>>>>> On Mon, May 18, 2009 at 8:14 PM, Santo74
>>>>> <gds...@de...> wrote:
>>>>>>
>>>>>> Leif,
>>>>>>
>>>>>> Regarding the issue of restarting the application after it crashed
>>>>>> or was forcibly killed: I will keep an eye on it and report back
>>>>>> with more info whenever it happens again.
>>>>>> It is indeed not the behaviour that I would expect from the
>>>>>> wrapper, especially now that you have confirmed that it isn't
>>>>>> allocating the whole range.
>>>>>>
>>>>>> As you already mentioned, I won't yet be able to verify whether I
>>>>>> can start the app on a second zone after the app on the first zone
>>>>>> has been stopped for at least 2 minutes.
>>>>>>
>>>>>> Concerning your last question: our dev/test system is currently
>>>>>> configured with 5 zones, all using their own IP address.
>>>>>> I have no idea about the configuration of the Solaris systems at
>>>>>> our customers.
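The 1000-port bind scan that Leif describes earlier in the thread (try each candidate port in turn, keep the first one that binds) can be sketched in Java as follows. This is an illustrative reimplementation, not the Wrapper's actual code (the Wrapper itself is native); the class name, method name, and the 31000–31999 range are assumptions chosen to match the default ports discussed in the thread:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortScan {
    // Mimic the Wrapper's search: try each port in [min, max] on the
    // loopback interface and return the first one that can be bound.
    static int findFreePort(int min, int max) throws IOException {
        InetAddress loopback = InetAddress.getByName("127.0.0.1");
        for (int port = min; port <= max; port++) {
            try (ServerSocket s = new ServerSocket(port, 1, loopback)) {
                return port; // bind succeeded; socket closes on return
            } catch (IOException bindFailed) {
                // Port in use, or the OS refused the bind -- try the next.
            }
        }
        throw new IOException("No free port in range " + min + "-" + max);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(findFreePort(31000, 31999));
    }
}
```

Note that the loop cannot distinguish "port already bound" from "bind refused by a security policy"; both surface as an `IOException`, which is consistent with Leif's suggestion that an OS-level restriction could make all 1000 attempts fail even when nothing is listening.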
>>>>>>
>>>>>> regards,
>>>>>>
>>>>>> gds
>>>>>>
>>>>>>
>>>>>> Leif Mortenson-2 wrote:
>>>>>>>
>>>>>>> Santo,
>>>>>>> We are in the process of setting up a Solaris 10 server to do
>>>>>>> some testing with Zones in house. We have a Solaris 9 server, but
>>>>>>> our Solaris 10 testing has been done IN a zone on Sun's EZqual
>>>>>>> loaner server. I will let you know what we find.
>>>>>>>
>>>>>>> As I explained, the Wrapper never actually attempts to allocate
>>>>>>> all 1000 ports unless they are already blocked. If the first
>>>>>>> instance of your application uses ports 32000 and 31000 and it
>>>>>>> crashes, it is possible that port 32000 will be locked for 2
>>>>>>> minutes, so the second invocation of the JVM would use 32001 and
>>>>>>> 31000. But the other 999 ports would never have been accessed, so
>>>>>>> I can imagine no reason why they would be locked.
>>>>>>>
>>>>>>> In your case with Solaris Zones: you say that the Wrapper cannot
>>>>>>> start on the second Zone when one is running on the first. Are
>>>>>>> you able to verify that the Wrapper on the second Zone works if
>>>>>>> the first had not been running for at least 2 minutes? I am
>>>>>>> wondering if it is a configuration issue.
>>>>>>>
>>>>>>> We will be able to test this shortly ourselves. And it doesn't
>>>>>>> sound like you will be able to test it until your system is back
>>>>>>> up and running.
>>>>>>>
>>>>>>> Sorry for this next question, as it may show my lack of knowledge
>>>>>>> of Solaris Zones:
>>>>>>> With your system, are both Zones sharing the same IP address? If
>>>>>>> so, they should not be able to share ports on that IP. In this
>>>>>>> case, however, we are only binding to localhost, so it should not
>>>>>>> matter.
>>>>>>>
>>>>>>> I will post back as soon as we have gotten this tested out.
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Leif
>
> _______________________________________________
> Wrapper-user mailing list
> Wra...@li...
> https://lists.sourceforge.net/lists/listinfo/wrapper-user
>

--
View this message in context: http://www.nabble.com/Wrapper-behavior-on-Solaris-10-zone-tp12904287p23711826.html
Sent from the Java Service Wrapper mailing list archive at Nabble.com.