From: Maxime <ma...@ta...> - 2019-04-08 03:30:25
Christoph

We just released a new version of the Wrapper (3.5.38) which fixes the
problem where the Wrapper could hang trying to read from the output pipe
of a JVM if the fork to launch the JVM process failed. This new version
contains several other improvements, which are listed in our release notes:
https://wrapper.tanukisoftware.com/doc/english/release-notes.html

Since it has just been released, it is currently marked as the "Latest
Release" on our website, and will be advised for production use after we
get enough feedback that it is running without problems (generally after
2 weeks).
https://wrapper.tanukisoftware.com/doc/english/download.jsp

We will be happy to hear any feedback you may have.

Best Regards,
The Java Service Wrapper Team

On Mon, Jan 28, 2019 at 10:33 PM Christoph SCHWAIGER <csc...@am...> wrote:

> CONFIDENTIAL & RESTRICTED
>
> Hello Leif,
>
> Yes, the error message is in the wrapper log. Indeed it is only in the
> two logs of the resources which caused problems when stopped. I was
> mistaken that it is not OOM related – the processes had simply not been
> restarted since the OOM problem. On my side I added a cleanup script to
> the cluster setup, which will kill hanging processes if something goes
> wrong.
>
> Thanks for your help and the fix!
>
> I admit, running into OOM is by far the biggest problem.
>
> Cheers
> Christoph
>
> *From:* Leif Mortenson [mailto:lei...@ta...]
> *Sent:* 28 January 2019 08:23
> *To:* Wrapper User List <wra...@li...>
> *Subject:* Re: [Wrapper-user] [EXT] Re: no JVM running (state: DOWN_CLEAN) on linux after OOM
>
> Christoph
>
> Thank you for the very detailed analysis. This is in line with what we
> found and are testing a fix for.
>
> If we play with ulimit to allow exactly enough processes to be able to
> launch the Wrapper, but not the JVM, then the Wrapper will fail to fork
> the JVM and fall into its error-handling code. Prior to the fork, the
> Wrapper was opening a set of pipes whose opposite ends are normally
> used by the child process. The problem was that the error-handling code
> on a failed fork was not correctly closing down those pipes.
>
> Then later on in the main loop, the Wrapper was attempting to read
> child output from those pipes. The system calls appear to block on
> those reads, even when non-blocking mode is set, in the case that the
> child has not yet connected. The fix is to simply close those pipes in
> the error-handling code.
>
> This is actually a very old bug. It has not been seen before because
> most errors involve a successful fork followed by an error launching
> the JVM. That error path was working. So this problem is very specific
> to this exact low-resource state.
>
> Nevertheless, this is a fairly critical problem as it could affect
> anyone in this situation. When it happens though, all the Wrapper can
> really do is shut down anyway. So even when the bug is fixed, there are
> still going to be resource problems that must be resolved. The Wrapper
> will of course shut down cleanly rather than hanging.
>
> The only thing we were not sure about is that in our tests, we were
> always seeing a FATAL error "Could not spawn JVM process" in the log
> file. This was not in the log that you sent. Can you confirm whether or
> not you are seeing that?
>
> Assuming all tests go well, this will be in the upcoming 3.5.38.
>
> Unfortunately there is not currently a workaround for this, other than
> making sure that there are enough free processes to launch the Wrapper
> and fork.
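>
> For anyone curious, the mechanism is easy to demonstrate outside the
> Wrapper. A rough bash sketch, with a named pipe standing in for our
> anonymous pipe – this is not the Wrapper's actual code, and the fd
> numbers just mirror the ones in your earlier ls -l output:
>
> mkfifo /tmp/demo.fifo
> exec 9<>/tmp/demo.fifo   # bootstrap: open read-write so open() cannot block
> exec 6>/tmp/demo.fifo    # a write end (cf. the Wrapper's fd 6)
> exec 5</tmp/demo.fifo    # a read end (cf. the Wrapper's fd 5)
> exec 9>&-                # drop the bootstrap descriptor
> read -t 2 -u 5 line; echo "exit: $?"   # times out: fd 6 still holds a writer
> exec 6>&-                # the missing cleanup: close the write end
> read -t 2 -u 5 line; echo "exit: $?"   # returns at once with EOF
> exec 5<&-; rm /tmp/demo.fifo
>
> The first read waits out its timeout because the process itself still
> holds the write end open; with a plain blocking read it would wait
> forever. Closing both pipe ends in the failed-fork path is exactly what
> removes that state.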
>
> Cheers,
> Leif
>
> On Fri, Jan 25, 2019 at 11:35 PM Christoph SCHWAIGER <csc...@am...> wrote:
>
> > CONFIDENTIAL & RESTRICTED
> >
> > Hello Leif,
> >
> > Thanks for answering that – could have checked myself.
> >
> > But that is not why I am contacting you again. The problem occurred
> > again while the infrastructure was in a healthy state, so I looked a
> > bit deeper.
> >
> > Background is that the cluster wanted to switch resources over to the
> > other server and performed a local "./scheck.sh stop" (scheck.sh is
> > the name of our wrapper script), but the stop never finished; I had a
> > look at the shell processes still lingering around.
> >
> > The status output is as it was last time:
> >
> > [scheck@muctxp5b scheck_tcpbatch]$ ./scheck.sh status
> > Service check monitoring instance (not installed) is running: PID:9414, Wrapper:STOPPING, Java:DOWN_CLEAN
> > [scheck@muctxp5b scheck_tcpbatch]$ echo $?
> > 0
> >
> > Here is the wrapper and the tree of the hanging commands (parent
> > first) performing the stop:
> >
> > [scheck@muctxp5b scheck]$ ps -elf | grep tcpbatch
> > 1 S scheck 9411 1 0 80 0 - 29194 pipe_w Jan04 ? 00:05:40 /opt/scheck/muctxp5j/scheck_tcpbatch/./wrapper /opt/scheck/muctxp5j/scheck_tcpbatch/conf/wrapper.conf wrapper.syslog.ident=scheck wrapper.pidfile=/opt/scheck/muctxp5j/scheck_tcpbatch/./scheck.pid wrapper.daemonize=TRUE wrapper.name=scheck wrapper.displayname=Service check monitoring instance wrapper.statusfile=/opt/scheck/muctxp5j/scheck_tcpbatch/./scheck.status wrapper.java.statusfile=/opt/scheck/muctxp5j/scheck_tcpbatch/./scheck.java.status wrapper.script.version=3.5.30
> > 0 S scheck 13900 40818 0 80 0 - 25832 pipe_w 14:41 pts/0 00:00:00 grep tcpbatch
> > 4 S scheck 23388 1 0 80 0 - 26529 do_wai Jan23 ? 00:00:00 bash -c USER=scheck; export USER; LOGNAME=scheck; export LOGNAME; HOME=/home/scheck; export HOME; /opt/scheck/resources.sh muctxp5j scheck_tcpbatch stop
> > 0 S scheck 23394 23388 0 80 0 - 26529 do_wai Jan23 ? 00:00:00 /bin/bash /opt/scheck/resources.sh muctxp5j scheck_tcpbatch stop
> > 0 S scheck 23398 23394 0 80 0 - 26758 do_wai Jan23 ? 00:02:31 /bin/sh /opt/scheck/muctxp5j/scheck_tcpbatch/scheck.sh stop
> >
> > I attached to the scheck.sh script and saw it looping, performing
> > some syscalls every second, which I guess is this function:
> >
> > waitforwrapperstop() {
> >     getpid
> >     while [ "X$pid" != "X" ] ; do
> >         sleep 1
> >         getpid
> >     done
> > }
> >
> > So I had a look at the wrapper process 9411, which did not want to
> > vanish, with strace:
> >
> > Process 9411 attached
> > 15:31:10 read(5,
> >
> > File descriptor 5 is pipe #813574553:
> >
> > [scheck@muctxp5b 9411]$ ls -lr fd
> > total 0
> > l-wx------ 1 scheck scheck 64 Jan 7 13:25 6 -> pipe:[813574553] (shown in red – I assume because the pipe is broken)
> > lr-x------ 1 scheck scheck 64 Jan 4 12:48 5 -> pipe:[813574553] (shown in red – I assume because the pipe is broken)
> > lrwx------ 1 scheck scheck 64 Jan 4 12:49 4 -> /opt/scheck/muctxp5j/scheck_tcpbatch/log/wrapper_donotmonitor.log
> > lrwx------ 1 scheck scheck 64 Jan 4 12:49 3 -> socket:[813555625] (shown in red – I assume because the pipe is broken)
> > lrwx------ 1 scheck scheck 64 Jan 4 12:48 2 -> /dev/null
> > lrwx------ 1 scheck scheck 64 Jan 4 12:48 1 -> /dev/null
> > lrwx------ 1 scheck scheck 64 Jan 4 12:48 0 -> /dev/null
> >
> > According to Stack Overflow, grepping the lsof output for the pipe
> > number should list both sides of the pipe, but it only shows the read
> > and write ends in one process. Unless I misunderstood.
> >
> > [scheck@muctxp5b 9411]$ lsof | grep 813574553
> > wrapper 9411 scheck 5r FIFO 0,8 0t0 813574553 pipe
> > wrapper 9411 scheck 6w FIFO 0,8 0t0 813574553 pipe
> >
> > Is the process on the other side of the pipe the JVM? Which was, as
> > the status output indicates, down already. Then for some reason the
> > wrapper process was still waiting in a read on the pipe to a stopped
> > process – and maybe because of this remained up.
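> >
> > A way to double-check who holds the other end is to search every
> > process's fd table for the pipe inode (a sketch – run it as root,
> > otherwise only your own processes are visible):
> >
> > find /proc/[0-9]*/fd -lname 'pipe:\[813574553\]' 2>/dev/null
> >
> > If only 9411/fd/5 and 9411/fd/6 show up, then both ends of the pipe
> > live in the wrapper process itself, and a read on fd 5 can never see
> > EOF.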
> >
> > I didn't try further, since I have no debugger on the system and no
> > source.
> >
> > When I detected the problem, it was about 10 hours after the stop
> > attempt.
> >
> > As last time, "scheck.sh stop" came back immediately and finished
> > together with the wrapper process once it was killed!
> >
> > Hopefully this helps the investigation.
> >
> > Cheers
> > Christoph
> >
> > *From:* Leif Mortenson [mailto:lei...@ta...]
> > *Sent:* 10 January 2019 17:27
> > *To:* Wrapper User List <wra...@li...>
> > *Subject:* Re: [Wrapper-user] [EXT] Re: no JVM running (state: DOWN_CLEAN) on linux after OOM
> >
> > Christoph
> >
> > Yes, the following configuration is what causes the Wrapper to
> > restart based on the text in the console output:
> >
> > ---
> > wrapper.filter.trigger.1001=java.lang.OutOfMemoryError
> > wrapper.filter.action.1001=RESTART
> > wrapper.filter.message.1001=The JVM has run out of memory.
> > ---
> >
> > It sounds like you are on the right track. We will see if we can
> > reproduce something here as well.
> >
> > Cheers,
> > Leif
> >
> > On Wed, Jan 9, 2019 at 10:16 PM Christoph SCHWAIGER <csc...@am...> wrote:
> >
> > > CONFIDENTIAL & RESTRICTED
> > >
> > > Hello Leif,
> > >
> > > Those were the very last entries in the wrapper log:
> > >
> > > ERROR | wrapper | 2019/01/07 15:02:45 | Shutdown failed: Timed out waiting for signal from JVM.
> > > ERROR | wrapper | 2019/01/07 15:02:46 | JVM did not exit on request, termination requested.
> > > STATUS | wrapper | 2019/01/07 15:02:46 | JVM received a signal SIGKILL (9).
> > > STATUS | wrapper | 2019/01/07 15:02:46 | JVM process is gone.
> > > STATUS | wrapper | 2019/01/07 15:02:46 | JVM exited after being requested to terminate.
> > > STATUS | wrapper | 2019/01/07 15:02:50 | Reloading Wrapper configuration...
> > > STATUS | wrapper | 2019/01/07 15:02:50 | Launching a JVM...
> > >
> > > The time of the last entry matched the timestamp of the java.status
> > > file. I noticed that one day later.
> > >
> > > I hadn't noticed this log entry before:
> > >
> > > STATUS | wrapper | 2019/01/07 15:02:11 | The JVM has run out of memory. Restarting JVM.
> > >
> > > …to me it sounds like the wrapper used the exception (the complete
> > > log section is in the first email) to decide that the JVM had best
> > > be restarted because it ran out of memory. Which, in this
> > > situation, was not the case. I don't know why heap depletion is
> > > treated as the culprit by default when a thread cannot be spawned –
> > > there are definitely other reasons for that failure.
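> > >
> > > Maybe a more specific trigger placed ahead of the generic one could
> > > distinguish the two cases, if filters are matched in numeric order
> > > (I have not verified that). A hypothetical sketch of such a
> > > configuration:
> > >
> > > ---
> > > # match the thread-exhaustion variant first
> > > wrapper.filter.trigger.1000=java.lang.OutOfMemoryError: unable to create new native thread
> > > wrapper.filter.action.1000=SHUTDOWN
> > > wrapper.filter.message.1000=Native thread limit reached. A restart is unlikely to help.
> > > # the generic heap case keeps the existing restart behaviour
> > > wrapper.filter.trigger.1001=java.lang.OutOfMemoryError
> > > wrapper.filter.action.1001=RESTART
> > > wrapper.filter.message.1001=The JVM has run out of memory.
> > > ---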
> > >
> > > The user was limited to 1k processes max, but from what I read,
> > > threads are counted too. On Stack Overflow I found this command to
> > > count the number of threads for a user:
> > >
> > > ps -eo euser,nlwp | grep scheck | awk '{print $2}' | awk '{ num_threads += $1 } END { print num_threads }'
> > >
> > > Currently it shows 7579 – the JVMs are heavily multithreaded.
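> > >
> > > For comparison against the ceiling, something like this should show
> > > the limits in effect (a sketch – it assumes the pidfile location
> > > from the wrapper command line; the soft limit is the one that
> > > bites):
> > >
> > > ulimit -Su    # soft "max user processes" of the current shell
> > > ulimit -Hu    # hard limit
> > > grep -i 'max processes' /proc/$(cat ./scheck.pid)/limits   # what the running wrapper actually got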
> > >
> > > Unfortunately I don't even have a test box to simulate this. It's
> > > time to get one.
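> > >
> > > Until there is one, the low-resource condition itself could
> > > probably be simulated in a throwaway shell (a sketch – use a
> > > scratch user, not the service user, since the limit applies per
> > > user and cannot be raised again within the same shell):
> > >
> > > ulimit -u 50                               # lower the per-user process/thread ceiling
> > > for i in $(seq 1 100); do sleep 60 & done  # fork failures start once the quota is used up
> > > # bash: fork: retry: Resource temporarily unavailable
> > >
> > > A JVM started under the same limit should fail the same way when it
> > > tries to create a native thread.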
> > >
> > > Cheers,
> > > Christoph
> > >
> > > *From:* Leif Mortenson [mailto:lei...@ta...]
> > > *Sent:* 09 January 2019 10:50
> > > *To:* Wrapper User List <wra...@li...>
> > > *Subject:* Re: [Wrapper-user] [EXT] Re: no JVM running (state: DOWN_CLEAN) on linux after OOM
> > >
> > > Christoph
> > >
> > > OK. You are using a newer version of the Wrapper, so ignore the
> > > issue I mentioned about failing to kill the JVM. That was an old
> > > problem.
> > >
> > > Please send the debug output if you get it again.
> > >
> > > We will play around with the ulimits here as well and make sure the
> > > Wrapper behaves correctly.
> > >
> > > I am maybe not understanding the exact problem. After you get the
> > > OOM and the Wrapper tries to restart, is the Wrapper just failing
> > > to start the next JVM and exiting? Or is it getting stuck? The
> > > latter would be bad, and something we will want to get to the
> > > bottom of.
> > >
> > > It does not sound like this is easily reproducible. But if it is,
> > > the following will output detailed information about the state. It
> > > is a LOT of output though, so not realistic unless you are testing:
> > >
> > > wrapper.state_output=TRUE
> > >
> > > Cheers,
> > > Leif
> > >
> > > On Wed, Jan 9, 2019 at 6:11 PM Christoph SCHWAIGER <csc...@am...> wrote:
> > >
> > > > CONFIDENTIAL & RESTRICTED
> > > >
> > > > Hello Leif,
> > > >
> > > > Thanks for your response.
> > > >
> > > > Easy one first, the version we use:
> > > >
> > > > [scheck@muctxp5b scheck_unix4]$ ./wrapper --version
> > > > Java Service Wrapper Community Edition 64-bit 3.5.30
> > > > Copyright (C) 1999-2016 Tanuki Software, Ltd. All Rights Reserved.
> > > > http://wrapper.tanukisoftware.com
> > > >
> > > > Concerning the forced kill: I think I once saw, on another
> > > > instance at another time, something like "..JVM received sigkill
> > > > (9).." in the wrapper log.
> > > >
> > > > In the case I looked at, the JVM process owned by the wrapper was
> > > > gone, which suits the DOWN_CLEAN as you explained.
> > > >
> > > > I'll turn on debug output on a few of them in case it happens
> > > > again.
> > > >
> > > > As I interpret it, the configuration as such is OK, as is the
> > > > normal behaviour: when I e.g. kill the JVM manually, the wrapper
> > > > brings it back online. And due to the OOM situation – more
> > > > precisely, wrapper and JVM were limited to 1024 processes max by
> > > > ulimits – the wrapper was not able e.g. to fork a command, and
> > > > that could explain why recovery stalled. Likely, other
> > > > wrapper/JVM tandems on the same machine (20-30 tandems) faced the
> > > > same trouble and tried to recover, which would mean the ceiling
> > > > was sometimes reached and sometimes not (e.g. when yet another
> > > > JVM with many threads was killed or died). Does this make sense
> > > > to you?
> > > >
> > > > Should I look into updating my script to interpret the output of
> > > > "app.sh status" concerning certain Java:__ states and kill the
> > > > wrapper? (In such a case the Veritas cluster would consider the
> > > > resource to be offline and start the wrapper again.) Whether that
> > > > is a good idea depends on the number of states to consider and on
> > > > how long such a state can be tolerated. Maybe it is paranoid –
> > > > since our box is very big, we should be fine concerning OOM
> > > > unless we screw up the settings again. We're newbies on Linux; we
> > > > used Windows for years.
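> > > >
> > > > Roughly what I have in mind is something like this (a sketch only
> > > > – it assumes the status line format shown below, and that
> > > > scheck.pid holds the wrapper PID):
> > > >
> > > > #!/bin/sh
> > > > # watchdog sketch: treat a lasting Java:DOWN_CLEAN as "resource down"
> > > > if ./scheck.sh status | grep -q 'Java:DOWN_CLEAN'; then
> > > >     sleep 60   # tolerate a normal restart window
> > > >     if ./scheck.sh status | grep -q 'Java:DOWN_CLEAN'; then
> > > >         kill "$(cat ./scheck.pid)"   # cluster then sees the resource offline
> > > >     fi
> > > > fi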
> > > >
> > > > Cheers,
> > > > Christoph
> > > >
> > > > *From:* Leif Mortenson [mailto:lei...@ta...]
> > > > *Sent:* 09 January 2019 03:10
> > > > *To:* Wrapper User List <wra...@li...>
> > > > *Subject:* [EXT] Re: [Wrapper-user] no JVM running (state: DOWN_CLEAN) on linux after OOM
> > > >
> > > > Christoph
> > > >
> > > > 1) Could you please send me the wrapper.log file with debug
> > > > output enabled (wrapper.debug=true) that shows what is happening
> > > > when the Wrapper is failing to restart the JVM? Please include
> > > > the part of the log showing the last few moments of the JVM that
> > > > runs out of memory as well.
> > > >
> > > > 2) What version of the Wrapper are you running? The following
> > > > issue was fixed in 3.5.16 and sounds like it might be what you
> > > > are seeing:
> > > > https://wrapper.tanukisoftware.com/doc/english/release-notes.html#3.5.16
> > > >
> > > > ---
> > > > Fix a problem where a JVM process was not stopped completely on a
> > > > UNIX platform and stayed defunct after a forced kill until the
> > > > Wrapper process itself stopped. This was especially noticeable if
> > > > the JVM is frozen and the JVM is being killed forcibly.
> > > > ---
> > > >
> > > > Are you seeing a zombie Java process still running? This bug
> > > > meant that the JVM was being left around in the background when
> > > > the Wrapper thought it was gone. If you are out of memory, then
> > > > the next JVM would not have enough memory to launch. If the first
> > > > JVM is not actually frozen, it would shut itself down after
> > > > losing its backend connection to the Wrapper. But that might be
> > > > happening too late and result in what you are seeing.
> > > >
> > > > 3) The DOWN_CLEAN state means that the Wrapper has completely
> > > > shut down the JVM and cleaned up any associated resources. We
> > > > will take a look at the documentation on the following page, as
> > > > you are correct that it is missing some information:
> > > > https://wrapper.tanukisoftware.com/doc/english/prop-java-statusfile.html
> > > >
> > > > Cheers,
> > > > Leif
> > > >
> > > > On Tue, Jan 8, 2019 at 8:32 PM Christoph SCHWAIGER <csc...@am...> wrote:
> > > >
> > > > > CONFIDENTIAL & RESTRICTED
> > > > >
> > > > > Hello Leif,
> > > > >
> > > > > Thanks for the information about the subscription. I did so.
> > > > >
> > > > > We have been using the wrapper on Windows for many years, and
> > > > > for a couple of years now we have had a Standard Support
> > > > > version.
> > > > >
> > > > > Our problem is on Linux RH. *After an out-of-memory situation
> > > > > (the JVM exited) it is not restarted and remains down
> > > > > indefinitely, and the status script exits with status zero*, so
> > > > > everything looks up to the cluster (it is integrated into a
> > > > > Veritas cluster). The OOM was bad: not related to the JVM, but
> > > > > caused by overly optimistic ulimits for the user – that has
> > > > > been corrected.
> > > > >
> > > > > STATUS | wrapper | 2019/01/07 13:38:41 | Launching a JVM...
> > > > > INFO | jvm 1 | 2019/01/07 13:38:43 | WrapperManager: Initializing...
> > > > > INFO | jvm 1 | 2019/01/07 13:38:45 | S-Check version 3.0.4 Monte Rosa from 12-Sep-2018 08:02 by cschwaiger
> > > > > INFO | jvm 1 | 2019/01/07 13:38:45 | Scheck is starting on server MUCTXP5B
> > > > > INFO | jvm 1 | 2019/01/07 13:38:52 | parsed 1 xml files and created 0 service records.
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | Exception in thread "InactivityMonitor WriteCheck" java.lang.OutOfMemoryError: unable to create new native thread
> > > > > STATUS | wrapper | 2019/01/07 15:02:11 | The JVM has run out of memory. Restarting JVM.
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at java.lang.Thread.start0(Native Method)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at java.lang.Thread.start(Thread.java:717)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at org.apache.activemq.transport.InactivityMonitor.writeCheck(InactivityMonitor.java:147)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at org.apache.activemq.transport.InactivityMonitor$2.run(InactivityMonitor.java:113)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at org.apache.activemq.thread.SchedulerTimerTask.run(SchedulerTimerTask.java:33)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at java.util.TimerThread.mainLoop(Timer.java:555)
> > > > > INFO | jvm 1 | 2019/01/07 15:02:11 | at java.util.TimerThread.run(Timer.java:505)
> > > > > ERROR | wrapper | 2019/01/07 15:02:45 | Shutdown failed: Timed out waiting for signal from JVM.
> > > > > ERROR | wrapper | 2019/01/07 15:02:46 | JVM did not exit on request, termination requested.
> > > > > STATUS | wrapper | 2019/01/07 15:02:46 | JVM received a signal SIGKILL (9).
> > > > > STATUS | wrapper | 2019/01/07 15:02:46 | JVM process is gone.
> > > > > STATUS | wrapper | 2019/01/07 15:02:46 | JVM exited after being requested to terminate.
> > > > > STATUS | wrapper | 2019/01/07 15:02:50 | Reloading Wrapper configuration...
> > > > > STATUS | wrapper | 2019/01/07 15:02:50 | Launching a JVM...
> > > > >
> > > > > [scheck@muctxp5b scheck_unix11]$ ./scheck.sh status
> > > > > *Service check monitoring instance (not installed) is running: PID:56766, Wrapper:STARTED, Java:DOWN_CLEAN*
> > > > >
> > > > > I could not find the DOWN_CLEAN state documented – I looked at:
> > > > > https://wrapper.tanukisoftware.com/doc/english/prop-java-statusfile.html
> > > > >
> > > > > "scheck.sh stop" fails – it waits indefinitely for the wrapper
> > > > > to stop. A simple kill <pid> terminates it.
> > > > >
> > > > > Any recommendations – e.g. measures to avoid hanging in the
> > > > > "looks good = status zero, but down" state?
> > > > >
> > > > > Below/attached is the information about OS version and
> > > > > configuration.
> > > > >
> > > > > Thanks in advance,
> > > > > Christoph
> > > > >
> > > > > Linux muctxp5b 2.6.32-754.3.5.el6.x86_64 #1 SMP Thu Aug 9 11:56:22 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
> > > > >
> > > > > _______________________________________________
> > > > > Wrapper-user mailing list
> > > > > Wra...@li...
> > > > > https://lists.sourceforge.net/lists/listinfo/wrapper-user