From: Syamala T. <sya...@or...> - 2002-05-15 01:05:44
I have an in-house perl application that uses Expect to tail a legacy program. It has run satisfactorily on multiple platforms.

On linux platforms, I have seen a strange problem: the perl executable monopolizes a cpu after a while of execution time. The legacy program is very loquacious and writes a lot of stuff to STDOUT, perhaps thousands of lines.

The program runs correctly, but unbearably slowly. More than 10 times slower, or worse. The Expect module was Expect 1.12 when the problem was discovered. I have switched from Expect 1.12 to the latest, Expect 1.15, and the problem persists.

[ Basically the perl application uses Expect to match against a horde of patterns (about 60) in a loop. ]

Does somebody have a clue as to what is going on?

-Syamal.
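A minimal sketch of such a pattern loop (the command name, patterns, and handlers are hypothetical stand-ins for the in-house application):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Expect;

    # Spawn the (hypothetical) legacy program under a pty.
    my $exp = Expect->spawn('legacy_app')
        or die "Cannot spawn legacy_app: $!\n";

    # The real application has about 60 of these pattern/action pairs.
    my @patterns = (
        [ qr/ERROR: \S+/ => sub { warn "error seen\n";     exp_continue; } ],
        [ qr/ready>/     => sub { shift->send("status\n"); exp_continue; } ],
    );

    # exp_continue keeps matching inside one call; expect() returns
    # on timeout (15 seconds here), EOF, or spawn death.
    my ($pos, $err) = $exp->expect(15, @patterns);
    print "stopped: ", (defined $err ? $err : "pattern $pos"), "\n";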
From: Austin S. <te...@of...> - 2002-05-15 02:22:08
On Tue, May 14, 2002 at 06:04:28PM -0700, Syamala Tadigadapa wrote:
> I have an in-house perl application that uses Expect to tail a legacy
> program. It has run satisfactorily on multiple platforms.
>
> On linux platforms, I have seen a strange problem: the perl executable
> monopolizes a cpu after a while of execution time. The legacy program is
> very loquacious and writes a lot of stuff to STDOUT, perhaps thousands
> of lines.
>
> The program runs correctly, but unbearably slowly. More than 10 times
> slower, or worse.

Expect by default will match against the entire accumulated output every time more output is produced by the spawned program. If you have a verbose program whose output doesn't match, it becomes difficult for perl to keep up. Typically the answer is to set max_accum() to some sane value, perhaps 10000. What can go wrong is that if you set the max value to less than what is read in one pass, you may lose matching data.

Currently Expect will read up to 8096 bytes at a time from a stream. Hmm, maybe it would make sense to make this some fraction of exp_Max_Accum. Alternately, maybe it would make sense to trim exp_Accum _after_ the pattern matches rather than before.

	Austin
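A minimal sketch of that suggestion, assuming the same hypothetical spawn as above (max_accum() caps how much accumulated output Expect re-scans on each new read):

    use Expect;

    my $exp = Expect->spawn('legacy_app')      # hypothetical command
        or die "Cannot spawn legacy_app: $!\n";

    # Cap the accumulator so matching no longer scans an unbounded
    # buffer; 10000 comfortably exceeds the 8096-byte single read.
    $exp->max_accum(10000);

    $exp->expect(15, [ qr/ready>/ => sub { shift->send("quit\n"); } ]);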
From: Syamala T. <sya...@or...> - 2002-05-17 20:38:37
I have made a significant improvement, from 24 hrs down to 4.5 hrs, by clearing the accumulator after every 15-second timeout. It could save even more if I set the timeout to, say, 3 seconds or so. The CPU monopolization still occurs, of course. As suggested, I am going to try setting Exp_Max_Accum to 8096. I will report my findings once I see them.

I have the following questions:

1. Why can't I have a method in Expect that allows me to truncate the
   accumulator to the point of the pattern match? If present, I could
   simply call the method once I find a match. This would prevent the
   buffer/accumulator from choking.

2. Expect could trim the accumulator at each match to a configurable
   size, say a default of 4k. This should be done at timeout points
   also. This too would prevent the accumulator/buffer from choking.

By the way, I found another machine, which is NOT Linux, also hit by the same issue. So it is not a Linux-specific problem.

-Syamal.

Austin Schutz wrote:
> Expect by default will match against the entire accumulated output every
> time more output is produced by the spawned program. If you have a
> verbose program whose output doesn't match, it becomes difficult for
> perl to keep up. Typically the answer is to set max_accum() to some sane
> value, perhaps 10000. What can go wrong is that if you set the max value
> to less than what is read in one pass, you may lose matching data.
> Currently Expect will read up to 8096 bytes at a time from a stream.
> Hmm, maybe it would make sense to make this some fraction of
> exp_Max_Accum. Alternately, maybe it would make sense to trim exp_Accum
> _after_ the pattern matches rather than before.
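The timeout-clearing workaround described above might look roughly like this, assuming the same hypothetical $exp as before (the pattern is a stand-in; clear_accum() is the Expect method that empties the accumulator):

    # Clear the accumulator on every timeout so unmatched output
    # cannot pile up between matches.
    while (1) {
        my ($pos, $err) = $exp->expect(
            15,                                  # 15-second timeout
            [ qr/some pattern/ => sub { } ],     # stand-in for ~60 patterns
        );
        if (defined $err && $err =~ /^1:/) {     # "1:TIMEOUT"
            $exp->clear_accum();                 # drop the unmatched backlog
            next;
        }
        last if defined $err;                    # EOF or spawn death: stop
    }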
From: Austin S. <te...@of...> - 2002-05-18 00:38:32
On Fri, May 17, 2002 at 01:37:33PM -0700, Syamala Tadigadapa wrote:
> I have the following questions:
>
> 1. Why can't I have a method in Expect that allows me to truncate the
>    accumulator to the point of the pattern match? If present, I could
>    simply call the method once I find a match. This would prevent the
>    buffer/accumulator from choking.
>
> 2. Expect could trim the accumulator at each match to a configurable
>    size, say a default of 4k. This should be done at timeout points
>    also. This too would prevent the accumulator/buffer from choking.

The accumulator should be clearing itself after every successful match, though not on timeout. 8096 is the maximum number of bytes that can be read at one time in the current implementation. I would suggest a slightly _larger_ number, to cover the possibility of a pattern match that occurs right at the border between the last read and the current one.

If you are still seeing high CPU utilization after you set max_accum, I would guess that you are seeing many small reads very close together in time, rather than one big one, forcing the Expect matching engine into action each time. If this is indeed the case, it might be possible to put small delays inside the Expect matching loop to force it to read in larger chunks.

Please let us know how it goes.

	Thanks,

	Austin
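A sizing sketch along those lines (the 8096-byte read cap comes from the current implementation as described above; the margin figure is hypothetical):

    my $read_size      = 8096;   # maximum single read in current Expect
    my $pattern_margin = 1000;   # longest match expected to straddle reads

    # Keep enough prior context in the accumulator that a match
    # spanning the border between two reads is still visible.
    $exp->max_accum($read_size + $pattern_margin);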