From: Zoran V. <zv...@ar...> - 2005-04-07 08:42:52
|
Am 06.04.2005 um 21:07 schrieb Stephen Deasey:

> There have been some changes in this area of Tcl. Just a very quick
> scan brought up this, which is tagged 8.4.7:
>
> http://cvs.sourceforge.net/viewcvs.py/tcl/tcl/unix/tclUnixThrd.c?r1=1.23.2.6&r2=1.23.2.7

Stephen, you were on the right track! I somehow messed up my testing,
but indeed, the problem is introduced in 8.4.7 for the first time.

The problematic spot is the new TclpFreeAllocCache(ptr) call in
unix/tclUnixThrd.c, where some aggressive cleanup has been introduced
which screwed something up. I will have to understand what the people
wanted to achieve by this, and after figuring that out I will fix it
for 8.4.10 (if it comes out). See:

    void
    TclpFreeAllocCache(ptr)
        void *ptr;
    {
        extern void TclFreeAllocCache(void *);

        TclFreeAllocCache(ptr);

        /*
         * Perform proper cleanup of things done in TclpGetAllocCache.
         */

        if (initialized) {
            pthread_key_delete(key);
            initialized = 0;
        }
    }

If you rewrite it as:

    void
    TclpFreeAllocCache(ptr)
        void *ptr;
    {
        extern void TclFreeAllocCache(void *);

        TclFreeAllocCache(ptr);
    }

it does not leak any more. So, I suppose, I go back to electronic
archeology again...

Cheers,
Zoran
From: Stephen D. <sd...@gm...> - 2005-04-06 22:16:28
|
On Apr 6, 2005 1:43 PM, Vlad Seryakov <vl...@cr...> wrote: > Oh, yes, i remember read something about that, i am actually using up to > 100 threads at the same time and they are created/exit all the time. > That was one of my intentions to be able to submit data to conn thread > to minimize thread creation instead of introducing new thread pooling > manager. ns_job? |
From: Zoran V. <zv...@ar...> - 2005-04-06 21:19:31
|
Am 06.04.2005 um 21:07 schrieb Stephen Deasey: > There have been some changes in this area of Tcl. Just a very quick > scan brought up this, which is tagged 8.4.7: > > http://cvs.sourceforge.net/viewcvs.py/tcl/tcl/unix/tclUnixThrd.c? > r1=1.23.2.6&r2=1.23.2.7 > > Correct threaded obj allocator to > fully cleanup on exit and allow for > reinitialization. [Bug #736426] > (mistachkin, kenny) > > OK. Light at the end of tunnel. The 8.4.1 does not leak. So, I suppose I will now have to work myself up the release tree and see where this was introduced and then revert the change! Huh... Zoran |
From: Vlad S. <vl...@cr...> - 2005-04-06 20:49:51
|
Oh, yes, i remember read something about that, i am actually using up to 100 threads at the same time and they are created/exit all the time. That was one of my intentions to be able to submit data to conn thread to minimize thread creation instead of introducing new thread pooling manager. Zoran Vasiljevic wrote: > > Am 06.04.2005 um 22:19 schrieb Vlad Seryakov: > >> >> Is memory contention that big so new memory allocator is needed, i >> never looked into that and do not have >> clear picture why this zippy appeared in the first place. What kind of >> numbers we are talking about comparing >> zippy and regular mallocs? > > > Oh, it can be actually many times faster... especially with reasonable > number > of threads (10-20). > There are some very nice comparisons of MT memory allocators on: > > http://developers.sun.com/solaris/articles/multiproc/multiproc.html > > > > > ------------------------------------------------------- > SF email is sponsored by - The IT Product Guide > Read honest & candid reviews on hundreds of IT Products from real users. > Discover which products truly live up to the hype. Start reading now. > http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Zoran V. <zv...@ar...> - 2005-04-06 20:33:46
|
Am 06.04.2005 um 22:19 schrieb Vlad Seryakov: > > Is memory contention that big so new memory allocator is needed, i > never looked into that and do not have > clear picture why this zippy appeared in the first place. What kind of > numbers we are talking about comparing > zippy and regular mallocs? Oh, it can be actually many times faster... especially with reasonable number of threads (10-20). There are some very nice comparisons of MT memory allocators on: http://developers.sun.com/solaris/articles/multiproc/multiproc.html |
From: Vlad S. <vl...@cr...> - 2005-04-06 20:25:44
|
I tried google allocator under Linux and NS just crashed, so i did not bother to investigate. Is memory contention that big so new memory allocator is needed, i never looked into that and do not have clear picture why this zippy appeared in the first place. What kind of numbers we are talking about comparing zippy and regular mallocs? Zoran Vasiljevic wrote: > > Am 06.04.2005 um 20:39 schrieb Vlad Seryakov: > >> I used valgrind and it complained about huge memory leaks somewhere in >> the thread allocation, nothing in particular but it pointed on >> Ns_CsInit function when used master lock. I could not figure out there >> anything, it is just simple plain ns_calloc but still, when >> zippy disabled it is running fine. >> > > Do you primarily use Linux? In that case the ptmalloc should be already > there in glibc > and this whole junk can safely be skipped. > Our problem is that we need it for Solaris and Mac OSX and Windows and > Linux. > Hence I must try the standalone ptmalloc unless I get the Tcl built-in > allocator > working sane. And, I yet have to see what is the situation looking like > on the Win... > > But as the Jim Davidson writes: > > "Going forward, there are likley newer options such as the google > allocator which is similar to > zippy for small blocks but appears to be smarter for large blocks > and/or garbage collection." > > Bah... this thing even does not compile on Darwin... > > Zoran > > > > > > ------------------------------------------------------- > SF email is sponsored by - The IT Product Guide > Read honest & candid reviews on hundreds of IT Products from real users. > Discover which products truly live up to the hype. Start reading now. > http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Zoran V. <zv...@ar...> - 2005-04-06 20:24:02
|
Am 06.04.2005 um 21:07 schrieb Stephen Deasey: > > I'm looking in /proc/$PID/maps and yes, the tcl (8.4.6) I have > installed under /opt/tcl/lib/libtcl8.4.so is the only one linked in. > >> From the command prompt that results from running 'make runtest', no > leaks. From the ncsp port, no leaks. I'm looking directly at > /proc/$PID/status, vm is stable at ~20MB, res ~3MB. Absolutely strange. > > However! Running /opt/tcl/bin/tclsh8.4 *does* leak memory. Not as > quickly as the test I did with 8.5, though. Aha... meaning somebody broke the thing when moving to Tcl? > > Thinking about this, it doesn't seem like an error in strategy of the > zippy ('quick') allocator, i.e. fragmentation. Looks like cleanup on > thread exit is just buggy. This is what I thought as well and did some gdb-ing, but it all seems fine. > > There have been some changes in this area of Tcl. Just a very quick > scan brought up this, which is tagged 8.4.7: > > http://cvs.sourceforge.net/viewcvs.py/tcl/tcl/unix/tclUnixThrd.c? > r1=1.23.2.6&r2=1.23.2.7 > > Correct threaded obj allocator to > fully cleanup on exit and allow for > reinitialization. [Bug #736426] > (mistachkin, kenny) > Nope. I tried the 8.4.5 but it appears to have the same problem. Zoran |
From: Zoran V. <zv...@ar...> - 2005-04-06 20:20:19
|
Am 06.04.2005 um 20:39 schrieb Vlad Seryakov: > I used valgrind and it complained about huge memory leaks somewhere in > the thread allocation, nothing in particular but it pointed on > Ns_CsInit function when used master lock. I could not figure out there > anything, it is just simple plain ns_calloc but still, when > zippy disabled it is running fine. > Do you primarily use Linux? In that case the ptmalloc should be already there in glibc and this whole junk can safely be skipped. Our problem is that we need it for Solaris and Mac OSX and Windows and Linux. Hence I must try the standalone ptmalloc unless I get the Tcl built-in allocator working sane. And, I yet have to see what is the situation looking like on the Win... But as the Jim Davidson writes: "Going forward, there are likley newer options such as the google allocator which is similar to zippy for small blocks but appears to be smarter for large blocks and/or garbage collection." Bah... this thing even does not compile on Darwin... Zoran |
From: Stephen D. <sd...@gm...> - 2005-04-06 19:07:13
|
On Apr 6, 2005 9:12 AM, Zoran Vasiljevic <zv...@ar...> wrote: > > Am 06.04.2005 um 13:25 schrieb Zoran Vasiljevic: > > > > > I will try the b. first. Stephen, if you can confirm that > > between 8.4.6 and 8.4.9 the leak start to raise its ugly > > head, the work to fix that will be much less. Otherwise > > I must prepare myself for several days electronic archeology. > > Bah... I just installed the 8.4.6 and see: the same behaviour. > Which is what I did expect. Yet, I can't explain why Stephen > is getting no leaks with 8.4.6. > > Stephen, can you please try again, but be sure you're using > the 8.4.6 lib. In my setup, everything from 8.4.6 to 8.4.9 > leaks on all platforms we have/use: Sun, Linux, MacOSX. I'm looking in /proc/$PID/maps and yes, the tcl (8.4.6) I have installed under /opt/tcl/lib/libtcl8.4.so is the only one linked in. From the command prompt that results from running 'make runtest', no leaks. From the ncsp port, no leaks. I'm looking directly at /proc/$PID/status, vm is stable at ~20MB, res ~3MB. However! Running /opt/tcl/bin/tclsh8.4 *does* leak memory. Not as quickly as the test I did with 8.5, though. The result from the time command in the leaky tclsh8.4 is ~2770 and slowly going up to ~3130 as I run it 6 or 7 times, 1000 iterations each. From the good nsd however, it always takes ~4800. Some how, my good nsd is performing some extra cleanup... Threads in the good nsd do seem to actually start up and run the command. Checked with an ns_log statement. Thinking about this, it doesn't seem like an error in strategy of the zippy ('quick') allocator, i.e. fragmentation. Looks like cleanup on thread exit is just buggy. There have been some changes in this area of Tcl. Just a very quick scan brought up this, which is tagged 8.4.7: http://cvs.sourceforge.net/viewcvs.py/tcl/tcl/unix/tclUnixThrd.c?r1=1.23.2.6&r2=1.23.2.7 Correct threaded obj allocator to fully cleanup on exit and allow for reinitialization. [Bug #736426] (mistachkin, kenny) |
From: Vlad S. <vl...@cr...> - 2005-04-06 18:46:16
|
I used valgrind and it complained about huge memory leaks somewhere in
the thread allocation, nothing in particular, but it pointed at the
Ns_CsInit function when the master lock is used. I could not figure out
anything there, it is just simple plain ns_calloc, but still, with
zippy disabled it is running fine.

Zoran Vasiljevic wrote:
> Am 06.04.2005 um 18:12 schrieb Zoran Vasiljevic:
>> Stephen, can you please try again, but be sure you're using
>> the 8.4.6 lib. In my setup, everything from 8.4.6 to 8.4.9
>> leaks on all platforms we have/use: Sun, Linux, MacOSX.
>
> Hm. I tried 8.4.6. Same thing (leaks).
>
> Now, I have looked into the thing and must admit that I'd
> have to spend more time than I thought in order to understand
> what is this beast doing *exactly* in order to pinpoint the
> problem.
>
> In the meantime, I wen't shopping for a better solution and
> found basically Hoard and ptmalloc. The former is GPL or
> requires commercial license, hence it is out of the question
> for us/me at the moment. The ptmalloc seems to be implemented
> in glibc as I understand it. Both of them are practically a
> snap-in replacements for standard libc allocators. On Linux,
> I believe, this whole "zippy" (whatever that is) stuff is
> really not needed, since glibc based. For the rest, well,
> Solaris has starting with 2.9 a goot mtmalloc implementation,
> whereas Darwin just has none.
>
> I will try seeing how the ptmalloc does the work on Solaris/Darwin
> and check wether this could be an option for us. If yes, I will
> have to recompile everything with USE_THREAD_ALLOC removed from
> all makefiles and link against the ptmalloc library. I will keep
> you informed what I got.
>
> Bottom line is, all of this is really off-topic for Naviserver since
> it does rely on the Tcl lib for its memory access. Since I do some
> work there, I will try to see with the people in Tcl project and
> with AS (I think Jim Davidson is responsible for this allocator)
> what is really happening inside and if this can be fixed up in order
> to keep Tcl itself stable.
>
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Zoran V. <zv...@ar...> - 2005-04-06 18:37:24
|
Am 06.04.2005 um 18:12 schrieb Zoran Vasiljevic: > Stephen, can you please try again, but be sure you're using > the 8.4.6 lib. In my setup, everything from 8.4.6 to 8.4.9 > leaks on all platforms we have/use: Sun, Linux, MacOSX. > Hm. I tried 8.4.6. Same thing (leaks). Now, I have looked into the thing and must admit that I'd have to spend more time than I thought in order to understand what is this beast doing *exactly* in order to pinpoint the problem. In the meantime, I wen't shopping for a better solution and found basically Hoard and ptmalloc. The former is GPL or requires commercial license, hence it is out of the question for us/me at the moment. The ptmalloc seems to be implemented in glibc as I understand it. Both of them are practically a snap-in replacements for standard libc allocators. On Linux, I believe, this whole "zippy" (whatever that is) stuff is really not needed, since glibc based. For the rest, well, Solaris has starting with 2.9 a goot mtmalloc implementation, whereas Darwin just has none. I will try seeing how the ptmalloc does the work on Solaris/Darwin and check wether this could be an option for us. If yes, I will have to recompile everything with USE_THREAD_ALLOC removed from all makefiles and link against the ptmalloc library. I will keep you informed what I got. Bottom line is, all of this is really off-topic for Naviserver since it does rely on the Tcl lib for its memory access. Since I do some work there, I will try to see with the people in Tcl project and with AS (I think Jim Davidson is responsible for this allocator) what is really happening inside and if this can be fixed up in order to keep Tcl itself stable. Zoran |
From: Zoran V. <zv...@ar...> - 2005-04-06 16:12:48
|
Am 06.04.2005 um 13:25 schrieb Zoran Vasiljevic: > > I will try the b. first. Stephen, if you can confirm that > between 8.4.6 and 8.4.9 the leak start to raise its ugly > head, the work to fix that will be much less. Otherwise > I must prepare myself for several days electronic archeology. Bah... I just installed the 8.4.6 and see: the same behaviour. Which is what I did expect. Yet, I can't explain why Stephen is getting no leaks with 8.4.6. Stephen, can you please try again, but be sure you're using the 8.4.6 lib. In my setup, everything from 8.4.6 to 8.4.9 leaks on all platforms we have/use: Sun, Linux, MacOSX. I hate to do this job really. I'm sick of fishing this memory-related stuff over and over again. This is killing us. Grr... (The Desperate) Zoran |
From: Zoran V. <zv...@ar...> - 2005-04-06 11:25:29
|
Am 06.04.2005 um 09:46 schrieb Bernd Eidenschink: > > Hi Zoran, > >> would you mind putting this in your nscp session and observe the >> virtual >> memory usage (e.g. with the top utility) in a separate window? >> >> time {set t [ns_thread begin "set a 1"]; ns_thread join $t} >> 10000 >> >> I start with about 45MB server (startup) and end with 600MB after the >> test! > > SuSE 2.6.8-24.11-default; tcl8.4.9 > > Compiled fresh checkout version. Virtual memory starts with > about 7MB and ends with 461MB. > Wonderful. I mean, absolute shit! Hence, there are two possibilities: a. scrap the threaded allocator b. fix the threaded allocator I will try the b. first. Stephen, if you can confirm that between 8.4.6 and 8.4.9 the leak start to raise its ugly head, the work to fix that will be much less. Otherwise I must prepare myself for several days electronic archeology. Zoran |
From: Bernd E. <eid...@we...> - 2005-04-06 07:40:05
|
Hi Zoran, > would you mind putting this in your nscp session and observe the virtual > memory usage (e.g. with the top utility) in a separate window? > > time {set t [ns_thread begin "set a 1"]; ns_thread join $t} 10000 > > I start with about 45MB server (startup) and end with 600MB after the > test! SuSE 2.6.8-24.11-default; tcl8.4.9 Compiled fresh checkout version. Virtual memory starts with about 7MB and ends with 461MB. |
From: Vlad S. <vl...@cr...> - 2005-04-06 01:39:42
|
You can do this, and you have access to OSX, I do not.

Zoran Vasiljevic wrote:
> Am 04.04.2005 um 17:28 schrieb Vlad Seryakov:
>> I think it is safe for us to replace NS poll with attached
>> implementation.
>
> Great! Would you put this in or should I do it?
> I have access to 10.2, 10.3 and 10.4 Mac OSX (this is one of the most
> important platforms for us, btw)...
> Thank you for finding this out...
>
> Zoran
>
>> Stephen Deasey wrote:
>>> I think OSX is the only poll() underachiever we care about.
>>> Sourceforge has 10.1 and 10.2 hosts you can compile and test on, if
>>> you're interested:
>>>
>>> http://sourceforge.net/docman/display_doc.php?docid=762&group_id=1
>>>
>>> On Apr 3, 2005 4:44 PM, Vlad Seryakov <vl...@cr...> wrote:
>>>> I found this, looks like we can use it.
>>>>
>>>> i made a copy, so if everybody is okay i can replace this poll.
>>>>
>>>> http://mail.python.org/pipermail/python-list/2001-October/069168.html
>>
>> #ifndef HAVE_POLL
>>
>> struct pollfd {
>>     int fd;
>>     short events;
>>     short revents;
>> };
>>
>> #define POLLIN   001
>> #define POLLPRI  002
>> #define POLLOUT  004
>> #define POLLNORM POLLIN
>> #define POLLERR  010
>> #define POLLHUP  020
>> #define POLLNVAL 040
>>
>> ----------------------------------------
>> /*
>>  * prt
>>  *
>>  * Copyright 1994 University of Washington
>>  *
>>  * Permission is hereby granted to copy this software, and to
>>  * use and redistribute it, except that this notice may not be
>>  * removed. The University of Washington does not guarantee
>>  * that this software is suitable for any purpose and will not
>>  * be held liable for any damage it may cause.
>>  */
>>
>> /*
>> ** emulate poll() for those platforms (Ultrix) that don't have it.
>> */
>>
>> #include <sys/types.h>
>> #include <sys/time.h>
>>
>> int
>> poll(fds, nfds, timo)
>>     struct pollfd *fds;
>>     unsigned long nfds;
>>     int timo;
>> {
>>     struct timeval timeout, *toptr;
>>     fd_set ifds, ofds, efds, *ip, *op, *ep;
>>     int i, rc, n;
>>
>>     FD_ZERO(&ifds);
>>     FD_ZERO(&ofds);
>>     FD_ZERO(&efds);
>>     for (i = 0, n = -1, op = ip = 0; i < nfds; ++i) {
>>         fds[i].revents = 0;
>>         if (fds[i].fd < 0)
>>             continue;
>>         if (fds[i].fd > n)
>>             n = fds[i].fd;
>>         if (fds[i].events & (POLLIN|POLLPRI)) {
>>             ip = &ifds;
>>             FD_SET(fds[i].fd, ip);
>>         }
>>         if (fds[i].events & POLLOUT) {
>>             op = &ofds;
>>             FD_SET(fds[i].fd, op);
>>         }
>>         FD_SET(fds[i].fd, &efds);
>>     }
>>     if (timo < 0)
>>         toptr = 0;
>>     else {
>>         toptr = &timeout;
>>         timeout.tv_sec = timo / 1000;
>>         timeout.tv_usec = (timo - timeout.tv_sec * 1000) * 1000;
>>     }
>>     rc = select(++n, ip, op, &efds, toptr);
>>     if (rc <= 0)
>>         return rc;
>>
>>     for (i = 0, n = 0; i < nfds; ++i) {
>>         if (fds[i].fd < 0) continue;
>>         if (fds[i].events & (POLLIN|POLLPRI) && FD_ISSET(i, &ifds))
>>             fds[i].revents |= POLLIN;
>>         if (fds[i].events & POLLOUT && FD_ISSET(i, &ofds))
>>             fds[i].revents |= POLLOUT;
>>         if (FD_ISSET(i, &efds))
>>             /* Some error was detected ... should be some way to know. */
>>             fds[i].revents |= POLLHUP;
>>     }
>>     return rc;
>> }
>> #endif

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Vlad S. <vl...@cr...> - 2005-04-06 00:49:28
|
Yes, in my serve ri use threads extensively and i switched to normal malloc becaus emy server was leaking very bad. It happens on new 2.6 kernel with latest glibc, on on RH 7.2 the same software was runing fine. I do not use AS memory allocator anymore. Zoran Vasiljevic wrote: > Hey friends, > > would you mind putting this in your nscp session and observe the virtual > memory usage (e.g. with the top utility) in a separate window? > > time {set t [ns_thread begin "set a 1"]; ns_thread join $t} 10000 > > I start with about 45MB server (startup) and end with 600MB after the test! > > I believe this is not a memleak per-se (i.e. one on e could catch with > Purify or such...). I believe this is a side-effect of the AOL memory > allocator which has been accepted by the Tcl core in threaded builds. > > Anyways, this is absolutely unacceptable for us. I must do some lobying > in the Tcl project to correct this somehow. This could explain some > memory-related problems in AS (and NS, consequently) reported by some > users. Alternbative could be to test the new Googled tmalloc (or however > it is called)... > > Cheers > Zoran > > > > ------------------------------------------------------- > SF email is sponsored by - The IT Product Guide > Read honest & candid reviews on hundreds of IT Products from real users. > Discover which products truly live up to the hype. Start reading now. > http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Stephen D. <sd...@gm...> - 2005-04-05 22:21:56
|
On Apr 5, 2005 2:44 PM, Zoran Vasiljevic <zv...@ar...> wrote: > > Am 05.04.2005 um 23:35 schrieb Stephen Deasey: > > > Fedora Core 3, 2.6.10 kernel, tcl 8.4.6. Looking at the tclConfig.sh, > > it was compiled -DUSE_THREAD_ALLOC=1. > > > > Is there anything weird about your build, 64bit, static, ...? > > > > Nope. All standard, out-of-the-box. I'm using the 8.4.9 Tcl but > AFAIK, there are no changes in that area since 8.4.6. > I'm absolutely sure this is the issue of the allocator because > the same thing happens with tclsh and threading extension. > By removing -DUSE_THREAD_ALLOC=1 and recompiling the tcl lib > all is fine. Eh... > > I did not try under Linux (must do this tomorrow at work). > But, there must be something wrong there. I recall checking every > piece of code we use with Purify extensively. No problems/leaks. > Having got report from one of the customers with 900MB server > that rang many bells in my head. > Do you have access to any other platfrom there? Sorry, just Linux. I rebuilt against a copy of tcl8.5 (from cvs some months ago) I have on this machine, and now it leaks like a sieve... Looks like a regression in Tcl. |
From: Zoran V. <zv...@ar...> - 2005-04-05 21:48:42
|
Am 05.04.2005 um 23:27 schrieb Stephen Deasey: > > > Should we be concerned about that last comment (Some error was > detected...)? > Hm... to answer that I'd have to read the stuff more carefully :-) Maybe also look in the gnu implementation and compare the two. Zoran |
From: Zoran V. <zv...@ar...> - 2005-04-05 21:45:04
|
Am 05.04.2005 um 23:35 schrieb Stephen Deasey: > Fedora Core 3, 2.6.10 kernel, tcl 8.4.6. Looking at the tclConfig.sh, > it was compiled -DUSE_THREAD_ALLOC=1. > > Is there anything weird about your build, 64bit, static, ...? > Nope. All standard, out-of-the-box. I'm using the 8.4.9 Tcl but AFAIK, there are no changes in that area since 8.4.6. I'm absolutely sure this is the issue of the allocator because the same thing happens with tclsh and threading extension. By removing -DUSE_THREAD_ALLOC=1 and recompiling the tcl lib all is fine. Eh... I did not try under Linux (must do this tomorrow at work). But, there must be something wrong there. I recall checking every piece of code we use with Purify extensively. No problems/leaks. Having got report from one of the customers with 900MB server that rang many bells in my head. Do you have access to any other platfrom there? > > ------------------------------------------------------- > SF email is sponsored by - The IT Product Guide > Read honest & candid reviews on hundreds of IT Products from real > users. > Discover which products truly live up to the hype. Start reading now. > http://ads.osdn.com/?ad_id=6595&alloc_id=14396&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel |
From: Stephen D. <sd...@gm...> - 2005-04-05 21:35:13
|
On Apr 5, 2005 2:25 PM, Zoran Vasiljevic <zv...@ar...> wrote: > > Am 05.04.2005 um 23:21 schrieb Stephen Deasey: > > > In a fresh, empty server (make runtest) memory use doesn't climb at > > all (2.8MB res, 17MB virt). > > > > Do you have any C modules loaded? Use ttrace? > > Nothing! Just vanilla server. Have tried under Solaris. Same thing. > Then went and recompiled Tcl lib w/o the threaded allocator and > repeated the same test. Stable memory, i.e. no leaks. Compoiled > the threaded alloc again and repeated on Sol and Darwin and leak > appears again. Hm... You are testing under Linux? Fedora Core 3, 2.6.10 kernel, tcl 8.4.6. Looking at the tclConfig.sh, it was compiled -DUSE_THREAD_ALLOC=1. Is there anything weird about your build, 64bit, static, ...? |
From: Stephen D. <sd...@gm...> - 2005-04-05 21:27:42
|
On Apr 4, 2005 8:28 AM, Vlad Seryakov <vl...@cr...> wrote:
> I think it is safe for us to replace NS poll with attached implementation.
>
> Stephen Deasey wrote:
>> I think OSX is the only poll() underachiever we care about.
>> Sourceforge has 10.1 and 10.2 hosts you can compile and test on, if
>> you're interested:
>>
>> http://sourceforge.net/docman/display_doc.php?docid=762&group_id=1
>>
>> On Apr 3, 2005 4:44 PM, Vlad Seryakov <vl...@cr...> wrote:
>>> I found this, looks like we can use it.
>>>
>>> i made a copy, so if everybody is okay i can replace this poll.
>>>
>>> http://mail.python.org/pipermail/python-list/2001-October/069168.html
>
> #ifndef HAVE_POLL
>
> struct pollfd {
>     int fd;
>     short events;
>     short revents;
> };
>
> #define POLLIN   001
> #define POLLPRI  002
> #define POLLOUT  004
> #define POLLNORM POLLIN
> #define POLLERR  010
> #define POLLHUP  020
> #define POLLNVAL 040
>
> ----------------------------------------
> /*
>  * prt
>  *
>  * Copyright 1994 University of Washington
>  *
>  * Permission is hereby granted to copy this software, and to
>  * use and redistribute it, except that this notice may not be
>  * removed. The University of Washington does not guarantee
>  * that this software is suitable for any purpose and will not
>  * be held liable for any damage it may cause.
>  */
>
> /*
> ** emulate poll() for those platforms (Ultrix) that don't have it.
> */
>
> #include <sys/types.h>
> #include <sys/time.h>
>
> int
> poll(fds, nfds, timo)
>     struct pollfd *fds;
>     unsigned long nfds;
>     int timo;
> {
>     struct timeval timeout, *toptr;
>     fd_set ifds, ofds, efds, *ip, *op, *ep;
>     int i, rc, n;
>
>     FD_ZERO(&ifds);
>     FD_ZERO(&ofds);
>     FD_ZERO(&efds);
>     for (i = 0, n = -1, op = ip = 0; i < nfds; ++i) {
>         fds[i].revents = 0;
>         if (fds[i].fd < 0)
>             continue;
>         if (fds[i].fd > n)
>             n = fds[i].fd;
>         if (fds[i].events & (POLLIN|POLLPRI)) {
>             ip = &ifds;
>             FD_SET(fds[i].fd, ip);
>         }
>         if (fds[i].events & POLLOUT) {
>             op = &ofds;
>             FD_SET(fds[i].fd, op);
>         }
>         FD_SET(fds[i].fd, &efds);
>     }
>     if (timo < 0)
>         toptr = 0;
>     else {
>         toptr = &timeout;
>         timeout.tv_sec = timo / 1000;
>         timeout.tv_usec = (timo - timeout.tv_sec * 1000) * 1000;
>     }
>     rc = select(++n, ip, op, &efds, toptr);
>     if (rc <= 0)
>         return rc;
>
>     for (i = 0, n = 0; i < nfds; ++i) {
>         if (fds[i].fd < 0) continue;
>         if (fds[i].events & (POLLIN|POLLPRI) && FD_ISSET(i, &ifds))
>             fds[i].revents |= POLLIN;
>         if (fds[i].events & POLLOUT && FD_ISSET(i, &ofds))
>             fds[i].revents |= POLLOUT;
>         if (FD_ISSET(i, &efds))
>             /* Some error was detected ... should be some way to know. */
>             fds[i].revents |= POLLHUP;
>     }
>     return rc;
> }
> #endif

Should we be concerned about that last comment (Some error was detected...)?
From: Zoran V. <zv...@ar...> - 2005-04-05 21:25:14
On 05.04.2005 at 23:21, Stephen Deasey wrote:
> In a fresh, empty server (make runtest) memory use doesn't climb at
> all (2.8MB res, 17MB virt).
>
> Do you have any C modules loaded? Use ttrace?

Nothing! Just a vanilla server. Have tried under Solaris: same thing.
Then went and recompiled the Tcl lib w/o the threaded allocator and
repeated the same test. Stable memory, i.e. no leaks. Compiled with
the threaded allocator again and repeated on Solaris and Darwin, and
the leak appears again. Hm... You are testing under Linux?

> On Apr 5, 2005 1:19 PM, Zoran Vasiljevic <zv...@ar...> wrote:
> > Hey friends,
> >
> > would you mind putting this in your nscp session and observing the
> > virtual memory usage (e.g. with the top utility) in a separate
> > window?
> >
> >     time {set t [ns_thread begin "set a 1"]; ns_thread join $t} 10000
> >
> > I start with about a 45MB server (startup) and end with 600MB after
> > the test!
> >
> > I believe this is not a memleak per se (i.e. one you could catch
> > with Purify or such). I believe this is a side-effect of the AOL
> > memory allocator which has been accepted by the Tcl core in
> > threaded builds.
> >
> > Anyway, this is absolutely unacceptable for us. I must do some
> > lobbying in the Tcl project to correct this somehow. This could
> > explain some memory-related problems in AS (and NS, consequently)
> > reported by some users. An alternative could be to test the new
> > Google tcmalloc (or whatever it is called)...
> >
> > Cheers
> > Zoran
From: Stephen D. <sd...@gm...> - 2005-04-05 21:21:40
In a fresh, empty server (make runtest) memory use doesn't climb at
all (2.8MB res, 17MB virt).

Do you have any C modules loaded? Use ttrace?

On Apr 5, 2005 1:19 PM, Zoran Vasiljevic <zv...@ar...> wrote:
> Hey friends,
>
> would you mind putting this in your nscp session and observing the
> virtual memory usage (e.g. with the top utility) in a separate window?
>
>     time {set t [ns_thread begin "set a 1"]; ns_thread join $t} 10000
>
> I start with about a 45MB server (startup) and end with 600MB after
> the test!
>
> I believe this is not a memleak per se (i.e. one you could catch with
> Purify or such). I believe this is a side-effect of the AOL memory
> allocator which has been accepted by the Tcl core in threaded builds.
>
> Anyway, this is absolutely unacceptable for us. I must do some
> lobbying in the Tcl project to correct this somehow. This could
> explain some memory-related problems in AS (and NS, consequently)
> reported by some users. An alternative could be to test the new
> Google tcmalloc (or whatever it is called)...
>
> Cheers
> Zoran
From: Zoran V. <zv...@ar...> - 2005-04-05 20:19:41
Hey friends,

would you mind putting this in your nscp session and observing the
virtual memory usage (e.g. with the top utility) in a separate window?

    time {set t [ns_thread begin "set a 1"]; ns_thread join $t} 10000

I start with about a 45MB server (startup) and end with 600MB after the
test!

I believe this is not a memleak per se (i.e. one you could catch with
Purify or such). I believe this is a side-effect of the AOL memory
allocator which has been accepted by the Tcl core in threaded builds.

Anyway, this is absolutely unacceptable for us. I must do some lobbying
in the Tcl project to correct this somehow. This could explain some
memory-related problems in AS (and NS, consequently) reported by some
users. An alternative could be to test the new Google tcmalloc (or
whatever it is called)...

Cheers
Zoran
From: Zoran V. <zv...@ar...> - 2005-04-05 20:11:43
On 04.04.2005 at 17:28, Vlad Seryakov wrote:
> I think it is safe for us to replace the NS poll with the attached
> implementation.

Great! Would you put this in, or should I do it? I have access to 10.2,
10.3 and 10.4 Mac OSX (this is one of the most important platforms for
us, btw)...

Thank you for finding this out...

Zoran

> Stephen Deasey wrote:
> > I think OSX is the only poll() underachiever we care about.
> > Sourceforge has 10.1 and 10.2 hosts you can compile and test on, if
> > you're interested:
> >
> > http://sourceforge.net/docman/display_doc.php?docid=762&group_id=1
> >
> > On Apr 3, 2005 4:44 PM, Vlad Seryakov <vl...@cr...> wrote:
> > > I found this, looks like we can use it.
> > >
> > > I made a copy, so if everybody is okay I can replace this poll.
> > >
> > > http://mail.python.org/pipermail/python-list/2001-October/069168.html
>
> --
> Vlad Seryakov
> 571 262-8608 office
> vl...@cr...
> http://www.crystalballinc.com/vlad/
>
> [Attachment: the same select()-based poll() emulation quoted in full
> in Stephen's reply earlier in this thread.]