From: Zoran V. <zv...@ar...> - 2006-02-01 16:02:04
|
Hi!

Running the nsthreadtest utility reveals some interesting facts:

Mac OSX:
starting 10 malloc threads...waiting....done: 0 seconds, 48015 usec
starting 10 ns_malloc threads...waiting....done: 0 seconds, 75784 usec

Linux:
starting 10 malloc threads...waiting....done: 0 seconds, 9686 usec
starting 10 ns_malloc threads...waiting....done: 0 seconds, 30727 usec

Solaris:
starting 10 malloc threads...waiting....done: 0 seconds, 31668 usec
starting 10 ns_malloc threads...waiting....done: 0 seconds, 54280 usec

What is most interesting is not the absolute times but the ratio between the ns_malloc and native malloc timings! According to the output, malloc is faster (quite significantly). I recall times when it was the other way around. ns_malloc was initially meant to be better (or more optimal) when using threads, as it would have less lock contention due to its internal handling.

What I'd ask you is to try to reproduce this on your machines so we can see whether this is something generally wrong or just wrong at my site. In the former case we'd have to dig in and see what is happening and whether these tests can really be depended on or not.

Cheers
Zoran |
From: Vlad S. <vl...@cr...> - 2006-02-01 16:12:31
|
On my machine with tcl 8.4.12:

starting 10 malloc threads...waiting....done: 0 seconds, 16003 usec
starting 10 ns_malloc threads...waiting....done: 0 seconds, 13207 usec

Zoran Vasiljevic wrote:
> Hi!
>
> Running the nsthreadtest utility reveals some interesting facts:
>
> Mac OSX
> starting 10 malloc threads...waiting....done: 0 seconds, 48015 usec
> starting 10 ns_malloc threads...waiting....done: 0 seconds, 75784 usec
>
> Linux:
> starting 10 malloc threads...waiting....done: 0 seconds, 9686 usec
> starting 10 ns_malloc threads...waiting....done: 0 seconds, 30727 usec
>
> Solaris:
> starting 10 malloc threads...waiting....done: 0 seconds, 31668 usec
> starting 10 ns_malloc threads...waiting....done: 0 seconds, 54280 usec
>
> What is most interesting is not the absolute sizes but the
> ratio between ns_malloc use and native malloc timings!
>
> According to the output, the malloc is faster (quite significantly).
> I recall times when it was the other way around. The ns_malloc was
> initially meant to be better (or more optimal) when using threads
> as it would have less lock contention due to its internal handling.
>
> What I'd ask you is to try to reproduce this on your machine
> so we can see whether this is something generally wrong or just
> wrong at my site.
>
> Cheers
> Zoran
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> for problems? Stop! Download the new AJAX search engine that makes
> searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
> _______________________________________________
> naviserver-devel mailing list
> nav...@li...
> https://lists.sourceforge.net/lists/listinfo/naviserver-devel

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/ |
From: Zoran V. <zv...@ar...> - 2006-02-02 09:02:22
|
On 01.02.2006 at 17:15, Vlad Seryakov wrote:
> On my machine with tcl 8.4.12
>
> starting 10 malloc threads...waiting....done: 0 seconds, 16003 usec
> starting 10 ns_malloc threads...waiting....done: 0 seconds, 13207 usec

I've been trying to see why I'm getting worse values with ns_malloc than with malloc, and it turned out that only on 2+ CPU boxes was I able to get ns_malloc to outperform malloc. On all single-CPU boxes the times were 2 to 4 times better with plain malloc!

Does anybody have both a single AND a multi-CPU box to try this out on?

Cheers
Zoran |
From: Bernd E. <b.e...@ki...> - 2006-02-02 09:45:30
|
Hi Zoran,

On Thursday, 2 February 2006 10:01, Zoran Vasiljevic wrote:
> On 01.02.2006 at 17:15, Vlad Seryakov wrote:
> > On my machine with tcl 8.4.12
> >
> > starting 10 malloc threads...waiting....done: 0 seconds, 16003 usec
> > starting 10 ns_malloc threads...waiting....done: 0 seconds, 13207 usec
>
> I've been trying to see why I'm getting worse values with ns_malloc
> as with malloc and it turned out to be that only in 2+CPU box I was
> able to get ns_malloc outperform the malloc. On all single-cpu boxes
> the times were 2 up to 4 times better with plain malloc!

Single-CPU box, Tcl 8.4.12; SuSE w/ Kernel 2.6.13-15.7-default:

starting 10 malloc threads...waiting....done: 0 seconds, 36667 usec
starting 10 ns_malloc threads...waiting....done: 0 seconds, 16620 usec

> Does anybody have a single AND multi-cpu box to try out?

Hyperthreading, but the Kernel sees 2 CPUs
(same compilation of NaviServer; Kernel 2.6.13-15.7-smp):

starting 10 malloc threads...waiting....done: 0 seconds, 13018 usec
starting 10 ns_malloc threads...waiting....done: 0 seconds, 10599 usec

Bernd. |
From: Zoran V. <zv...@ar...> - 2006-02-02 10:14:56
|
On 02.02.2006 at 10:48, Bernd Eidenschink wrote:
> Single-CPU-box, Tcl 8.4.12; SuSE w/ Kernel 2.6.13-15.7-default.
>
> starting 10 malloc threads...waiting....done: 0 seconds, 36667 usec
> starting 10 ns_malloc threads...waiting....done: 0 seconds, 16620 usec
>
> Hyperthreading, but the Kernel sees 2 CPUs:
> (Same compilation of NaviServer; Kernel 2.6.13-15.7-smp)
>
> starting 10 malloc threads...waiting....done: 0 seconds, 13018 usec
> starting 10 ns_malloc threads...waiting....done: 0 seconds, 10599 usec

Damn, I still cannot believe it... Can you download

http://www.archiware.com/www/downloads/memtest.c

and compile it with:

gcc -o memtest memtest.c -I/usr/local/include -L/usr/local/lib -ltcl8.4g

or (for Tcl w/o symbols):

gcc -o memtest memtest.c -I/usr/local/include -L/usr/local/lib -ltcl8.4

and give it a try on both machines?

This is what I get on single-CPU:

Tcl: 8.4.12
starting 16 malloc threads...waiting....done: 0 seconds, 94103 usec
starting 16 ckalloc threads...waiting....done: 0 seconds, 243616 usec

and on 2-CPU:

Tcl: 8.4.12
starting 16 malloc threads...waiting....done: 0 seconds, 435068 usec
starting 16 ckalloc threads...waiting....done: 0 seconds, 151250 usec

Both are Mac OSX boxes. The single-CPU box is a 1.5GHz Mac Mini and the 2-CPU box is a G4 with 2x800MHz PowerPC. |
From: Gustaf N. <ne...@wu...> - 2006-02-02 12:23:39
|
Zoran Vasiljevic wrote:
> This is what I get on single-cpu:
>
> Tcl: 8.4.12
> starting 16 malloc threads...waiting....done: 0 seconds, 94103 usec
> starting 16 ckalloc threads...waiting....done: 0 seconds, 243616 usec
>
> and on 2CPU:
>
> Tcl: 8.4.12
> starting 16 malloc threads...waiting....done: 0 seconds, 435068 usec
> starting 16 ckalloc threads...waiting....done: 0 seconds, 151250 usec
>
> Both are Mac OSX boxes. The single CPU is a 1.5GHz Mac Mini and the 2CPU
> is a G4 with 2x800Mhz PowerPC.

Could not resist trying this on our p5 production system under modest load (64-bit, Linux, LPAR with 25 processors, 8 dual-core with IBM's version of hyperthreading):

processor : 25
cpu       : POWER5 (gr)
clock     : 1904.448000MHz
revision  : 2.3
timebase  : 238056000
machine   : CHRP IBM,9117-570

Tcl: 8.4.11
starting 16 malloc threads...waiting....done: 0 seconds, 24791 usec
starting 16 ckalloc threads...waiting....done: 0 seconds, 6973 usec

Interestingly enough, nsthreadtest crashes (but with AOLserver 4.0.10). I know, we should upgrade...

-gustaf |
From: Zoran V. <zv...@ar...> - 2006-02-02 12:46:04
|
On 02.02.2006 at 13:23, Gustaf Neumann wrote:
> could not resist to try this on our p5 production system under modest load
> (64bit, linux, lpar with 25 processors, 8 dual-core with ibms
> version of hyperthreading)
> processor : 25
> cpu : POWER5 (gr)
> clock : 1904.448000MHz
> revision : 2.3

Urgs!?? What kind of monster machine is that???

The timings you get are what I expected on a multi-CPU box. However, all our single-CPU boxes are WAY slower with ckalloc than with the regular malloc. Do you happen to have a single-CPU box where you can try this out?

What I'm trying to understand is: is this pattern regular or not?

If yes, then the default Tcl allocator used for threaded builds is sub-optimal for general use and would have to be runtime-dependent (use it on multi-CPU boxes and not on single-CPU ones). But before I go the tough way of getting this done in Tcl (I will definitely experience fierce opposition from "works-for-me" people...) I'd better collect some very hard ammunition...

If not, then something is wrong with ALL our single-CPU machines, which is very, very hard to believe...

Thanks,
Zoran |
From: Andrew P. <at...@pi...> - 2006-02-02 13:00:17
|
On Thu, Feb 02, 2006 at 01:44:47PM +0100, Zoran Vasiljevic wrote:
> The timings you get are what I expected on a multi-cpu box.
> However all our single-cpu boxes are WAY slower with ckalloc
> than with the regular malloc.

Zoran, the boxes where your tests show unexpectedly slow ns_malloc were all Mac PowerPC boxes running OS-X, is that right? If so, then the common thread here is OS-X, no? OS-X is known to be significantly less efficient than Linux in some areas; this is probably one of them?

--
Andrew Piskorski <at...@pi...>
http://www.piskorski.com/ |
From: Zoran V. <zv...@ar...> - 2006-02-02 13:25:50
|
On 02.02.2006 at 13:59, Andrew Piskorski wrote:
> Zoran, the boxes where your tests show unexpectedly slow ns_malloc
> were all Mac PowerPC boxes running OS-X, is that right? If so, then
> the common thread here is OS-X, no? OS-X is known to be significantly
> less efficient than Linux in some areas, this is probably one of them?

Oh no...

I had tested Linux and Solaris boxes as well. I have 2 Solaris boxes: one with 1 CPU and one with 2 CPUs. The same goes for Mac OSX. Unfortunately, I have only 1-CPU Linux boxes to test...

What I saw is:

1 CPU: malloc better than ckalloc
2 CPU: ckalloc better than malloc

regardless of the OS.

Cheers
Zoran |
From: Vlad S. <vl...@cr...> - 2006-02-02 14:48:59
|
I modified memtest to exclude thread-related timings. It is at

http://www.crystalballinc.com/vlad/tmp/memtest.c

When I call it with "memtest 100000" and up, malloc gets faster than the Tcl allocator on my single-CPU box.

Zoran Vasiljevic wrote:
> On 02.02.2006 at 13:59, Andrew Piskorski wrote:
>> Zoran, the boxes where your tests show unexpectedly slow ns_malloc
>> were all Mac PowerPC boxes running OS-X, is that right? If so, then
>> the common thread here is OS-X, no? OS-X is known to be significantly
>> less efficient than Linux in some areas, this is probably one of them?
>
> Oh no...
>
> I had tested Linux and Solaris boxes as well.
> I have 2 Solaris boxes: one with 1 cpu and one with 2 cpu's.
> The same goes for Mac OSX.
> Unfortunately, I have only 1cpu linux boxes to test...
>
> What I saw is: 1cpu malloc better than ckalloc
>                2cpu ckalloc better than malloc
>
> regardless which OS.
>
> Cheers
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/ |
From: Zoran V. <zv...@ar...> - 2006-02-02 18:15:31
|
On 02.02.2006 at 15:52, Vlad Seryakov wrote:
> I modified memtest to exclude thread related timings.
>
> It is at the http://www.crystalballinc.com/vlad/tmp/memtest.c
>
> when i call it with memtest 100000 and +, mallocs getting faster
> than Tcl alloc on my single CPU box

After poking and poking and poking around, I came to the conclusion that everything is right and everything is wrong. Nice conclusion, hm? You can really build upon it :-/

I have modified nsthread/nsthreadtest.c to run the memory test with random block sizes (up to 16K) instead of the fixed 10 bytes (which was indeed not very representative). Still, I observe different results on different hardware and OSes. Generally, on Linux ns_malloc is mostly faster, although I have a bunch of 1-CPU boxes here where this is NOT true. OTOH, on Solaris/Mac 1-CPU boxes, malloc is faster than or close to ns_malloc. On 2+ CPU boxes ns_malloc is always faster.

It is difficult to say more about this without gathering extensive statistics from various people, which I find very time-consuming. Hence, it is good to know that one has to pay attention when running a Tcl MT app (or NaviServer) on a single-CPU box when that box is Solaris or Mac.

Anyway, it has been quite revealing for me to learn all of this, thanks to your support and patience!

For us the bottom line is: we will stay with ns_malloc as-is, as the speed penalty vs. malloc on Solaris/Mac 1-CPU boxes is not worth the effort.

Thanks again for your support!

Zoran |
From: Andrew P. <at...@pi...> - 2006-02-02 19:22:37
|
On Thu, Feb 02, 2006 at 07:14:14PM +0100, Zoran Vasiljevic wrote:
> For us: bottom line is: we will stay with ns_malloc as-is as the
> speed penalty vs. malloc on Solaris/Mac 1cpu is not worth the
> effort.

If anyone is really interested in this, the best thing to do would be to try various more sophisticated high-performance multi-threaded malloc replacements, rather than just ns_malloc. This was discussed a bit on the AOLserver list a year or three ago, if anyone cares to search.

--
Andrew Piskorski <at...@pi...>
http://www.piskorski.com/ |
From: Neophytos D. <k2...@ph...> - 2006-02-02 19:26:41
|
Andrew Piskorski wrote:
> If anyone is really interested in this, the best thing to do would be
> to try various more sophisticated high-performance multi-threaded
> malloc replacements, rather than just ns_malloc. This was discussed a
> bit on the AOLserver list a year or three ago, if anyone cares to
> search.

Andrew, are you talking about Google's PerfTools project?

Best,
Neophytos |
From: Neophytos D. <k2...@ph...> - 2006-02-02 19:28:31
|
Andrew Piskorski wrote:
> If anyone is really interested in this, the best thing to do would be
> to try various more sophisticated high-performance multi-threaded
> malloc replacements, rather than just ns_malloc. This was discussed a
> bit on the AOLserver list a year or three ago, if anyone cares to
> search.

Here's the message I had in mind:
http://www.opensubscriber.com/message/aol...@li.../1027255.html

Best,
Neophytos |
From: Andrew P. <at...@pi...> - 2006-02-02 19:54:55
|
On Thu, Feb 02, 2006 at 07:28:17PM +0000, Neophytos Demetriou wrote:
> Andrew Piskorski wrote:
> > If anyone is really interested in this, the best thing to do would be
> > to try various more sophisticated high-performance multi-threaded
> > malloc replacements, rather than just ns_malloc. This was discussed a
> > bit on the AOLserver list a year or three ago, if anyone cares to
> > search.
>
> Here's the message I had in mind:
> http://www.opensubscriber.com/message/aol...@li.../1027255.html

Google's TCMalloc thing is also relevant, but it's not the malloc replacement I was thinking of. Ah yes, I was thinking of "Hoard":

http://www.hoard.org/
http://www.cs.umass.edu/~emery/hoard/
http://www.mail-archive.com/cgi-bin/htsearch?config=aolserver_listserv_aol_com&restrict=&exclude=&words=hoard

--
Andrew Piskorski <at...@pi...>
http://www.piskorski.com/ |
From: Zoran V. <zv...@ar...> - 2006-02-03 08:16:45
|
On 02.02.2006 at 20:54, Andrew Piskorski wrote:
> Google's TCMalloc thing is also relevant, but it's not the malloc
> replacement I was thinking of. Ah yes, I was thinking of "Hoard":

Oh, apparently most such tools are really tuned for multi-CPU systems. The problem I see is that when running MT software on a single CPU, things start to look different. As far as I can see, the Tcl MT allocator is pretty good overall (with a few exceptions) and it is just "there". But I'm curious to see other players as well. As soon as we get our internal release out of the door I will put some of those alternate allocators under test.

Cheers
Zoran |
From: Gustaf N. <ne...@wu...> - 2006-02-02 20:21:23
Attachments:
memtest.xls
|
Zoran Vasiljevic wrote:
>> could not resist to try this on our p5 production system under
>> modest load
>> (64bit, linux, lpar with 25 processors, 8 dual-core with ibms
>> version of hyperthreading)
>> processor : 25
>> cpu : POWER5 (gr)
>> clock : 1904.448000MHz
>> revision : 2.3
>
> Urgs!?? What kind of monster machine is that???

Cool, isn't it? It's an IBM p570 running Red Hat Enterprise Linux AS release 4 (Nahant Update 2), everything compiled 64-bit (this was more effort than expected).

> The timings you get are what I expected on a multi-cpu box.
> However all our single-cpu boxes are WAY slower with ckalloc
> than with the regular malloc.
> Do you happen to have a single-cpu box where you can try this out?

Not with this processor. If you are interested, I could get access to a 4-CPU box with a p5 processor.

> What I'm trying to understand is: is this pattern regular or not?

With Vlad's version, I get:

Tcl: 8.4.11
starting 16 malloc threads...waiting....done: 20000 mallocs, 0 seconds, 392396 usec
starting 16 ckalloc threads...waiting....done: 20000 mallocs, 0 seconds, 85702 usec

Modifying the number of threads, I get:

threads   malloc      ckalloc    ratio
8         0.121891    0.033443    3.644738809
16        0.392396    0.085702    4.578609601
24        0.813853    0.13622     5.974548524
32        1.122965    0.144308    7.781723813
40        1.564372    0.17262     9.062518827
48        2.490847    0.184043   13.53404911
56        3.299245    0.209622   15.73902071
100       8.139274    0.374034   21.76078645

How come ckalloc is so much faster? How do malloc/ckalloc relate to ns_malloc? The ratio even goes up with the number of threads: for 100 threads ckalloc is 21 times faster, while for 8 threads the ratio is "only" 3.6. See the Excel file for the graphics...

-gustaf |
From: Zoran V. <zv...@ar...> - 2006-02-03 08:11:46
|
On 02.02.2006 at 21:21, Gustaf Neumann wrote:
> How come ckalloc is so much faster?

It avoids the malloc lock and builds its own malloc tables per thread. When lots of threads start to attack malloc, there is quite a lot of contention on the mutex, so they go to sleep for a while.

> How do malloc/ckalloc relate to ns_malloc?

malloc is the bottom layer, as provided by the OS (or the malloc library).

ckalloc is a macro defined in Tcl so you can define TCL_MEM_DEBUG, which adds an additional layer so the Tcl memory-debugging tools can be used. If TCL_MEM_DEBUG is not defined, it defaults to Tcl_Alloc.

Tcl_Alloc is different for non-threaded and threaded builds. This is controlled by USE_THREAD_ALLOC when compiling the Tcl library, which is the default for threaded builds. It activates a special MT-optimized allocator that handles all memory allocations <16284 bytes in per-thread tables instead of shared tables, thus avoiding lock contention. This is what AOLserver used before, and it got into Tcl because it was/is pretty efficient overall. This is not always the case on 1-CPU boxes, as I've seen in my tests.

ns_malloc is just a wrapper around ckalloc.

Cheers
Zoran |
From: Vlad S. <vl...@cr...> - 2006-02-02 14:29:38
|
I tried on a single-CPU box, 3.2GHz with 1Gb of RAM; on a 2-CPU box I got the same result, ns_malloc is faster.

Zoran Vasiljevic wrote:
> On 01.02.2006 at 17:15, Vlad Seryakov wrote:
>> On my machine with tcl 8.4.12
>>
>> starting 10 malloc threads...waiting....done: 0 seconds, 16003 usec
>> starting 10 ns_malloc threads...waiting....done: 0 seconds, 13207 usec
>
> I've been trying to see why I'm getting worse values with ns_malloc
> as with malloc and it turned out to be that only in 2+CPU box I was
> able to get ns_malloc outperform the malloc. On all single-cpu boxes
> the times were 2 up to 4 times better with plain malloc!
>
> Does anybody have a single AND multi-cpu box to try out?
>
> Cheers
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/ |
From: Neophytos D. <k2...@ph...> - 2006-02-02 13:10:55
|
Intel Pentium 4 3GHz [Hyperthreading]
Tcl: 8.4.11
starting 16 malloc threads...waiting....done: 0 seconds, 58726 usec
starting 16 ckalloc threads...waiting....done: 0 seconds, 41410 usec

IBM X40
Tcl: 8.4.11
starting 16 malloc threads...waiting....done: 0 seconds, 101392 usec
starting 16 ckalloc threads...waiting....done: 0 seconds, 45640 usec |
From: Zoran V. <zv...@ar...> - 2006-02-02 13:26:52
|
On 02.02.2006 at 14:10, Neophytos Demetriou wrote:
> Intel Pentium 4 3GHz [Hyperthreading]
> Tcl: 8.4.11
> starting 16 malloc threads...waiting....done: 0 seconds, 58726 usec
> starting 16 ckalloc threads...waiting....done: 0 seconds, 41410 usec

Multi-CPU, right?

> IBM X40
> Tcl: 8.4.11
> starting 16 malloc threads...waiting....done: 0 seconds, 101392 usec
> starting 16 ckalloc threads...waiting....done: 0 seconds, 45640 usec

Is this also multi-CPU, or single-CPU? Do you have some single-CPU machine to test on?

Cheers
Zoran |
From: Neophytos D. <k2...@ph...> - 2006-02-02 13:48:26
|
Zoran Vasiljevic wrote:
> On 02.02.2006 at 14:10, Neophytos Demetriou wrote:
>> Intel Pentium 4 3GHz [Hyperthreading]
>> Tcl: 8.4.11
>> starting 16 malloc threads...waiting....done: 0 seconds, 58726 usec
>> starting 16 ckalloc threads...waiting....done: 0 seconds, 41410 usec
>
> multi-cpu, right?

single-cpu [with hyperthreading]

>> IBM X40
>> Tcl: 8.4.11
>> starting 16 malloc threads...waiting....done: 0 seconds, 101392 usec
>> starting 16 ckalloc threads...waiting....done: 0 seconds, 45640 usec
>
> Is this also multicpu or a single-cpu?

This is my laptop, an IBM ThinkPad X40: 1.4-GHz Pentium M.

best,
neophytos

PS. I'm using Gentoo Linux (2.6.12-gentoo-r4) on both machines:
gcc 3.3.5-20050130 (Gentoo 3.3.5.20050130-r1, ssp-3.3.5.20050130-1, pie-8.7.7.1) |
From: Zoran V. <zv...@ar...> - 2006-02-02 14:23:50
|
On 02.02.2006 at 14:48, Neophytos Demetriou wrote:
> Zoran Vasiljevic wrote:
>> On 02.02.2006 at 14:10, Neophytos Demetriou wrote:
>>> Intel Pentium 4 3GHz [Hyperthreading]
>>> Tcl: 8.4.11
>>> starting 16 malloc threads...waiting....done: 0 seconds, 58726 usec
>>> starting 16 ckalloc threads...waiting....done: 0 seconds, 41410 usec
>>
>> multi-cpu, right?
>
> single-cpu [with hyperthreading]

This counts as 1+ CPU, all right.

>>> IBM X40
>>> Tcl: 8.4.11
>>> starting 16 malloc threads...waiting....done: 0 seconds, 101392 usec
>>> starting 16 ckalloc threads...waiting....done: 0 seconds, 45640 usec
>>
>> Is this also multicpu or a single-cpu?
>
> this is my laptop, an IBM thinkpad X40: 1.4-GHz Pentium M

But this one.... this one is killing me! This one is a single-CPU, right?

Cheers
Zoran |
From: Neophytos D. <k2...@ph...> - 2006-02-02 14:32:28
|
Zoran Vasiljevic wrote:
>> this is my laptop, an IBM thinkpad X40: 1.4-GHz Pentium M
>
> But this one.... this one is killing me! This one is a single-cpu, right?

Right. |
From: Bernd E. <eid...@we...> - 2006-02-02 10:58:25
|
Zoran,

I compiled it w/o symbols but it seems to hang in MemTime:

Tcl: 8.4.11
starting 16 malloc threads...

Or is it supposed to run veeeery long on my machine? :-)

Bernd. |