From: Bernd E. <eid...@we...> - 2006-02-07 19:07:45
|
Gustaf, I started with these: * tcl/form.tcl: * tcl/http.tcl: * tcl/charsets.tcl: * tcl/fastpath.tcl: * tcl/sendmail.tcl: the other files you provided partially differ from HEAD. I'll work through them over the next few days. -Bernd. |
From: Bernd E. <eid...@we...> - 2006-02-07 18:02:58
|
> Here comes something to consider for the distro: > > the .tcl files still contain many legacy comparisons and some non-braced > expressions. > attached are the updated files that standardize the comparisons and can > be compiled > into more efficient byte-code. that's really true. i'll go and include them one by one. Thanks! Bernd. |
From: Gustaf N. <ne...@wu...> - 2006-02-07 17:45:31
|
Here comes something to consider for the distro: the .tcl files still contain many legacy comparisons and some non-braced expressions. attached are the updated files that standardize the comparisons and can be compiled into more efficient byte-code. best regards -gustaf |
From: Zoran V. <zv...@ar...> - 2006-02-07 11:21:11
|
Hi ! I wonder why is this so: bash-2.03$ find modules/tcl/ modules/tcl/ modules/tcl/nsperm modules/tcl/nsperm/init.tcl modules/tcl/nsperm/compat.tcl modules/tcl/charsets.tcl modules/tcl/compat.tcl modules/tcl/debug.tcl modules/tcl/fastpath.tcl modules/tcl/file.tcl modules/tcl/form.tcl modules/tcl/http.tcl modules/tcl/init.tcl modules/tcl/prodebug.tcl modules/tcl/sendmail.tcl modules/tcl/stats.tcl modules/tcl/util.tcl modules/tcl/nsdb modules/tcl/nsdb/util.tcl bash-2.03$ find tcl tcl tcl/nsperm tcl/nsperm/init.tcl tcl/nsperm/compat.tcl tcl/cache.tcl tcl/charsets.tcl tcl/compat.tcl tcl/debug.tcl tcl/fastpath.tcl tcl/file.tcl tcl/form.tcl tcl/http.tcl tcl/init.tcl tcl/prodebug.tcl tcl/sendmail.tcl tcl/stats.tcl tcl/ttrace.tcl tcl/util.tcl tcl/nsdb tcl/nsdb/util.tcl IOW, why is the private library for the virtual server seeded with the same files as the shared library? Isn't it supposed that private virtual server Tcl lib should actually override files in the shared lib and not duplicate them? Cheers, Zoran |
From: Zoran V. <zv...@ar...> - 2006-02-07 08:07:31
|
Am 07.02.2006 um 05:34 schrieb Stephen Deasey: > > Can this be changed to: > > ns_section ns/server/$servername/tcl > ns_param betternamehere true > > It's clearly part of the Tcl configuration. As for the name, 'trace' > just seems too generic, which is why it clashes with existing usage > (ns_register_trace). > > Anyway, tracing is just the mechanism, not the purpose. What about > lazyloader, or tclondemand, or ...? Well, anything you wish. We can call it lazyloader, for example. > > >> This way all "things" loaded into an interp during startup >> will just be recorded in ttrace database. At new thread >> creation, a short bootstrap script will be installed >> instead of a full-blown (potentially very large) init >> script. This one overrides the Tcl unknown method and loads >> required "things" definitions on-demand. > > > Why is this off by default? This is robust, right? This is very robust for us as we have been using it for some years already. I have two other OACS sites (not ours) which use it as well. It is off by default because I did not want to throw some foreign code at you before you have the chance to test it. After everybody is satisfied we can scrap quite a bit from bin/init.tcl. > > How about some debug log statements, so that we can see that it's > working correctly. This is true. I will add this. > > Looks like some cut 'n paste cleanup is needed: The blurb at the top > of ttrace.tcl mentions licence.terms. You'll need to include the > license directly at the top of the file. nsv_*, ns_mutex etc. will > always be available as this is now embedded in NaviServer -- can you > remove the indirect calls ( ${store}set etc. ) ? Yes. The thing is also part of the Tcl threading extension, hence I made it generic. I can remove that compat code as it will be easier to read. But I did not want to introduce errors before you get the chance to give it a try. Once blessed we can make all those changes above and turn it on by default. Cheers Zoran |
From: Stephen D. <sd...@gm...> - 2006-02-07 04:34:15
|
On 2/6/06, Zoran Vasiljevic <zv...@ar...> wrote: > Hi! > > I have included the ttrace module which allows for > alternative Tcl interp initializations. Nice! > To enable, edit the sample-config.tcl and set > > ns_section ns/server/$servername/ttrace > ns_param enabletraces true Can this be changed to: ns_section ns/server/$servername/tcl ns_param betternamehere true It's clearly part of the Tcl configuration. As for the name, 'trace' just seems too generic, which is why it clashes with existing usage (ns_register_trace). Anyway, tracing is just the mechanism, not the purpose. What about lazyloader, or tclondemand, or ...? > This way all "things" loaded into an interp during startup > will just be recorded in ttrace database. At new thread > creation, a short bootstrap script will be installed > instead of a full-blown (potentially very large) init > script. This one overrides the Tcl unknown method and loads > required "things" definitions on-demand. Why is this off by default? This is robust, right? How about some debug log statements, so that we can see that it's working correctly. Looks like some cut 'n paste cleanup is needed: The blurb at the top of ttrace.tcl mentions licence.terms. You'll need to include the license directly at the top of the file. nsv_*, ns_mutex etc. will always be available as this is now embedded in NaviServer -- can you remove the indirect calls ( ${store}set etc. ) ? |
From: Zoran V. <zv...@ar...> - 2006-02-06 10:33:57
|
Hi! I have included the ttrace module which allows for alternative Tcl interp initializations. To enable, edit the sample-config.tcl and set ns_section ns/server/$servername/ttrace ns_param enabletraces true This way all "things" loaded into an interp during startup will just be recorded in ttrace database. At new thread creation, a short bootstrap script will be installed instead of a full-blown (potentially very large) init script. This one overrides the Tcl unknown method and loads required "things" definitions on-demand. All this results in memory footprint of the server reduced to about one third (or one half) and much faster thread creation. Please test with your setup and report any problems you might find out. Cheers Zoran |
From: Zoran V. <zv...@ar...> - 2006-02-06 08:05:51
|
Am 05.02.2006 um 23:48 schrieb Stephen Deasey: > On 2/4/06, Zoran Vasiljevic <zv...@ar...> wrote: >> Hi! >> >> I have uploaded the 4.99.1 release to SF. > > > nsd/nsd.h:114 > #define NSD_TAG "$Name: $" > > Don't forget to use the magic 'cvs export' when making a release: AHA! That was the missing part! I was wondering why on earth this wasn't substituted after I tagged the thing. Thanks! Zoran |
From: Vlad S. <vl...@cr...> - 2006-02-05 23:24:17
|
> > This should make the new aggressive cleanups unnecessary: > > % make clean > ... > /bin/rm -Rf *~ *.bak > ... > Sorry, maybe make clean is way too aggressive regarding backup files -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Stephen D. <sd...@gm...> - 2006-02-05 22:48:49
|
On 2/4/06, Zoran Vasiljevic <zv...@ar...> wrote: > Hi! > > I have uploaded the 4.99.1 release to SF. nsd/nsd.h:114 #define NSD_TAG "$Name: $" Don't forget to use the magic 'cvs export' when making a release: http://sourceforge.net/mailarchive/message.php?msg_id=11432982 This should make the new aggressive cleanups unnecessary: % make clean ... /bin/rm -Rf *~ *.bak ... Ouch! My editor backup files disappear when rebuilding! :-) |
From: Zoran V. <zv...@ar...> - 2006-02-05 09:09:49
|
Am 05.02.2006 um 09:59 schrieb Stephen Deasey: > Hmm, looks like Ns_CompressGzip() returns an error if zlib support is > not enabled. I guess we just issue a warning at configure time, and > come up with a workaround for the test. In that case (if the warning is present) the test can work around that, correct. A warning if no zlib is found during compile and it was not disabled (--without-zlib) is also OK. Cheers Zoran |
From: Stephen D. <sd...@gm...> - 2006-02-05 08:59:54
|
On 2/5/06, Zoran Vasiljevic <zv...@ar...> wrote: > > Am 05.02.2006 um 09:22 schrieb Stephen Deasey: > > > I think the build should fail if zlib is > > not found. > > ...and if you configured with --without-zlib? > Hm... > > The problem is, as I see that the test routines > cannot check if the zlib is compiled-in or not. > I usually build with --without-zlib for some weird > unimportant reasons. > > I think that the code IS allright, as it allows the > fallback when no zlib is compiled in. But what is > if the zlib is compiled but not found on runtime? > Well, this should obviously result in process not > being able to start, right? Now, was your code compiled > with zlib support and the zlib library was not installed? > > OTOH, if you do not explicitly disable the zlib support > then the build should fail. That is true. The zlib-devel package was not installed on my laptop and so there was no include/zlib.h header. The configure script detected this and disabled support. Hmm, looks like Ns_CompressGzip() returns an error if zlib support is not enabled. I guess we just issue a warning at configure time, and come up with a workaround for the test. |
From: Zoran V. <zv...@ar...> - 2006-02-05 08:35:12
|
Am 05.02.2006 um 09:22 schrieb Stephen Deasey: > I think the build should fail if zlib is > not found. ...and if you configured with --without-zlib? Hm... The problem is, as I see it, that the test routines cannot check if the zlib is compiled-in or not. I usually build with --without-zlib for some weird unimportant reasons. I think that the code IS all right, as it allows the fallback when no zlib is compiled in. But what if the zlib is compiled in but not found at runtime? Well, this should obviously result in the process not being able to start, right? Now, was your code compiled with zlib support and the zlib library was not installed? OTOH, if you do not explicitly disable the zlib support then the build should fail. That is true. Cheers Zoran |
From: Stephen D. <sd...@gm...> - 2006-02-05 08:22:45
|
On 2/5/06, Zoran Vasiljevic <zv...@ar...> wrote: > > Am 05.02.2006 um 07:38 schrieb Zoran Vasiljevic: > > > ==== ns_info-2.19.1 basic operation FAILED > > ==== Contents of test case: > > > > expr {[llength [ns_info pools]]<=0} > > > > ---- Result was: > > 0 > > ---- Result should have been (exact matching): > > 1 > > ==== ns_info-2.19.1 FAILED > > > I think this test is broken. It assumes that at > the point of calling the test, no memory allocations > took place, which is hard to believe. I think this > test could be: > > test ns_info-2.19.1 {basic operation} -body { > expr {[llength [ns_info pools]] == 0} > } -result 0 > > Although, precisely speaking, this test does not > make much sense because you cannot control if some > memory is being allocated or not from the test suite. This used to work. Looks like it's because I built tcl with --enable-symbols... The ns_addrbyhost error disappeared. This may have been because my network went down, although when I disable the wireless now I don't get the test failure, although I do get an error message in the log. Weird. The ADP test failure happened because zlib.h wasn't installed on this machine. I think our code is wrong here. If zlib support is not built, the compression routines silently pass through the data uncompressed and unmodified. I think the build should fail if zlib is not found. |
From: Zoran V. <zv...@ar...> - 2006-02-05 07:25:46
|
Am 05.02.2006 um 07:38 schrieb Zoran Vasiljevic: > ==== ns_info-2.19.1 basic operation FAILED > ==== Contents of test case: > > expr {[llength [ns_info pools]]<=0} > > ---- Result was: > 0 > ---- Result should have been (exact matching): > 1 > ==== ns_info-2.19.1 FAILED I think this test is broken. It assumes that at the point of calling the test, no memory allocations took place, which is hard to believe. I think this test could be: test ns_info-2.19.1 {basic operation} -body { expr {[llength [ns_info pools]] == 0} } -result 0 Although, precisely speaking, this test does not make much sense because you cannot control if some memory is being allocated or not from the test suite. Cheers Zoran |
From: Zoran V. <zv...@ar...> - 2006-02-05 06:40:07
|
Am 05.02.2006 um 00:00 schrieb Vlad Seryakov: > Works fine for me, no failures ==== ns_info-2.19.1 basic operation FAILED ==== Contents of test case: expr {[llength [ns_info pools]]<=0} ---- Result was: 0 ---- Result should have been (exact matching): 1 ==== ns_info-2.19.1 FAILED I get only this one. The test checks llength to be <0 ? This seems weird. But, apart from that, I do not know what is going on, i.e. is the test wrong/out_of_date or the code is broken. Zoran |
From: Vlad S. <vl...@cr...> - 2006-02-04 23:00:57
|
Works fine for me, no failures Stephen Deasey wrote: > On 2/4/06, Zoran Vasiljevic <zv...@ar...> wrote: > >>Hi! >> >>I have uploaded the 4.99.1 release to SF. > > > > I'm getting some test failures: > > > Tests began at Sat Feb 04 03:13:58 PM MST 2006 > http.test > http_byteranges.test > http_chunked.test > [04/Feb/2006:15:13:58][4858.1087490992][-conn:test:0] Notice: > encoding: loaded: iso8859-1 > ns_accesslog.test > ns_addrbyhost.test > > ==== ns_addrbyhost-1.2 bad host FAILED > ==== Contents of test case: > > ns_addrbyhost this_should_not_resolve > > ---- Test completed normally; Return code was: 0 > ---- Return code should have been one of: 1 > ==== ns_addrbyhost-1.2 FAILED > > ns_adp_compress.test > > ==== ns_adp_conpress-1.2 Accept-Encoding gzip FAILED > ==== Contents of test case: > > set response [nstest_http -setheaders {accept-encoding gzip} > -getheaders {content-encoding} -getbody 1 GET /ns_adp_compress.adp] > binary scan [lindex $response 2] "H*" values > list [lindex $response 0] [lindex $response 1] [regexp -all > -inline {..} $values] > > ---- Result was: > 200 {} {74 68 69 73 20 69 73 20 61 20 74 65 73 74} > ---- Result should have been (exact matching): > 200 gzip {1f 8b 08 00 00 00 00 00 00 03 2b c9 c8 2c 56 00 a2 44 85 92 > d4 e2 12 00 26 33 05 16 0d 1e e7 ea 00 00 00 0e} > ==== ns_adp_conpress-1.2 FAILED > > ns_base64encode.test > ns_cache.test > [04/Feb/2006:15:14:08][4858.1076697408][-main-] Notice: maxsize 1024 > size 1 entries 1 flushed 29 hits 564 missed 32 hitrate 94 > ns_conn.test > ns_conn_host.test > ns_crypt.test > ns_env.test > ns_gifsize.test > ns_hashpath.test > ns_hostbyaddr.test > ns_hrefs.test > ns_httptime.test > ns_info.test > > ==== ns_info-2.19.1 basic operation FAILED > ==== Contents of test case: > > expr {[llength [ns_info pools]]<=0} > > ---- Result was: > 0 > ---- Result should have been (exact matching): > 1 > ==== ns_info-2.19.1 FAILED > > ns_jpegsize.test > ns_log.test > 
[04/Feb/2006:15:14:12][4858.1076697408][-main-] Notice: test > [04/Feb/2006:15:14:12][4858.1076697408][-main-] Notice: test > ns_mime.test > ns_nsv.test > ns_pagepath.test > ns_parseargs.test > ns_register_filter.test > ns_register_proc.test > ns_serverpath.test > ns_set.test > ns_sha1.test > ns_thread.test > ns_urlencode.test > ns_uuencode.test > nsdb.test > tclresp.test > url2file.test > > Tests ended at Sat Feb 04 03:14:30 PM MST 2006 > all.tcl: Total 690 Passed 687 Skipped 0 Failed 3 > Sourced 35 Test Files. > Files with failing tests: ns_addrbyhost.test ns_adp_compress.test ns_info.test > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://sel.as-us.falkag.net/sel?cmd=k&kid3432&bid#0486&dat1642 > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel > -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Stephen D. <sd...@gm...> - 2006-02-04 22:17:17
|
On 2/4/06, Zoran Vasiljevic <zv...@ar...> wrote: > Hi! > > I have uploaded the 4.99.1 release to SF. I'm getting some test failures: Tests began at Sat Feb 04 03:13:58 PM MST 2006 http.test http_byteranges.test http_chunked.test [04/Feb/2006:15:13:58][4858.1087490992][-conn:test:0] Notice: encoding: loaded: iso8859-1 ns_accesslog.test ns_addrbyhost.test ==== ns_addrbyhost-1.2 bad host FAILED ==== Contents of test case: ns_addrbyhost this_should_not_resolve ---- Test completed normally; Return code was: 0 ---- Return code should have been one of: 1 ==== ns_addrbyhost-1.2 FAILED ns_adp_compress.test ==== ns_adp_conpress-1.2 Accept-Encoding gzip FAILED ==== Contents of test case: set response [nstest_http -setheaders {accept-encoding gzip} -getheaders {content-encoding} -getbody 1 GET /ns_adp_compress.adp] binary scan [lindex $response 2] "H*" values list [lindex $response 0] [lindex $response 1] [regexp -all -inline {..} $values] ---- Result was: 200 {} {74 68 69 73 20 69 73 20 61 20 74 65 73 74} ---- Result should have been (exact matching): 200 gzip {1f 8b 08 00 00 00 00 00 00 03 2b c9 c8 2c 56 00 a2 44 85 92 d4 e2 12 00 26 33 05 16 0d 1e e7 ea 00 00 00 0e} ==== ns_adp_conpress-1.2 FAILED ns_base64encode.test ns_cache.test [04/Feb/2006:15:14:08][4858.1076697408][-main-] Notice: maxsize 1024 size 1 entries 1 flushed 29 hits 564 missed 32 hitrate 94 ns_conn.test ns_conn_host.test ns_crypt.test ns_env.test ns_gifsize.test ns_hashpath.test ns_hostbyaddr.test ns_hrefs.test ns_httptime.test ns_info.test ==== ns_info-2.19.1 basic operation FAILED ==== Contents of test case: expr {[llength [ns_info pools]]<=0} ---- Result was: 0 ---- Result should have been (exact matching): 1 ==== ns_info-2.19.1 FAILED ns_jpegsize.test ns_log.test [04/Feb/2006:15:14:12][4858.1076697408][-main-] Notice: test [04/Feb/2006:15:14:12][4858.1076697408][-main-] Notice: test ns_mime.test ns_nsv.test ns_pagepath.test
ns_parseargs.test ns_register_filter.test ns_register_proc.test ns_serverpath.test ns_set.test ns_sha1.test ns_thread.test ns_urlencode.test ns_uuencode.test nsdb.test tclresp.test url2file.test Tests ended at Sat Feb 04 03:14:30 PM MST 2006 all.tcl: Total 690 Passed 687 Skipped 0 Failed 3 Sourced 35 Test Files. Files with failing tests: ns_addrbyhost.test ns_adp_compress.test ns_info.test |
From: Gustaf N. <ne...@wu...> - 2006-02-04 21:26:13
|
Zoran Vasiljevic schrieb: > Anyway, from all this tests, it appears that the Tcl allocator > is slower than anything else, at least for the test-pattern > used in your test. for our power5 machine (the monster) with the linux 2.6.9-22 kernel, ckalloc is not so bad, but _malloc is still significantly better. Tcl: 8.4.11, threads 16, loops 500000 starting 16 malloc threads...waiting....done: 8 seconds, 397357 usec starting 16 ckalloc threads...waiting....done: 8 seconds, 4346 usec starting 16 _malloc threads...waiting....done: 6 seconds, 404069 usec strangely enough, with 64 threads, the differences seem to disappear. Tcl: 8.4.11, threads 64, loops 500000 starting 64 malloc threads...waiting....done: 26 seconds, 718077 usec starting 64 ckalloc threads...waiting....done: 26 seconds, 783919 usec starting 64 _malloc threads...waiting....done: 26 seconds, 237283 usec these results are repeatable. -gustaf |
From: Bernd E. <eid...@we...> - 2006-02-04 18:20:10
|
Hi, > I have uploaded the 4.99.1 release to SF. > Also, the modules are now part of the release. great! I updated minor parts of the Wiki to reflect that. > In the meantime, lets start adding the documentation. > Ah, yes... at the point somebody is adding the documentation, > he can immediately start adding tests for the particular > command. I will try to go this way... yes, and if possible and relevant, we should keep this one synced: http://naviserver.sourceforge.net/wiki/index.php/Complete_Configfile (And of course corrected on the fly. It was initially just a collection of what is available, with very little verification against the sources) You wrote in another mail: > o. everybody takes some command(s) and add content > as he can or the time allows This should be the way to go. Cheerio, Bernd. |
From: Zoran V. <zv...@ar...> - 2006-02-04 17:07:35
|
Hi! I have uploaded the 4.99.1 release to SF. Also, the modules are now part of the release. I will bump the versions now to 4.99.2 for future interim releases. According to me, we could/should get to the 5.0 when: a. documentation is ready b. new writer-thread support is finalized The a. will allow non-expert users to start using our baby. The b. will allow the server to serve large (or a large number of) static files equally optimally w/o blowing the memory with all those connection threads. An interesting new field is the memory allocation as we can see from the recent tests. It depends how this will evolve, but we might go and make Tcl adopt the new (better) allocator if we can give them proof of speed improvements and if we can be platform-neutral. In the meantime, let's start adding the documentation. Ah, yes... at the point somebody is adding the documentation, he can immediately start adding tests for the particular command. I will try to go this way... Cheers Zoran |
From: Zoran V. <zv...@ar...> - 2006-02-04 16:20:54
|
Am 04.02.2006 um 17:08 schrieb Vlad Seryakov: > That could be true on Solaris, but in Linux 2.6 mmap/munmap is very > fast and looking into kernel source it tells you that they convert > sbrk into mmap internally but the difference is that mmap is > multithreaded-aware while sbrk is not. Solaris (1 CPU) Tcl: 8.4.12, threads 16, loops 500000 starting 16 malloc threads...waiting....done: 3 seconds, 938700 usec starting 16 ckalloc threads...waiting....done: 6 seconds, 62454 usec starting 16 _malloc threads...waiting....done: 9 seconds, 755277 usec Linux (1 CPU, 1.8 GHz) Tcl: 8.4.12, threads 16, loops 500000 starting 16 malloc threads...waiting....done: 2 seconds, 298735 usec starting 16 ckalloc threads...waiting....done: 3 seconds, 331197 usec starting 16 _malloc threads...waiting....done: 1 seconds, 323865 usec Mac OSX (1 CPU 1.5Ghz) zoran:~ zoran$ ./m2 Tcl: 8.4.12, threads 16, loops 500000 starting 16 malloc threads...waiting....done: 57 seconds, 300088 usec starting 16 ckalloc threads...waiting....done: 195 seconds, 526369 usec starting 16 _malloc threads...waiting....done: 13 seconds, 869307 usec Mac OSX (2 CPU 867MHz) panther:~ zoran$ ./m2 Tcl: 8.4.12, threads 16, loops 500000 starting 16 malloc threads...waiting....done: 189 seconds, 228665 usec starting 16 ckalloc threads...waiting....done: 730 seconds, 700258 usec (!!!!!) starting 16 _malloc threads...waiting....done: 19 seconds, 958533 usec > > Now, using mmap to allocate block of memory and then re-using that > this is what i am doing, but i do not use munmap, still it is > possible. > With random allocations from 1-128K, Tcl alloc gives the worst > results, constantly, which means it is good on small allocations only? Apparently it is all above 16284 bytes that uses malloc directly. > > I am not trying to re-invent the wheel, it is just accidentally i > replaced sbrk with mmap and removed mutexes around it and it became > much faster than what we have now, at least on Linux.
The only part where it is not faster is single-cpu solaris. I have no idea why. I can test it on 2 cpu solaris next week. Anyway, from all these tests, it appears that the Tcl allocator is slower than anything else, at least for the test-pattern used in your test. Cheers Zoran |
From: Vlad S. <vl...@cr...> - 2006-02-04 16:08:48
|
That could be true on Solaris, but in Linux 2.6 mmap/munmap is very fast and looking into kernel source it tells you that they convert sbrk into mmap internally but the difference is that mmap is multithreaded-aware while sbrk is not. Now, using mmap to allocate a block of memory and then re-using it is what i am doing, but i do not use munmap, still it is possible. With random allocations from 1-128K, Tcl alloc gives the worst results, constantly, which means it is good on small allocations only? Tcl: 8.4.12, threads 16, loops 500000 starting 16 malloc threads...waiting....done: 3 seconds, 955518 usec starting 16 ckalloc threads...waiting....done: 4 seconds, 272964 usec starting 16 _malloc threads...waiting....done: 1 seconds, 890566 usec I am not trying to re-invent the wheel, it is just accidentally i replaced sbrk with mmap and removed mutexes around it and it became much faster than what we have now, at least on Linux. Stephen Deasey wrote: > On 2/3/06, Vlad Seryakov <vl...@cr...> wrote: > >>Here is the test http://www.crystalballinc.com/vlad/tmp/memtest.c >> >>It gives very strange results, it works for Linux only because it uses >>mmap only and it looks like brk uses mmap internally according to Linux >>2.6.13 kernel and it allows unlimited mmap-ed regions (or as i >>understand up to vm.max_map_count = 65536). >> >>According to this test, when i use random sizes from 0-128k, Tcl >>allocator gives worse results than Linux malloc. On small amounts >>ckalloc faster but once over 64k, Linux malloc is faster. And, my small >>malloc implementation which is based on first version of Lea's malloc and >>uses mmap only and supports per-thread memory only beats all mallocs, >>especially on bigger sizes. It does not crash, even on 5Mil loops, but i >>am not sure why it is so simple and so effective. > > > > Have you taken fragmentation into account?
> > There's some memory related links in this blog post I read recently: > > http://primates.ximian.com/~federico/news-2005-12.html#14 > > Federico makes a good point: this has all been done before... > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://sel.as-us.falkag.net/sel?cmd=k&kid3432&bid#0486&dat1642 > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel > -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Zoran V. <zv...@ar...> - 2006-02-04 15:25:46
|
Am 03.02.2006 um 19:44 schrieb Vlad Seryakov: > Let's start working on it > I can imagine following simple scenario: o. write a short Tcl script to generate doctool templates for all ns_* commands o. everybody takes some command(s) and add content as he can or the time allows Zoran |
From: Andrew P. <at...@pi...> - 2006-02-04 09:36:57
|
On Sat, Feb 04, 2006 at 12:51:07AM -0700, Stephen Deasey wrote: > There's some memory related links in this blog post I read recently: > > http://primates.ximian.com/~federico/news-2005-12.html#14 > > Federico makes a good point: this has all been done before... Hm, he links to these two: http://citeseer.ist.psu.edu/bonwick94slab.html http://citeseer.ist.psu.edu/bonwick01magazines.html The 1994 "Slab Allocator" was used in SunOS 5.4 (Solaris 2.4). The 2001 "Vmem" and "libumem" version uses a "per-processor caching scheme ... that provides linear scaling to any number of CPUs." (Nice paper!) That seems reasonable. I figure that the CPU is what does the work, so per-thread memory caching as done by the Tcl/AOLserver "zippy" allocator is only necessary because threads are not tied to any one CPU. If they were, it would be better to have those N threads all use the same memory cache for their allocations, as obviously only 1 can allocate at any given time. Interestingly, in libumem, the non-kernel version of their work, they (initially?) used per-thread rather than per-cpu caches, because the Solaris thread library didn't have the right APIs to do things per CPU. Apparently that worked fine, and it was still faster than Hoard (by a constant amount, they both scaled linearly). However, their tests seem to use only 1 thread per CPU, which isn't terribly realistic. Allocator CPU-affinity is probably much more useful when you have 100 or 1000 threads per CPU, and it might be interesting to see scalability graphs under those conditions. Some of the benchmarks are impressive, on large multi-cpu boxes they cite 2x throughput improvement on a Spec web serving benchmark by adding their new stuff to Solaris. And it even improves single cpu performance somewhat too. The paper clearly says that the userspace version beats Hoard, ptmalloc (Gnu libc), and mtmalloc (Solaris), which it calls "the strongest competition".
As of 2001 glibc's allocator was clearly crap for anything other than single-thread code - thus the traditional need for hacks like the zippy allocator. One interesting bit is that: "We discussed the magazine layer in the context of the slab allocator, but in fact the algorithms are completely general. A magazine layer can be added to ANY memory allocator to make it scale." The work described in that 2001 paper has all been in Solaris since version 8. Do Linux, FreeBSD, and/or Gnu Libc have this yet? Ah, Mena-Quintero's blog above says that "gslice" is exactly that, and seems to be in Gnome 2.10: http://developer.gnome.org/doc/API/2.0/glib/glib-Memory-Slices.html Shouldn't something like that be in ** Gnu LibC **, not just in Gnome? Bonwick's brief concluding thoughts about how OS core services are often the most neglected are also interesting. -- Andrew Piskorski <at...@pi...> http://www.piskorski.com/ |