From: Bernd E. <eid...@we...> - 2006-01-19 17:15:10
|
Hi Zoran,

> One way to do this is to make a subdirectory in doc/ like:
>
>     src
>     man
>     html
>
> where src would contain sources of the doc and man/html
> would contain generated files. There should be a

yes, that should work. And if not, there's CVS :-)

> make doc
>
> make target which would do the trick.
> This is the first thing that comes to mind...

Apropos: is there a magic converter to doctools format for the current nroff files?

-Bernd |
From: Zoran V. <zv...@ar...> - 2006-01-19 14:10:26
|
On 19.01.2006 at 15:07, Bernd Eidenschink wrote:

> Did we talk about where to place documentation files in doctools format?
> Into the "doc/" directory? Or should only nroff-formatted files go there?
>
> Or completely separated, so that before a file release the documentation is
> generated to nroff and then merged into "doc/"?

We didn't talk about that yet. One way to do this is to make a subdirectory in doc/ like:

    src
    man
    html

where src would contain sources of the doc and man/html would contain generated files. There should be a

    make doc

make target which would do the trick. This is the first thing that comes to mind...

Cheers
Zoran |
From: Bernd E. <eid...@we...> - 2006-01-19 14:04:33
|
Did we talk about where to place documentation files in doctools format? Into the "doc/" directory? Or should only nroff-formatted files go there?

Or completely separated, so that before a file release the documentation is generated to nroff and then merged into "doc/"? |
From: Zoran V. <zv...@ar...> - 2006-01-16 17:39:57
|
Hi! Where are you? Zoran |
From: Vlad S. <vl...@cr...> - 2006-01-16 15:06:04
|
sure, sure, release it Zoran Vasiljevic wrote: > > Am 16.01.2006 um 09:51 schrieb Bernd Eidenschink: > >> >> This will be the last tag before "5.0" :-) >> > > You never know... The MAIN reason for delaying > the 5.0 in my eyes is the lack of documentation > or at least lack of the doc infrastructure. > This has been chasing me for about 3 years throughout > the AS and NS projects. It seems that we are not able > to get this thing done. By "we" I mainly mean myself :-( > > I hope to get time to finish this after we release our > 2.0 software.... > > Cheers > Zoran > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log > files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel > -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Zoran V. <zv...@ar...> - 2006-01-16 08:57:10
|
Am 16.01.2006 um 09:51 schrieb Bernd Eidenschink: > > This will be the last tag before "5.0" :-) > You never know... The MAIN reason for delaying the 5.0 in my eyes is the lack of documentation or at least lack of the doc infrastructure. This has been chasing me for about 3 years throughout the AS and NS projects. It seems that we are not able to get this thing done. By "we" I mainly mean myself :-( I hope to get time to finish this after we release our 2.0 software.... Cheers Zoran |
From: Bernd E. <eid...@we...> - 2006-01-16 08:48:31
|
Hi Zoran, > As we are now approaching our release date, I'm thinking > about tagging the CVS with 4.99.1 and releasing the tarball. > I just wanted to check if there are some voice against that, > or if there is anything to be added in the code before I > do that. This will be the last tag before "5.0" :-) Do it! |
From: Zoran V. <zv...@ar...> - 2006-01-16 08:41:45
|
Hi!

As we are now approaching our release date, I'm thinking about tagging the CVS with 4.99.1 and releasing the tarball. I just wanted to check if there are any voices against that, or if there is anything to be added in the code before I do that.

The main reason for doing it is that we are starting to ship our 2.0 release to the public and I need a stable CVS tree. I've been playing with Purify the last week and everything seems to work w/o problems on the NS side.

So, is there anybody against the interim 4.99.1 release?

Cheers
Zoran |
From: Zoran V. <zv...@ar...> - 2006-01-14 17:46:06
|
On 12.01.2006 at 15:35, Bernd Eidenschink wrote:

> ::base64::encode "test\0test\0testpass"

Please file the bug report in SF in the future. Fixed. Check out and try again.

Cheers
Zoran |
From: Zoran V. <zv...@ar...> - 2006-01-13 15:34:26
|
On 11.01.2006 at 19:06, Zoran Vasiljevic wrote:

> I'm looking into this now... I will suggest we change
> this for *all* Tcl callbacks.

I'm about to check in changes to various ns_schedule_* and ns_register_* procs to allow a variable number of arguments to be passed to scripts, as discussed here. Here is the list of procedures affected:

    lexxsrv:nscp 3> ns_register_proc
    wrong # args: should be "ns_register_proc ?-noinherit? ?--? method url script ?args?"
    lexxsrv:nscp 6> ns_register_url2file
    wrong # args: should be "ns_register_url2file ?-noinherit? ?--? url script ?args?"
    lexxsrv:nscp 7> ns_register_filter
    wrong # args: should be "ns_register_filter ?-first? ?--? when method urlPattern script ?args?"
    lexxsrv:nscp 8> ns_register_trace
    wrong # args: should be "ns_register_trace method urlPattern script ?args?"
    lexxsrv:nscp 12> ns_schedule_proc
    wrong # args: should be "ns_schedule_proc ?-once? ?-thread? ?--? interval script ?args?"
    lexxsrv:nscp 13> ns_schedule_weekly
    wrong # args: should be "ns_schedule_weekly ?-once? ?-thread? ?--? day hour minute script ?args?"
    lexxsrv:nscp 14> ns_schedule_daily
    wrong # args: should be "ns_schedule_daily ?-once? ?-thread? ?--? hour minute script ?args?"

Normally, while building up the script, the optional args are always appended after the fixed arguments, in contrast to AS where, for backward compatibility reasons, some tricks are done with the command line.

On the C API: Ns_TclNewCallback is now

    NS_EXTERN Ns_TclCallback *Ns_TclNewCallback(Tcl_Interp *interp, void *cbProc,
                                                Tcl_Obj *scriptObjPtr,
                                                int objc, Tcl_Obj *CONST objv[]);

... so there is only the object-based interface and it is used from all over the code. The string-based equivalent of the above is NOT implemented. Ns_TclNewCallbackObj is gone.

Please report any problems you find asap so I can fix them. Oh yes: this is an INCOMPATIBLE change, meaning scripts written for AOLserver MAY NOT work correctly, but this is the price to pay for a cleaner and more logical API.

Cheers
Zoran |
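For illustration, a minimal sketch of the "optional args appended after the fixed arguments" convention described above. This is an assumption about shape only, not the actual Ns_TclCallback code; the helper name and the split into fixed/extra argument vectors are invented:

    #include <tcl.h>

    /*
     * Build "script fixedArg... extraArg..." as a Tcl list and evaluate it,
     * mirroring the convention that registration-time ?args? always follow
     * the fixed callback arguments.
     */
    static int
    EvalCallback(Tcl_Interp *interp, Tcl_Obj *scriptObj,
                 int fixedObjc, Tcl_Obj *const fixedObjv[],
                 int extraObjc, Tcl_Obj *const extraObjv[])
    {
        Tcl_Obj *cmdObj = Tcl_DuplicateObj(scriptObj);
        int i, result;

        Tcl_IncrRefCount(cmdObj);
        for (i = 0; i < fixedObjc; i++) {         /* fixed arguments first */
            Tcl_ListObjAppendElement(interp, cmdObj, fixedObjv[i]);
        }
        for (i = 0; i < extraObjc; i++) {         /* then the registration-time ?args? */
            Tcl_ListObjAppendElement(interp, cmdObj, extraObjv[i]);
        }
        result = Tcl_EvalObjEx(interp, cmdObj, TCL_EVAL_GLOBAL);
        Tcl_DecrRefCount(cmdObj);
        return result;
    }

At the Tcl level this matches the conclusion quoted elsewhere in this archive that a filter callback is written as "proc script {when args} {...}", with any registration-time arguments arriving in args.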
From: Ibrahim T. <it...@we...> - 2006-01-13 09:18:29
|
Hi Jamie, Have you taken a look at the Visual Studio projects (files *.sln, *.dsp, *.dsw). Although we do compile for Windows, I haven't had the time to go through all the build files and variants (especially not the command line Makefiles and stuff). I currently use the VS 2003 .NET and the naviserver.sln project. This works and build fine for us (at least for the parts of the naviserver and libs we employ). Ibrahim Zoran Vasiljevic wrote: > > Am 13.01.2006 um 09:29 schrieb Jamie Rasmussen: > >> I took a look at the NaviServer HEAD and the Win32 build is broken in >> several places. > > > Hm... we regulary build NS on Windows... I will pass this to > my collegue who is attending this part. Perhaps he didn't > checkin the latest updates... > > Zoran |
From: Zoran V. <zv...@ar...> - 2006-01-13 08:44:21
|
Am 13.01.2006 um 09:29 schrieb Jamie Rasmussen: > I took a look at the NaviServer HEAD and the Win32 build is broken > in several places. Hm... we regulary build NS on Windows... I will pass this to my collegue who is attending this part. Perhaps he didn't checkin the latest updates... Zoran > > On this machine I have Visual Studio 6, Visual C++ Toolkit 2003, > and Visual Studio 2005. The build files in CVS are for Visual > Studio 6 and Visual Studio .NET 2003. I'm not sure if it is worth > spending time on the VS6 build files. The 2003 Toolkit is command- > line only, but is free-to-download. Visual Studio 2005 has some > nice features, but since it was just released I don't know how many > people have started using it. Unfortunately, there appear to be > problems no matter which version you start with. :-) > > I tried VS2005 first, since Microsoft has stopped mainstream > support for VS6, so e.g. recent Platform SDKs don't work with it. > There are a number of issues using VS2005, starting with TCL patch > #1096916. I think it was only applied to the 8.5 branch. When > building I think we would want to define: > _CRT_SECURE_COPP_OVERLOAD_STANDARD_NAMES=1 > _CRT_SECURE_NO_DEPRECATE=1 > _CRT_NONSTDC_NO_DEPRECATE=1 > and potentially > _USE_32BIT_TIME_T=1 > > I tried Visual Studio 6, but there I had problems there with an old > version of Ws2tcpip.h being used; that may just be how I have my > include path set up, I'd have to investigate further. It causes > problems in dns.c at least. > > There are also a number of problems not specific to the compiler > version: > > nsthread/compat.c, getopt.c, and tclatclose.c are gone. task.c, > tclcache.c, tcltime.c, and url2file.c have been added. These > changes need to be reflected in the build files. > > The #define's for NS_MAJOR_VERSION etc. were removed from ns.h but > are still needed by the Windows build. > > Visual Studio doesn't allow mixed declarations and code for .c > files, so in urlspace.c line 1431 we need to move the "doit" > declaration to the top of JunctionFind. > > The new code in binder.c will need a number of additional ifdef's > to compile on Win32. > > The tcl debug suffix changed from "d" to "g" some time ago, this > should be reflected in the post-build copy instructions. > > As I mentioned in my earlier post, the AOLserver HEAD now has a > unified command line build system, it might be worth taking a > closer look at that... > Thanks, > > Jamie > > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through > log files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD > SPLUNK! > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel |
From: Jamie R. <jra...@sb...> - 2006-01-13 08:30:08
|
I took a look at the NaviServer HEAD and the Win32 build is broken in several places.

On this machine I have Visual Studio 6, Visual C++ Toolkit 2003, and Visual Studio 2005. The build files in CVS are for Visual Studio 6 and Visual Studio .NET 2003. I'm not sure if it is worth spending time on the VS6 build files. The 2003 Toolkit is command-line only, but is free to download. Visual Studio 2005 has some nice features, but since it was just released I don't know how many people have started using it. Unfortunately, there appear to be problems no matter which version you start with. :-)

I tried VS2005 first, since Microsoft has stopped mainstream support for VS6, so e.g. recent Platform SDKs don't work with it. There are a number of issues using VS2005, starting with TCL patch #1096916; I think it was only applied to the 8.5 branch. When building I think we would want to define:

    _CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1
    _CRT_SECURE_NO_DEPRECATE=1
    _CRT_NONSTDC_NO_DEPRECATE=1

and potentially

    _USE_32BIT_TIME_T=1

I tried Visual Studio 6, but there I had problems with an old version of Ws2tcpip.h being used; that may just be how I have my include path set up, I'd have to investigate further. It causes problems in dns.c at least.

There are also a number of problems not specific to the compiler version:

- nsthread/compat.c, getopt.c, and tclatclose.c are gone; task.c, tclcache.c, tcltime.c, and url2file.c have been added. These changes need to be reflected in the build files.
- The #define's for NS_MAJOR_VERSION etc. were removed from ns.h but are still needed by the Windows build.
- Visual Studio doesn't allow mixed declarations and code for .c files, so in urlspace.c line 1431 we need to move the "doit" declaration to the top of JunctionFind.
- The new code in binder.c will need a number of additional ifdef's to compile on Win32.
- The Tcl debug suffix changed from "d" to "g" some time ago; this should be reflected in the post-build copy instructions.

As I mentioned in my earlier post, the AOLserver HEAD now has a unified command-line build system; it might be worth taking a closer look at that...

Thanks,

Jamie |
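A small hedged illustration of two of the points above (example code only, not taken from the project's build files): the CRT warning-suppression macros only take effect when they are defined before any CRT header is included, and MSVC compiles .c files as C89, which is why declarations such as "doit" must precede the first statement of a block:

    #define _CRT_SECURE_NO_DEPRECATE 1      /* silence "deprecated" CRT warnings   */
    #define _CRT_NONSTDC_NO_DEPRECATE 1     /* ...must come before any CRT header  */

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        /*
         * C89, as MSVC enforces for .c files: every declaration before the
         * first statement of the block. This is the same rule behind moving
         * the "doit" declaration to the top of JunctionFind in urlspace.c.
         */
        char buf[32];
        size_t n;

        strcpy(buf, "NaviServer");          /* would warn (C4996) without the macros above */
        n = strlen(buf);
        printf("%s (%u bytes)\n", buf, (unsigned) n);
        return 0;
    }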
From: Vlad S. <vl...@cr...> - 2006-01-12 21:27:33
|
Just off the top of my head: will it work if poll does a timeout for the trigger socket as well? This way it will be waking up constantly.

Zoran Vasiljevic wrote:

> Vlad, Stephen,
>
> What do you think?
>
> Begin forwarded message:
>
>> From: Jeff Rogers <dv...@DI...>
>> Date: 12 January 2006 20:34:09 CET
>> To: AOL...@LI...
>> Subject: [AOLSERVER] aolserver bug
>> Reply-To: AOLserver Discussion <AOL...@LI...>
>>
>> I found a bug in aolserver 4.0.10 (and previous 4.x versions, not sure about
>> earlier) that causes the server to lock up. I'm fairly certain I understand
>> the cause, and my fix appears to work although I'm not sure it is the best
>> approach.
>>
>> The bug: when benchmarking the server with a program like ab with
>> concurrency=1 (that is, it issues a single request, waits for it to
>> complete, then immediately issues the next one) the server will lock up,
>> consuming no cpu, but not responding to any requests.
>>
>> My explanation: when the max number of threads is hit, then when a new
>> connection is queued (NsQueueConn) it will be unable to find a free
>> connection in the pool and the queueing fails, and the new connection is
>> added to the wait list (waitPtr). If there is a wait list then no drivers
>> are polled for new connections (driver.c:801); rather it waits to be
>> triggered (SockTrigger) to indicate that a thread is available to handle
>> the connection. The triggering is done when the connection is completed,
>> within NsSockClose. NsSockClose in turn is going to be called somewhere
>> within the running of the connection (ConnRun - queue.c:617). However, the
>> available thread is not put back onto the queue free list until after
>> ConnRun has completed (queue.c:638). So if the driver thread runs in the
>> time slice after ConnRun has completed for all active connections but
>> before they are added back to the free list, then it attempts to queue the
>> connection, fails, adds it to the wait list, then waits for the trigger
>> which will never come, and everything stops.
>>
>> The problem is a race condition, and as such is extremely timing sensitive;
>> I cannot reproduce the problem on a generic setup, but when I'm
>> benchmarking my OpenACS setup it hits the bug very quickly and reliably.
>> The explanation suggests, and my testing confirms, that it seems to occur
>> much less reliably with concurrency > 1 or if there is a small delay
>> between sending the connections. Together these mean that the lockup is
>> most likely to show up in exactly my test case, while much less likely on a
>> production server or with high-concurrency load testing.
>>
>> My solution is to register SockTrigger as a ready proc, which is run
>> immediately after the freed conns are put back on to the free queue
>> (queue.c:645). This fixes the problem by ensuring that the trigger pipe is
>> notified strictly after the free queue is updated and the waiting conn will
>> successfully be queued. However I'm not sure this is best: NsSockClose
>> attempts to minimize the number of times SockTrigger is called in the case
>> when multiple connections are being closed at the same time; my fix means
>> it is called exactly once for each connection, or twice counting the call
>> in NsSockClose. It's not clear to me what adverse impact this has, if any,
>> but one thing that could be done is to remove the SockTrigger calls from
>> NsSockClose as redundant. Some additional logic could be added into
>> SockTrigger to not send to the trigger pipe under certain conditions (i.e.,
>> if it has been triggered and not acknowledged yet, or if there is no
>> waiting connection), but that would require mutex protection which could
>> ultimately be more expensive than just blindly triggering the pipe.
>>
>> Here's a context diff for my patch:
>>
>> *** driver.c.orig Thu Jan 12 11:39:05 2006
>> --- driver.c Thu Jan 12 11:39:10 2006
>> ***************
>> *** 773,778 ****
>> --- 773,781 ----
>>           drvPtr = nextDrvPtr;
>>       }
>>
>> +     /* register a ready proc to trigger the poll */
>> +     Ns_RegisterAtReady(SockTrigger,NULL);
>> +
>>       /*
>>        * Loop forever until signalled to shutdown and all
>>        * connections are complete and gracefully closed.
>>
>> -J

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/ |
From: Zoran V. <zv...@ar...> - 2006-01-12 19:55:18
|
Vlad, Stephen,

What do you think?

Begin forwarded message:

> From: Jeff Rogers <dv...@DI...>
> Date: 12 January 2006 20:34:09 CET
> To: AOL...@LI...
> Subject: [AOLSERVER] aolserver bug
> Reply-To: AOLserver Discussion <AOL...@LI...>
>
> I found a bug in aolserver 4.0.10 (and previous 4.x versions, not sure about
> earlier) that causes the server to lock up. I'm fairly certain I understand
> the cause, and my fix appears to work although I'm not sure it is the best
> approach.
>
> The bug: when benchmarking the server with a program like ab with
> concurrency=1 (that is, it issues a single request, waits for it to
> complete, then immediately issues the next one) the server will lock up,
> consuming no cpu, but not responding to any requests.
>
> My explanation: when the max number of threads is hit, then when a new
> connection is queued (NsQueueConn) it will be unable to find a free
> connection in the pool and the queueing fails, and the new connection is
> added to the wait list (waitPtr). If there is a wait list then no drivers
> are polled for new connections (driver.c:801); rather it waits to be
> triggered (SockTrigger) to indicate that a thread is available to handle
> the connection. The triggering is done when the connection is completed,
> within NsSockClose. NsSockClose in turn is going to be called somewhere
> within the running of the connection (ConnRun - queue.c:617). However, the
> available thread is not put back onto the queue free list until after
> ConnRun has completed (queue.c:638). So if the driver thread runs in the
> time slice after ConnRun has completed for all active connections but
> before they are added back to the free list, then it attempts to queue the
> connection, fails, adds it to the wait list, then waits for the trigger
> which will never come, and everything stops.
>
> The problem is a race condition, and as such is extremely timing sensitive;
> I cannot reproduce the problem on a generic setup, but when I'm
> benchmarking my OpenACS setup it hits the bug very quickly and reliably.
> The explanation suggests, and my testing confirms, that it seems to occur
> much less reliably with concurrency > 1 or if there is a small delay
> between sending the connections. Together these mean that the lockup is
> most likely to show up in exactly my test case, while much less likely on a
> production server or with high-concurrency load testing.
>
> My solution is to register SockTrigger as a ready proc, which is run
> immediately after the freed conns are put back on to the free queue
> (queue.c:645). This fixes the problem by ensuring that the trigger pipe is
> notified strictly after the free queue is updated and the waiting conn will
> successfully be queued. However I'm not sure this is best: NsSockClose
> attempts to minimize the number of times SockTrigger is called in the case
> when multiple connections are being closed at the same time; my fix means
> it is called exactly once for each connection, or twice counting the call
> in NsSockClose. It's not clear to me what adverse impact this has, if any,
> but one thing that could be done is to remove the SockTrigger calls from
> NsSockClose as redundant. Some additional logic could be added into
> SockTrigger to not send to the trigger pipe under certain conditions (i.e.,
> if it has been triggered and not acknowledged yet, or if there is no
> waiting connection), but that would require mutex protection which could
> ultimately be more expensive than just blindly triggering the pipe.
>
> Here's a context diff for my patch:
>
> *** driver.c.orig Thu Jan 12 11:39:05 2006
> --- driver.c Thu Jan 12 11:39:10 2006
> ***************
> *** 773,778 ****
> --- 773,781 ----
>           drvPtr = nextDrvPtr;
>       }
>
> +     /* register a ready proc to trigger the poll */
> +     Ns_RegisterAtReady(SockTrigger,NULL);
> +
>       /*
>        * Loop forever until signalled to shutdown and all
>        * connections are complete and gracefully closed.
>
> -J
|
From: Zoran V. <zv...@ar...> - 2006-01-12 18:32:27
|
Hi!

I just committed a bugfix in the Tcl core-8-4-branch which fixes a subtle but fatal memory bug. It is not easy to reproduce, but it happens (what would I do w/o Purify?)... If you need more stability, check out the core-8-4-branch and recompile the Tcl lib.

The bug was in Tcl_FSGetInternalRep, where a freed pointer was overwritten.

Cheers
Zoran |
From: Vlad S. <vl...@cr...> - 2006-01-12 14:58:11
|
Looks like a bug but looking at Ns_HtuuEncode i do not see where the problem could be. Bernd Eidenschink wrote: > Hi guys, > > while updating the sendmail.tcl to also be capable of the simple AUTH PLAIN > mechanism (if you have to use at least a basic form of relay control) > I noticed a problem with ns_base64encode: > > (1) Working: > > ~> perl -MMIME::Base64 -e 'print encode_base64("test\0test\0testpass");' > dGVzdAB0ZXN0AHRlc3RwYXNz > > % package require base64 > % ::base64::encode "test\0test\0testpass" > dGVzdAB0ZXN0AHRlc3RwYXNz > > (2) Failing: > > % ns_base64encode "test\0test\0testpass" > dGVzdMCAdGVzdMCAdGVzdHBhc3M= > > In the particular AUTH PLAIN situation, where user, realm and pw are split by > the null byte, the problem (one of the problems) is the missing null byte of > the ns_uuencoded string: > > ~> perl -MMIME::Base64 -e 'print decode_base64 > ("dGVzdMCAdGVzdMCAdGVzdHBhc3M=");'|hexdump > 0000000 6574 7473 80c0 6574 7473 80c0 6574 7473 > 0000010 6170 7373 > 0000014 > > and in the perl and tcllib example it works: > > ~> perl -MMIME::Base64 -e 'print decode_base64("dGVzdAB0ZXN0AHRlc3RwYXNz");'| > hexdump > 0000000 6574 7473 7400 7365 0074 6574 7473 6170 > 0000010 7373 > 0000012 > > Do I miss something or could it be a bug? > > -Bernd. > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel > -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Vlad S. <vl...@cr...> - 2006-01-12 14:41:32
|
I did some preliminary coding and it works; the speed of downloading 2 simultaneous huge files is on average the same as with the current model. The tests are not very useful because I tested on one computer only.

I am attaching the 2 files that I modified, but more thought and work should be done. Currently it supports only big files not returned in chunked mode by fastpath, but I think this is more than enough; all other dynamic content will go as usual. The WriterThread is at the end of driver.c, and connio.c was modified in the ConnSend function only.

Yes, the logging and running of atclose procs is the open question; I would run them in the conn thread. The parts between #ifdef/#endif in WriterThread are experiments and should be removed; they slowed down the server significantly.

Zoran Vasiljevic wrote:

> On 12.01.2006 at 09:55, Gustaf Neumann wrote:
>
>> the recent discussion was however, to generalize this further and use such
>> thread for sending and receiving, thus the proposed name "spooling-thread".
>
> Having such a specialized thread as part of the server is a great gain (no
> external modules, optimal speed, etc). Later on we can go to further extent
> and incorporate some kind of AIO in the thread to scale it even further up.
> The built-in fastpath code can transparently gain from this thread by
> delegating the dumb work of copying bytes from files to sockets and
> relieving the connection thread for real dynamic content tasks.
>
> IOW there are many reasons to make this as compact and as fast as possible
> and built into the server. So, we'll have a real beast capable of equally
> serving static AND dynamic content with the highest possible speed.
>
> I do not know how much work that would be, but it seems to me that a
> spool-thread could take this task. Depending on the number of CPUs in the
> box, we might start one or two or four such things. As the GHz-frenzy has
> deteriorated and more chip makers produce dual/quad chips, we'll
> immediately gain from the hardware.
>
> From the API side, we could re-route some commands sending output to
> transparently use this (new) infrastructure w/o the programmer's
> intervention.
>
> Well, there are MANY reasons why we'd opt to do it within the server as
> opposed to a Tcl-crafted solution. The proof of concept is already done, as
> you have very good experience with the thing, by using the trick with
> channel passing and a dedicated event-loop based thread. We'd really have
> to elevate it to a higher level with a writer or spooler or writer/spooler
> thread, however it eventually gets implemented.
>
> I'm really excited about all this... ;-)
>
> Cheers
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/ |
From: Bernd E. <eid...@we...> - 2006-01-12 14:32:46
|
Hi guys,

while updating the sendmail.tcl to also be capable of the simple AUTH PLAIN mechanism (if you have to use at least a basic form of relay control) I noticed a problem with ns_base64encode:

(1) Working:

    ~> perl -MMIME::Base64 -e 'print encode_base64("test\0test\0testpass");'
    dGVzdAB0ZXN0AHRlc3RwYXNz

    % package require base64
    % ::base64::encode "test\0test\0testpass"
    dGVzdAB0ZXN0AHRlc3RwYXNz

(2) Failing:

    % ns_base64encode "test\0test\0testpass"
    dGVzdMCAdGVzdMCAdGVzdHBhc3M=

In the particular AUTH PLAIN situation, where user, realm and pw are split by the null byte, the problem (one of the problems) is the missing null byte of the ns_uuencoded string:

    ~> perl -MMIME::Base64 -e 'print decode_base64("dGVzdMCAdGVzdMCAdGVzdHBhc3M=");' | hexdump
    0000000 6574 7473 80c0 6574 7473 80c0 6574 7473
    0000010 6170 7373
    0000014

and in the perl and tcllib example it works:

    ~> perl -MMIME::Base64 -e 'print decode_base64("dGVzdAB0ZXN0AHRlc3RwYXNz");' | hexdump
    0000000 6574 7473 7400 7365 0074 6574 7473 6170
    0000010 7373
    0000012

Do I miss something or could it be a bug?

-Bernd. |
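An editorial note on the symptom above: hexdump prints little-endian 16-bit words, so "80c0" is the byte pair 0xC0 0x80, which is Tcl's internal modified-UTF-8 encoding of a NUL byte. That suggests the command encoded the string representation of its argument rather than the raw bytes. A minimal sketch of the byte-safe pattern follows; this is an illustration only, not NaviServer's actual fix, and the command name and the hand-off to the encoder are assumptions:

    #include <tcl.h>

    static int
    Base64EncodeObjCmd(ClientData clientData, Tcl_Interp *interp,
                       int objc, Tcl_Obj *const objv[])
    {
        int length;
        unsigned char *bytes;

        if (objc != 2) {
            Tcl_WrongNumArgs(interp, 1, objv, "string");
            return TCL_ERROR;
        }

        /*
         * Tcl_GetByteArrayFromObj returns the raw bytes with embedded NULs
         * intact; Tcl_GetString would return the modified-UTF-8 string rep,
         * where NUL becomes the two bytes 0xC0 0x80 seen in the hexdump.
         */
        bytes = Tcl_GetByteArrayFromObj(objv[1], &length);

        /* ... hand (bytes, length) to the base64 encoder, e.g. Ns_HtuuEncode ... */
        (void) bytes;
        (void) clientData;
        return TCL_OK;
    }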
From: Zoran V. <zv...@ar...> - 2006-01-12 09:20:45
|
On 12.01.2006 at 09:55, Gustaf Neumann wrote:

> the recent discussion was however, to generalize this further and use such
> thread for sending and receiving, thus the proposed name "spooling-thread".

Having such a specialized thread as part of the server is a great gain (no external modules, optimal speed, etc). Later on we can go to further extent and incorporate some kind of AIO in the thread to scale it even further up. The built-in fastpath code can transparently gain from this thread by delegating the dumb work of copying bytes from files to sockets and relieving the connection thread for real dynamic content tasks.

IOW there are many reasons to make this as compact and as fast as possible and built into the server. So, we'll have a real beast capable of equally serving static AND dynamic content with the highest possible speed.

I do not know how much work that would be, but it seems to me that a spool-thread could take this task. Depending on the number of CPUs in the box, we might start one or two or four such things. As the GHz-frenzy has deteriorated and more chip makers produce dual/quad chips, we'll immediately gain from the hardware.

From the API side, we could re-route some commands sending output to transparently use this (new) infrastructure w/o the programmer's intervention.

Well, there are MANY reasons why we'd opt to do it within the server as opposed to a Tcl-crafted solution. The proof of concept is already done, as you have very good experience with the thing, by using the trick with channel passing and a dedicated event-loop based thread. We'd really have to elevate it to a higher level with a writer or spooler or writer/spooler thread, however it eventually gets implemented.

I'm really excited about all this... ;-)

Cheers
Zoran |
From: Gustaf N. <ne...@wu...> - 2006-01-12 08:56:01
|
Vlad Seryakov wrote: > > Hi, > > I have another idea for you to check: > > Will it be usefull to have special writer thread that will send > multiple files in async mode to multiple clients. For example if i > serve big ISO or movie files and have many connections, currently they > all use conn thread for along time until the whole file is sent. > Instead we can mark the conn to be used in writer thread and release > conn thread for other requests and in the meantime the writer thread > will send multiple FDs to clients in one big loop. this is exactly what we are doing in our production system, and what the code posted in http://sourceforge.net/mailarchive/message.php?msg_id=14351395 does. With the recent change in naviserver, that zoran put in, this code runs without a patch in naviserver (provided you have the tclthread library and the xotcl-support from aocs/packages/xotcl-core installed). the recent discussion was however, to generalize this further and use such thread for sending and receiving, thus the proposed name "spooling-thread". -gustaf > > Currently it is possible to simply change ConnSend in connio.c to > submit open descriptor to the writer queue and return marking the > connection so usual NsClose will not close actual connection socket. > Then write thread will be simple loop reading small chunks from every > file and sending to corresponding socket. > > Does it make sense? > |
From: Zoran V. <zv...@ar...> - 2006-01-12 08:03:35
|
Am 12.01.2006 um 05:19 schrieb Vlad Seryakov: > > I have another idea for you to check: :-) > > Will it be usefull to have special writer thread that will send > multiple files in async mode to multiple clients. For example if i > serve big ISO or movie files and have many connections, currently > they all use conn thread for along time until the whole file is sent. That is precisely what I had in mind and have written that in one of the recent emails. > Instead we can mark the conn to be used in writer thread and > release conn thread for other requests and in the meantime the > writer thread will send multiple FDs to clients in one big loop. Kindof. > > Currently it is possible to simply change ConnSend in connio.c to > submit open descriptor to the writer queue and return marking the > connection so usual NsClose will not close actual connection > socket. Then write thread will be simple loop reading small chunks > from every file and sending to corresponding socket. > > Does it make sense? I believe Stephen had some remarks. They were originally ment for the spool thread but can be applied here. Quote: What happens to the conn thread after this? It can't wait for completion, that would defeat the purpose. Do traces (logging etc.) run now, in the conn thread, or later in a spool thread? If logging runs now, but the upload fails, the log will be wrong. If traces run in the spool threads they may block. I was also thinking that perhaps a spooler thread might be re-used for this, although I would have nothing against a specialized thread doing this sort of work. With this approach, even the Fastpath code could pass the socket to the "writer" thread making all even more scalable. The connection thread could only be used for generating dynamic content, whereas the writer thread could handle all static content serving. This makes very much sense to me. Zoran |
From: Vlad S. <vl...@cr...> - 2006-01-12 04:16:35
|
Hi,

I have another idea for you to check: would it be useful to have a special writer thread that sends multiple files in async mode to multiple clients? For example, if I serve big ISO or movie files and have many connections, currently they all occupy a conn thread for a long time until the whole file is sent. Instead we can mark the conn to be handled by the writer thread and release the conn thread for other requests; in the meantime the writer thread will send multiple FDs to clients in one big loop.

Currently it is possible to simply change ConnSend in connio.c to submit the open descriptor to the writer queue and return, marking the connection so the usual NsClose will not close the actual connection socket. Then the writer thread will be a simple loop reading small chunks from every file and sending them to the corresponding socket.

Does it make sense?

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/ |
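A minimal sketch of the kind of loop described above (an editorial illustration, not the WriterThread code Vlad attached): the WriterConn structure, chunk size and lock-free queue handling are assumptions, and partial writes are glossed over.

    #include <poll.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define CHUNK 8192

    typedef struct WriterConn {
        int sock;                      /* client socket (non-blocking) */
        int fd;                        /* open file being streamed     */
        struct WriterConn *next;
    } WriterConn;

    /*
     * Round-robin over all queued downloads: whenever a client socket is
     * writable, copy the next chunk of its file; drop the entry on EOF or
     * error.  A real implementation would poll all sockets in one call,
     * track partial writes, and pick up new queue entries under a mutex.
     */
    static void
    WriterLoop(WriterConn **queuePtr)
    {
        char buf[CHUNK];

        while (*queuePtr != NULL) {
            WriterConn **linkPtr = queuePtr;

            while (*linkPtr != NULL) {
                WriterConn *wc = *linkPtr;
                struct pollfd pfd;
                int done = 0;

                pfd.fd = wc->sock;
                pfd.events = POLLOUT;
                pfd.revents = 0;

                if (poll(&pfd, 1, 10) > 0 && (pfd.revents & POLLOUT)) {
                    ssize_t n = read(wc->fd, buf, sizeof(buf));

                    if (n <= 0 || write(wc->sock, buf, (size_t) n) != n) {
                        done = 1;      /* file finished, or client went away */
                    }
                }
                if (done) {
                    *linkPtr = wc->next;
                    close(wc->fd);
                    close(wc->sock);
                    free(wc);
                } else {
                    linkPtr = &wc->next;
                }
            }
        }
    }

The open question raised elsewhere in this thread, where logging and atclose procs then run, is untouched by this sketch.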
From: Vlad S. <vl...@cr...> - 2006-01-11 19:57:23
|
No Zoran Vasiljevic wrote: > > Am 02.01.2006 um 13:31 schrieb Zoran Vasiljevic: > >> >> Am 31.12.2005 um 18:24 schrieb Vlad Seryakov: >> >>> Agreed on that as well, let's do it >> >> >> Conclusion: >> >> ns_register_filter when method urlPattern script ?arg1 arg2 ...?" >> >> The script definition must thus match: >> >> proc script {when args} {...} >> >> For calls w/o args, we are compatbile with AS. >> For calls w/ args, we are not. >> >> Who will do that? Stephen? > > > I'm looking into this now... I will suggest we change > this for *all* Tcl callbacks. Therefore the: > > Ns_TclCallback * > Ns_TclNewCallback(Tcl_Interp *interp, void *cbProc, > char *script, char *scriptarg) > > should migrate to: > > Ns_TclCallback * > Ns_TclNewCallback(Tcl_Interp *interp, void *cbProc, > char *script, int argc, char **argv) > > where argc will hold the number and argv strings of all > arguments. > I do not know if anybody from outside is using this call > in which case we should name it something like: > > Ns_TclNewCallbackEx > > (admitently, not very innovative...) > > Does anybody use this call? Vlad, do you use it in > some of your numerous modules? > > Cheers > Zoran > > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log > files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel > -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |
From: Vlad S. <vl...@cr...> - 2006-01-11 19:32:25
|
Sure Zoran Vasiljevic wrote: > > Am 06.01.2006 um 16:56 schrieb Vlad Seryakov: > >> I uploaded driver.c into SFE, it needs more testing because after my >> last corrections and cleanups it seems i broke something. > > > I would clean/destroy the "uploadTable" AND "hosts" hashtable > in NsWaitDriversShutdown() if the drivers have been stopped > allright: > > } else { > Ns_Log(Notice, "driver: shutdown complete"); > driverThread = NULL; > ns_sockclose(drvPipe[0]); > ns_sockclose(drvPipe[1]); > /* CLEANUP THE hosts hashtable */ > } > > } else { > Ns_Log(Notice, "spooler: shutdown complete"); > spoolThread = NULL; > ns_sockclose(spoolPipe[0]); > ns_sockclose(spoolPipe[1]); > /* CLEANUP THE uploadTable hashtable */ > } > > > > > ------------------------------------------------------- > This SF.net email is sponsored by: Splunk Inc. Do you grep through log > files > for problems? Stop! Download the new AJAX search engine that makes > searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click > _______________________________________________ > naviserver-devel mailing list > nav...@li... > https://lists.sourceforge.net/lists/listinfo/naviserver-devel > -- Vlad Seryakov 571 262-8608 office vl...@cr... http://www.crystalballinc.com/vlad/ |