From: Gustaf N. <ne...@wu...> - 2013-02-11 10:09:50
Hi David,

Hmm, Zoran - who made the change - seems to be busy at the moment. The documentation still mentions "ns_eval .. -sync ...", thread::send uses "-async" (and not "-asynch"), tclx uses "-async", wish uses "-sync", ... so I would say it is not worth breaking compatibility with AOLserver, and we should go back to "ns_eval -sync". If no one complains, I'll make the change in the next few days.

Thanks for pointing this out
-gustaf neumann

On 08.02.13 13:34, David Osborne wrote:
> Hi,
>
> The synchronous switch for ns_eval changed from -sync to
> -synch a few years back. I wanted to check if that was a
> typo that stuck or intentional?
>
> https://bitbucket.org/naviserver/naviserver/diff/nsd/init.tcl?diff2=e82368bbcae0&at=default
>
> Not a huge deal, but we're porting some code from AOLserver
> to NaviServer and this is one difference between the two I
> suspect may be unnecessary.
>
> Regards,
> --
> David Osborne
> Qcode Software Limited
> http://www.qcode.co.uk
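For reference, the AOLserver-compatible spelling under discussion is used roughly like this (a minimal sketch, not code from the repository; on NaviServer versions with the renamed switch, -synch would be needed instead):

    # Synchronously push a proc definition into all interps of the server.
    # Uses the AOLserver-style "-sync" flag discussed above.
    ns_eval -sync {
        proc hello {} {
            return "hello world"
        }
    }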
From: David O. <da...@qc...> - 2013-02-08 12:57:46
Hi,

The synchronous switch for ns_eval changed from -sync to -synch a few years back. I wanted to check if that was a typo that stuck or intentional?

https://bitbucket.org/naviserver/naviserver/diff/nsd/init.tcl?diff2=e82368bbcae0&at=default

Not a huge deal, but we're porting some code from AOLserver to NaviServer and this is one difference between the two I suspect may be unnecessary.

Regards,
--
David Osborne
Qcode Software Limited
http://www.qcode.co.uk
From: Gustaf N. <ne...@wu...> - 2013-01-23 10:51:17
Dear all,

The three commands ns_conn copy, ns_conncptofp, and ns_writecontent are very similar; the latter two are identical. All variants exist in AOLserver as well. Are there objections to marking ns_writecontent and ns_conncptofp as deprecated and recommending "ns_conn copy"? To my understanding, "ns_writecontent" is the oldest; at least this command should be marked deprecated, since its semantics are not obvious from the name. Btw, there are no regression tests for any of these.

-gn
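A minimal sketch of the preferred variant, assuming the AOLserver-compatible signature "ns_conn copy off len chan" (the temporary file handling is only illustrative):

    # Spool the raw request content to a temporary file via ns_conn copy.
    # Assumption: ns_conn copy takes offset, length and an open channel.
    set tmpfile [ns_tmpnam]
    set chan [open $tmpfile w]
    fconfigure $chan -translation binary
    ns_conn copy 0 [ns_conn contentlength] $chan
    close $chan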
From: Jeff R. <dv...@di...> - 2013-01-19 09:27:16
These functions are all part of the "public" API, so they could conceivably be used in C extensions. How many of them actually are used is another story, of course, and it's far from clear if anyone is developing C extensions.

In general, I think it's a good thing to have a rich API, including convenience functions (e.g., Ns_CacheSetValue) and plain old utility packages, like the Ns_List functions that aren't used but aren't hurting anything either. Duplicating functionality just makes for a confusing API, so the deprecated functions (which specifically do have a preferred replacement) can be eliminated, perhaps as part of a version bump.

Some of these should have Tcl APIs around them, such as the Cls functions. AOLserver has 'ns_cls' Tcl functions for these. Of course, these aren't needed without other capabilities like queue-wait or pre-write callbacks, as otherwise conn=interp=conn.

The APIs that should be more closely examined are those for actually using the platform, like the ConnSend and ConnWrite and WriteConn functions - just what is the difference between these? There should be one general way (including convenience wrappers) to specify data to be sent, but it seems like there's a lot more than that.

-J

Gustaf Neumann wrote:
> Dear elders of the naviserver list,
>
> below is a list of about 120 C functions which are not used by the
> current NaviServer. A few (about 5) were obsoleted by me, but the
> majority is just sitting around. Since these functions are not called
> by NaviServer, they are not covered by the regression tests. I think
> it is time for a major cleanup.
>
> - Most of the deprecated functions were deprecated in 2005, a few in 2007.
>   Are there arguments against dropping these now?
>
> - What should we do with the unused functions?
>   Delete these as well?
>   Mark these as deprecated?
>   Discuss these in more detail?
>
> Even when the functions are deleted, they are not lost; reviving them
> in case someone needs them could be done quickly.
>
> -gustaf neumann
From: Gustaf N. <ne...@wu...> - 2013-01-19 00:27:53
Dear elders of the naviserver list,

below is a list of about 120 C functions which are not used by the current NaviServer. A few (about 5) were obsoleted by me, but the majority is just sitting around. Since these functions are not called by NaviServer, they are not covered by the regression tests. I think it is time for a major cleanup.

- Most of the deprecated functions were deprecated in 2005, a few in 2007. Are there arguments against dropping these now?

- What should we do with the unused functions? Delete these as well? Mark these as deprecated? Discuss these in more detail?

Even when the functions are deleted, they are not lost; reviving them in case someone needs them could be done quickly.

-gustaf neumann

Deprecated:

Ns_BindSock Ns_ConnFlushHeaders Ns_ConnInit Ns_ConnLocation
Ns_ConnQueueHeaders Ns_ConnResetReturn Ns_ConnSetRequiredHeaders Ns_ConnWrite
Ns_DStringPop Ns_DStringPush Ns_DecodeUrlCharset Ns_DecodeUrlWithEncoding
Ns_EncodeUrlCharset Ns_EncodeUrlWithEncoding Ns_FreeConnInterp Ns_GetEncoding
Ns_PageRoot Ns_SetLocationProc Ns_SetLogFlushProc Ns_SetNsLogProc
Ns_TclInitInterps Ns_TclLogErrorRequest Ns_TclRegisterAtCleanup
Ns_TclRegisterAtCreate Ns_TclRegisterAtDelete Ns_TclRegisterDeferred
Ns_WriteCharConn Ns_WriteConn

Not used:

Ns_AbsoluteUrl Ns_AdpAppend Ns_AdpFlush Ns_AdpGetOutput Ns_AdpRequestEx
Ns_AuthorizeUser Ns_CacheSetValue Ns_CacheSignal Ns_CacheTryLock Ns_CacheWait
Ns_ClearSockErrno Ns_ClsAlloc Ns_ClsGet Ns_ClsSet Ns_CompressGzip
Ns_ConnCopyToDString Ns_ConnCopyToFd Ns_ConnCopyToFile Ns_ConnFlushContent
Ns_ConnGets Ns_ConnPuts Ns_ConnReadHeaders Ns_ConnReturnAdminNotice
Ns_ConnReturnHtml Ns_ConnReturnNoResponse Ns_ConnReturnNotImplemented
Ns_ConnReturnOk Ns_ConnReturnOpenFile Ns_ConnSendDString Ns_ConnSendFd
Ns_ConnWriteChars Ns_ConnWriteData Ns_DriverInit Ns_FreeRequest
Ns_GetAllAddrByHost Ns_GetRequest Ns_GetSockErrno Ns_GetThreadServer
Ns_GetUserHome Ns_IndexDel Ns_IndexDup Ns_IndexFindInf Ns_IndexIntInit
Ns_IndexStringAppend Ns_IndexStringDestroy Ns_IndexStringDup
Ns_IndexStringInit Ns_IndexStringTrunc Ns_IntPrint Ns_LibPath Ns_ListCopy
Ns_ListDeleteDuplicates Ns_ListDeleteIf Ns_ListDeleteLowElements Ns_ListFree
Ns_ListLast Ns_ListLength Ns_ListMapcar Ns_ListNmapcar Ns_ListPrint
Ns_NextWord Ns_ObjvDouble Ns_ObjvEval Ns_ObjvIndex Ns_ObjvLong
Ns_RelativeUrl Ns_SetListFree Ns_SetThreadServer Ns_SkipUrl Ns_SockPipe
Ns_SockPortBound Ns_SockRecv Ns_SockRecvBufs Ns_SockSend Ns_SockSendFileBufs
Ns_SockStrError Ns_SockTimedConnect Ns_SockWait Ns_StrNSt Ns_StringArgProc
Ns_UrlIsDir Ns_UrlIsFile Ns_UrlSpecificGetExact Ns_UrlSpecificGetFast
Ns_VarAppend Ns_VarExists Ns_VarGet Ns_VarIncr Ns_VarSet Ns_VarUnset
Ns_WaitProcess ns_closeonexec ns_duphigh ns_socknbclose
From: Gustaf N. <ne...@wu...> - 2013-01-18 11:45:44
Dear all,

I just committed a larger change to the NaviServer tip for handling partial writes (write operations that could not be completed in one go by the OS kernel, sending just a portion of the requested size). Previously, NsDriverSend() never performed partial writes, although some client functions handled them (not always correctly). The reason for this was that nssock.sendProc used Ns_SockSendBufs() as its implementation, and Ns_SockSendBufs() uses its own timeouts and event handling to deliver all content even on non-blocking sockets.

The consequence was that every write request in the writer thread waited for full completion, which essentially allows only a single outgoing request per writer thread at a time. This problem was reduced somewhat by the fact that file spools always read smallish chunks, where the blocking was usually not so long. The problem was especially bad when fastpath-mmap + writer threads were activated and/or when only one writer thread was defined. For files with sizes > writersize, this meant serialization of file deliveries. The complete-delivery scheme is not a problem when the connection threads send the content to the client (since multiple connection threads can send back multiple streams and do not block each other directly), but it is a problem when e.g. a single writer thread should deliver the content of multiple requests.

Now, the new code handles partial low-level writes correctly by allowing partial writes at the driver level. This requires some reshuffling of the functions:

- The old SockSend() is now the implementation of nssock.sendProc and performs partial writes on non-blocking sockets.
- Ns_SockSendBufs() delivers all iovec bufs like before, but now calls NsDriverSend() instead of SockSend() directly.
- Ns_ConnSend() calls Ns_SockSendBufs() instead of NsDriverSend() in cases where the writer thread is not used.
- Ns_SockSendBufs() calls NsDriverSend() instead of SockSend().
- Ns_SockSendFileBufs() calls NsDriverSend() instead of Ns_SockSendBufs().

Further changes:

- Group fields in WriterSock to separate the concerns of file-spooling and memory-based requests.
- Factor out WriterSend() from DriverThread() to improve readability and locality.

The code was tested with nssock + nsssl on Mac OS X and Linux, and is running on http://next-scripting.org

all the best
-gustaf neumann
From: Gustaf N. <ne...@wu...> - 2013-01-17 09:45:40
Dear all,

The tip version of NaviServer now supports TCP_FASTOPEN (sometimes called TFO). Server-side kernel support for TFO was introduced with Linux 3.7.0 (used e.g. in FC 18, which became available a few days ago). TFO can result in speed improvements of between 4% and 41% for page load times (see [1]).

To use TFO in NaviServer, it has to be configured/compiled/executed on a machine with a Linux kernel that supports it, and "deferaccept" has to be turned on.

-gustaf neumann

[1] https://lwn.net/Articles/508865/
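To turn this on, the relevant part of a config file might look roughly like the following sketch (the server name "server1" and the other parameters are placeholders taken from a typical nssock setup, not from this thread):

    # nssock driver section of an assumed config file
    ns_section "ns/server/server1/module/nssock"
    ns_param address     0.0.0.0
    ns_param port        8000
    ns_param deferaccept true   ;# needed so TCP_FASTOPEN/TFO can be used as described above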
From: Gustaf N. <ne...@wu...> - 2013-01-14 12:00:30
Dear all,

I've just committed an update to NaviServer to handle HTML output streaming (e.g. via ns_write) asynchronously over the writer thread(s) as well. Therefore, a slow delivery line (e.g. returning the content over a slow connection) does not stall a connection thread. This modification removes the vulnerability of streaming output to slow-read attacks, since the time spent in the connection thread is determined only by the computation and not by the delivery throughput of the content.

This feature is currently turned off by default and can be controlled via the driver's configuration variable "writerstreaming". For runtime configuration, ns_writer now has two new subcommands, "ns_writer size" and "ns_writer streaming", to modify the writer settings without a reboot.

With this change, all input and output of NaviServer can be handled asynchronously; the connection threads can run without being blocked by slow connections. I have updated the commented changelog at https://next-scripting.org/xowiki/docs/misc/naviserver-connthreadqueue

All the best
-gustaf neumann
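A small sketch of both ways to set this, based on the parameter and subcommands named above (the exact argument forms of the ns_writer subcommands and the section path are assumptions):

    # in the config file: enable streaming over the writer threads per driver
    ns_section "ns/server/server1/module/nssock"
    ns_param writerstreaming true

    # at runtime, without a reboot (argument forms assumed):
    ns_writer streaming true    ;# toggle streaming delivery via writer threads
    ns_writer size 4096         ;# adjust the writersize threshold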
From: Gustaf N. <ne...@wu...> - 2013-01-08 10:30:33
On 03.01.13 18:48, Jeff Rogers wrote:
> Does it make sense to have 2 different information-gathering commands,
> tho? Why not consolidate all these functions into "ns_info", giving
> appropriate subcommands -server and/or -pool flags as necessary?

Well, we also have ns_conn and ns_thread providing introspection about different kinds of objects, so ns_server seems natural (one might argue for an ns_pool cmd in NaviServer). Pushing all of these into ns_info is problematic. Furthermore, some of the commands for ns_server/ns_conn/ns_thread might also be used for setting such values, which would not appear natural via ns_info. Last but not least, backward compatibility also speaks for ns_server.

Btw, we now have

   ns_server ?-server s? ?-pool p? maxthreads ?value?
   ns_server ?-server s? ?-pool p? minthreads ?value?

to query/change the thread configuration at runtime. Now one can have a watchdog listening to the health indicators of NaviServer and adjust these parameters without a reboot. This improves the situation until we have really good autoscaling. The new queue-length-based thread calculation works quite well for creating additional threads, but not for keeping the number of threads at the higher value for sufficiently long. If there are already e.g. 10 connection threads running and #11 starts due to a peak (even with a long timeout or connsperthread), it is quite likely that some of the other threads come to the end of their life-cycle soon and the number falls back to 10. We might do better (i.e. less nervously) based on e.g. exponentially smoothed queue lengths/times (or idle threads), but this requires some fiddling around with the parameters as well.

> As for the pool subcommands, do the existing subcommands handle any
> new data points that may be available? For example, if there were
> nested pools you would want parents and children, or if they were
> dynamic you would want to know when they were created. Or more
> concretely, can they provide info on how many connections in the pool
> are being handled with background delivery?

We now have all "statistics" values at the server level, so there is still something to do. We should also push the locking to the pool level; some of the locks are still for the full nsd (these locks are relatively infrequent and short-lived, so this is not urgent).

-gustaf
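A rough sketch of such a watchdog, using only the introspection and setter commands mentioned above (the interval, thresholds and limits are made up for illustration):

    # Assumed watchdog: every minute, raise minthreads when requests are
    # queuing up, and lower it again when the queue is empty.
    ns_schedule_proc 60 {
        set waiting    [ns_server waiting]
        set minthreads [ns_server minthreads]
        if {$waiting > 10 && $minthreads < 20} {
            ns_server minthreads [expr {$minthreads + 1}]
        } elseif {$waiting == 0 && $minthreads > 2} {
            ns_server minthreads [expr {$minthreads - 1}]
        }
    }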
From: Jeff R. <dv...@di...> - 2013-01-03 17:48:56
Gustaf Neumann wrote:
> In NaviServer, we have essentially the two commands "ns_info" and
> "ns_server" to obtain information about the "nsd" process, about a
> single (virtual) server and about (connection) pools.
> I would propose to make the following changes to address these problems:

The changes you suggest are sensible, giving better overall introspection. Does it make sense to have 2 different information-gathering commands, tho? Why not consolidate all these functions into "ns_info", giving appropriate subcommands -server and/or -pool flags as necessary?

As for the pool subcommands, do the existing subcommands handle any new data points that may be available? For example, if there were nested pools you would want parents and children, or if they were dynamic you would want to know when they were created. Or more concretely, can they provide info on how many connections in the pool are being handled with background delivery?

> Some of the variants with the specified ?-server ...? might have
> concurrency issues, but maybe it is not necessary to support for every

I recall some comments in the past that ns_server doesn't do all the necessary locking and so is not particularly safe, being intended for occasional use on development servers rather than frequent use on heavily loaded servers. That may have been about older or AOLserver-specific code.

-J
From: Gustaf N. <ne...@wu...> - 2013-01-01 15:57:56
Dear naviserver community,

First, all the best in the new year!

A few days ago, I merged the naviserver-connthreadqueue fork back into the naviserver trunk, since the server has been running flawlessly in our production environment for a few weeks. While doing some code cleanup and documentation polishing, the current state of the server introspection commands caught my attention. Some of the commands seem to have been developed under the assumption that one nsd process hosts just a single server with a single pool. Let me try to summarize my understanding in this regard:

- One nsd might have multiple (virtual) servers with different ip/port/driver.
- Every server has a default pool and optionally multiple additional connection pools selected via url-mapping.
- Every pool might have different settings like minthreads/maxthreads/..., but also different statistics.

In NaviServer, we have essentially the two commands "ns_info" and "ns_server" to obtain information about the "nsd" process, about a single (virtual) server and about (connection) pools.

##################################################################
#
# (1) infos about the global state of nsd
#
ns_info address
ns_info argv0
ns_info boottime
ns_info builddate
ns_info callbacks
ns_info config
ns_info home
ns_info hostname
ns_info locks
ns_info log
ns_info major
ns_info mimetypes
ns_info minor
ns_info name
ns_info nsd
ns_info server            (name of the current server, executing this cmd)
ns_info servers           (names of the defined servers)
ns_info shutdownpending
ns_info sockcallbacks
ns_info started
ns_info tag
ns_info threads
ns_info uptime
ns_info version
ns_info winnt *

#
# (2) info about a server, but currently just available
#     for the "current" server
#
ns_info filters
ns_info pagedir
ns_info pageroot
ns_info requestprocs
ns_info tcllib
ns_info traces
ns_info url2file

#
# (3) info about the current server, with a potentially
#     non-default pool
#
ns_server all ?pool?          (union of [ns_server active] and [ns_server queued])
ns_server active ?pool?       (currently active connections from the pool)
ns_server queued ?pool?       (currently queued connections from the pool)
ns_server connections ?pool?  (total number of connections processed)
ns_server keepalive ?pool? *  (always 0)
ns_server pools ?pool?        (list names of pools)
ns_server threads ?pool?      (info about currently running threads)
ns_server stats ?pool?        (cumulative statistics from a single server, e.g. requests, spools, ...)
ns_server waiting ?pool?      (number of currently waiting connections of the pool)
##################################################################

One can see from this summary that the information about a single (the current) server can be obtained in part from "ns_info" and in part from "ns_server". Information about different servers cannot be obtained. Information about pools can be obtained from ns_server, but just for the current server. Furthermore, "ns_server" has subcommands which are independent from the pools, but still have a ?pool? argument. The optional argument is irrelevant for the following commands:

ns_server connections ?pool?
ns_server keepalive ?pool?
ns_server pools ?pool?
ns_server stats ?pool?

I would propose to make the following changes to address these problems:

##################################################################
#
# Keep the ns_info commands from group (1) as they are, with the exception
# of "ns_info winnt" (see below).
#
# Turn the previous ns_info commands (2) into ns_server commands, since
# these commands return info about a server. The ns_server command
# should be extended with an optional "-server" flag, which defaults
# to the current server.
#
ns_server ?-server server? filters
ns_server ?-server server? pagedir
ns_server ?-server server? pageroot
ns_server ?-server server? requestprocs
ns_server ?-server server? tcllib
ns_server ?-server server? traces
ns_server ?-server server? url2file

#
# Modify the previous ns_server commands (3) with the unneeded
# trailing pool argument as follows:
#
# - remove the unneeded pool argument
# - add an optional "-server" flag, which defaults
#   to the current server.
#
ns_server ?-server server? connections
ns_server ?-server server? pools
ns_server ?-server server? stats

#
# Modify the previous ns_server commands (3) with the pool
# argument as follows:
# - add an optional "-pool" flag, which defaults to the
#   default pool.
# - remove the pool argument
# - add an optional "-server" flag, which defaults to the
#   current server (as above)
#
ns_server ?-server server? ?-pool pool? all
ns_server ?-server server? ?-pool pool? active
ns_server ?-server server? ?-pool pool? queued
ns_server ?-server server? ?-pool pool? threads
ns_server ?-server server? ?-pool pool? waiting
##################################################################

Some of the variants with the specified ?-server ...? might have concurrency issues, but maybe it is not necessary to support the "-server" flag for every subcommand from the beginning, since most installations still seem to have single-server/single-pool (SS-SP) configs. But by going this way, we can add this gradually without facing the need to change the interface. For users with SS-SP, the previously defined "ns_server" commands are compatible. For "ns_info" it should be possible to add a compatibility layer spitting out a "deprecated" message before calling the new interface.

The commands

ns_info winnt
ns_server keepalive ?pool?

should be removed (the first one is obsoleted by ::tcl_platform, the second one is a dummy placeholder which always returns the constant 0).

Comments?
-gustaf neumann
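For illustration, a query against the proposed interface might then look like the following sketch (hypothetical, since the proposal is not implemented yet; the server and pool names are made up):

    # hypothetical calls against the proposed ns_server interface
    set waiting [ns_server -server vhost1 -pool slow waiting]
    set stats   [ns_server -server vhost1 stats]
    ns_log notice "pool 'slow' of vhost1: $waiting waiting connections; stats: $stats"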
From: Gustaf N. <ne...@wu...> - 2012-12-22 11:29:03
Dear all,

While cleaning up the code, I was checking the mutex frequencies and stumbled upon "nsd:conf" (the string name of nsconf.state.lock). We see on our production server up to 16,000 locks per second on this mutex; this is a factor of 25 higher than the total number of e.g. the nsd:queue locks that we have reduced. Most of the "nsd:conf" locks come from Ns_InfoStarted() in Ns_ConfigCreateSection(), which is called whenever ConfigGet is called, which is a frequent operation.

It is somewhat strange that the most hammered mutex is used for what looks to me like a rather minor issue: a check whether the server is started or not. It looks to me as if some people have already looked at this heavy hammering (see the comment "This dirty-read is worth the effort." below), and there seems to be a very simple solution for practically eliminating this issue, which is also much faster: why not call GetSection() in ConfigGet() instead of Ns_ConfigCreateSection()?

I have changed this locally and all tests run fine, but since the solution is so straightforward, I wonder whether I have missed something. The relevant snippets are below, including those modifying the starting state.

all the best
-gustaf neumann

--- a/nsd/config.c	Thu Dec 20 12:47:08 2012 +0100
+++ b/nsd/config.c	Sat Dec 22 12:02:13 2012 +0100
@@ -816,7 +816,7 @@
     s = NULL;
     if (section != NULL && key != NULL) {
-        set = Ns_ConfigCreateSection(section);
+        set = GetSection(section, 0);
         if (set != NULL) {
             int i;
             if (exact) {

config.c:

Ns_Set *
Ns_ConfigCreateSection(CONST char *section)
{
    int create = Ns_InfoStarted() ? 0 : 1;
    return (section ? GetSection(section, create) : NULL);
}

static char *
ConfigGet(CONST char *section, CONST char *key, int exact, CONST char *defstr)
{
    Ns_Set *set;
    char *s;

    s = NULL;
    if (section != NULL && key != NULL) {
        set = Ns_ConfigCreateSection(section);
        ...
}

int
Ns_InfoStarted(int caller)
{
    int started;

    Ns_MutexLock(&nsconf.state.lock);
    started = nsconf.state.started;
    Ns_MutexUnlock(&nsconf.state.lock);

    return started;
}

nsconf.c:

NsInitConf()
{
    ...
    nsconf.state.started = 1;
    ...
}

nsmain.c:

Ns_Main()
{
    ...
    /*
     * Mark the server stopped until initialization is complete.
     */
    Ns_MutexLock(&nsconf.state.lock);
    nsconf.state.started = 0;
    Ns_MutexUnlock(&nsconf.state.lock);
    .....
    /*
     * Run pre-startups and start the servers.
     */
    NsRunPreStartupProcs();
    NsStartServers();
    NsStartDrivers();

    /*
     * Signal startup is complete.
     */
    StatusMsg(running);

    Ns_MutexLock(&nsconf.state.lock);
    nsconf.state.started = 1;
    Ns_CondBroadcast(&nsconf.state.cond);
    Ns_MutexUnlock(&nsconf.state.lock);
    ...
}

....

int
Ns_WaitForStartup(void)
{
    /*
     * This dirty-read is worth the effort.
     */
    if (nsconf.state.started) {
        return NS_OK;
    }
    Ns_MutexLock(&nsconf.state.lock);
    while (!nsconf.state.started) {
        Ns_CondWait(&nsconf.state.cond, &nsconf.state.lock);
    }
    Ns_MutexUnlock(&nsconf.state.lock);

    return NS_OK;
}
From: Gustaf N. <ne...@wu...> - 2012-12-19 11:21:18
Dear friends,

again, an update on naviserver-connthreadqueue:

(1) Observing the traffic on the low-traffic site next-scripting.org still showed sometimes surprisingly slow "response times", where the pure runtime (not including queuing time etc.) was 28 seconds. Under normal conditions, the response time is just a fraction of a second. It turned out that these requests came from a mobile broadband service with low bandwidth.

92.40.253.47 - - [12/Dec/2012:22:19:14 +0100] "GET /2.0b3/doc/nx HTTP/1.1" 200 23406 "https://next-scripting.org/2.0b3/doc/xotcl2" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0" 28.700910 "1355347126.164479 14.759801 0.000122 0.007112 28.693676"

This page is an adp-page, which in the current NaviServer has no chance of being delivered via the writer thread (as shown by the call graph posted earlier). Maybe your first reaction is "don't care, bandwidths are getting better", but the problem is quite serious. It is very easy to start a DoS attack by issuing just a few requests with a low bandwidth. If the server has e.g. a max of 10 connection threads defined, just 10 slow requests to adp-pages bring the server to a halt for arbitrarily long times, although the server is computationally able to handle several hundred/thousand of these adp-requests per second. The attacker just has to accept a few bytes from time to time to stay above the write timeout of the server. Browsing around shows that this is a well-known attack that affects apache 1.x and 2.x as well, but not e.g. nginx, which is fully asynchronous and usually performs request spooling/buffering in front of the back-end. So, asynchronous receives and deliveries are not only a nice feature.

Therefore, I have added an interface between the "string based" delivery API and the writer thread and have had this running on next-scripting.org for a few days; everything seems to work flawlessly, and not a single long request blocking happened.

(2) We started using naviserver-connthreadqueue on our production site a few days ago (which runs behind nginx, therefore it profits only in part from the changes). NaviServer currently sees there about 1.5 million page views per day. The experiences are:

- The config file needs some tweaking to keep queuing times low. I have set minthreads to 7 (before we had 3, but with a different interpretation).

- The new async log writer works nicely, although it might reverse the order of entries in the log file. Will look into that.

- The writer thread was not used so far, but had some troubles: (a) we saw peaks of 600,000 mutex locks/second, most of these from a mutex of the writer thread; under normal conditions, we see on avg 8k mutex locks/second. (b) "ns_writer list" was not thread-safe (it crashed). While looking into problem (a), it turned out that the writer thread did not have a clear EOF-handling (added POLLHUP handling) and it was possible that the writer might release a socket structure while the driver still depended on it. Now the lifecycle management is all in the driver, and the problem is gone. Also (b) is fixed by now.

- There are far fewer thread creates, and the memory consumption seems better, but we need some longer measurements. The average response time might be slightly better, but it is within the daily variation range. Since we have no data about the queuing time of the old server, this is still hard to compare. We still have to lower the debug output, so it is too early for an assessment.

(3) Concerning zip-delivery of files: I have refactored the code a little to ease the zip handling on the Tcl layer. There is now a new subcommand "ns_conn zipaccepted", which implements the rather complex preference rules. The code below could easily be used e.g. in a filter, or it can be extended to "compile" different formats into a target format. At least something to play with in a first step. I will commit the changes asap to naviserver-connthreadqueue, clean up, document, etc....

all the best.
-gustaf neumann

set file graph.js
set fn [ns_info pageroot]/$file
set mime [ns_guesstype $file]
if {[ns_conn zipaccepted] && [ns_set iget [ns_conn headers] Range] eq ""} {
    if {![file readable $fn.gz] || [file mtime $fn] > [file mtime $fn.gz]} {
        exec gzip -9 < $fn > $fn.gz
    }
    if {[file readable $fn.gz]} {
        set fn $fn.gz
        ns_set put [ns_conn outputheaders] Vary Accept-Encoding
        ns_set put [ns_conn outputheaders] Content-Encoding gzip
    }
}
ns_returnfile 200 $mime $fn

On 09.12.12 19:48, Gustaf Neumann wrote:
> Dear all,
>
> On the link below I have tried to summarize the changes in the
> naviserver-connthreadqueue fork, taken from several mails. The summary
> also contains a few charts showing the stepwise improvements, which,
> I hope, someone might find interesting.
>
> https://next-scripting.org/xowiki/docs/misc/naviserver-connthreadqueue
>
> The new version uses TCP_CORK for the most interesting cases. The
> changes from this feature are not dramatic, since NaviServer often
> aggregates the strings to be written in DStrings, so there are
> apparently not many small writes.
>
> If nobody objects, I would tag the current tip of naviserver with
> 4.99.4 and move the changes over to the main repository in the near
> future .... after I make an iteration over the affected documentation.
>
> -gustaf neumann
From: Gustaf N. <ne...@wu...> - 2012-12-09 18:48:37
Dear all,

On the link below I have tried to summarize the changes in the naviserver-connthreadqueue fork, taken from several mails. The summary also contains a few charts showing the stepwise improvements, which, I hope, someone might find interesting.

https://next-scripting.org/xowiki/docs/misc/naviserver-connthreadqueue

The new version uses TCP_CORK for the most interesting cases. The changes from this feature are not dramatic, since NaviServer often aggregates the strings to be written in DStrings, so there are apparently not many small writes.

If nobody objects, I would tag the current tip of naviserver with 4.99.4 and move the changes over to the main repository in the near future .... after I make an iteration over the affected documentation.

-gustaf neumann
From: Gustaf N. <ne...@wu...> - 2012-12-06 13:15:52
On 06.12.12 13:13, Stephen Deasey wrote:
> I guess it depends on how the website is deployed: in a more modern
> set-up CSS is often compiled from SASS or LESS; javascript needs to be
> minified and combined, possibly compiled using Google's optimising
> compiler, maybe from coffee script; images are compressed, etc. Making
> gzip versions of static text/* files is just one more target in a
> Makefile. Which is a little different than the old PHP/OpenACS
> perspective where everything happens at run-time.

Modern PHP/OpenACS installations use reverse proxies like nginx for static content, where one has the option to compress files on the fly or to deliver pre-compressed files. When we switched our production site to gzip delivery for the dynamic content, we did not notice any difference in CPU load. Sure, delivering static gzipped content is faster than zipping on the fly, but I would like to keep the burden on the site master low.

Not sure why we are discussing this now. My original argument was that the API structure for deliveries is overly complicated (to put it mildly) and not orthogonal (I failed to understand it without drawing the call graph). There is a lot of room for improvement.

-gustaf neumann
From: Stephen D. <sd...@gm...> - 2012-12-06 12:14:17
On Tue, Dec 4, 2012 at 10:55 PM, Gustaf Neumann <ne...@wu...> wrote:
> On 04.12.12 20:06, Stephen Deasey wrote:
> > - we should actually ship some code which searches for *.gz versions of
> >   static files
>
> this would mean to keep a .gz version and a non-.gz version in the file
> system for the cases where gzip is not an accepted encoding. Not sure I
> would like to manage these files and to keep them in sync.... the fast-path
> cache could keep gzipped copies, invalidation is already there.

I guess it depends on how the website is deployed: in a more modern set-up CSS is often compiled from SASS or LESS; javascript needs to be minified and combined, possibly compiled using Google's optimising compiler, maybe from coffee script; images are compressed, etc. Making gzip versions of static text/* files is just one more target in a Makefile. Which is a little different than the old PHP/OpenACS perspective where everything happens at run-time.
From: Stephen D. <sd...@gm...> - 2012-12-06 12:04:33
On Tue, Dec 4, 2012 at 10:24 PM, Gustaf Neumann <ne...@wu...> wrote:
>
> Today, I was hunting another problem in connection with nsssl, which
> turns out to be a weakness of our interfaces. The source of the problem
> is that the buffer management of OpenSSL is not aligned with the buffer
> management in NaviServer. In the NaviServer driver, all receive requests
> are triggered via poll(), when sockets are readable. With OpenSSL, data
> might also be available as a leftover from an earlier receive when a
> smaller buffer was provided. During upload spooling, NaviServer requested
> receive operations with a 4KB buffer. OpenSSL might receive 16KB "at
> once". The read operation with the small buffer will not drain the
> OpenSSL buffer, and later, poll() will not be triggered by the fact
> that the socket is readable (since the buffer is still quite full). The
> problem happened in NaviServer when the input was spooled (e.g. file
> uploads). I have doubts that this combination ever worked. I have
> corrected the problem by increasing the buffer variable in driver.c.
> The cleaner implementation would be to add an "Ns_DriverReadableProc
> Readable" similar to the "Ns_DriverKeepProc Keep", but that would
> affect the interface of all drivers.

Another way to use the OpenSSL library is to manage socket reads/writes yourself and hand memory buffers to OpenSSL to encrypt/decrypt.
From: Gustaf N. <ne...@wu...> - 2012-12-05 01:12:04
On 05.12.12 00:41, Stephen Deasey wrote:
> On Wed, Nov 28, 2012 at 10:38 AM, Gustaf Neumann <ne...@wu...> wrote:
>> It is interesting to see that with always 5 connection threads running and
>> using jemalloc, we see an rss consumption only slightly larger than with
>> plain tcl and zippy malloc having maxthreads == 2, having fewer requests
>> queued.
>>
>> Similarly, with tcmalloc we see with minthreads 5, maxthreads 10
>>
>> requests 2062 spools 49 queued 3 connthreads 6 rss 376
>> requests 7743 spools 429 queued 359 connthreads 11 rss 466
>> requests 8389 spools 451 queued 366 connthreads 12 rss 466
>>
>> which is even better.
> Min/max threads 5/10 better than 2/10?

The numbers show that 5/10 with tcmalloc is better than 5/10 with jemalloc and only slightly worse than 2/2 with zippy malloc.

-gn
From: Stephen D. <sd...@gm...> - 2012-12-04 23:42:17
On Wed, Nov 28, 2012 at 10:38 AM, Gustaf Neumann <ne...@wu...> wrote:
>
> It is interesting to see that with always 5 connection threads running and
> using jemalloc, we see an rss consumption only slightly larger than with
> plain tcl and zippy malloc having maxthreads == 2, having fewer requests
> queued.
>
> Similarly, with tcmalloc we see with minthreads 5, maxthreads 10
>
> requests 2062 spools 49 queued 3 connthreads 6 rss 376
> requests 7743 spools 429 queued 359 connthreads 11 rss 466
> requests 8389 spools 451 queued 366 connthreads 12 rss 466
>
> which is even better.

Min/max threads 5/10 better than 2/10? How about 7/10? When you hit 10/10 you can delete an awful lot of code :-)
From: Stephen D. <sd...@gm...> - 2012-12-04 23:29:31
On Tue, Dec 4, 2012 at 10:55 PM, Gustaf Neumann <ne...@wu...> wrote:
>
> The code in naviserver-connthreadqueue already handles read-aheads with SSL.
> I have already removed these hacks there; I think these were in part
> responsible for the sometimes erratic response times with SSL.

Well, I think the thing here is that once upon a time SSL was considered computationally expensive (I don't know if it still is, with recent Intel CPUs having dedicated AES instructions etc.). Read-ahead is good because you don't want an expensive conn thread waiting around for the whole request to arrive, packet by packet. But with SSL the single driver thread will be decrypting read-ahead data for multiple sockets and may run out of CPU, stalling the request pipeline, starving the conn threads. By making the SSL driver thread non-async you lose out on read-ahead as that all happens in the conn thread, but you gain CPU resources on a multi-cpu system (all of them, today). AOLserver 4.5 added a pool of read-ahead threads, one per socket IIRC, to keep the benefits of read-ahead while gaining CPU parallelism.

- Does a single driver thread have enough computational resources to decrypt all sockets currently in read-ahead? This is going to depend on the algorithm. Might want to favour AES if you know your CPU has support.

- Which is worse, losing read-ahead, or losing CPU parallelism?

- If a read-ahead thread pool is added, should it be one thread per socket, which is simple, or one thread per CPU and some kind of balancing mechanism?
From: Gustaf N. <ne...@wu...> - 2012-12-04 22:55:37
On 04.12.12 20:06, Stephen Deasey wrote:
> - we should actually ship some code which searches for *.gz versions
>   of static files

This would mean keeping a .gz version and a non-.gz version in the file system for the cases where gzip is not an accepted encoding. Not sure I would like to manage these files and to keep them in sync.... the fast-path cache could keep gzipped copies, invalidation is already there.

> > * Similarly, range requests are not handled when the data is not
> >   sent ReturnOpen to the writer queue.
>
> The diagram shows Ns_ConnReturnData also calls ReturnRange, and hence
> the other leg of fastpath and all the main data sending routines
> should handle range requests.

This path is ok. When neither mmap nor cache is set, fastpath can call ReturnOpenFd, and ReturnOpen sends the data blindly to the writer if configured, which does not handle ranges. This needs some refactoring.

> > * there is quite some potential to simplify / orthogonalize the
> >   servers infrastructure.
> > * improving this structure has nothing to do with
> >   naviserver-connthreadqueue, and should happen at some time in the
> >   main tip.
>
> The writer thread was one of the last bits of code to land before
> things quietened down, and a lot of the stuff that got talked about
> didn't get implemented.

I am not complaining, just trying to understand the historical layers. Without the call graph the current code is hard to follow.

> One thing that was mentioned was having a call-back interface where
> you submit a function to the writer thread and it runs it. This would
> allow other kinds of requests to be served async.
>
> One of the things we've been talking about with the connthread work is
> simplification. The current code, with its workarounds for stalls and
> managing thread counts, is very complicated. If it were simplified and
> genericised it could also be used for background writer threads, and
> SSL read-ahead threads (as in aolserver > 4.5). So, that's another +1
> for keeping the conn threads simple.

The code in naviserver-connthreadqueue already handles read-aheads with SSL. I have already removed these hacks there; I think these were in part responsible for the sometimes erratic response times with SSL.

-gustaf neumann
From: Gustaf N. <ne...@wu...> - 2012-12-04 22:24:55
On 04.12.12 20:25, Stephen Deasey wrote:
> I found this nifty site the other day:
>
> https://www.ssllabs.com/ssltest/analyze.html?d=next-scripting.org
>
> It's highlighting a few things that need fixed in the nsssl module,
> including a couple of security bugs. Looks like relatively little code
> though.

The report is already much better: now everything is green. Most of the complaints could be removed via configuration; just two issues required code changes (one requires a flag which is not available in all current OpenSSL implementations, such as the one from Mac OS X, the other required adding a callback). The security rating is now better than that of nginx.

Today, I was hunting another problem in connection with nsssl, which turns out to be a weakness of our interfaces. The source of the problem is that the buffer management of OpenSSL is not aligned with the buffer management in NaviServer. In the NaviServer driver, all receive requests are triggered via poll(), when sockets are readable. With OpenSSL, data might also be available as a leftover from an earlier receive when a smaller buffer was provided. During upload spooling, NaviServer requested receive operations with a 4KB buffer. OpenSSL might receive 16KB "at once". The read operation with the small buffer will not drain the OpenSSL buffer, and later, poll() will not be triggered by the fact that the socket is readable (since the buffer is still quite full). The problem happened in NaviServer when the input was spooled (e.g. file uploads). I have doubts that this combination ever worked. I have corrected the problem by increasing the buffer variable in driver.c. The cleaner implementation would be to add an "Ns_DriverReadableProc Readable" similar to the "Ns_DriverKeepProc Keep", but that would affect the interface of all drivers.

-gustaf neumann
From: Stephen D. <sd...@gm...> - 2012-12-04 21:06:21
On Thu, Nov 29, 2012 at 6:51 PM, Gustaf Neumann <ne...@wu...> wrote:
>
> It turned out
> that the large queueing time came from requests from taipeh, which contained
> several 404 errors. The size of the 404 request is 727 bytes, and therefore
> under the writersize, which was configured as 1000. The delivery of an error
> message takes to this site more than a second. Funny enough, the delivery of
> the error message blocked the connection thread longer than the delivery of
> the image when it is above the writersize.
>
> I will reduce the writersize further, but still a slow delivery can even
> slow down the delivery of the headers, which happens still in the connection
> thread.

This shouldn't be the case for strings, or data sent from the fast path cache, such as a small file (a custom 404), as eventually those should work their way down to Ns_ConnWriteData, which will construct the headers if not already sent and pass them, along with the data payload, to writev(2). Linux should coalesce the buffers and send them in a single packet, if small enough.

I wonder if this is some kind of weird nsssl interaction.

(For things like sendfile without ssl we could use TCP_CORK to coalesce the headers with the body)
From: Stephen D. <sd...@gm...> - 2012-12-04 19:25:54
On Mon, Dec 3, 2012 at 10:38 AM, Gustaf Neumann <ne...@wu...> wrote:
>
> All changes are on bitbucket (nsssl and naviserver-connthreadqueue).

I found this nifty site the other day:

https://www.ssllabs.com/ssltest/analyze.html?d=next-scripting.org

It's highlighting a few things that need fixed in the nsssl module, including a couple of security bugs. Looks like relatively little code though.

Also, there's this:

https://insouciant.org/tech/ssl-performance-case-study/

which is a pretty good explanation of things from a performance point of view.

I haven't spent much time looking at SSL. Looks like there could be some big wins. For example, some of the stuff to do with certificate chains could probably be automated - the server could spit out an informative error to the log if things look poorly optimised.
From: Stephen D. <sd...@gm...> - 2012-12-04 19:06:52
On Tue, Dec 4, 2012 at 5:21 PM, Gustaf Neumann <ne...@wu...> wrote:
>
> * Only content sent via Ns_ConnWriteVChars has the chance to get
>   compressed.
>   ie. dynamic content with a text/* mime-type.

The idea here was you don't want to try and compress gifs and so on, and static content could be pre-compressed on disk - at runtime simply look for a *.gz version of the content. This could be cleaned up a bit by:

- having an extendable white-list of mime-types which should be compressed: text/*, application/javascript, application/xml etc. (a sketch follows at the end of this mail)
- shipping some code which searches for *.gz versions of static files

> * Similarly, range requests are not handled when the data is not sent
>   ReturnOpen to the writer queue.

The diagram shows Ns_ConnReturnData also calls ReturnRange, and hence the other leg of fastpath and all the main data sending routines should handle range requests.

> * there is quite some potential to simplify / orthogonalize the servers
>   infrastructure.
> * improving this structure has nothing to do with
>   naviserver-connthreadqueue, and should happen at some time in the main tip.

The writer thread was one of the last bits of code to land before things quietened down, and a lot of the stuff that got talked about didn't get implemented. One thing that was mentioned was having a call-back interface where you submit a function to the writer thread and it runs it. This would allow other kinds of requests to be served async.

One of the things we've been talking about with the connthread work is simplification. The current code, with its workarounds for stalls and managing thread counts, is very complicated. If it were simplified and genericised it could also be used for background writer threads, and SSL read-ahead threads (as in aolserver > 4.5). So, that's another +1 for keeping the conn threads simple.
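A minimal sketch of such a white-list check on the Tcl layer (illustrative only; the variable and proc names are made up, not shipped code):

    # Hypothetical white-list of compressible mime-types, as suggested above.
    set ::compressible {text/* application/javascript application/xml}

    proc compressible_p {mimetype} {
        # return 1 if the mime-type matches one of the white-list patterns
        foreach pattern $::compressible {
            if {[string match $pattern $mimetype]} {
                return 1
            }
        }
        return 0
    }

    # usage: decide whether a static file is worth gzipping
    if {[compressible_p [ns_guesstype $file]]} {
        # ... compress on the fly or look for a pre-built $file.gz ...
    }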