From: SourceForge.net <no...@so...> - 2006-08-31 14:06:25

Feature Requests item #1549952 was opened at 2006-08-31 16:06
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1549952&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Read/write configuration storage

Initial Comment:
Attached is a sample (bare-bones) implementation of a configuration management module, similar to the built-in ns_config but with read/write capabilities. Not all commands have been implemented.

The only command visible in the Tcl API is ns_conf. It accepts two sub-commands: get and put. So:

    ns_conf put my/section theparameter paramvalue1

will set theparameter to paramvalue1 in my/section, creating the section (and the parameter) if needed, while

    ns_conf get
    ns_conf get my/section
    ns_conf get my/section theparameter

return all sections, all parameters of a section, and the value of the given parameter, respectively.

There is no persistence at this point. This may/will be added later on.

To compile, unpack in the root of the server distribution, then change to the nsconf directory and invoke "make". Then add the nsconf.so module to the server config file.

Emphasis during development was on minimal locking for multiple readers. Writers are not fast; readers are pretty fast. By design, readers will contend mostly with one writer, hence the locking overhead is at an absolute minimum.

Comments are welcome.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1549952&group_id=130646

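A minimal usage sketch based only on the get/put sub-commands described above; the section names, parameter names and values are invented for illustration:

    # Create (or update) parameters in a section; the section is created on demand.
    ns_conf put myapp/limits maxupload 10000000
    ns_conf put myapp/limits maxconns  200

    # Read back: all sections, one section's parameters, or a single value.
    set sections [ns_conf get]
    set params   [ns_conf get myapp/limits]
    set maxup    [ns_conf get myapp/limits maxupload]
    ns_log notice "maxupload is currently $maxup"
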
From: SourceForge.net <no...@so...> - 2006-07-25 07:24:47

Bugs item #1527964 was opened at 2006-07-24 22:43
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1527964&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: Current
Status: Open
Resolution: None
Priority: 5
Submitted By: Stephen Deasey (sdeasey)
Assigned to: Nobody/Anonymous (nobody)
Summary: NsQueueConn failure for one queue affects all queues

Initial Comment:
A comment in nsd/driver.c states:

    /*
     * Hint: NsQueueConn may fail to queue a certain
     * socket to the designated connection queue.
     * In such case, ALL ready sockets will be put on
     * the waiting list until the next iteration,
     * regardless of which connection queue they are
     * to be queued.
     */

This may be a problem for quick error responses (Server Busy, Error, etc.), whose connection now needs to be kept open, consuming resources, when an empty queue exists which could service the request.

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-07-25 09:24

Message:
Logged In: YES
user_id=95086

It was I who wrote that comment when I was looking at that code some months ago, as this was not very clear. I recall talking to Vlad about it, but I can't recall what the outcome was!

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1527964&group_id=130646

From: SourceForge.net <no...@so...> - 2006-07-24 20:43:53

Bugs item #1527964 was opened at 2006-07-24 14:43
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1527964&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: Current
Status: Open
Resolution: None
Priority: 5
Submitted By: Stephen Deasey (sdeasey)
Assigned to: Nobody/Anonymous (nobody)
Summary: NsQueueConn failure for one queue affects all queues

Initial Comment:
A comment in nsd/driver.c states:

    /*
     * Hint: NsQueueConn may fail to queue a certain
     * socket to the designated connection queue.
     * In such case, ALL ready sockets will be put on
     * the waiting list until the next iteration,
     * regardless of which connection queue they are
     * to be queued.
     */

This may be a problem for quick error responses (Server Busy, Error, etc.), whose connection now needs to be kept open, consuming resources, when an empty queue exists which could service the request.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1527964&group_id=130646

From: SourceForge.net <no...@so...> - 2006-07-24 20:31:38

Bugs item #1527956 was opened at 2006-07-24 14:31
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1527956&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Stephen Deasey (sdeasey)
Assigned to: Nobody/Anonymous (nobody)
Summary: Timeout in DriverRecv?

Initial Comment:
From: http://sourceforge.net/mailarchive/forum.php?thread_id=23433606&forum_id=43966

The nssock driver does this:

    case DriverRecv:
        timeout.sec = sock->driver->recvwait;
        n = Ns_SockRecvBufs(sock->sock, bufs, nbufs, &timeout);
        break;

    case DriverSend:
        timeout.sec = sock->driver->sendwait;
        n = Ns_SockSendBufs(sock->sock, bufs, nbufs, &timeout);
        break;

...which is called by the driver thread to read the request, headers, etc. It's a non-blocking socket, so why the timeout?

Ns_SockRecvBufs looks like this:

    n = SockRecv(sock, bufs, nbufs);
    if (n < 0
        && ns_sockerrno == EWOULDBLOCK
        && Ns_SockTimedWait(sock, NS_SOCK_READ, timeoutPtr) == NS_OK) {
        n = SockRecv(sock, bufs, nbufs);
    }
    return n;

i.e. if on the first attempt to read from the socket the kernel returns EWOULDBLOCK, wait for 30 seconds for something to arrive.

Sockets only return EWOULDBLOCK if they are in non-blocking mode and there is nothing to read/write. It's in non-blocking mode because we want to multiplex all sockets efficiently in the driver thread. Ns_SockTimedWait calls poll() with a timeout; it doesn't return until data arrives or the timeout expires. So this can block the driver thread for up to 30 seconds.

Does this make any sense? It's hard to see how this would be triggered in practice. We only attempt to read from the socket if the poll loop indicates the socket is readable. However, the Linux man page for select() (which also applies to poll) does say:

    Under Linux, select() may report a socket file descriptor as "ready for
    reading", while nevertheless a subsequent read blocks. This could for
    example happen when data has arrived but upon examination has wrong
    checksum and is discarded. There may be other circumstances. Thus it may
    be safer to use O_NONBLOCK on sockets that should not block.

We do use O_NONBLOCK, but then defeat that with an extra 30 second poll() timeout...

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1527956&group_id=130646

From: SourceForge.net <no...@so...> - 2006-07-13 22:48:34

Feature Requests item #1522162 was opened at 2006-07-13 16:48
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1522162&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Stephen Deasey (sdeasey)
Assigned to: Stephen Deasey (sdeasey)
Summary: Sls: Socket Local Storage API

Initial Comment:
I'd like to add a version of the Ns_Cls (conn local storage) API with a different scope: the data would be stored with the socket, not the connection.

The difference is the lifetime of the data. Cls data is cleaned up at the end of every conn, that is, after logging. Sls data would only be cleaned up when a connected TCP socket is closed.

Driver modules already have an 'arg' pointer in the Ns_Sock structure on which they can hang data, but there's only one pointer available.

The attached patch implements the Sls API, and also a keyed version with which C code can interoperate with Tcl code. There's a Tcl API to the keyed interface for this.

The only unusual feature is that the code tries to allocate only enough space for the slots which are allocated. The Cls code hangs a 16-element array off the conn structure; the Sls code sizes the Sock structure itself, depending on the slots allocated. This makes the Sock structures more cache friendly as they're looped over in the driver code.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1522162&group_id=130646

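A rough sketch of how the keyed Tcl interface might be used; the command name (ns_sls), its sub-commands, and the handler below are assumptions for illustration only, since the actual names and syntax are defined by the attached patch:

    # Count requests arriving on one keep-alive TCP connection: the counter
    # lives in socket-local storage, so it survives across conns on the same
    # socket and is discarded only when the socket closes.
    proc ::conn_counter {} {
        set n 0
        catch {set n [ns_sls get requests]}   ;# key may not exist yet on this socket
        incr n
        ns_sls set requests $n
        ns_return 200 text/plain "request #$n on this TCP connection"
    }
    ns_register_proc GET /counter ::conn_counter
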
From: SourceForge.net <no...@so...> - 2006-07-11 15:51:54

Feature Requests item #1520624 was opened at 2006-07-11 15:50
Message generated for change (Settings changed) made by eide
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1520624&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: Next Release (example)
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Bernd Eidenschink (eide)
Assigned to: Nobody/Anonymous (nobody)
Summary: Provide access to default configuration

Initial Comment:
Make all configuration options visible (log file) and accessible (ns_config), as well as their default values.

While it is possible to print out the configuration after startup (Example 1 below), only the values set up in the config file are visible. When starting the server in a way that logs 'debug' level messages (Example 2 below), a lot of default values/ranges are logged, but only for those where Ns_Log('Debug') is set up. Some are still missing (along with the defaults).

Is it possible to not only set variables internally during startup but also build the ns_set needed for the ns_config command in parallel?

Example 1 ----- Print config -----

    set out [list]
    foreach section [lsort [ns_configsections]] {
        lappend out [string repeat - 80]
        lappend out "[ns_set name $section]"
        lappend out [string repeat - 80]
        array set section_values [ns_set array $section]
        foreach section_key [lsort [array names section_values]] {
            lappend out "$section_key: [set section_values([set section_key])]"
        }
        array unset section_values
    }
    puts [join $out "\n"]

Example 2 ----- Debug log output -----

    ...
    config: ns/parameters:listenbacklog value=(null) min=0 max=2147483647 default=32 (int)
    config: ns/parameters:dnscache value=(null) default=true (bool)
    config: ns/parameters:dnscachemaxsize value=(null) min=0 max=2147483647 default=512000 (int)
    config: ns/parameters:dnswaittimeout value=(null) min=0 max=2147483647 default=5 (int)
    config: ns/parameters:dnscachetimeout value=(null) min=0 max=2147483647 default=60 (int)
    ...

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1520624&group_id=130646

From: SourceForge.net <no...@so...> - 2006-07-11 15:50:20

Feature Requests item #1520624 was opened at 2006-07-11 15:50
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1520624&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: Next Release (example)
Status: Open
Resolution: None
Priority: 5
Submitted By: Bernd Eidenschink (eide)
Assigned to: Nobody/Anonymous (nobody)
Summary: Provide access to default configuration

Initial Comment:
Make all configuration options visible (log file) and accessible (ns_config), as well as their default values.

While it is possible to print out the configuration after startup (Example 1 below), only the values set up in the config file are visible. When starting the server in a way that logs 'debug' level messages (Example 2 below), a lot of default values/ranges are logged, but only for those where Ns_Log('Debug') is set up. Some are still missing (along with the defaults).

Is it possible to not only set variables internally during startup but also build the ns_set needed for the ns_config command in parallel?

Example 1 ----- Print config -----

    set out [list]
    foreach section [lsort [ns_configsections]] {
        lappend out [string repeat - 80]
        lappend out "[ns_set name $section]"
        lappend out [string repeat - 80]
        array set section_values [ns_set array $section]
        foreach section_key [lsort [array names section_values]] {
            lappend out "$section_key: [set section_values([set section_key])]"
        }
        array unset section_values
    }
    puts [join $out "\n"]

Example 2 ----- Debug log output -----

    ...
    config: ns/parameters:listenbacklog value=(null) min=0 max=2147483647 default=32 (int)
    config: ns/parameters:dnscache value=(null) default=true (bool)
    config: ns/parameters:dnscachemaxsize value=(null) min=0 max=2147483647 default=512000 (int)
    config: ns/parameters:dnswaittimeout value=(null) min=0 max=2147483647 default=5 (int)
    config: ns/parameters:dnscachetimeout value=(null) min=0 max=2147483647 default=60 (int)
    ...

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1520624&group_id=130646

From: SourceForge.net <no...@so...> - 2006-07-11 14:31:27

Feature Requests item #1520585 was opened at 2006-07-11 14:31
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1520585&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: Next Release (example)
Status: Open
Resolution: None
Priority: 5
Submitted By: Bernd Eidenschink (eide)
Assigned to: Nobody/Anonymous (nobody)
Summary: Provide access to default configuration

Initial Comment:
Make all configuration options visible (log file) and accessible (ns_config), as well as their default values.

While it is possible to print out the configuration after startup (Example 1 below), only the values set up in the config file are visible. When starting the server in a way that logs 'debug' level messages (Example 2 below), a lot of default values/ranges are logged, but only for those where Ns_Log('Debug') is set up. Some are still missing (along with the defaults).

Is it possible to not only set variables internally during startup but also build the ns_set needed for the ns_config command in parallel?

Example 1 ----- Print config -----

    set out [list]
    foreach section [lsort [ns_configsections]] {
        lappend out [string repeat - 80]
        lappend out "[ns_set name $section]"
        lappend out [string repeat - 80]
        array set section_values [ns_set array $section]
        foreach section_key [lsort [array names section_values]] {
            lappend out "$section_key: [set section_values([set section_key])]"
        }
        array unset section_values
    }
    puts [join $out "\n"]

Example 2 ----- Debug log output -----

    ...
    config: ns/parameters:listenbacklog value=(null) min=0 max=2147483647 default=32 (int)
    config: ns/parameters:dnscache value=(null) default=true (bool)
    config: ns/parameters:dnscachemaxsize value=(null) min=0 max=2147483647 default=512000 (int)
    config: ns/parameters:dnswaittimeout value=(null) min=0 max=2147483647 default=5 (int)
    config: ns/parameters:dnscachetimeout value=(null) min=0 max=2147483647 default=60 (int)
    ...

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1520585&group_id=130646

From: SourceForge.net <no...@so...> - 2006-02-09 06:42:15

Feature Requests item #1403933 was opened at 2006-01-12 15:45
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Writer Thread

Initial Comment:
Would it be useful to have a special writer thread that sends multiple files in async mode to multiple clients? For example, if I serve big ISO or movie files and have many connections, currently they all hold a conn thread for a long time until the whole file is sent. Instead we could mark the conn to be handled by the writer thread and release the conn thread for other requests; in the meantime the writer thread sends multiple FDs to clients in one big loop.

Currently it is possible to simply change ConnSend in connio.c to submit the open descriptor to the writer queue and return, marking the connection so the usual NsClose will not close the actual connection socket. The writer thread would then be a simple loop reading small chunks from every file and sending them to the corresponding sockets. Does it make sense?

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-09 07:42

Message:
Logged In: YES
user_id=95086

If I understand that correctly, chunked content is better done, i.e. the calculation of what range to return, in the connection thread. Once the conn thread has calculated what range of the file should be returned, it can pass the work to the writer thread. I'm saying this out of the blue, not looking at the sources, but it seems feasible to me.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-02-08 21:46

Message:
Logged In: YES
user_id=184124

Just sending a couple of kilobytes of return data back to the client is easier and faster the usual way instead of using the writer thread; its primary purpose is to serve large content without holding conn threads. Adding the ability to submit return content or a file using ns_conn spool is easy; I may extend WriterSock to have a pointer to a buffer of data and use this buffer instead of a file. The bigger problem I see in my implementation is that chunked requests will still hold conn threads, and I am not sure whether to duplicate the chunking logic in WriterThread or not.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-02-08 19:48

Message:
Logged In: YES
user_id=184124

I'll see what I can do, and yes, if there are multiple writer threads the requests are distributed round-robin.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-08 18:53

Message:
Logged In: YES
user_id=95086

Why don't you make an [ns_conn spool] command which will gracefully close the connection, passing the socket to the writer thread(s)? This way everybody can use the writer thread from Tcl with ease... Oh yes: if there is more than one writer thread, how would you distribute? Do you have one queue for all writer threads?

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 17:12

Message:
Logged In: YES
user_id=184124

To play with and test this I can add it to CVS but have it disabled by default, so nobody will be affected, but it will allow real testing and improvements. I do not think we should cover too much with this; just efficient downloading of huge files will be enough.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 15:46

Message:
Logged In: YES
user_id=184124

I did some preliminary coding and it works; the speed of downloading 2 simultaneous huge files is on average the same as with the current model, though the tests are not very useful because I tested on one computer only. I am attaching the 2 files that I modified, but more thought and work should be done. Currently it supports only big files not returned in chunked mode by fastpath, but I think this is more than enough; all other dynamic content will go as usual. The WriterThread is at the end of driver.c, and connio.c was modified in the ConnSend function only.

Yes, logging and running the atclose procs is an open question; I would run them in the conn thread. The parts between #ifdef/#endif in WriterThread are experiments and should be removed; they slowed down the server significantly.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646

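A minimal sketch of how the proposed [ns_conn spool] command might look from Tcl, purely illustrative: the sub-command is only being discussed above, and the handler name, URL and file path are invented:

    # Hypothetical handler for large downloads: ask that the response be
    # handed off to the writer thread(s) so this conn thread is released
    # instead of being held until the whole file is sent.
    proc ::download_iso {} {
        ns_conn spool                      ;# proposed command, not yet in the tree
        ns_returnfile 200 application/octet-stream /data/images/big.iso
    }
    ns_register_proc GET /download/big.iso ::download_iso
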
From: SourceForge.net <no...@so...> - 2006-02-08 20:47:04

Feature Requests item #1403933 was opened at 2006-01-12 14:45
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Writer Thread

Initial Comment:
Would it be useful to have a special writer thread that sends multiple files in async mode to multiple clients? For example, if I serve big ISO or movie files and have many connections, currently they all hold a conn thread for a long time until the whole file is sent. Instead we could mark the conn to be handled by the writer thread and release the conn thread for other requests; in the meantime the writer thread sends multiple FDs to clients in one big loop.

Currently it is possible to simply change ConnSend in connio.c to submit the open descriptor to the writer queue and return, marking the connection so the usual NsClose will not close the actual connection socket. The writer thread would then be a simple loop reading small chunks from every file and sending them to the corresponding sockets. Does it make sense?

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)
Date: 2006-02-08 20:46

Message:
Logged In: YES
user_id=184124

Just sending a couple of kilobytes of return data back to the client is easier and faster the usual way instead of using the writer thread; its primary purpose is to serve large content without holding conn threads. Adding the ability to submit return content or a file using ns_conn spool is easy; I may extend WriterSock to have a pointer to a buffer of data and use this buffer instead of a file. The bigger problem I see in my implementation is that chunked requests will still hold conn threads, and I am not sure whether to duplicate the chunking logic in WriterThread or not.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-02-08 18:48

Message:
Logged In: YES
user_id=184124

I'll see what I can do, and yes, if there are multiple writer threads the requests are distributed round-robin.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-08 17:53

Message:
Logged In: YES
user_id=95086

Why don't you make an [ns_conn spool] command which will gracefully close the connection, passing the socket to the writer thread(s)? This way everybody can use the writer thread from Tcl with ease... Oh yes: if there is more than one writer thread, how would you distribute? Do you have one queue for all writer threads?

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 16:12

Message:
Logged In: YES
user_id=184124

To play with and test this I can add it to CVS but have it disabled by default, so nobody will be affected, but it will allow real testing and improvements. I do not think we should cover too much with this; just efficient downloading of huge files will be enough.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 14:46

Message:
Logged In: YES
user_id=184124

I did some preliminary coding and it works; the speed of downloading 2 simultaneous huge files is on average the same as with the current model, though the tests are not very useful because I tested on one computer only. I am attaching the 2 files that I modified, but more thought and work should be done. Currently it supports only big files not returned in chunked mode by fastpath, but I think this is more than enough; all other dynamic content will go as usual. The WriterThread is at the end of driver.c, and connio.c was modified in the ConnSend function only.

Yes, logging and running the atclose procs is an open question; I would run them in the conn thread. The parts between #ifdef/#endif in WriterThread are experiments and should be removed; they slowed down the server significantly.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646

From: SourceForge.net <no...@so...> - 2006-02-08 18:48:08

Feature Requests item #1403933 was opened at 2006-01-12 14:45
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Writer Thread

Initial Comment:
Would it be useful to have a special writer thread that sends multiple files in async mode to multiple clients? For example, if I serve big ISO or movie files and have many connections, currently they all hold a conn thread for a long time until the whole file is sent. Instead we could mark the conn to be handled by the writer thread and release the conn thread for other requests; in the meantime the writer thread sends multiple FDs to clients in one big loop.

Currently it is possible to simply change ConnSend in connio.c to submit the open descriptor to the writer queue and return, marking the connection so the usual NsClose will not close the actual connection socket. The writer thread would then be a simple loop reading small chunks from every file and sending them to the corresponding sockets. Does it make sense?

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)
Date: 2006-02-08 18:48

Message:
Logged In: YES
user_id=184124

I'll see what I can do, and yes, if there are multiple writer threads the requests are distributed round-robin.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-08 17:53

Message:
Logged In: YES
user_id=95086

Why don't you make an [ns_conn spool] command which will gracefully close the connection, passing the socket to the writer thread(s)? This way everybody can use the writer thread from Tcl with ease... Oh yes: if there is more than one writer thread, how would you distribute? Do you have one queue for all writer threads?

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 16:12

Message:
Logged In: YES
user_id=184124

To play with and test this I can add it to CVS but have it disabled by default, so nobody will be affected, but it will allow real testing and improvements. I do not think we should cover too much with this; just efficient downloading of huge files will be enough.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 14:46

Message:
Logged In: YES
user_id=184124

I did some preliminary coding and it works; the speed of downloading 2 simultaneous huge files is on average the same as with the current model, though the tests are not very useful because I tested on one computer only. I am attaching the 2 files that I modified, but more thought and work should be done. Currently it supports only big files not returned in chunked mode by fastpath, but I think this is more than enough; all other dynamic content will go as usual. The WriterThread is at the end of driver.c, and connio.c was modified in the ConnSend function only.

Yes, logging and running the atclose procs is an open question; I would run them in the conn thread. The parts between #ifdef/#endif in WriterThread are experiments and should be removed; they slowed down the server significantly.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646

From: SourceForge.net <no...@so...> - 2006-02-08 17:53:54

Feature Requests item #1403933 was opened at 2006-01-12 15:45
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Writer Thread

Initial Comment:
Would it be useful to have a special writer thread that sends multiple files in async mode to multiple clients? For example, if I serve big ISO or movie files and have many connections, currently they all hold a conn thread for a long time until the whole file is sent. Instead we could mark the conn to be handled by the writer thread and release the conn thread for other requests; in the meantime the writer thread sends multiple FDs to clients in one big loop.

Currently it is possible to simply change ConnSend in connio.c to submit the open descriptor to the writer queue and return, marking the connection so the usual NsClose will not close the actual connection socket. The writer thread would then be a simple loop reading small chunks from every file and sending them to the corresponding sockets. Does it make sense?

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-08 18:53

Message:
Logged In: YES
user_id=95086

Why don't you make an [ns_conn spool] command which will gracefully close the connection, passing the socket to the writer thread(s)? This way everybody can use the writer thread from Tcl with ease... Oh yes: if there is more than one writer thread, how would you distribute? Do you have one queue for all writer threads?

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 17:12

Message:
Logged In: YES
user_id=184124

To play with and test this I can add it to CVS but have it disabled by default, so nobody will be affected, but it will allow real testing and improvements. I do not think we should cover too much with this; just efficient downloading of huge files will be enough.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2006-01-12 15:46

Message:
Logged In: YES
user_id=184124

I did some preliminary coding and it works; the speed of downloading 2 simultaneous huge files is on average the same as with the current model, though the tests are not very useful because I tested on one computer only. I am attaching the 2 files that I modified, but more thought and work should be done. Currently it supports only big files not returned in chunked mode by fastpath, but I think this is more than enough; all other dynamic content will go as usual. The WriterThread is at the end of driver.c, and connio.c was modified in the ConnSend function only.

Yes, logging and running the atclose procs is an open question; I would run them in the conn thread. The parts between #ifdef/#endif in WriterThread are experiments and should be removed; they slowed down the server significantly.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646

From: SourceForge.net <no...@so...> - 2006-02-08 15:48:06

Feature Requests item #1427620 was opened at 2006-02-08 16:22
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1427620&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Better ns_cache statistics

Initial Comment:
(Posted on behalf of Gustaf Neumann by vasiljevic)

Determining a good cache size and useful time-out values is a non-trivial task. ns_cache_stats already returns some useful statistics, but I would say at least two important figures are missing, which are subsumed by the number of flushes. Actually, there are at least 3 kinds of flushes:

a) expires (entry is too old)
b) prunes (entry thrown out of the cache due to space competition)
c) intentional flushes (flush command, deletion of an entry, etc.)

Currently, nscache lumps a+b+c together under flushes. It would be nice to obtain separate values for these kinds of flushes, to get some idea whether or not a cache behaves as expected, or whether it should be increased or decreased.

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-08 16:48

Message:
Logged In: YES
user_id=95086

Done in CVS head. ns_cache_stats now returns the following data:

    maxsize X size X entries X flushed X hits X missed X hitrate X expired X pruned 0

New elements are "pruned" and "expired": "pruned" counts the number of entries removed from the cache because of the cache-size constraint; "expired" counts the number of entries removed from the cache because of the time-to-live constraint.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1427620&group_id=130646

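A small sketch of how the new counters might be interpreted, assuming ns_cache_stats returns the flat key/value list shown above and accepts a cache name; the cache name and the conclusions in the comments are illustrative:

    # Break total flushes down into size pressure vs. ttl expiry for one cache.
    array set stats [ns_cache_stats mycache]
    if {$stats(flushed) > 0} {
        set pruned_pct  [expr {100.0 * $stats(pruned)  / $stats(flushed)}]
        set expired_pct [expr {100.0 * $stats(expired) / $stats(flushed)}]
        ns_log notice "mycache: ${pruned_pct}% of flushes were prunes,\
                ${expired_pct}% were ttl expiries"
        # Many prunes suggest the cache is too small; many expiries suggest
        # the ttl, not the size, is the limiting factor.
    }
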
From: SourceForge.net <no...@so...> - 2006-02-08 15:22:23

Feature Requests item #1427620 was opened at 2006-02-08 16:22
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1427620&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Better ns_cache statistics

Initial Comment:
(Posted on behalf of Gustaf Neumann by vasiljevic)

Determining a good cache size and useful time-out values is a non-trivial task. ns_cache_stats already returns some useful statistics, but I would say at least two important figures are missing, which are subsumed by the number of flushes. Actually, there are at least 3 kinds of flushes:

a) expires (entry is too old)
b) prunes (entry thrown out of the cache due to space competition)
c) intentional flushes (flush command, deletion of an entry, etc.)

Currently, nscache lumps a+b+c together under flushes. It would be nice to obtain separate values for these kinds of flushes, to get some idea whether or not a cache behaves as expected, or whether it should be increased or decreased.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1427620&group_id=130646

From: SourceForge.net <no...@so...> - 2006-02-06 10:23:10

Feature Requests item #1166553 was opened at 2005-03-19 17:35
Message generated for change (Settings changed) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1166553&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Add ttrace module

Initial Comment:
The ttrace module is written in Tcl, is part of the threading extension, and is available on SF:

http://sourceforge.net/projects/ttrace

The basic idea is to use Tcl trace capabilities and replicate the state of the Tcl interpreter from one thread to other threads in a lazy mode. So, instead of loading everything in one interp and then using a clumsy introspective script to build the initialization script for other threads, the ttrace package loads all procedure and namespace (et al.) definitions and stuffs them into a shared array. By overloading the Tcl [unknown] command, resources are brought to life as needed. This results in a drastic memory footprint reduction and much faster thread creation.

I have had it running for about a year without any noticeable speed penalty. One of the OpenACS users also tried it and is very pleased. I think we can add this as an alternate interp initialization scheme.

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-06 11:23

Message:
Logged In: YES
user_id=95086

Added into current head (after the 4.99.1 release).

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2005-03-26 19:34

Message:
Logged In: YES
user_id=95086

Yep. It is a little bit difficult to follow. I think I will have to add some more clarifying comments. But the basic idea is trivial: set up Tcl-level traces on various commands (load, proc, namespace, ...) and capture the work in nsv arrays. Then synthesize a small Tcl script which, among other things, overloads the Tcl [unknown] command. The script is passed to each interpreter. During runtime, [unknown] is responsible for pulling things out of the nsv arrays and loading them into the running interp. That's it. OK, the implementation is not that simple, but this is life. It will be clearer once I add some comments there...

Bottom line: it works perfectly for me and for some other guys who dared to test it :). Overall, the reduction in memory was significant (2-3 times) and the thread init time dropped to a couple of msecs.

----------------------------------------------------------------------

Comment By: Stephen Deasey (sdeasey)
Date: 2005-03-26 19:26

Message:
Logged In: YES
user_id=87254

Good idea. I remember you mentioning rewriting it in C some time ago; is that still the plan?

I took a quick look at the ttrace module in SF CVS and had a little trouble following it. I guess that's just the nature of the problem; there seem to be a lot of fiddly things to consider. It would be nice to make it more obvious what steps are being taken, but nothing comes to mind immediately...

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)
Date: 2005-03-19 21:30

Message:
Logged In: YES
user_id=184124

Could be a good option, yes.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1166553&group_id=130646

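A toy sketch of the lazy-loading idea described above, assuming proc definitions have been captured into a shared nsv array keyed by command name; the array name, the stored proc, and the helper are invented, and the real ttrace package is considerably more involved:

    # Captured at startup (e.g. by a trace on [proc]): store each proc's
    # definition script in a shared nsv array.
    nsv_set ttrace_procs ::myproc {proc ::myproc {x} { return [expr {$x * 2}] }}

    # Overloaded [unknown]: pull the definition into this interp on first
    # use, then retry the call; otherwise fall back to the original unknown.
    rename ::unknown ::ttrace_unknown_orig
    proc ::unknown {cmd args} {
        if {[nsv_exists ttrace_procs $cmd]} {
            eval [nsv_get ttrace_procs $cmd]
            return [uplevel 1 [linsert $args 0 $cmd]]
        }
        return [uplevel 1 [linsert $args 0 ::ttrace_unknown_orig]]
    }
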
From: SourceForge.net <no...@so...> - 2006-02-06 09:01:46

Feature Requests item #1424987 was opened at 2006-02-06 08:05
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1424987&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: Next Release (example)
Status: Open
Resolution: None
Priority: 5
Submitted By: Bernd Eidenschink (eide)
Assigned to: Nobody/Anonymous (nobody)
Summary: HTTP/1.1 compliance

Initial Comment:
There have been many requests in the past for HTTP/1.1 compliance, and as many answers like "What exactly do you want - the server already implements a lot of 1.1".

Stephen set up this fine page on the wiki, an excellent starting point:

http://naviserver.sourceforge.net/wiki/index.php/HTTP_1.1

I would vote to add the status of HTTP/1.1 compliance to the unofficial "5.0 roadmap". (Just to answer the question, not to implement it; that is of course another story.)

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-02-06 10:01

Message:
Logged In: YES
user_id=95086

Good point. Yes, being the web server, we ought to do this.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1424987&group_id=130646

From: SourceForge.net <no...@so...> - 2006-02-06 07:05:58

Feature Requests item #1424987 was opened at 2006-02-06 07:05
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1424987&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: Next Release (example)
Status: Open
Resolution: None
Priority: 5
Submitted By: Bernd Eidenschink (eide)
Assigned to: Nobody/Anonymous (nobody)
Summary: HTTP/1.1 compliance

Initial Comment:
There have been many requests in the past for HTTP/1.1 compliance, and as many answers like "What exactly do you want - the server already implements a lot of 1.1".

Stephen set up this fine page on the wiki, an excellent starting point:

http://naviserver.sourceforge.net/wiki/index.php/HTTP_1.1

I would vote to add the status of HTTP/1.1 compliance to the unofficial "5.0 roadmap". (Just to answer the question, not to implement it; that is of course another story.)

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1424987&group_id=130646

From: SourceForge.net <no...@so...> - 2006-01-24 13:25:55

Feature Requests item #1413620 was opened at 2006-01-24 12:26
Message generated for change (Settings changed) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1413620&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
>Status: Closed
>Resolution: Accepted
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Add microseconds resolution for socket timeouts

Initial Comment:
The ns_sockopen and ns_sockselect commands accept a "-timeout" optional argument. This is in certain cases too coarse, hence the idea is to modify the syntax of the value to:

    -timeout seconds?:microseconds?

so one can ns_sockselect for 0:2000 (that is, 2 milliseconds). This is the same format as already used for the ns_job command and is backward compatible with the existing code, hence no scripts will break.

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-01-24 14:25

Message:
Logged In: YES
user_id=95086

Added to CVS.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1413620&group_id=130646

From: SourceForge.net <no...@so...> - 2006-01-24 11:26:36

Feature Requests item #1413620 was opened at 2006-01-24 12:26
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1413620&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Add microseconds resolution for socket timeouts

Initial Comment:
The ns_sockopen and ns_sockselect commands accept a "-timeout" optional argument. This is in certain cases too coarse, hence the idea is to modify the syntax of the value to:

    -timeout seconds?:microseconds?

so one can ns_sockselect for 0:2000 (that is, 2 milliseconds). This is the same format as already used for the ns_job command and is backward compatible with the existing code, hence no scripts will break.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1413620&group_id=130646

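A short usage sketch of the extended -timeout syntax described above; the host, port and timeout values are illustrative:

    # Open a client socket with a 1.5 second connect timeout
    # (1 second + 500000 microseconds), using the seconds:microseconds form.
    lassign [ns_sockopen -timeout 1:500000 www.example.com 80] rfd wfd

    # Wait at most 2 milliseconds for the read channel to become readable.
    lassign [ns_sockselect -timeout 0:2000 [list $rfd] {} {}] readable writable except
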
From: SourceForge.net <no...@so...> - 2006-01-24 11:21:24

Feature Requests item #1404901 was opened at 2006-01-13 16:44
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1404901&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Improve handling of registered script args

Initial Comment:
The handling of the optional script args passed to various registered procedures should be unified. If a registered Tcl callback needs to be invoked with a variable number of arguments, the implementation should allow registration of a variable number of args. During the invocation of the callback, optional arguments must be appended AFTER any fixed arguments as defined by the callback definition.

For example:

    ns_register_filter reason /url myScript $arg1 $arg2

should require myScript to be coded as:

    proc myScript {reason args} {...}

in which case the procedure will be invoked as:

    myScript $reason $arg1 $arg2

Generally speaking, any command registering a script should always allow for a variable number of optional args to be appended.

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-01-24 12:21

Message:
Logged In: YES
user_id=95086

Added into CVS.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1404901&group_id=130646

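A small sketch following the convention proposed in this item: the filter proc takes its fixed argument first and collects any registration-time extras via "args"; the filter stage, URL pattern and extra values are illustrative:

    # Fixed argument (the reason/why) first, then whatever was given at
    # registration time, collected into "args".
    proc ::audit_filter {why args} {
        ns_log notice "filter ($why) on [ns_conn url], extra args: $args"
        return filter_ok
    }
    # "audit" and 42 are appended to every invocation after the fixed argument.
    ns_register_filter postauth GET /admin/* ::audit_filter audit 42
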
From: SourceForge.net <no...@so...> - 2006-01-16 08:13:42

Bugs item #1407112 was opened at 2006-01-16 08:11
Message generated for change (Comment added) made by eide
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1407112&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Bernd Eidenschink (eide)
Assigned to: Nobody/Anonymous (nobody)
Summary: ns_base64encode adds unwanted extra newline

Initial Comment:
ns_base64encode breaks lines after 60 (encoded) characters by appending a newline character. This can lead to an extra, unwanted newline at the end of the encoding.

Example:

    set str "The short red fox ran quickly through the green field "
    append str "and jumped over the tall brown bear\n"
    ns_base64encode $str

----------------------------------------------------------------------

>Comment By: Bernd Eidenschink (eide)
Date: 2006-01-16 08:13

Message:
Logged In: YES
user_id=197910

Fixed in HEAD.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1407112&group_id=130646

From: SourceForge.net <no...@so...> - 2006-01-16 08:11:25

Bugs item #1407112 was opened at 2006-01-16 08:11
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1407112&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Bernd Eidenschink (eide)
Assigned to: Nobody/Anonymous (nobody)
Summary: ns_base64encode adds unwanted extra newline

Initial Comment:
ns_base64encode breaks lines after 60 (encoded) characters by appending a newline character. This can lead to an extra, unwanted newline at the end of the encoding.

Example:

    set str "The short red fox ran quickly through the green field "
    append str "and jumped over the tall brown bear\n"
    ns_base64encode $str

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1407112&group_id=130646

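For builds without the fix mentioned above, a caller-side workaround might look like this sketch; the variable names are illustrative:

    # ns_base64encode wraps its output every 60 characters; strip the line
    # breaks when the encoded value must stay on a single line (e.g. in an
    # HTTP header), which also removes the trailing newline described above.
    set encoded [ns_base64encode $str]
    set oneline [string map [list \n ""] $encoded]
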
From: SourceForge.net <no...@so...> - 2006-01-14 17:47:30

Bugs item #1405988 was opened at 2006-01-14 18:37
Message generated for change (Settings changed) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1405988&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
>Status: Closed
Resolution: Fixed
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: ns_base64encode fails on embedded nulls

Initial Comment:

    lexxsrv:nscp 8> ns_base64encode test\0
    dGVzdMCA
    lexxsrv:nscp 9> base64::encode test\0
    dGVzdAA=

The built-in ns_base64encode seems to have problems with strings containing embedded null characters. This works though:

    lexxsrv:nscp 4> ns_base64encode test
    dGVzdA==
    lexxsrv:nscp 5> base64::encode test
    dGVzdA==

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-01-14 18:47

Message:
Logged In: YES
user_id=95086

Fixed in HEAD.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1405988&group_id=130646

From: SourceForge.net <no...@so...> - 2006-01-14 17:47:05

Bugs item #1405988 was opened at 2006-01-14 18:37
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1405988&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

>Category: Tcl-API
Group: None
Status: Open
>Resolution: Fixed
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
>Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: ns_base64encode fails on embedded nulls

Initial Comment:

    lexxsrv:nscp 8> ns_base64encode test\0
    dGVzdMCA
    lexxsrv:nscp 9> base64::encode test\0
    dGVzdAA=

The built-in ns_base64encode seems to have problems with strings containing embedded null characters. This works though:

    lexxsrv:nscp 4> ns_base64encode test
    dGVzdA==
    lexxsrv:nscp 5> base64::encode test
    dGVzdA==

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)
Date: 2006-01-14 18:47

Message:
Logged In: YES
user_id=95086

Fixed in HEAD.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1405988&group_id=130646

From: SourceForge.net <no...@so...> - 2006-01-14 17:37:41

Bugs item #1405988 was opened at 2006-01-14 18:37
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1405988&group_id=130646
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Nobody/Anonymous (nobody)
Summary: ns_base64encode fails on embedded nulls

Initial Comment:

    lexxsrv:nscp 8> ns_base64encode test\0
    dGVzdMCA
    lexxsrv:nscp 9> base64::encode test\0
    dGVzdAA=

The built-in ns_base64encode seems to have problems with strings containing embedded null characters. This works though:

    lexxsrv:nscp 4> ns_base64encode test
    dGVzdA==
    lexxsrv:nscp 5> base64::encode test
    dGVzdA==

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1405988&group_id=130646
