From: SourceForge.net <no...@so...> - 2006-01-13 15:44:04
Feature Requests item #1404901, was opened at 2006-01-13 16:44
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1404901&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Zoran Vasiljevic (vasiljevic)
Summary: Improve handling of registered script args

Initial Comment:
The handling of the optional script args passed to various registered
procedures should be unified. If a registered Tcl callback needs to be
invoked with a variable number of arguments, the implementation should
allow registration of a variable number of args. During the invocation
of the callback, optional arguments must be appended AFTER any fixed
arguments as defined by the callback definition. For example:

    ns_register_filter reason /url myScript $arg1 $arg2

should require myScript to be coded as:

    proc myScript {reason args} {...}

in which case the procedure will be invoked as:

    myScript $reason $arg1 $arg2

Generally speaking, any command registering a script should always allow
for a variable number of optional args to be appended.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1404901&group_id=130646
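A minimal sketch of how the proposed convention would look from the Tcl
side, mirroring the call shape quoted in the request (the filter reason,
URL pattern, and proc are illustrative, and the exact argument order of
ns_register_filter is as sketched above, not confirmed syntax):

    # Register a filter with two extra script args; under the proposed
    # convention they are appended after the fixed "reason" argument.
    ns_register_filter postauth /download/* logDownload "mirror" 1

    proc logDownload {reason args} {
        # reason - fixed argument supplied by the server
        # args   - the optional args given at registration time
        ns_log Notice "filter $reason: url=[ns_conn url] extra=$args"
        return filter_ok
    }
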
From: SourceForge.net <no...@so...> - 2006-01-12 16:12:04
Feature Requests item #1403933, was opened at 2006-01-12 14:45
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Writer Thread

Initial Comment:
Would it be useful to have a special writer thread that sends multiple
files in async mode to multiple clients? For example, if I serve big ISO
or movie files and have many connections, currently they all use a conn
thread for a long time until the whole file is sent. Instead we can mark
the conn to be used in the writer thread and release the conn thread for
other requests; in the meantime the writer thread will send multiple FDs
to clients in one big loop. Currently it is possible to simply change
ConnSend in connio.c to submit the open descriptor to the writer queue
and return, marking the connection so the usual NsClose will not close
the actual connection socket. The writer thread is then a simple loop
reading small chunks from every file and sending them to the
corresponding socket. Does it make sense?

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)  Date: 2006-01-12 16:12
Logged In: YES  user_id=184124

To play with and test this I can add it to CVS but have it disabled by
default, so nobody will be affected but it will allow real testing and
improvements. I do not think we should cover too much with this;
effective downloading of huge files will be enough.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2006-01-12 14:46
Logged In: YES  user_id=184124

I did some preliminary coding and it works; the speed of downloading 2
simultaneous huge files is on average the same as with the current
model. The tests are not very useful because I tested on one computer
only. I am attaching the 2 files that I modified, but more thought and
work should be done. Currently it supports only big files not returned
in chunked mode by fastpath, but I think this is more than enough; all
other dynamic content will go as usual. The WriterThread is at the end
of driver.c, and connio.c was modified in the ConnSend function only.
Yes, the logging and running of atclose procs is the open question; I
would run them in the conn thread. The parts between #ifdef/#endif in
WriterThread are experiments and should be removed; they slowed down the
server significantly.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646
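An illustrative sketch (not the attached patch) of the writer-thread
loop described above: one thread makes repeated passes over all queued
downloads, sending a small chunk of each file per pass. WriterSock,
writerQueue, lock, and cond are hypothetical names, not NaviServer API:

    #include <pthread.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    typedef struct WriterSock {
        struct WriterSock *next;
        int    fd;         /* open file being streamed */
        int    sock;       /* client connection socket */
        off_t  remaining;  /* bytes left to send       */
    } WriterSock;

    static WriterSock     *writerQueue;  /* ConnSend() prepends here */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    static void *
    WriterThread(void *arg)
    {
        char buf[8192];

        for (;;) {
            WriterSock *wsPtr;

            pthread_mutex_lock(&lock);
            while (writerQueue == NULL) {
                pthread_cond_wait(&cond, &lock);   /* nothing queued */
            }
            /* The lock is held for the whole pass to keep the sketch
             * simple; a real implementation would detach the list. */
            for (wsPtr = writerQueue; wsPtr != NULL; wsPtr = wsPtr->next) {
                ssize_t n = read(wsPtr->fd, buf, sizeof(buf));

                if (n > 0 && send(wsPtr->sock, buf, (size_t)n, 0) == n) {
                    wsPtr->remaining -= n;
                } else {
                    wsPtr->remaining = 0;  /* EOF or error: finished */
                }
                /* Finished entries (remaining == 0) would be unlinked
                 * here and their descriptors closed. */
            }
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }
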
From: SourceForge.net <no...@so...> - 2006-01-12 14:46:48
Feature Requests item #1403933, was opened at 2006-01-12 14:45
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Writer Thread

Initial Comment:
Would it be useful to have a special writer thread that sends multiple
files in async mode to multiple clients? For example, if I serve big ISO
or movie files and have many connections, currently they all use a conn
thread for a long time until the whole file is sent. Instead we can mark
the conn to be used in the writer thread and release the conn thread for
other requests; in the meantime the writer thread will send multiple FDs
to clients in one big loop. Currently it is possible to simply change
ConnSend in connio.c to submit the open descriptor to the writer queue
and return, marking the connection so the usual NsClose will not close
the actual connection socket. The writer thread is then a simple loop
reading small chunks from every file and sending them to the
corresponding socket. Does it make sense?

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)  Date: 2006-01-12 14:46
Logged In: YES  user_id=184124

I did some preliminary coding and it works; the speed of downloading 2
simultaneous huge files is on average the same as with the current
model. The tests are not very useful because I tested on one computer
only. I am attaching the 2 files that I modified, but more thought and
work should be done. Currently it supports only big files not returned
in chunked mode by fastpath, but I think this is more than enough; all
other dynamic content will go as usual. The WriterThread is at the end
of driver.c, and connio.c was modified in the ConnSend function only.
Yes, the logging and running of atclose procs is the open question; I
would run them in the conn thread. The parts between #ifdef/#endif in
WriterThread are experiments and should be removed; they slowed down the
server significantly.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646
From: SourceForge.net <no...@so...> - 2006-01-12 14:45:54
Feature Requests item #1403933, was opened at 2006-01-12 14:45
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Writer Thread

Initial Comment:
Would it be useful to have a special writer thread that sends multiple
files in async mode to multiple clients? For example, if I serve big ISO
or movie files and have many connections, currently they all use a conn
thread for a long time until the whole file is sent. Instead we can mark
the conn to be used in the writer thread and release the conn thread for
other requests; in the meantime the writer thread will send multiple FDs
to clients in one big loop. Currently it is possible to simply change
ConnSend in connio.c to submit the open descriptor to the writer queue
and return, marking the connection so the usual NsClose will not close
the actual connection socket. The writer thread is then a simple loop
reading small chunks from every file and sending them to the
corresponding socket. Does it make sense?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1403933&group_id=130646
From: SourceForge.net <no...@so...> - 2006-01-12 03:12:15
Feature Requests item #1396336, was opened at 2006-01-03 21:35
Message generated for change (Settings changed) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Spooler support

Initial Comment:
The proposed patch is a simple spooler thread, similar to the driver
thread; all uploads bigger than readahead are spooled into the spooler
thread, which uses a temp file to keep the data.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2006-01-11 19:33
Logged In: YES  user_id=184124

Okay, I will work on it.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2006-01-11 11:23
Logged In: YES  user_id=95086

Let's go check in that stuff. It looks all right to me. There are a
couple of minor things that I'd change there, but overall I'm satisfied.
With time we can improve this by adding more control, like stop-upload
etc.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646
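A rough sketch of the spooling idea under discussion, assuming the
spooler drains the rest of a request body into an anonymous temp file
once the readahead buffer is full. The helper name is hypothetical, and
a blocking recv() stands in for the event-driven I/O a real spooler
thread would use:

    #include <stdlib.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Drain "remaining" upload bytes from sock into an unlinked temp
     * file and return its descriptor, rewound for reading. */
    static int
    SpoolToTempFile(int sock, size_t remaining)
    {
        char path[] = "/tmp/nsspoolXXXXXX";
        char buf[8192];
        int  fd = mkstemp(path);

        if (fd < 0) {
            return -1;
        }
        unlink(path);                /* anonymous: vanishes on close */

        while (remaining > 0) {
            size_t  want = remaining < sizeof(buf) ? remaining : sizeof(buf);
            ssize_t n = recv(sock, buf, want, 0);

            if (n <= 0 || write(fd, buf, (size_t)n) != n) {
                close(fd);
                return -1;           /* client hung up or disk error */
            }
            remaining -= (size_t)n;
        }
        lseek(fd, 0, SEEK_SET);      /* conn thread reads body from here */
        return fd;
    }
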
From: SourceForge.net <no...@so...> - 2006-01-12 03:12:15
Feature Requests item #1393765, was opened at 2005-12-30 17:28
Message generated for change (Settings changed) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
>Status: Closed
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Upload Statistics

Initial Comment:
I have another idea I'd like to discuss before I even start coding a
prototype. There is a thread on the AOLserver mailing list about upload
progress, so I thought: would it be a good idea to have a global,
URL-specific cache of all uploads, let's say all requests with
content-length > 0? It will have only 2 values, current size and total
length, and will last only for the time of the upload. It will belong to
the Sock structure of the Request, so on close it will be freed as well.
Making it URL-specific gives the Web developer the ability to generate
unique URLs for an upload and then request statistics; requests with the
same URL will not override each other. The server will update statistics
for the first created cache only; subsequent uploads with the same URL
will show nothing or old values, which is fine for security reasons.
Overhead is minimal, and it will add one new command like
ns_upload_stats url. SockRead will handle it all, so no other places are
affected.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2006-01-02 18:01
Logged In: YES  user_id=184124

Another patch that makes locking very light in the driver thread: if
nobody is asking for stats, it only takes the lock to update 2 integers.
Only the driver thread handles sockets anyway.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 18:10
Logged In: YES  user_id=184124

A more generic patch, using the query as well.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-12-30 17:48
Logged In: YES  user_id=95086

Oh... using request->line sounds better to me (easier). But don't nail
me down on that. I still have to clear it with our web-GUI guy.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 17:33
Logged In: YES  user_id=184124

Also, another option is to use request->line instead of the URL; this
way I can reuse the same URL but use different parameters, like a
session id, to differentiate uploads.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646
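A usage sketch of the proposed command, as seen from a progress page.
The return value, a list of current size and total length, is an
assumption based on the description above, and the URL scheme is only an
example:

    # The upload form posts to a unique, per-session URL:
    set url "/upload?session=[ns_rand 1000000]"

    # Meanwhile, a progress page polls the statistics for that URL
    # (Tcl 8.4-style multi-assignment):
    foreach {current total} [ns_upload_stats $url] break
    if {$total > 0} {
        ns_write "uploaded [expr {100 * $current / $total}]%"
    }
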
From: SourceForge.net <no...@so...> - 2006-01-11 19:33:46
Feature Requests item #1396336, was opened at 2006-01-03 21:35
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Spooler support

Initial Comment:
The proposed patch is a simple spooler thread, similar to the driver
thread; all uploads bigger than readahead are spooled into the spooler
thread, which uses a temp file to keep the data.

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)  Date: 2006-01-11 19:33
Logged In: YES  user_id=184124

Okay, I will work on it.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2006-01-11 11:23
Logged In: YES  user_id=95086

Let's go check in that stuff. It looks all right to me. There are a
couple of minor things that I'd change there, but overall I'm satisfied.
With time we can improve this by adding more control, like stop-upload
etc.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646
From: SourceForge.net <no...@so...> - 2006-01-11 11:23:27
Feature Requests item #1396336, was opened at 2006-01-03 22:35
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Spooler support

Initial Comment:
The proposed patch is a simple spooler thread, similar to the driver
thread; all uploads bigger than readahead are spooled into the spooler
thread, which uses a temp file to keep the data.

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2006-01-11 12:23
Logged In: YES  user_id=95086

Let's go check in that stuff. It looks all right to me. There are a
couple of minor things that I'd change there, but overall I'm satisfied.
With time we can improve this by adding more control, like stop-upload
etc.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646
From: SourceForge.net <no...@so...> - 2006-01-03 21:35:15
Feature Requests item #1396336, was opened at 2006-01-03 21:35
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Spooler support

Initial Comment:
The proposed patch is a simple spooler thread, similar to the driver
thread; all uploads bigger than readahead are spooled into the spooler
thread, which uses a temp file to keep the data.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1396336&group_id=130646
From: SourceForge.net <no...@so...> - 2006-01-02 18:01:45
Feature Requests item #1393765, was opened at 2005-12-30 17:28
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Upload Statistics

Initial Comment:
I have another idea I'd like to discuss before I even start coding a
prototype. There is a thread on the AOLserver mailing list about upload
progress, so I thought: would it be a good idea to have a global,
URL-specific cache of all uploads, let's say all requests with
content-length > 0? It will have only 2 values, current size and total
length, and will last only for the time of the upload. It will belong to
the Sock structure of the Request, so on close it will be freed as well.
Making it URL-specific gives the Web developer the ability to generate
unique URLs for an upload and then request statistics; requests with the
same URL will not override each other. The server will update statistics
for the first created cache only; subsequent uploads with the same URL
will show nothing or old values, which is fine for security reasons.
Overhead is minimal, and it will add one new command like
ns_upload_stats url. SockRead will handle it all, so no other places are
affected.

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)  Date: 2006-01-02 18:01
Logged In: YES  user_id=184124

Another patch that makes locking very light in the driver thread: if
nobody is asking for stats, it only takes the lock to update 2 integers.
Only the driver thread handles sockets anyway.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 18:10
Logged In: YES  user_id=184124

A more generic patch, using the query as well.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-12-30 17:48
Logged In: YES  user_id=95086

Oh... using request->line sounds better to me (easier). But don't nail
me down on that. I still have to clear it with our web-GUI guy.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 17:33
Logged In: YES  user_id=184124

Also, another option is to use request->line instead of the URL; this
way I can reuse the same URL but use different parameters, like a
session id, to differentiate uploads.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-30 18:10:36
Feature Requests item #1393765, was opened at 2005-12-30 17:28
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Upload Statistics

Initial Comment:
I have another idea I'd like to discuss before I even start coding a
prototype. There is a thread on the AOLserver mailing list about upload
progress, so I thought: would it be a good idea to have a global,
URL-specific cache of all uploads, let's say all requests with
content-length > 0? It will have only 2 values, current size and total
length, and will last only for the time of the upload. It will belong to
the Sock structure of the Request, so on close it will be freed as well.
Making it URL-specific gives the Web developer the ability to generate
unique URLs for an upload and then request statistics; requests with the
same URL will not override each other. The server will update statistics
for the first created cache only; subsequent uploads with the same URL
will show nothing or old values, which is fine for security reasons.
Overhead is minimal, and it will add one new command like
ns_upload_stats url. SockRead will handle it all, so no other places are
affected.

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 18:10
Logged In: YES  user_id=184124

A more generic patch, using the query as well.

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-12-30 17:48
Logged In: YES  user_id=95086

Oh... using request->line sounds better to me (easier). But don't nail
me down on that. I still have to clear it with our web-GUI guy.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 17:33
Logged In: YES  user_id=184124

Also, another option is to use request->line instead of the URL; this
way I can reuse the same URL but use different parameters, like a
session id, to differentiate uploads.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-30 17:48:14
Feature Requests item #1393765, was opened at 2005-12-30 18:28
Message generated for change (Comment added) made by vasiljevic
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Upload Statistics

Initial Comment:
I have another idea I'd like to discuss before I even start coding a
prototype. There is a thread on the AOLserver mailing list about upload
progress, so I thought: would it be a good idea to have a global,
URL-specific cache of all uploads, let's say all requests with
content-length > 0? It will have only 2 values, current size and total
length, and will last only for the time of the upload. It will belong to
the Sock structure of the Request, so on close it will be freed as well.
Making it URL-specific gives the Web developer the ability to generate
unique URLs for an upload and then request statistics; requests with the
same URL will not override each other. The server will update statistics
for the first created cache only; subsequent uploads with the same URL
will show nothing or old values, which is fine for security reasons.
Overhead is minimal, and it will add one new command like
ns_upload_stats url. SockRead will handle it all, so no other places are
affected.

----------------------------------------------------------------------

>Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-12-30 18:48
Logged In: YES  user_id=95086

Oh... using request->line sounds better to me (easier). But don't nail
me down on that. I still have to clear it with our web-GUI guy.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 18:33
Logged In: YES  user_id=184124

Also, another option is to use request->line instead of the URL; this
way I can reuse the same URL but use different parameters, like a
session id, to differentiate uploads.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-30 17:33:31
Feature Requests item #1393765, was opened at 2005-12-30 17:28
Message generated for change (Comment added) made by seryakov
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Upload Statistics

Initial Comment:
I have another idea I'd like to discuss before I even start coding a
prototype. There is a thread on the AOLserver mailing list about upload
progress, so I thought: would it be a good idea to have a global,
URL-specific cache of all uploads, let's say all requests with
content-length > 0? It will have only 2 values, current size and total
length, and will last only for the time of the upload. It will belong to
the Sock structure of the Request, so on close it will be freed as well.
Making it URL-specific gives the Web developer the ability to generate
unique URLs for an upload and then request statistics; requests with the
same URL will not override each other. The server will update statistics
for the first created cache only; subsequent uploads with the same URL
will show nothing or old values, which is fine for security reasons.
Overhead is minimal, and it will add one new command like
ns_upload_stats url. SockRead will handle it all, so no other places are
affected.

----------------------------------------------------------------------

>Comment By: Vlad Seryakov (seryakov)  Date: 2005-12-30 17:33
Logged In: YES  user_id=184124

Also, another option is to use request->line instead of the URL; this
way I can reuse the same URL but use different parameters, like a
session id, to differentiate uploads.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-30 17:28:44
Feature Requests item #1393765, was opened at 2005-12-30 17:28
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Vlad Seryakov (seryakov)
Assigned to: Vlad Seryakov (seryakov)
Summary: Upload Statistics

Initial Comment:
I have another idea I'd like to discuss before I even start coding a
prototype. There is a thread on the AOLserver mailing list about upload
progress, so I thought: would it be a good idea to have a global,
URL-specific cache of all uploads, let's say all requests with
content-length > 0? It will have only 2 values, current size and total
length, and will last only for the time of the upload. It will belong to
the Sock structure of the Request, so on close it will be freed as well.
Making it URL-specific gives the Web developer the ability to generate
unique URLs for an upload and then request statistics; requests with the
same URL will not override each other. The server will update statistics
for the first created cache only; subsequent uploads with the same URL
will show nothing or old values, which is fine for security reasons.
Overhead is minimal, and it will add one new command like
ns_upload_stats url. SockRead will handle it all, so no other places are
affected.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1393765&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-30 11:27:31
Feature Requests item #1377113, was opened at 2005-12-09 06:23
Message generated for change (Settings changed) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1377113&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
>Status: Closed
>Resolution: Accepted
Priority: 5
Submitted By: Stephen Deasey (sdeasey)
>Assigned to: Stephen Deasey (sdeasey)
Summary: Double locking in urlspace

Initial Comment:
The code in urlspace.c has its own private lock around all operations
which touch the trie. Subsystems which call this code have their own
locks with a slightly larger scope, to allow reference counting of the
URL-specific data. Here's a longer explanation:

http://sourceforge.net/mailarchive/forum.php?thread_id=9172330&forum_id=43966

This patch moves the ID out of the trie and uses it instead to select
from one of several tries. Originally there was one global trie called
urlspace. Now urlspace is an array of tries, which is indexed by ID. All
locking has been removed from the urlspace code. This is a potential
incompatibility. The existing core code which uses this is fine:
registered procedures and registered url2file procs have their own
locking, and the queue.c code which assigns requests to a particular
conn pool has no locking at all, and doesn't need it, as the
configuration is read at startup and never changes.

I've changed a couple of other things. I've removed the ServerSpecific*
procs. It's not a very useful API, as it's just as easy to allocate
server-specific data on module load and pass around a pointer in
callbacks, or, failing that, to use a simple hash table.

I've also changed the output format of the NsUrlSpecificWalk function.
It no longer returns the virtual server name, which was redundant, as
you are required to pass it to the function. The URL and filter are
returned as two separate elements, with the minimal URL always being /.
A new entry is printed for both inheriting and non-inheriting data, and
the data is preceded by the word 'inherit' or 'noinherit'. Previously it
was impossible to tell what kind of data was registered for a URL. This
function is still a little mysterious to me. I wonder why you can't just
build up the URL in a dstring rather than push it all onto a stack..?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1377113&group_id=130646
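A structural sketch of the change described above, using illustrative
names and types rather than the actual urlspace.c declarations:

    /* Was: one global trie, guarded by a private mutex in urlspace.c. */
    #define MAX_URLSPACES 16            /* illustrative bound */

    typedef struct Trie Trie;           /* opaque per-subsystem URL trie */
    extern void *TrieFind(Trie *triePtr, const char *url);

    static Trie *urlspace[MAX_URLSPACES];   /* now: one trie per ID */

    /*
     * No mutex is taken here any more: each calling subsystem
     * (registered procs, url2file, conn-pool mapping) serializes
     * access with its own, wider-scope lock, or needs none at all.
     */
    void *
    UrlSpecificGet(int id, const char *url)
    {
        Trie *triePtr = urlspace[id];

        return (triePtr != NULL) ? TrieFind(triePtr, url) : NULL;
    }
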
From: SourceForge.net <no...@so...> - 2005-12-30 11:09:24
Feature Requests item #1159259, was opened at 2005-03-08 11:30
Message generated for change (Comment added) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1159259&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: Tcl-API
Group: None
>Status: Closed
>Resolution: Duplicate
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Stephen Deasey (sdeasey)
Summary: Add nscache module functionality

Initial Comment:
This is really what needs to be done, since the server itself does
support the caches; what is lacking is the Tcl access to them. The
nscache module (or its equivalent) deserves to be part of the server
distribution.

----------------------------------------------------------------------

>Comment By: Stephen Deasey (sdeasey)  Date: 2005-12-30 04:09
Logged In: YES  user_id=87254

See RFE: 1119257

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-03-18 05:15
Logged In: YES  user_id=95086

Take whatever time you need. I do not have any problems with that. I
also think that namespaced commands are better, so instead of ns_cache
blabla the better way would be ns::cache::blabla, but I think it is too
early to make this kind of switch. We can eventually do this sometime,
leaving the ns_xxxx commands for compatibility, but this is another
story.

----------------------------------------------------------------------

Comment By: Stephen Deasey (sdeasey)  Date: 2005-03-18 05:03
Logged In: YES  user_id=87254

The existing nscache module is a good example of the ensemble method of
command parsing sucking, IMHO. The master command has to parse the first
word to figure out which subcommand to invoke, and then dispatches to
the implementation of that subcommand, as they have nothing in common.
It requires extra code and slows performance.

There are locking problems with the current API. ns_cache_stats, for
example, can access any cache in the system at any time, but there is no
locking of the Cache structures. The built-in commands work on a global
level; the commands in the nscache module operate only on caches created
with those commands. It would be a shame to junk the ability of the
current commands to work with all caches.

The nscache module works fine as is, and as I've already started work on
integrating the Tcl cache commands, I'd prefer if the old ones weren't
added in the meantime. Can you give me some time to post a patch to the
original cache RFE?

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-03-18 01:56
Logged In: YES  user_id=95086

I would also somehow unify the Tcl API. If you look at the server part
it does ns_cache_XXXXX, whereas the cache module has a different API:
ns_cache YYYY. I prefer the latter (ensemble-based) API. If we move the
cache module into the core, we should make ns_cache YYYYY wrappers
around the ns_cache_XXXXX commands. What do you think?

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-03-17 21:33
Logged In: YES  user_id=184124

For a start we can just add tclcache.c into the core and later
improve/rewrite caching if needed. Currently I am pretty happy with this
module functionality as is.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-03-08 12:48
Logged In: YES  user_id=184124

Yes, I agree with it.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1159259&group_id=130646
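A minimal sketch of the wrapper direction proposed in the thread,
dispatching ensemble-style calls to the flat commands (the subcommand
set is whatever flat ns_cache_* commands exist; names here are
illustrative):

    # Forward "ns_cache <subcmd> args..." to the flat ns_cache_<subcmd>
    # commands, so both call styles keep working.
    proc ns_cache {subcmd args} {
        set cmd ns_cache_$subcmd
        if {[info commands $cmd] eq {}} {
            error "ns_cache: unknown subcommand \"$subcmd\""
        }
        uplevel 1 [linsert $args 0 $cmd]
    }

    # e.g. "ns_cache flush mycache" now runs "ns_cache_flush mycache"
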
From: SourceForge.net <no...@so...> - 2005-12-30 11:08:26
Feature Requests item #1119257, was opened at 2005-02-09 05:32
Message generated for change (Comment added) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1119257&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

>Category: Tcl-API
Group: None
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Stephen Deasey (sdeasey)
Summary: Cache API, add Tcl interface into core

Initial Comment:
These commands are in the server itself: ns_cache_flush, ns_cache_stats,
ns_cache_size, ns_cache_names, ns_cache_keys. I would not touch them,
for compatibility reasons. However, they are pretty limited
(introspection/management). What is missing is the type of functionality
added by the nscache module, which I believe should have been done a
long time ago already. Suggestion: include nscache in the core Tcl
commands. Stephen, you mentioned you've been working on an alternate
cache implementation. Can we use this and add a better Tcl interface?

----------------------------------------------------------------------

>Comment By: Stephen Deasey (sdeasey)  Date: 2005-12-30 04:08
Logged In: YES  user_id=87254

2005-12-30  Stephen Deasey  <sd...@us...>

    * include/ns.h:
    * nsd/nsd.h:
    * nsd/init.c:
    * nsd/proc.c:
    * nsd/cache.c:
    Global hash of cache names removed; access no longer available via
    Ns_CacheFind() or ns_cache_names. Caches have unique locking
    requirements and lifetimes which could be violated by e.g. running
    ns_cache_stats on a thread-local cache as the thread is exiting.

    New Ns_CacheCreateEx() allows creating a cache which has both a
    size and a time limit. Caches with time limits are calculated
    differently. Previous behaviour was to periodically check the last
    access time of each cache entry and flush those which had expired;
    a busy cache could grow without bound. New behaviour is to check
    for expiry each time an entry is retrieved from the cache. All
    caches can (and should) be size limited. Timeouts may be specified
    per cache and per entry using the new Ns_CacheSetValueExpires().

    Ns_CacheFlush() now returns the number of entries flushed. New
    Ns_CacheStats() takes over from ns_cache_stats for non-Tcl caches.
    Stats are returned in Tcl "array get" format. Removed
    Ns_CacheMalloc(), Ns_CacheFree() and Ns_CacheName(). Move Tcl
    commands to tclcache.c.

    * nsd/Makefile:
    * nsd/tclcmds.c:
    * nsd/tclcache.c:
    * tests/ns_cache.test:
    Add new Tcl commands ns_cache_create, _eval, _append, _lappend, and
    _incr. (RFE# 1119257) Old commands ns_cache_names, _keys, _flush
    and _stats now only work on Tcl caches. ns_cache_size has been
    removed; the size is now reported along with other statistics.

    * tcl/cache.tcl:
    New ns_memoize and related commands which act just like
    ns_cache_eval but use the script as a key into the memoize cache.

----------------------------------------------------------------------

Comment By: Stephen Deasey (sdeasey)  Date: 2005-12-23 12:01
Logged In: YES  user_id=87254

Here's a version of the Tcl caching commands merged into the core rather
than as a standalone module. It includes the extra commands such as
_incr and _lappend, and also an implementation of ns_memoize. Tcl caches
can be created at runtime. Individual entries can have a max lifetime.

----------------------------------------------------------------------

Comment By: Stephen Deasey (sdeasey)  Date: 2005-10-21 04:04
Logged In: YES  user_id=87254

Attached a snapshot of development. This is obviously taking too long,
so here it is. FIXME: Add explanation here...

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-02-09 10:25
Logged In: YES  user_id=95086

We can synthesize your changes and Stephen's rewrite of the cache guts.
I see no problem there.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-02-09 10:20
Logged In: YES  user_id=184124

I added an ns_cache incr command also some time ago; I think it is in
the CVS version of nscache. I also used to play with the core cache to
add more precise size calculation, including overhead.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1119257&group_id=130646
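A short usage sketch of the new commands named in the ChangeLog above.
The option names and the db_query helper are illustrative assumptions,
not confirmed syntax:

    # Create a size-limited Tcl cache at runtime.
    ns_cache_create -size 1000000 users

    # Compute-on-miss: the script runs only when the key is absent.
    set row [ns_cache_eval users user:42 {
        db_query {select * from users where id = 42}
    }]

    # ns_memoize acts like ns_cache_eval, but the script itself is the
    # key into the memoize cache.
    set pi [ns_memoize {expr {4 * atan(1)}}]
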
From: SourceForge.net <no...@so...> - 2005-12-23 19:01:35
Feature Requests item #1119257, was opened at 2005-02-09 05:32
Message generated for change (Comment added) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1119257&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Zoran Vasiljevic (vasiljevic)
Assigned to: Stephen Deasey (sdeasey)
Summary: Cache API, add Tcl interface into core

Initial Comment:
These commands are in the server itself: ns_cache_flush, ns_cache_stats,
ns_cache_size, ns_cache_names, ns_cache_keys. I would not touch them,
for compatibility reasons. However, they are pretty limited
(introspection/management). What is missing is the type of functionality
added by the nscache module, which I believe should have been done a
long time ago already. Suggestion: include nscache in the core Tcl
commands. Stephen, you mentioned you've been working on an alternate
cache implementation. Can we use this and add a better Tcl interface?

----------------------------------------------------------------------

>Comment By: Stephen Deasey (sdeasey)  Date: 2005-12-23 12:01
Logged In: YES  user_id=87254

Here's a version of the Tcl caching commands merged into the core rather
than as a standalone module. It includes the extra commands such as
_incr and _lappend, and also an implementation of ns_memoize. Tcl caches
can be created at runtime. Individual entries can have a max lifetime.

----------------------------------------------------------------------

Comment By: Stephen Deasey (sdeasey)  Date: 2005-10-21 04:04
Logged In: YES  user_id=87254

Attached a snapshot of development. This is obviously taking too long,
so here it is. FIXME: Add explanation here...

----------------------------------------------------------------------

Comment By: Zoran Vasiljevic (vasiljevic)  Date: 2005-02-09 10:25
Logged In: YES  user_id=95086

We can synthesize your changes and Stephen's rewrite of the cache guts.
I see no problem there.

----------------------------------------------------------------------

Comment By: Vlad Seryakov (seryakov)  Date: 2005-02-09 10:20
Logged In: YES  user_id=184124

I added an ns_cache incr command also some time ago; I think it is in
the CVS version of nscache. I also used to play with the core cache to
add more precise size calculation, including overhead.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1119257&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-11 03:36:36
Feature Requests item #1365119, was opened at 2005-11-23 16:40
Message generated for change (Settings changed) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1365119&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: Next Release (example)
>Status: Closed
>Resolution: Wont Fix
Priority: 5
Submitted By: Paul Bukowski (shacky)
>Assigned to: Stephen Deasey (sdeasey)
Summary: throttle inside nssock module??

Initial Comment:
It would be interesting if navi could limit in/out bandwidth per
vhost... or per outgoing IP address.

----------------------------------------------------------------------

Comment By: Stephen Deasey (sdeasey)  Date: 2005-12-08 23:35
Logged In: YES  user_id=87254

Hi Paul,

I haven't looked carefully, but from within nssock I don't think you
have enough information to throttle by virtual host, although you could
probably do it by IP address.

I guess there are a couple of reasons you might want to throttle
bandwidth. As an ISP, you might want to limit each customer to some
fraction of your bandwidth, depending upon their plan. You may also want
to protect against DOS attacks, even accidental ones such as when
someone tries to mirror your website.

You have to be careful though. Even something as simple as limiting the
bandwidth per domain at an ISP can have unforeseen consequences. For
example, limiting the download rate will cause server threads to be tied
up for longer than they otherwise would be. It may have been possible to
serve some file in a fraction of a second, but because you are limiting
bandwidth, the thread which is writing the data sleeps occasionally. You
can end up in a situation where all threads are busy serving data really
slowly, refusing new connections and generally giving poor service to
all, when in reality the capacity of the server is much higher.

Anyway, it's not clear what the best solution is, and I don't think that
the standard nssock is the place to put it. nssock is only a couple
hundred lines of well-commented code. I would suggest you copy it to
nsthrottle and start hacking. The thttpd web server has throttling code
and is open source, so you could use that for inspiration.

Jump on to the naviserver developers mailing list and we'll be glad to
help you out. If you come up with something, we'll give you a CVS
account and you can maintain your own module here if you'd like.

Hope that helps.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1365119&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-11 03:35:30
Bugs item #1340755, was opened at 2005-10-28 08:29
Message generated for change (Comment added) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1340755&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: None
Group: Current
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Bernd Eidenschink (eide)
>Assigned to: Stephen Deasey (sdeasey)
Summary: ns_httptime not locale safe

Initial Comment:
ns_httptime is supposed to return a date in a predictable format, an
English-locale one, but it does not; see the comment in httptime.c:

    /*
     * This will most likely break if the locale is not an english one.
     * The format is RFC 1123: "Sun, 06 Nov 1997 09:12:45 GMT"
     */

The Tcl API uses it in "util.tcl" (ns_setexpires) and "sendmail.tcl"
(DATE header). Depending on the locale set, this sometimes causes mails
to be sorted wrongly in MUAs, or misleads HTTP proxies.

----------------------------------------------------------------------

>Comment By: Stephen Deasey (sdeasey)  Date: 2005-12-10 20:35
Logged In: YES  user_id=87254

Fixed using Eduardo's patch:
http://sourceforge.net/tracker/index.php?func=detail&aid=1247879&group_id=3152&atid=103152

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1340755&group_id=130646
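The usual fix for this class of bug is to format the date from fixed
English name tables instead of strftime's locale-dependent %a/%b
conversions. A sketch of that approach (illustrative only; the change
actually applied is Eduardo's patch linked above):

    #include <stdio.h>
    #include <time.h>

    static const char *days[]   = {"Sun","Mon","Tue","Wed","Thu","Fri","Sat"};
    static const char *months[] = {"Jan","Feb","Mar","Apr","May","Jun",
                                   "Jul","Aug","Sep","Oct","Nov","Dec"};

    /* Format an RFC 1123 date, e.g. "Sun, 06 Nov 1997 09:12:45 GMT",
     * independent of the current process locale. */
    static void
    HttpTime(time_t when, char *buf, size_t len)
    {
        struct tm tm;

        gmtime_r(&when, &tm);
        snprintf(buf, len, "%s, %02d %s %04d %02d:%02d:%02d GMT",
                 days[tm.tm_wday], tm.tm_mday, months[tm.tm_mon],
                 tm.tm_year + 1900, tm.tm_hour, tm.tm_min, tm.tm_sec);
    }
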
From: SourceForge.net <no...@so...> - 2005-12-09 13:23:22
Feature Requests item #1377113, was opened at 2005-12-09 06:23
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1377113&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Stephen Deasey (sdeasey)
Assigned to: Nobody/Anonymous (nobody)
Summary: Double locking in urlspace

Initial Comment:
The code in urlspace.c has its own private lock around all operations
which touch the trie. Subsystems which call this code have their own
locks with a slightly larger scope, to allow reference counting of the
URL-specific data. Here's a longer explanation:

http://sourceforge.net/mailarchive/forum.php?thread_id=9172330&forum_id=43966

This patch moves the ID out of the trie and uses it instead to select
from one of several tries. Originally there was one global trie called
urlspace. Now urlspace is an array of tries, which is indexed by ID. All
locking has been removed from the urlspace code. This is a potential
incompatibility. The existing core code which uses this is fine:
registered procedures and registered url2file procs have their own
locking, and the queue.c code which assigns requests to a particular
conn pool has no locking at all, and doesn't need it, as the
configuration is read at startup and never changes.

I've changed a couple of other things. I've removed the ServerSpecific*
procs. It's not a very useful API, as it's just as easy to allocate
server-specific data on module load and pass around a pointer in
callbacks, or, failing that, to use a simple hash table.

I've also changed the output format of the NsUrlSpecificWalk function.
It no longer returns the virtual server name, which was redundant, as
you are required to pass it to the function. The URL and filter are
returned as two separate elements, with the minimal URL always being /.
A new entry is printed for both inheriting and non-inheriting data, and
the data is preceded by the word 'inherit' or 'noinherit'. Previously it
was impossible to tell what kind of data was registered for a URL. This
function is still a little mysterious to me. I wonder why you can't just
build up the URL in a dstring rather than push it all onto a stack..?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1377113&group_id=130646
From: SourceForge.net <no...@so...> - 2005-12-09 11:33:41
Bugs item #1377059, was opened at 2005-12-09 04:33
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1377059&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: None
Group: Current
Status: Open
Resolution: None
Priority: 8
Submitted By: Stephen Deasey (sdeasey)
Assigned to: Nobody/Anonymous (nobody)
Summary: UTF8 input not validated

Initial Comment:
Here's an old bug report on the tDOM XML parser mailing list:

http://groups.yahoo.com/group/tdom/message/1092

I think the problem is in form.c Ext2Utf(). This function is supposed to
convert an external string of characters (such as from a form
submission) to UTF-8 before they are passed to the Tcl core. If the
encoding which is passed in (which comes from various configuration
options) happens to be null, the function becomes a no-op, simply
appending the data unchanged to the dstring. As it happens, the encoding
can be null...

Even if it couldn't, I don't think it is ever valid for this function to
simply append the data unchecked. The idea is that it is converting from
some encoding to UTF-8, and that if the input is already UTF-8 then no
encoding needs to happen. But the input could be *invalid* UTF-8. What's
needed is to convert from UTF-8 to UTF-8, or effectively to validate the
characters. The guys on the tDOM list say that all sorts of bad things
can happen when invalid UTF-8 gets into the core, and as this character
data comes from the 'Net, it's a bit concerning.

There were some changes made to AOLserver 8 months ago to simplify the
encoding subsystem, or at least the configuration of it. We should
probably import those before changing anything else.
(http://cvs.sourceforge.net/viewcvs.py/aolserver/aolserver/ChangeLog?rev=1.321&view=markup)

We need to decide how best to fix the above-mentioned problem with
validation of UTF-8 data. The nstest_http proc has no notion of charsets
etc.; it might need to be changed to properly test the fix. There might
be other places in the code which have similar problems. I seem to
remember a report recently about the AOLserver nsdb module crashing due
to weird encoding issues, but I can't seem to find it now...

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719006&aid=1377059&group_id=130646
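A hypothetical repair sketch for the no-op case described above: when no
external encoding is configured, run the bytes through Tcl's "utf-8"
encoding anyway instead of appending them unchecked, so malformed
sequences are normalized before reaching the Tcl core. This is not the
actual form.c code, and the DString handling is simplified (the real
function appends to dsPtr):

    #include <tcl.h>

    static void
    Ext2Utf(Tcl_DString *dsPtr, const char *bytes, int len,
            Tcl_Encoding encoding)
    {
        Tcl_Encoding utf8 = NULL;

        if (encoding == NULL) {
            /* Was: append unchanged. Instead, validate as UTF-8. */
            utf8 = Tcl_GetEncoding(NULL, "utf-8");
            encoding = utf8;
        }
        Tcl_ExternalToUtfDString(encoding, bytes, len, dsPtr);
        if (utf8 != NULL) {
            Tcl_FreeEncoding(utf8);
        }
    }
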
From: SourceForge.net <no...@so...> - 2005-12-09 06:35:59
Feature Requests item #1365119, was opened at 2005-11-23 16:40
Message generated for change (Comment added) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1365119&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: Next Release (example)
Status: Open
Resolution: None
Priority: 5
Submitted By: Paul Bukowski (shacky)
Assigned to: Nobody/Anonymous (nobody)
Summary: throttle inside nssock module??

Initial Comment:
It would be interesting if navi could limit in/out bandwidth per
vhost... or per outgoing IP address.

----------------------------------------------------------------------

>Comment By: Stephen Deasey (sdeasey)  Date: 2005-12-08 23:35
Logged In: YES  user_id=87254

Hi Paul,

I haven't looked carefully, but from within nssock I don't think you
have enough information to throttle by virtual host, although you could
probably do it by IP address.

I guess there are a couple of reasons you might want to throttle
bandwidth. As an ISP, you might want to limit each customer to some
fraction of your bandwidth, depending upon their plan. You may also want
to protect against DOS attacks, even accidental ones such as when
someone tries to mirror your website.

You have to be careful though. Even something as simple as limiting the
bandwidth per domain at an ISP can have unforeseen consequences. For
example, limiting the download rate will cause server threads to be tied
up for longer than they otherwise would be. It may have been possible to
serve some file in a fraction of a second, but because you are limiting
bandwidth, the thread which is writing the data sleeps occasionally. You
can end up in a situation where all threads are busy serving data really
slowly, refusing new connections and generally giving poor service to
all, when in reality the capacity of the server is much higher.

Anyway, it's not clear what the best solution is, and I don't think that
the standard nssock is the place to put it. nssock is only a couple
hundred lines of well-commented code. I would suggest you copy it to
nsthrottle and start hacking. The thttpd web server has throttling code
and is open source, so you could use that for inspiration.

Jump on to the naviserver developers mailing list and we'll be glad to
help you out. If you come up with something, we'll give you a CVS
account and you can maintain your own module here if you'd like.

Hope that helps.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1365119&group_id=130646
From: SourceForge.net <no...@so...> - 2005-11-23 23:40:07
Feature Requests item #1365119, was opened at 2005-11-24 00:40
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1365119&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: C-API
Group: Next Release (example)
Status: Open
Resolution: None
Priority: 5
Submitted By: Paul Bukowski (shacky)
Assigned to: Nobody/Anonymous (nobody)
Summary: throttle inside nssock module??

Initial Comment:
It would be interesting if navi could limit in/out bandwidth per
vhost... or per outgoing IP address.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1365119&group_id=130646
From: SourceForge.net <no...@so...> - 2005-11-07 04:31:08
Feature Requests item #1333811, was opened at 2005-10-21 01:18
Message generated for change (Settings changed) made by sdeasey
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1333811&group_id=130646

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request, not just the latest update.

Category: None
Group: None
>Status: Closed
>Resolution: Accepted
Priority: 5
Submitted By: Stephen Deasey (sdeasey)
Assigned to: Stephen Deasey (sdeasey)
Summary: ns_register_url2file: url2file procs in Tcl

Initial Comment:
There is a C API for registering url2file callbacks (Ns_SetUrlToFile)
and converting a URL to a file path (Ns_UrlToFile). There's a Tcl API
for the latter (ns_url2file) but no way to implement the callbacks. A
further limitation is that url2file callbacks are not passed any
context, making configuration lookups difficult.

The attached patch consolidates all the url2file processing into a
single file and adds a Tcl API. It also adds a new C API with the
additional feature that url2file callbacks are registered for a URL
hierarchy, not just per virtual server:

    Ns_RegisterUrl2FileProc(CONST char *server, CONST char *url,
                            Ns_Url2FileProc *proc, Ns_Callback *delete,
                            void *arg, int flags)

Callbacks can be registered at runtime, may have context data, and
operate over some subset of the URL hierarchy, just like registered
procs. Backwards compatibility with Ns_SetUrlToFile is preserved by
allowing old-style procs to take precedence.

    ns_register_url2file ?-noinherit? ?--? url script ?arg?
    ns_unregister_url2file ?-noinherit? ?-recurse? ?--? url
    ns_register_fasturl2file ?-noinherit? ?--? url ?basepath?

ns_register_fasturl2file allows you to register a default url2file
handler for the entire URL hierarchy, and then selectively override some
subset of the default implementation. It also optionally allows you to
'mount' some subset of the URL hierarchy onto some portion of the file
system path other than the default, which is to concatenate the URL with
the page root. There's a reasonably complete set of tests which
demonstrate the API.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=719009&aid=1333811&group_id=130646
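A usage sketch built from the command syntax quoted above. The paths are
examples, and the assumption that a registered script's result is the
resolved file path is illustrative, not confirmed behavior:

    # Serve the whole tree from the page root by default, but 'mount'
    # /images on a shared directory outside the page root.
    ns_register_fasturl2file /
    ns_register_fasturl2file /images /var/www/shared/images

    # Hand /download to a Tcl resolver; the script result is assumed
    # to be the file path for the requested URL.
    ns_register_url2file /download {
        return "/exports[ns_conn url]"
    }
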