From: Zoran V. <zv...@ar...> - 2006-01-04 10:24:22

Interesting... What is a "task" (in a nutshell)?

Cheers
Zoran

From: Zoran V. <zv...@ar...> - 2006-01-04 07:13:59

On 04.01.2006, at 06:20, Vlad Seryakov wrote:

> I am attaching the whole driver.c file because the patch would not be
> very readable.

It won't go through, as the list limits messages to 40K. Go and upload it
into the RFE. That should work.

Zoran

From: Vlad S. <vl...@cr...> - 2006-01-04 05:18:06

I am attaching the whole driver.c file because a patch would not be very
readable. See if this is a good solution or not. It works and uses a
separate thread for all reading and spooling; all upload stats are also
done in the spooler thread, so the driver thread now works without any
locking.

Vlad Seryakov wrote:
> I submitted a Service Request with a new patch with spooler support.
> Very simple but seems to be working well. [...]

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/

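A minimal C sketch of the driver-to-spooler handoff Vlad describes, assuming
a simplified Sock structure and NaviServer's Ns_Mutex/Ns_Cond wrappers; the
queue layout and function names are illustrative, not the actual driver.c
code:

    /*
     * Hypothetical sketch: driver thread enqueues a partially-read Sock
     * and returns immediately; a spooler thread finishes the read.
     */
    #include <stddef.h>
    #include "ns.h"                    /* Ns_Mutex, Ns_Cond wrappers */

    typedef struct Sock Sock;          /* simplified stand-in for the real Sock */
    struct Sock {
        int   fd;
        Sock *nextPtr;                 /* intrusive queue link */
    };

    static Sock     *firstSockPtr;     /* head of the spooler queue */
    static Ns_Mutex  lock;
    static Ns_Cond   cond;

    /* Driver thread: enqueue and return, so it never blocks on disk or
     * slow clients. (LIFO for brevity; real code would keep FIFO order.) */
    void
    SpoolerQueue(Sock *sockPtr)
    {
        Ns_MutexLock(&lock);
        sockPtr->nextPtr = firstSockPtr;
        firstSockPtr = sockPtr;
        Ns_CondSignal(&cond);
        Ns_MutexUnlock(&lock);
    }

    /* Spooler thread: drain the queue, finish reading/spooling each Sock,
     * update upload stats, then hand off to a conn thread. */
    void
    SpoolerThread(void *arg)
    {
        Sock *sockPtr;

        for (;;) {
            Ns_MutexLock(&lock);
            while (firstSockPtr == NULL) {
                Ns_CondWait(&cond, &lock);
            }
            sockPtr = firstSockPtr;
            firstSockPtr = sockPtr->nextPtr;
            Ns_MutexUnlock(&lock);

            /* ... read remaining content, update upload stats ... */
            /* NsQueueConn(sockPtr);  -- assumed hand-off to a conn thread */
        }
    }
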
From: Gustaf N. <ne...@wu...> - 2006-01-03 21:57:27

Stephen Deasey wrote:
>> The driver thread gets the connection and reads all up to maxreadahead.
>> It then passes the connection to the conn thread.
>> The conn thread decides to run that connection entirely (POST of a
>> simple form or GET, all content read in) OR it decides to pass the
>> connection to the spooling thread(s).
>
> What happens to the conn thread after this? It can't wait for
> completion, that would defeat the purpose.
>
> Do traces (logging etc.) run now, in the conn thread, or later in a
> spool thread? If logging runs now, but the upload fails, the log will
> be wrong. If traces run in the spool threads they may block.

In my setup, we currently have the spooling thread just for sending data.
The code sent yesterday is a replacement for ns_returnfile. The connection
thread finishes asynchronously, before all data is spooled. This means
that the connection thread finishes after it delegates the file delivery
to the spool thread. In order to get the correct sent content length in
the log file, I had to set the sent content length manually (somewhat
optimistically). The traces are run immediately after the delegation.

So the request lifecycle (state flow) for a GET request for a large file
is currently:

  accept(DT) -> preauth(CT) -> postauth(CT) -> procs(CT) -> trace(CT) -> spool(ST)

DT: driver thread, CT: connection thread, ST: spool thread

To make the implementation cleaner, it would be preferable to provide a
means to pass the connection + context back to a connection thread that
should run the final trace:

  accept(DT) -> preauth(CT1) -> postauth(CT1) -> procs(CT1) -> spool(ST) -> trace(CT2)

I would think that if we can solve this part cleanly, the upload would
work with the same mechanism. If the states are handled explicitly, one
could think about a Tcl command

  ns_enqueue $state $client_data

that will enqueue a job for the connection thread pools, containing the
state where the connection should continue, together with a Tcl structure
client_data that passes around connection-specific context information
(e.g. user_id, ...) and the information from ns_conn (socket, head info,
...). For the example above, there would be an (implicit)

  ns_enqueue preauth ""

issued from the driver thread, an

  ns_spool send $filename (or $fd) $context

to pass control to the spooling thread, and an

  ns_enqueue trace $context

issued from the spooling thread to pass control back to a connection
thread.

> If further processing must be done after upload, does the spool thread
> pass control back to a conn thread? Does this look like a new
> request? The state of any running Tcl script will be lost at this
> point (the conn thread will have cleaned up after hand-off to a spool
> thread, right?).

Upload could be done with the same mechanisms:

1) driver thread: request-head processing
2) start a connection thread with a new filter (e.g. request-head)
3) the request-head filter will pass control to the spooling thread
   (e.g. ns_spool receive $length $context)
4) when the spool file is fully received (e.g. fcopy -command callback),
   it enqueues the request for the connection threads
   (e.g. ns_enqueue preauth $context)
5) the connection thread obtains the context and starts request
   processing as usual in the preauth state (preauth, postauth, trace,
   ... unless control is passed back to the spool threads)

So the request lifecycle (state flow) for a file upload could be:

  accept(DT) -> request-head(CT1) -> spool(ST) -> preauth(CT2) -> postauth(CT2) -> procs(CT2) -> trace(CT2)

I would think that with the two commands ns_enqueue and ns_spool, which
switch the control flow between connection threads and spooling threads
and copy connection state information (from ns_conn) and client_data, one
would have a very flexible framework (see the sketch after this message).

-gustaf

PS: this model would still be compatible with AOLserver, but would
require that e.g. authentication code can be called from the request-head
filter callback as well as from the preauth callback (maybe avoided by
data in the client_data). Moving preauth and postauth to CT1 would be
logically cleaner, but might be incompatible with AOLserver, since these
callbacks might already want to access the posted data.

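A rough C sketch of the explicit request states proposed above. Note that
ns_enqueue and ns_spool are only *proposed* commands, so every name below
is hypothetical and serves only to illustrate the state hand-off idea:

    /* Hypothetical only: none of these types or functions exist in
     * NaviServer; they model the DT/CT/ST state flow from the message. */
    #include <stddef.h>

    typedef enum {
        STATE_ACCEPT,    /* driver thread (DT) */
        STATE_PREAUTH,   /* conn thread (CT) */
        STATE_POSTAUTH,  /* conn thread (CT) */
        STATE_PROCS,     /* conn thread (CT) */
        STATE_SPOOL,     /* spool thread (ST) */
        STATE_TRACE      /* conn thread, possibly a different one (CT2) */
    } ReqState;

    typedef struct {
        ReqState  state;        /* where processing continues */
        int       sock;         /* the client socket from ns_conn */
        void     *clientData;   /* user_id, url, header info, ... */
    } ReqContext;

    /* ns_enqueue: put the context on the conn-thread pool's queue so a
     * conn thread resumes the request at the given state. */
    extern void ReqEnqueue(ReqContext *ctx, ReqState next);

    /* ns_spool: hand the context to a spool thread; on completion the
     * spool thread calls ReqEnqueue(ctx, STATE_TRACE), or STATE_PREAUTH
     * in the upload case. */
    extern void ReqSpool(ReqContext *ctx, const char *file, size_t length);
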
From: Vlad S. <vl...@cr...> - 2006-01-03 21:37:43

I submitted a Service Request with a new patch with spooler support. Very
simple but seems to be working well.

Zoran Vasiljevic wrote:
> This is all true, and perhaps the easiest way to get to the result in
> the short term. I appreciate very much the effort you're doing and we
> will definitely use it, as soon as you get something working stably
> enough. [...]

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/

From: Zoran V. <zv...@ar...> - 2006-01-03 18:57:19

On 03.01.2006, at 16:47, Vlad Seryakov wrote:

> Spool thread can be a replica of the driver thread: it will do the
> same as the driver and then pass the Sock to the conn thread. So
> making it C-based is not that hard, and still it will be fast and
> small.
>
> Plus, upload statistics will now be handled in the spool thread, not
> the driver thread, so no overhead and slowdown.
>
> So we read up to maxreadahead, then queue it into the spooling thread,
> and the spooling thread will give it to the conn thread. The spooling
> thread will use the same Sock/Driver structures; actually the Sock
> will not see any difference between running in the driver thread or
> the spooling thread.
>
> I can try to provide a test of using a spooling thread along with the
> driver thread today if I have time.

This is all true, and perhaps the easiest way to get to the result in the
short term. I appreciate very much the effort you're doing and we will
definitely use it, as soon as you get something working stably enough.

Still, our precious (little) server would definitely benefit from some
additional event-loop type of processing where you need not spawn a
thread for just about anything. Tcl has come a long way with their simple
event loop and single-thread approach.

Ultimately I can imagine something like:

           ---------
               |
         driver_thread
          /          \
    conn_threads   spool_threads
         |               |
    blocking_io    nonblocking-io

where one would do blocking IO (db stuff etc.) whereas the other can be
used for simple non-blocking ops like copy_file_to_socket or vice-versa
types of operations.

Zoran

From: Zoran V. <zv...@ar...> - 2006-01-03 18:13:49

On 03.01.2006, at 18:50, Stephen Deasey wrote:

> What happens to the conn thread after this? It can't wait for
> completion, that would defeat the purpose.
>
> Do traces (logging etc.) run now, in the conn thread, or later in a
> spool thread? If logging runs now, but the upload fails, the log will
> be wrong. If traces run in the spool threads they may block.
>
> If further processing must be done after upload, does the spool thread
> pass control back to a conn thread? Does this look like a new
> request? The state of any running Tcl script will be lost at this
> point (the conn thread will have cleaned up after hand-off to a spool
> thread, right?).

All very good questions to which I can't give any answer at this point.
Or to put it simpler: I have no damn idea! I wanted to get myself a kind
of bird's-eye view on the matter in order to better understand what we
are after and how we can solve it.

What I however DO know is: we (as Gustaf) have a kind of extra processing
built into our app which is quite similar to the spool thread we are
contemplating here. We use [ns_conn channel] to splice the socket out of
the conn structure, then wrap it into a Tcl channel and transfer this
channel to a detached thread sitting in an event loop. From the client
side we fake the content length (the client is not a browser), so the
driver thread does not come in between with its greedy reading. Then we
simply finish the processing in the conn thread, but the socket still
lives and is getting serviced in the "spool" thread. The "spool" thread
operates in event-loop mode, does the rest of the processing, and
eventually closes the socket to the client when done.

Now, this is *way* dirty, but it allows us to use the conn thread only
for the shortest possible time and yet do long-running processing in the
"spool" thread in event-loop mode. I believe this is precisely what
Gustaf is also doing in OACS. Again, I DO know that this is dirty, but
when you have only a hammer, everything looks like a nail, doesn't it?

That was my initial idea: add event-loop type of processing into the
current naviserver driver-thread/conn-thread paradigm. After thinking a
while: the conn thread need not be involved at all in the long run. We
can feed the conn to conn or spool threads. The conn thread does all
blocking whereas the spool thread does all non-blocking? The conn thread
might be used for blocking (db) access whereas the spool thread might be
used to serve images, static files, uploads and all other things
requiring IO which CAN be done non-blocking.

Now, what I wanted to give here is an *idea*. Deeper bits and pieces,
i.e. how all this interacts, I haven't considered yet, as it is too early
for that. The question is: is the idea itself worth considering or not?

Zoran

From: Stephen D. <sd...@gm...> - 2006-01-03 17:50:39

On 1/3/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> [...]
>
> So, there it comes: we'd have to invent a third instance of the
> processing!
>
> 1. driver thread
> 2. connection thread
> 3. spooling thread
>
> The driver thread gets the connection and reads all up to maxreadahead.
> It then passes the connection to the conn thread.
> The conn thread decides to run that connection entirely (POST of a
> simple form or GET, all content read in) OR it decides to pass the
> connection to the spooling thread(s).

What happens to the conn thread after this? It can't wait for
completion, that would defeat the purpose.

Do traces (logging etc.) run now, in the conn thread, or later in a
spool thread? If logging runs now, but the upload fails, the log will
be wrong. If traces run in the spool threads they may block.

If further processing must be done after upload, does the spool thread
pass control back to a conn thread? Does this look like a new
request? The state of any running Tcl script will be lost at this
point (the conn thread will have cleaned up after hand-off to a spool
thread, right?).

> The spooling threads operate in event-loop mode. There has to be some
> kind of dispatcher which evenly distributes the processing among
> spooling threads. [...]
>
> So, what do you think?
>
> Zoran

From: Vlad S. <vl...@cr...> - 2006-01-03 15:46:20

Spool thread can be a replica of the driver thread: it will do the same
as the driver and then pass the Sock to the conn thread. So making it
C-based is not that hard, and still it will be fast and small.

Plus, upload statistics will now be handled in the spool thread, not the
driver thread, so no overhead and slowdown.

So we read up to maxreadahead, then queue it into the spooling thread,
and the spooling thread will give it to the conn thread. The spooling
thread will use the same Sock/Driver structures; actually the Sock will
not see any difference between running in the driver thread or the
spooling thread.

I can try to provide a test of using a spooling thread along with the
driver thread today if I have time.

Zoran Vasiljevic wrote:
> Tcl does this properly by implementing non-blocking IO (O_NONBLOCK or
> O_NDELAY) and adding their own event-loop processing salt. This works
> for both files and sockets. [...]

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/

From: Zoran V. <zv...@ar...> - 2006-01-03 10:35:07

On 03.01.2006, at 11:21, Andrew Piskorski wrote:

> Hm, does Tcl support asynchronous (non-blocking) IO for both network
> sockets and local files? Tcl has 'fconfigure -blocking 0' of course,
> but I don't know for sure whether that really does what you want for
> this application. If Tcl DOES support it adequately, then all the
> previous questions about how to do cross-platform asynchronous IO from
> C vanish, which is a nice bonus.

Tcl does this properly by implementing non-blocking IO (O_NONBLOCK or
O_NDELAY) and adding their own event-loop processing salt. This works for
both files and sockets.

The trouble is: you need a fully loaded Tcl for that. But we have it
anyway. If we restrict that to a limited number of spooling threads, the
overhead might not be large. Normally I'd start with one spool thread per
CPU, but the infrastructure must be made so that it allows several
spooling threads to gain from the multiple CPUs in modern boxes.

Zoran

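For reference, a minimal C sketch of the non-blocking mechanics that
Tcl-level code builds on (nothing here is NaviServer or Tcl source). One
caveat worth keeping in mind: on most Unixes, O_NONBLOCK has no effect on
regular files, which is exactly why AIO keeps coming up in this thread:

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Put a descriptor into non-blocking mode. Effective for sockets and
     * pipes; a no-op for regular files on most Unixes. */
    int
    MakeNonBlocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);

        return (flags < 0) ? -1 : fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Write what the peer will take right now; the caller re-arms the
     * event loop for writability and calls again with the remainder. */
    ssize_t
    WriteSome(int fd, const char *buf, size_t len)
    {
        ssize_t n = write(fd, buf, len);

        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            return 0;   /* not an error: nothing accepted right now */
        }
        return n;
    }
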
From: Zoran V. <zv...@ar...> - 2006-01-03 10:23:16

On 03.01.2006, at 10:49, Bernd Eidenschink wrote:

> Of course, AJAX is still evolving and so are browser features; it's
> good to have stats and it is often requested, but for now there are
> use cases where the client approach is still better.

All very true... The ultimate solution is: both. Better support on the
server as well as on the client side. We have no control over the client
side. Hence we can strive to get the server side done properly.

Zoran

From: Andrew P. <at...@pi...> - 2006-01-03 10:21:59

On Tue, Jan 03, 2006 at 09:49:02AM +0100, Zoran Vasiljevic wrote:

> The spooling threads operate in event-loop mode. There has to be
> some kind of dispatcher which evenly distributes the processing among
> spooling threads.

Or just start out with one spool thread; support for multiple spool
threads can be added later, as it's not really necessary, just a possible
performance tweak for large SMP machines. Then again, all the thread pool
infrastructure is probably already there, so using it from the get-go
might be simple.

> Once in the spooling thread, the connection is processed entirely
> asynchronously as in the Tcl event loop. In fact, this whole thing can
> be implemented in Tcl with the building blocks we already have:
> channel transfer between threads, thread management (create,
> teardown). Even the dispatcher can be done in Tcl alone.

I particularly like the all non-blocking, all event-driven, all-Tcl
design of your spool threads. You can always add bits of C code later if
that turns out to be more efficient, but being able to do the whole thing
in Tcl is very nice.

Hm, does Tcl support asynchronous (non-blocking) IO for both network
sockets and local files? Tcl has 'fconfigure -blocking 0' of course, but
I don't know for sure whether that really does what you want for this
application. If Tcl DOES support it adequately, then all the previous
questions about how to do cross-platform asynchronous IO from C vanish,
which is a nice bonus.

> I hope I did not write tons of nonsense, and thank you for being
> patient and reading up to here :-)

On the contrary, that was by far the clearest design explanation I've yet
seen in this discussion.

--
Andrew Piskorski <at...@pi...>
http://www.piskorski.com/

From: Bernd E. <eid...@we...> - 2006-01-03 09:47:32

> o. add upload statistics and control?

Reading the current very interesting thread, I would vote for adding it,
if the stats are some kind of by-product of more stability against DOS
attacks, multiple slow clients, reduced memory needs etc., as you all
pointed out.

Because what is still not solved on the client side, the browser, is a
good GUI and support for uploading multiple files and showing their
progress. AJAX helps a bit; there are nice tricks to display only one
HTML file element
(http://the-stickman.com/web-development/javascript/upload-multiple-files-with-a-single-file-element/)
or progress bars
(http://blog.joshuaeichorn.com/archives/2005/05/01/ajax-file-upload-progress/).

But I don't have a file selection dialog box where I can select multiple
files at once (say 50 or more for uploading images to a gallery) or even
complete directories; where files on the client side that exceed a
certain limit are automatically sorted out; where the files are uploaded
one by one and not as one giant form post, re-sent in case of a broken
connection, or even uploaded in parallel. This is where all the Java and
Flash upload applets come in and play out their advantages.

Of course, AJAX is still evolving and so are browser features; it's good
to have stats and it is often requested, but for now there are use cases
where the client approach is still better.

cu
BE

From: Zoran V. <zv...@ar...> - 2006-01-03 08:46:50

On 03.01.2006, at 01:07, Vlad Seryakov wrote:

> Would it be a more generic way just to set an upper limit: if we see
> that an upload exceeds that limit, pass control to the conn thread and
> let it finish reading. This way even spooling to a file will work,
> because each upload conn thread will use its own file and will not
> hold the driver thread.

But we already have this limit. It is: maxreadahead.

If we'd want to kill many flies with one stroke, I believe we will have
to step out of the box!

By doing everything in the driver thread (provided we get the AIO to
work, which I'm sure can be done cross-platform) we are getting only one
of the things solved: upload of large content. We are not solving the
upload progress nor any other security, quota, or similar requirements.
In order to do this, we'd need hooks into the driver thread processing.
This will inevitably lead to adding Tcl script processing into the driver
thread plus some kind of locking which could possibly cripple the
driver's performance. I would not like to do anything in the driver
thread which is Tcl-related. And, if any IO is being done there, it
should not be blocking.

One of the solutions: move everything which exceeds "maxreadahead" into
the connection thread. This way you have all the bits and pieces and you
can define whatever checking/statistics policy you need. The code doing
the multipart disassembly can have hooks to inspect progress, and you can
stop/discard the upload based on some constraints (file size, whatever),
so you have total control. The downside: you engage the conn thread and
risk getting DOS attacks, as Gustaf already mentioned in one of his last
responses to this thread.

I think the naviserver model of using the driver thread for getting
connections accepted and their data read in advance before passing
control to the conn thread is good for most dynamic-content systems, but
not for doing (large) file uploads and/or serving (large) static content
(images, etc).

I believe that event-loop-type (or AIO) processing is more suitable for
that. After all, you can have hundreds of slow peers pumping or pulling
files to/from you at snail speed. In such a case it is a complete waste
to allocate that many connection threads, as it will blow the system
memory. A dedicated pool of threads, each running in event-loop mode,
would be far more appropriate. Actually, just one thread in an event loop
would do the work perfectly on a single-CPU box. On a many-CPU box a pool
of threads (one per CPU) would be more appropriate.

So, there it comes: we'd have to invent a third instance of the
processing!

1. driver thread
2. connection thread
3. spooling thread

The driver thread gets the connection and reads all up to maxreadahead.
It then passes the connection to the conn thread. The conn thread decides
to run that connection entirely (POST of a simple form or GET, all
content read in) OR it decides to pass the connection to the spooling
thread(s).

The spooling threads operate in event-loop mode. There has to be some
kind of dispatcher which evenly distributes the processing among spooling
threads. Once in the spooling thread, the connection is processed
entirely asynchronously as in the Tcl event loop. In fact, this whole
thing can be implemented in Tcl with the building blocks we already have:
channel transfer between threads, thread management (create, teardown).
Even the dispatcher can be done in Tcl alone. After processing in the
spooling thread, the entire connection can then be bounced back to the
connection thread OR finished in the spooling thread.

I hope I did not write tons of nonsense, and thank you for being patient
and reading up to here :-)

So, what do you think?

Zoran

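A sketch of what one event-loop spooling thread could look like in C: one
thread, many slow clients, writable sockets multiplexed with poll(). All
names and the fixed-size bookkeeping are illustrative only, assuming each
connection carries an open file descriptor and a non-blocking socket:

    #include <poll.h>
    #include <unistd.h>

    #define MAX_CONNS 256
    #define BUFSZ     8192

    typedef struct {
        int sock;      /* non-blocking client socket */
        int file;      /* file being sent to that client */
    } SpoolConn;

    /* Serve every connection from one thread: wait until a socket can
     * take data, then push the next chunk of its file. Caller guarantees
     * nconns <= MAX_CONNS. */
    void
    SpoolLoop(SpoolConn *conns, int nconns)
    {
        struct pollfd fds[MAX_CONNS];
        char buf[BUFSZ];
        int i;

        for (;;) {
            for (i = 0; i < nconns; i++) {
                fds[i].fd = conns[i].sock;
                fds[i].events = POLLOUT;
            }
            if (poll(fds, nconns, -1) < 0) {
                break;
            }
            for (i = 0; i < nconns; i++) {
                if (fds[i].revents & POLLOUT) {
                    ssize_t n = read(conns[i].file, buf, sizeof(buf));

                    if (n > 0) {
                        /* may be a partial write; real code would track
                         * the file offset and unsent remainder */
                        write(conns[i].sock, buf, (size_t) n);
                    }
                    /* n == 0: file done; close and drop the conn (omitted) */
                }
            }
        }
    }
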
From: Gustaf N. <ne...@wu...> - 2006-01-03 01:57:38

Stephen Deasey wrote:
> Your single thread helper mechanism above may not work the disk as
> hard as multiple threads, but it does mean that the driver thread
> doesn't have to block waiting for disk.

It does not necessarily have to write to the disk.

> You've also moved quota checking etc. into the helper thread. This
> doesn't so much matter if there's just one, but with more than one you
> have to remember that a Tcl interp will be allocated for each, and
> they'll be just as heavyweight as normal conn threads.
>
> We have a similar problem with normal downloads, i.e. writing data to
> the client; an mp3 file, for example, ties up a heavyweight conn
> thread. A generic pool of IO threads might be useful in a number of
> places. Of course, this is just a simulation of AIO to disk, but see
> my other email for why AIO is not so easy.

We are using background delivery of large files on our production site
(up to 4 million requests per day, up to 30 GB/day, up to 1200 concurrent
users, OACS) which does not block the connection threads. I did this with
some help from Zoran and it works like a champ (single spooling thread,
using ns_conn channel + libthread with thread::transfer and thread::send;
code below). This bgdelivery implementation has already reduced the
memory footprint of our server, frees us from slow-download-DOS-"attacks"
and yielded no measurable performance degradation.

> AOLserver HEAD has a new filter type, NS_FILTER_PREQUEUE, which gets
> called just before the connection is queued for processing by a conn
> thread. The unique thing about this filter type is that the
> registered code runs in the context of the driver thread.
> Unfortunately that's after all the data has been read and spooled, but
> we could implement it such that they ran just after all headers have
> been received.

Interesting.

> If you want to check quotas by querying a database, then this doesn't
> help -- your db query will block the driver thread. These filters
> will have to be coded carefully. We could also implement non-blocking
> db calls, and that's certainly something I'm interested in, but it
> doesn't help right now.

True.

> However, if the code you want to run is short and non-blocking,
> prequeue filters would be a good way to run quota checks and such
> without having to involve a conn thread at all.

Also the signed-cookie checking of OACS might already be too expensive
for a single thread (no matter whether it happens in the driver or in the
model suggested in my last mail). The more I think about the problem: the
general case (expensive permission + quota checking) of uploads should be
handled by connection threads. In order to avoid having these sitting
around more or less idle, one can use a spooling thread (or a few of
them).

From what I have seen, all AJAX file upload bars associate an Upload-ID
with the file upload and later query the status via GET requests. This
means that ordinary requests must be able to query the status... This is
quite similar to my request monitor part in oacs-head (you can see this
in action in Mannheim, where it is publicly accessible:
https://dotlrn.uni-mannheim.de/request-monitor/)

A revised model:

1) request HEAD processing always in the driver thread
2) for incoming data processing, delegation to a connection thread with
   a new filter type (can do db queries etc.)
3) delegation to a spooling thread (where it can be monitored)
4) delegation to a connection thread a la AOLserver

When end-of-file/error occurs in the spooling thread, it would require a
Tcl command to enqueue the request for a connection thread. Actually,
this could be done only when there is a POST/PUT request with an
Upload-ID as a query parameter, such that the programmer has full control
over when this happens; there would be no penalty for small forms...

-gustaf

=====================================================
ad_library {
    Routines for background delivery of files

    @author Gustaf Neumann (ne...@wu...)
    @creation-date 19 Nov 2005
    @cvs-id $Id: background-delivery-procs.tcl,v 1.2 2005/12/30 00:07:23 gustafn Exp $
}

::xotcl::THREAD create bgdelivery {
    set ::delivery_count 0

    proc deliver {ch filename context} {
        set fd [open $filename]
        fconfigure $fd -translation binary
        fconfigure $ch -translation binary
        fcopy $fd $ch -command [list end-delivery $filename $fd $ch]
        set ::running($ch,$filename) $context
        incr ::delivery_count
    }

    proc end-delivery {filename fd ch bytes args} {
        #ns_log notice "--- end of delivery of $filename, $bytes bytes written $args"
        if {[catch {close $ch} e]} {ns_log notice "bgdelivery, closing channel for $filename, error: $e"}
        if {[catch {close $fd} e]} {ns_log notice "bgdelivery, closing file $filename, error: $e"}
        unset ::running($ch,$filename)
    }
} -persistent 1

bgdelivery ad_forward running {
    Interface to the background delivery thread to query the currently
    running deliveries.
    @return list of key value pairs of all currently running background processes
} %self do array get running

bgdelivery ad_forward nr_running {
    Interface to the background delivery thread to query the number of
    currently running deliveries.
    @return number of currently running background deliveries
} %self do array size running

if {[ns_info name] eq "NaviServer"} {
    bgdelivery forward write_headers ns_headers
} else {
    bgdelivery forward write_headers ns_headers DUMMY
}

bgdelivery ad_proc returnfile {statuscode mime_type filename} {
    Deliver the given file to the requestor in the background. This proc
    uses the background delivery thread to send the file in an
    event-driven manner without blocking a request thread. This is
    especially important when large files are requested over slow
    (e.g. dial-up) connections.
} {
    set size [file size $filename]
    if {[my write_headers $statuscode $mime_type $size]} {
        set ch [ns_conn channel]
        thread::transfer [my get_tid] $ch
        throttle get_context
        my do -async deliver $ch $filename \
            [list [throttle set requestor],[throttle set url] [ns_conn start]]
        ns_conn contentsentlength $size ;# maybe overly optimistic
    }
}

From: Vlad S. <vl...@cr...> - 2006-01-03 00:07:47

Check my last upload patch; it may be useful until a more generic
approach is developed.

Zoran Vasiljevic wrote:
> I believe it is time we make ourselves pretty clear about what we are
> about to do:
>
> o. add upload statistics and control?
> o. improve upload altogether
> o. redesign upload altogether
>
> At the moment, the most visible/needed improvement could/would be some
> sort of upload statistics, as we do miss this functionality. [...]

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/

From: Vlad S. <vl...@cr...> - 2006-01-03 00:07:02

As I see it, by default nsd does not use mmap, only for uploads. Yes, it
might need review again: mmap for big movie/iso files is not appropriate,
and I am using nsd for those kinds of files most of the time.

Stephen Deasey wrote:
> We might have to drop mmap support anyway. We're trying to accommodate
> large files, so we may run out of address space.
>
> This summer I was running thttpd to serve some iso images. I started
> getting busy errors when very little bandwidth was being used, and
> with only 3 concurrent users. It turns out that a couple of extra
> images were published, and the users were all downloading different
> ones. As thttpd uses mmap to read the files, the process ran out of
> address space. It doesn't take much...
>
> We still have to support mmap for reading in the case that a module
> calls Ns_ConnContent(), but otherwise we can use read/write etc.

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/

From: Vlad S. <vl...@cr...> - 2006-01-03 00:04:54

It was kind of random: sometimes SIGBUS, the next time SIGSEGV. It needs
another round of testing. The goal was just a simple replacement to see
if aio_write might work; it looks like it does not. I checked Samba and
Squid: they use some kind of high-level wrappers around AIO, most of the
time their own re-implementation using threads.

Would it be a more generic way just to set an upper limit: if we see that
an upload exceeds that limit, pass control to the conn thread and let it
finish reading. This way even spooling to a file will work, because each
upload conn thread will use its own file and will not hold the driver
thread. Every conn thread calls NsGetRequest, which uses SockRead, so
somewhere in driver.c we can check for the max upload limit and mark the
Sock for conn queueing, so NsGetRequest will finish the upload. Yes, too
many connections will take all the threads, but that's how it works
anyway; nsd can be configured with maxconn to match any particular
system/load.

Zoran Vasiljevic wrote:
> Hmmm... that means that before writing the next chunk, you should
> check the status of the last and skip if still not written?
>
> OTOH, how come you get SIGBUS? And where? Normally this might happen
> only if you mmap the file as some structure, not a char array? [...]

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/

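A minimal C sketch of the missing piece discussed above: aio_error() and
aio_return() report on one aiocb at a time, so the batch has to be tracked
explicitly, one aiocb per chunk, and drained before the spool file may be
mmap()ed (mapping pages that were never written is one way to earn a
SIGBUS). The fixed-size table and function names are illustrative only:

    #include <aio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>

    #define MAX_PENDING 64

    static struct aiocb cbs[MAX_PENDING];
    static int          npending;

    /* Issue one chunk at the given file offset. The buffer must stay
     * valid and untouched until the request completes. */
    int
    AioWriteChunk(int fd, const char *buf, size_t len, off_t off)
    {
        struct aiocb *cb = &cbs[npending++];

        memset(cb, 0, sizeof(*cb));
        cb->aio_fildes = fd;
        cb->aio_buf    = (void *) buf;
        cb->aio_nbytes = len;
        cb->aio_offset = off;
        return aio_write(cb);
    }

    /* Block until every outstanding write has completed; only then is
     * it safe to mmap() the spool file. Returns -1 if any write failed. */
    int
    AioDrainAll(void)
    {
        int i, rc = 0;

        for (i = 0; i < npending; i++) {
            const struct aiocb *list[1] = { &cbs[i] };

            while (aio_error(&cbs[i]) == EINPROGRESS) {
                aio_suspend(list, 1, NULL);
            }
            if (aio_return(&cbs[i]) < 0) {
                rc = -1;
            }
        }
        npending = 0;
        return rc;
    }
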
From: Zoran V. <zv...@ar...> - 2006-01-02 20:37:19

On 02.01.2006, at 21:13, Stephen Deasey wrote:

> If the problem is that threads are too "heavy", then it's pointless to
> use aio_write() etc. if the underlying implementation also uses
> threads. It will be worse than using conn threads alone, as there
> will be extra context switches as control bounces from driver thread,
> to IO thread, back to driver thread, and finally to conn thread.

Well, I really do not know which OS uses what implementation. We now know
that Linux uses userland threads. I do not know about Darwin, but I can
peek into their sources. Same for Solaris.

> Here's another way to look at it: We already dedicate a conn thread
> for every download. What's so special about uploads that we can't
> also dedicate a thread to them? The threads are long lived, you can
> prevent them from running Tcl etc...

This is true. What I had in mind was to make it as simple as possible by
reusing the already present driver thread, without making it more complex
than needed by adding even more threads. We already have lots of them
anyway.

> Sure, the ideal would be AIO for both uploads and downloads. But the
> io_* API is busted, at least on one major platform, possibly more.
> Have you looked at Darwin, how do they implement it? What are you
> going to do on Windows?

I do not know. On Windows most things are already async
(WaitForMultipleObjects), so I believe we can work around it by adding
our own wrapper for AIO as we have done for mmap.

But this is all speculation. I never went farther than reading man pages
and contemplating about it. Vlad went considerably farther and has tried
a couple of things.

I believe it is time we make ourselves pretty clear about what we are
about to do:

o. add upload statistics and control?
o. improve upload altogether
o. redesign upload altogether

At the moment, the most visible/needed improvement could/would be some
sort of upload statistics, as we do miss this functionality.

Zoran

From: Stephen D. <sd...@gm...> - 2006-01-02 20:24:37

On 1/2/06, Vlad Seryakov <vl...@cr...> wrote:
> I did a very simple test: replaced write with aio_write and at the end
> checked aio_error/aio_return; they all returned 0, so mmap should work
> because the file is synced. When I was doing aio_write I used
> aio_offset, so each aio_write would put data into a separate region of
> the file.
>
> Unfortunately I removed the modified driver.c by accident, so I will
> have to do it again, but something is not right in simply replacing
> write with aio_write and mmap. Without mmap, GetForm will have to read
> the file manually and parse it, which makes it more complicated, and
> still, if writes are not complete we may get SIGBUS again.
>
> The problem I see here: I cannot be sure that all writes are
> completed; aio_error/aio_return are used to check only the one last
> operation, not the batch of writes.

We might have to drop mmap support anyway. We're trying to accommodate
large files, so we may run out of address space.

This summer I was running thttpd to serve some iso images. I started
getting busy errors when very little bandwidth was being used, and with
only 3 concurrent users. It turns out that a couple of extra images were
published, and the users were all downloading different ones. As thttpd
uses mmap to read the files, the process ran out of address space. It
doesn't take much...

We still have to support mmap for reading in the case that a module calls
Ns_ConnContent(), but otherwise we can use read/write etc.

From: Stephen D. <sd...@gm...> - 2006-01-02 20:13:47

On 1/2/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 02.01.2006, at 17:23, Stephen Deasey wrote:
>
>>> So, what is the problem?
>>
>> http://www.gnu.org/software/libc/manual/html_node/Configuration-of-AIO.html
>
> Still, where is the problem? The fact that Linux implements them as
> userlevel threads does not mean much to me.

If the problem is that threads are too "heavy", then it's pointless to
use aio_write() etc. if the underlying implementation also uses threads.
It will be worse than using conn threads alone, as there will be extra
context switches as control bounces from driver thread, to IO thread,
back to driver thread, and finally to conn thread.

If what you're saying is that Solaris needs it and Linux doesn't matter,
well, I don't know about that.

Here's another way to look at it: We already dedicate a conn thread for
every download. What's so special about uploads that we can't also
dedicate a thread to them? The threads are long lived, you can prevent
them from running Tcl etc...

Sure, the ideal would be AIO for both uploads and downloads. But the io_*
API is busted, at least on one major platform, possibly more. Have you
looked at Darwin, how do they implement it? What are you going to do on
Windows?

From: Zoran V. <zv...@ar...> - 2006-01-02 18:44:30

On 02.01.2006, at 19:13, Vlad Seryakov wrote:

> I did a very simple test: replaced write with aio_write and at the end
> checked aio_error/aio_return; they all returned 0, so mmap should work
> because the file is synced. When I was doing aio_write I used
> aio_offset, so each aio_write would put data into a separate region of
> the file.
>
> Unfortunately I removed the modified driver.c by accident, so I will
> have to do it again, but something is not right in simply replacing
> write with aio_write and mmap. Without mmap, GetForm will have to read
> the file manually and parse it, which makes it more complicated, and
> still, if writes are not complete we may get SIGBUS again.
>
> The problem I see here: I cannot be sure that all writes are
> completed; aio_error/aio_return are used to check only the one last
> operation, not the batch of writes.

Hmmm... that means that before writing the next chunk, you should check
the status of the last and skip if still not written?

OTOH, how come you get SIGBUS? And where? Normally this might happen only
if you mmap the file as some structure, not a char array?

Zoran

From: Vlad S. <vl...@cr...> - 2006-01-02 18:10:51

I did a very simple test: replaced write with aio_write and at the end
checked aio_error/aio_return; they all returned 0, so mmap should work
because the file is synced. When I was doing aio_write I used aio_offset,
so each aio_write would put data into a separate region of the file.

Unfortunately I removed the modified driver.c by accident, so I will have
to do it again, but something is not right in simply replacing write with
aio_write and mmap. Without mmap, GetForm will have to read the file
manually and parse it, which makes it more complicated, and still, if
writes are not complete we may get SIGBUS again.

The problem I see here: I cannot be sure that all writes are completed;
aio_error/aio_return are used to check only the one last operation, not
the batch of writes.

Zoran Vasiljevic wrote:
> On 02.01.2006, at 12:06, Zoran Vasiljevic wrote:
>> On 02.01.2006, at 04:43, Vlad Seryakov wrote:
>>> Tried it; nsd started crashing, and I am guessing that the problem
>>> is the combination of aio_write and mmap. When I start spooling, I
>>> just submit aio_write and return immediately, so there are a lot of
>>> quick aio_write calls. By the time I reach mmap, it looks like it's
>>> always ahead of the actual writing, so even when I try to check
>>> aio_error/aio_fsync in NsGetRequest, I still get SIGBUS/SIGSEGV when
>>> I access reqPtr->content; it looks like mmap and aio_write are out
>>> of sync. And there is no way to wait until all file buffers are
>>> flushed, so a manual separate-thread implementation could be the
>>> only portable solution. Or I just do not understand the aio_xxx
>>> things completely.
>>
>> Hm... the idea is to mmap the file when all is read from the client.
>> So you'd have to aio_write into the file until all is received, then
>> at some point check the aio status of the write (non-blocking) and
>> when it's done, revert to mmap the file. I do not believe you can mix
>> aio_ with mmap.
>
> What I still do not know is: if you are allowed to issue multiple
> outstanding aio_writes one after another... It might be that you need
> to check whether the first aio_write is done before you attempt the
> next one.

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/

From: Zoran V. <zv...@ar...> - 2006-01-02 16:35:56

On 02.01.2006, at 17:23, Stephen Deasey wrote:

>> So, what is the problem?
>
> http://www.gnu.org/software/libc/manual/html_node/Configuration-of-AIO.html

Still, where is the problem? The fact that Linux implements them as
userlevel threads does not mean much to me.

On Solaris, each thread you start results in an LWP in the kernel. At
least this is so if you use the naviserver thread abstraction. This is
not cheap, hence I'm not very fond of starting threads with or without
the Tcl interp, as all threads are actually expensive (the former are
more expensive though).

On Linux, especially with NPTL, threads are rather cheap, so there you
can easily add yet another thread and handle the spooling with a
dedicated thread or pool of threads.

But... what is the MAIN reason against AIO? Why would that not be a
platform-neutral, simple way of doing non-blocking writing from within
our driver thread? Provided we have a simple and workable abstract
solution (the aio_xxx family of routines) which is thread-safe, why would
we bother about the per-platform implementation of it?

???

Zoran

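A sketch of the kind of thin, platform-neutral wrapper being argued for
here, assuming only the portable aio_* subset. The Ns_Aio* names are
hypothetical (they do not exist in NaviServer); the point is that a
Windows build could implement the same two-function interface with
overlapped I/O instead:

    #include <aio.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>

    typedef struct aiocb Ns_AioOp;

    /* Start one asynchronous write at the given offset; the buffer must
     * stay valid until Ns_AioWait() reports completion. */
    int
    Ns_AioWrite(Ns_AioOp *op, int fd, const void *buf, size_t len, off_t off)
    {
        memset(op, 0, sizeof(*op));
        op->aio_fildes = fd;
        op->aio_buf    = (void *) buf;
        op->aio_nbytes = len;
        op->aio_offset = off;
        return aio_write(op);
    }

    /* Wait (with optional timeout) until the operation completes.
     * Returns the byte count, or -1 on failure/timeout. */
    int
    Ns_AioWait(Ns_AioOp *op, const struct timespec *timeout)
    {
        const Ns_AioOp *list[1];

        list[0] = op;
        while (aio_error(op) == EINPROGRESS) {
            if (aio_suspend(list, 1, timeout) != 0 && errno == EAGAIN) {
                return -1;      /* timed out; caller may aio_cancel() */
            }
        }
        return (int) aio_return(op);
    }
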
From: Stephen D. <sd...@gm...> - 2006-01-02 16:23:31

On 1/2/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 02.01.2006, at 08:36, Stephen Deasey wrote:
>
>> On 12/31/05, Zoran Vasiljevic <zv...@ar...> wrote:
>>>
>>> On 31.12.2005, at 20:12, Vlad Seryakov wrote:
>>>
>>>> The aio_read/aio_write system calls look like they are supported
>>>> under Linux
>>>
>>> yes. most modern OSs support some kind of kaio.
>>> I have checked Solaris, Linux and Darwin and all do.
>>
>> Hmm, not sure about that...
>>
>> Linux aio_read() is implemented within glibc via threads. The 2.6
>> Linux epoll() is great for socket file descriptors, but doesn't work
>> with files. Same for realtime signals in 2.4. The kernel stuff,
>> io_submit() etc., only works on files opened with O_DIRECT, i.e. no
>> file system buffering. io_*() via libaio, and epoll(), are not
>> portable.
>>
>> The BSDs (FreeBSD first, others later) have kqueue, and so does
>> Mac OS from 10.3 on, but again this is not portable. This didn't work
>> with threads in the past. Don't know if/when that got fixed...
>>
>> Solaris 10 has some brand new port_*() calls. Again, not portable.
>> Not sure how aio_read() etc. are implemented on older versions of
>> Solaris.
>>
>> I think Windows has pretty good support for AIO, including to files.
>> Obviously, not portable.
>
> Look:
>
> On Darwin (10.4+):
>
>   aio_cancel(2)   - cancel an outstanding asynchronous I/O operation (REALTIME)
>   aio_error(2)    - retrieve error status of asynchronous I/O operation (REALTIME)
>   aio_read(2)     - asynchronous read from a file (REALTIME)
>   aio_return(2)   - retrieve return status of asynchronous I/O operation (REALTIME)
>   aio_suspend(2)  - suspend until asynchronous I/O operations or timeout complete (REALTIME)
>   aio_write(2)    - asynchronous write to a file (REALTIME)
>
> On Linux:
>
>   aio.h (0p)       - asynchronous input and output (REALTIME)
>   aio_cancel (3)   - cancel an outstanding asynchronous I/O request
>   aio_cancel (3p)  - cancel an asynchronous I/O request (REALTIME)
>   aio_error (3)    - get error status of asynchronous I/O operation
>   aio_error (3p)   - retrieve errors status for an asynchronous I/O operation (REALTIME)
>   aio_fsync (3)    - asynchronous file synchronization
>   aio_fsync (3p)   - asynchronous file synchronization (REALTIME)
>   aio_read (3)     - asynchronous read
>   aio_read (3p)    - asynchronous read from a file (REALTIME)
>   aio_return (3)   - get return status of asynchronous I/O operation
>   aio_return (3p)  - retrieve return status of an asynchronous I/O operation (REALTIME)
>   aio_suspend (3)  - wait for asynchronous I/O operation or timeout
>   aio_suspend (3p) - wait for an asynchronous I/O request (REALTIME)
>   aio_write (3)    - asynchronous write
>   aio_write (3p)   - asynchronous write to a file (REALTIME)
>
> On Solaris (2.8+):
>
>   # apropos aio
>   aio          aio (3head)        - asynchronous input and output
>   aio_cancel   aio_cancel (3rt)   - cancel asynchronous I/O request
>   aio_error    aio_error (3rt)    - retrieve errors status for an asynchronous I/O operation
>   aio_fsync    aio_fsync (3rt)    - asynchronous file synchronization
>   aio_read     aio_read (3rt)     - asynchronous read from a file
>   aio_req      aio_req (9s)       - asynchronous I/O request structure
>   aio_return   aio_return (3rt)   - retrieve return status of an asynchronous I/O operation
>   aio_suspend  aio_suspend (3rt)  - wait for asynchronous I/O request
>   aio_write    aio_write (3rt)    - asynchronous write to a file
>   aiocancel    aiocancel (3aio)   - cancel an asynchronous operation
>   aioread      aioread (3aio)     - read or write asynchronous I/O operations
>   aiowait      aiowait (3aio)     - wait for completion of asynchronous I/O operation
>   aiowrite     aioread (3aio)     - read or write asynchronous I/O operations
>   libaio       libaio (3lib)      - the asynchronous I/O library
>
> What we'd need is available on all of the above platforms:
>
>   aio_write
>   aio_cancel
>   aio_suspend
>
> So, what is the problem?

http://www.gnu.org/software/libc/manual/html_node/Configuration-of-AIO.html