From: Vlad S. <vl...@cr...> - 2006-01-07 16:37:15
|
Yes, it makes sense.

Zoran Vasiljevic wrote:
> Am 06.01.2006 um 19:00 schrieb Vlad Seryakov:
>> sockPtr will be queued into the connection queue for processing; it
>> may happen from the driver thread or from the spooler thread, works
>> equally.
>
> Hmhmhmhmhmhmhm...
>
> SOCK_SPOOL is only returned from SockRead(). SockRead() is only
> attempted in the DriverThread if:
>
>     if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
>         n = SockRead(sockPtr, 1);
>     } else {
>         n = SOCK_READY;
>     }
>
> So the same test in SpoolThread has no meaning at all, since you will
> never reach SpoolThread unless somebody (the driver thread) calls
> SockRead(), which MAY return SOCK_SPOOL.
>
> Do I see this right?
>
> I'm not nitpicking! I'm just trying to understand the code.
> I believe this could be misleading and you should (if I'm correct)
> remove the above snippet from SpoolThread and just write:
>
>     n = SockRead(sockPtr, 1);
>
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
|
From: Stephen D. <sd...@gm...> - 2006-01-03 17:50:39
|
On 1/3/06, Zoran Vasiljevic <zv...@ar...> wrote:
> Am 03.01.2006 um 01:07 schrieb Vlad Seryakov:
>> Would it be a more generic way just to set an upper limit: if we see
>> that an upload exceeds that limit, pass control to a conn thread and
>> let it finish reading. This way even spooling to file will work,
>> because each uploading conn thread will use its own file and will
>> not hold the driver thread.
>
> But we already have this limit. It is: maxreadahead.
>
> If we want to kill many flies with one stroke, I believe we will have
> to step out of the box!
>
> By doing everything in the driver thread (provided we get the AIO to
> work, which I'm sure can be done cross-platform) we solve only one of
> the problems: upload of large content. We are not solving upload
> progress, nor any of the other security, quota, etc. requirements. In
> order to do this, we'd need hooks into the driver-thread processing.
> This will inevitably lead to adding Tcl script processing into the
> driver thread, plus some kind of locking which could possibly cripple
> the driver's performance. I would not like to do anything Tcl-related
> in the driver thread. And if any IO is being done there, it should
> not be blocking.
>
> One possible solution: move everything which exceeds "maxreadahead"
> into the connection thread. This way you have all the bits and pieces
> and you can define whatever checking/statistics policy you need. The
> code doing the multipart disassembly can have hooks to inspect
> progress; you can stop/discard the upload based on some constraints
> (file size, whatever), so you have total control. The downside: you
> engage the conn thread and risk DOS attacks, as Gustaf already
> mentioned in one of his last responses to this thread.
>
> I think the naviserver model of using the driver thread to accept
> connections and read their data in advance, before passing control to
> a conn thread, is good for most dynamic-content systems, but not for
> doing (large) file uploads and/or serving (large) static content
> (images, etc.).
>
> I believe event-loop-type (or AIO) processing is more suitable for
> that. After all, you can have hundreds of slow peers pumping or
> pulling files to/from you at snail speed. In such a case it is a
> complete waste to allocate that many connection threads, as it will
> blow the system memory. A dedicated pool of threads, each running in
> event-loop mode, would be far more appropriate. Actually, just one
> thread in an event loop would do the work perfectly on a single-CPU
> box. On a many-CPU box a pool of threads (one per CPU) would be more
> appropriate.
>
> So there it comes: we'd have to invent a third instance of
> processing!
>
> 1. driver thread
> 2. connection thread
> 3. spooling thread
>
> The driver thread gets the connection and reads everything up to
> maxreadahead. It then passes the connection to the conn thread.
> The conn thread decides either to run that connection entirely (a
> POST of a simple form, or a GET, all content read in) OR to pass the
> connection to the spooling thread(s).

What happens to the conn thread after this? It can't wait for
completion, that would defeat the purpose.

Do traces (logging etc.) run now, in the conn thread, or later in a
spool thread? If logging runs now, but the upload fails, the log will
be wrong. If traces run in the spool threads they may block.

If further processing must be done after the upload, does the spool
thread pass control back to a conn thread? Does this look like a new
request? The state of any running Tcl script will be lost at this
point (the conn thread will have cleaned up after hand-off to a spool
thread, right?).

> The spooling threads operate in event-loop mode. There has to be some
> kind of dispatcher which evenly distributes the processing among
> spooling threads. Once in the spooling thread, the connection is
> processed entirely asynchronously, as in the Tcl event loop. In fact,
> this whole thing can be implemented in Tcl with the building blocks
> we already have: channel transfer between threads, thread management
> (create, teardown). Even the dispatcher can be done in Tcl alone.
> After processing in the spooling thread, the entire connection can
> then be bounced back to the connection thread OR finished in the
> spooling thread.
>
> I hope I did not write tons of nonsense; thank you for being patient
> and reading up to here :-)
>
> So, what do you think?
>
> Zoran
|
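The dispatcher Zoran sketches here — something that evenly distributes
connections among spooling threads running event loops — could look
roughly like the following. This is a minimal illustration in plain
pthreads/poll with invented names (SpoolQueue, DispatchToSpooler, ...);
it is not existing NaviServer code, and a real version would need error
handling and a way to compact the pollfd set:

    #include <pthread.h>
    #include <poll.h>
    #include <unistd.h>

    #define NSPOOLERS 4          /* roughly one per CPU, per the proposal */
    #define MAXWATCH  64

    typedef struct SpoolQueue {
        int wakeup[2];           /* self-pipe: dispatcher writes fds here */
    } SpoolQueue;

    static SpoolQueue queues[NSPOOLERS];

    static void *
    SpoolThread(void *arg)
    {
        SpoolQueue    *q = arg;
        struct pollfd  pfds[MAXWATCH];
        int            nfds = 1;

        pfds[0].fd = q->wakeup[0];
        pfds[0].events = POLLIN;

        for (;;) {
            (void) poll(pfds, nfds, -1);
            if (pfds[0].revents & POLLIN) {
                int sock;

                if (read(q->wakeup[0], &sock, sizeof(sock))
                        == (ssize_t) sizeof(sock) && nfds < MAXWATCH) {
                    pfds[nfds].fd = sock;        /* adopt the connection */
                    pfds[nfds].events = POLLIN;
                    nfds++;
                }
            }
            /* ... service whichever sockets are ready, non-blocking,
             * and compact pfds[] when a connection is finished ... */
        }
        return NULL;
    }

    void
    StartSpoolers(void)
    {
        for (int i = 0; i < NSPOOLERS; i++) {
            pthread_t tid;

            (void) pipe(queues[i].wakeup);
            (void) pthread_create(&tid, NULL, SpoolThread, &queues[i]);
        }
    }

    void
    DispatchToSpooler(int sock)    /* called from the driver/conn thread */
    {
        static int next = 0;       /* round-robin; single caller assumed */

        (void) write(queues[next++ % NSPOOLERS].wakeup[1],
                     &sock, sizeof(sock));
    }

With NSPOOLERS set to 1 this degenerates to the single event-loop
thread Zoran suggests for a one-CPU box.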
From: Zoran V. <zv...@ar...> - 2006-01-03 18:13:49
|
Am 03.01.2006 um 18:50 schrieb Stephen Deasey:
> What happens to the conn thread after this? It can't wait for
> completion, that would defeat the purpose.
>
> Do traces (logging etc.) run now, in the conn thread, or later in a
> spool thread? If logging runs now, but the upload fails, the log will
> be wrong. If traces run in the spool threads they may block.
>
> If further processing must be done after the upload, does the spool
> thread pass control back to a conn thread? Does this look like a new
> request? The state of any running Tcl script will be lost at this
> point (the conn thread will have cleaned up after hand-off to a spool
> thread, right?).

All very good questions, to which I can't give any answers at this
point. Or to put it simpler: I have no damn idea! I wanted to get
myself a bird's-eye view on the matter in order to better understand
what we are after and how we can solve it.

What I however DO know is: we (like Gustaf) have a kind of extra
processing built into our app which is quite similar to the spool
thread we are contemplating here. We use [ns_conn channel] to splice
the socket out of the conn structure, wrap it into a Tcl channel and
then transfer this channel to a detached thread sitting in an event
loop. From the client side we fake the content length (the client is
not a browser) so the driver thread does not come in between with its
greedy reading. Then we simply finish the processing in the conn
thread, but the socket still lives and is serviced in the "spool"
thread. The "spool" thread operates in event-loop mode, does the rest
of the processing and eventually closes the socket to the client when
done.

Now, this is *way* dirty, but it allows us to use the conn thread for
only the shortest possible time and yet do long-running processing in
the "spool" thread in event-loop mode. I believe this is precisely
what Gustaf is also doing in OACS. Again, I DO know that this is
dirty, but when you have only a hammer everything looks like a nail,
doesn't it?

That was my initial idea: add event-loop-type processing to the
current naviserver driver-thread/conn-thread paradigm. After thinking
a while: the conn thread need not be involved at all in the long run.
We can feed the connection either to a conn thread or to a spool
thread: the conn thread does all blocking work, whereas the spool
thread does all non-blocking work. The conn thread might be used for
blocking (db) access, whereas the spool thread might be used to serve
images, static files, uploads and all other things requiring IO which
CAN be done non-blocking.

Now, what I wanted to give here is an *idea*. The deeper bits and
pieces, i.e. how all this interacts, I haven't considered yet, as it
is too early for that. The question is: is the idea itself worth
considering or not?

Zoran
|
From: Gustaf N. <ne...@wu...> - 2006-01-03 21:57:27
|
Stephen Deasey wrote:
>> The driver thread gets the connection and reads everything up to
>> maxreadahead. It then passes the connection to the conn thread.
>> The conn thread decides either to run that connection entirely (a
>> POST of a simple form, or a GET, all content read in) OR to pass the
>> connection to the spooling thread(s).
>
> What happens to the conn thread after this? It can't wait for
> completion, that would defeat the purpose.
>
> Do traces (logging etc.) run now, in the conn thread, or later in a
> spool thread? If logging runs now, but the upload fails, the log will
> be wrong. If traces run in the spool threads they may block.

In my setup, we currently have the spooling thread just for sending
data. The code sent yesterday is a replacement for ns_returnfile. The
connection thread finishes asynchronously, before all data is spooled.
This means that the connection thread finishes after it delegates the
file delivery to the spool thread. In order to get the correct sent
content length in the logfile, I had to set the sent content length
manually (somewhat optimistically). The traces are run immediately
after the delegation. So the request lifecycle (state flow) for a GET
request for a large file is currently:

  accept(DT) -> preauth(CT) -> postauth(CT) -> procs(CT) -> trace(CT) -> spool(ST)

  DT: driver thread, CT: connection thread, ST: spool thread

To make the implementation cleaner, it would be preferable to provide
means to pass the connection + context back to a connection thread
that should run the final trace:

  accept(DT) -> preauth(CT1) -> postauth(CT1) -> procs(CT1) -> spool(ST) -> trace(CT2)

I would think that if we can solve this part cleanly, the upload would
work with the same mechanism. If the states are handled explicitly,
one could think about a Tcl command

  ns_enqueue $state $client_data

that will enqueue a job for the connection thread pools, containing
the state where the connection should continue, together with a Tcl
structure client_data that passes around connection-specific context
information (e.g. user_id, ...) and the information from ns_conn
(socket, header info, ...). For the example above, there would be an
(implicit) ns_enqueue preauth "" issued from the driver thread, an
ns_spool send $filename (or $fd) $context to pass control to the
spooling thread, and an ns_enqueue trace $context issued from the
spooling thread to pass control back to a connection thread.

> If further processing must be done after upload, does the spool
> thread pass control back to a conn thread? Does this look like a new
> request? The state of any running Tcl script will be lost at this
> point (the conn thread will have cleaned up after hand-off to a spool
> thread, right?).

Upload could be done with the same mechanisms:

1) driver thread: request-head processing
2) start connection thread with a new filter (e.g. request-head)
3) the request-head filter will pass control to the spooling thread
   (e.g. ns_spool receive $length $context)
4) when the spool file is fully received (e.g. fcopy -command
   callback) it enqueues the request for the connection threads
   (e.g. ns_enqueue preauth $context)
5) the connection thread obtains the context and starts request
   processing as usual in the preauth state (preauth, postauth,
   trace, ... unless control is passed back to the spool threads)

So the request lifecycle (state flow) for a file upload could be:

  accept(DT) -> request-head(CT1) -> spool(ST) -> preauth(CT2) -> postauth(CT2) -> procs(CT2) -> trace(CT2)

I would think that with the two commands ns_enqueue and ns_spool,
which switch the control flow between connection threads and spooling
threads and copy connection state information (from ns_conn) and
client_data, one would have a very flexible framework.

-gustaf

PS: this model would still be compatible with aolserver, but would
require that e.g. authentication code be callable from the
request-head filter callback as well as from the preauth callback
(maybe avoided by data in the client_data). Moving preauth and
postauth to CT1 would be logically cleaner, but might be incompatible
with aolserver, since these callbacks might already want to access the
posted data.
|
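A minimal C sketch of how Gustaf's explicit states and the
ns_enqueue/ns_spool hand-off might be represented. Everything here is
hypothetical — neither command exists in the server at this point —
and the stubs only illustrate the control flow between CT1, ST and CT2
described above:

    #include <stddef.h>
    #include <stdio.h>

    typedef enum {
        STATE_REQUEST_HEAD,       /* parse request line + headers (CT1) */
        STATE_PREAUTH,
        STATE_POSTAUTH,
        STATE_PROCS,              /* registered procs / page delivery   */
        STATE_TRACE,              /* logging etc., possibly in CT2      */
        STATE_SPOOL               /* async receive/send in spool thread */
    } ConnState;

    typedef struct ConnContext {
        int        sock;          /* the spliced-out client socket      */
        ConnState  state;         /* where processing must continue     */
        size_t     contentLength; /* bytes still expected from client   */
        void      *clientData;    /* e.g. user_id, ns_conn info, ...    */
    } ConnContext;

    /* Stand-in for "ns_enqueue $state $client_data": hand the context
     * (back) to a connection-thread pool.  A real implementation would
     * push ctx onto the pool's queue and signal a waiting thread. */
    static void
    Enqueue(ConnContext *ctx, ConnState nextState)
    {
        ctx->state = nextState;
        printf("enqueue: continue at state %d\n", (int) nextState);
    }

    /* Stand-in for "ns_spool receive $length $context": hand the
     * context to a spool thread, which re-enqueues it once the body
     * has been fully received. */
    static void
    Spool(ConnContext *ctx, ConnState resumeState)
    {
        ctx->state = STATE_SPOOL;
        printf("spool: receive %zu bytes, then resume at state %d\n",
               ctx->contentLength, (int) resumeState);
    }

    /* The upload lifecycle from the state flow above:
     *   accept(DT) -> request-head(CT1) -> spool(ST) -> preauth(CT2) */
    void
    RequestHeadFilter(ConnContext *ctx)
    {
        if (ctx->contentLength > 0) {
            Spool(ctx, STATE_PREAUTH);   /* large body: receive async  */
        } else {
            Enqueue(ctx, STATE_PREAUTH); /* nothing to spool: go on    */
        }
    }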
From: Stephen D. <sd...@gm...> - 2006-01-02 08:30:34
|
On 12/31/05, Zoran Vasiljevic <zv...@ar...> wrote:
> Am 31.12.2005 um 19:03 schrieb Vlad Seryakov:
>> Could be a config option, like 1 second or 10Kbytes
>
> Yup. For example. This could reduce locking attempts.
> Seems fine to me.

Right. So the thread handling the upload would lock/unlock every
upload_size/10Kbytes, or once per second of upload. The frequency of
locking for the every-10Kbytes case would increase (and hence the
chance of blocking) with the capacity of your and your clients'
connections.

Another thread would lock/unlock every time the client requests an
update on the progress, say once per second.

Multiply the above by the number of concurrent clients.

That's the price you pay for upload stats, and that's OK. I wouldn't
want that overhead when I'm not interested in the stats though...
|
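To put rough, purely illustrative numbers on that locking frequency: a
100 MB upload flushed every 10 KB means 10,240 lock/unlock pairs in
total; arriving at 1 MB/s, the upload lasts about 100 seconds, i.e.
roughly 100 lock operations per second per uploading client. A client
polling for progress adds only about one lock per second, so with 50
concurrent uploads the writers alone would account for ~5,000 lock
operations per second while the pollers add ~50.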
From: Zoran V. <zv...@ar...> - 2006-01-02 09:48:01
|
Am 02.01.2006 um 09:30 schrieb Stephen Deasey:
> Right. So the thread handling the upload would lock/unlock every
> upload_size/10Kbytes, or once per second of upload. The frequency of
> locking for the every-10Kbytes case would increase (and hence the
> chance of blocking) with the capacity of your and your clients'
> connections.
>
> Another thread would lock/unlock every time the client requests an
> update on the progress, say once per second.
>
> Multiply the above by the number of concurrent clients.
>
> That's the price you pay for upload stats, and that's OK. I wouldn't
> want that overhead when I'm not interested in the stats though...

In which case you'd:

  a. have no threads asking for the stats
  b. have only the upload thread take locks every 10k or so

so the locking will decrease even more. Furthermore, if you set the
upload_size threshold to zero, we can skip that entirely, hence no
locking would occur at all.

I believe this is fair for everybody. People needing the stats can
turn them on. People never interested in them can leave the default
(no stats) and all are happy?

Zoran
|
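The scheme the two messages converge on — lock only every ~10 KB or
once per second, and not at all when stats are disabled or the upload
is below the threshold — might look like this minimal sketch. The
UploadStats struct and all names are hypothetical, and a real version
would live behind the server's mutex API rather than raw pthreads:

    #include <pthread.h>
    #include <stddef.h>
    #include <time.h>

    typedef struct UploadStats {
        pthread_mutex_t lock;
        size_t          expected;   /* content length of the upload   */
        size_t          bytesRead;  /* shared with the polling thread */
    } UploadStats;

    #define STATS_BYTES_STEP (10 * 1024) /* flush every 10 KB ...     */
    #define STATS_TIME_STEP  1           /* ... or once per second    */

    static size_t statsMinSize = 1024 * 1024; /* "upload_size"; 0=off */

    /* Called by the upload thread after each read; takes the lock
     * only every STATS_BYTES_STEP bytes or STATS_TIME_STEP seconds,
     * and never when stats are off or the upload is too small. */
    void
    RecordRead(UploadStats *st, size_t justRead,
               size_t *unflushed, time_t *lastFlush)
    {
        if (statsMinSize == 0 || st->expected < statsMinSize) {
            return;                  /* stats off: no locking at all  */
        }
        *unflushed += justRead;
        if (*unflushed >= STATS_BYTES_STEP
                || time(NULL) - *lastFlush >= STATS_TIME_STEP) {
            pthread_mutex_lock(&st->lock);
            st->bytesRead += *unflushed;
            pthread_mutex_unlock(&st->lock);
            *unflushed = 0;
            *lastFlush = time(NULL);
        }
    }

    /* Called by whoever answers the client's progress poll, ~1/sec. */
    size_t
    QueryProgress(UploadStats *st)
    {
        size_t n;

        pthread_mutex_lock(&st->lock);
        n = st->bytesRead;
        pthread_mutex_unlock(&st->lock);
        return n;
    }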
From: Vlad S. <vl...@cr...> - 2005-12-31 17:08:54
|
> I think we talked about this before, but I can't find it in the
> mailing list archive. Anyway, the problem with recording the upload
> process is all the locking that's required. You could minimize this,
> e.g. by only recording uploads above a certain size, or to a certain
> URL.

Yes, that is true, but at the same time GET requests do not carry a
body, so no locking will happen there. POST uploads are very slow
processes, in the sense that the user expects to wait until the upload
finishes, and locking will happen only for uploads. It is possible to
further minimize it by locking only several times instead of on every
read, but avoiding locks entirely is not possible.

> It reminds me of a similar problem we had. Spooling large uploads to
> disk:
>
> https://sourceforge.net/mailarchive/forum.php?thread_id=7524448&forum_id=43966
>
> Vlad implemented the actual spooling, but moving that work into the
> conn threads, reading lazily, is still to be done.
>
> Lazy uploading is exactly the hook you need to track upload progress.
> The client starts to upload a file. Read-ahead occurs in the driver
> thread, say 8k. Control is passed to a conn thread, which then calls
> Ns_ConnContent(). The remaining content is read from the client, in
> the context of the conn thread and so not blocking the driver thread,
> and perhaps the content is spooled to disk.
>
> To implement upload tracking you would register a proc for /upload
> which instead of calling Ns_ConnContent(), calls Ns_ConnRead()
> multiple times, recording the number of bytes read in the upload
> tracking cache, and saving the data to disk or wherever.
>
> A lot more control of the upload process is needed, whether it be to
> control size, access, to record stats, or something we haven't
> thought of yet. If we complete the work to get lazy reading from the
> client working, an upload tracker will be an easy module to write.

One problem I see with lazy uploads: if you have multiple clients
doing large POSTs, tying up multiple conn threads for a long time
reading that content will waste resources; each conn thread is heavy,
with its Tcl interp. Using the driver thread to read small chunks from
the connections and put them into a file keeps everything smooth. But
with small uploads on a fast network this may not be an issue, so a
compromise is needed here, maybe configurable options. Currently,
spooling into a file can be disabled/enabled; lazy spooling may be
implemented a similar way. Actually, lazy file spooling can be done
easily, because even Ns_ConnContent calls SockRead, which does the
spooling; we just need to introduce an option that tells how much we
should spool in the main thread before continuing in the conn thread.

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
|
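A rough illustration of the lazy spooling described here: once the
driver thread has read up to its limit, the conn thread pulls the rest
of the body in small chunks and spools it to a temp file. Plain POSIX
calls are used below to keep the sketch self-contained; inside the
server this loop would go through Ns_ConnRead()/SockRead() as quoted
above, and SpoolBodyToFile is an invented name:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Read the remaining body bytes in small chunks and spool them to
     * a temp file; returns the open fd (for mmap or later reading),
     * or -1 on error. */
    int
    SpoolBodyToFile(int sock, size_t remaining,
                    char *tmpl /* e.g. "/tmp/up.XXXXXX" */)
    {
        char buf[8192];
        int  fd = mkstemp(tmpl);

        if (fd < 0) {
            return -1;
        }
        while (remaining > 0) {
            size_t  want = remaining < sizeof(buf) ? remaining
                                                   : sizeof(buf);
            ssize_t n = recv(sock, buf, want, 0);

            if (n <= 0 || write(fd, buf, (size_t) n) != n) {
                close(fd);         /* client went away, or disk error */
                return -1;
            }
            remaining -= (size_t) n;
            /* an upload-progress hook would go here (see the stats
             * discussion elsewhere in this thread) */
        }
        return fd;
    }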
From: Stephen D. <sd...@gm...> - 2006-01-02 08:21:29
|
On 12/31/05, Vlad Seryakov <vl...@cr...> wrote:
>> I think we talked about this before, but I can't find it in the
>> mailing list archive. Anyway, the problem with recording the upload
>> process is all the locking that's required. You could minimize this,
>> e.g. by only recording uploads above a certain size, or to a certain
>> URL.
>
> Yes, that is true, but at the same time GET requests do not carry a
> body, so no locking will happen there. POST uploads are very slow
> processes, in the sense that the user expects to wait until the
> upload finishes, and locking will happen only for uploads. It is
> possible to further minimize it by locking only several times instead
> of on every read, but avoiding locks entirely is not possible.

POST requests with small amounts of data are really common. Think of a
user logging in via a web form. A couple of bytes.

The way you've coded it at the mo (and I realize it's just a first
cut), all requests with more than 0 bytes of body content will cause
the upload stats code to fire. That's why I suggested we may want to
have some lower threshold, or restrict it to certain URLs via
url-specific data. We don't want to track every form submission.

>> It reminds me of a similar problem we had. Spooling large uploads to
>> disk:
>>
>> https://sourceforge.net/mailarchive/forum.php?thread_id=7524448&forum_id=43966
>>
>> Vlad implemented the actual spooling, but moving that work into the
>> conn threads, reading lazily, is still to be done.
>>
>> Lazy uploading is exactly the hook you need to track upload
>> progress. The client starts to upload a file. Read-ahead occurs in
>> the driver thread, say 8k. Control is passed to a conn thread, which
>> then calls Ns_ConnContent(). The remaining content is read from the
>> client, in the context of the conn thread and so not blocking the
>> driver thread, and perhaps the content is spooled to disk.
>>
>> To implement upload tracking you would register a proc for /upload
>> which instead of calling Ns_ConnContent(), calls Ns_ConnRead()
>> multiple times, recording the number of bytes read in the upload
>> tracking cache, and saving the data to disk or wherever.
>>
>> A lot more control of the upload process is needed, whether it be to
>> control size, access, to record stats, or something we haven't
>> thought of yet. If we complete the work to get lazy reading from the
>> client working, an upload tracker will be an easy module to write.
>
> One problem I see with lazy uploads: if you have multiple clients
> doing large POSTs, tying up multiple conn threads for a long time
> reading that content will waste resources; each conn thread is heavy,
> with its Tcl interp.

Not necessarily. You can create another thread pool and then ensure
that those threads never run Tcl code. Remember, Tcl interps are
allocated lazily on first use. We could also adjust the stacksize per
thread pool.

> Using the driver thread to read small chunks from the connections and
> put them into a file keeps everything smooth. But with small uploads
> on a fast network this may not be an issue, so a compromise is needed
> here, maybe configurable options. Currently, spooling into a file can
> be disabled/enabled; lazy spooling may be implemented a similar way.
> Actually, lazy file spooling can be done easily, because even
> Ns_ConnContent calls SockRead, which does the spooling; we just need
> to introduce an option that tells how much we should spool in the
> main thread before continuing in the conn thread.

We already have maxreadahead, which is the amount of data read by the
driver thread into a memory buffer. I think you're either happy
letting the driver thread block writing to disk, or you're not. Why
would you set a threshold on this?
|
From: Zoran V. <zv...@ar...> - 2006-01-02 09:54:29
|
Am 02.01.2006 um 09:21 schrieb Stephen Deasey:
> POST requests with small amounts of data are really common. Think of
> a user logging in via a web form. A couple of bytes.
>
> The way you've coded it at the mo (and I realize it's just a first
> cut), all requests with more than 0 bytes of body content will cause
> the upload stats code to fire. That's why I suggested we may want to
> have some lower threshold, or restrict it to certain URLs via
> url-specific data. We don't want to track every form submission.

No. That is of course not good.

>> One problem I see with lazy uploads: if you have multiple clients
>> doing large POSTs, tying up multiple conn threads for a long time
>> reading that content will waste resources; each conn thread is
>> heavy, with its Tcl interp.
>
> Not necessarily. You can create another thread pool and then ensure
> that those threads never run Tcl code. Remember, Tcl interps are
> allocated lazily on first use. We could also adjust the stacksize per
> thread pool.

But then, how are you going to interpret the stats? You need to run
some Tcl code for that, and this will load the Tcl interp?

>> Using the driver thread to read small chunks from the connections
>> and put them into a file keeps everything smooth. But with small
>> uploads on a fast network this may not be an issue, so a compromise
>> is needed here, maybe configurable options. Currently, spooling into
>> a file can be disabled/enabled; lazy spooling may be implemented a
>> similar way. Actually, lazy file spooling can be done easily,
>> because even Ns_ConnContent calls SockRead, which does the spooling;
>> we just need to introduce an option that tells how much we should
>> spool in the main thread before continuing in the conn thread.
>
> We already have maxreadahead, which is the amount of data read by the
> driver thread into a memory buffer. I think you're either happy
> letting the driver thread block writing to disk, or you're not. Why
> would you set a threshold on this?

Right. We already have that knob: maxreadahead. The question is what
happens when we reach that point. At the moment, extra data is spooled
into a file, and the file is mmapped at the end of reading, when all
the content from the client has been received. I'm not that happy with
the driver thread pumping the data into the file, as this might affect
overall performance; therefore I suggested kernel AIO. This OS feature
was developed precisely for that purpose.

Zoran
|
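For reference, the kernel AIO Zoran is suggesting would look roughly
like this with the standard POSIX <aio.h> interface (on Linux, link
with -lrt). The function names are invented, and note that glibc
implements these calls with user-level threads — exactly the caveat
raised later in the thread:

    #include <aio.h>
    #include <errno.h>
    #include <string.h>

    /* Queue one buffer for writing to the spool file; returns
     * immediately so the driver thread never blocks on disk.  The
     * buffer must stay valid until the write completes. */
    int
    StartSpoolWrite(struct aiocb *cb, int fd, const void *buf,
                    size_t len, off_t off)
    {
        memset(cb, 0, sizeof(*cb));
        cb->aio_fildes = fd;
        cb->aio_buf    = (volatile void *) buf;
        cb->aio_nbytes = len;
        cb->aio_offset = off;
        return aio_write(cb);      /* 0 if queued, -1 on error */
    }

    /* Call when notified, or while polling from the event loop:
     * returns the byte count once done, -1 while still in progress
     * or on error. */
    int
    FinishSpoolWrite(struct aiocb *cb)
    {
        int err = aio_error(cb);

        if (err == EINPROGRESS) {
            return -1;             /* not done yet, keep looping */
        }
        return (err == 0) ? (int) aio_return(cb) : -1;
    }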
From: Vlad S. <vl...@cr...> - 2006-01-02 03:42:49
|
Another solution to reduce locking: just allocate maxconn structures,
each with its own mutex, and perform locking for the particular struct
only, so writers will not block other writers, only writer/reader.
More memory but less contention. I will try this tomorrow.

Zoran Vasiljevic wrote:
> Am 31.12.2005 um 18:20 schrieb Vlad Seryakov:
>> Another possible solution can be pre-allocating maxconn upload
>> structs and updating them without locks; it is an integer anyway, so
>> no need to lock, a 4-byte write is never interrupted, usually it is
>> 1 CPU instruction (true for Intel, maybe not for Sparc).
>
> Nope. Need locking even for integers. I've seen crazy things...
>
> OTOH, I'm not that concerned about the locking if it is short-lived
> and not frequently done. I believe the "not frequently done" is the
> hardest thing to judge.
>
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
|
From: Stephen D. <sd...@gm...> - 2006-01-02 05:22:52
|
On 1/1/06, Vlad Seryakov <vl...@cr...> wrote:
> Another solution to reduce locking: just allocate maxconn structures,
> each with its own mutex, and perform locking for the particular
> struct only, so writers will not block other writers, only
> writer/reader. More memory but less contention.
>
> I will try this tomorrow.

But how do you get access to the right conn structure? You'll have to
use a hash table of URLs, which will have to be locked. That's one
lock/unlock at the start and end of each upload, and one lock/unlock
for every query of the upload progress, say once per second or so.

The thing about building upload tracking into the server is that it
doesn't help solve other problems, such as the extra control needed
before permitting a large upload. If hooks are added for that, then
upload tracking could be implemented using the same mechanism. It
would allow you to track uniqueness of the uploader via a cookie, for
example.
|
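Combining the two messages, a sketch of the locking structure under
discussion: the global table lock is held only briefly for
acquire/find/release (start and end of an upload, plus roughly one
progress query per second), while each preallocated slot carries its
own mutex, so concurrent uploads never block each other on the hot
write path. All names are hypothetical; a progress reader would take
the slot's own mutex around reading bytesRead, just as SlotUpdate does
around writing it:

    #include <pthread.h>
    #include <stddef.h>
    #include <string.h>

    #define MAXCONN 128

    typedef struct Slot {
        pthread_mutex_t lock;      /* per slot: writer vs. reader   */
        char            url[256];  /* empty string == slot is free  */
        size_t          bytesRead;
    } Slot;

    static Slot            slots[MAXCONN];
    static pthread_mutex_t tableLock = PTHREAD_MUTEX_INITIALIZER;

    void
    SlotsInit(void)
    {
        for (int i = 0; i < MAXCONN; i++) {
            pthread_mutex_init(&slots[i].lock, NULL);
        }
    }

    Slot *
    SlotAcquire(const char *url)   /* once, at upload start */
    {
        Slot *s = NULL;

        pthread_mutex_lock(&tableLock);
        for (int i = 0; i < MAXCONN; i++) {
            if (slots[i].url[0] == '\0') {
                s = &slots[i];
                strncpy(s->url, url, sizeof(s->url) - 1);
                s->bytesRead = 0;
                break;
            }
        }
        pthread_mutex_unlock(&tableLock);
        return s;                  /* NULL == table full */
    }

    Slot *
    SlotFind(const char *url)      /* per progress query, ~1/second */
    {
        Slot *s = NULL;

        pthread_mutex_lock(&tableLock);
        for (int i = 0; i < MAXCONN; i++) {
            if (strcmp(slots[i].url, url) == 0) {
                s = &slots[i];
                break;
            }
        }
        pthread_mutex_unlock(&tableLock);
        return s;
    }

    void
    SlotUpdate(Slot *s, size_t n)  /* writer path: own mutex only */
    {
        pthread_mutex_lock(&s->lock);
        s->bytesRead += n;
        pthread_mutex_unlock(&s->lock);
    }

    void
    SlotRelease(Slot *s)           /* once, at upload end */
    {
        pthread_mutex_lock(&tableLock);
        s->url[0] = '\0';
        pthread_mutex_unlock(&tableLock);
    }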
From: Stephen D. <sd...@gm...> - 2006-01-02 20:13:47
|
On 1/2/06, Zoran Vasiljevic <zv...@ar...> wrote:
> Am 02.01.2006 um 17:23 schrieb Stephen Deasey:
>>> So, what is the problem?
>>
>> http://www.gnu.org/software/libc/manual/html_node/Configuration-of-AIO.html
>
> Still, where is the problem? The fact that Linux implements them as
> userlevel threads does not mean much to me.

If the problem is that threads are too "heavy", then it's pointless to
use aio_write() etc. if the underlying implementation also uses
threads. It will be worse than using conn threads alone, as there will
be extra context switches as control bounces from driver thread, to IO
thread, back to driver thread, and finally to conn thread.

If what you're saying is Solaris needs it and Linux doesn't matter,
well, I don't know about that.

Here's another way to look at it: we already dedicate a conn thread to
every download. What's so special about uploads that we can't also
dedicate a thread to them? The threads are long lived, you can prevent
them from running Tcl, etc...

Sure, the ideal would be AIO for both uploads and downloads. But the
aio_* API is busted, at least on one major platform, possibly more.
Have you looked at Darwin, how do they implement it? What are you
going to do on Windows?
|
From: Zoran V. <zv...@ar...> - 2006-01-02 20:37:19
|
Am 02.01.2006 um 21:13 schrieb Stephen Deasey:
> If the problem is that threads are too "heavy", then it's pointless
> to use aio_write() etc. if the underlying implementation also uses
> threads. It will be worse than using conn threads alone, as there
> will be extra context switches as control bounces from driver thread,
> to IO thread, back to driver thread, and finally to conn thread.

Well, I really do not know which OS uses what implementation. We now
know that Linux uses userland threads. I do not know about Darwin, but
I can peek into their sources. Same for Solaris.

> Here's another way to look at it: we already dedicate a conn thread
> to every download. What's so special about uploads that we can't also
> dedicate a thread to them? The threads are long lived, you can
> prevent them from running Tcl, etc...

This is true. What I had in mind was to make it as simple as possible
by reusing the already present driver thread, without making it more
complex than needed by adding even more threads. We already have lots
of them anyway.

> Sure, the ideal would be AIO for both uploads and downloads. But the
> aio_* API is busted, at least on one major platform, possibly more.
> Have you looked at Darwin, how do they implement it? What are you
> going to do on Windows?

I do not know. On Windows most things are already async
(WaitForMultipleObjects), so I believe we can work around it by adding
our own wrapper for AIO, as we have done for mmap.

But this is all speculation. I never went further than reading man
pages and contemplating about it. Vlad went considerably further and
has tried a couple of things.

I believe it is time we make ourselves pretty clear about what we are
about to do:

  o. add upload statistics and control?
  o. improve upload altogether
  o. redesign upload altogether

At the moment, the most visible/needed improvement could/would be some
sort of upload statistics, as we do miss this functionality.

Zoran
|
From: Vlad S. <vl...@cr...> - 2006-01-03 00:07:47
|
Check my last upload patch; it may be useful until a more generic
approach is developed.

Zoran Vasiljevic wrote:
> I believe it is time we make ourselves pretty clear about what we are
> about to do:
>
>   o. add upload statistics and control?
>   o. improve upload altogether
>   o. redesign upload altogether
>
> At the moment, the most visible/needed improvement could/would be
> some sort of upload statistics, as we do miss this functionality.
>
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
|
From: Bernd E. <eid...@we...> - 2006-01-03 09:47:32
|
> o. add upload statistics and control?

Reading the current very interesting thread, I would vote for adding
it, if the stats are some kind of by-product of more stability against
DOS attacks, multiple slow clients, reduced memory needs etc., as you
all pointed out.

Because what is still not solved on the client side, the browser, is a
good GUI and support for uploading multiple files and showing their
progress. AJAX helps a bit; there are nice tricks to display only one
HTML file element
(http://the-stickman.com/web-development/javascript/upload-multiple-files-with-a-single-file-element/)
or progress bars
(http://blog.joshuaeichorn.com/archives/2005/05/01/ajax-file-upload-progress/).

But I don't have a file selection dialog box where I can select
multiple files at once (say 50 or more, for uploading images to a
gallery) or even complete directories; where files on the client side
that exceed a certain limit are automatically sorted out; where the
files are uploaded one by one and not as one giant form post, re-sent
in case of a broken connection, or even uploaded in parallel. This is
where all the Java and Flash upload applets come in and play out their
advantages.

Of course, AJAX is still evolving and so are browser features; it's
good to have stats and it is often requested, but for now there are
use cases where the client approach is still better.

cu
BE
|
From: Zoran V. <zv...@ar...> - 2006-01-03 10:23:16
|
Am 03.01.2006 um 10:49 schrieb Bernd Eidenschink:
> Of course, AJAX is still evolving and so are browser features; it's
> good to have stats and it is often requested, but for now there are
> use cases where the client approach is still better.

All very true... The ultimate solution is: both. Better support on the
server as well as on the client side. We have no control over the
client side. Hence we can strive to get the server side done properly.

Zoran
|