From: Vlad S. <vl...@cr...> - 2006-01-03 00:04:54
It was kind of random: sometimes SIGBUS, sometimes SIGSEGV, so it needs another round of testing. The goal was just a simple replacement of write() with aio_write() to see whether it might work; it looks like it does not. I checked Samba and Squid: they use some kind of high-level wrapper around AIO, most of the time their own re-implementation using threads.

Would a more generic way be to just set an upper limit? If we see that an upload exceeds that limit, we pass control to the conn thread and let it finish reading. This way even spooling to a file will work, because each uploading conn thread will use its own file and will not hold the driver thread. Every conn thread calls NsGetRequest, which uses SockRead, so somewhere in driver.c we can check for the max upload limit and mark the Sock for conn queueing, so NsGetRequest will finish the upload. Yes, too many connections will take all the threads, but that's how it works anyway; nsd can be configured with a maxconn that corresponds to any particular system/load.

Zoran Vasiljevic wrote:
> Am 02.01.2006 um 19:13 schrieb Vlad Seryakov:
>
>> I did a very simple test: replaced write with aio_write and at the end
>> checked aio_error/aio_return; they all returned 0, so mmap should work
>> because the file is synced. When I was doing aio_write I used
>> aio_offset, so each aio_write would put data into a separate region of
>> the file.
>>
>> Unfortunately I removed the modified driver.c by accident, so I will
>> have to do it again, but something is not right in simply replacing
>> write with aio_write and mmap. Without mmap, GetForm will have to read
>> the file manually and parse it, which makes it more complicated, and
>> still, if writes are not complete we may get SIGBUS again.
>>
>> The problem I see here is that I cannot be sure that all writes are
>> completed; aio_error/aio_return are used to check only the one last
>> operation, not the batch of writes.
>
> Hmmm... that means that before writing the next chunk, you should check
> the status of the last one and skip if it is still not written?
>
> OTOH, how come you get SIGBUS? And where? Normally this might happen
> only if you mmap the file as some structure, not a char array?
>
> Zoran
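For reference, a minimal sketch (plain POSIX AIO, not taken from the patch) of how a batch of aio_write() requests could be tracked so that every request, not only the last one, is verified with aio_error()/aio_return() before the spooled file is considered safe to mmap. The chunking scheme, sizes and function name are invented for the illustration:

    #include <aio.h>
    #include <errno.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/types.h>

    #define NCHUNKS 8
    #define CHUNKSZ 8192

    /*
     * Illustration only: issue one aiocb per chunk, keep all of them,
     * and verify completion of every request (not just the last one)
     * before the file is touched via mmap.
     */
    static int
    WriteChunks(int fd, const char *data, size_t len)
    {
        struct aiocb cbs[NCHUNKS];
        int i, n = 0;
        off_t off = 0;

        memset(cbs, 0, sizeof(cbs));
        while (len > 0 && n < NCHUNKS) {
            size_t chunk = (len > CHUNKSZ) ? CHUNKSZ : len;

            cbs[n].aio_fildes = fd;
            cbs[n].aio_buf    = (void *) (data + off);
            cbs[n].aio_nbytes = chunk;
            cbs[n].aio_offset = off;   /* each write lands in its own region */
            cbs[n].aio_sigevent.sigev_notify = SIGEV_NONE;
            if (aio_write(&cbs[n]) != 0) {
                return -1;
            }
            off += (off_t) chunk;
            len -= chunk;
            n++;
        }

        /* Check every outstanding request, not only the last one issued. */
        for (i = 0; i < n; i++) {
            const struct aiocb *list[1] = { &cbs[i] };

            while (aio_error(&cbs[i]) == EINPROGRESS) {
                aio_suspend(list, 1, NULL);   /* block until this one settles */
            }
            if (aio_return(&cbs[i]) != (ssize_t) cbs[i].aio_nbytes) {
                return -1;                    /* failed or short write */
            }
        }
        return 0;
    }

The caller must of course keep the data buffer alive until all requests have completed, which is why the completion loop runs before the function returns.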
From: Zoran V. <zv...@ar...> - 2006-01-03 08:46:50
Am 03.01.2006 um 01:07 schrieb Vlad Seryakov:

> Would a more generic way be to just set an upper limit? If we see that
> an upload exceeds that limit, we pass control to the conn thread and
> let it finish reading. This way even spooling to a file will work,
> because each uploading conn thread will use its own file and will not
> hold the driver thread.

But we already have this limit: maxreadahead.

If we want to kill many flies with one stroke, I believe we will have to step out of the box! By doing everything in the driver thread (provided we get the AIO to work, which I'm sure can be done cross-platform) we are solving only one of the problems: the upload of large content. We are not solving upload progress nor any other security, quota or similar requirements. In order to do that, we'd need hooks into the driver-thread processing. This would inevitably lead to adding Tcl script processing into the driver thread, plus some kind of locking which could possibly cripple the driver's performance. I would not like to do anything Tcl-related in the driver thread. And, if any IO is being done there, it should not be blocking.

One of the solutions: move everything which exceeds "maxreadahead" into the connection thread. This way you have all the bits and pieces and you can define whatever checking/statistics policy you need. The code doing the multipart disassembly can have hooks to inspect progress, and you can stop/discard the upload based on some constraints (file size, whatever else), so you have total control. The down-side: you engage the conn thread and risk DOS attacks, as Gustaf already mentioned in one of his last responses to this thread.

I think the NaviServer model of using the driver thread to get connections accepted and their data read in advance before passing control to a conn thread is good for most dynamic-content systems, but not for doing (large) file uploads and/or serving (large) static content (images, etc.). I believe that event-loop-type (or AIO) processing is more suitable for that. After all, you can have hundreds of slow peers pumping or pulling files to/from you at snail speed. In such a case it is a complete waste to allocate that many connection threads, as it will blow the system memory. A dedicated pool of threads, each running in event-loop mode, would be far more appropriate. Actually, just one thread in an event loop would do the work perfectly on a single-CPU box. On a many-CPU box a pool of threads (one per CPU) would be more appropriate.

So, there it comes: we'd have to invent a third instance of the processing!

1. driver thread
2. connection thread
3. spooling thread

The driver thread gets the connection and reads everything up to maxreadahead. It then passes the connection to the conn thread. The conn thread decides either to run that connection entirely (POST of a simple form or GET, all content read in) OR to pass the connection to the spooling thread(s). The spooling threads operate in event-loop mode. There has to be some kind of dispatcher which evenly distributes the processing among spooling threads. Once in the spooling thread, the connection is processed entirely asynchronously, as in the Tcl event loop. In fact, this whole thing can be implemented in Tcl with the building blocks we already have: channel transfer between threads, thread management (create, teardown). Even the dispatcher can be done in Tcl alone. After processing in the spooling thread, the entire connection can then be bounced back to the connection thread OR finished in the spooling thread.

I hope I did not write a ton of nonsense; thank you for being patient and reading up to here :-)

So, what do you think?

Zoran
From: Andrew P. <at...@pi...> - 2006-01-03 10:21:59
On Tue, Jan 03, 2006 at 09:49:02AM +0100, Zoran Vasiljevic wrote:
> The spooling threads operate in event-loop mode. There has to be some
> kind of dispatcher which evenly distributes the processing among
> spooling threads.

Or just start out with one spool thread; support for multiple spool threads can be added later, as it's not really necessary, just a possible performance tweak for large SMP machines. Then again, all the thread pool infrastructure is probably already there, so using it from the get-go might be simple.

> Once in the spooling thread, the connection is processed entirely
> asynchronously, as in the Tcl event loop. In fact, this whole thing can
> be implemented in Tcl with the building blocks we already have: channel
> transfer between threads, thread management (create, teardown). Even
> the dispatcher can be done in Tcl alone.

I particularly like the all-non-blocking, all-event-driven, all-Tcl design of your spool threads. You can always add bits of C code later if that turns out to be more efficient, but being able to do the whole thing in Tcl is very nice.

Hm, does Tcl support asynchronous (non-blocking) IO for both network sockets and local files? Tcl has 'fconfigure -blocking 0' of course, but I don't know for sure whether that really does what you want for this application. If Tcl DOES support it adequately, then all the previous questions about how to do cross-platform asynchronous IO from C vanish, which is a nice bonus.

> I hope I did not write a ton of nonsense; thank you for being patient
> and reading up to here :-)

On the contrary, that was by far the clearest design explanation I've yet seen in this discussion.

-- 
Andrew Piskorski <at...@pi...>
http://www.piskorski.com/
From: Zoran V. <zv...@ar...> - 2006-01-03 10:35:07
Am 03.01.2006 um 11:21 schrieb Andrew Piskorski:

> Hm, does Tcl support asynchronous (non-blocking) IO for both network
> sockets and local files? Tcl has 'fconfigure -blocking 0' of course,
> but I don't know for sure whether that really does what you want for
> this application. If Tcl DOES support it adequately, then all the
> previous questions about how to do cross-platform asynchronous IO from
> C vanish, which is a nice bonus.

Tcl does this properly by implementing non-blocking IO (O_NONBLOCK or O_NDELAY) and adding its own event-loop processing salt. This works for both files and sockets.

The trouble is: you need a fully loaded Tcl for that. But we have it anyway. If we restrict that to a limited number of spooling threads, the overhead might not be large. Normally I'd start with one spool thread per CPU, but the infrastructure must be made so that it allows several spooling threads to gain from the multiple CPUs in modern boxes.

Zoran
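For completeness, the C-level mechanism Tcl builds on here is just the standard non-blocking flag on the descriptor. A minimal, generic sketch (plain POSIX, not NaviServer or Tcl internals; the function name is made up):

    #include <fcntl.h>

    /*
     * Put a descriptor (socket or file) into non-blocking mode; this is
     * the O_NONBLOCK flag that Tcl's channel layer sets underneath
     * "fconfigure -blocking 0" before driving the I/O from its event loop.
     */
    static int
    SetNonBlocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);

        if (flags == -1) {
            return -1;
        }
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

Reads and writes on such a descriptor return EWOULDBLOCK instead of blocking, and the event loop retries once the descriptor is reported readable or writable again.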
From: Vlad S. <vl...@cr...> - 2006-01-03 15:46:20
The spool thread can be a replica of the driver thread: it will do the same as the driver and then pass the Sock to the conn thread. So making it C-based is not that hard, and it will still be fast and small.

Plus, upload statistics will now be handled in the spool thread, not the driver thread, so no overhead and no slowdown there.

So we read up to maxreadahead, then queue the Sock into the spooling thread, and the spooling thread will give it to the conn thread. The spooling thread will use the same Sock/Driver structures; actually, the Sock will not see any difference between running in the driver thread or in the spooling thread.

I can try to provide a test of using a spooling thread along with the driver thread today if I have time.

Zoran Vasiljevic wrote:
> Tcl does this properly by implementing non-blocking IO (O_NONBLOCK or
> O_NDELAY) and adding its own event-loop processing salt. This works for
> both files and sockets.
>
> The trouble is: you need a fully loaded Tcl for that. But we have it
> anyway. If we restrict that to a limited number of spooling threads,
> the overhead might not be large. Normally I'd start with one spool
> thread per CPU, but the infrastructure must be made so that it allows
> several spooling threads to gain from the multiple CPUs in modern boxes.
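A rough idea of where the maxreadahead decision described above might sit, using the SOCK_SPOOL status that shows up later in this thread. The field names (reqPtr->length, drvPtr->maxreadahead), the SOCK_MORE status and the helper name are assumptions for the illustration, not taken from the actual driver.c, and the snippet presumes the usual internal nsd.h declarations:

    /*
     * Sketch only: once the request headers are parsed and the expected
     * content length is known, the driver thread either keeps reading
     * (small request) or hands the Sock over to the spooler thread.
     */
    static int
    SockReadLimitCheck(Sock *sockPtr, Request *reqPtr)
    {
        Driver *drvPtr = sockPtr->drvPtr;

        if (reqPtr->length > drvPtr->maxreadahead) {
            /*
             * Too large to buffer ahead in the driver thread: let the
             * spooler finish reading (and spool to a temp file) and
             * queue the conn to a connection thread from there.
             */
            return SOCK_SPOOL;
        }
        return SOCK_MORE;   /* keep reading in the driver thread */
    }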
From: Zoran V. <zv...@ar...> - 2006-01-03 18:57:19
Am 03.01.2006 um 16:47 schrieb Vlad Seryakov:

> The spool thread can be a replica of the driver thread: it will do the
> same as the driver and then pass the Sock to the conn thread. So making
> it C-based is not that hard, and it will still be fast and small.
>
> Plus, upload statistics will now be handled in the spool thread, not
> the driver thread, so no overhead and no slowdown there.
>
> So we read up to maxreadahead, then queue the Sock into the spooling
> thread, and the spooling thread will give it to the conn thread. The
> spooling thread will use the same Sock/Driver structures; actually, the
> Sock will not see any difference between running in the driver thread
> or in the spooling thread.
>
> I can try to provide a test of using a spooling thread along with the
> driver thread today if I have time.

This is all true, and perhaps the easiest way to get to the result in the short term. I appreciate very much the effort you're putting in, and we will definitely use it as soon as you get something working stably enough.

Still, our precious (little) server would definitely benefit from some additional event-loop-type processing where you need not spawn a thread for just about anything. Tcl has come a long way with its simple event loop and single-thread approach.

Ultimately I can imagine something like:

                 ---------
                     |
               driver_thread
               /           \
      conn_threads       spool_threads
           |                   |
      blocking_io         nonblocking-io

where one would do blocking IO (db stuff etc.) whereas the other can be used for simple non-blocking ops like copy_file_to_socket (or vice versa) types of operations.

Zoran
From: Vlad S. <vl...@cr...> - 2006-01-03 21:37:43
I submitted a Service Request with a new patch with spooler support. Very simple, but it seems to be working well.

Zoran Vasiljevic wrote:
> This is all true, and perhaps the easiest way to get to the result in
> the short term. I appreciate very much the effort you're putting in,
> and we will definitely use it as soon as you get something working
> stably enough.
>
> Still, our precious (little) server would definitely benefit from some
> additional event-loop-type processing where you need not spawn a thread
> for just about anything. Tcl has come a long way with its simple event
> loop and single-thread approach.
>
> Ultimately I can imagine something like:
>
>                  ---------
>                      |
>                driver_thread
>                /           \
>       conn_threads       spool_threads
>            |                   |
>       blocking_io         nonblocking-io
>
> where one would do blocking IO (db stuff etc.) whereas the other can be
> used for simple non-blocking ops like copy_file_to_socket (or vice
> versa) types of operations.
From: Vlad S. <vl...@cr...> - 2006-01-04 05:18:06
Attachments:
driver.c
I am attaching the whole driver.c file because a patch would not be very readable.

See whether this is a good solution or not: it works and uses a separate thread for all reading and spooling, and all upload stats are done in the spooler thread, so the driver thread now works without any locking.

Vlad Seryakov wrote:
> I submitted a Service Request with a new patch with spooler support.
> Very simple, but it seems to be working well.
From: Zoran V. <zv...@ar...> - 2006-01-04 07:13:59
Am 04.01.2006 um 06:20 schrieb Vlad Seryakov:

> I am attaching the whole driver.c file because a patch would not be
> very readable.

That won't go through, as the list limits messages to 40K. Go and upload it into the RFE instead. That should work.

Zoran
From: Stephen D. <sd...@gm...> - 2006-01-04 10:31:07
On 1/3/06, Vlad Seryakov <vl...@cr...> wrote:
> I am attaching the whole driver.c file because a patch would not be
> very readable.
>
> See whether this is a good solution or not: it works and uses a
> separate thread for all reading and spooling, and all upload stats are
> done in the spooler thread, so the driver thread now works without any
> locking.
> ...

I find it confusing that the actual spooling code is not in the SpoolThread, but still in SockRead().

Take a look at nsd/task.c. I think you should be able to implement this as an Ns_Task callback, which gives you the extra thread and all the poll handling etc. for free. Move the spooling code from SockRead into the task callback. Later, we could add another task queue for large downloads.

Can you split out the upload stats code? One functional change at a time is much easier to follow.

I think the basic approach you've got here is the right first step: a single thread, blocking disk writes, triggered from the driver thread so there is no bouncing to and from the conn thread. The basic programming model remains the same. Once the upload code is working we can add the upload stats and pre-queue filters for quota checking etc. After that we can investigate AIO for disk access to make everything more efficient.
From: Vlad S. <vl...@cr...> - 2006-01-04 15:13:30
> I find it confusing that the actual spooling code is not in the
> SpoolThread, but still in SockRead().
>
> Take a look at nsd/task.c. I think you should be able to implement
> this as an Ns_Task callback, which gives you the extra thread and all
> the poll handling etc. for free. Move the spooling code from SockRead
> into the task callback.
>
> Can you split out the upload stats code? One functional change at a
> time is much easier to follow.

The main reason to reuse driver.c is that the spooler is almost identical to the driver thread and uses the same functions as the driver. The spooler can be disabled (config option), in which case the driver works as usual. It also does parsing and other Sock-related things like conn queueing; doing it as tasks would result in a lot of code duplication just to run it in a generic socket-callback thread. I think it benefits from being in driver.c: it is very specific and does what it is supposed to do. The upload stats belong in the spooler as well; otherwise we'd get into another round of locking optimization.
From: Zoran V. <zv...@ar...> - 2006-01-06 10:20:47
Am 04.01.2006 um 16:15 schrieb Vlad Seryakov:

> The main reason to reuse driver.c is that the spooler is almost
> identical to the driver thread and uses the same functions as the
> driver. The spooler can be disabled (config option), in which case the
> driver works as usual. It also does parsing and other Sock-related
> things like conn queueing; doing it as tasks would result in a lot of
> code duplication just to run it in a generic socket-callback thread.
> I think it benefits from being in driver.c: it is very specific and
> does what it is supposed to do. The upload stats belong in the spooler
> as well; otherwise we'd get into another round of locking optimization.

Can you upload the entire driver.c into the RFE? I would like to have a deeper look at the whole thing...

I examined the patch in the RFE and it seems to me (not sure, though) that you have more or less duplicated the driver-thread processing. So we'd have just ONE spool thread collecting data from sockets, and not a spool thread PER socket?

If this is true, then I believe we have a good interim solution to the upload blues. At least we can accept a number of uploads at the same time and be able to query statistics. We can even invent a kind of "stop-upload" bit which can be flipped by an inspector to stop the upload if needed. This can also satisfy requirements about quota enforcement etc. There will be no thread pollution, as we'll have just ONE new background thread, and we'll be able to support a number of uploads at the same time without wasting resources on the server. Cool.

Cheers,
Zoran
From: Vlad S. <vl...@cr...> - 2006-01-06 15:54:46
I uploaded driver.c into the RFE. It needs more testing, because after my last corrections and cleanups it seems I broke something.

Zoran Vasiljevic wrote:
> Can you upload the entire driver.c into the RFE? I would like to have
> a deeper look at the whole thing...
>
> I examined the patch in the RFE and it seems to me (not sure, though)
> that you have more or less duplicated the driver-thread processing.
> So we'd have just ONE spool thread collecting data from sockets, and
> not a spool thread PER socket?
>
> If this is true, then I believe we have a good interim solution to the
> upload blues. At least we can accept a number of uploads at the same
> time and be able to query statistics. We can even invent a kind of
> "stop-upload" bit which can be flipped by an inspector to stop the
> upload if needed. This can also satisfy requirements about quota
> enforcement etc. There will be no thread pollution, as we'll have just
> ONE new background thread, and we'll be able to support a number of
> uploads at the same time without wasting resources on the server. Cool.
From: Zoran V. <zv...@ar...> - 2006-01-07 08:30:12
Am 06.01.2006 um 16:56 schrieb Vlad Seryakov:

> I uploaded driver.c into the RFE. It needs more testing, because after
> my last corrections and cleanups it seems I broke something.

What is the meaning of:

    case SOCK_SPOOL:
        if (!SockSpoolPush(sockPtr)) {
            sockPtr->nextPtr = readPtr;
            readPtr = sockPtr;
        }
        break;

As far as I can see, SockSpoolPush() can only return true, hence the test is not needed. I would rewrite SockSpoolPush() to return void and hence:

    case SOCK_SPOOL:
        SockSpoolPush(sockPtr);
        break;

Or is there any hidden case where SockSpoolPush() can fail (I can't see that from the code)?

Cheers,
Zoran
From: Vlad S. <vl...@cr...> - 2006-01-07 16:38:28
Just in case, for future enhancements: we may restrict the spooling queue and keep Socks in a waiting list in the driver, for a timeout or something else.

Zoran Vasiljevic wrote:
> What is the meaning of:
>
>     case SOCK_SPOOL:
>         if (!SockSpoolPush(sockPtr)) {
>             sockPtr->nextPtr = readPtr;
>             readPtr = sockPtr;
>         }
>         break;
>
> As far as I can see, SockSpoolPush() can only return true, hence the
> test is not needed. I would rewrite SockSpoolPush() to return void.
>
> Or is there any hidden case where SockSpoolPush() can fail (I can't
> see that from the code)?
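To give the driver-side fallback in the quoted snippet a purpose, SockSpoolPush() would have to be able to refuse a Sock, for example when a bounded queue is full. A hedged sketch of that idea follows; the globals (spoolLock, spoolCond, spoolSockPtr, spoolQueueLen, spoolMaxQueue) are invented names, the snippet assumes the usual internal nsd.h declarations, and the real patch wakes the spooler through a trigger pipe rather than a condition variable:

    /*
     * Sketch only: push a Sock onto a bounded spooler queue.  Returning
     * NS_FALSE lets the driver thread keep the socket on its own read
     * list, as in the SOCK_SPOOL case quoted above.
     */
    static int
    SockSpoolPush(Sock *sockPtr)
    {
        int pushed = NS_FALSE;

        Ns_MutexLock(&spoolLock);
        if (spoolQueueLen < spoolMaxQueue) {
            sockPtr->nextPtr = spoolSockPtr;    /* prepend to the queue    */
            spoolSockPtr = sockPtr;
            spoolQueueLen++;
            pushed = NS_TRUE;
            Ns_CondSignal(&spoolCond);          /* wake one spooler thread */
        }
        Ns_MutexUnlock(&spoolLock);

        return pushed;
    }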
From: Zoran V. <zv...@ar...> - 2006-01-07 20:53:34
Am 07.01.2006 um 17:41 schrieb Vlad Seryakov:

> Just in case, for future enhancements: we may restrict the spooling
> queue and keep Socks in a waiting list in the driver, for a timeout or
> something else.

All right.
From: Zoran V. <zv...@ar...> - 2006-01-11 11:11:35
Am 06.01.2006 um 16:56 schrieb Vlad Seryakov:

> I uploaded driver.c into the RFE. It needs more testing, because after
> my last corrections and cleanups it seems I broke something.

I would clean/destroy the "uploadTable" AND the "hosts" hashtable in NsWaitDriversShutdown() if the drivers have been stopped all right:

    } else {
        Ns_Log(Notice, "driver: shutdown complete");
        driverThread = NULL;
        ns_sockclose(drvPipe[0]);
        ns_sockclose(drvPipe[1]);
        /* CLEANUP THE hosts hashtable */
    }

    } else {
        Ns_Log(Notice, "spooler: shutdown complete");
        spoolThread = NULL;
        ns_sockclose(spoolPipe[0]);
        ns_sockclose(spoolPipe[1]);
        /* CLEANUP THE uploadTable hashtable */
    }
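A hedged sketch of what the cleanup marked in the comments above could look like, assuming both tables are Tcl_HashTable instances whose values were ns_malloc'ed by the driver/spooler. Whether the values are actually owned at this point depends on the real driver.c, so treat this as illustrative only:

    /*
     * Sketch only: tear down a hash table on shutdown, freeing the
     * stored values first.  Call as FreeTable(&hosts) on the driver
     * side and FreeTable(&uploadTable) on the spooler side.
     */
    static void
    FreeTable(Tcl_HashTable *tablePtr)
    {
        Tcl_HashSearch  search;
        Tcl_HashEntry  *hPtr = Tcl_FirstHashEntry(tablePtr, &search);

        while (hPtr != NULL) {
            ns_free(Tcl_GetHashValue(hPtr));  /* free the value, if owned here */
            hPtr = Tcl_NextHashEntry(&search);
        }
        Tcl_DeleteHashTable(tablePtr);
    }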
From: Vlad S. <vl...@cr...> - 2006-01-11 19:32:25
Sure.

Zoran Vasiljevic wrote:
> I would clean/destroy the "uploadTable" AND the "hosts" hashtable in
> NsWaitDriversShutdown() if the drivers have been stopped all right.
From: Vlad S. <vl...@cr...> - 2006-01-06 16:44:33
> I examined the patch in the RFE and it seems to me (not sure, though)
> that you have more or less duplicated the driver-thread processing.
> So we'd have just ONE spool thread collecting data from sockets, and
> not a spool thread PER socket?

Yes, it is a smaller replica of the driver thread, because it does exactly the same thing: it reads the data the way the driver does it, but uses spooling and upload stats, and it is completely independent from the driver itself.

> If this is true, then I believe we have a good interim solution to the
> upload blues. At least we can accept a number of uploads at the same
> time and be able to query statistics. We can even invent a kind of
> "stop-upload" bit which can be flipped by an inspector to stop the
> upload if needed. This can also satisfy requirements about quota
> enforcement etc. There will be no thread pollution, as we'll have just
> ONE new background thread, and we'll be able to support a number of
> uploads at the same time without wasting resources on the server. Cool.

There are a lot of options here, actually, like:

- a config option to enable/disable spooling
- the number of spooling threads, for multi-CPU boxes and heavy-upload sites
- making the spooler thread do additional checks like quota, permissions, etc.
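If the spooler does stay configurable, the knobs from the list above could be read during driver initialization roughly as below. The parameter names ("spoolerthreads", "maxupload"), the defaults and the function name are invented for the example; Ns_ConfigGetInt() simply leaves the preset default untouched when a parameter is absent from the config file:

    /*
     * Sketch only: hypothetical spooler configuration, read once when
     * the driver module is initialized.  Zero spooler threads would
     * disable the feature entirely.
     */
    static int spoolerThreads = 1;   /* 0 disables spooling             */
    static int maxUpload      = 0;   /* 0 = never spool based on size   */

    static void
    SpoolerConfig(char *server, char *module)
    {
        char *path = Ns_ConfigGetPath(server, module, NULL);

        Ns_ConfigGetInt(path, "spoolerthreads", &spoolerThreads);
        Ns_ConfigGetInt(path, "maxupload", &maxUpload);
    }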
From: Zoran V. <zv...@ar...> - 2006-01-06 16:59:38
Am 06.01.2006 um 17:46 schrieb Vlad Seryakov:

> There are a lot of options here, actually, like:
> - a config option to enable/disable spooling

I do not think this is needed; I'd enable it all the time. Or perhaps, if we make it configurable (see below), a count of 0 spooler threads disables the functionality and a count > 0 enables it.

> - the number of spooling threads, for multi-CPU boxes and heavy-upload
>   sites

True. There is one queue of socket structs operated by SockSpoolPush and SockSpoolPull, so you can start a number of spooling threads. The driver thread will fill the queue (one producer) while the spooler thread(s) pull from the queue (many consumers). Cool.

> - making the spooler thread do additional checks like quota,
>   permissions, etc.

Hmmm... plus the location of the temp area where the files are spooled? Well, we can improve this as we go. The important thing is to have some kind of solution to the basics (controllable upload + statistics), and we have this now.

I'm still studying the code. You say you broke something with the recent changes? What do you have in mind?

Zoran
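The consumer side of the one-producer/many-consumers queue described above might look roughly like this, using the same invented globals as the push sketch earlier plus a hypothetical spoolShutdown flag. Again, the real driver.c signals the spooler over a pipe; the condition variable here only keeps the sketch short:

    /*
     * Sketch only: each spooler thread pulls one Sock at a time from
     * the shared queue.  Any number of these consumers can run against
     * the single producer (the driver thread).
     */
    static Sock *
    SockSpoolPull(void)
    {
        Sock *sockPtr;

        Ns_MutexLock(&spoolLock);
        while (spoolSockPtr == NULL && !spoolShutdown) {
            Ns_CondWait(&spoolCond, &spoolLock);   /* wait for a push */
        }
        sockPtr = spoolSockPtr;
        if (sockPtr != NULL) {
            spoolSockPtr = sockPtr->nextPtr;
            spoolQueueLen--;
        }
        Ns_MutexUnlock(&spoolLock);

        return sockPtr;    /* NULL only on shutdown */
    }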
From: Vlad S. <vl...@cr...> - 2006-01-06 17:23:52
It is already fixed in the last uploaded driver.c; it works well now.

Zoran Vasiljevic wrote:
> I'm still studying the code. You say you broke something with the
> recent changes? What do you have in mind?
From: Zoran V. <zv...@ar...> - 2006-01-06 17:30:48
Am 06.01.2006 um 18:25 schrieb Vlad Seryakov:

> It is already fixed in the last uploaded driver.c; it works well now.

One question: what happens in

    sockPtr->keep = 0;
    if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
        n = SockRead(sockPtr, 1);
    } else {
        n = SOCK_READY;
    }

if sockPtr->drvPtr->opts does not have NS_DRIVER_ASYNC set? You set SOCK_READY, all right... but what then?

Zoran
From: Vlad S. <vl...@cr...> - 2006-01-06 17:58:22
The sockPtr will be queued into the connection queue for processing. That may happen from the driver thread or from the spooler thread; it works equally well either way.

Zoran Vasiljevic wrote:
> One question: what happens in
>
>     sockPtr->keep = 0;
>     if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
>         n = SockRead(sockPtr, 1);
>     } else {
>         n = SOCK_READY;
>     }
>
> if sockPtr->drvPtr->opts does not have NS_DRIVER_ASYNC set? You set
> SOCK_READY, all right... but what then?
From: Zoran V. <zv...@ar...> - 2006-01-07 08:16:21
Am 06.01.2006 um 19:00 schrieb Vlad Seryakov:

> The sockPtr will be queued into the connection queue for processing.
> That may happen from the driver thread or from the spooler thread; it
> works equally well either way.

Hmhmhmhmhmhmhm...

SOCK_SPOOL is only returned from SockRead(). SockRead() is only attempted in the DriverThread if:

    if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
        n = SockRead(sockPtr, 1);
    } else {
        n = SOCK_READY;
    }

So the same test as above in the SpoolThread has no meaning at all, since you will never get to the SpoolThread unless somebody (the driver thread) calls SockRead, which MAY return SOCK_SPOOL. Do I see this right?

I'm not nitpicking! I'm just trying to understand the code. I believe this could be misleading, and you should (if I'm correct) remove the above snippet from the SpoolThread and just write:

    n = SockRead(sockPtr, 1);

Zoran
From: Zoran V. <zv...@ar...> - 2006-01-07 08:23:39
Am 07.01.2006 um 09:18 schrieb Zoran Vasiljevic:

>     if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
>         n = SockRead(sockPtr, 1);

Uhhhh, sorry, I quoted the wrong part... it should be:

    if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
        n = SockRead(sockPtr, 0);

Cut/paste error.

Zoran