From: Zoran V. <zv...@ar...> - 2006-01-07 06:07:12

On 06.01.2006 at 20:18, Stephen Deasey wrote:

> On 1/6/06, Zoran Vasiljevic <zv...@ar...> wrote:
>>
>> On 04.01.2006 at 20:40, Stephen Deasey wrote:
>>
>>> On 1/4/06, Vlad Seryakov <vl...@cr...> wrote:
>>>> I believe poll can be used on any file descriptor, not only sockets.
>>>
>>> It doesn't work if the file descriptor is backed by a file on disk.
>>> If it did, we wouldn't have to talk about aio_read() :-)
>>
>> Hmmm...
>>
>> SYNOPSIS
>>      #include <poll.h>
>>
>>      int
>>      poll(struct pollfd *fds, nfds_t nfds, int timeout);
>>
>> DESCRIPTION
>>      Poll() examines a set of file descriptors to see if some of them
>>      are ready for I/O or if certain events have occurred on them. The
>>      fds argument is a pointer to an array of pollfd structures as
>>      defined in <poll.h> (shown below). The nfds argument determines
>>      the size of the fds array.
>>
>> I believe that poll should work with files as well. That is, I can't
>> find any reason why it shouldn't by reading the man page and inspecting
>> the "poll" emulation we have in nsd/unix.c. Tcl also uses similar
>> machinery to implement non-blocking read/write to files (see below).
>>
>> AIO comes into play where you basically have one more layer of
>> processing in the kernel which handles your dispatched events and lets
>> you asynchronously inspect them, cancel them, etc.
>>
>> So: poll + tcl_event_loop *= AIO. Unfortunately this works only for
>> single-threaded apps, as tcl_event_loop only handles events from the
>> current thread. Roughly. As AIO is normally done by the kernel, it
>> is/should be much faster. One can always simulate AIO with a
>> specialized thread and non-blocking reads/writes. But having all this
>> done for you in the kernel (as in some "proper" implementation), things
>> should be simpler to implement and faster in deployment.
>
> Don't believe everything you read. Man pages are often little more
> than hopes and dreams...
>
> In practice poll() does not work with files backed by disk. Even the
> Open Group specifies that "Regular files always poll TRUE for reading
> and writing."
>
> http://www.opengroup.org/onlinepubs/007908799/xsh/poll.html

This is what you meant, right?

    Regular files always poll TRUE for reading and writing.

I find it hard to believe that a write to or read from a slow filesystem like a floppy disk or an NFS mount is always readable or writable! It just makes no sense to me.

> Make sure you've read the C10K page:
>
> http://www.kegel.com/c10k.html

This is also what you meant, right?

    Use nonblocking calls (e.g. write() on a socket set to O_NONBLOCK) to
    start I/O, and readiness notification (e.g. poll() or /dev/poll) to
    know when it's OK to start the next I/O on that channel. Generally
    only usable with network I/O, not disk I/O.

    Use asynchronous calls (e.g. aio_write()) to start I/O, and
    completion notification (e.g. signals or completion ports) to know
    when the I/O finishes. Good for both network and disk I/O.

Again, "only usable with network I/O, not disk I/O" is there. That means people who want to utilize non-blocking I/O to disk files (possibly on slow filesystems like DVD-RAMs, floppies, NFS, etc.) have no possibility to write non-blocking code with poll/select and non-blocking writes, except by using AIO? This is a hard pill to swallow.

That would mean that if I'm to write a 1 MB file on a floppy, my Tcl program will be permanently busy with that one operation? Hmmm... I must try this. I have no floppy on my Mac, but I can use a mounted DVD-RAM, which is almost equally slow, and a Tcl program in the event loop, and measure the times between write events...

Zoran
From: Gustaf N. <ne...@wu...> - 2006-01-06 20:58:01

> Make sure you've read the C10K page:
>
> http://www.kegel.com/c10k.html

You might want to check lighttpd with its benchmarks; it tried a lot to improve throughput based on the C10K tips, including epoll for Linux. It is at least a nice reference: http://www.lighttpd.net/

-gustaf
From: Zoran V. <zv...@ar...> - 2006-01-06 19:31:24

On 06.01.2006 at 20:18, Stephen Deasey wrote:

> Don't believe everything you read. Man pages are often little more
> than hopes and dreams...

LOL!

> In practice poll() does not work with files backed by disk. Even the
> Open Group specifies that "Regular files always poll TRUE for reading
> and writing."
>
> http://www.opengroup.org/onlinepubs/007908799/xsh/poll.html

I will read this.

> Make sure you've read the C10K page:
>
> http://www.kegel.com/c10k.html

Most interesting. I will spend some time on this doc!

But then... what about the Tcl event loop? With the above, you indicate that their event-loop model will not really work with disk files?

Zoran

> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> for problems? Stop! Download the new AJAX search engine that makes
> searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click
> _______________________________________________
> naviserver-devel mailing list
> nav...@li...
> https://lists.sourceforge.net/lists/listinfo/naviserver-devel
From: Stephen D. <sd...@gm...> - 2006-01-06 19:18:38

On 1/6/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 04.01.2006 at 20:40, Stephen Deasey wrote:
>
>> On 1/4/06, Vlad Seryakov <vl...@cr...> wrote:
>>> I believe poll can be used on any file descriptor, not only sockets.
>>
>> It doesn't work if the file descriptor is backed by a file on disk.
>> If it did, we wouldn't have to talk about aio_read() :-)
>
> Hmmm...
>
> SYNOPSIS
>      #include <poll.h>
>
>      int
>      poll(struct pollfd *fds, nfds_t nfds, int timeout);
>
> DESCRIPTION
>      Poll() examines a set of file descriptors to see if some of them
>      are ready for I/O or if certain events have occurred on them. The
>      fds argument is a pointer to an array of pollfd structures as
>      defined in <poll.h> (shown below). The nfds argument determines
>      the size of the fds array.
>
> I believe that poll should work with files as well. That is, I can't
> find any reason why it shouldn't by reading the man page and inspecting
> the "poll" emulation we have in nsd/unix.c. Tcl also uses similar
> machinery to implement non-blocking read/write to files (see below).
>
> AIO comes into play where you basically have one more layer of
> processing in the kernel which handles your dispatched events and lets
> you asynchronously inspect them, cancel them, etc.
>
> So: poll + tcl_event_loop *= AIO. Unfortunately this works only for
> single-threaded apps, as tcl_event_loop only handles events from the
> current thread. Roughly. As AIO is normally done by the kernel, it
> is/should be much faster. One can always simulate AIO with a
> specialized thread and non-blocking reads/writes. But having all this
> done for you in the kernel (as in some "proper" implementation), things
> should be simpler to implement and faster in deployment.

Don't believe everything you read. Man pages are often little more than hopes and dreams...

In practice poll() does not work with files backed by disk. Even the Open Group specifies that "Regular files always poll TRUE for reading and writing."

http://www.opengroup.org/onlinepubs/007908799/xsh/poll.html

Make sure you've read the C10K page:

http://www.kegel.com/c10k.html
From: Vlad S. <vl...@cr...> - 2006-01-06 17:58:22

sockPtr will be queued into the connection queue for processing. It may happen from the driver thread or from the spooler thread; both work equally well.

Zoran Vasiljevic wrote:
>
> On 06.01.2006 at 18:25, Vlad Seryakov wrote:
>
>> It is already fixed in the last uploaded driver.c; it works well now
>
> One question: what happens if, in
>
>     sockPtr->keep = 0;
>     if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
>         n = SockRead(sockPtr, 1);
>     } else {
>         n = SOCK_READY;
>     }
>
> sockPtr->drvPtr->opts is not NS_DRIVER_ASYNC?
> You set SOCK_READY, all right... but what then?
>
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Zoran V. <zv...@ar...> - 2006-01-06 17:30:48

On 06.01.2006 at 18:25, Vlad Seryakov wrote:

> It is already fixed in the last uploaded driver.c; it works well now

One question: what happens if, in

    sockPtr->keep = 0;
    if (sockPtr->drvPtr->opts & NS_DRIVER_ASYNC) {
        n = SockRead(sockPtr, 1);
    } else {
        n = SOCK_READY;
    }

sockPtr->drvPtr->opts is not NS_DRIVER_ASYNC? You set SOCK_READY, all right... but what then?

Zoran
From: Vlad S. <vl...@cr...> - 2006-01-06 17:23:52

It is already fixed in the last uploaded driver.c; it works well now.

Zoran Vasiljevic wrote:
>
> On 06.01.2006 at 17:46, Vlad Seryakov wrote:
>
>> There are a lot of options here actually, like:
>> - config option to enable/disable spooling
>
> I do not think this is needed. I'd enable it all the time.
> Or perhaps, if we make it configurable (see below), a
> 0 count of spooler threads disables the functionality
> and a >0 count enables it.
>
>> - number of spooling threads for multi-CPU boxes and heavy upload sites
>
> True. There is one queue for socket structs, operated by
> SockSpoolPush and SockSpoolPull, so you can start a
> number of spooling threads. The driver thread will fill
> the queue (one producer) while the spooler thread(s)
> pull from the queue (many consumers). Cool.
>
>> - making the spooler thread do additional checks like quota,
>> permissions etc.
>
> Hmmm... plus the location of the temp area where the files are spooled?
> Well, we can improve this as we go. The important thing is to have some
> kind of solution to the basics (controllable upload + statistics), and
> we have this now.
>
> I'm still studying the code. You say you broke something
> with recent changes? What do you have in mind?
>
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Zoran V. <zv...@ar...> - 2006-01-06 16:59:38

On 06.01.2006 at 17:46, Vlad Seryakov wrote:

> There are a lot of options here actually, like:
> - config option to enable/disable spooling

I do not think this is needed. I'd enable it all the time. Or perhaps, if we make it configurable (see below), a 0 count of spooler threads disables the functionality and a >0 count enables it.

> - number of spooling threads for multi-CPU boxes and heavy upload
> sites

True. There is one queue for socket structs, operated by SockSpoolPush and SockSpoolPull, so you can start a number of spooling threads. The driver thread will fill the queue (one producer) while the spooler thread(s) pull from the queue (many consumers). Cool.

> - making the spooler thread do additional checks like quota,
> permissions etc.

Hmmm... plus the location of the temp area where the files are spooled? Well, we can improve this as we go. The important thing is to have some kind of solution to the basics (controllable upload + statistics), and we have this now.

I'm still studying the code. You say you broke something with recent changes? What do you have in mind?

Zoran
From: Vlad S. <vl...@cr...> - 2006-01-06 16:44:33

> I examined the patch in the RFE and it seems to me (not sure though)
> that you have more or less duplicated the driver thread processing.
> So, we'd have just ONE spool thread collecting data from sockets, and
> not a spool thread PER socket?

Yes, it is a smaller replica of the driver thread, because it does exactly the same thing: it reads the data the way the driver does, but uses spooling and upload stats, and it is completely independent from the driver itself.

> If this is true, then I believe we have a good interim solution to
> the upload blues. At least we can accept a number of uploads at the
> same time and be able to query statistics. We can even invent a kind
> of "stop-upload" bit which can be flipped by the inspector to stop
> the upload if needed. This can also satisfy requirements about
> quota enforcement etc... There will be no thread pollution, as we'll
> have just ONE new background thread, and we'll be able to support a
> number of uploads at the same time w/o wasting resources on the server.
> Cool.

There are a lot of options here actually, like:

- config option to enable/disable spooling
- number of spooling threads for multi-CPU boxes and heavy upload sites
- making the spooler thread do additional checks like quota, permissions etc.

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Vlad S. <vl...@cr...> - 2006-01-06 15:54:46

I uploaded driver.c into the RFE. It needs more testing, because after my last corrections and cleanups it seems I broke something.

Zoran Vasiljevic wrote:
>
> On 04.01.2006 at 16:15, Vlad Seryakov wrote:
>
>> The main reason to reuse driver.c is that the spooler is almost
>> identical to the driver thread, and uses the same functions as the
>> driver. The spooler can be disabled (config option), in which case the
>> driver works as usual. Also it does parsing and other Sock-related
>> things like conn queueing; making it into tasks would result in very
>> big code duplication just to run it in a generic socket callback
>> thread. I think it benefits from being in driver.c; it is very
>> specific and does what it is supposed to do. Also, upload stats belong
>> to the spooler as well, otherwise we get another round of locking
>> optimisation.
>
> Can you upload the entire driver.c in the RFE? I would like to
> have a deeper look at the whole...
>
> I examined the patch in the RFE and it seems to me (not sure though)
> that you have more or less duplicated the driver thread processing.
> So, we'd have just ONE spool thread collecting data from sockets, and
> not a spool thread PER socket?
>
> If this is true, then I believe we have a good interim solution to
> the upload blues. At least we can accept a number of uploads at the
> same time and be able to query statistics. We can even invent a kind
> of "stop-upload" bit which can be flipped by the inspector to stop
> the upload if needed. This can also satisfy requirements about
> quota enforcement etc... There will be no thread pollution, as we'll
> have just ONE new background thread, and we'll be able to support a
> number of uploads at the same time w/o wasting resources on the server.
> Cool.
>
> Cheers
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Zoran V. <zv...@ar...> - 2006-01-06 10:20:47

On 04.01.2006 at 16:15, Vlad Seryakov wrote:

> The main reason to reuse driver.c is that the spooler is almost
> identical to the driver thread, and uses the same functions as the
> driver. The spooler can be disabled (config option), in which case
> the driver works as usual. Also it does parsing and other
> Sock-related things like conn queueing; making it into tasks would
> result in very big code duplication just to run it in a generic
> socket callback thread. I think it benefits from being in driver.c;
> it is very specific and does what it is supposed to do. Also, upload
> stats belong to the spooler as well, otherwise we get another round
> of locking optimisation.

Can you upload the entire driver.c in the RFE? I would like to have a deeper look at the whole...

I examined the patch in the RFE and it seems to me (not sure though) that you have more or less duplicated the driver thread processing. So, we'd have just ONE spool thread collecting data from sockets, and not a spool thread PER socket?

If this is true, then I believe we have a good interim solution to the upload blues. At least we can accept a number of uploads at the same time and be able to query statistics. We can even invent a kind of "stop-upload" bit which can be flipped by the inspector to stop the upload if needed. This can also satisfy requirements about quota enforcement etc... There will be no thread pollution, as we'll have just ONE new background thread, and we'll be able to support a number of uploads at the same time w/o wasting resources on the server. Cool.

Cheers
Zoran
From: Zoran V. <zv...@ar...> - 2006-01-06 10:13:13

On 04.01.2006 at 20:40, Stephen Deasey wrote:

> On 1/4/06, Vlad Seryakov <vl...@cr...> wrote:
>> I believe poll can be used on any file descriptor, not only sockets.
>
> It doesn't work if the file descriptor is backed by a file on disk.
> If it did, we wouldn't have to talk about aio_read() :-)

Hmmm...

SYNOPSIS
     #include <poll.h>

     int
     poll(struct pollfd *fds, nfds_t nfds, int timeout);

DESCRIPTION
     Poll() examines a set of file descriptors to see if some of them are
     ready for I/O or if certain events have occurred on them. The fds
     argument is a pointer to an array of pollfd structures as defined in
     <poll.h> (shown below). The nfds argument determines the size of the
     fds array.

I believe that poll should work with files as well. That is, I can't find any reason why it shouldn't by reading the man page and inspecting the "poll" emulation we have in nsd/unix.c. Tcl also uses similar machinery to implement non-blocking read/write to files (see below).

AIO comes into play where you basically have one more layer of processing in the kernel which handles your dispatched events and lets you asynchronously inspect them, cancel them, etc.

So: poll + tcl_event_loop *= AIO. Unfortunately this works only for single-threaded apps, as tcl_event_loop only handles events from the current thread. Roughly. As AIO is normally done by the kernel, it is/should be much faster. One can always simulate AIO with a specialized thread and non-blocking reads/writes. But having all this done for you in the kernel (as in some "proper" implementation), things should be simpler to implement and faster in deployment.

Cheers
Zoran
From: Zoran V. <zv...@ar...> - 2006-01-06 09:13:46

On 06.01.2006 at 08:25, Stephen Deasey wrote:

> Don't forget to run make test before you commit...
>
> ==== ns_conn-1.2 basic syntax: wrong argument FAILED

Ah...

Zoran
From: Stephen D. <sd...@gm...> - 2006-01-06 07:25:12

Don't forget to run make test before you commit...

    ==== ns_conn-1.2 basic syntax: wrong argument FAILED

On 1/4/06, Zoran Vasiljevic <vas...@us...> wrote:
> Update of /cvsroot/naviserver/naviserver/nsd
> In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv15762/nsd
>
> Modified Files:
>         conn.c
> Log Message:
> * configure.in: added test for inet_ntop.
> * nsthread/reentrant.c: use inet_ntop when available which fixes a
>   bug on 64 bit ppc, at least under linux.
> * nsd/conn.c: added option to [ns_conn] to set/query contentsentlength
>   for getting the correct length into the logfile, when files are
>   delivered via event driven I/O. Thanks to Gustaf Neumann for providing
>   above fixes.
>
> Index: conn.c
> ===================================================================
> RCS file: /cvsroot/naviserver/naviserver/nsd/conn.c,v
> retrieving revision 1.23
> retrieving revision 1.24
> diff -C2 -d -r1.23 -r1.24
> *** conn.c      12 Nov 2005 20:50:33 -0000      1.23
> --- conn.c      4 Jan 2006 14:08:03 -0000       1.24
> ***************
> *** 1031,1034 ****
> --- 1031,1035 ----
>       static CONST char *opts[] = {
>           "authpassword", "authuser", "close", "content", "contentlength",
> +         "contentsentlength",
>           "copy", "channel", "driver", "encoding", "files", "fileoffset",
>           "filelength", "fileheaders", "flags", "form", "headers",
> ***************
> *** 1042,1046 ****
>       enum ISubCmdIdx {
>           CAuthPasswordIdx, CAuthUserIdx, CCloseIdx, CContentIdx,
> !         CContentLengthIdx, CCopyIdx, CChannelIdx, CDriverIdx, CEncodingIdx,
>           CFilesIdx, CFileOffIdx, CFileLenIdx, CFileHdrIdx, CFlagsIdx,
>           CFormIdx, CHeadersIdx, CHostIdx, CIdIdx, CIsConnectedIdx,
> --- 1043,1047 ----
>       enum ISubCmdIdx {
>           CAuthPasswordIdx, CAuthUserIdx, CCloseIdx, CContentIdx,
> !         CContentLengthIdx, CContentSentLenIdx, CCopyIdx, CChannelIdx, CDriverIdx, CEncodingIdx,
>           CFilesIdx, CFileOffIdx, CFileLenIdx, CFileHdrIdx, CFlagsIdx,
>           CFormIdx, CHeadersIdx, CHostIdx, CIdIdx, CIsConnectedIdx,
> ***************
> *** 1397,1400 ****
> --- 1398,1415 ----
>               Tcl_RegisterChannel(interp, chan);
>               Tcl_SetStringObj(result, Tcl_GetChannelName(chan), -1);
> +             break;
> +
> +         case CContentSentLenIdx:
> +             if (objc == 2) {
> +                 Tcl_SetIntObj(result, connPtr->nContentSent);
> +             } else if (objc == 3) {
> +                 if (Tcl_GetIntFromObj(interp, objv[2], &connPtr->nContentSent) != TCL_OK) {
> +                     return TCL_ERROR;
> +                 }
> +             } else {
> +                 Tcl_WrongNumArgs(interp, 2, objv, "?value?");
> +                 return TCL_ERROR;
> +             }
> +             break;
>       }
>
> -------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc. Do you grep through log files
> for problems? Stop! Download the new AJAX search engine that makes
> searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
> http://ads.osdn.com/?ad_id=7637&alloc_id=16865&op=click
> _______________________________________________
> naviserver-commits mailing list
> nav...@li...
> https://lists.sourceforge.net/lists/listinfo/naviserver-commits
From: Stephen D. <sd...@gm...> - 2006-01-04 19:40:45

On 1/4/06, Vlad Seryakov <vl...@cr...> wrote:
> I believe poll can be used on any file descriptor, not only sockets.

It doesn't work if the file descriptor is backed by a file on disk. If it did, we wouldn't have to talk about aio_read() :-)
From: Vlad S. <vl...@cr...> - 2006-01-04 16:55:38

As far as I remember, JimD rewrote ns_http using the new tasks API, but other than that I do not know where/how the tasks API is used.

The tasks API is good; I agree it is more generic, and a Tcl interface can be built around it, so instead of using the Tcl event loop it will be possible to use tasks. With the Tcl main loop you need to spawn separate threads if you need multiple tasks to be queued from different threads/places.

Zoran Vasiljevic wrote:
>
> On 04.01.2006 at 16:10, Vlad Seryakov wrote:
>
>> I believe poll can be used on any file descriptor, not only sockets.
>>
>> It is a generic socket/fds callback infrastructure, very similar to
>> the socket callbacks we already have in sockcallback.c.
>>
>> It does not make sense to have both; they duplicate each other.
>
> Hm... at first glance, yes... I think the benefit of the
> "task" interface is that it's more general in terms of
> the API. Actually, one could rewrite the latter with the
> former, as far as I can see.
>
> The sockcallback interface exports just two/three calls:
>
>     Ns_SockCallback
>     Ns_SockCancelCallback
>     Ns_SockCancelCallbackEx
>
> whereas the task interface is much more elaborate.
>
> The question is: who would benefit from the task interface?
> Are there any immediate ideas?
> What did they (AS) write it for?
> What can it do that the Tcl event loop can't?
>
> Zoran

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Zoran V. <zv...@ar...> - 2006-01-04 16:51:00

On 04.01.2006 at 16:10, Vlad Seryakov wrote:

> I believe poll can be used on any file descriptor, not only sockets.
>
> It is a generic socket/fds callback infrastructure, very similar to
> the socket callbacks we already have in sockcallback.c.
>
> It does not make sense to have both; they duplicate each other.

Hm... at first glance, yes... I think the benefit of the "task" interface is that it's more general in terms of the API. Actually, one could rewrite the latter with the former, as far as I can see.

The sockcallback interface exports just two/three calls:

    Ns_SockCallback
    Ns_SockCancelCallback
    Ns_SockCancelCallbackEx

whereas the task interface is much more elaborate.

The question is: who would benefit from the task interface? Are there any immediate ideas? What did they (AS) write it for? What can it do that the Tcl event loop can't?

Zoran
From: Vlad S. <vl...@cr...> - 2006-01-04 15:13:30

> I find it confusing that the actual spooling code is not in the
> SpoolThread, but still in SockRead().
>
> Take a look at nsd/task.c. I think you should be able to implement
> this as an Ns_Task callback, which gives you the extra thread and all
> the poll handling etc. for free. Move the spooling code from SockRead
> into the task callback.
>
> Can you split out the upload stats code? One functional change at a
> time is much easier to follow.

The main reason to reuse driver.c is that the spooler is almost identical to the driver thread, and uses the same functions as the driver. The spooler can be disabled (config option), in which case the driver works as usual. Also it does parsing and other Sock-related things like conn queueing; making it into tasks would result in very big code duplication just to run it in a generic socket callback thread. I think it benefits from being in driver.c; it is very specific and does what it is supposed to do. Also, upload stats belong to the spooler as well, otherwise we get another round of locking optimisation.

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Vlad S. <vl...@cr...> - 2006-01-04 15:09:05

I believe poll can be used on any file descriptor, not only sockets.

It is a generic socket/fds callback infrastructure, very similar to the socket callbacks we already have in sockcallback.c.

It does not make sense to have both; they duplicate each other.

Stephen Deasey wrote:
> On 1/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
>> On 04.01.2006 at 11:35, Stephen Deasey wrote:
>>
>>> The queue thread monitors all sockets for events
>>> (readable/writeable) and then runs the callbacks.
>>
>> Isn't this what the driver thread is doing?
>> Is this a potential replacement/abstraction
>> of the driver thread?
>
> Maybe. I haven't looked closely enough yet.

--
Vlad Seryakov
571 262-8608 office
vl...@cr...
http://www.crystalballinc.com/vlad/
From: Stephen D. <sd...@gm...> - 2006-01-04 10:48:04

On 1/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 04.01.2006 at 11:35, Stephen Deasey wrote:
>
>> The queue thread monitors all sockets for events
>> (readable/writeable) and then runs the callbacks.
>
> Isn't this what the driver thread is doing?
> Is this a potential replacement/abstraction
> of the driver thread?

Maybe. I haven't looked closely enough yet.
From: Stephen D. <sd...@gm...> - 2006-01-04 10:42:17

On 1/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 04.01.2006 at 11:35, Stephen Deasey wrote:
>
>> It's an API for async network IO. You create a queue which is managed
>> by a single thread. You add tasks to the queue. A task is a callback
>> and a socket event. The queue thread monitors all sockets for events
>> (readable/writeable) and then runs the callbacks.
>
> Thanks. Need it be a socket, or can it handle any open descriptor?

It has to be a socket; it's a limitation of poll(). :-(
From: Zoran V. <zv...@ar...> - 2006-01-04 10:40:56

On 04.01.2006 at 11:35, Stephen Deasey wrote:

> The queue thread monitors all sockets for events
> (readable/writeable) and then runs the callbacks.

Isn't this what the driver thread is doing? Is this a potential replacement/abstraction of the driver thread?
From: Zoran V. <zv...@ar...> - 2006-01-04 10:38:17

On 04.01.2006 at 11:35, Stephen Deasey wrote:

> It's an API for async network IO. You create a queue which is managed
> by a single thread. You add tasks to the queue. A task is a callback
> and a socket event. The queue thread monitors all sockets for events
> (readable/writeable) and then runs the callbacks.

Thanks. Need it be a socket, or can it handle any open descriptor?
From: Stephen D. <sd...@gm...> - 2006-01-04 10:35:16

On 1/4/06, Zoran Vasiljevic <zv...@ar...> wrote:
> Interesting...
> What is a "task" (in a nutshell)?

It's an API for async network IO. You create a queue which is managed by a single thread. You add tasks to the queue. A task is a callback and a socket event. The queue thread monitors all sockets for events (readable/writeable) and then runs the callbacks.
From: Stephen D. <sd...@gm...> - 2006-01-04 10:31:07

On 1/3/06, Vlad Seryakov <vl...@cr...> wrote:
> I am attaching the whole driver.c file because a patch would not be
> very readable.
>
> See if this is a good solution or not. It works and uses a separate
> thread for all reading and spooling; also, all upload stats are done
> in the spooler thread, so the driver thread now works without any
> locking.
>
> ...

I find it confusing that the actual spooling code is not in the SpoolThread, but still in SockRead().

Take a look at nsd/task.c. I think you should be able to implement this as an Ns_Task callback, which gives you the extra thread and all the poll handling etc. for free. Move the spooling code from SockRead into the task callback.

Later, we could add another task queue for large downloads.

Can you split out the upload stats code? One functional change at a time is much easier to follow.

I think the basic approach you've got here is the right first step: a single thread, blocking disk writes, triggered from the driver thread so there is no bouncing between conn threads. The basic programming model remains the same.

Once the upload code is working we can add the upload stats and pre-queue filters for quota checking etc. After that we can investigate AIO for disk access to make everything more efficient.