From: Stephen D. <sd...@gm...> - 2005-06-17 20:26:23
On 6/17/05, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 17.06.2005 at 21:23, Stephen Deasey wrote:
>
> > Sounds like a plan for stage 1.
>
> Let me then try this one and see how far I'll come :-)
>
> > If there's any part of the code base we should be testing, it's this.
>
> Yes. But this is also the most complex one...

Luckily, the tests are easy to write :-)  We already have some tests
for this area of code in tests/http.test. Here's a 3-line test for
404 pages:

    test http-1.3 {HTTP/1.0 GET} -constraints serverListen -body {
        nstest_http -getbody 1 GET /noexist
    } -match glob -result {404 *Not Found*}

We just need to build up the test library one test at a time.
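(As a hedged illustration of building up the library one test at a
time, here is a sketch of a companion test in the same style. The
test name and expected result are hypothetical, not from the thread,
and it assumes the test server serves something at /:)

    test http-1.4 {HTTP/1.0 GET existing page} -constraints serverListen -body {
        nstest_http -getbody 1 GET /
    } -match glob -result {200 *}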
From: Zoran V. <zv...@ar...> - 2005-06-20 08:00:12
On 17.06.2005 at 22:26, Stephen Deasey wrote:
>
> We just need to build up the test library one test at a time.

OK. I'm going to expand this when I come to rewriting this file
upload stuff. But one thing at a time. Currently I'm rewriting the
Tcl VFS related stuff, and this will take me (given the amount of
work I have lately) at least one week or more :-(

But, we are in no rush...

Zoran
From: Zoran V. <zv...@ar...> - 2005-06-16 20:58:10
On 16.06.2005 at 22:13, Stephen Deasey wrote:
>
> At any time before the registered proc asks for the content, it can
> check the content-length header and decide whether it is too large to
> accept. You could imagine setting a low global maxinput, and a call
> such as Ns_ConnSetMaxInput() which a registered proc could call to
> increase the limit for that connection only. The advantage over the
> limits scheme in 4.1 is that the code which checks the size of the
> content and processes it is kept together, rather than having to
> pre-declare maxinput sizes for arbitrary URLs in the config file.

Aha... do:

    ns_conn setmaxinput

before we do

    ns_conn content

to get the content of the request.

Hm... so we'd keep the maxinput reasonably *small* for most of the
requests and re-set it to a larger value *before* requesting the
content when we anticipate a large file upload.

I think I'm beginning to understand... Yes, this makes sense.

Zoran
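(A minimal sketch of the pattern under discussion, assuming the
proposed ns_conn setmaxinput subcommand exists; at this point in the
thread it is a design idea, not a shipped API. The URL, proc name,
and limit are illustrative:)

    proc upload_handler {} {
        # Raise the body-size limit for this connection only; the
        # global maxinput stays small for every other request.
        ns_conn setmaxinput 50000000

        # Only now is the (possibly large) request body actually read.
        set content [ns_conn content]
        ns_return 200 text/plain "received [string length $content] bytes"
    }
    ns_register_proc POST /upload upload_handler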
From: Stephen D. <sd...@gm...> - 2005-06-17 05:48:18
On 6/16/05, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 16.06.2005 at 22:13, Stephen Deasey wrote:
>
> > At any time before the registered proc asks for the content, it can
> > check the content-length header and decide whether it is too large to
> > accept. You could imagine setting a low global maxinput, and a call
> > such as Ns_ConnSetMaxInput() which a registered proc could call to
> > increase the limit for that connection only. The advantage over the
> > limits scheme in 4.1 is that the code which checks the size of the
> > content and processes it is kept together, rather than having to
> > pre-declare maxinput sizes for arbitrary URLs in the config file.
>
> Aha... do:
>
>     ns_conn setmaxinput
>
> before we do
>
>     ns_conn content
>
> to get the content of the request.
>
> Hm... so we'd keep the maxinput reasonably *small* for most of the
> requests and re-set it to a larger value *before* requesting the
> content when we anticipate a large file upload.
>
> I think I'm beginning to understand...
> Yes, this makes sense.

Yes, that's the idea. I mentioned it because 4.1 introduces a new
'limits' mechanism to deal with this, and as we're looking at
different implementations to decide what to do, I thought I'd compare
the two strategies.

Nothing stops us from also implementing limits. But with 4.1, where
the request body is read eagerly and spilled to disk via the driver
thread, the *only* way to set the maxinput on a per-URL (as opposed
to per-server) basis is to pre-register this limits data. With the
scheme we're describing here, we can use a more natural, linear
programming style to set the limit before asking for the data.
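(The "checks and processing kept together" point from the quote might
look like this in practice: a hedged sketch, building on the handler
above, that inspects the declared body size via the existing ns_conn
contentlength subcommand before ever asking for the data. The limit
and handler name are illustrative:)

    proc upload_handler {} {
        # The size check and the processing sit side by side in one
        # proc, rather than a maxinput pre-declared per URL in the
        # config file.
        if {[ns_conn contentlength] > 50000000} {
            ns_return 413 text/plain "request body too large"
            return
        }
        set content [ns_conn content]
        ns_return 200 text/plain "received [string length $content] bytes"
    }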
From: Zoran V. <zv...@ar...> - 2005-06-17 07:57:24
On 17.06.2005 at 09:24, Stephen Deasey wrote:
>
> I believe this is pretty common, but I'm not sure if this is what you
> want. An anonymous mapping is still going to count against the
> process's memory budget, and I don't think it's going to be any more
> likely to be swapped out than malloc'ed memory.
>
> In fact, on Linux (well, glibc), if you ask malloc for a chunk of
> memory over a certain size then mmap is used internally. I think the
> advantage is reduced memory fragmentation, but there is some overhead
> for small sizes, and you have to allocate in multiples of the page
> size (4k).
>
> tmpfs would be a better bet. Modern Linux systems have this mounted
> at /dev/shm. I think this originated on Solaris. It is a very low
> overhead file system where the file data springs into existence when
> you ask for it. So, no files, no overhead. Under memory pressure
> it's sent to swap. It was designed for sharing memory between
> processes using mmap, but it's handy for /tmp and some other things.
>
> You could try setting the TMPDIR environment variable to some tmpfs
> file system if you wanted to experiment with this.

Not to forget: Darwin, Windows... I think I will have to find a
commonly acceptable solution for all OSes. Fortunately, mapping a
regular (temp) file will always work. The rest is just optimization.

Cheers,
Zoran
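(A hedged, Tcl-level illustration of that portable fallback: create a
regular temp file that the C side could then mmap. ns_mktemp is the
server's mktemp(3) wrapper; the template path is illustrative, and on
Linux pointing it at a tmpfs mount gets the optimization for free:)

    # Pick a unique temp file name; the directory decides whether it
    # lands on ordinary disk or on tmpfs (e.g. /dev/shm on Linux).
    set spool [ns_mktemp /tmp/nsupload.XXXXXX]
    set fd [open $spool {RDWR CREAT EXCL} 0600]
    # ... spill the request body into $fd; mmap it from C if needed ...
    close $fd
    file delete $spool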
From: Stephen D. <sd...@gm...> - 2005-06-17 08:11:07
On 6/17/05, Zoran Vasiljevic <zv...@ar...> wrote:
>
> On 17.06.2005 at 09:24, Stephen Deasey wrote:
>
> > I believe this is pretty common, but I'm not sure if this is what you
> > want. An anonymous mapping is still going to count against the
> > process's memory budget, and I don't think it's going to be any more
> > likely to be swapped out than malloc'ed memory.
> >
> > In fact, on Linux (well, glibc), if you ask malloc for a chunk of
> > memory over a certain size then mmap is used internally. I think the
> > advantage is reduced memory fragmentation, but there is some overhead
> > for small sizes, and you have to allocate in multiples of the page
> > size (4k).
> >
> > tmpfs would be a better bet. Modern Linux systems have this mounted
> > at /dev/shm. I think this originated on Solaris. It is a very low
> > overhead file system where the file data springs into existence when
> > you ask for it. So, no files, no overhead. Under memory pressure
> > it's sent to swap. It was designed for sharing memory between
> > processes using mmap, but it's handy for /tmp and some other things.
> >
> > You could try setting the TMPDIR environment variable to some tmpfs
> > file system if you wanted to experiment with this.
>
> Not to forget: Darwin, Windows... I think I will have to find a
> commonly acceptable solution for all OSes. Fortunately, mapping a
> regular (temp) file will always work. The rest is just optimization.

Right, but you don't need any code changes. You should be able to set
the temp directory via environment variables on those platforms that
support a fancy tmpfs. Something for the sysadmins...
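(A minimal sketch of that sysadmin-side experiment, assuming the code
spilling temp files honors TMPDIR; /dev/shm is the conventional Linux
tmpfs mount mentioned above:)

    # In the server's startup/config script, before any temp files are
    # created (or export TMPDIR in the shell that launches the server):
    set ::env(TMPDIR) "/dev/shm"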