If you upload a large (>> 100MB) file to Yaws using the example upload.yaws script, something dies with an enomem. This isn't Yaws's fault as such; AFAIK it's pretty common across most Web servers, and from Googling it seems to be a general problem. It happens because a multipart POST of a large file from most browsers results in the entire file being buffered in the Web server's memory, which is bad if the file is very large.

I'm thinking of adding a feature to Yaws that would allow the data in a standard multipart/form-data POST to be "streamed" directly to a file, buffering no more than a few MB in memory. Looking at the code, it should be doable, and it would allow uploads limited only by disk space. It would have to include some notion of a maximum allowed size, to prevent DoS-style attacks (or just plain cluelessness) from making the host run out of disk space.
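To make the idea concrete, here's a rough sketch of what a streaming handler might look like. This is purely hypothetical: it assumes Yaws could hand the handler the POST body in chunks via A#arg.clidata as {partial, Data}, and let the handler request the next chunk by returning {get_more, Cont, State}. The module name, file path, and size cap are all made up for illustration.

```erlang
%% Hypothetical sketch of a streaming upload handler for Yaws.
%% Assumes chunked delivery of the POST body ({partial, Data} in
%% A#arg.clidata) and a {get_more, Cont, State} continuation return.
-module(stream_upload).
-export([out/1]).
-include("yaws_api.hrl").

%% Cap on total upload size, to guard against DoS / runaway uploads.
-define(MAX_BYTES, 2 * 1024 * 1024 * 1024).   %% 2 GB, arbitrary

out(A) ->
    case A#arg.state of
        undefined ->
            %% First chunk: open the destination file.
            {ok, Fd} = file:open("/tmp/upload.bin", [raw, write]),
            handle_chunk(A, {Fd, 0});
        State ->
            handle_chunk(A, State)
    end.

handle_chunk(A, {Fd, Sofar}) ->
    case A#arg.clidata of
        {partial, Data} when Sofar + byte_size(Data) =< ?MAX_BYTES ->
            %% Write this chunk to disk and ask Yaws for more;
            %% only the current chunk is ever held in memory.
            ok = file:write(Fd, Data),
            {get_more, A#arg.cont, {Fd, Sofar + byte_size(Data)}};
        {partial, _Data} ->
            %% Size cap exceeded: abort rather than fill the disk.
            file:close(Fd),
            [{status, 413}];
        Data when is_binary(Data) ->
            %% Final chunk: finish the file and report success.
            ok = file:write(Fd, Data),
            file:close(Fd),
            {html, "upload complete"}
    end.
```

The key point is that memory use stays bounded by the chunk size regardless of the total upload, while the Sofar counter enforces the disk-space cap.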

Does anyone think this is (a) feasible and (b) desirable?

For every expert there is an equal and opposite expert - Arthur C. Clarke