From: Stephen D. <sd...@gm...> - 2005-12-31 10:58:27
On 12/30/05, Vlad Seryakov <vl...@cr...> wrote:
> On that note I have another idea I'd like to discuss before I even
> start coding a prototype. There is a thread on the aolserver mailing
> list about upload progress, so I thought: would it be a good idea to
> have a global url-specific cache of all uploads, let's say, all
> requests with content-length > 0? It would hold only 2 values, current
> size and total length, and would last only for the time of the upload.
> It would belong to the Sock structure of the Request, so on close it
> would be freed as well.
>
> Making it url-specific will give the Web developer the ability to
> generate unique urls for upload and then request statistics. Requests
> with the same url will not override each other; the server will update
> statistics for the first created cache only, and subsequent uploads
> with the same url will show nothing or show old values, which is fine
> for security reasons.
>
> Overhead is minimal and it will add one new command like
> ns_upload_stats url. SockRead will handle it all, so no other places
> are affected.
>
> Is it worth trying?

I think we talked about this before, but I can't find it in the mailing
list archive.

Anyway, the problem with recording the upload process is all the locking
that's required. You could minimize this, e.g. by only recording uploads
above a certain size, or to a certain URL.

It reminds me of a similar problem we had, spooling large uploads to disk:

https://sourceforge.net/mailarchive/forum.php?thread_id=7524448&forum_id=43966

Vlad implemented the actual spooling, but moving that work into the conn
threads, reading lazily, is still to be done. Lazy uploading is exactly
the hook you need to track upload progress.

The client starts to upload a file. Read-ahead occurs in the driver
thread, say 8k. Control is passed to a conn thread, which then calls
Ns_ConnContent(). The remaining content is read from the client, in the
context of the conn thread and so not blocking the driver thread, and
perhaps the content is spooled to disk.

To implement upload tracking you would register a proc for /upload which,
instead of calling Ns_ConnContent(), calls Ns_ConnRead() multiple times,
recording the number of bytes read in the upload tracking cache, and
saving the data to disk or wherever.

A lot more control of the upload process is needed, whether it be to
control size or access, to record stats, or something we haven't thought
of yet. If we complete the work to get lazy reading from the client
working, an upload tracker will be an easy module to write.

Does this make sense?
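
To make that concrete, here is a very rough sketch of what such a
registered proc could look like, assuming the lazy Ns_ConnRead() path
described above is actually in place. The UploadStat struct, the
statsTable/statsLock globals and the UploadProc name are made up for
illustration; only the ns.h calls (Ns_RegisterRequest, Ns_ConnRead,
the mutex and Tcl hash table primitives) are real API, and the cleanup
and spooling parts are omitted:

    /*
     * upload_track.c -- sketch only.  Counts bytes read per URL in a
     * mutex-protected table while pulling the body with Ns_ConnRead()
     * instead of Ns_ConnContent().
     */
    #include "ns.h"

    NS_EXPORT int Ns_ModuleVersion = 1;

    typedef struct UploadStat {
        int current;                 /* bytes read so far */
        int total;                   /* Content-Length sent by client */
    } UploadStat;

    static Ns_Mutex      statsLock;  /* guards statsTable */
    static Tcl_HashTable statsTable; /* url -> UploadStat */

    static Ns_OpProc UploadProc;

    NS_EXPORT int
    Ns_ModuleInit(char *server, char *module)
    {
        Ns_MutexInit(&statsLock);
        Tcl_InitHashTable(&statsTable, TCL_STRING_KEYS);

        /* Handle every POST under /upload with the tracking proc. */
        Ns_RegisterRequest(server, "POST", "/upload", UploadProc,
                           NULL, NULL, 0);
        return NS_OK;
    }

    static int
    UploadProc(void *arg, Ns_Conn *conn)
    {
        char           buf[8192];
        char          *url = conn->request->url;
        int            nread, isNew;
        Tcl_HashEntry *hPtr;
        UploadStat    *statPtr;

        /* Create (or reuse) the per-URL stats entry. */
        Ns_MutexLock(&statsLock);
        hPtr = Tcl_CreateHashEntry(&statsTable, url, &isNew);
        if (isNew) {
            statPtr = ns_malloc(sizeof(UploadStat));
            statPtr->current = 0;
            statPtr->total = conn->contentLength;
            Tcl_SetHashValue(hPtr, statPtr);
        } else {
            statPtr = (UploadStat *) Tcl_GetHashValue(hPtr);
        }
        Ns_MutexUnlock(&statsLock);

        /*
         * Read the body in chunks so each chunk can be counted (and
         * spooled to disk, parsed, etc. -- not shown here).
         */
        while ((nread = Ns_ConnRead(conn, buf, sizeof(buf))) > 0) {
            Ns_MutexLock(&statsLock);
            statPtr->current += nread;
            Ns_MutexUnlock(&statsLock);
            /* ... write buf to a spool file here ... */
        }

        /* A real module would remove the entry when the upload ends. */

        return Ns_ConnReturnOk(conn);
    }

The ns_upload_stats command Vlad proposed would then just be a thin Tcl
wrapper that looks the URL up in that table and returns the two numbers,
something like "ns_upload_stats /upload/abc123" giving back the current
and total byte counts.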