From: Brent W. <we...@pa...> - 2001-02-10 01:40:26
|
Hi folks - I'm switching to a new job at Panasas. They are in the network attached storage market. I'll continue to use Tcl in one way or the other. TclHttpd will continue to run all or part of the Tcl Developer Xchange. I'll be moving that site to a different physical machine, but that should remain visible at "dev.scriptics.com". -- Brent Welch <we...@pa...> http://www.beedub.com |
From: Brent W. <bre...@in...> - 2001-02-08 23:42:24
|
Whatever is in your CGI script will have no affect - that's a different process. You might try looking at /status/size which returns the number of bytes used for data and code at the Tcl script level. It won't highlight any leaks in the C core, but there shouldn't be many of those. Hmm - there is some possibility of a leak in exec on Windows - but I thought that was only in certain cases - I'll have to check on that. >>>"Derek McEachern" said: > All, > > I'm having a curious problem and I'm not sure what I > can do about it. Here's my setup: - > > Windows NT running tclhttpd3.2.0. > > Almost all my page access are tclsh cgi scripts which > have to do some kind of interaction with a mysql > database. > > It appears that something in this whole setup is gobbling > up memory that I don't recover until I kill tclhttpd. I've > been trying to figure out where the problem lies and I'm > not making much progress. > > First, I have a load of errors in the log80_error file > which look like so: > > [08/Feb/2001:15:49:12] nosock bgerror {Thu Feb 08 15:49:12 Central Standard > Time 2001 > can't unset "data": no such variable > while executing > "unset data" > (procedure "HttpdCloseFinal" line 14) > invoked from within > "HttpdCloseFinal $sock" > (procedure "Httpd_SockClose" line 44) > invoked from within > "Httpd_SockClose sock3920 1 timeout" > ("after" script)} > > I haven't got down to the root cause of this but I'm assuming that > this isn't good. > > The other thing I'm wondering about is the "code" of my > cgi's. They all look something like > > #!/bin/sh > # \ > exec tclsh "$0" ${1+"$0"} > > if {[catch { > package require ncgi > lappend auto_path [pwd] > set vars [ncgi::parse] > #--cgi code here > exit 0 > }]} { > #--Catch Errors > puts "Content-Type: text/html\n" > puts "<PRE>$errorInfo</PRE>" > exit 0 > } > > Reaching for straws now I wonder if it has anything to do > with "exiting" though I don't know why.. > > Any thoughts/ideas would be greatly appreciated. > > Derek > > > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > http://lists.sourceforge.net/lists/listinfo/tclhttpd-users -- Brent Welch <bre...@in...> http://www.interwoven.com |
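One way to act on the /status/size suggestion above is to poll it from a separate tclsh and log the numbers over time. The snippet below is only an illustrative sketch - the host, port (TclHttpd's usual default is shown), output format, and interval are assumptions to adjust for your installation.

    package require http
    # Sample TclHttpd's /status/size page once a minute and print the result,
    # so a slow leak at the Tcl script level shows up as a growing number.
    while {1} {
        set tok [::http::geturl "http://localhost:8015/status/size"]
        puts "[clock format [clock seconds]]: [::http::data $tok]"
        ::http::cleanup $tok
        after 60000   ;# sleep 60 seconds between samples
    }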
From: Derek M. <de...@ti...> - 2001-02-08 22:02:44
|
All,

I'm having a curious problem and I'm not sure what I can do about it. Here's my setup:

Windows NT running tclhttpd3.2.0. Almost all my page accesses are tclsh cgi scripts which have to do some kind of interaction with a mysql database.

It appears that something in this whole setup is gobbling up memory that I don't recover until I kill tclhttpd. I've been trying to figure out where the problem lies and I'm not making much progress.

First, I have a load of errors in the log80_error file which look like so:

    [08/Feb/2001:15:49:12] nosock bgerror {Thu Feb 08 15:49:12 Central Standard Time 2001
    can't unset "data": no such variable
        while executing
    "unset data"
        (procedure "HttpdCloseFinal" line 14)
        invoked from within
    "HttpdCloseFinal $sock"
        (procedure "Httpd_SockClose" line 44)
        invoked from within
    "Httpd_SockClose sock3920 1 timeout"
        ("after" script)}

I haven't got down to the root cause of this but I'm assuming that this isn't good.

The other thing I'm wondering about is the "code" of my cgi's. They all look something like:

    #!/bin/sh
    # \
    exec tclsh "$0" ${1+"$0"}

    if {[catch {
        package require ncgi
        lappend auto_path [pwd]
        set vars [ncgi::parse]
        #--cgi code here
        exit 0
    }]} {
        #--Catch Errors
        puts "Content-Type: text/html\n"
        puts "<PRE>$errorInfo</PRE>"
        exit 0
    }

Reaching for straws now, I wonder if it has anything to do with "exiting", though I don't know why.

Any thoughts/ideas would be greatly appreciated.

Derek
From: Brent W. <bre...@in...> - 2001-02-01 18:37:54
|
Here is the first version of my Upload domain. I'm sending it out via email to the list because CVS access to SourceForge is temporarily messed up. You'll have to read through the procedure headers of Upload_Domain and UploadTest to get an idea of how to use it. The basic idea is that you register a domain, e.g., /upload, and then you can POST forms to that domain that contain <input type=file>. It will parse the multipart/form-data and save the files to a directory of your choice, and finally make a callback to a procedure you supply (see UploadTest) There are some unimplemented limits on the number of upload files and their size. I'm open to feedback/patches on this. -- Brent Welch <bre...@in...> http://www.interwoven.com |
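The message above doesn't show the actual argument list of Upload_Domain, so the registration below is purely a hypothetical sketch of the described flow: register a prefix such as /upload, let the domain parse the multipart data and save the files, then receive a callback once they have been written. The option names and the callback signature are assumptions; the real ones are in the procedure headers Brent mentions.

    # Hypothetical registration of the upload domain described above.
    Upload_Domain /upload -dir /tmp/uploads -callback UploadDone

    # Hypothetical callback: told where the uploaded files ended up,
    # plus the ordinary (non-file) form fields.
    proc UploadDone {sock fields files} {
        Httpd_ReturnData $sock text/html \
            "<h1>Stored [llength $files] file(s) under /tmp/uploads</h1>"
    }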
From: Sailer H. (ext.) <Hol...@ks...> - 2001-02-01 09:42:47
|
I only had zombies with tclhttpd3.2.1 and the cgi interface. This happened because the asynchronous fcopy didn't call the CgiCopyDone callback (buggy fcopy), and therefore the output of the cgi script didn't get back to the browser. When the cgi process terminates then, you will get zombies, because tclhttpd still waits for the CgiCopyDone callback. Although by pressing the stop button in the browser, they go away.

Holger

> -----Original Message-----
> From: Brent Welch [SMTP:bre...@in...]
> Sent: Wednesday, 31 January 2001 18:42
> To: Steve Blinkhorn
> Cc: tcl...@li...
> Subject: Re: [Tclhttpd-users] Zombie processes
>
> What version of Tcl are you running?
> These processes may be from exec's that fail in an odd way.
> I'm looking through the changes file and don't see a reference to
> zombies, but this rings a bell. I'll ask around.
>
> >>>Steve Blinkhorn said:
> > We're getting zombie processes that are children of the tclsh we are
> > using to start tclhttpd. It's not at all clear to me where these are
> > coming from or what I should do to prevent the problem - it could be
> > that they are the result of failed transfers of one kind or another.
> > In 25 years of Unix hacking I've never really come across many
> > zombies, so I need all the help I can get. Please :-)
> >
> > --
> > Steve Blinkhorn <st...@pr...>
From: Brent W. <bre...@in...> - 2001-02-01 07:19:10
|
In response to the various folks having trouble with file uploading, I got around to implementing a file upload domain: * lib/upload.tcl: Added a file upload domain handler. Check out the definition of Upload_Url, which registers the domain, and UploadTest, which is the sample callback. I also uncovered a bug in the CGI module: * lib/cgi.tcl: Added binary translation to the pipe used to send POST data to a CGI script. These are in CVS at SourgeForge (http://sourceforge.net/projects/tclhttpd) I'll make a new tar file release "soon". -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: Brent W. <bre...@in...> - 2001-01-31 18:52:50
|
That suggests that the puts down the pipe by tclhttpd's cgi.tcl is not in binary mode, ... - ouch! that's it, looks like that direction of the pipe is not in binary mode. You can patch that in yourself - you can see in cgi.tcl in the CgiSpawn procedure where TclHttpd is about to copy the POST data down the pipe. In TclHttpd3.2.1 there is an fcopy call there. In the current CVS sources there is a registration of CgiCopyPost via a fileevent. Shortly the CVS sources will also have:

    fconfigure $fd -translation binary

>>>"Anders Nilsson" said:
> > Instead of puts' back to the browser, why don't you open a file
> > and write it out there? How does that work?
>
> Tried that, gets the same result...
>
>     set fhnew [open "result.gif" w]
>     fconfigure $fhnew -trans binary
>     puts -nonewline $fhnew $result
>     close $fhnew
>
> /Anders N

-- Brent Welch <bre...@in...> http://www.interwoven.com
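For readers following along, the patch Brent describes amounts to one added line in lib/cgi.tcl: put the pipe to the CGI process into binary mode just before the POST data is copied down it. The fcopy line below is the one already present in the 3.2.1 CgiSpawn procedure (quoted later in this thread); the fconfigure line is the addition.

    # In proc CgiSpawn (lib/cgi.tcl, TclHttpd 3.2.1), before copying POST data:
    fconfigure $fd -translation binary   ;# stop newline translation corrupting binary uploads
    fcopy $sock $fd -command [list CgiCopyDone $sock $fd] -size $data(count)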
From: Brent W. <bre...@in...> - 2001-01-31 17:39:14
|
What version of Tcl are you running? These processes may be from exec's that fail in an odd way. I'm looking through the changes file and don't see a reference to zombies, but this rings a bell. I'll ask around. >>>Steve Blinkhorn said: > We're getting zombie processes that are children of the tclsh we are > using to start tclhttpd. It's not at all clear to me where these are > coming from or what I should do to prevent the problem - it could be > that they are the result of failed transfers of one kind or another. > In 25 years of Unix hacking I've never really come across many > zombies, so I need all the help I can get. Please :-) > > -- > Steve Blinkhorn <st...@pr...> > > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > http://lists.sourceforge.net/lists/listinfo/tclhttpd-users -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: Steve B. <st...@pr...> - 2001-01-31 11:25:39
|
We're getting zombie processes that are children of the tclsh we are using to start tclhttpd. It's not at all clear to me where these are coming from or what I should do to prevent the problem - it could be that they are the result of failed transfers of one kind or another. In 25 years of Unix hacking I've never really come across many zombies, so I need all the help I can get. Please :-) -- Steve Blinkhorn <st...@pr...> |
From: Anders N. <an...@di...> - 2001-01-30 10:38:22
|
Brent Welch: > Which one is correct? The correct one is the output from 'testupload.tml': GIF89a ?yu?U?,3??aC$c!?s~--"'?Z?OS^?."f,~}|vutrqonkjih<kaschnip> The incorrect one is the output from my cgi-script: GIF89a ?yu?U?,3??aC$c!????????Z?????????~}|vutrqonkjih<kaschnip> As you see, in the output from my cgi-script, 'something' has problem readin/translating some of the input and exchanges those characters with '?'. > Make sure your stdin is in binary mode: > fconfigure stdin -trans binary > in your CGI version. How does that differ from what I'm doing? fconfigure stdin -translation {binary binary} Anyway, the result is exactly the same. My cgi-code: > > puts "Content-Type: text/html" > > fconfigure stdin -translation binary > > set sinput [read stdin $env(CONTENT_LENGTH)] > > puts -nonewline $sinput > By the way, I've started work on a file upload domain > handler for TclHttpd, but it'll probably take me a > few days to get through it. If it helps my problem, I'll gladly wait. :o) Thanks for helping! /Anders N |
From: Brent W. <bre...@in...> - 2001-01-29 21:14:21
|
By the way, I've started work on a file upload domain handler for TclHttpd, but it'll probably take me a few days to get through it. >>>"Anders Nilsson" said: > Brent Welch: > > You should print out the $env(CONTENT_TYPE) > > in your CGI - perhaps TclHttpd isn't passing through the right thing. > > I wrote a little snippet: > > puts "Content-Type: text/html" > fconfigure stdin -translation {binary binary} > set sinput [read stdin $env(CONTENT_LENGTH)] > puts -nonewline $sinput > > If I run that cgi-program and uploads a, say a GIF-file, > and compares the output with when I upload the same file > with the tclhttpd example testupload.tml, I don't get > exactly the same result... > > The differences in the result is visible below: > > My cgi-test from above: > GIF89a ?yu?U?,3??aC$c!????????Z?????????~}|vutrqonkjih<kaschnip> > > The output from testupload.tml: > GIF89a ?yu?U?,3??aC$c!?s~--"'?Z?OS^?."f,~}|vutrqonkjih<kaschnip> > > As we see, my version has some trouble with certain characters, > why I ask you. How shall I configure my cgi-script to read those > characters correctly? > > /Anders N > > > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > http://lists.sourceforge.net/lists/listinfo/tclhttpd-users -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: Brent W. <bre...@in...> - 2001-01-29 21:13:39
|
Which one is correct? Make sure your stdin is in binary mode: fconfigure stdin -trans binary in your CGI version. >>>"Anders Nilsson" said: > Brent Welch: > > You should print out the $env(CONTENT_TYPE) > > in your CGI - perhaps TclHttpd isn't passing through the right thing. > > I wrote a little snippet: > > puts "Content-Type: text/html" > fconfigure stdin -translation {binary binary} > set sinput [read stdin $env(CONTENT_LENGTH)] > puts -nonewline $sinput > > If I run that cgi-program and uploads a, say a GIF-file, > and compares the output with when I upload the same file > with the tclhttpd example testupload.tml, I don't get > exactly the same result... > > The differences in the result is visible below: > > My cgi-test from above: > GIF89a ?yu?U?,3??aC$c!????????Z?????????~}|vutrqonkjih<kaschnip> > > The output from testupload.tml: > GIF89a ?yu?U?,3??aC$c!?s~--"'?Z?OS^?."f,~}|vutrqonkjih<kaschnip> > > As we see, my version has some trouble with certain characters, > why I ask you. How shall I configure my cgi-script to read those > characters correctly? > > /Anders N > > > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > http://lists.sourceforge.net/lists/listinfo/tclhttpd-users -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: Anders N. <an...@di...> - 2001-01-29 16:19:20
|
Brent Welch:
> You should print out the $env(CONTENT_TYPE)
> in your CGI - perhaps TclHttpd isn't passing through the right thing.

I wrote a little snippet:

    puts "Content-Type: text/html"
    fconfigure stdin -translation {binary binary}
    set sinput [read stdin $env(CONTENT_LENGTH)]
    puts -nonewline $sinput

If I run that cgi-program and upload, say, a GIF file, and compare the output with when I upload the same file with the tclhttpd example testupload.tml, I don't get exactly the same result...

The differences in the result are visible below:

My cgi-test from above:
GIF89a ?yu?U?,3??aC$c!????????Z?????????~}|vutrqonkjih<kaschnip>

The output from testupload.tml:
GIF89a ?yu?U?,3??aC$c!?s~--"'?Z?OS^?."f,~}|vutrqonkjih<kaschnip>

As we see, my version has some trouble with certain characters, which is why I ask you. How shall I configure my cgi-script to read those characters correctly?

/Anders N
From: Brent W. <bre...@in...> - 2001-01-27 02:08:20
|
I'm pretty sure you don't want to do file upload in a .tml file, and it is also true that the ncgi:: package is not at all efficient because it loads the whole thing into memory. It sounds like we need a special domain handler that is designed and optimized for multipart uploads. It would, for example, automatically create the files in an upload area, and then set up some context that has the rest of the meta data, then call a handler routine (or .tml file, or something). I'd love to see something like that for TclHttpd - I can't guarantee how soon I'll get inspired to hack something up though. You might drop back to cgi.tcl in the meantime.

>>>"MENGES,JOHN E (HP-Corvallis,ex1)" said:
> Brent - I did some more investigation on this yesterday. Tcl-httpd is reading
> the POST data in binary (actually lf) mode, but the ncgi package (from tcllib
> 0.8) is doing a wholesale translation of <crlf> to <lf> in ncgi::multipart (the
> following lines):
>
>     # The query data is typically read in binary mode, which preserves
>     # the \r\n sequence from a Windows-based browser.
>     if {[regexp -- $boundary\r\n $query]} {
>         regsub -all \r\n $query \n query
>     }
>
> Seems like it should not be doing this translation on the file content. I've
> manually extracted the file from $query before this regsub is executed, and
> it's all correct to that point.
>
> My form looks like this, in a .tml file:
>
>     <form name="Form" method="post" enctype="multipart/form-data" action="">
>     <input name="file" type="file">
>     <input type="submit">
>     </form>
>
> and in the .tml file that is the target of the action (the same .tml file in
> this case), I access the file content with:
>
>     set File [open {SomeFile} w]
>     fconfigure $File -translation {binary binary}
>     puts $File [ncgi::value file]
>     close $File
>
> A sample file I tried to upload has content "Line1<cr><lf>Line2". It ends up
> as "Line1<lf>Line2".
>
> I tried commenting out the <crlf> to <lf> translation above, but ncgi::multipart
> seems to rely on this translation for other parts of the query.
>
> I have a couple of workarounds in mind. One option is to grab the query data
> before it is destroyed and parse it myself. Another is to drop out to Don
> Libes' cgi-tcl. There's another problem with tclhttpd's upload that is causing
> me to consider the latter approach - the file upload times are too long. I'm
> getting about 30 seconds per megabyte, while other web servers are giving us
> about 3 seconds per megabyte on upload. A co-worker searched the web and found
> a claim that cgi-tcl fixed both the binary file upload and performance problems.
> So I'm going to look at that today as an alternative.
>
> Other than these problems with file upload, tclhttpd has been working fine for
> our project.
>
> -----Original Message-----
> From: Brent Welch [mailto:bre...@in...]
> Sent: Thursday, January 25, 2001 10:56 PM
> To: MENGES,JOHN E (HP-Corvallis,ex1)
> Cc: 'bre...@sc...'
> Subject: Re: tclhttpd binary file upload
>
> Can you be more specific about how you are doing the upload?
> TclHttpd is pretty good about reading POST data in
> binary mode.
>
> >>>"MENGES,JOHN E (HP-Corvallis,ex1)" said:
> > Brent - binary file upload using tclhttpd doesn't seem to work, on windows
> > (Win2k, to be exact). It looks like it's doing some sort of end-of-line
> > translation. Is this a known problem? Is there a work-around?
>
> -- Brent Welch <bre...@in...>
> http://www.interwoven.com

-- Brent Welch <bre...@in...> http://www.interwoven.com
From: Brent W. <bre...@in...> - 2001-01-27 01:33:53
|
>>>"Anders Nilsson" said: > Hi. > > I'm trying to implement a file uploadpage with cgi. I've taken > a look at the demo that comes with the tclhttpd, especially the > testupload.tml file. For me, since I don't really get the .tml > workings, the main part of recieving the uploaded file is this: > > foreach {n v} [ncgi::nvlist] { > append html [html::row $n [html::tableFromList [lindex $v 0]]] > append html [html::row $n [ncgi::value $n]] > append html <tr>[html::cell "colspan=2" "<pre>[html::quoteFormValue > [lindex $v 1]]</pre>"]</tr> > } > > which nicely breaks down to: > > foreach {n v} [ncgi::nvlist] { > append html "name: $n " > append html "value: $v " > } > > for the sake of clarity. Now, that works just fine, but when I > try to put that into my own ncgitest.cgi file, it doesn't work > as in the .tml file above. When I run my own cgi-script, I don't > get any values in the $n variable, and in the $v variable I get > the whole thing. The differences below seem to indicate that the CGI case isn't correctly detecting the multipart/ content type. The ncgi:: package does some extra work for you in this case to unbundle the arguments as you can see. You should print out the $env(CONTENT_TYPE) in your CGI - perhaps TclHttpd isn't passing through the right thing. > Ie. the .tml file gives these values: > $n: hide_me > $v: {content-disposition form-data name hide_me} {hello, world} > > while my cgi-file gives these values: > $n: <empty> > $v: {} {Content-Disposition: form-data; name="hide_me" hello, world } > > Also, when I try to upload a little bigger imagefile the .tml version > handles it nicely, but my cgi-file just hangs... This bug was recently diagnosed. You can either backtrack to 3.2 from 3.2.1, or use the attached cgi.tcl and doc.tcl files. The fix is only in cgi.tcl, but it uses a new API provided by doc.tcl for other reasons. I'll put out a 3.2.2 "soon", but I can't say just when I'll get to it. Actually, but real bug is in Tcl's fcopy command - the 3.2.1 version of cgi.tcl switched to using fcopy in a certain case that exposed the fcopy bug. That bug has also been diagnosed. Thanks to Don Porter for all that sleuthing! These files are up-to-date in CVS. -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: Anders N. <an...@di...> - 2001-01-26 13:11:07
|
Hi.

I'm trying to implement a file upload page with cgi. I've taken a look at the demo that comes with the tclhttpd, especially the testupload.tml file. For me, since I don't really get the .tml workings, the main part of receiving the uploaded file is this:

    foreach {n v} [ncgi::nvlist] {
        append html [html::row $n [html::tableFromList [lindex $v 0]]]
        append html [html::row $n [ncgi::value $n]]
        append html <tr>[html::cell "colspan=2" "<pre>[html::quoteFormValue [lindex $v 1]]</pre>"]</tr>
    }

which nicely breaks down to:

    foreach {n v} [ncgi::nvlist] {
        append html "name: $n "
        append html "value: $v "
    }

for the sake of clarity. Now, that works just fine, but when I try to put that into my own ncgitest.cgi file, it doesn't work as in the .tml file above. When I run my own cgi-script, I don't get any values in the $n variable, and in the $v variable I get the whole thing.

I.e. the .tml file gives these values:

    $n: hide_me
    $v: {content-disposition form-data name hide_me} {hello, world}

while my cgi-file gives these values:

    $n: <empty>
    $v: {} {Content-Disposition: form-data; name="hide_me" hello, world }

Also, when I try to upload a little bigger imagefile the .tml version handles it nicely, but my cgi-file just hangs...

What am I doing wrong, or what is the .tml version doing that I don't?

/Anders N
From: Brent W. <bre...@in...> - 2001-01-22 21:26:37
|
>>>Colin McCormack said:
> Brent,
>
> Rather than expect someone else to unravel my strange terse (although
> copiously commented) code, if you can give me some pointers on how best to
> interface to tclhttpd, I'll make the mods.

Glad to help.

> I note, on inspection, my lib is in itcl, which's a bit of a possible problem,
> I guess.

We can simply do "package require itcl", no worries.

> Anyway, I suppose someone will want to execute an arbitrary command, say a
> select, on the postgresql backend, so what's the best way to do this in
> tclhttpd while servicing a page?
>
> We've got an aexec primitive, which'll take a series of postgresql commands
> and call back to a proc with args when each one is complete (of course it
> works for n==1 too :)
>
> We've got an rexec which'll set some var with the value of the result, at some
> time in the future (when it's complete) and an mrexec, which'll append results
> as they come in, from a series of postgresql commands.
>
> Which'd be easiest to interface to tclhttpd? What's the drill for making
> tclhttpd suspend processing of a page and finish up later? What's the best
> way to cancel queries, and how will I know when they have to be cancelled?

Two answers. A transparent way is simply:

    Httpd_Suspend $sock
    vwait whatever
    Httpd_Resume $sock

The idea is that you can call Httpd_Suspend $sock and the protocol engine will unregister all the fileevents and set a timer. Later on - say, in response to your postgres callback - you call Httpd_Resume, and finish up with the Httpd_ReturnData.

The drawback of this approach is that it leaves a bunch of stuff on the stack (the combined Tcl and C call stack). You are pretty much stuck with this artifact (nested Tcl event loops) unless you have a special DomainHandler that is willing to unwind its action and restart it later. You can raise a special error that'll unwind the Tcl call stack and trigger a call to Httpd_Suspend for you automatically. However, if you were in the middle of the "subst" that is processing a .tml page, you are hosed. In general, it may be difficult to take advantage of this feature.

> There's a secondary matter here: each postgresql connection can only perform
> one command at a time: the backend's singly threaded per connection, so
> there'd be some need to create a pool of open connections, or associate a
> connection with a session, or interlock a single pg connection to prevent
> multiple concurrent write attempts, or perhaps queue them.

If you have one connection per socket, then you'll stay single threaded. Your cache could be indexed by $sock to see if you have one. By the way, if you call

    Httpd_CompletionCallback $sock $cmd

then the Httpd layer will call you back at the very end of the transaction. You'll need that for cleanup.

> I know I could hunt through tclhttpd to discover the answers to these Qs, but
> it may be quicker (certainly for me :) if you can shed some light on it for me.

Happy to help.

-- Brent Welch <bre...@in...> http://www.interwoven.com
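Below is a sketch of the transparent suspend/resume pattern Brent describes, wired to a hypothetical asynchronous query primitive in the spirit of Colin's aexec. The Httpd_Suspend, Httpd_Resume, Httpd_ReturnData, Httpd_CompletionCallback and Url_PrefixInstall calls are the ones named in this thread; the handler names, URL prefix, query text, aexec signature, and handler argument list are illustrative assumptions.

    # Illustrative domain handler: suspend the HTTP transaction, start an async
    # database query, and resume once the callback has delivered the result.
    Url_PrefixInstall /report ReportHandler

    proc ReportHandler {sock suffix} {
        Httpd_CompletionCallback $sock [list ReportCleanup $sock]
        Httpd_Suspend $sock                      ;# fileevents unregistered, timer set
        aexec "select count(*) from orders" [list ReportDone $sock]
        vwait ::report(done,$sock)               ;# nested event loop, as noted above
        Httpd_Resume $sock
        Httpd_ReturnData $sock text/html "<pre>$::report(result,$sock)</pre>"
    }

    proc ReportDone {sock result} {
        set ::report(result,$sock) $result
        set ::report(done,$sock) 1               ;# releases the vwait in ReportHandler
    }

    proc ReportCleanup {sock args} {
        catch {unset ::report(done,$sock) ::report(result,$sock)}
    }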
From: Sailer H. (ext.) <Hol...@ks...> - 2001-01-22 13:19:14
|
> -----Original Message-----
> From: Brent Welch [SMTP:bre...@in...]
> Sent: Friday, 19 January 2001 20:17
> To: Sailer Holger (ext.)
> Cc: 'Tcl...@li...'
> Subject: Re: [Tclhttpd-users] failed to upload files to server
>
> >>>"Sailer Holger (ext.)" said:
> > Hello,
> >
> > I'm trying to post files to the server in two ways:
> >
> > 1. via cgi: posting a file fails due to:
> >    in cgi.tcl: proc CgiSpawn:
> >        fcopy $sock $fd -command [list CgiCopyDone $sock $fd] -size $data(count)
> >    -> dependent on size of posted file, CgiCopyDone never got called.
>
> There is a timeout setting in cgi.tcl - I wonder if the server is
> aborting the connection before it has had a chance to push out the file.
> Do you find any clues in the error log?

[Holger Sailer (ext)] It seems to me that the server doesn't write all bytes to the cgi script. I've made some output in 'proc CgiCopyDone' to see if it gets called and with how many bytes. Posting a little textfile ( < ~15 KB ) works fine, but bigger ones get cancelled, either by timeout or if I press the stop button in my browser. In the latter case, CgiCopyDone gets called with fewer bytes written to the cgi process than $env(CONTENT_LENGTH). The error log says timeout and/or CgiCancel, which is OK. With Windows NT, the cgi process doesn't read anything from stdin, or CgiCopyDone gets called with 0 bytes written. By the way, the upload example of Don Libes' Cgi.tcl package doesn't work for bigger files either.

> > cgifile.tml:
> >
> >     <form ENCtype="multipart/form-data"
> >           action="/cgi-bin/testupload.cgi" method="post">
> >     ---------------------------------
> >     File <input type="file" name="the_file" accept="application/*">
> >     <p>
> >     <input type=submit>
> >     </form>
> >
> > cgi-bin/testupload.tcl:
> >
> >     puts "Content-Type: text/html"
> >     puts <pre>
> >     fconfigure stdin -translation binary
> >     puts [read stdin $env(CONTENT_LENGTH)]
> >     puts </pre>
>
> By the way, it's probably better to dump the file into a file instead
> of back into the web page.

[Holger Sailer (ext)] It's only for testing.

> > 2. post file by not calling cgi-process:
> >    -> works with all ascii files as well as binary files < ~ 80 KB.
> >       fails with bigger binary files.
> >       e.g. compressed tar archives can't be extracted any more.
> >
> > myfile.tml:
> >
> >     <form ENCtype="multipart/form-data" action="mytestupload.tml"
> >           method="post">
> >     -------------------------
> >     File <input type="file" name="the_file" accept="application/*">
> >     <p>
> >     <input type=submit>
> >     </form>
> >
> > mytestupload.tml:
> > -------------------------
>
> One big drawback of using a .tml file for this purpose is that, by default,
> TclHttpd will buffer all the POST data into memory before calling your
> page. It will be a lot more efficient to have a custom domain handler
> that reads the POST data directly from the socket. Make sure you
> call Url_PrefixInstall with "-readpost 0"

[Holger Sailer (ext)] But why will it not store a bigger binary file this way, even if it is not efficient?

> Someone should contribute a simple file upload domain handler so
> folks don't have to keep solving this problem.
>
> > <table border=1 cellpadding=2>
> > [
> >  set html ""
> >  set fname "unknown.upload"
> >  foreach {n v} [ncgi::nvlist] {
> >      switch -- $n {
> >          the_file {
> >              foreach { vn vv } [lindex $v 0] {
> >                  append html "<pre> NAME=$vn VALUE=$vv \n</pre>"
> >                  switch -- $vn {
> >                      filename {
> >                          regsub -all {\\} $vv {/} new
> >                          set fname [ file tail $new ]
> >                      }
> >                  }
> >              }
> >              set fd [ open [ file join / tmp $fname] w]
> >              fconfigure $fd -translation binary
> >              puts -nonewline $fd [ncgi::value the_file]
> >              close $fd
> >              append html [html::h1 "File saved on [ file join / tmp $fname]"]
> >          }
> >      }
> >      append html [html::row $n [html::tableFromList [lindex $v 0]]]
> >  }
> >  set html
> > ]
> > </table>
> >
> > I'm using tclhttpd3.2.1 and tcl8.3 / tcl8.3.2 and tcllib0.8 (no threads).
> > I've tested on Win NT and QNX
> >
> > Is there a bug or am i doing something wrong?
> > Thanks in advance,
>
> -- Brent Welch <bre...@in...>
> http://www.interwoven.com
From: Colin M. <co...@fi...> - 2001-01-20 14:02:34
|
Brent, Rather than expect someone else to unravel my strange terse (although copiously commented) code, if you can give me some pointers on how best to interface to tclhttpd, I'll make the mods. I note, on inspection, my lib is in itcl, which's a bit of a possible problem, I guess. Anyway, I suppose someone will want to execute an arbitrary command, say a select, on the postgresql backend, so what's the best way to do this in tclhttpd while servicing a page? We've got an aexec primitive, which'll take a series of postgresql commands and call back to a proc with args when each one is complete (of course it works for n==1 too :) We've got an rexec which'll set some var with the value of the result, at some time in the future (when it's complete) and an mrexec, which'll append results as they come in, from a series of postgresql commands. Which'd be easiest to interface to tclhttpd? What's the drill for making tclhttpd suspend processing of a page and finish up later? What's the best way to cancel queries, and how will I know when they have to be cancelled? There's a secondary matter here: each postgresql connection can only perform one command at a time: the backend's singly threaded per connection, so there'd be some need to create a pool of open connections, or associate a connection with a session, or interlock a single pg connection to prevent multiple concurrent write attempts, or perhaps queue them. I know I could hunt through tclhttpd to discover the answers to these Qs, but it may be quicker (certainly for me :) if you can shed some light on it for me. Colin. |
From: Brent W. <bre...@in...> - 2001-01-20 05:07:07
|
I've added a pointer about this to http://dev.scriptics.com/software/tclhttpd/technotes.html >>>co...@fi... said: > It seems like there's some interest, so I've put up a copy here: > ftp://coldstore.sourceforge.net/pub/coldstore/libtclpq-20010120.tgz > > I've requested a sourceforge project libtclpq (pretty catchy name, huh :) > > I'll move it into cvs etc when the project's approved. > > Some random points: > 1) it works with tcl8.4 > 2) it works with the latest postgresql > 3) there's a tcl support suite (and a test file :) > 4) we're using it pretty heavily at $work > 5) it integrates with tclhttpd (although I've never integrated the > asynchronous behavior) > 6) there's also a set of itcl stuff (I'll release later) which gives the > ability to create widgets from tables, creates browsers, etc. > > Email me privately with bugs until the sourceforge infrastructure's up. > > Enjoy, > Colin. > > > > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > http://lists.sourceforge.net/lists/listinfo/tclhttpd-users -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: <co...@fi...> - 2001-01-20 00:02:35
|
It seems like there's some interest, so I've put up a copy here: ftp://coldstore.sourceforge.net/pub/coldstore/libtclpq-20010120.tgz I've requested a sourceforge project libtclpq (pretty catchy name, huh :) I'll move it into cvs etc when the project's approved. Some random points: 1) it works with tcl8.4 2) it works with the latest postgresql 3) there's a tcl support suite (and a test file :) 4) we're using it pretty heavily at $work 5) it integrates with tclhttpd (although I've never integrated the asynchronous behavior) 6) there's also a set of itcl stuff (I'll release later) which gives the ability to create widgets from tables, creates browsers, etc. Email me privately with bugs until the sourceforge infrastructure's up. Enjoy, Colin. |
From: Colin M. <co...@fi...> - 2001-01-19 23:46:09
|
> Cool - did you have to create and event source and all that to get > proper integration with the Tcl event loop? No, it was much simpler than that - postgresql uses a socket to communicate with the backend. I use Tcl_MakeTcpClientChannel to let tcl supervise it, set the appropriate options, and expose the postgresql C library functions needed to tcl (using Swig.) That way you can just treat it as a normal socket, as long as you let postgresql's library do the reading. I'm supposed to be at linux.conf.au, but I'll try to set up a sourceforge account for it now, and upload it. Colin. > >>>Colin McCormack said: > > > 3) If the server blocks for some other reason it can indeed starve clients > . > > > The most typical example is if you make SQL calls to a database server dir > ectly > > > from TclHttpd. Those calls will block until the SQL server returns the an > swer. > > > Right now the OraTcl and SybTcl interfaces do not really support a non-blo > cking > > > interface that will free up TclHttpd to get back into its event loop. > > > > I've got a postgresql tcl interface which doesn't block on the db server bac > kend, so 3 doesn't necessarily apply. > > > > You can have it if you want, it's GPL :) > > > > Colin. > > > > -- Brent Welch <bre...@in...> > http://www.interwoven.com > > |
From: Brent W. <bre...@in...> - 2001-01-19 19:15:12
|
Cool - did you have to create and event source and all that to get proper integration with the Tcl event loop? >>>Colin McCormack said: > > 3) If the server blocks for some other reason it can indeed starve clients . > > The most typical example is if you make SQL calls to a database server dir ectly > > from TclHttpd. Those calls will block until the SQL server returns the an swer. > > Right now the OraTcl and SybTcl interfaces do not really support a non-blo cking > > interface that will free up TclHttpd to get back into its event loop. > > I've got a postgresql tcl interface which doesn't block on the db server bac kend, so 3 doesn't necessarily apply. > > You can have it if you want, it's GPL :) > > Colin. > -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: Brent W. <bre...@in...> - 2001-01-19 19:14:37
|
>>>"Sailer Holger (ext.)" said: > Hello, > > I'm trying to post files to the server in two ways: > > 1. via cgi: posting an file fails due to: > in cgi.tcl: proc CgiSpawn: > fcopy $sock $fd -command [list CgiCopyDone $sock $fd] -size > $data(count) > -> dependent on size of posted file, CgiCopyDone never got called. There is a timeout setting in cgi.tcl - I wonder if the server is aborting the connection before it has had a chance to push out the file. Do you find any clues in the error log? > cgifile.tml: > <form ENCtype="multipart/form-data" > action="/cgi-bin/testupload.cgi" method="post"> > > --------------------------------- > File <input type="file" name="the_file" accept="application/*"> > <p> > <input type=submit> > </form> > > cgi-bin/testupload.tcl: > puts "Content-Type: text/html" > puts <pre> > fconfigure stdin -translation binary > puts [read stdin $env(CONTENT_LENGTH)] > puts </pre> By the way, it's probably better to dump the file into a file instead of back into the web page. > 2. post file by not calling cgi-process: > -> works with all ascii-files as well as binary files < ~ 80 KB. > fails with bigger binary files. > e.g. compressed tar - archives can't be extracted any more. > > myfile.tml: > --------------- > <form ENCtype="multipart/form-data" action="mytestupload.tml" > method="post"> > > ------------------------- > File <input type="file" name="the_file" accept="application/*"> > <p> > <input type=submit> > </form> > > mytestupload.tml: > ------------------------- One big drawback of using a .tml file for this purpose is that, by default, TclHttpd will buffer all the POST data into memory before calling your page. It will be a lot more efficient to have a custom domain handler that reads the POST data directly from the socket. Make sure you call Url_PrefixInstall with "-readpost 0" Someone should contribute a simple file upload domain handler so folks don't have to keep solving this problem. > <table border=1 cellpadding=2> > [ > set html "" > set fname "unknown.upload" > foreach {n v} [ncgi::nvlist] { > switch -- $n { > the_file { > foreach { vn vv } [lindex $v 0] { > append html "<pre> NAME=$vn VALUE=$vv \n</pre>" > switch -- $vn { > filename { > regsub -all {\\} $vv {/} new > set fname [ file tail $new ] > } > } > } > set fd [ open [ file join / tmp $fname] w] > fconfigure $fd -translation binary > puts -nonewline $fd [ncgi::value the_file] > close $fd > append html [html::h1 "File saved on [ file join / tmp > $fname]"] > } > } > append html [html::row $n [html::tableFromList [lindex $v 0]]] > } > set html > ] > </table> > > > I'm using tclhttpd3.2.1 and tcl8.3 / tcl8.3.2 and tcllib0.8 (no threads). > I've tested on Win NT and QNX > > > Is there a bug or am i doing something wrong? > Thanks in advance, -- Brent Welch <bre...@in...> http://www.interwoven.com |
From: Colin M. <co...@fi...> - 2001-01-19 13:52:27
|
> 3) If the server blocks for some other reason it can indeed starve clients. > The most typical example is if you make SQL calls to a database server directly > from TclHttpd. Those calls will block until the SQL server returns the answer. > Right now the OraTcl and SybTcl interfaces do not really support a non-blocking > interface that will free up TclHttpd to get back into its event loop. I've got a postgresql tcl interface which doesn't block on the db server backend, so 3 doesn't necessarily apply. You can have it if you want, it's GPL :) Colin. |