Messages by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2000 |     |     |     |     |     |     |     |     |     |     | 23  | 9   |
| 2001 | 32  | 23  | 23  | 11  | 19  | 8   | 28  | 19  | 11  | 8   | 39  | 22  |
| 2002 | 14  | 64  | 14  | 28  | 25  | 34  | 26  | 88  | 66  | 26  | 16  | 22  |
| 2003 | 18  | 16  | 20  | 20  | 26  | 43  | 42  | 22  | 41  | 37  | 27  | 23  |
| 2004 | 26  | 9   | 40  | 24  | 26  | 56  | 15  | 19  | 20  | 30  | 29  | 10  |
| 2005 | 1   | 2   | 1   |     |     | 3   | 6   |     | 4   | 1   | 1   | 1   |
| 2006 | 10  | 6   | 10  | 9   | 4   | 1   | 2   | 6   | 1   | 1   | 11  |     |
| 2007 | 4   |     | 2   |     |     | 5   | 1   |     | 1   |     |     |     |
| 2008 |     |     |     |     |     | 1   |     |     | 1   |     |     |     |
| 2009 | 2   |     |     |     | 1   |     |     |     |     |     |     |     |
| 2010 |     | 1   | 1   |     |     | 1   | 1   |     | 1   |     |     |     |
| 2011 |     |     |     | 1   |     | 1   |     | 1   |     |     |     |     |
| 2012 |     |     |     | 1   | 1   |     |     | 1   | 1   | 1   |     |     |
| 2013 |     | 1   |     | 1   |     | 1   |     | 3   |     |     |     |     |
| 2014 |     |     |     |     | 1   |     |     | 1   |     | 1   |     |     |
| 2015 |     |     |     | 1   |     | 1   | 1   |     |     | 1   | 19  | 3   |
| 2016 |     |     |     | 1   |     | 1   |     |     | 1   |     |     |     |
| 2017 |     |     |     | 1   |     | 1   |     | 1   |     |     |     |     |
| 2018 |     | 1   |     | 1   |     | 1   | 1   |     |     |     |     |     |
| 2019 |     |     |     | 1   |     | 1   |     | 1   | 2   |     |     |     |
| 2020 |     |     |     |     |     |     |     |     |     | 1   |     |     |
| 2021 |     |     |     |     |     |     |     |     | 1   |     |     |     |
From: Brent W. <we...@pa...> - 2001-04-24 22:10:28
formatQuery should not take a list - try:

    http::geturl $url -query [http::formatQuery key1 value1 key2 value2]

>>> Erik Leunissen said:
> L.S.
>
> When using the Url_DecodeQuery procedure, I ran into behaviour I don't
> understand.
>
> In my case, where a client (Tcl http 2.3) connects to TclHttpd, invoking
> the procedure 'Url_DecodeQuery' returns two lists (instead of one). The
> first list is the correctly decoded query; the second is an empty list.
>
> This behaviour can easily be reproduced by the one-liner below. For the
> sake of simplicity, you may want to test it in a running tclhttpd with
> the http package loaded additionally. (That way you short-circuit the
> client-server communication channel.)
>
>     set keyvaluelist {key1 value1 key2 value2}
>     puts [Url_DecodeQuery [http::formatQuery $keyvaluelist]]
>
> The result is:
>     {key1 value1 key2 value2} {}
>
> Questions:
> - Am I perhaps using Url_DecodeQuery in an inappropriate way?
> - Why is the empty list being appended?
>
> I've been using tclhttpd 3.2.1 and http 2.3 (if that is relevant).
>
> Thanks for your comments,
>
> Erik Leunissen.

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
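A minimal sketch of the difference, assuming only the stock http package
(http::formatQuery and Url_DecodeQuery are the real APIs; the variable
names are illustrative):

    package require http

    # Correct: formatQuery takes alternating keys and values as separate
    # arguments and x-www-form-urlencodes each one, giving
    # "key1=value1&key2=value2".
    set query [http::formatQuery key1 value1 key2 value2]

    # Incorrect: a single list argument is treated as one lone key, so the
    # whole list is encoded as a single bare token; Url_DecodeQuery then
    # pairs that token with an empty value - hence the trailing {}.
    set keyvaluelist {key1 value1 key2 value2}
    set bad [http::formatQuery $keyvaluelist]

    # To expand a list into separate arguments in pre-8.5 Tcl, use eval:
    set query [eval [list http::formatQuery] $keyvaluelist]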
From: Erik L. <e.l...@hc...> - 2001-04-24 21:30:11
L.S.

When using the Url_DecodeQuery procedure, I ran into behaviour I don't
understand.

In my case, where a client (Tcl http 2.3) connects to TclHttpd, invoking
the procedure 'Url_DecodeQuery' returns two lists (instead of one). The
first list is the correctly decoded query; the second is an empty list.

This behaviour can easily be reproduced by the one-liner below. For the
sake of simplicity, you may want to test it in a running tclhttpd with the
http package loaded additionally. (That way you short-circuit the
client-server communication channel.)

    set keyvaluelist {key1 value1 key2 value2}
    puts [Url_DecodeQuery [http::formatQuery $keyvaluelist]]

The result is:
    {key1 value1 key2 value2} {}

Questions:
- Am I perhaps using Url_DecodeQuery in an inappropriate way?
- Why is the empty list being appended?

I've been using tclhttpd 3.2.1 and http 2.3 (if that is relevant).

Thanks for your comments,

Erik Leunissen.
From: Vloet P. <pet...@si...> - 2001-04-23 17:05:28
Got the same problem a few weeks ago. Unluckily I changed offices in the
last weeks and have stored the data on my former host, but I will try:

    % cd tclhttpd3.3-dist/tclhttpd3.3
    # delete the copied stuff from tclhttpd3.2.1
    % rm -r configure
    % ln -s ../thread2.2/config
    # recreate the configure file - you need autoconf for this!
    % autoconf
    % cd ..
    % pwd
    ../tclhttpd3.3-dist
    % gmake

As said, I hope I did not forget something! Version 3.3.1 is valid.

#-------With best regards, Mit freundlichen Gruessen, Met vriendelijke groet, -------------
# Piet Vloet
# Siemens AG Austria
# PSE ECT OTN4
# Rampengasse 3-5          Phone : +43-51707-43819
# A-1190 Vienna            Fax   : +43-51707-53582
# mailto:pet...@si...      www: http://www.siemens.at

-----Original Message-----
From: Acacio Cruz [mailto:aca...@eu...]
Sent: Monday, 23 April 2001 17:30
To: Tcl...@li...
Subject: [Tclhttpd-users] 3.3 config problems

Hi,

the tclhttpd3.3.tar.gz package has a few configure problems.

I have 3.2.1 installed and running, and after unpacking 3.3 I noticed that:

* the config/ directory is missing and configure does not run.
* even after copying config/ from 3.2.1 I get configure run errors
  (see below).

The same configuration runs perfectly under 3.2.1, and I have multiple
software installations that use the same compiler extensively without any
problems:

* Solaris SPARC 7
* gcc 2.95.2
* Tcl 8.2.3

    root@olimpus /local/admin/dnsadmin/src/tclhttpd3.3# ./configure --prefix=/local
    /admin/dnsadmin --enable-gcc --enable-threads
    loading cache ./config.cache
    ./configure: SC_ENABLE_GCC: not found
    checking for a BSD compatible install... config/install-sh -c
    checking whether make sets ${MAKE}... yes
    checking for ranlib... ranlib
    checking for Cygwin environment... no
    checking for object suffix... configure: error: installation or
    configuration problem; compiler does not work
    root@olimpus /local/admin/dnsadmin/src/tclhttpd3.3#

Thank you and keep up the good work,
-Acacio
Autodesk
From: Acacio C. <aca...@eu...> - 2001-04-23 15:30:35
Hi,

the tclhttpd3.3.tar.gz package has a few configure problems.

I have 3.2.1 installed and running, and after unpacking 3.3 I noticed that:

* the config/ directory is missing and configure does not run.
* even after copying config/ from 3.2.1 I get configure run errors
  (see below).

The same configuration runs perfectly under 3.2.1, and I have multiple
software installations that use the same compiler extensively without any
problems:

* Solaris SPARC 7
* gcc 2.95.2
* Tcl 8.2.3

    root@olimpus /local/admin/dnsadmin/src/tclhttpd3.3# ./configure --prefix=/local
    /admin/dnsadmin --enable-gcc --enable-threads
    loading cache ./config.cache
    ./configure: SC_ENABLE_GCC: not found
    checking for a BSD compatible install... config/install-sh -c
    checking whether make sets ${MAKE}... yes
    checking for ranlib... ranlib
    checking for Cygwin environment... no
    checking for object suffix... configure: error: installation or
    configuration problem; compiler does not work
    root@olimpus /local/admin/dnsadmin/src/tclhttpd3.3#

Thank you and keep up the good work,
-Acacio
Autodesk
From: Brent W. <we...@pa...> - 2001-04-20 18:50:04
This version of Doc_Cookie might work a bit better for you, but I'm puzzled
about the lack of HTTP_COOKIE in the environment. In the DocTemplate
procedure there is a call to Cgi_SetEnv that should be setting this up for
you.

>>> mr...@bi... said:
> Doc_Cookie always returns the empty string, regardless of what the
> browser is sending. Inspection of the code reveals that Doc_Cookie is
> trying to retrieve the cookie from env(HTTP_COOKIE), which is always
> empty. My first guess is that the env() info and functionality was moved
> into the Httpd$sock state array to prevent simultaneous requests from
> colliding with each other, but the Doc_Cookie proc was not updated to
> follow the change. The following code produces the advertised behavior
> of Doc_Cookie:
>
>     proc MyGetCookie {sock cookie} {
>         #global env
>         upvar #0 "Httpd$sock" env
>         set result ""
>         if {[info exist env(mime,cookie)]} {
>             foreach pair [split $env(mime,cookie) \;] {
>                 lassign [split [string trim $pair] =] key value
>                 if {[string compare $cookie $key] == 0} {
>                     lappend result $value
>                 }
>             }
>         }
>         return $result
>     }

proc Doc_Cookie {cookie} {
    global env
    set result ""
    if {[info exists env(HTTP_COOKIE)]} {
        set rawcookie $env(HTTP_COOKIE)
    } else {
        # No HTTP_COOKIE - try to find the connection state instead
        if {[info exists env(HTTP_CHANNEL)]} {
            upvar #0 Httpd$env(HTTP_CHANNEL) data
            if {[info exists data(mime,cookie)]} {
                set rawcookie $data(mime,cookie)
            }
        }
    }
    if {[info exists rawcookie]} {
        foreach pair [split $rawcookie \;] {
            lassign [split [string trim $pair] =] key value
            if {[string compare $cookie $key] == 0} {
                lappend result $value
            }
        }
    }
    return $result
}

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
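A small usage sketch, assuming the stock .tml template machinery
(Doc_Dynamic and Doc_Cookie are real tclhttpd calls; the page and the
"login" cookie name are made up):

    <!-- whoami.tml: greets the user named by a hypothetical "login" cookie -->
    [Doc_Dynamic]
    <html><body>
    [if {[set user [Doc_Cookie login]] != ""} {
        format "Welcome back, %s." $user
    } else {
        format "No 'login' cookie was sent."
    }]
    </body></html>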
From: <mr...@bi...> - 2001-04-18 20:36:46
Doc_Cookie always returns the empty string, regardless of what the browser
is sending. Inspection of the code reveals that Doc_Cookie is trying to
retrieve the cookie from env(HTTP_COOKIE), which is always empty. My first
guess is that the env() info and functionality was moved into the
Httpd$sock state array to prevent simultaneous requests from colliding
with each other, but the Doc_Cookie proc was not updated to follow the
change. The following code produces the advertised behavior of Doc_Cookie:

    proc MyGetCookie {sock cookie} {
        #global env
        upvar #0 "Httpd$sock" env
        set result ""
        if {[info exist env(mime,cookie)]} {
            foreach pair [split $env(mime,cookie) \;] {
                lassign [split [string trim $pair] =] key value
                if {[string compare $cookie $key] == 0} {
                    lappend result $value
                }
            }
        }
        return $result
    }
From: Dale M. <mag...@km...> - 2001-04-06 00:11:36
I am not sure this is related, but: in moving from tclhttpd 3.2.1 to
tclhttpd 3.3 I get the missing page error on the demo CGI scripts as well
as my own scripts. The 3.2.1 version works fine, but I am trying to move
up to 3.3 to get file transfer going. Running on Windows NT at this time,
but I plan to shift to Linux in the future.

Dale Magnuson
From: David L. <wh...@oz...> - 2001-04-03 15:03:19
I have also seen this on NT 4.0 with IE! I think it's an HTML markup goof
of some kind. It can be quite exciting (!not!) to d/l a 4 MB file and have
it fill the disk at 63 MB because I went off to do something else while
waiting for it.

Using http to d/l amaya.tar.gz from w3.org is amazingly reliable about
demonstrating this goof.

Dave LeBlanc

> -----Original Message-----
> From: tcl...@li... [mailto:tcl...@li...]
> On Behalf Of petrus vloet
> Sent: Tuesday, April 03, 2001 4:21 AM
> To: TCL-HTTPD
> Subject: [Tclhttpd-users] Tcl - Netscape - PDF
>
> Hi,
>
> I have some problems when downloading a PDF file with Netscape as the
> web client running on Win NT. The size of the downloaded file is greater
> than the size of the file on the web server.
>
> Netscape on Solaris doesn't show this problem. Also, Internet Explorer
> downloads fine on Win NT.
>
> Question: is anything Netscape + Win NT dependent in tclhttpd, or is
> this a Netscape bug on Win NT?
>
> tclhttpd: tclhttpd-dist.3.3
> OS: Solaris 2.5.1
>
> W.b.R. Piet
From: petrus v. <pet...@si...> - 2001-04-03 11:43:20
Hi,

I have some problems when downloading a PDF file with Netscape as the web
client running on Win NT. The size of the downloaded file is greater than
the size of the file on the web server.

Netscape on Solaris doesn't show this problem. Also, Internet Explorer
downloads fine on Win NT.

Question: is anything Netscape + Win NT dependent in tclhttpd, or is this
a Netscape bug on Win NT?

tclhttpd: tclhttpd-dist.3.3
OS: Solaris 2.5.1

W.b.R. Piet

--
#-------With best regards, Mit freundlichen Gruessen, Met vriendelijke groet, -------------
# Piet Vloet
# Siemens AG Austria
# Boschstrasse 10          Phone : +43-51707-42906
# A-1190 Vienna            Fax   : +43-51707-52606
# mailto:pet...@si...      WWW: http://www.siemens.at
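For anyone chasing a size mismatch like this: one classic cause of a
download growing on Windows is newline translation being applied to binary
data (each \n becomes \r\n). Whether or not that is what happened here, any
Tcl code that serves files should force binary mode on both channels. A
minimal standalone sketch - not the actual tclhttpd code path, whose file
return code already configures its channels; this is only for hand-rolled
handlers, and the proc name is made up:

    proc ServeFile {sock path mimetype} {
        set in [open $path r]
        # Binary mode: no \n -> \r\n translation, no encoding conversion,
        # so Content-Length matches what actually goes on the wire.
        fconfigure $in   -translation binary
        fconfigure $sock -translation binary
        puts -nonewline $sock "HTTP/1.0 200 OK\r\n"
        puts -nonewline $sock "Content-Type: $mimetype\r\n"
        puts -nonewline $sock "Content-Length: [file size $path]\r\n\r\n"
        fcopy $in $sock
        close $in
    }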
From: Brent W. <we...@pa...> - 2001-04-03 04:09:11
What I do to debug these sorts of cases is put some command like

    close [open c:/temp/iwashere.txt w]

at the beginning of the CGI script to ensure it is even executing. On UNIX
I'd also switch in a /bin/sh script, but on Windows TclHttpd should try to
use the tclsh that is running TclHttpd to also run the .cgi scripts. You
might jam some debug calls into TclHttpd's cgi.tcl and run the server with
"wish" instead of "tclsh" so you can be sure to see the output - you may
need to add "console show" to your startup code so you get the console
that has the puts output.

>>> "Anders Nilsson" said:
> Hi,
> can anybody tell me why this doesn't work. Two files; the first is the
> .html file that gets the input and passes it to the .cgi file:
>
> cgitest.html:
>     <HTML>
>     <BODY>
>     <FORM ACTION="/cgi-bin/cgitest.cgi" METHOD="POST">
>     <INPUT TYPE="TEXT" NAME="alfa">
>     <INPUT TYPE="TEXT" NAME="beta">
>     <INPUT TYPE="SUBMIT">
>     </FORM>
>     </BODY>
>     </HTML>
>
> cgitest.cgi:
>     package require ncgi
>     puts "Content-Type: text/html\n\n"
>     puts "The values: "
>     ncgi::input
>     set p1 [ncgi::value alfa]
>     set p2 [ncgi::value beta]
>     puts "$p1 $p2"
>     exit 0
>
> When I click submit I just get an error page saying that the page
> couldn't be found. If I then click refresh, the cgi script returns
> "The values: " but has lost the values. If I do a GET instead, it works,
> but not with POST. What's my error?
>
> Win2K, TclPro 1.4, tclhttpd3.3, IE5. On Netscape 4.73 I get a
> "Document contains no data".
>
> /Anders N

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
From: Anders N. <an...@di...> - 2001-03-29 09:44:38
Hi,

can anybody tell me why this doesn't work. Two files; the first is the
.html file that gets the input and passes it to the .cgi file:

cgitest.html:
    <HTML>
    <BODY>
    <FORM ACTION="/cgi-bin/cgitest.cgi" METHOD="POST">
    <INPUT TYPE="TEXT" NAME="alfa">
    <INPUT TYPE="TEXT" NAME="beta">
    <INPUT TYPE="SUBMIT">
    </FORM>
    </BODY>
    </HTML>

cgitest.cgi:
    package require ncgi
    puts "Content-Type: text/html\n\n"
    puts "The values: "
    ncgi::input
    set p1 [ncgi::value alfa]
    set p2 [ncgi::value beta]
    puts "$p1 $p2"
    exit 0

When I click submit I just get an error page saying that the page couldn't
be found. If I then click refresh, the cgi script returns "The values: "
but has lost the values. If I do a GET instead, it works, but not with
POST. What's my error?

Win2K, TclPro 1.4, tclhttpd3.3, IE5. On Netscape 4.73 I get a
"Document contains no data".

/Anders N
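For later readers: elsewhere on this page Brent notes a Tcl fcopy bug that
interfered with POST data in the CGI module (worked around in later
releases), which matches this symptom. Independently, the script's header
handling can be tightened; a minimal sketch, assuming tcllib's ncgi
package:

    package require ncgi

    # ncgi::input parses the query string (GET) or reads the POST data
    # from stdin, depending on REQUEST_METHOD.
    ncgi::input
    set p1 [ncgi::value alfa]
    set p2 [ncgi::value beta]

    # Exactly one blank line must end the header block; the original's
    # "\n\n" plus puts' own newline emitted two.
    puts "Content-Type: text/html"
    puts ""
    puts "<html><body>The values: $p1 $p2</body></html>"
    exit 0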
From: Brent W. <we...@pa...> - 2001-03-26 06:24:54
You are probably seeing caching effects on your home page. It is nice that
a home page gets cached (perhaps at a university's caching proxy), but you
won't see the accesses in your log. If you use ".tml" pages for your site,
you can make the home page dynamic (insert [Doc_Dynamic]) and it will
defeat the caching.

>>> Steve Blinkhorn said:
> > >>> Steve Blinkhorn said:
> > > Three not very related questions.
> > >
> > > 1. Is there a straightforward way of capturing a referring page ID?
> > > In other words, if people get to my site on a click through from a
> > > portal, how do I discover that?
> >
> > The standard log includes the Referer header, which is normally set
> > by a browser to track things like this. For example:
> >
> > 194.117.133.22 - - [24/Mar/2001:17:44:08 PST] "GET /scripting HTTP/1.1" - "http://tcl.activestate.com/" "Mozilla/4.76 (Windows NT 5.0; U) Opera 5.02 [en]" -
> >
> > indicates that /scripting was referenced from the tcl.activestate.com
> > home page.
>
> OK, I see that in my log80_01.03.?? file, but home page hits don't show
> up there. Is there a reason for this? So people who come to the home
> page from a search engine, say, are shown as referred to a download
> page by my home page, which slightly defeats the object.
>
> > > 2. I have two servers running, one on port 80, one on a non-standard
> > > port on the same machine. I have the impression that items from one
> > > server get logged in the logs for the other. Possible? curable?
> >
> > Possible if the Log_SetFile calls in their configuration files
> > specify the same file name.
>
> Such that entries meant for log80_01.03.25 end up in log8123_01.03.25?
> When I first saw this happening I swiftly used the same docRoot for
> both, in case people were accidentally getting to the wrong port. We
> intentionally use 8123 only for background file transfers using custom
> code, to avoid possible problems with proxy caches and firewalls, on
> advice from some fairly serious sites who operate such things.
>
> > > 3. I have previously reported zombie processes: it seems that they
> > > arise only when downloading files from a default page - i.e. when
> > > there is no index.htm, so the server offers the directory list of
> > > docRoot. However, when people download files offered through a page
> > > of html, we seem to get `broken pipe' error messages some time after
> > > (successful) downloads. Is there something I should be doing to stop
> > > this? It leads to clogged logs :-)
> >
> > Hmm - there should really be no difference with links from an html
> > page and from the default directory listing. Either way the server
> > returns HTML and the browser just picks a URL. You could look for
> > subtle differences in the requested URL in the logs, though.
> >
> > As for the "broken pipe" errors, what you should try to do is isolate
> > which line of code is generating that log record. You may need to
> > modify the source to tag the error message with some sort of ID. I'd
> > be willing to look at that more closely.
>
> I'll look into this tomorrow.
>
> --
> Steve Blinkhorn <st...@pr...>

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
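A minimal sketch of such a page, assuming the stock .tml template setup
(Doc_Dynamic is the real call; the page content is filler):

    <!-- index.tml: Doc_Dynamic marks the page dynamic, so the server
         regenerates it on every hit instead of serving a cached .html
         result - which is what defeats the proxy caching described
         above -->
    [Doc_Dynamic]
    <html><body>
    Generated at [clock format [clock seconds]].
    </body></html>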
From: Steve B. <st...@pr...> - 2001-03-25 18:59:50
> >>> Steve Blinkhorn said:
> > Three not very related questions.
> >
> > 1. Is there a straightforward way of capturing a referring page ID?
> > In other words, if people get to my site on a click through from a
> > portal, how do I discover that?
>
> The standard log includes the Referer header, which is normally set by
> a browser to track things like this. For example:
>
> 194.117.133.22 - - [24/Mar/2001:17:44:08 PST] "GET /scripting HTTP/1.1" - "http://tcl.activestate.com/" "Mozilla/4.76 (Windows NT 5.0; U) Opera 5.02 [en]" -
>
> indicates that /scripting was referenced from the tcl.activestate.com
> home page.

OK, I see that in my log80_01.03.?? file, but home page hits don't show up
there. Is there a reason for this? So people who come to the home page
from a search engine, say, are shown as referred to a download page by my
home page, which slightly defeats the object.

> > 2. I have two servers running, one on port 80, one on a non-standard
> > port on the same machine. I have the impression that items from one
> > server get logged in the logs for the other. Possible? curable?
>
> Possible if the Log_SetFile calls in their configuration files specify
> the same file name.

Such that entries meant for log80_01.03.25 end up in log8123_01.03.25?
When I first saw this happening I swiftly used the same docRoot for both,
in case people were accidentally getting to the wrong port. We
intentionally use 8123 only for background file transfers using custom
code, to avoid possible problems with proxy caches and firewalls, on
advice from some fairly serious sites who operate such things.

> > 3. I have previously reported zombie processes: it seems that they
> > arise only when downloading files from a default page - i.e. when
> > there is no index.htm, so the server offers the directory list of
> > docRoot. However, when people download files offered through a page
> > of html, we seem to get `broken pipe' error messages some time after
> > (successful) downloads. Is there something I should be doing to stop
> > this? It leads to clogged logs :-)
>
> Hmm - there should really be no difference with links from an html page
> and from the default directory listing. Either way the server returns
> HTML and the browser just picks a URL. You could look for subtle
> differences in the requested URL in the logs, though.
>
> As for the "broken pipe" errors, what you should try to do is isolate
> which line of code is generating that log record. You may need to modify
> the source to tag the error message with some sort of ID. I'd be willing
> to look at that more closely.

I'll look into this tomorrow.

--
Steve Blinkhorn <st...@pr...>
From: Brent W. <we...@pa...> - 2001-03-25 07:47:34
>>> Steve Blinkhorn said:
> Three not very related questions.
>
> 1. Is there a straightforward way of capturing a referring page ID?
> In other words, if people get to my site on a click through from a
> portal, how do I discover that?

The standard log includes the Referer header, which is normally set by a
browser to track things like this. For example:

    194.117.133.22 - - [24/Mar/2001:17:44:08 PST] "GET /scripting HTTP/1.1" - "http://tcl.activestate.com/" "Mozilla/4.76 (Windows NT 5.0; U) Opera 5.02 [en]" -

indicates that /scripting was referenced from the tcl.activestate.com home
page.

> 2. I have two servers running, one on port 80, one on a non-standard
> port on the same machine. I have the impression that items from one
> server get logged in the logs for the other. Possible? curable?

Possible if the Log_SetFile calls in their configuration files specify the
same file name.

> 3. I have previously reported zombie processes: it seems that they
> arise only when downloading files from a default page - i.e. when
> there is no index.htm, so the server offers the directory list of
> docRoot. However, when people download files offered through a page
> of html, we seem to get `broken pipe' error messages some time after
> (successful) downloads. Is there something I should be doing to stop
> this? It leads to clogged logs :-)

Hmm - there should really be no difference with links from an html page
and from the default directory listing. Either way the server returns HTML
and the browser just picks a URL. You could look for subtle differences in
the requested URL in the logs, though.

As for the "broken pipe" errors, what you should try to do is isolate
which line of code is generating that log record. You may need to modify
the source to tag the error message with some sort of ID. I'd be willing
to look at that more closely.

> All this on 3.2.1.

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
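A sketch of keeping two servers' logs apart, assuming each instance is
started with its own configuration file and that Log_SetFile takes a log
file name prefix to which the daily date suffix is appended (the directory
paths here are made up):

    # Config for the port-80 instance:
    Log_SetFile /var/log/tclhttpd/log80_

    # Config for the port-8123 instance:
    Log_SetFile /var/log/tclhttpd/log8123_

    # With distinct prefixes, the daily logs come out as, e.g.,
    # log80_01.03.25 and log8123_01.03.25 instead of interleaving.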
From: Steve B. <st...@pr...> - 2001-03-23 18:21:33
Three not very related questions.

1. Is there a straightforward way of capturing a referring page ID? In
other words, if people get to my site on a click through from a portal,
how do I discover that?

2. I have two servers running, one on port 80, one on a non-standard port
on the same machine. I have the impression that items from one server get
logged in the logs for the other. Possible? curable?

3. I have previously reported zombie processes: it seems that they arise
only when downloading files from a default page - i.e. when there is no
index.htm, so the server offers the directory list of docRoot. However,
when people download files offered through a page of html, we seem to get
`broken pipe' error messages some time after (successful) downloads. Is
there something I should be doing to stop this? It leads to clogged
logs :-)

All this on 3.2.1.

--
Steve Blinkhorn <st...@pr...>
From: Brent W. <we...@pa...> - 2001-03-21 05:29:50
I believe the problems are due to the missing config subdirectory in the
various modules. But there is a copy in the thread directory, so you can
simply create a symlink to that:

    cd tcl8.3.2 ; ln -s ../thread2.2/config .

The errors reported below about not finding certain formats of doc files
in TclLib are benign and don't affect anything.

>>> petrus vloet said:
> Brent Welch wrote:
> > I'm pleased to announce the TclHttpd 3.3 release.
> > The cvs tag is tclhttpd-3-3. The files are in
> >     ftp://ftp.scriptics.com/pub/tcl/httpd/tclhttpd3.3.tar.gz
> >     tclhttpd3.3-dist.tar.gz (bundles Tcl 8.3.2, Thread, and TclLib 0.8)
> > Zip files are tclhttpd33.zip and tclhttpd33dist.zip
>
> Hi Brent,
>
> I can not build the tclhttpd3.3-dist from scratch.
>
> Running install.sh in /usr/local/www/tclhttpd3.3-dist/tcllib0.8:
>
>     cp: cannot access base64/*.n
>     cp: cannot access cmdline/*.n
>     cp: cannot access fileutil/*.n
>     cp: cannot access ftp/*.n
>     cp: cannot access mime/*.n
>
> Cannot install tclhttpd3.3.
>
> What is missing compared to 3.2 is a Makefile in the directory
> build/solaris-sparc/tclhttpd3.3:
>
>     total 4
>     -rw-r--r-- 1 sxcsw PSE_EZE_TR4   1 Mar 20 09:35 confdefs.h
>     -rw-r--r-- 1 sxcsw PSE_EZE_TR4   0 Mar 19 18:58 config.cache
>     -rw-r--r-- 1 sxcsw PSE_EZE_TR4 127 Mar 20 09:35 config.log
>
> W.b.R.
> Piet

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
From: Brent W. <we...@pa...> - 2001-03-21 05:24:35
Hmm - not sure why the .cgi version doesn't work, except for a newly found
bug in "fcopy" in Tcl. The latest version of TclHttpd has changes in the
CGI module to avoid this bug. It also has a completely new file upload
mechanism - you might give that (v3.3) a try.

>>> Dale Magnuson said:
> The example under CGI file upload directs to /forms/testupload.tml and
> displays the file data uploaded. When the form is pointed to
> /cgi-bin/testupload.cgi it does not respond. I need to upload files
> that relate to a failure but am unable to capture the file in a script.
> Why does the CGI example point to a .tml form? What is wrong with the
> CGI example?
> Thanks,
> Dale Magnuson

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
From: Dale M. <mag...@km...> - 2001-03-20 18:50:09
The example under CGI file upload directs to /forms/testupload.tml and
displays the file data uploaded. When the form is pointed to
/cgi-bin/testupload.cgi it does not respond. I need to upload files that
relate to a failure but am unable to capture the file in a script.

Why does the CGI example point to a .tml form? What is wrong with the CGI
example?

Thanks,
Dale Magnuson
From: petrus v. <pet...@si...> - 2001-03-20 09:13:38
Brent Welch wrote:
> I'm pleased to announce the TclHttpd 3.3 release.
> The cvs tag is tclhttpd-3-3. The files are in
>     ftp://ftp.scriptics.com/pub/tcl/httpd/tclhttpd3.3.tar.gz
>     tclhttpd3.3-dist.tar.gz (bundles Tcl 8.3.2, Thread, and TclLib 0.8)
> Zip files are tclhttpd33.zip and tclhttpd33dist.zip

Also, tclhttpd3.3 is not installable. Compared to 3.2 the config directory
is missing!

    ./configure
    loading cache ./config.cache
    configure: error: can not find install-sh or install.sh in config ./config
    ws5512%

--
#-------With best regards, Mit freundlichen Gruessen, Met vriendelijke groet, ------
# Piet Vloet
# Siemens AG Austria
# Boschstrasse 10          Phone : +43-51707-42906
# A-1190 Vienna            Fax   : +43-51707-52606
# mailto:pet...@si...      WWW: http://www.pse.siemens.at
From: petrus v. <pet...@si...> - 2001-03-20 09:02:50
Brent Welch wrote:
> I'm pleased to announce the TclHttpd 3.3 release.
> The cvs tag is tclhttpd-3-3. The files are in
>     ftp://ftp.scriptics.com/pub/tcl/httpd/tclhttpd3.3.tar.gz
>     tclhttpd3.3-dist.tar.gz (bundles Tcl 8.3.2, Thread, and TclLib 0.8)
> Zip files are tclhttpd33.zip and tclhttpd33dist.zip

Hi Brent,

I can not build the tclhttpd3.3-dist from scratch.

Running install.sh in /usr/local/www/tclhttpd3.3-dist/tcllib0.8:

    cp: cannot access base64/*.n
    cp: cannot access cmdline/*.n
    cp: cannot access fileutil/*.n
    cp: cannot access ftp/*.n
    cp: cannot access mime/*.n

Cannot install tclhttpd3.3.

What is missing compared to 3.2 is a Makefile in the directory
build/solaris-sparc/tclhttpd3.3:

    total 4
    -rw-r--r-- 1 sxcsw PSE_EZE_TR4   1 Mar 20 09:35 confdefs.h
    -rw-r--r-- 1 sxcsw PSE_EZE_TR4   0 Mar 19 18:58 config.cache
    -rw-r--r-- 1 sxcsw PSE_EZE_TR4 127 Mar 20 09:35 config.log

W.b.R.
Piet

--
#-------With best regards, Mit freundlichen Gruessen, Met vriendelijke groet, ------
# Piet Vloet
# Siemens AG Austria
# Boschstrasse 10          Phone : +43-51707-42906
# A-1190 Vienna            Fax   : +43-51707-52606
# mailto:pet...@si...      WWW: http://www.pse.siemens.at
From: Brent W. <we...@pa...> - 2001-03-20 00:01:49
You are observing that the client http package does not do keep-alives.
For which I must apologize, because that is my code too. But I don't have
any plans to improve that (hint, hint).

>>> Erik Leunissen said:
> Brent Welch wrote:
> > Yes - by default TclHttpd supports both pre-1.1 "keepalive"
> > connections and 1.1 persistent connections. The built-in /status URL
> > shows how many connections of different types you are getting.
>
> Thanks for your reply Brent,
>
> When issuing an http request to TclHttpd, I fail to compose the command
> such that the connection is kept alive (HTTP/1.0) or persistent
> (HTTP/1.1).
>
> I've tested this, using TclHttpd with two http clients:
> - the Netscape Navigator browser (4.7)
> - the Tcl http package 2.0
>
> When Netscape connects I see clearly that a keep-alive connection is
> established and I can see the 'left' counter decrease at each request.
> The same socket remains open during subsequent requests.
>
> When I issue an http command with Tcl package 2.0, I fail to accomplish
> this effect. Here's what I did:
>
>     http::geturl $url -command {} -handler ReadData \
>         -headers [list connection Keep-Alive]
>
> Although the headers flag appears to have an effect on how the request
> is handled within TclHttpd, the socket is closed by TclHttpd after
> having returned the results successfully to the client: a call to
> HttpdCloseFinal is made and the socket is closed.
>
> Apparently, the extra header flag isn't good enough. I'd like to know:
> - how the keep-alive request should be composed;
> - whether that is done differently from how an HTTP/1.1 persistent
>   connection is established (assuming the Tcl http package supports
>   HTTP/1.1).
>
> Thanks for any directions, references,
>
> Erik Leunissen.

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
From: Erik L. <e.l...@hc...> - 2001-03-19 14:34:35
Brent Welch wrote:
> Yes - by default TclHttpd supports both pre-1.1 "keepalive" connections
> and 1.1 persistent connections. The built-in /status URL shows how many
> connections of different types you are getting.

Thanks for your reply Brent,

When issuing an http request to TclHttpd, I fail to compose the command
such that the connection is kept alive (HTTP/1.0) or persistent
(HTTP/1.1).

I've tested this, using TclHttpd with two http clients:
- the Netscape Navigator browser (4.7)
- the Tcl http package 2.0

When Netscape connects I see clearly that a keep-alive connection is
established and I can see the 'left' counter decrease at each request.
The same socket remains open during subsequent requests.

When I issue an http command with Tcl package 2.0, I fail to accomplish
this effect. Here's what I did:

    http::geturl $url -command {} -handler ReadData \
        -headers [list connection Keep-Alive]

Although the headers flag appears to have an effect on how the request is
handled within TclHttpd, the socket is closed by TclHttpd after having
returned the results successfully to the client: a call to HttpdCloseFinal
is made and the socket is closed.

Apparently, the extra header flag isn't good enough. I'd like to know:
- how the keep-alive request should be composed;
- whether that is done differently from how an HTTP/1.1 persistent
  connection is established (assuming the Tcl http package supports
  HTTP/1.1).

Thanks for any directions, references,

Erik Leunissen.
From: Brent W. <we...@pa...> - 2001-03-18 16:39:20
Yes - by default TclHttpd supports both pre-1.1 "keepalive" connections
and 1.1 persistent connections. The built-in /status URL shows how many
connections of different types you are getting.

>>> Erik Leunissen said:
> L.S.
>
> The header says it pretty much.
>
> I want to use persistent connections (HTTP/1.1 compliant, as specified
> in RFC 2068 chapter 8).
>
> Is that possible in TclHttpd?
>
> Thanks for your attention,
>
> Erik Leunissen.

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
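A quick way to watch those counters from a script, assuming the server is
on tclhttpd's default port 8015 and the built-in /status domain is enabled
(both are stock defaults, but check your configuration):

    package require http

    # Fetch the /status page; it reports hits per connection type.
    set tok [http::geturl http://localhost:8015/status]
    puts [http::data $tok]
    http::cleanup $tok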
From: Erik L. <e.l...@hc...> - 2001-03-16 14:03:29
L.S.

The header says it pretty much.

I want to use persistent connections (HTTP/1.1 compliant, as specified in
RFC 2068 chapter 8).

Is that possible in TclHttpd?

Thanks for your attention,

Erik Leunissen.

--
Le coeur a ses raisons, que la raison ne connait pas.
    Blaise Pascal
From: Brent W. <we...@pa...> - 2001-03-15 01:18:21
I'm pleased to announce the TclHttpd 3.3 release. The cvs tag is
tclhttpd-3-3. The files are in

    ftp://ftp.scriptics.com/pub/tcl/httpd/tclhttpd3.3.tar.gz
    tclhttpd3.3-dist.tar.gz (bundles Tcl 8.3.2, Thread, and TclLib 0.8)

Zip files are tclhttpd33.zip and tclhttpd33dist.zip.

The main new feature is a file upload domain. Currently this is in a
bare-bones state. There are some options that are documented (e.g., to
limit the number of files that can appear in the upload directory) that
are not implemented. See the lib/upload.tcl file for details. Use
Upload_Url to register the domain, and UploadTest, which is a sample
upload handler.

There is an important bug fix to the CGI module to avoid an fcopy bug that
interfered with large POST data in some cases.

The Auth module has had a once-over based on work by Piet Vloet to add
support for multiple "require user" and "require group" specifications in
.htaccess files. He also updated the documentation in the sample htdocs
directory about .htaccess files.

There are other minor bug fixes in the CGI module and elsewhere. See the
ChangeLog (attached) for details.

--
Brent Welch
Software Architect, Panasas Inc
Pioneering Object-Based Network Storage (ONS)
www.panasas.com
we...@pa...
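A minimal sketch of wiring up the new domain, assuming Upload_Url takes a
virtual URL, an upload directory, and a completion callback as suggested
above (the exact signature and options live in lib/upload.tcl; the paths
and handler name below are illustrative, modeled on the UploadTest
sample):

    # In the server startup code - hypothetical paths and handler name:
    Upload_Url /upload /var/spool/uploads UploadDone

    proc UploadDone {sock args} {
        # Called when an upload completes; acknowledge with a tiny page.
        # Httpd_ReturnData is the stock tclhttpd reply call.
        Httpd_ReturnData $sock text/html \
            "<html><body>Upload received.</body></html>"
    }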