From: Reinhard M. <ma...@tc...> - 2004-07-20 07:43:41
|
Hi, On Tue, 20 Jul 2004 at 11:49, Colin McCormack wrote: > I suppose the standards are silent on the maximum URL length, but it's > worth a check. RFC 2616 says in section 3.2.1: The HTTP protocol does not place any a priori limit on the length of a URI. Servers MUST be able to handle the URI of any resource they serve, and SHOULD be able to handle URIs of unbounded length if they provide GET-based forms that could generate such URIs. A server SHOULD return 414 (Request-URI Too Long) status if a URI is longer than the server can handle (see section 10.4.15). Note: Servers ought to be cautious about depending on URI lengths above 255 bytes, because some older client or proxy implementations might not properly support these lengths. cu Reinhard |
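To make the RFC text above concrete, here is a minimal, self-contained sketch of the kind of guard a server's request reader might apply: cap the request line and answer 414 when the cap is exceeded. The 8192-byte cap and the proc and variable names are illustrative assumptions, not anything in tclhttpd itself.

    # Hypothetical guard for an over-long request line; the cap and the
    # names are assumptions, not tclhttpd internals.
    set MaxRequestLine 8192

    proc CheckRequestLine {sock line} {
        global MaxRequestLine
        if {[string length $line] > $MaxRequestLine} {
            # RFC 2616 10.4.15: 414 Request-URI Too Long
            puts $sock "HTTP/1.1 414 Request-URI Too Long"
            puts $sock "Connection: close"
            puts $sock ""
            flush $sock
            close $sock
            return 0
        }
        return 1    ;# safe to go on parsing the request line
    }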
From: Colin M. <co...@ch...> - 2004-07-20 05:04:04
|
On Tue, 2004-07-20 at 14:51, Gerald W. Lester wrote: > Colin McCormack wrote: > >However, on a peripherally related matter, GPS has pointed out on tcl > >chat that it's possible to crash the underlying tcl by sending an > >*extremely* long string - because [gets] has no bounds checking on the > >length of a line, and tcl will eventually run out of allocatable storage > >for buffering. > >Probably the best/only/complete solution to that is a TIP to fconfigure > >a line limit. > Actually I'd to see things use read and TIP for an option -breakon > ListOfChars Why would you prefer this, Gerald? I have no objection to it at all, but I'm interested to know what it buys. I think the gets overflow problem is similar to the C gets() overflow, but it has the desirable property of crashing tcl rather than allowing the sender to execute arbitrary code :) Given that [gets] will continue to exist, [gets] overflow and tcl crash will always be a problem, so I see your suggestion as orthogonal to this specific problem, although it may stand on its own merits. -- Colin McCormack <co...@ch...> |
From: Gerald W. L. <Ger...@co...> - 2004-07-20 04:51:07
|
Colin McCormack wrote: >On Tue, 2004-07-20 at 11:34, Brent Welch wrote: > > >>I'd be OK with a simple string length limit of say, 10,000 characters. >>Put the value into the Httpd array in lib/httpd.tcl along with the >>other hardwired constants in that module. >> >> > >Yes, this might well be worth doing. > >However, on a peripherally related matter, GPS has pointed out on tcl >chat that it's possible to crash the underlying tcl by sending an >*extremely* long string - because [gets] has no bounds checking on the >length of a line, and tcl will eventually run out of allocatable storage >for buffering. > >I hasten to add that tclhttpd shares this behaviour with anything >net-connected which uses gets (ie: just about every web service in >tcllib.) > >Probably the best/only/complete solution to that is a TIP to fconfigure >a line limit. > > Actually I'd like to see things use read and a TIP for an option -breakon ListOfChars -- +--------------------------------+---------------------------------------+ | Gerald W. Lester | "The man who fights for his ideals is | | Ger...@co... | the man who is alive." -- Cervantes | +--------------------------------+---------------------------------------+ |
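Until such a TIP exists, a script-level approximation of a bounded line read is possible, though clumsy. The sketch below assumes a blocking channel in binary translation; the proc name and the 16 KB cap are invented for illustration, and a real fix would live in the channel layer as Gerald suggests.

    # Bounded alternative to [gets]: give up after $limit bytes instead of
    # buffering an arbitrarily long line.  Character-at-a-time read is
    # slow; this only sketches the idea.
    proc BoundedGets {sock {limit 16384}} {
        set line ""
        while {[string length $line] < $limit} {
            set c [read $sock 1]
            if {$c eq "" && [eof $sock]} break
            if {$c eq "\n"} {
                return [string trimright $line \r]
            }
            append line $c
        }
        if {[string length $line] >= $limit} {
            error "request line exceeds $limit bytes"
        }
        return $line
    }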
From: Colin M. <co...@ch...> - 2004-07-20 01:55:00
|
On Tue, 2004-07-20 at 11:35, Brent Welch wrote: > One thing I didn't get to in the last release was a good write-up > on the various sample apps. Are you inspired to start that? Ok, sure. By write-up, do you want user doc, tech doc, marketing material, or all three? I really suck at documentation ... oh, and testing, but I'm trying to get better. At the moment I'm playing with virtual file systems. If it stands still long enough, I'll make a VFS out of it. Some of SEH's work on wikit http://wiki.tcl.tk/11371 is particularly inspiring. -- Colin McCormack <co...@ch...> |
From: Colin M. <co...@ch...> - 2004-07-20 01:50:01
|
On Tue, 2004-07-20 at 11:34, Brent Welch wrote: > I'd be OK with a simple string length limit of say, 10,000 characters. > Put the value into the Httpd array in lib/httpd.tcl along with the > other hardwired constants in that module. Yes, this might well be worth doing. However, on a peripherally related matter, GPS has pointed out on tcl chat that it's possible to crash the underlying tcl by sending an *extremely* long string - because [gets] has no bounds checking on the length of a line, and tcl will eventually run out of allocatable storage for buffering. I hasten to add that tclhttpd shares this behaviour with anything net-connected which uses gets (ie: just about every web service in tcllib.) Probably the best/only/complete solution to that is a TIP to fconfigure a line limit. > You need to be somewhat generous because some folks may use GET-style > forms that put a lot into the query part of the URL. I know some > browsers choke if that gets too big, but it is surely implementation > dependent what the limit is. Some browsers, and clearly some servers too - otherwise the exploit wouldn't work. I've also noticed some attempts to send too-large keys to SSL - might be another windows exploit. I suppose the standards are silent on the maximum URL length, but it's worth a check. > You could work out what the limit is > on a few of your favorite browsers with the /debug/echo?foo=(longstring) > URL, and increase the size of the string until it stops echoing correctly. I just spoke to daveb on tcl chat about a webdav implementation he's doing for aolserver. I'll see if I can hoist that for tclhttpd when he's done. Since the exploit I'm seeing is webdav specific (it's an MS webdav bug, well known for a long time, apparently) it might be as well to view parameter-driven size limitation as a stop-gap solution. > > In the end, I'd like to come up with some virus attempt recognition > > patterns which will enable me to fire off automated complaints to the > > local ISP. It'll give 'em something to do. I still think this would be a fun project. In fact I'm going to phone my ISP now and threaten them some more with it :) -- Colin McCormack <co...@ch...> |
From: Brent W. <we...@pa...> - 2004-07-20 01:35:35
|
One thing I didn't get to in the last release was a good write-up on the various sample apps. Are you inspired to start that? >>>Colin McCormack said: > I've just checked in under sampleapp/ a simplistic Certificate Authority > for tclhttp. It is sufficient to construct CA and server certificates, > and deliver them to the client. It's not the most secure thing on the > planet, so don't trust your life to it. > -- > Colin McCormack <co...@ch...> > > > > ------------------------------------------------------- > This SF.Net email sponsored by Black Hat Briefings & Training. > Attend Black Hat Briefings & Training, Las Vegas July 24-29 - > digital self defense, top technical experts, no vendor pitches, > unmatched networking opportunities. Visit www.blackhat.com > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > https://lists.sourceforge.net/lists/listinfo/tclhttpd-users > -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
From: Brent W. <we...@pa...> - 2004-07-20 01:34:51
|
I'd be OK with a simple string length limit of say, 10,000 characters. Put the value into the Httpd array in lib/httpd.tcl along with the other hardwired constants in that module. You need to be somewhat generous because some folks may use GET-style forms that put a lot into the query part of the URL. I know some browsers choke if that gets too big, but it is surely implementation dependent what the limit is. You could work out what the limit is on a few of your favorite browsers with the /debug/echo?foo=(longstring) URL, and increase the size of the string until it stops echoing correctly. >>>Colin McCormack said: > I've been noticing quite a few extremely long bogus URLs, presumably > from MS virus-ridden machines attempting a buffer overflow in some > lamentably bad MS web server (ISS?) > > The URLs have the form SEARCH / followed by 64Kb of 0x902f 0xb102 ... > > I think this really clags up our regexp at lib/httpd.tcl line 611 (the > one in state 1,$start which splits the line up into prototype and URL) > although it's hard for me to tell because the xemacs buffer I'm using to > test usually crashes when I try to manipulate the 64k literal string :) > > I wonder what people think might be good effective protective measures > against this 'sploit? Is it reasonable to put an upper limit on the > size of a URL we're prepared to process? Say, 10,000 characters? Some > other figure? > > I'm looking for standards-friendly computationally cheap countermeasures > which don't entail running a regular expression over the potentially > offending URL. > > Oh, this URL also populates the log with large failure messages. > > In the end, I'd like to come up with some virus attempt recognition > patterns which will enable me to fire off automated complaints to the > local ISP. It'll give 'em something to do. > -- > Colin McCormack <co...@ch...> > > > > ------------------------------------------------------- > This SF.Net email sponsored by Black Hat Briefings & Training. > Attend Black Hat Briefings & Training, Las Vegas July 24-29 - > digital self defense, top technical experts, no vendor pitches, > unmatched networking opportunities. Visit www.blackhat.com > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > https://lists.sourceforge.net/lists/listinfo/tclhttpd-users > -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
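As a sketch of that suggestion (not a patch against lib/httpd.tcl): the element name maxUrlLength, the helper proc, and the use of 414 are assumptions, with Httpd_Error standing in for however the server already reports protocol errors.

    # Hypothetical constant alongside the other hardwired values:
    set Httpd(maxUrlLength) 10000

    # Hypothetical check, called on each request line before the
    # request-line regexp ever runs.
    proc UrlLengthOk {sock line} {
        global Httpd
        if {[string length $line] > $Httpd(maxUrlLength)} {
            Httpd_Error $sock 414   ;# or 400 if 414 isn't in the error table
            return 0
        }
        return 1
    }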
From: Brent W. <we...@pa...> - 2004-07-20 01:30:53
|
I do this: Url_PrefixRemove / ;# Remove normal doc domain Url_PrefixInstall / [list WikiDomain /] Doc_AddRoot /htdocs [file join $here ../htdocs] Doc_AddRoot /images [file join $here ../htdocs/images] So, I still have the doc module, and in fact I needed a few real files. But, the root domain is not a document domain. >>>Erik Leunissen said: > L.S. > > I'm using tclhttpd to run a HTTP service that has no business at all > with documents. The service is completely implemented through domain > handlers that perform very specific operations upon HTTP requests, but > serving documents in not one of them. > > Therefore, I want to disable the entire document service. > > I can think of several ways to achieve more or less what I want, but I > can't judge what's the best thing to do, considering aspects of > security, resource friendliness, ... > > Any advice ? > > > Here's what I've been thinking of: > - commenting out "Config(docRoot)" from the configuration file > - setting Config(docRoot) to the emtpy string > - pointing Config(docRoot) to an empty directory > - pointing Config(docRoot) to a directory that has a single empty file > "index.html" > - ... other suggestions? > > > Thanks in advance, > > Erik Leunissen > ============== > > > > ------------------------------------------------------- > This SF.Net email sponsored by Black Hat Briefings & Training. > Attend Black Hat Briefings & Training, Las Vegas July 24-29 - > digital self defense, top technical experts, no vendor pitches, > unmatched networking opportunities. Visit www.blackhat.com > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > https://lists.sourceforge.net/lists/listinfo/tclhttpd-users > -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
From: Colin M. <co...@ch...> - 2004-07-09 05:18:21
|
I've just checked in under sampleapp/ a simplistic Certificate Authority for tclhttpd. It is sufficient to construct CA and server certificates, and deliver them to the client. It's not the most secure thing on the planet, so don't trust your life to it. -- Colin McCormack <co...@ch...> |
From: Colin M. <co...@ch...> - 2004-07-09 01:48:55
|
I've been noticing quite a few extremely long bogus URLs, presumably from MS virus-ridden machines attempting a buffer overflow in some lamentably bad MS web server (ISS?) The URLs have the form SEARCH / followed by 64Kb of 0x902f 0xb102 ... I think this really clags up our regexp at lib/httpd.tcl line 611 (the one in state 1,$start which splits the line up into prototype and URL) although it's hard for me to tell because the xemacs buffer I'm using to test usually crashes when I try to manipulate the 64k literal string :) I wonder what people think might be good effective protective measures against this 'sploit? Is it reasonable to put an upper limit on the size of a URL we're prepared to process? Say, 10,000 characters? Some other figure? I'm looking for standards-friendly computationally cheap countermeasures which don't entail running a regular expression over the potentially offending URL. Oh, this URL also populates the log with large failure messages. In the end, I'd like to come up with some virus attempt recognition patterns which will enable me to fire off automated complaints to the local ISP. It'll give 'em something to do. -- Colin McCormack <co...@ch...> |
From: Erik L. <e.l...@hc...> - 2004-07-07 18:50:37
|
L.S. I'm using tclhttpd to run an HTTP service that has no business at all with documents. The service is completely implemented through domain handlers that perform very specific operations upon HTTP requests, but serving documents is not one of them. Therefore, I want to disable the entire document service. I can think of several ways to achieve more or less what I want, but I can't judge what's the best thing to do, considering aspects of security, resource friendliness, ... Any advice? Here's what I've been thinking of: - commenting out "Config(docRoot)" from the configuration file - setting Config(docRoot) to the empty string - pointing Config(docRoot) to an empty directory - pointing Config(docRoot) to a directory that has a single empty file "index.html" - ... other suggestions? Thanks in advance, Erik Leunissen ============== |
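One way to get a document-free server, in line with the Url_PrefixRemove/Url_PrefixInstall approach Brent shows above, is to register a catch-all domain handler at the root so the Doc domain never sees a request. The handler name and reply text below are made up; the callback shape follows the [list WikiDomain /] pattern in Brent's example.

    package require httpd::url

    # Hypothetical catch-all domain: every URL is answered here, so no
    # file-system documents are ever served.
    proc NoDocsDomain {prefix sock suffix} {
        Httpd_ReturnData $sock text/plain "This server provides no document service."
    }

    Url_PrefixRemove /                        ;# drop the default doc domain
    Url_PrefixInstall / [list NoDocsDomain /]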
From: Brent W. <we...@pa...> - 2004-06-28 04:48:18
|
>>>Colin McCormack said: > On Mon, 2004-06-28 at 01:54, Brent Welch wrote: > > >Colin McCormack said: > > > it would be nice to be able to efficiently/easily go from path > > > to URL, to implement the 'domain' part of the domain+realm Digest > > > algorithm. > > > > I don't quite follow what "from path to URL" means. > > Given the path within which one is inspecting a .htacess file, or more > generally a directory's path, it would be nice to be able to come up > with the url prefix sufficient to get to that path. For some reason > Digest authentication allows you to specify a URL 'domain' to which it > applies the authentication. This defaults to the whole site, whereas I > think Basic assumes the more natural prefix-of-current-URL. > > Can you think of an easy, general and correct way to calculate this > directory->prefix-URL in the presence of all the possible registered > tclhttpd Domains? Perhaps as an inverse of the Url prefix -> domain > calculation? All the authentication procedures get called with $sock, from which you can get data(url). So, don't get hung up on the filename, but instead look at the connection state to determine the URL. -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
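As a sketch of what Brent describes, an authentication callback handed $sock can reach the per-connection state array and work from the request URL rather than the file name. The callback name and the prefix derivation are illustrative only.

    # Hypothetical auth callback: recover the URL from the connection
    # state that tclhttpd keeps in the global array Httpd$sock.
    proc MyAuthCheck {sock args} {
        upvar #0 Httpd$sock data
        set url $data(url)
        # a crude 'protection domain': the directory part of the URL
        set prefix [string trimright [file dirname $url] /]/
        # ... check credentials against the realm configured for $prefix ...
        return 1
    }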
From: Colin M. <co...@ch...> - 2004-06-27 22:06:16
|
On Mon, 2004-06-28 at 01:54, Brent Welch wrote: > >Colin McCormack said: > > it would be nice to be able to efficiently/easily go from path > > to URL, to implement the 'domain' part of the domain+realm Digest > > algorithm. > > I don't quite follow what "from path to URL" means. Given the path within which one is inspecting a .htaccess file, or more generally a directory's path, it would be nice to be able to come up with the URL prefix sufficient to get to that path. For some reason Digest authentication allows you to specify a URL 'domain' to which it applies the authentication. This defaults to the whole site, whereas I think Basic assumes the more natural prefix-of-current-URL. Can you think of an easy, general and correct way to calculate this directory->prefix-URL in the presence of all the possible registered tclhttpd Domains? Perhaps as an inverse of the Url prefix -> domain calculation? -- Colin McCormack <co...@ch...> |
From: Brent W. <we...@pa...> - 2004-06-27 15:54:26
|
>>>Colin McCormack said: > On Sun, 2004-06-20 at 10:30, Brent Welch wrote: > > Looking at digest.tcl it looks like > > it caches state in the Digest$nonce global variable, but I don't think > > that saves any md5 calculations. > > > It is a memory leak, however. > > Colin, you'll need to fix that up so that digest authentication doesn't > > build up these Digest values forever in long running servers. I suggest > > you hook HttpdReset. That is the place to do caching across requests > > during a keep-alive connection. Basically, if you can work out how > > to cache things in Httpd$sock instead of Digest$data(digest,nonce) > > then you'll get proper memory cleanup. > > Ok, I can and shall do that. However, the notion is that a Digest > authentication creates a 'nonce' which is almost like a lightweight > account, which should persist even between sessions (as the client > caches it and tries the last nonce it remembers for the given > authentication domain+realm.) Possibly it would be more in the spirit > of the thing to schedule a periodic clean up of old Digest elements - > what do you think? Certainly tying them to the http session would be > possible, but would result in additional latency when a client tried to > reuse a nonce, and the server had to provoke it to re-prompt for the > user's password. Come to think of it ... the timeout is looking like a > better idea all the time :) Periodic sweep is fine, coupled with an overall cap on the amount of state you build up. E.g., reclaim more agressively if you have a lot of state. > One thing worth noting for future reference (in the 'it would be nice' > basket): it would be nice to be able to efficiently/easily go from path > to URL, to implement the 'domain' part of the domain+realm Digest > algorithm. It's not essential though. I don't quite follow what "from path to URL" means. -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
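A rough shape for the periodic sweep, with an overall cap as Brent suggests. The Digest* global naming and the lastUse timestamp are assumptions about what digest.tcl records per nonce; the lifetime and cap values are arbitrary.

    # Hypothetical sweep: reclaim idle nonce state, and reclaim harder if
    # too much state has accumulated.
    proc DigestSweep {{lifetime 3600} {maxNonces 1000}} {
        set now [clock seconds]
        foreach v [info globals Digest*] {
            # assumes each Digest<nonce> array records a lastUse timestamp
            if {[info exists ::${v}(lastUse)]
                    && $now - [set ::${v}(lastUse)] > $lifetime} {
                unset ::$v
            }
        }
        # crude cap: a real sweep would keep the most recently used nonces
        if {[llength [info globals Digest*]] > $maxNonces} {
            foreach v [info globals Digest*] { unset ::$v }
        }
        after [expr {$lifetime * 500}] [list DigestSweep $lifetime $maxNonces]
    }
    DigestSweep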
From: Brent W. <we...@pa...> - 2004-06-27 15:52:35
|
Also, the per-transaction authentication stems from the HTTP protocol, which really doesn't have a session abstraction. HTTP/1.0 even opened and closed a TCP connection for each request. That is fixed in HTTP/1.1, but still all session abstractions are layered on by the application. With tclhttpd you could cache some state associated with a socket connection to make your authentication go more quickly, but you'd have to hack that together yourself. It wouldn't be hard. More typically, however, you use a cookie or CGI form value to pass along a session identifier with each request. That might survive across multiple connections, have it's own timeout, etc. etc. >>>Colin McCormack said: > On Tue, 2004-06-22 at 06:41, Erik Leunissen wrote: > > > Maybe my simplictic reasoning carries the signature of someone not too > > experienced with authentication practice, but if the purpose of > > authentication is establishing the authenticity of the *user* (i.e. not > > the authenticity of the request) then why is authentication performed on > > each transaction? > > > > If my logic is off base, please correct me, > > The only logical flaw is the initial assumption that Digest auth is > intended to authenticate a user. Digest authentication also tries to > protect against 'man in the middle' attacks by including in each > transaction an increasing integer. That stops an intruder from copying > the previous MD5 into their message and masquerading as the user. > > For that reason, each transaction's MD5 has to be recalculated. > -- > Colin McCormack <co...@ch...> > -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
From: Brent W. <we...@pa...> - 2004-06-22 17:52:54
|
Try building the C-version of the crypt command that I distribute with tclhttpd. You may need to figure out what C library contains the crypt() routine. I know it works for various UNIX's, but probably not Windows. >>>"Brand Hilton" said: > I've tried this under both Windows and Solaris using ActiveTcl 8.3.4.3. > Works fine under the latest ActiveTcl 8.4. > > I've looked around and did a couple of web searches, and didn't see this > flagged anywhere else. Any workarounds would be appreciated. > > Here's the error message (ellipses added to shorten file names): > > Error processing main startup script > "/.../tclhttpd3.5.1/bin/httpdthread.tcl". > can't read "S(0, 41)": no such element in array > while executing > "set k $S($j, [expr { > ($preS($t) << 5) + > ($preS([expr {$t + 1}]) << 3) + > ($preS([expr {$t + 2}]) << 2) + > ($preS([expr {$..." > (procedure "crypt" line 236) > invoked from within > "crypt $webmaster_password $salt" > invoked from within > "if {[info exists Config(Auth)]} { > foreach {var val} $Config(Auth) { > if {[string match user,* $var]} { > # encrypt the password > set salt [..." > (file "/.../tclhttpd3.5.1/bin/../lib/auth.tcl" line 72) > invoked from within > "source /.../tclhttpd3.5.1/bin/../lib/auth.tcl" > ("package ifneeded" script) > invoked from within > "package require httpd::auth " > (file "/.../tclhttpd3.5.1/bin/httpdthread.tcl" line 57) > invoked from within > "source $Config(main)" > while executing > "error $error" > invoked from within > "if {[catch {source $Config(main)} message]} then { > global errorInfo > set error "Error processing main startup script \"[file nativename > $Config..." > (file "./httpd.tcl" line 317) > > > > ------------------------------------------------------- > This SF.Net email is sponsored by The 2004 JavaOne(SM) Conference > Learn from the experts at JavaOne(SM), Sun's Worldwide Java Developer > Conference, June 28 - July 1 at the Moscone Center in San Francisco, CA > REGISTER AND SAVE! http://java.sun.com/javaone/sf Priority Code NWMGYKND > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > https://lists.sourceforge.net/lists/listinfo/tclhttpd-users > -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
From: Erik L. <e.l...@hc...> - 2004-06-22 17:43:39
|
Colin McCormack wrote: > ... Digest authentication also tries to > protect against 'man in the middle' attacks ... > Thanks for teaching me this, Erik Leunissen ============== |
From: Colin M. <co...@ch...> - 2004-06-22 03:22:33
|
On Sun, 2004-06-20 at 10:30, Brent Welch wrote: > Looking at digest.tcl it looks like > it caches state in the Digest$nonce global variable, but I don't think > that saves any md5 calculations. > It is a memory leak, however. > Colin, you'll need to fix that up so that digest authentication doesn't > build up these Digest values forever in long running servers. I suggest > you hook HttpdReset. That is the place to do caching across requests > during a keep-alive connection. Basically, if you can work out how > to cache things in Httpd$sock instead of Digest$data(digest,nonce) > then you'll get proper memory cleanup. Ok, I can and shall do that. However, the notion is that a Digest authentication creates a 'nonce' which is almost like a lightweight account, which should persist even between sessions (as the client caches it and tries the last nonce it remembers for the given authentication domain+realm.) Possibly it would be more in the spirit of the thing to schedule a periodic clean up of old Digest elements - what do you think? Certainly tying them to the http session would be possible, but would result in additional latency when a client tried to reuse a nonce, and the server had to provoke it to re-prompt for the user's password. Come to think of it ... the timeout is looking like a better idea all the time :) One thing worth noting for future reference (in the 'it would be nice' basket): it would be nice to be able to efficiently/easily go from path to URL, to implement the 'domain' part of the domain+realm Digest algorithm. It's not essential though. -- Colin McCormack <co...@ch...> |
From: Colin M. <co...@ch...> - 2004-06-22 02:19:34
|
On Tue, 2004-06-22 at 06:41, Erik Leunissen wrote: > Maybe my simplictic reasoning carries the signature of someone not too > experienced with authentication practice, but if the purpose of > authentication is establishing the authenticity of the *user* (i.e. not > the authenticity of the request) then why is authentication performed on > each transaction? > > If my logic is off base, please correct me, The only logical flaw is the initial assumption that Digest auth is intended to authenticate a user. Digest authentication also tries to protect against 'man in the middle' attacks by including in each transaction an increasing integer. That stops an intruder from copying the previous MD5 into their message and masquerading as the user. For that reason, each transaction's MD5 has to be recalculated. -- Colin McCormack <co...@ch...> |
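For reference, the per-request arithmetic being described is RFC 2617's: HA1 covers the credentials, HA2 covers the method and URI, and the response hashes both together with the server nonce and the increasing nonce count nc. HA1 could in principle be cached, but HA2 and the response change with every request. A compact sketch using the tcllib md5 package (the helper names are invented):

    package require md5     ;# tcllib md5 v2 provides md5::md5 -hex

    proc Md5Hex {s} { string tolower [md5::md5 -hex $s] }

    proc DigestResponse {user realm password method uri nonce nc cnonce qop} {
        set HA1 [Md5Hex "$user:$realm:$password"]
        set HA2 [Md5Hex "$method:$uri"]
        return [Md5Hex "$HA1:$nonce:$nc:$cnonce:$qop:$HA2"]
    }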
From: Erik L. <e.l...@hc...> - 2004-06-21 21:33:41
|
Brent Welch wrote: > > The digest authentication module uses md5 on each transaction to > verify credentials. We don't do any caching of credentials throughout > an HTTP 1.1 connection. Did I correctly understand the above to mean: "An authentication procedure is performed for each transaction regardless of session state/connection state (if any)?" Only if I understood the above correctly: Maybe my simplistic reasoning carries the signature of someone not too experienced with authentication practice, but if the purpose of authentication is establishing the authenticity of the *user* (i.e. not the authenticity of the request) then why is authentication performed on each transaction? If my logic is off base, please correct me, Thanks in advance, Erik Leunissen ============== |
From: Brand H. <b-h...@ad...> - 2004-06-21 16:32:30
|
I've tried this under both Windows and Solaris using ActiveTcl 8.3.4.3. Works fine under the latest ActiveTcl 8.4. I've looked around and did a couple of web searches, and didn't see this flagged anywhere else. Any workarounds would be appreciated. Here's the error message (ellipses added to shorten file names): Error processing main startup script "/.../tclhttpd3.5.1/bin/httpdthread.tcl". can't read "S(0, 41)": no such element in array while executing "set k $S($j, [expr { ($preS($t) << 5) + ($preS([expr {$t + 1}]) << 3) + ($preS([expr {$t + 2}]) << 2) + ($preS([expr {$..." (procedure "crypt" line 236) invoked from within "crypt $webmaster_password $salt" invoked from within "if {[info exists Config(Auth)]} { foreach {var val} $Config(Auth) { if {[string match user,* $var]} { # encrypt the password set salt [..." (file "/.../tclhttpd3.5.1/bin/../lib/auth.tcl" line 72) invoked from within "source /.../tclhttpd3.5.1/bin/../lib/auth.tcl" ("package ifneeded" script) invoked from within "package require httpd::auth " (file "/.../tclhttpd3.5.1/bin/httpdthread.tcl" line 57) invoked from within "source $Config(main)" while executing "error $error" invoked from within "if {[catch {source $Config(main)} message]} then { global errorInfo set error "Error processing main startup script \"[file nativename $Config..." (file "./httpd.tcl" line 317) |
From: Brent W. <we...@pa...> - 2004-06-20 00:30:16
|
TclHttpd does implement keep-alive (either HTTP 1.1 or the older 1.0 keep-alive extension.) However, there isn't anything exported from the protocol layer to allow other modules to sense that a connection remains open. The session module only uses md5 in Session_Create, and a session can last across many HTTP transactions (keep-alive or not). The digest authentication module uses md5 on each transaction to verify credentials. We don't do any caching of credentials throughout an HTTP 1.1 connection. Hmm. Looking at digest.tcl it looks like it caches state in the Digest$nonce global variable, but I don't think that saves any md5 calculations. It is a memory leak, however. Colin, you'll need to fix that up so that digest authentication doesn't build up these Digest values forever in long running servers. I suggest you hook HttpdReset. That is the place to do caching across requests during a keep-alive connection. Basically, if you can work out how to cache things in Httpd$sock instead of Digest$data(digest,nonce) then you'll get proper memory cleanup. >>>Erik Leunissen said: > Colin McCormack wrote: > > > > md5 v2 offers a C implementation of md5, which is not something to be > > sniffed at, given that digest authentication requires at least 3 md5 > > calculations per httpd interaction, and they're not cheap. > > > > With respect to the above statement: > > How often exactly does digest authentication calculate an md5 hash: > - three times per HTTP transaction, or > - three times per connection > > That would make a difference with respect to the protocol versions > HTTP/1.0 and HTTP/1.1, the latter supporting connection keepalive. > > > Thanks for any clarification, > > Erik Leunissen > ============== > > > > > ------------------------------------------------------- > This SF.Net email is sponsored by The 2004 JavaOne(SM) Conference > Learn from the experts at JavaOne(SM), Sun's Worldwide Java Developer > Conference, June 28 - July 1 at the Moscone Center in San Francisco, CA > REGISTER AND SAVE! http://java.sun.com/javaone/sf Priority Code NWMGYKND > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > https://lists.sourceforge.net/lists/listinfo/tclhttpd-users > -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
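One way to "hook HttpdReset" from a module without patching httpd.tcl is to wrap it with rename. Whether digest.tcl ends up caching its per-connection values under data(digest,*) exactly as shown is an assumption, and the real HttpdReset argument list may differ, hence the catch-all args; this also assumes httpd.tcl is already loaded when the wrapper is installed.

    # Wrap HttpdReset so any per-connection digest cache is dropped when
    # the connection state is reset between keep-alive requests.
    if {[info commands ::HttpdResetOrig] eq ""} {
        rename ::HttpdReset ::HttpdResetOrig
        proc ::HttpdReset {sock args} {
            upvar #0 Httpd$sock data
            foreach key [array names data digest,*] {
                unset data($key)
            }
            eval [list ::HttpdResetOrig $sock] $args
        }
    }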
From: Erik L. <e.l...@hc...> - 2004-06-19 16:29:34
|
Colin McCormack wrote: > > md5 v2 offers a C implementation of md5, which is not something to be > sniffed at, given that digest authentication requires at least 3 md5 > calculations per httpd interaction, and they're not cheap. > With respect to the above statement: How often exactly does digest authentication calculate an md5 hash: - three times per HTTP transaction, or - three times per connection That would make a difference with respect to the protocol versions HTTP/1.0 and HTTP/1.1, the latter supporting connection keepalive. Thanks for any clarification, Erik Leunissen ============== |
From: Brent W. <we...@pa...> - 2004-06-18 16:00:19
|
>>>Matthias Hoffmann said: > Hi all, > > Just started experimenting with the new version of tclhttpd. Up until > now I'm running (an unpacked) tclhttpd 3.4.2 with > a multithreaded tclsh 8.4.4 and some additional modules unter w2k. I > want to go one step further in using the > tclkit-win32-sh-interpreter with tclhttpd.kit instead, but I don't want > to change too much in the existing httproot and > custom directories. In general, things are working well with this new > environment, but a few questions arise: > > - I can't find the package htmlutils anywhere it in the starkit - is it > gone? Looks like I moved that to sampleapp/sunscript/htmlutils.tcl You'll need to retrieve that and drop it into your custom directory. > - after switching to my old -docRoot, 'package require mypage' fails > (mypage.tcl is located in the httproot under > /libtml); ok - I could move it to the custom dir, but then some > dependency problems happen as a result of the order of > SOURCes. You could always create a aaa_init.tcl in the custom directory that explicitly sources the other files in the proper order. This assumes that once you do that it is OK to load something twice. Or, change the names to sort properly. That sounds a bit goofy, but it just seems simpler than maintaining a pkgIndex.tcl file there. Or, because it sounds like you have global code (not just procedures) you can move that to its own file, put procedure defs in other files, and run that global code last, e.g., in zzz_startup.tcl > - What about encodings? tclkit-win32-sh only has a few encodings build > in. If i store them in some auto_path lib, how > could I use them? It's probably easiest to explode the tclkit, add the encodes, and then re-wrap it up. I don't know how to load additional encodings from Tcl. You may have to ask jc...@eq... more about this topic. > - a cosmetic thing: tclkit-win32-sh tclhttp.kit -? shows the help, but > also this error: > > while executing > "error [usage $optlist $usage]" > (procedure "cmdline::getoptions" line 15) > invoked from within > "cmdline::getoptions argv $CommandLineOptions \ > "usage: httpd.tcl options:"" > invoked from within > "array set Config [cmdline::getoptions argv $CommandLineOptions \ > "usage: httpd.tcl options:"]" > (file "D:/PGM/WebSrv4/bin/tclhttpd.kit/bin/httpd.tcl" line 195) > invoked from within > "source [file join $starkit::topdir bin/httpd.tcl]" > (file "tclhttpd.kit/main.tcl" line 3) > invoked from within > "source tclhttpd.kit/main.tcl" > ("uplevel" body line 1) > invoked from within > "uplevel [list source [file join $self main.tcl]]" > > ;-) -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
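The aaa_init.tcl idea in a few lines: a file that sorts first in the custom directory and sources the rest in an explicit order, so dependency order no longer hangs on alphabetical accident. The file names listed are placeholders for whatever is being moved out of libtml.

    # aaa_init.tcl -- sorts first in custom/, controls source order.
    # (The server may source these files again later; per Brent's note,
    # this assumes loading them twice is harmless.)
    set here [file dirname [info script]]
    foreach f {mypage.tcl helpers.tcl zzz_startup.tcl} {
        set path [file join $here $f]
        if {[file exists $path]} {
            source $path
        }
    }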
From: Brent W. <we...@pa...> - 2004-06-18 15:35:03
|
Yeah, I'm getting over being annoyed at md5 and will work out a scheme to just do "package require md5" >>>Erik Leunissen said: > Colin McCormack wrote: > > > > md5 v2 offers a C implementation of md5, which is not something to be > > sniffed at, given that digest authentication requires at least 3 md5 > > calculations per httpd interaction, and they're not cheap. > > > > Seems a rather convincing argument to me. > > Erik Leunissen > ============== > > > > ------------------------------------------------------- > This SF.Net email is sponsored by The 2004 JavaOne(SM) Conference > Learn from the experts at JavaOne(SM), Sun's Worldwide Java Developer > Conference, June 28 - July 1 at the Moscone Center in San Francisco, CA > REGISTER AND SAVE! http://java.sun.com/javaone/sf Priority Code NWMGYKND > _______________________________________________ > TclHttpd-users mailing list > Tcl...@li... > https://lists.sourceforge.net/lists/listinfo/tclhttpd-users > -- Brent Welch Software Architect, Panasas Inc Delivering the premier storage system for scalable Linux clusters www.panasas.com we...@pa... |
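A possible shape for that scheme, given that md5 1.x returns a hex string directly while 2.x returns raw bytes unless -hex is given (the version split is an assumption about the installed tcllib packages):

    package require md5

    # Hypothetical shim so callers use one helper regardless of which md5
    # package version got loaded.
    if {[package vcompare [package provide md5] 2.0] >= 0} {
        proc Md5Hex {s} { string tolower [md5::md5 -hex $s] }
    } else {
        proc Md5Hex {s} { string tolower [md5::md5 $s] }
    }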