From: Alex R. <al...@se...> - 2002-08-19 03:13:49
On Sun, 18 Aug 2002, Christopher Todd wrote:
> Matt,
>
> > The problem will be coming up with enough different cases to
> > cover as many methods of "attack" as possible.
>
> Actually, we can simply ask all the gurus on the webappsec list to submit
> test cases to us. I'm sure they'll come up with a ton.
>
> > I don't see why a web server is necessary to test these cases?
>
> When it comes to testing, I always assume it is best to test code in an
> environment that mimics the production environment. Different web servers
> and web app servers will process the input before web apps see it, and
> they might play with things like character encodings, etc.

We're going to be forcing canonicalization in our API anyway. Bring on the
funky encodings. I see no need to have a server involved to muck with stuff
when we can do it ourselves = )

> > We could write a test suite application in an appropriate language (I'm
> > thinking perl) that could invoke a simple application written in each
> > language that filters test data by a supplied filter, this simple app
> > then returns the filtered data to the test suite application which
> > determines if the output is correct. This can be easily run after each
> > build.
>
> I think the tricky part here is finding standalone interpreters for all
> the languages we will (eventually) support. I am thinking of things like
> Cold Fusion, Cocoon, Zope and Enhydra (which uses XMLC). Granted, these
> are not hugely popular platforms (please no flames, I'm only talking about
> market share :-), and I'm not suggesting we support them out of the box.

This is going to be grossly unpopular, but I say we get something working
for languages that are reasonable candidates for a reference
implementation: Perl, PHP, Python, Java, C, C++.
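To make the "forcing canonicalization" point concrete, here is a minimal
sketch of the kind of thing I mean: percent-decode the input repeatedly
until it stops changing, so double-encoded payloads can't sneak past a
later filter. The class and method names are illustrative only, not any
agreed API.

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: reduce funky encodings to a canonical form
// before any filtering happens. Note that URLDecoder also maps '+'
// to a space, and a malformed '%' sequence throws
// IllegalArgumentException -- both acceptable for a sketch.
public class Canonicalizer {
    public static String canonicalize(String input) {
        String previous;
        String current = input;
        do {
            previous = current;
            current = URLDecoder.decode(previous, StandardCharsets.UTF_8);
        } while (!current.equals(previous));
        return current;
    }

    public static void main(String[] args) {
        // "%252e%252e%252f" decodes to "%2e%2e%2f", then to "../".
        System.out.println(canonicalize("%252e%252e%252f"));
    }
}
```

A real implementation would also have to deal with other encodings
(HTML entities, UTF-8 overlong forms, etc.), but the fixed-point loop is
the basic shape.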
Among those I don't see any that can't be called from the command line, so
I think that in the interest of getting something working, let's sidestep
the complexity of mucking with a server until we absolutely need to do so.
That might be sooner rather than later, but how much harder will it be to
throw in a couple of wget calls once we've got the rest of the testing
infrastructure in place?

> Your suggestion would be simpler than mine, and would generally make
> things easier on API porters. My thought on having just one tool for
> doing all the testing for all the languages we support is that
> maintaining that tool and the test cases would require less overhead, and
> would provide greater consistency across the languages.

I think eventually we'll get there, but I'm not so sure it makes sense
unless we have our own test machines to play with. Until then, let's get
some command line testing tools going.

> An excellent point. It's just that I'm not a very good Perl hacker. :-)
> Most developers hate writing tests (or comments or documentation), and I
> love writing all three, so I thought I could help there.
>
> On the flip side, it means we won't have one testing framework, we'll
> have one for every language we support, which could become a maintenance
> burden.

Ick. I say let's do the testing framework in one language (Java's great
with me), have it do file-based/stdin-based invocation of the various
interpreters, and define some form of output that test programs should feed
back to the caller so that we can determine success or failure. Extending
this to talk to an HTTP server shouldn't be much harder at all (simply
request with a socket, feed it our malicious input, and parse the HTTP
reply into a common format). Something like this might even be able to test
against various servers running different configs once we have it working
correctly.
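A rough sketch of what the stdin-based invocation could look like: the
harness spawns the filter-under-test as a child process, feeds it a test
case on stdin, and reads back a one-line verdict. The PASS/FAIL protocol
and all names here are assumptions for illustration, not a settled format,
and the stand-in "filter" is a Unix shell one-liner rather than a real
interpreter.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Hypothetical harness sketch: one JVM tool driving per-language filter
// programs over stdin/stdout, so each language only has to implement the
// filter itself plus a trivial wrapper that prints a verdict.
public class FilterHarness {
    /** Runs a command, writes input to its stdin, returns the last stdout line. */
    public static String runCase(List<String> command, String input) throws Exception {
        Process p = new ProcessBuilder(command).redirectErrorStream(true).start();
        try (Writer w = new OutputStreamWriter(p.getOutputStream(), StandardCharsets.UTF_8)) {
            w.write(input);
        }
        String last = "";
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) {
                last = line; // keep only the final verdict line
            }
        }
        p.waitFor();
        return last;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in filter: swallows its input and reports PASS. In practice
        // this would be perl/php/python running the real filter code.
        String verdict = runCase(
                List.of("sh", "-c", "cat >/dev/null; echo PASS"),
                "<script>alert('xss')</script>");
        System.out.println("verdict: " + verdict);
    }
}
```

The HTTP extension would just swap `ProcessBuilder` for a socket request
while keeping the same verdict-parsing step.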
The first step, though, is to get it working at the command line with a
reasonable set of bad inputs against a reasonable set of interpreters.

Anyone think I'm smoking crack here?

--
Alex Russell
al...@Se...
al...@ne...