From: Wizard <wi...@ne...> - 2002-02-21 13:28:28
|
> Not that I would like to discourage anyone from creating a new storage
> mechanism that could be used on green field sites (bearing in mind this
> can't use any non-Core modules so XML or DBI is out :()

If anyone has a list of what modules were distributed with 5.004_04, it would help a lot. The earliest version available on perl.com is 5.005_03. Additionally, some requirement of the minimum version of CGI.pm required to run these nms scripts would probably be in order, and the scripts should test for it (CGI->version()) and be tested against it. The 5.005_03 version on perl.com comes with 2.46, but I have no idea if that's what originally came with it. As a note, CGI.pm versions prior to 2.36 "suffered a year 2000 problem in the handling of cookies" and who knows what else.

Grant M.
|
From: Jonathan S. <gel...@ge...> - 2002-02-21 12:18:07
|
On Wed, 20 Feb 2002, Wizard wrote:
> The problem with using a different storage scheme ....

I'm not suggesting that we should change the storage scheme: this would militate against one of our tenets, which is to retain compatibility with an existing installation of the equivalent MSA program. Rather, I am trying to draw people's attention to the possibility that the current storage mechanism does have an issue wherein a black hat could fill your disk up quite quickly with a succession of follow-ups to their own threads.

I don't think I would be bothered to write yet another bulletin board (YABB) if we were going to discard the backward compatibility requirement - I would tend to push people in the direction of FooBBS <http://www.geeksalad.org/FooBBS/> by Clinton Pierce - of course this does have some external module requirements ...

Not that I would like to discourage anyone from creating a new storage mechanism that could be used on green field sites (bearing in mind this can't use any non-Core modules so XML or DBI is out :()

/J\
-- Jonathan Stowe | <http://www.gellyfish.com> | This space for rent |
|
From: Dave C. <da...@da...> - 2002-02-21 11:05:28
|
On Thu, Feb 21, 2002 at 05:53:41AM -0800, Wizard (wi...@ne...) wrote:
> > Can't use CPAN modules (see the NMS faq for reasons why) :(
> > Our own email verification is pretty good anyways
>
> I couldn't find anything in the FAQ (doesn't mean it's not there, it might
> just mean that 5:20am is too early to go looking for it :).
>
> I'm assuming you're making a distinction between standard distribution
> modules and CPAN (as in "CGI.pm is part of the standard distribution")?
> They're technically all CPAN modules.

From the main page at <http://nms-cgi.sourceforge.net/>:

Note that these scripts are intended to replace Matt's scripts. This means that their target audience is people who might not know very much at all about Perl. Any contributed scripts must therefore follow certain rules:

* They must run under Perl 5.004_04 or later. Any earlier Perl than this is pre-historic and can therefore be safely ignored.
* They must not use any non-standard Perl modules. I know this is a bit contentious, but I really think that the target audience will have problems installing modules from CPAN.
* They must run with no errors or warnings under use strict and -wT.

One of the worst legacies of Matt's scripts is that people use them to learn Perl. None of the MSA scripts use these constraints and therefore people copying them will learn bad habits. If people learn Perl from NMS they will at least learn better programming habits.

And, yes, we do distinguish between "standard" modules and "CPAN" modules :)

Dave...
-- Drugs are just bad m'kay
|
From: Wizard <wi...@ne...> - 2002-02-21 10:55:58
|
> Can't use CPAN modules (see the NMS faq for reasons why) :(
> Our own email verification is pretty good anyways

I couldn't find anything in the FAQ (doesn't mean it's not there, it might just mean that 5:20am is too early to go looking for it :).

I'm assuming you're making a distinction between standard distribution modules and CPAN (as in "CGI.pm is part of the standard distribution")? They're technically all CPAN modules.

Grant M.
|
From: Jonathan S. <gel...@ge...> - 2002-02-21 09:10:09
|
On Tue, 19 Feb 2002, Dave Cross wrote:
> Note to nms developers: Should we look at a change so that setting $style to
> a false value turns off all stylesheet code?

It shouldn't really be necessary, because as you say, if the stylesheet isn't there then the page will simply get rendered without the benefit of any styles. However, if the user were to examine their error log they are possibly going to be alarmed by the number of 404s for 'nms.css' or whatever the default is. I think we should distribute a stylesheet with each individual program as a sort of gentle hint - in the fullness of time it might be nice to get together a gallery of CSS for use with NMS.

I'm hedging on the actual question because the code might become a little ugly - I'll make the change to FormMail.pl to see what you guys think.

/J\
-- Jonathan Stowe | <http://www.gellyfish.com> | This space for rent |
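The change under discussion could be sketched like this (a hypothetical helper, not the actual FormMail.pl code; variable and function names are assumptions): the stylesheet markup is emitted only when $style holds a true value, so setting it to '' suppresses all stylesheet code and the 404s along with it.

```perl
#!/usr/bin/perl -w
use strict;

# Hypothetical sketch: $style holds the stylesheet URL, and any false
# value ('' or 0) suppresses every bit of stylesheet output rather
# than emitting a <link> to a file that may not exist.
sub head_html {
    my ($style) = @_;
    my $html = "<head>\n<title>Form Results</title>\n";
    $html .= qq(<link rel="stylesheet" type="text/css" href="$style" />\n)
        if $style;
    $html .= "</head>\n";
    return $html;
}

print head_html('/css/nms.css');   # includes the <link> element
print head_html('');               # no stylesheet markup, no 404s
```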
|
From: Jonathan S. <gel...@ge...> - 2002-02-21 09:10:08
|
The tests in FormMail are a great idea, but there is something about having that large chunk of code commented out at the end of the file that I find unsettling - might it not be an idea to split them out into a separate file and have a $TESTING variable that controls whether that file is 'required' or not? We can then choose whether or not to distribute the test code :)

Indeed, it might be an idea if we were to create a single test harness for all of the programs, possibly instrumenting the programs accordingly, so that we can run tests on all of the bits of the programs that can be tested before making a new release.

/J\
-- Jonathan Stowe | <http://www.gellyfish.com> | This space for rent |
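The $TESTING guard being proposed amounts to a conditional require. A minimal sketch (the test file name is an assumption; here the demo writes a stand-in test file so it is self-contained):

```perl
#!/usr/bin/perl -w
use strict;
use vars qw($TESTING $TESTS_RAN);

# Hypothetical sketch: the self-tests live in a separate file that is
# only pulled in when $TESTING is true, instead of sitting commented
# out at the end of FormMail.pl.
$TESTING = 1;

# For this self-contained demo, write a stand-in test file first.
my $testfile = "/tmp/formmail-tests-$$.pl";
open TESTS, ">$testfile" or die "open: $!";
print TESTS '$main::TESTS_RAN = 1; 1;', "\n";   # must return true for require
close TESTS;

require $testfile if $TESTING;   # the guard under discussion

print $TESTS_RAN ? "tests ran\n" : "tests skipped\n";
unlink $testfile;
```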
|
From: Joseph R. <rya...@os...> - 2002-02-21 09:09:50
|
----- Original Message -----
From: "Craig Sanders" <ca...@ta...>
To: <da...@us...>
Cc: <nms...@li...>
Sent: Wednesday, February 20, 2002 7:55 PM
Subject: [Nms-cgi-devel] patch for nms FormMail
> neat script. fixes most of the problems with the original (it was a
> struggle to remain polite here). i've been modifying the original for a
> few years now, so that my web servers don't get hijacked by spammers
> just because my customers want to use MW's broken scripts.
>
> anyway, i've modified the nms FormMail so that it has all the
> enhancements that i added to the original.
>
> feel free to use any or all of the following patch, license is GNU GPL.
>
> the attached patch:
>
> 1. reads in a list of allowed recipient domains from
> /etc/formmail.recipients
A good idea, but I'm not sure we should implement it, since it
changes behavior from the original version.
>
> 2. checks the MX records of the domain part of each recipient
> address against an @valid_mx array
>
good idea
> 3. checks the NS records of the domain part of each recipient
> address against an @valid_ns array
good idea
>
> 4. uses Email::Valid to verify that addresses are valid
Can't use CPAN modules (see the NMS faq for reasons why) :(
Our own email verification is pretty good anyways
>
> 5. use CGI::Carp to log each usage of the script, both successful and
> failed.
We rolled our own version of CGI::Carp fatalsToBrowser, so this shouldn't be that
hard to implement. However, it does change behavior from the original...
>
> 6. adds X-Script-URL and X-Referring-URL headers to the sent message
> to make it easier to trace where any given formmail message was sent
> from. essential if you have hundreds of virtual hosts and hundreds
> of html forms which use the script.
>
seems like another good idea
> 7. adds Sender:, Reply-To:, and Errors-To: headers to the mail so that
> any bounces have a chance of actually being seen by the sender rather
> than getting lost in the webserver's unread mailbox.
>
I think our version already has this.
> 8. gets rid of an annoying warning message in the error.log if
> $Config{subject} is undefined
Should already be fixed.
>
>
>
> NOTE: features 1 to 3 eliminate the need for an ISP with multiple
> virtual hosting customers to edit the script every time s/he adds a new
> customer. if you host the DNS or the mail for the domain, then the
> domain is a valid recipient.
>
>
>
> not included in the patch is some code that reads my vhost configuration
> and adds all domains found to the @referrers array. it's not included
> because it's too specific to the way i configure virtual hosts. it's
> trivial to do- just read in a file and add each line.
>
>
> craig
>
> --
> craig sanders <ca...@ta...>
>
> Fabricati Diem, PVNC.
> -- motto of the Ankh-Morpork City Watch
>
|
|
From: Craig S. <ca...@ta...> - 2002-02-21 03:55:19
|
neat script. fixes most of the problems with the original (it was a
struggle to remain polite here). i've been modifying the original for a
few years now, so that my web servers don't get hijacked by spammers
just because my customers want to use MW's broken scripts.
anyway, i've modified the nms FormMail so that it has all the
enhancements that i added to the original.
feel free to use any or all of the following patch, license is GNU GPL.
the attached patch:
1. reads in a list of allowed recipient domains from
/etc/formmail.recipients
2. checks the MX records of the domain part of each recipient
address against an @valid_mx array
3. checks the NS records of the domain part of each recipient
address against an @valid_ns array
4. uses Email::Valid to verify that addresses are valid
5. use CGI::Carp to log each usage of the script, both successful and
failed.
6. adds X-Script-URL and X-Referring-URL headers to the sent message
to make it easier to trace where any given formmail message was sent
from. essential if you have hundreds of virtual hosts and hundreds
of html forms which use the script.
7. adds Sender:, Reply-To:, and Errors-To: headers to the mail so that
any bounces have a chance of actually being seen by the sender rather
than getting lost in the webserver's unread mailbox.
8. gets rid of an annoying warning message in the error.log if
$Config{subject} is undefined.
NOTE: features 1 to 3 eliminate the need for an ISP with multiple
virtual hosting customers to edit the script every time s/he adds a new
customer. if you host the DNS or the mail for the domain, then the
domain is a valid recipient.
not included in the patch is some code that reads my vhost configuration
and adds all domains found to the @referrers array. it's not included
because it's too specific to the way i configure virtual hosts. it's
trivial to do- just read in a file and add each line.
craig
--
craig sanders <ca...@ta...>
Fabricati Diem, PVNC.
-- motto of the Ankh-Morpork City Watch
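Items 6 and 7 of the patch amount to adding extra headers when the message is composed. A hedged sketch (header names are taken from the patch description; the %ENV values and $sender are stand-ins for what the real script would have at that point):

```perl
#!/usr/bin/perl -w
use strict;

# Sketch of items 6 and 7: trace headers plus bounce-friendly headers
# on the outgoing mail. These %ENV values are stand-ins; under a real
# webserver they are set by the server for each request.
$ENV{SERVER_NAME}  = 'www.example.com';
$ENV{SCRIPT_NAME}  = '/cgi-bin/FormMail.pl';
$ENV{HTTP_REFERER} = 'http://www.example.com/contact.html';
my $sender = 'visitor@example.com';   # hypothetical form submitter

my $extra_headers = join "\n",
    "Sender: $sender",
    "Reply-To: $sender",
    "Errors-To: $sender",
    "X-Script-URL: http://$ENV{SERVER_NAME}$ENV{SCRIPT_NAME}",
    "X-Referring-URL: $ENV{HTTP_REFERER}";

print $extra_headers, "\n";
```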
|
|
From: Wizard <wi...@ne...> - 2002-02-21 00:19:48
|
> I think that wwwboard is probably the program that we should lay siege to
> next anyhow so it would be interesting if people could go out and discover
> real or perceived vulnerabilities in the original version ... I know
> of a few (cf. The Alaskan Electrician) but I am sure there are more -
> mostly to do with the, er, baroque storage mechanism employed in the
> original program.
The problem with using a different storage scheme is that any changes that
are made would require an accompanying conversion tool that is simple enough
for a non-programmer to use. That means that it would have to meet the
following criteria:
1. As little shell-based interaction as possible.
2. Simple configuration of the tool.
3. It would have to be capable of taking into account all possible
scenarios for a posting. Some examples:
   o posts that refer to missing replies or missing threads
   o posts containing ALL sorts of HTML within the post
   o posts that have fields that exceed the limits for nms fields
That said, I like the idea of either moving to a database backend (I've been
playing around with DBD::XBase, and I should think that would work) or at a
minimum converting to XML, which would at least make parsing a lot easier. A
database would be best however, and should be pretty simple to implement (a
multiply-linked list or btree). DBD::XBase could probably be included in the
distribution (with permission), and we could just "use lib" to load it.
This would also allow migration to a full-fledged enterprise database,
should the need arise.
I would suggest PHP or HTML::Mason would be ideal, but that would require
stuff that very likely would not be universally available.
I'll be starting in again on my rewrite of my database package, so I'll try
to spend some time figuring in the possibility of WWWBoard and maybe
prototyping some functionality if that sounds reasonable.
Let me know,
Grant M.
P.S.> Just as a note, ADMIN_README is still very Matt Wright's. I'll try to
get to it tomorrow afternoon, but if anyone else can get to it sooner, that
would be great.
|
|
From: Wizard <wi...@ne...> - 2002-02-20 21:54:23
|
> you might consider expanding the allow list to be a pattern too so that
> you could deny 195.157.* and allow 195.157.10.* for instance.

I actually thought about that quite a bit, and couldn't think of any practical situation (at the time) where one would want to deny over 65000 addresses. The only possible situation would be if it was a private board, in which case you would make it allow/deny. The intent was just to handle abuse situations, but it would still have limitations, such as the abuser being an AOLer; do you deny all of AOL? That could seriously reduce one's traffic. I'll think about the scheme some more. Perhaps I could make the deny/allow-allow/deny order user-selectable. Or maybe something else ;-)

> Also you might want to look at the bit in FormMail.pl that someone did
> (sorry I can't remember who) that allows one to use CIDR notation for
> acceptable referers - it would be nice if we could allow a clueful
> administrator to allow or deny networks at a finer granularity than
> 2^8 chunks.

I'll take a look as soon as I have some time,

Grant M.
|
From: Jonathan S. <gel...@ge...> - 2002-02-20 20:57:31
|
On Wed, 20 Feb 2002, Wizard wrote:
> Back around 1997, I had made some security modifications to Matt
> Wright's WWWBoard for a friend of mine who was having some problems with it.
> In addition to restricting content-length, limiting re-posting, and a banner
> ad addition, I added a section which allowed for IP filtering. I finally got
> a chance to dig around, and came up with the attached code.
> It works through a deny/allow mechanism, which first denies the domain
> specified by the IP address, and then allows specific IPs within that
> address range. For example, "10.10.32.*" in the $deny string would deny
> postings by any users within that domain, however if "10.10.32.7" were in
> the $allow string, then that particular IP could still post. The IPs of all
> users are posted as an HTML comment to each post. This IS somewhat of an
> advanced option for a lot of WWWBoard users and may be a problem with the
> un-initiated admin, however. Any thoughts?
> Let me know,

In principle this looks great. I would go for sticking some code that does this in and guarding it with 'unless $emulate_matts_code', remembering of course to update the README. One suggestion I might make, though, is that you might consider expanding the allow list to be a pattern too, so that you could deny 195.157.* and allow 195.157.10.* for instance. Also you might want to look at the bit in FormMail.pl that someone did (sorry I can't remember who) that allows one to use CIDR notation for acceptable referers - it would be nice if we could allow a clueful administrator to allow or deny networks at a finer granularity than 2^8 chunks.

I think that wwwboard is probably the program that we should lay siege to next anyhow, so it would be interesting if people could go out and discover real or perceived vulnerabilities in the original version ... I know of a few (cf. The Alaskan Electrician) but I am sure there are more - mostly to do with the, er, baroque storage mechanism employed in the original program.

Oh BTW, while you are in wwwboard could you fix the threading that I appear to have broken a while ago ;-}

/J\
-- Jonathan Stowe | <http://www.gellyfish.com> | This space for rent |
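CIDR matching of the kind mentioned can be done with core Perl only (in keeping with the project's no-CPAN rule). This is a hypothetical helper, not the actual FormMail.pl code: both addresses are packed to 32-bit integers and their masked prefixes compared.

```perl
#!/usr/bin/perl -w
use strict;

# Hypothetical sketch of CIDR matching using only core Perl, in the
# spirit of the FormMail.pl referer check discussed above.
sub ip_in_cidr {
    my ($ip, $cidr) = @_;
    my ($net, $bits) = split m{/}, $cidr;
    # Pack each dotted quad into a 32-bit network-order integer.
    my $ip_n  = unpack 'N', pack 'C4', split /\./, $ip;
    my $net_n = unpack 'N', pack 'C4', split /\./, $net;
    # Build the netmask for the prefix length.
    my $mask = $bits == 0 ? 0 : (0xFFFFFFFF << (32 - $bits)) & 0xFFFFFFFF;
    return (($ip_n & $mask) == ($net_n & $mask)) ? 1 : 0;
}

print ip_in_cidr('195.157.10.7', '195.157.10.0/24'), "\n";  # 1
print ip_in_cidr('195.157.11.7', '195.157.10.0/24'), "\n";  # 0
```

This gives finer granularity than octet wildcards: a /28, for example, denies a 16-address block rather than a whole 2^8 chunk.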
|
From: Wizard <wi...@ne...> - 2002-02-20 15:28:15
|
Joseph Ryan wrote:
> Thats a really great idea; however, not much good if the admin doesn't know
> the offending ip address :) Should there also be an option to record ip
> addresses on each post?

It's in there. It records the IP address and the Referrer as an HTML comment in the HTML for each post. That way, you can also ensure that the user isn't bastardizing the submit form on another machine.

Grant M.
|
From: Joseph R. <rya...@os...> - 2002-02-20 14:55:37
|
That's a really great idea; however, it's not much good if the admin doesn't know the offending ip address :) Should there also be an option to record ip addresses on each post?

----- Original Message -----
From: "Wizard" <wi...@ne...>
To: <nms...@li...>
Sent: Wednesday, February 20, 2002 7:38 AM
Subject: [Nms-cgi-devel] Just a suggestion...

> Back around 1997, I had made some security modifications to Matt
> Wright's WWWBoard for a friend of mine who was having some problems with it.
> In addition to restricting content-length, limiting re-posting, and a banner
> ad addition, I added a section which allowed for IP filtering. I finally got
> a chance to dig around, and came up with the attached code.
> It works through a deny/allow mechanism, which first denies the domain
> specified by the IP address, and then allows specific IPs within that
> address range. For example, "10.10.32.*" in the $deny string would deny
> postings by any users within that domain, however if "10.10.32.7" were in
> the $allow string, then that particular IP could still post. The IPs of all
> users are posted as an HTML comment to each post. This IS somewhat of an
> advanced option for a lot of WWWBoard users and may be a problem with the
> un-initiated admin, however. Any thoughts?
> Let me know,
> Grant M.
|
From: Wizard <wi...@ne...> - 2002-02-20 12:40:54
|
Back around 1997, I had made some security modifications to Matt
Wright's WWWBoard for a friend of mine who was having some problems with it.
In addition to restricting content-length, limiting re-posting, and a banner
ad addition, I added a section which allowed for IP filtering. I finally got
a chance to dig around, and came up with the attached code.
It works through a deny/allow mechanism, which first denies the domain
specified by the IP address, and then allowing specific IPs within that
address range. For example, "10.10.32.*" in the $deny string would deny
postings by any users within that domain, however if "10.10.32.7" were in
the $allow string, then that particular IP could still post. The IPs of all
users are posted as an HTML comment to each post. This IS somewhat of an
advanced option for a lot of WWWBoard users and may be a problem with the
un-initiated admin, however. Any thoughts?
Let me know,
Grant M.
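The deny/allow mechanism described above could be sketched as follows (variable names and pattern handling are assumptions based on the description, not the actual attached code): deny patterns block a range, and specific entries in the allow list override them.

```perl
#!/usr/bin/perl -w
use strict;

# Hypothetical sketch of the deny/allow IP filter described above.
# Patterns use '*' as a wildcard for a whole octet, e.g. '10.10.32.*'.
my @deny  = ('10.10.32.*');
my @allow = ('10.10.32.7');

sub ip_matches {
    my ($ip, $pattern) = @_;
    # Turn the glob-style pattern into an anchored regex.
    my $re = join '\.',
        map { $_ eq '*' ? '\d+' : quotemeta } split /\./, $pattern;
    return $ip =~ /^$re$/ ? 1 : 0;
}

sub post_permitted {
    my ($ip) = @_;
    # Specific allows override the deny patterns.
    foreach my $p (@allow) { return 1 if ip_matches($ip, $p) }
    foreach my $p (@deny)  { return 0 if ip_matches($ip, $p) }
    return 1;
}

print post_permitted('10.10.32.7'),  "\n";  # 1 (explicitly allowed)
print post_permitted('10.10.32.99'), "\n";  # 0 (denied by pattern)
print post_permitted('192.168.1.1'), "\n";  # 1 (not denied)
```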
|
|
From: DH <cra...@ya...> - 2002-02-15 00:52:41
|
Just a reminder: let's keep the list strictly text email. I read the digest every day, and there shouldn't be any attachments posted to the list (this reminder is in response to the MS Messenger virus thing).

Also, what do you guys think of SourceForge's new Terms of Service? There is a thread on Slashdot about it:

SourceForge Terms of Service Change, Users Unhappy (432)
http://slashdot.org/developers/02/02/13/188234.shtml?tid=150

__________________________________________________
Do You Yahoo!? Send FREE Valentine eCards with Yahoo! Greetings!
http://greetings.yahoo.com
|
From: Jonathan S. <gel...@ge...> - 2002-02-14 21:01:12
|
On Thu, 14 Feb 2002, Olivier Dragon wrote:
> On Thu, Feb 14, 2002 at 09:02:04AM -0800, Wizard wrote:
> > > - putting all possible world r/w files below the document root (or
> > > above, depending how you see this)
> > This would only work for the .txt files. The HTML must be under the docroot.
>
> Yes I know. But the less r/w files exposed, the better, no? Or is this a
> false sense of pseudo security?
>
> And what about the directories? I've heard of an exploit using something
> like ../../../../../../../../../../../tmp as cgi-input to gain write
> access to a machine. Again, I'm not a security expert and I don't know
> any methods of gaining access to a machine, but it seems to me that the
> more holes plugged, the better.

The NMS programs aren't vulnerable to exploits of that sort themselves, but of course the files are vulnerable to those sorts of holes in other people's programs running on the same server.

Of course we have no way of knowing about the configuration of any given webserver - however, we probably could have the program files attempt to chmod themselves 0550 and their data directories 0750, and set a conservative umask of 0077 for the creation of new files. This won't save us on a shared web server where all of the programs for all of the users run with the same uid, but it will buy us some security on dedicated machines or shared servers where each user has their CGI programs run SuExec to their own uid.

At the end of the day the programs are only ever going to be as secure as the servers that they are running on, and we can certainly expect that there are places where they are going to have to be chmod 555 because of the configuration of the server - we could put an iterative description of 'try 0500, then 0550, then 0555 ...' in the README but I think people are going to get bored with that very quickly ;-}

/J\
-- Jonathan Stowe | <http://www.gellyfish.com> | This space for rent |
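The self-tightening idea above (0750 data directories, umask 0077 for new files) can be sketched in a few lines. This is an illustration with made-up paths, not project code; under umask 0077 a file opened for writing comes out 0600.

```perl
#!/usr/bin/perl -w
use strict;

# Sketch of the permissions scheme discussed above: the modes (0750
# for data directories, umask 0077 so new files end up 0600) are the
# values suggested in the message. The path is a stand-in.
umask 0077;

my $datadir = "/tmp/nms_perms_demo_$$";
mkdir $datadir, 0750 or die "mkdir: $!";
chmod 0750, $datadir or die "chmod: $!";  # apply the mode regardless of umask

open DATA_FILE, ">$datadir/messages.txt" or die "open: $!";
print DATA_FILE "first post\n";
close DATA_FILE;

my $dir_mode  = (stat $datadir)[2] & 07777;
my $file_mode = (stat "$datadir/messages.txt")[2] & 07777;
printf "data dir:  %04o\n", $dir_mode;   # 0750
printf "data file: %04o\n", $file_mode;  # 0600

unlink "$datadir/messages.txt";
rmdir $datadir;
```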
|
From: Jonathan S. <gel...@ge...> - 2002-02-14 21:01:05
|
On Thu, 14 Feb 2002, Wizard wrote:
> There really is no totally secure way of securing r/w files on a webserver,
> as the webserver UID is the one that needs to write to them, and this is the
> most likely UID target for exploits (but it has become more rare).

Ideally, a shared webserver would have some kind of mechanism such as Apache's SuExec whereby each user's CGI programs get run under a separate UID in a relatively secure fashion. Unfortunately we cannot expect that of our constituency :(

Whilst a lot of the files do need to be readable *and* writeable (Guestbook, FFA) - for a certain number of them I think that we could sysopen them for writing, but with a mode of 0400.

/J\
-- Jonathan Stowe | <http://www.gellyfish.com> | This space for rent |
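The sysopen idea works because the creation mode only applies to the file on disk, not to the handle obtained by the creating open: a file created 0400 is owner-read-only from the moment it exists, yet the script that created it can still write through its open handle. A minimal illustration (not project code; the path and data are made up):

```perl
#!/usr/bin/perl -w
use strict;
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

my $file = "/tmp/nms_sysopen_demo_$$";

# Create the file with mode 0400: the handle we hold stays writable,
# but on disk the file is owner-read-only from the start.
# umask 0 so the requested mode is not masked down further.
umask 0;
sysopen(LOG, $file, O_WRONLY | O_CREAT | O_EXCL, 0400) or die "sysopen: $!";
print LOG "guestbook entry\n";
close LOG;

my $mode = (stat $file)[2] & 07777;
printf "mode: %04o\n", $mode;   # 0400
unlink $file;
```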
|
From: Olivier D. <dr...@sh...> - 2002-02-14 15:07:19
|
On Thu, Feb 14, 2002 at 09:02:04AM -0800, Wizard wrote:
> > - putting all possible world r/w files below the document root (or
> > above, depending how you see this)
> This would only work for the .txt files. The HTML must be under the docroot.

Yes, I know. But the fewer r/w files exposed, the better, no? Or is this a false sense of pseudo-security?

And what about the directories? I've heard of an exploit using something like ../../../../../../../../../../../tmp as cgi-input to gain write access to a machine. Again, I'm not a security expert and I don't know any methods of gaining access to a machine, but it seems to me that the more holes plugged, the better.

Thanks for the opinion. I'm trying to get a better feel for security and this is helping me a lot. And who knows, it might help the project too :o)

-Olivier
--
+----------------------------------------------+
| Olivier Dragon dr...@sh... |
| Software Engineering II, McMaster University |
+----------------------------------------------+
|
From: Wizard <wi...@ne...> - 2002-02-14 14:04:13
|
> - disabling r/w access to others but making sure the files are r/w to
> group and set the group to www-data (or whatever the httpd is running
> as)

Remember that the webserver is the ONLY user accessing these files (regardless of whether it's through someone's browser or not). So changing the group permissions doesn't do anything unless someone is logged in locally.

> - putting all possible world r/w files below the document root (or
> above, depending how you see this)

This would only work for the .txt files. The HTML must be under the docroot.

There really is no totally secure way of securing r/w files on a webserver, as the webserver UID is the one that needs to write to them, and this is the most likely UID target for exploits (but it has become more rare). The one way that I have seen it done (and I don't recommend it for writeable files) is to have the first few lines of the file like so:

    #!/usr/bin/perl
    exit;
    # text data below

And then name the file (something).pl and make it r/x. The script ignores these lines when reading the file, and just reads the data below. The result is that when someone tries to read the file in a browser, the script is executed rather than displayed. The problems are that the file needs to be in an ExecCGI directory, and it can't be writable, as that could open the door to malicious code being inserted.

I think that the existing system of file storage should be fine. You could suggest that the password file be moved somewhere more secure, but I think even that might be overkill. As long as the server is secure, and the user uses decent passwords (no dictionary words, at least 7 chars, and mixed case), it should be fine.

Grant M.
|
From: Olivier D. <dr...@sh...> - 2002-02-14 13:15:19
|
I'm no security expert, and I'd like to ask everyone here their opinion on a certain security issue: world r/w directories and files. Is this an issue when it comes to www and httpd security?

For example, the wwwboard has a directory (messages/) that is world r/w, as well as several files like data.txt, password.txt and wwwboard.html, and I feel concerned about having those files accessible by web browsers and other programs through my httpd. While it is impossible to remedy such permissions on these files and directories, unless using a database such as mysql or postgres (which isn't an option for this project, to keep compatibility with MWS), would such measures as the following help:

- disabling r/w access to others but making sure the files are r/w to group, and setting the group to www-data (or whatever the httpd is running as)
- putting all possible world r/w files below the document root (or above, depending how you see this)

My idea is *not* to make this the standard, but maybe to include an optional security section in the README that deals with this, if of course this is a security issue.

Ideas? Suggestions?

-Olivier
--
+----------------------------------------------+
| Olivier Dragon dr...@sh... |
| Software Engineering II, McMaster University |
+----------------------------------------------+
|
From: Paul R. <pa...@ro...> - 2002-02-14 01:49:59
|
http://slashdot.org/article.pl?sid=02/02/14/000257&mode=thread&tid=109

"Sequence: Get messaged "Go To http://www.masenko-media.net/cool.html NoW !!!" or something similar with another URL. Follow the link. That webpage contains malicious code which gets your messenger contacts and sends a similar message to your contacts. It looks like it uses a vulnerability in formmail.pl as well, although I'm not exactly sure how..."

I weighed in with one suggested solution :-)

http://slashdot.org/comments.pl?sid=27946&cid=3004573

I should have said "minor" contributor, but too late now...

-paul
|
From: Nick C. <ni...@cl...> - 2002-02-14 00:08:31
|
On Wed, Feb 13, 2002 at 03:36:46PM -0800, Nick Cleaton wrote:
> uid=68644(nickjc) gid=100(users) groups=100(users),40625(nms-cgi)
> formmail FormMail.pl,1.35,1.36
> Wed Feb 13 15:36:46 PST 2002
> Update of /cvsroot/nms-cgi/formmail
> In directory usw-pr-cvs1:/tmp/cvs-serv11480
>
> Modified Files:
> FormMail.pl
> Log Message:
> (This is the log message for the previous checkin)
> * reworked check_email

With so many different people (including me) adding clauses to the 'if' statement, it was getting too tangled. I've re-written check_email() from scratch; everyone please look and find the errors. Did you recently add to check_email? If so, please check that what you blocked is still blocked.

It no longer allows SPAM relaying if $emulate_matts_code is set. It does allow pretty much anything that's at all like a real email address, so hopefully it will still be a drop-in replacement for 99% of users.

> * made it produce debugging output when rejecting recipients

Detailed reporting on rejected recipients if $DEBUGGING. That should help the remaining 1% track down any issues.

> * sort order: doc correction

Someone pointed this out on the list a few weeks ago, and I thought it had been fixed, but it hadn't.

> * doc typo
> * added a way to keep the email address out of the form (user request)
> * POST_MAX and DISABLE_UPLOADS stuff
> * restricted the body attribute inputs to sane values
> * squished a couple of warnings
> * allowed relative URLs in check_url_valid

-- Nick
|
From: Jonathan S. <gel...@ge...> - 2002-02-13 13:55:39
|
On Tue, 12 Feb 2002, Wizard wrote:
> > , nope, the
> > DOCUMENT_URI still doesn't show up.
> I think it should be "REQUEST_URI", not "DOCUMENT_URI". Although both work
> on Apache, only "REQUEST_URI" is a documented SSI variable.
> "REQUEST_URI" is also the same as "SCRIPT_NAME" when called from a CGI script,
> but I don't think it is passed in SSI.

The stuff that Suresh posted earlier would indicate that this is not the case for Xitami.

/J\
-- Jonathan Stowe | <http://www.gellyfish.com> | This space for rent |
|
From: Nick C. <ni...@cl...> - 2002-02-13 08:52:08
|
On Tue, Feb 12, 2002 at 10:23:24PM -0500, Joseph F. Ryan wrote:
> The reason he is having trouble is because the script is not passing the
> taint checking.
> When it hits the : in the path, the pattern fails because a : is not in the
> character class in $dirname =~ m|^([-+@\w./]+)$|. Does anyone (such as our
> taint checking experts) see a problem with adding a colon to the character
> class?

That sounds fine to me.

-- Nick
|
From: Joseph F. R. <rya...@os...> - 2002-02-13 03:23:28
|
The reason he is having trouble is because the script is not passing the
taint checking.
When it hits the : in the path, the pattern fails because a : is not in the
character class in $dirname =~ m|^([-+@\w./]+)$|. Does anyone (such as our
taint checking experts) see a problem with adding a colon to the character
class?
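The proposed fix can be sketched as adding ':' to the character class so that Windows drive-letter paths untaint cleanly. A minimal illustration, not the real search.pl code; note that the reported path ('e:/my development/...') also contains a space, which the original class does not allow either, so a space is added here too as a further assumption beyond the colon fix itself.

```perl
#!/usr/bin/perl -w
use strict;

# Untaint a directory name as in search.pl, with ':' (and, as an extra
# assumption, a space) added to the character class so Windows paths
# like 'e:/my development/indigoperl56/cgi-bin' pass the check.
sub untaint_dirname {
    my ($dirname) = @_;
    $dirname =~ m|^([-+@\w ./:]+)$|
        or die "suspect directory name: $dirname";
    return $1;   # the captured group is considered untainted
}

print untaint_dirname('e:/my development/indigoperl56/cgi-bin'), "\n";
print untaint_dirname('../htdocs'), "\n";
```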
>Message: 2
>From: Joost Lommers <Joo...@is...>
>To: "'nms...@li...'"
> <nms...@li...>
>Date: Tue, 12 Feb 2002 09:30:26 +0100
>Subject: [Nms-cgi-support] I cannot get search.pl to search the right
>directories under Win
> 2000 + IndigoPerl 5.6
>
>Hi,
>
>I am trying to get the Simple Search script working under Win 2000 with
>IndigoPerl 5.6.
>
>IndigoPerl is installed in E:\My Development\IndigoPerl56. In this directory
>are the Perl and Apache subdirectories like \bin, \cgi-bin, \htdocs,
>\perl-bin, etc. located. Your search.pl script is in \cgi-bin, your
>search.html page is in \htdocs.
>
>I have trouble setting the $basedir variable. Whatever way I define it, I
>either get no search results or various appliction errors. E.g.
>
> Setting
> my $basedir = '/IndigoPerl56/htdocs';
> my $baseurl = '/IndigoPerl56/htdocs';
> my @files = ('*.html');
> results in no hits when I search on "simple" + AND + insensitive ("simple"
>should give a hit on search.html in \htdocs).
>
>
> Setting
> my $basedir = '/htdocs';
> my $baseurl = '/htdocs';
> my @files = ('*.html');
> results in no hits when I search on "simple" + AND + insensitive ("simple"
>should give a hit on search.html in \htdocs).
>
> Setting
> my $basedir = '../htdocs';
> my $baseurl = '../htdocs';
> my @files = ('*.html');
> results in a hit on search.html (finally), but the script stops with an
>application error:
> "suspect directory name: e:/my development/indigoperl56/cgi-bin at
>e:\MYDEVE~1\INDIGO~1\cgi-bin\search.pl line 366."
>
> Adding some print statements, I can see that the script first searches
>../htdocs, but the second directory it tries to search is e:/my
>development/indigoperl56/cgi-bin, on which it dies. In my opinion, the
>script shouldn't search this directory because it is at the same level as
>../htdocs (full path is e:/my development/indigoperl56/htdocs). When I add a
>subdirectory to ../htdocs (e.q. ../htdocs/test), I also see that this
>subdirectory is never searched.
>
>Can you help? I am a programmer, but new to Perl and this pattern matching
>stuff is way beyond my capabilities. I tried to understand the line the
>script dies on ($dirname =~ m|^([-+@\w./]+)$| or die "suspect directory
>name: $dirname";) but even with a Perl book and the IndigoPerl on-line
>documentation I still don't understand what is going on here. Sorry.
>
>Thanks for any advice. 8-)
>Joost Lommers. ISES International B.V.
>
>mailto:Joo...@is... / Mobile: +31 (0)650 664 634 /
>Mail: Postbus 2003, 5300 CA Zaltbommel / Visit: Hogeweg 65, 5301 LJ
>Zaltbommel / E-visit: http://www.ises-international.com/
>
>"I never said I always make sense"
>
>
>
>
|