From: Yuri T. <qar...@gm...> - 2008-07-17 23:30:07
> Something quite similar to this was checked in [2] a few months back.
> I considered doing exactly as you suggested, but it seemed a little
> too restrictive so I used python's url parser to leave a little more
> flexibility. In any event, it is only available in safe_mode. See the
> docstring in the patch for an explanation.

Oh, indeed:

$ cat > test.txt
[foo][alert]

[alert]: javascript:alert(42)
$ python markdown.py -s remove test.txt
<p><a href="">foo</a>

Perhaps what we need is the documentation...

> I'm not completely convinced it covers every possibility. Actually as
> http://ha.ckers.org/xss.html points out, there very well may be as yet
> undiscovered possibilities that we don't know to check for.

Yes, I don't think this is safe - it assumes the behavior of a
standards-compliant browser, but won't prevent some XSS attacks that
target IE6.

I think being both flexible and secure is a balancing act that is best
left to a good XSS filter, and I don't think it's our job to write one.
(Is there a good one for Python that we can just recommend for people
to use?) To the extent that we implement a "safe" mode, I think we
should go for the most restrictive approach: if we are not (pretty
much) sure a URL is safe, throw it out in safe mode. For me, this means
the URL should start with "http://", "https://", "/" or "#".

- yuri

--
http://sputnik.freewisdom.org/
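
P.S. For concreteness, a minimal sketch of the kind of restrictive
check I mean. The helper name and the prefix list here are purely
illustrative - this is not code from the patch or from markdown.py:

    # Hypothetical sketch, not the actual safe_mode implementation.
    # Only URLs starting with an explicitly allowed prefix survive;
    # everything else is thrown out, rather than trying to enumerate
    # every dangerous scheme (javascript:, vbscript:, data:, ...).
    ALLOWED_PREFIXES = ("http://", "https://", "/", "#")

    def is_safe_url(url):
        # Strip surrounding whitespace first so that something like
        # "  javascript:alert(42)" is judged on its real scheme.
        return url.strip().startswith(ALLOWED_PREFIXES)

    for candidate in ("http://example.com/", "/relative/path", "#anchor",
                      "javascript:alert(42)", "JaVaScRiPt:alert(42)"):
        status = "kept" if is_safe_url(candidate) else "removed"
        print("%s -> %s" % (candidate, status))

The point is that anything not explicitly recognized gets rejected,
which is the opposite of trying to guess every trick a browser (IE6
included) might fall for.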