[mod-security-users] Standard testing methodology for virtual patching
From: Kyle R. O. <ky...@st...> - 2021-07-01 16:58:44
Hi,

Does anyone have any tips or recommendations for a standard way of testing/evaluating the effectiveness of a set of virtual patches? I've written a little Python script that conditionally includes CRS rules for a given location and parameter(s), based on a vulnerability report generated by scanners like ZAP.

So far, the best I can come up with is a before/after active scan with ZAP, followed by a before/after scan with a second tool that wasn't used to create the virtual patches. I know these virtual patches reduce the rate of false positives compared to setting up CRS out of the box, since they won't block requests to locations/parameters that aren't associated with a known vulnerability. However, I can't think of a particularly useful way of demonstrating this beyond picking a random vulnerable location and parameter and "attacking" it with words from a dictionary or a book.

I've read a couple of the OWASP pages on virtual patching, but the testing methodology they describe seems fairly manual and ad hoc.

Thanks,
Kyle
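
P.S. In case it helps to make this concrete, the generation step is roughly the following, heavily simplified. It assumes ZAP's standard JSON report layout (site -> alerts -> instances) and CRS 3.x rule file names; the alert-name-to-rule-file mapping and the CRS path are just illustrative placeholders:

#!/usr/bin/env python3
"""Sketch: read a ZAP JSON report and emit Apache/ModSecurity config
that includes the relevant CRS rule files only for flagged locations."""

import json
import sys
from urllib.parse import urlparse

CRS_DIR = "/etc/modsecurity/crs/rules"  # placeholder path

# Illustrative mapping from ZAP alert names to CRS rule files.
ALERT_TO_CRS = {
    "SQL Injection": "REQUEST-942-APPLICATION-ATTACK-SQLI.conf",
    "Cross Site Scripting (Reflected)": "REQUEST-941-APPLICATION-ATTACK-XSS.conf",
}

def generate_patches(report_path):
    with open(report_path) as f:
        report = json.load(f)

    # Collect the CRS files needed per path, deduplicated.
    patches = {}
    for site in report.get("site", []):
        for alert in site.get("alerts", []):
            crs_file = ALERT_TO_CRS.get(alert.get("name", ""))
            if not crs_file:
                continue
            for inst in alert.get("instances", []):
                path = urlparse(inst.get("uri", "")).path
                # inst.get("param") is also available for finer,
                # per-parameter scoping; omitted here for brevity.
                if path:
                    patches.setdefault(path, set()).add(crs_file)

    # Emit a <Location> block per flagged path, including only the
    # CRS rule files relevant to the alerts seen there.
    for path, crs_files in sorted(patches.items()):
        print(f'<Location "{path}">')
        for crs_file in sorted(crs_files):
            print(f"    Include {CRS_DIR}/{crs_file}")
        print("</Location>")

if __name__ == "__main__":
    generate_patches(sys.argv[1])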
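
For the before/after comparison, at the moment I'm thinking of nothing fancier than diffing per-alert instance counts between the two report files, along these lines (same assumed report layout as above; file names are placeholders):

import json
from collections import Counter

def alert_counts(report_path):
    """Count alert instances per alert name in a ZAP JSON report."""
    counts = Counter()
    with open(report_path) as f:
        report = json.load(f)
    for site in report.get("site", []):
        for alert in site.get("alerts", []):
            counts[alert.get("name", "unknown")] += len(alert.get("instances", []))
    return counts

before = alert_counts("scan-before.json")
after = alert_counts("scan-after.json")
for name in sorted(set(before) | set(after)):
    print(f"{name}: {before.get(name, 0)} -> {after.get(name, 0)}")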