Thread: [mod-security-users] ModSecurity version 3.0.1 announcement
From: Felipe C. <FC...@tr...> - 2018-04-02 12:45:31
It is a pleasure to announce the release of ModSecurity version 3.0.1 (libModSecurity). This version contains improvements, fixes, and new features.

The most important new feature is support for libmaxminddb, popularly known as the new version of the GeoIP library.

There is a splendid performance upgrade in v3.0.1. A significant amount of work went into handling memory usage more efficiently, which led to great improvements in terms of latency and requests per second.

The full list of changes can be found in the project CHANGES file, available here:
- https://github.com/SpiderLabs/ModSecurity/releases/tag/v3.0.1/CHANGES

The list of open issues is available on GitHub:
- https://github.com/SpiderLabs/ModSecurity/labels/3.x

Thanks to everybody who helped in this process: reporting issues, making comments and suggestions, sending patches, and so on.

Further details on the compilation process for ModSecurity v3 can be found in the project README:
- https://github.com/SpiderLabs/ModSecurity/tree/v3/master#compilation

Complementary documentation for the connectors is available here:
- nginx: https://github.com/SpiderLabs/ModSecurity-nginx/#compilation
- Apache: https://github.com/SpiderLabs/ModSecurity-apache/#compilation

IMPORTANT: ModSecurity version 2 will remain available and maintained in parallel with version 3. There is no ETA for deprecating version 2.x. New features and major improvements will be implemented in version 3.x; fixes for security issues and major bugs are planned to be backported. Version 2 and version 3 have completely independent development/release cycles.

Br.,
Felipe “Zimmerle” Costa
Security Researcher, Lead Developer ModSecurity.
Trustwave | SMART SECURITY ON DEMAND
www.trustwave.com
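[Editor's note: the libmaxminddb support announced above plugs into ModSecurity's existing geolocation directives. A minimal sketch of how it might be configured follows; the database path and rule ids are hypothetical, and the exact v3.0.1 behavior should be checked against the reference manual.]

```apache
# Load a MaxMind DB format database (libmaxminddb replaces the legacy GeoIP library).
SecGeoLookupDb /usr/share/GeoIP/GeoLite2-Country.mmdb

# Populate the GEO collection from the client address, then act on it.
SecRule REMOTE_ADDR "@geoLookup" "id:900100,phase:1,pass,nolog"
SecRule GEO:COUNTRY_CODE "!@streq US" "id:900101,phase:1,deny,status:403,log,msg:'Request from outside US'"
```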
From: Christian F. <chr...@ne...> - 2018-04-03 20:18:57
Congratulations on this release, Felipe.

I can confirm the great improvement in terms of requests per second. A brief test against nginx/modsecurity running on localhost gave me a speedup of factor 5 with a CRS 3.0.2 default installation.

The changelog mentions three performance improvements. Which one had this dramatic effect?

And finally: Apache/ModSec 2.9.2 still trumps Nginx/ModSec 3.0.1 big time in my lab setup. What is your projection for future performance improvements on the new 3.0 release line? When will it be ready to replace existing installations with similar performance?

Cheers,

Christian

On Mon, Apr 02, 2018 at 12:30:07PM +0000, Felipe Costa wrote:
> [ ... ]
--
https://www.feistyduck.com/training/modsecurity-training-course
https://www.feistyduck.com/books/modsecurity-handbook/
mailto:chr...@ne...
twitter: @ChrFolini
From: Robert P. <rpa...@fe...> - 2018-04-04 01:15:35
Christian,

> On Apr 3, 2018, at 13:18, Christian Folini <chr...@ne...> wrote:
>
> Congratulations on this release Felipe.
>
> I confirm the great improvement in terms of requests per second. A brief
> test against nginx/modsecurity running on localhost brought me a speedup of
> factor 5 with a CRS 3.0.2 default installation.

Can you share the specifics of your evaluation? Performance in modsec + crs will vary greatly depending on the request payload. Soon I would like to do some before-and-after trace profiling of these releases to better illustrate how libmodsec performs in various conditions.
From: Felipe Z. <fe...@zi...> - 2018-04-05 18:28:26
Hi,

On Thu, Apr 5, 2018 at 2:08 PM Robert Paprocki <rpa...@fe...> wrote:
> Hi Felipe,
>
> On Thu, Apr 5, 2018 at 6:33 AM, Felipe Zimmerle <fe...@zi...> wrote:
>> Hi,
>>
>> There are a few things in the code that can be improved. There are even
>> "TODO:" markings. But at a certain point you may want to look at the rules.
>> That is very important.
>
> I'm not sure if I emphasized my last point well enough. Yes, the design of
> the rules impacts performance (that's a given in a data-driven model), but
> the engine itself suffers from some severe limitations at this point.
>
> I created a very dumb, sizeable set of rules:
> https://gist.github.com/p0pr0ck5/e0c73606f0be8ab93edb729e6cb56c5d
>
> Using the simple harness I mentioned earlier
> (https://gist.github.com/p0pr0ck5/9b2c414641c9b03d527679d0c8cb7d86), we
> see a sizeable decrease in performance as the number of dumb rules
> increases. With 200 rules, a single process can evaluate around 800 full
> transactions per second. When we double the number of rules processed
> (remaining equally distributed throughout phases 1-5), around 450
> transactions are processed, and another doubling in rule size (to 800
> rules) results in a throughput of about 200 transactions/sec.

I'm not sure I understood, but that sounds about right to me. The same holds for any scripting language:

[a] for (i = 0; i < 10; i++) { echo "a"; }

[b] for (i = 0; i < 100; i++) { echo "b"; }

[c] for (i = 0; i < 10; i++) { for (j = 0; j < 10; j++) { echo "b"; } }

In that case, [a] will be faster than [b], and [b] is likely to run in about the same time as [c]. The response time is directly proportional to the task that needs to be computed. That is pretty much true of anything that runs on a computer.

> Of course, this simple example does a lot of unnecessary memory
> transactions with ModSecurity objects; the Nginx connector is a bit more
> performant. With 800 rules, a single worker process still reports only
> triple-digit RPS:
>
> $ wrk -t 5 -c 50 -d 5s http://localhost:8080/index.html
> Running 5s test @ http://localhost:8080/index.html
>   5 threads and 50 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency   133.28ms  230.40ms   1.92s    92.92%
>     Req/Sec   133.53     53.89    410.00     77.39%
>   3085 requests in 5.03s, 2.50MB read
> Requests/sec:    612.94
> Transfer/sec:    508.79KB
>
> Yes, this is better than the CRS performance, but it's still orders of
> magnitude from what Nginx is capable of processing on its own. So we cannot
> say that such poor performance with the CRS is related to the design of the
> ruleset itself. There is a clear relationship between the *number* of
> rules that libmodsecurity has to process, and performance. This further
> confirms my original assertion, that there are fundamental limitations in
> core ModSecurity code that hinder performance.

I think you mean that nginx on its own can deliver more content than nginx with ModSecurity, which can be considered true regardless of how good or bad the performance of ModSecurity is. Consider ModSec's load "B" and nginx's load "A": A < A + B. Please correct me if I understood it wrong.

> Furthermore, there are no TODO comments regarding the performance of Rule
> object evaluation:
>
> poprocks@mini-vm:~/src/ModSecurity/src$ git grep TODO
> parser/seclang-scanner.cc:/* TODO: this is always defined, so inline it */
> parser/seclang-scanner.cc: /** TODO: Implement the server logging mechanism. */
> parser/seclang-scanner.cc: /* TODO. We should be able to replace this entire function body
> parser/seclang-scanner.ll: /** TODO: Implement the server logging mechanism. */
> request_body_processor/json.cc: * TODO: make UTF8 validation optional, as it depends on Content-Encoding
> transaction.cc: /** TODO: Check variable */
> transaction.cc: /** TODO: Check variable */
> transaction.cc: /** TODO: Check variable */
> transaction.cc: /** TODO: Check variable */
> transaction.cc: /** TODO: write audit_log D part. */
> transaction.cc: /** TODO: write audit_log G part. */
> transaction.cc: /** TODO: write audit_log H part. */
> transaction.cc: /** TODO: write audit_log I part. */
> transaction.cc: /** TODO: write audit_log J part. */
> transaction.cc: /** TODO: write audit_log K part. */
> utils/acmp.cc: * TODO: This code comes from ModSecurity 2.9.0 there are two memory leaks here
> utils/msc_tree.h: * TODO: This is an improved copy of the ModSecurity 2.9 file, this may need

Sure. I will investigate.

> There is a lot of needlessly repeated work done in Rule::evaluate.
> Collection values, exemptions, transformations, etc., these can all be
> cached based on some key relevant to the rule. I implemented a lot of these
> design patterns for lua-resty-waf, and it's really helped with performance;
> I'd love to share some detailed thoughts on this in a development
> discussion setting. And, as flamegraphs show, there's a lot of std
> container allocation/management that's done that I suspect could be
> optimized away. But these are fundamental design changes that would have a
> major impact, and would require careful study and planning. This is
> something I doubt could be handled by a community contribution.

How so? I don't understand why a cache would be that difficult to implement. As a matter of fact, we used a cache for transformations because it sounded like a popular solution, and we moved back to not having a cache when it was shown to be bad for performance, as illustrated here:
https://github.com/SpiderLabs/ModSecurity/commit/37619bae778183159beee455a5f0d2a0fe02a883#diff-0fae944d3cf096e2fbb8e0063ce0b585

> Ultimately we're talking about a refactor of some significant hot-path
> code. And Felipe, please understand that I and the community deeply respect
> the work that's been done on this project, but frankly this doesn't seem to
> be a priority for you, based on the responses in this thread. And if in
> your view I'm way off base in my assumptions, I'd love to hear it, and
> review some data and example execution that contradicts what I, Christian,
> Jai, and Andrei have shown here; if not, at least an acknowledgement
> from either Trustwave or Nginx that the inclusion of ModSecurity in Nginx
> results in a substantial, meaningful variance in throughput capacity.

I cannot see what my priorities have to do with this thread :D :) Performance is a very important subject which has been, and will be, under discussion forever. It is correct to say that there will always be improvements to be made, and you can count on my attention to that. As a matter of fact, there is a lot of room for improvement.

Myself, and [I think] everybody who uses ModSecurity, will be very happy to receive your contribution, as we have already received a lot of contributions from Andrei. You just have to send the patches. As I mentioned before, I am anxious for that. I think everybody that you mentioned wants to see those improvements as well. When do you think you can share something with us? Is there anything I can do to help?

Br.,
Felipe
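[Editor's note: the transformation cache debated above can be illustrated with a small sketch. This is not ModSecurity's actual API; the function and transformation names are hypothetical stand-ins for chains like t:urlDecodeUni,t:cmdLine,t:lowercase, memoized on the (value, chain) pair. Whether such a cache helps in practice depends on hit rates and key-computation cost, which is exactly what the referenced commit weighed.]

```python
from functools import lru_cache

# Hypothetical stand-ins for ModSecurity transformation functions.
TRANSFORMS = {
    "lowercase": str.lower,
    "trim": str.strip,
    "removeWhitespace": lambda s: "".join(s.split()),
}

@lru_cache(maxsize=4096)
def apply_chain(value: str, chain: tuple) -> str:
    """Apply a transformation chain, memoized on (value, chain).

    Rules sharing the same target variable and chain reuse the cached
    result instead of re-running the transformations per rule."""
    for name in chain:
        value = TRANSFORMS[name](value)
    return value

first = apply_chain("  /Bin/BASH  ", ("trim", "lowercase"))   # -> "/bin/bash"
second = apply_chain("  /Bin/BASH  ", ("trim", "lowercase"))  # cache hit
print(first, apply_chain.cache_info().hits)
```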
From: Christian F. <chr...@ne...> - 2018-04-06 11:07:10
Dear all, dear Felipe,

On Thu, Apr 05, 2018 at 06:28:06PM +0000, Felipe Zimmerle wrote:
> I cannot foresee how my priorities have to do with this thread :D :)

No, your past and your future priorities matter a great deal. Various people have presented numbers about the performance of ModSec3, and the data does not look good. I continued my measurements and arrived at the following results:

Apache, no ModSec2            :  5490 requests per second
Apache, ModSec2, 1 rule       :  4588  -15% (against naked webserver)
Apache, ModSec2, 10 rules     :  4329  -20%
Apache, ModSec2, CRS3         :  1123  -79%

NGINX, no ModSec3             : 10104 requests per second
NGINX, ModSec3.0.0, 1 rule    :  4619  -54% (against naked webserver)
NGINX, ModSec3.0.0, 10 rules  :  3380  -66%
NGINX, ModSec3.0.0, CRS3      :    36  -99%

NGINX, ModSec3.0.2, 1 rule    :  4168  -58%
NGINX, ModSec3.0.2, 10 rules  :  3251  -67%
NGINX, ModSec3.0.2, CRS3      :   255  -97%

Parameters: 100000 reqs: http://localhost/index.html?test=test
3 test runs; I took the mean of the three runs.
Test rule used (1 time or 10 times in succession):
SecRule REQUEST_URI "@unconditionalMatch" "id:1,phase:1,pass,nolog,noauditlog"

[For those not familiar with ModSec performance: the numbers already look quite bad on Apache, but that is because we are testing locally. Over the network, the numbers for ModSec2 are acceptable.]

The takeaway message of the statistics above is this: ModSec3 has a huge overhead, and every additional rule makes it worse. This confirms previous messages in this discussion, namely those of Robert. And all these confirmations contradict your words when you announced 3.0 at
https://www.trustwave.com/Resources/SpiderLabs-Blog/ModSecurity-Version-3-0-Announcement/
There you wrote: "This release represents a significant improvement to the ModSecurity WAF. It removes dependencies and improves performance."

I knew this was only wishful thinking when ModSec3 came out, but I did not want to badmouth your work, Felipe. So I waited for 3.0.1 and hoped for performance improvements. The improvements are significant (which I immediately reported). Unfortunately, the performance is still miles away from what I would expect from ModSec3 on NGINX.

So on April 3, I asked about your view on future performance improvements and a possible timeline. I think that is a reasonable thing to ask. You did not respond, though, and when Robert pointed out where he sees room for substantial improvements, you invited him to refactor central aspects of your code by himself.

We agree that it is not your job to do all the work. But please, at least come up with a plan for how we can all solve this together. That is the least we expect from a leader.

I interpret your communication here as a denial of any substantial problem. You are the lead developer. If you do not acknowledge the problem and if you do not present a plan to the community for how this problem will be fixed, then you are hurting ModSecurity. And that is why your and Trustwave's priorities matter a great deal.

Best regards,

Christian Folini

--
Author of the ModSecurity Handbook, 2ed
Author of a series of ModSecurity tutorials published at https://netnea.com
Renowned teacher and frequent speaker on ModSecurity topics
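[Editor's note: the overhead percentages in Christian's table are simply 1 − (RPS with ModSecurity / RPS of the naked webserver). A quick check, using the figures quoted in the message above; the occasional one-point difference against the table comes from rounding of the per-run means.]

```python
# Recompute the overhead column from the measured requests-per-second
# figures: baseline is the naked webserver for each stack.
def overhead_pct(rps: float, baseline: float) -> float:
    """Throughput lost to the WAF, as a percentage of the baseline."""
    return (1 - rps / baseline) * 100

apache_baseline, nginx_baseline = 5490, 10104

for label, rps, base in [
    ("Apache + ModSec2, CRS3   ", 1123, apache_baseline),
    ("NGINX + ModSec3.0.0, CRS3", 36, nginx_baseline),
    ("NGINX + ModSec3.0.2, CRS3", 255, nginx_baseline),
]:
    print(f"{label}: -{overhead_pct(rps, base):.1f}%")
```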
From: Robert P. <rpa...@fe...> - 2018-04-06 17:48:01
Hi,

On Fri, Apr 6, 2018 at 7:05 AM, Felipe Zimmerle <fe...@zi...> wrote:
> Hi,
> [ ... ]
>
> I would suggest you work on a real use case, using a real environment. As
> you said, testing on the loopback is not a good thing.

Felipe, with all respect, I think you should go into politics :D This is a disingenuous non-answer. Are you saying that you'd expect to see _better_ performance in a more complex environment? That's clearly not the goal here. We're not trying to simulate a realistic production workload. We're profiling the performance specifically of libmodsecurity. Removing variables induced by network connections, additional applications, etc. provides _more_ reliable results when examining libmodsecurity's performance and behavior. And Andrei's own work and results align very closely with ours. Are you saying his data is unreliable as well? What variables do you suggest we adjust to better highlight libmodsecurity's performance? From what I can tell, lightweight benchmarks have clearly shown a behavior change based on the libmodsecurity configuration, and flame graphs have highlighted hot code paths that need optimization/refactoring. I'm not sure what more you'd like to see.

I have taken the liberty of opening a few tracking issues on GitHub, since discussion here is going nowhere:

https://github.com/SpiderLabs/ModSecurity/issues/1731
https://github.com/SpiderLabs/ModSecurity/issues/1732

I want to highlight that I don't think Christian or I are trying to sandbag anyone. But this discussion has been rather frustrating; from our perspective, we've provided real numbers and done benchmarking/profiling with modern tooling, and that data has aligned with what Andrei (who works for Nginx) has shown as well. And apart from vague answers like "performance is a very important subject which will always be discussed", there's been no response even acknowledging that our results are meaningful, or that our expectations about performance and latency are valid.

I understand that Trustwave has its own priorities (Felipe, blink twice if they won't let you make performance improvements ;) ), but this really feels like a show-stopper for deploying at any meaningful scale. At this point I really don't know how to proceed. If I'm completely off-base, then please let me know.
From: Chaim S. <ch...@ch...> - 2018-04-06 18:06:44
Guys, please remember to be civil. I know your intention is to get better performance numbers and improvements, but remember that there are a lot of factors going into the development of v3. Try to look at the feedback you're giving from Felipe's perspective; it does appear to be slightly aggressive in nature. Just remember that our goal as a whole community is to produce awesome work.

On Fri, Apr 6, 2018, 12:51 PM Robert Paprocki <rpa...@fe...> wrote:
> [ ... ]
From: Christian F. <chr...@ne...> - 2018-04-04 04:48:55
Hey Robert,

On Tue, Apr 03, 2018 at 05:50:07PM -0700, Robert Paprocki wrote:
> Can you share the specifics of your evaluation? Performance in modsec + crs
> will vary greatly depending on the request payload. Soon I would like to do
> some before and after trace profiling of these releases to better illustrate
> how libmodsec performs in various conditions.

I set up a minimal self-compiled NGINX with a basic ModSecurity and CRS, as documented at https://www.netnea.com/cms/nginx-modsecurity-tutorials/ . (These new tutorials are in a draft state; the quality is not yet there. Use with caution.)

Testrun 1:
----------

/apache/bin/ab -n 1000 -c 1 "http://localhost/index.html?test=/etc/passwd"

3.0.0
Concurrency Level:      1
Time taken for tests:   26.172 seconds
Complete requests:      1000
Failed requests:        0
Non-2xx responses:      1000
Total transferred:      320000 bytes
HTML transferred:       162000 bytes
Requests per second:    38.21 [#/sec] (mean)
Time per request:       26.172 [ms] (mean)
Time per request:       26.172 [ms] (mean, across all concurrent reqs)
Transfer rate:          11.94 [Kbytes/sec] received

3.0.2
Concurrency Level:      1
Time taken for tests:   4.585 seconds
Complete requests:      1000
Failed requests:        0
Non-2xx responses:      1000
Total transferred:      320000 bytes
HTML transferred:       162000 bytes
Requests per second:    218.12 [#/sec] (mean)
Time per request:       4.585 [ms] (mean)
Time per request:       4.585 [ms] (mean, across all concurrent reqs)
Transfer rate:          68.16 [Kbytes/sec] received

Testrun 2:
----------

/apache/bin/ab -n 1000 -c 1 "http://localhost/index.html?test=innocent"

3.0.0
Concurrency Level:      1
Time taken for tests:   26.168 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      853000 bytes
HTML transferred:       612000 bytes
Requests per second:    38.21 [#/sec] (mean)
Time per request:       26.168 [ms] (mean)
Time per request:       26.168 [ms] (mean, across all concurrent reqs)
Transfer rate:          31.83 [Kbytes/sec] received

3.0.2
Concurrency Level:      1
Time taken for tests:   3.996 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      853000 bytes
HTML transferred:       612000 bytes
Requests per second:    250.25 [#/sec] (mean)
Time per request:       3.996 [ms] (mean)
Time per request:       3.996 [ms] (mean, across all concurrent reqs)
Transfer rate:          208.46 [Kbytes/sec] received

Felipe tagged a 3.0.2 yesterday and made it available at https://github.com/SpiderLabs/ModSecurity/releases . I took that one for my tests. I reckon the performance is the same as with the 3.0.1 that was announced.

This perf test is obviously very superficial. One thing to note is that even testrun 2 would write to the error log (to gather statistical data).

But whatever the specifics, I think this big performance boost will show in any setup, even if the factor might not be that high.

Having real perf tests done regularly would be very welcome, Robert.

Best,

Christian

--
I don't believe that we have come to the end of the democratic experiment.
-- Bruce Schneier
From: Robert P. <rpa...@fe...> - 2018-04-04 05:12:14
Whups, sorry for the x-post.
Christian, thanks for the numbers.
I made a very, very quick test with libmodsec 3.0.2
(commit 8d0f51beda5c031e38741c27f29b67f0266352bb) and Nginx 1.13.9, built
against Modsecurity-nginx/master (commit 995f631767c24de8fabf828b8f44d27b316d1395).
Testing with wrk against a vanilla Nginx config running a single worker:
poprocks@mini-vm:~/nginx$ wrk -c 5 -d 5 -t 5 'http://localhost:8080/index.html?exec=/bin/bash'
Running 5s test @ http://localhost:8080/index.html?exec=/bin/bash
5 threads and 5 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 291.81us 665.96us 19.62ms 99.59%
Req/Sec 3.89k 335.52 6.64k 92.89%
97899 requests in 5.10s, 79.35MB read
Requests/sec: 19196.82
Transfer/sec: 15.56MB
Adding the following configs in the server block:
modsecurity_rules_file /home/poprocks/src/ModSecurity/modsecurity.conf;
modsecurity_rules_file /home/poprocks/src/owasp-modsecurity-crs/crs-setup.conf;
Where these are vanilla configs with the following notables:
SecAuditEngine Off
Include /home/poprocks/src/owasp-modsecurity-crs/rules/*.conf
[rule 910100 commented out b/c lack of GeoIP support]
RPS is... substantially different:
poprocks@mini-vm:~/nginx$ wrk -c 5 -d 5 -t 5 'http://localhost:8080/index.html?exec=/bin/bash'
Running 5s test @ http://localhost:8080/index.html?exec=/bin/bash
5 threads and 5 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 14.92ms 1.39ms 36.23ms 89.95%
Req/Sec 67.02 5.03 90.00 70.80%
1680 requests in 5.03s, 531.49KB read
Non-2xx or 3xx responses: 1680
Requests/sec: 334.25
Transfer/sec: 105.74KB
Of course, this short-circuits request processing. A request pattern that
is not dropped by the CRS sees worse performance:
poprocks@mini-vm:~/nginx$ wrk -c 5 -d 5 -t 5 'http://localhost:8080/index.html?test=foo'
Running 5s test @ http://localhost:8080/index.html?test=foo
5 threads and 5 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.41ms 2.52ms 51.67ms 81.28%
Req/Sec 46.60 5.73 70.00 60.80%
1169 requests in 5.02s, 0.95MB read
Requests/sec: 232.65
Transfer/sec: 193.11KB
And audit log entries do indeed look correct, when enabled:
---E0qxI3Fx---A--
[03/Apr/2018:21:23:46 -0700] 152281582640.577439 127.0.0.1 48956 127.0.0.1
8080
---E0qxI3Fx---B--
GET /index.html?exec=/bin/bash HTTP/1.1
Host: localhost:8080
---E0qxI3Fx---D--
---E0qxI3Fx---F--
HTTP/1.1 403
Server: nginx/1.13.9
Date: Wed, 04 Apr 2018 04:23:46 GMT
Content-Length: 169
Content-Type: text/html
Connection: keep-alive
---E0qxI3Fx---H--
ModSecurity: Access denied with code 403 (phase 2). Matched "Operator
`PmFromFile' with parameter `unix-shell.data' against variable `ARGS:exec'
(Value: `/bin/bash' ) [file "/home/poprocks/src/owasp-modsecurity-crs/rules/REQUEST-932-APPLICATION-ATTACK-RCE.conf"] [line "404"]
[id "932160"] [rev "1"] [msg "Remote Command Execution: Unix Shell Code
Found"] [data "Matched Data: bin/bash found within ARGS:exec: /bin/bash"]
[severity "2"] [ver "OWASP_CRS/3.0.0"] [maturity "1"] [accuracy "8"] [tag
"application-multi"] [tag "language-shell"] [tag "platform-unix"] [tag
"attack-rce"] [tag "OWASP_CRS/WEB_ATTACK/COMMAND_INJECTION"] [tag
"WASCTC/WASC-31"] [tag "OWASP_TOP_10/A1"] [tag "PCI/6.5.2"] [hostname
"127.0.0.1"] [uri "/index.html"] [unique_id "152281582640.577439"] [ref
"o1,8v21,9t:urlDecodeUni,t:cmdLine,t:normalizePath,t:lowercase"]
---E0qxI3Fx---I--
---E0qxI3Fx---J--
---E0qxI3Fx---Z--
So I suspected something was wrong with my setup, but after looking at
Christian's numbers I'm perhaps thinking otherwise. I also made a quick
test with libmodsecurity 3.0.0, and saw the same order of magnitude drop
that Christian reported.
I also tested removing the CRS entirely and mocking against a single
SecRule:
poprocks@mini-vm:~$ tail -1 /home/poprocks/src/ModSecurity/modsecurity.conf
SecRule ARGS:test "@streq foo" "id:12345,phase:1,deny"
poprocks@mini-vm:~/nginx$ wrk -c 5 -d 5 -t 5 'http://localhost:8080/index.html?test=fo'
Running 5s test @ http://localhost:8080/index.html?test=fo
5 threads and 5 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.18ms 286.19us 9.79ms 89.05%
Req/Sec 0.86k 28.30 0.93k 70.40%
21322 requests in 5.01s, 17.28MB read
Requests/sec: 4258.45
Transfer/sec: 3.45MB
poprocks@mini-vm:~/nginx$ wrk -c 5 -d 5 -t 5 'http://localhost:8080/index.html?test=foo'
Running 5s test @ http://localhost:8080/index.html?test=foo
5 threads and 5 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.03ms 192.30us 2.88ms 81.58%
Req/Sec 0.97k 54.99 1.08k 67.20%
24130 requests in 5.00s, 7.45MB read
Non-2xx or 3xx responses: 24130
Requests/sec: 4822.16
Transfer/sec: 1.49MB
So the CRS clearly has a substantial impact on RPS, but the mere presence
of ModSecurity and a single rule (with an inexpensive operator and no
transformations) still results in a drop in performance of an order of
magnitude, and *another* order of magnitude with the full CRS in play. A
more comprehensive set of tests with a meaningful, real-world corpus of
data is definitely on my radar at this point. Perhaps good material for a
blog post ;)
BTW, I made a userspace flamegraph of the running Nginx worker process
while under wrk load with ModSecurity enabled, running the CRS:
https://s3.amazonaws.com/p0pr0ck5-data/ngx-modsec.svg
On Tue, Apr 3, 2018 at 9:48 PM, Christian Folini <christian.folini@
netnea.com> wrote:
> Hey Robert,
>
> On Tue, Apr 03, 2018 at 05:50:07PM -0700, Robert Paprocki wrote:
> > Can you share the specifics of your evaluation? Performance in modsec +
> crs
> > will vary greatly depending on the request payload. Soon I would like to
> do
> > some before and after trace profiling of these releases to better
> illustrate
> > how libmodsec performs in various conditions.
>
> I did a minimal self-compiled NGINX with a basic ModSecurity and CRS
> as documented on https://www.netnea.com/cms/nginx-modsecurity-tutorials/ .
> (These new tutorials are in a draft state, the quality is not yet there.
> Use
> with caution.)
>
> Testrun 1:
> ----------
>
> /apache/bin/ab -n 1000 -c 1 "http://localhost/index.html?test=/etc/passwd"
>
> 3.0.0
> Concurrency Level: 1
> Time taken for tests: 26.172 seconds
> Complete requests: 1000
> Failed requests: 0
> Non-2xx responses: 1000
> Total transferred: 320000 bytes
> HTML transferred: 162000 bytes
> Requests per second: 38.21 [#/sec] (mean)
> Time per request: 26.172 [ms] (mean)
> Time per request: 26.172 [ms] (mean, across all concurrent
> reqs)
> Transfer rate: 11.94 [Kbytes/sec] received
>
> 3.0.2
> Concurrency Level: 1
> Time taken for tests: 4.585 seconds
> Complete requests: 1000
> Failed requests: 0
> Non-2xx responses: 1000
> Total transferred: 320000 bytes
> HTML transferred: 162000 bytes
> Requests per second: 218.12 [#/sec] (mean)
> Time per request: 4.585 [ms] (mean)
> Time per request: 4.585 [ms] (mean, across all concurrent requests)
> Transfer rate: 68.16 [Kbytes/sec] received
>
>
> Testrun 2:
> ----------
>
> /apache/bin/ab -n 1000 -c 1 "http://localhost/index.html?test=innocent"
>
> 3.0.0
> Concurrency Level: 1
> Time taken for tests: 26.168 seconds
> Complete requests: 1000
> Failed requests: 0
> Total transferred: 853000 bytes
> HTML transferred: 612000 bytes
> Requests per second: 38.21 [#/sec] (mean)
> Time per request: 26.168 [ms] (mean)
> Time per request: 26.168 [ms] (mean, across all concurrent requests)
> Transfer rate: 31.83 [Kbytes/sec] received
>
>
> 3.0.2
> Concurrency Level: 1
> Time taken for tests: 3.996 seconds
> Complete requests: 1000
> Failed requests: 0
> Total transferred: 853000 bytes
> HTML transferred: 612000 bytes
> Requests per second: 250.25 [#/sec] (mean)
> Time per request: 3.996 [ms] (mean)
> Time per request: 3.996 [ms] (mean, across all concurrent requests)
> Transfer rate: 208.46 [Kbytes/sec] received
>
> Felipe tagged a 3.0.2 yesterday and made it available at
> https://github.com/SpiderLabs/ModSecurity/releases
> I took that one for my tests. I reckon the performance is the same as with
> the 3.0.1 that has been announced.
>
> This perf test is obviously very superficial. A thing to note is that even
> testrun 2 would write the error-log (to gather statistical data).
>
> But whatever the specifics, I think this big performance boost will show in
> any setup even if the factor might not be that high.
>
> Having real perf tests done regularly would be very welcome, Robert.
>
> Best,
>
> Christian
>
>
> --
> I don't believe that we have come to the end of the democratic experiment.
> -- Bruce Schneier
>
> ------------------------------------------------------------------------------
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> _______________________________________________
> mod-security-users mailing list
> mod...@li...
> https://lists.sourceforge.net/lists/listinfo/mod-security-users
> Commercial ModSecurity Rules and Support from Trustwave's SpiderLabs:
> http://www.modsecurity.org/projects/commercial/rules/
> http://www.modsecurity.org/projects/commercial/support/
>
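As a quick editorial aside: the speedup implied by the two test runs above can be computed directly from the ab timings (a trivial sketch, not part of the original mail):

```shell
# 3.0.0 vs. 3.0.2 total test time, from testrun 1 and testrun 2 above.
awk 'BEGIN {
  printf "testrun1 speedup: %.1fx\n", 26.172 / 4.585;  # /etc/passwd payload
  printf "testrun2 speedup: %.1fx\n", 26.168 / 3.996;  # innocent payload
}'
```

Both runs show roughly a 5.7-6.5x improvement from 3.0.0 to 3.0.2.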
|
|
From: Andrei B. <de...@ng...> - 2018-04-04 08:45:29
|
Hi folks,

> On 04 Apr 2018, at 07:48, Christian Folini <chr...@ne...> wrote:
>
> Hey Robert,
>
> On Tue, Apr 03, 2018 at 05:50:07PM -0700, Robert Paprocki wrote:
>> Can you share the specifics of your evaluation? Performance in modsec + crs
>> will vary greatly depending on the request payload. Soon I would like to do
>> some before and after trace profiling of these releases to better illustrate
>> how libmodsec performs in various conditions.
>
> I did a minimal self-compiled NGINX with a basic ModSecurity and CRS
> as documented on https://www.netnea.com/cms/nginx-modsecurity-tutorials/ .
> (These new tutorials are in a draft state, the quality is not yet there.
> Use with caution.)

[..]

> Felipe tagged a 3.0.2 yesterday and made it available at
> https://github.com/SpiderLabs/ModSecurity/releases
> I took that one for my tests. I reckon the performance is the same as with
> the 3.0.1 that has been announced.
>
> This perf test is obviously very superficial. A thing to note is that even
> testrun 2 would write the error-log (to gather statistical data).
>
> But whatever the specifics, I think this big performance boost will show in
> any setup even if the factor might not be that high.
>
> Having real perf tests done regularly would be very welcome, Robert.

JFYI, I created vagrant-based tools to run performance tests with nginx and
libmodsecurity some time ago:

https://github.com/defanator/modsecurity-performance

It creates a pre-configured environment suitable for a wide range of
investigations, related both to performance and functionality. I tried to
include meaningful configurations, e.g.:

https://github.com/defanator/modsecurity-performance#what-is-being-tested

I think that environment could be [relatively easily] extended to support
Apache + ModSec 2.x, in addition to nginx + ModSec 3.x, in order to simplify
"direct" comparison and provide reproducible, statistically significant
results. (PRs are welcome, of course.)

--
Andrei Belov
Product Engineer
NGINX
|
|
From: Felipe Z. <fe...@zi...> - 2018-04-06 14:06:18
|
Hi,

On Fri, Apr 6, 2018 at 8:07 AM Christian Folini <chr...@ne...> wrote:
> Dear all, dear Felipe,
>
> On Thu, Apr 05, 2018 at 06:28:06PM +0000, Felipe Zimmerle wrote:
> >
> > I cannot foresee how my priorities have to do with this thread :D :)
>
> No, your past and your future priorities matter a big deal.

I said that the thread does not reflect my priorities, not otherwise.

> Various people have presented numbers about the performance of ModSec3
> and the data does not look good.
>
> I continued my measurements and arrived at the following results:
>
> Apache, no ModSec2           : 5490 requests per second
>
> Apache, ModSec2, 1 rule      : 4588  - 15% (against naked webserver)
> Apache, ModSec2, 10 rules    : 4329  - 20%
> Apache, ModSec2, CRS3        : 1123  - 79%
>
> NGINX, no ModSec3            : 10104 requests per second
>
> NGINX, ModSec3.0.0, 1 rule   : 4619  - 54% (against naked webserver)
> NGINX, ModSec3.0.0, 10 rules : 3380  - 66%
> NGINX, ModSec3.0.0, CRS3     : 36    - 99%
>
> NGINX, ModSec3.0.2, 1 rule   : 4168  - 58%
> NGINX, ModSec3.0.2, 10 rules : 3251  - 67%
> NGINX, ModSec3.0.2, CRS3     : 255   - 97%
>
> Parameters:
> 100000 reqs: http://localhost/index.html?test=test
> 3 test runs; I took the mean of the three runs.
> Test rule used (1 time or 10 times in succession):
> SecRule REQUEST_URI "@unconditionalMatch" "id:1,phase:1,pass,nolog,noauditlog"
>
> [For those not familiar with ModSec performance: the numbers look already
> quite bad on Apache, but that's because we are testing locally. Over the
> net, the numbers for ModSec2 are acceptable.]
>
> The takeaway message of the statistics above is this:
> ModSec3 has a huge overhead and every additional rule makes it worse.
>
> This confirms previous messages in this discussion, namely those of Robert.
> And all these confirmations contradict your words when you announced 3.0 at
> https://www.trustwave.com/Resources/SpiderLabs-Blog/ModSecurity-Version-3-0-Announcement/
> There you wrote: "This release represents a significant improvement to the
> ModSecurity WAF. It removes dependencies and improves performance."
>
> I knew this was only wishful thinking when ModSec3 came out, but I did not
> want to badmouth your work, Felipe. So I waited for 3.0.1 and hoped for
> performance improvements. The improvements are significant (which I
> immediately reported). Unfortunately the performance is still miles
> away from what I would expect from ModSec3 on NGINX.
>
> So on April 3, I asked about your view on future performance improvements
> and a possible timeline. I think that is a reasonable thing to ask.
> You did not respond, though, and when Robert pointed out where he
> sees room for substantial improvements you invited him to refactor central
> aspects of your code by himself.
>
> We agree that it is not your job to do all the work. But please, at least
> come up with a plan for how we can all solve this together. It's the least
> thing that we expect from a leader.
>
> I interpret your communication here as a denial of any substantial
> problem. You are the lead developer. If you do not acknowledge
> the problem and if you do not present a plan to the community how
> this problem will be fixed, then you are hurting ModSecurity.

I would suggest you work on a real use case, using a real environment. As
you said, testing on the loopback is not a good thing.

> And that's why your and Trustwave's priorities matter a big deal.
>
> Best regards,
>
> Christian Folini
>
> --
> Author of the ModSecurity Handbook, 2ed
> Author of a series of ModSecurity tutorials published at
> https://netnea.com
> Renowned teacher and frequent speaker on ModSecurity topics

Br.,
Felipe.
|
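An editorial aside: the localhost numbers quoted above can be reduced to a single naked-vs-CRS3 throughput ratio per stack, which makes the gap between ModSec2 and ModSec3 easier to see (a sketch derived from Christian's figures, not part of the original mail):

```shell
# Requests-per-second ratio: naked webserver vs. full CRS3.
awk 'BEGIN {
  printf "Apache + ModSec2 + CRS3:     %.1fx slower\n", 5490 / 1123;
  printf "NGINX + ModSec3.0.2 + CRS3:  %.1fx slower\n", 10104 / 255;
  printf "NGINX + ModSec3.0.0 + CRS3:  %.1fx slower\n", 10104 / 36;
}'
```

So the CRS3 penalty is roughly 5x on Apache/ModSec2 versus roughly 40x on NGINX/ModSec3.0.2 (and nearly 300x on 3.0.0), in this loopback setup.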
|
From: Christian F. <chr...@ne...> - 2018-04-06 21:15:03
|
Dear Felipe,

On Fri, Apr 06, 2018 at 02:05:56PM +0000, Felipe Zimmerle wrote:
> I would suggest you work on a real use case, using a real environment.
> As you said, testing on the loopback is not a good thing.

Sure. Here you have data from a light production service, serving mostly
static files. I picked this one to be nice to ModSecurity.

Apache, naked                : 20.8 rps

Apache, ModSec2, 1 rule      : 21.1 rps
Apache, ModSec2, 10 rules    : 19.6 rps
Apache, ModSec2, CRS3        : 19.0 rps

NGINX, naked                 : 21.8 rps

NGINX, ModSec3.0.0, 1 rule   : 20.6 rps
NGINX, ModSec3.0.0, 10 rules : 19.2 rps
NGINX, ModSec3.0.0, CRS3     : 15.2 rps

NGINX, ModSec3.0.2, 1 rule   : 19.8 rps
NGINX, ModSec3.0.2, 10 rules : 19.4 rps
NGINX, ModSec3.0.2, CRS3     : 17.9 rps

The network latency diluted the numbers, and suddenly a naked Apache is
faster than a naked NGINX. But the performance problem of ModSec3 is still
visible, as is the performance improvement from 3.0.0 to 3.0.2.

Best regards,

Christian

--
https://www.feistyduck.com/training/modsecurity-training-course
https://www.feistyduck.com/books/modsecurity-handbook/
mailto:chr...@ne...
twitter: @ChrFolini
|
|
From: Christian F. <chr...@ne...> - 2018-04-04 08:58:50
|
Hello Andrei,

On Wed, Apr 04, 2018 at 11:29:18AM +0300, Andrei Belov wrote:
> I think that environment could be [relatively easily] extended to support
> Apache + ModSec 2.x, in addition to nginx + ModSec 3.x, in order to simplify
> "direct" comparison and provide reproducible, statistically significant results.

Very cool. Thank you for sharing - and thanks for your contributions to
ModSecurity, namely 3.0.1.

The conceptual problem I see is that there's more than one variable here:
Apache/ModSec2 vs. NGINX/ModSec3. I'm an Apache person, but when I stripped
the two of ModSec and let the bare minimum installations serve static files,
NGINX blew me away.

So I kind of think that one would have to slow down NGINX to Apache's level
and then, in a second step, add ModSec again to be able to measure ModSec2
vs ModSec3.

What is your take on this?

Best,

Christian

--
The Universe is made of stories, not of atoms.
-- Muriel Rukeyser
|
|
From: Osama E. <oel...@gm...> - 2018-04-04 09:12:58
|
Some interesting ideas here. I think using a single tool (plus a specific
set of queries) for our benchmarks would be useful and more standardized.

wrk is currently one of the most promising benchmarking tools and supports
Lua plugins, so it is a lot more flexible than ab. I noticed that both
Robert and Andrei used it (Christian: time to ditch ab :)). I also used it
recently (with the script below) to benchmark an API endpoint (ModSecurity
wasn't part of the solution, so I don't have any ModSecurity benchmarks
using wrk).

You might find the multiplepaths.lua script / plugin useful.
multiplepaths.lua (https://github.com/timotta/wrk-scripts) is a Lua script
that allows you to provide wrk with a file of different queries you want to
perform in your benchmark, so you can cover different areas such as OS
command injection, SQL injection, XSS, etc.

Since it is written in Lua, we can probably extend it to provide additional
customized payloads so they aren't all in the GET request, and to send
customized cookies, headers, etc.

Another option would be to write something up with Python + asyncio +
requests, although that would need a little more effort.

--
Osama Elnaggar

On April 4, 2018 at 6:46:57 PM, Andrei Belov (de...@ng...) wrote:
> Hi folks,
>
> [..]
>
> JFYI, I have created vagrant-based tools to run performance tests with
> nginx and libmodsecurity some time ago:
>
> https://github.com/defanator/modsecurity-performance
>
> It creates pre-configured environment suitable for wide range of
> investigations, related both to performance and functionality. I tried to
> include meaningful configurations, e.g.:
>
> https://github.com/defanator/modsecurity-performance#what-is-being-tested
>
> I think that environment could be [relatively easily] extended to support
> Apache + ModSec 2.x, in addition to nginx + ModSec 3.x, in order to simplify
> "direct" comparison and provide reproducible, statistically significant results.
>
> (PRs are welcome of course.)
>
> --
> Andrei Belov
> Product Engineer
> NGINX
|
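As a concrete illustration of the approach Osama describes, here is a hedged sketch of driving wrk with the multiplepaths.lua script and a payload file. The file name and argument convention are assumptions; check the wrk-scripts README before relying on this:

```shell
# Hypothetical payload file covering a few attack classes
# (path traversal, XSS, SQL injection), URL-encoded for the request line.
cat > paths.txt <<'EOF'
/index.html?test=/etc/passwd
/index.html?test=%3Cscript%3Ealert(1)%3C%2Fscript%3E
/index.html?test=1%20OR%201%3D1
EOF

# Drive the WAF with the mixed corpus: 2 threads, 10 connections, 30 seconds.
wrk -t2 -c10 -d30s -s multiplepaths.lua http://localhost paths.txt
```

A corpus like this exercises different CRS rule chains per request, which is closer to real traffic than hammering a single URL as ab does.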
|
From: Christian F. <chr...@ne...> - 2018-04-04 09:22:11
|
Osama,

You mean ab is like a tool from the stone age? How dare you! :)

I'll investigate. Appreciated. A word of caution, though: as long as we are
talking of performance boosts of several hundred percent between releases,
I doubt that the benchmark tool is of much concern.

Wish I had known / used wrk when I wrote the performance chapter for the
ModSecurity Handbook, though. I used a variety of tools there and they all
had their issues.

Ahoj,

Christian

On Wed, Apr 04, 2018 at 04:12:48AM -0500, Osama Elnaggar wrote:
> Some interesting ideas here. I think using a single tool (+ a specific set
> of queries) for our benchmarks would be useful + be more standardized.
>
> wrk is currently one of the most promising benchmarking tools and supports
> Lua plugins so it is a lot more flexible than ab. I noticed that both
> Robert and Andrei used it (Christian: time to ditch ab :)). I also
> used it recently (with the below script) to benchmark an API endpoint
> (ModSecurity wasn't part of the solution so I don't have any ModSecurity
> benchmarks using wrk).
>
> You might find the multiplepaths.lua script / plugin useful.
> multiplepaths.lua (https://github.com/timotta/wrk-scripts) is a Lua script
> that allows you to provide wrk with a file with different queries you want
> to perform in your benchmark so you can cover different areas such as OS
> command injection, SQL injection, XSS, etc.
>
> Since it is written in Lua, we can probably extend it to provide additional
> customized payloads so they aren't all in the GET request + send customized
> cookies, headers, etc.
>
> Another option would be to write something up with Python + asyncio +
> requests although that will need a little more effort
>
> [..]

--
https://www.feistyduck.com/training/modsecurity-training-course
https://www.feistyduck.com/books/modsecurity-handbook/
mailto:chr...@ne...
twitter: @ChrFolini
|
|
From: Robert P. <rpa...@fe...> - 2018-04-04 09:44:31
|
Hi,

> On Apr 4, 2018, at 02:12, Osama Elnaggar <oel...@gm...> wrote:

I think wrk is the best tool for now. ModSecurity is (and has been) a
CPU-bound workload. Benchmarks and profiling should focus on things like
hot-code-path analysis, cache coherency, etc. Unit/integration tests might
make better use of more complex HTTP test harnesses, but I think in this
case we can pretty much just throw a bunch of traffic at it and burn the
processor.

Frankly, even with the 3.0.1 improvements, the performance deltas in nginx
are discouraging to see. Tomorrow I will be doing some profiling of the
sample C application to see how it compares, but at the moment I can't
imagine that a provider at any meaningful scale would be okay deploying a
WAF that can do at most a few thousand RPS with only a minimal ruleset.

@Andrei, do you have any comments about the specific numbers (RPS and
latency) that have been noted here, and that your repo seems to corroborate?
|
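Robert's claim that the workload is CPU-bound is easy to sanity-check while a benchmark is running. A sketch, where the worker-PID lookup is an assumption about the test box:

```shell
# Count cycles, instructions, and context switches on the worker for 30s
# while wrk drives traffic. Near-100% core utilization with few context
# switches supports the CPU-bound hypothesis; heavy switching would not.
PID=$(pgrep -f 'nginx: worker' | head -1)
sudo perf stat -p "$PID" -- sleep 30
```

If the counters confirm CPU saturation, flamegraphs and hot-path analysis (as suggested above) are the right next step rather than tuning I/O or the event loop.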
|
From: Andrei B. <de...@ng...> - 2018-04-04 11:14:23
|
> On 04 Apr 2018, at 11:58, Christian Folini <chr...@ne...> wrote:
>
> Hello Andrei,
>
> On Wed, Apr 04, 2018 at 11:29:18AM +0300, Andrei Belov wrote:
>> I think that environment could be [relatively easily] extended to support
>> Apache + ModSec 2.x, in addition to nginx + ModSec 3.x, in order to simplify
>> "direct" comparison and provide reproducible, statistically significant results.
>
> Very cool. Thank you for sharing - and thanks for your contributions to
> ModSecurity, namely 3.0.1.
>
> The conceptual problem I see is that there's more than one variable here.
> Apache/ModSec2 vs. NGINX/ModSec3. I'm an Apache person, but when I stripped
> the two of Modsec and let the bare minimum installations serve static
> files, NGINX blew me away.
>
> So I kind of think that one would have to slow down NGINX to reach an Apache
> level and then in a 2nd step add ModSec again to be able to measure ModSec2 vs
> ModSec3.
>
> What is your take on this?
Well, ideally it would be awesome to have the following combos in [perf] tests:
a) Apache + ModSec 2.x + CRS 2.x
b) Apache + ModSec 3.x + CRS 3.x
c) nginx + ModSec 2.x + CRS 2.x
d) nginx + ModSec 3.x + CRS 3.x
(obviously, CRS component could be optional when one is going to measure
"generic overhead")
However, I have limited knowledge on the following:
> - whether ModSec 3.x has ever been targeted to support CRS < 3,
> - whether there is a working Apache connector for ModSec 3.x.
Also I'm not sure whether ModSec 2.x has its own benchmarks (not related to any connector).
If it does, then perhaps it would be good to compare "generic" ModSec 2.x
vs "generic" ModSec 3.x as well.
BTW, for those who are familiar with tools like gdb / perf / systemtap etc,
there's the "debugenv" state in vagrant env:
https://github.com/defanator/modsecurity-performance/blob/master/states/debugenv.sls
It could be useful for some deeper investigations.
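For the kind of deeper investigation Andrei mentions, a sketch of attaching standard tools to the worker inside the debugenv VM (these commands are generic Linux tooling, not taken from the repo):

```shell
# Live hot-spot view of the worker under load.
sudo perf top -p "$(pgrep -f 'nginx: worker' | head -1)"

# Or grab a stack snapshot with gdb to eyeball where time goes;
# repeating this a few times is a crude but effective poor-man's profiler.
sudo gdb -p "$(pgrep -f 'nginx: worker' | head -1)" -batch -ex 'bt' -ex 'detach'
```

Debug symbols for nginx, the connector, and libmodsecurity are needed for readable backtraces, which is presumably what the debugenv state provisions.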
|
|
From: Felipe Z. <fe...@zi...> - 2018-04-06 22:02:56
|
Hi,

On Fri, Apr 6, 2018 at 6:15 PM Christian Folini <chr...@ne...> wrote:
> Dear Felipe,
>
> On Fri, Apr 06, 2018 at 02:05:56PM +0000, Felipe Zimmerle wrote:
> > I would suggest you work on a real use case, using a real environment.
> > As you said, testing on the loopback is not a good thing.
>
> Sure. Here you have data from a light production service, serving mostly
> static files. I picked this one to be nice to ModSecurity.
>
> Apache, naked                : 20.8 rps
>
> Apache, ModSec2, 1 rule      : 21.1 rps
> Apache, ModSec2, 10 rules    : 19.6 rps
> Apache, ModSec2, CRS3        : 19.0 rps
>
> NGINX, naked                 : 21.8 rps
>
> NGINX, ModSec3.0.0, 1 rule   : 20.6 rps
> NGINX, ModSec3.0.0, 10 rules : 19.2 rps
> NGINX, ModSec3.0.0, CRS3     : 15.2 rps
>
> NGINX, ModSec3.0.2, 1 rule   : 19.8 rps
> NGINX, ModSec3.0.2, 10 rules : 19.4 rps
> NGINX, ModSec3.0.2, CRS3     : 17.9 rps

Thank you, Folini. I think those are more "concrete" numbers to work with.

Let's follow up the discussion here:
https://github.com/SpiderLabs/ModSecurity/issues/1734

Br.,
Felipe.
|
|
From: Christian F. <chr...@ne...> - 2018-04-07 20:12:17
|
On Fri, Apr 06, 2018 at 09:36:37PM +0000, Felipe Zimmerle wrote:
> Let's follow up the discussion here:
> https://github.com/SpiderLabs/ModSecurity/issues/1734

Sure. Wherever we can collaborate to bring ModSec3 the necessary speed boost.

Christian

--
It's when they say 2 + 2 = 5 that I begin to argue.
-- Eric Pepke
|
|
From: Christian F. <chr...@ne...> - 2018-04-04 12:03:25
|
Hello Andrei, On Wed, Apr 04, 2018 at 02:14:12PM +0300, Andrei Belov wrote: > Well, ideally it would be awesome to have the following combos in [perf] tests: > > a) Apache + ModSec 2.x + CRS 2.x > b) Apache + ModSec 3.x + CRS 3.x > c) nginx + ModSec 2.x + CRS 2.x > d) nginx + ModSec 3.x + CRS 3.x > > (obviously, CRS component could be optional when one is going to measure > "generic overhead") I think CRS3 can serve as a general baseline to get a standard rule base. As a CRS project lead, I hope people abandon CRS2 and move to CRS3 not the least because the performance is better due to the smaller rule set in the default installation. The ModSecurity Handbook has the numbers on Apache / ModSec 2.9.x. Personally, I would not test CRS2 anymore. > However, I have limited knowledge on the following: > - is ModSec 3.x has been ever targeted to support CRS < 3, See above. > - is there a working Apache connector for ModSec 3.x. According to Felipe it is not ready for production. > Also I'm not sure whether ModSec 2.x has its own benchmarks (not related to any connector). > If it does, then perhaps it would be good to compare "generic" ModSec 2.x > vs "generic" ModSec 3.x as well. Yes, that would be cool. But from what I understand, ModSec 2.9.x is deeply integrated into the webserver. But I read from your proposal above that the real base to gauge ModSec 2.9.x vs 3.0 would be to test on NGINX. > BTW, for those who are familiar with tools like gdb / perf / systemtap etc, > there's the "debugenv" state in vagrant env: > > https://github.com/defanator/modsecurity-performance/blob/master/states/debugenv.sls Thanks. Ahoj, Christian -- Money is always to be found when men are to be sent to the frontiers to be destroyed: when the object is to preserve them, it is no longer so. -- Voltaire |
|
From: Robert P. <rpa...@fe...> - 2018-04-04 17:44:20
|
Hi, On Wed, Apr 4, 2018 at 5:03 AM, Christian Folini <chr...@ne...> wrote: > > > > However, I have limited knowledge on the following: > > - is ModSec 3.x has been ever targeted to support CRS < 3, > > See above. > It would be great to hear an official stance from the development team on this. @Felipe can you comment? > > Also I'm not sure whether ModSec 2.x has its own benchmarks (not related > to any connector). > > If it does, then perhaps it would be good to compare "generic" ModSec 2.x > > vs "generic" ModSec 3.x as well. > > Yes, that would be cool. But from what I understand, ModSec 2.9.x is deeply > integrated into the webserver. > Yeah, this is a bit of a pain to test. I modified one of the example programs that comes with the v3/master ModSecurity source code as follows: https://gist.github.com/p0pr0ck5/9b2c414641c9b03d527679d0c8cb7d86 Note that this doesn't add any headers or body data (request or response) to the transaction, and the included "basic_rules.conf" is unchanged from what's in the example repo (and ignore the memory leak in not cleaning up the transaction). So running this very light example:

$ ./test
Rules:
Phase: 0 (0 rules)
Phase: 1 (0 rules)
Phase: 2 (2 rules)
Rule ID: 200000--0x1e2b6c0
Rule ID: 200001--0x1e2bc90
Phase: 3 (4 rules)
Rule ID: 200002--0x1e2c720
Rule ID: 200003--0x1e2ea00
Rule ID: 200004--0x1e2f200
Rule ID: 200005--0x1e2fc10
Phase: 4 (0 rules)
Phase: 5 (0 rules)
Phase: 6 (0 rules)
Phase: 7 (0 rules)
Did 9907
Done!

We see around 10k transactions per second. Essentially all of our time is spent waiting on memory allocations: https://s3.amazonaws.com/p0pr0ck5-data/modsec-simple.svg Now consider the case where we include the full 3.0.0 CRS:

$ ./test
Rules:
Phase: 0 (32 rules)
Rule ID: 0--0x22ca500
Rule ID: 0--0x22d9480
Rule ID: 0--0x22f9530
Rule ID: 0--0x22f9f20
[...snip several hundred lines...]
Did 319
Done!

So this is about the same throughput we saw from Nginx + libmodsec integration.
A flamegraph also highlights hot paths, particularly in Rule::evaluate (also largely spent on allocation wait time at this point, since each evaluation instantiates many new objects): https://s3.amazonaws.com/p0pr0ck5-data/modsec-simple-crs.svg (I also performed the same tests on 3.0.0; flamegraphs are also in this s3 bucket but I will avoid commentary here for now as this thread has gotten long in the tooth as it is). A few initial takeaways:

- There is a clearly definable minimum overhead needed to execute libmodsecurity, based on its current architecture.
- I noted the same memory leak that defanator noted in #1729. I have yet to apply the patches noted.
- HTTP integrations will induce some overhead, which may or may not be substantial. With smart memory pooling and good design, though, this should be kept to a minimum.
- Leveraging the full CRS completely tanks throughput. It would take a while more to dig into specifics but I suspect that Rule::evaluate probably needs an overhaul.
- I ran one more test against a mock set of dumb rules (generated via https://gist.github.com/p0pr0ck5/c99bca54734af7546d910db8d7c97ab3). Saw about 1000 transactions processed per second, indicating (as should be assumed) that linear growth in the size of the included ruleset results in a similar performance reduction. Flamegraphs again point to Rule::evaluate, with Rule::getFinalVars taking up half those traces. I suspect there could be some smarter behaviors about avoiding object creation if it's unneeded on these hot paths.

Hopefully this is of use to some folks. I'd be interested in writing some automated scaffolding to execute this type of testing against every commit, to maintain a continuous audit trail of performance impacts; if this sounds interesting to the maintainers I'd be happy to chat further. |
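For the per-commit scaffolding idea, one missing piece is turning the benchmark tool's textual output into a number a tracking system can store and compare across commits. A minimal sketch, assuming the `Did N` output format quoted above (the function names here are hypothetical, not part of any existing tool):

```python
import re

def parse_transactions(output: str) -> int:
    """Extract the transaction count from benchmark output ending in 'Did N'."""
    match = re.search(r"^Did (\d+)$", output, re.MULTILINE)
    if match is None:
        raise ValueError("no 'Did N' line found in benchmark output")
    return int(match.group(1))

def transactions_per_second(output: str, elapsed_seconds: float) -> float:
    """Normalize a benchmark run to transactions/second for trend tracking."""
    return parse_transactions(output) / elapsed_seconds

# Example, using a fragment of the output quoted above:
sample = "Phase: 7 (0 rules)\nDid 9907\nDone!"
print(transactions_per_second(sample, 1.0))  # prints 9907.0
```

A wrapper that checks out each commit, rebuilds, runs the tool a fixed number of times, and stores the parsed figures would then give the continuous audit trail Robert describes.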
|
From: Andrei B. <de...@ng...> - 2018-04-04 18:09:11
|
> On 04 Apr 2018, at 20:44, Robert Paprocki <rpa...@fe...> wrote: > [..] > A few initial takeaways: > > - There is a clearly definable minimum overhead needed to execute libmodsecurity, based on its current architecture > - I noted the same memory leak that defanator noted in #1729. I have yet to apply the patches noted. Let me know if those work for you. > - HTTP integrations will induce some overhead, which may or may not be substantial. With smart memory pooling and good design, though, this should be kept to a minimum. > - Leveraging the full CRS completely tanks throughput. It would take a while more to dig into specifics but I suspect that Rule::evaluate probably needs an overhaul. > - I ran one more test again a mock set of dumb rules (generated via https://gist.github.com/p0pr0ck5/c99bca54734af7546d910db8d7c97ab3 <https://gist.github.com/p0pr0ck5/c99bca54734af7546d910db8d7c97ab3>). Saw about 1000 transactions processed per second, indicating (as should be assumed) that linear growth in the size of the included ruleset results in similar performance reduction. Flamegraphs again point to Rule::evaluate, with Rule::getFinalVars taking up half those traces. I suspect there could be some smarter behaviors about avoiding object creation if its unneeded on these hot paths. I observed the same peak places in my perf experiments earlier. > Hopefully this is of use to some folks. I'd be interested in writing some automated scaffolding to execute this type of testing against every commit, to maintain a continuous audit trail of performance impacts; if this sounds interesting to the maintainers I'd be happy to chat further. We used to maintain automated benchmark system powered by codespeed [1] for such kind of task. The main issue here is that you have to rely on underlying hardware and OS in order to keep benchmark results in consistent state. If anything is changed (e.g. 
server upgrade / OS upgrade / kernel upgrade), it's worth re-running the entire series of benchmark subsets, which would (and probably will) take a lot of time. We've been running those on bare-metal servers, and there were still some issues with inconsistency growing over time. [1] https://github.com/tobami/codespeed |
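One way to at least detect the environment drift Andrei describes is to re-run a fixed reference workload periodically and flag when its spread leaves a tolerance band. A sketch of such a check (the threshold and function name are assumptions for illustration, not something codespeed provides):

```python
from statistics import mean, pstdev

def is_environment_stable(reference_runs, max_rel_stdev=0.05):
    """Flag benchmark-host drift: the relative standard deviation of
    repeated runs of a fixed reference workload should stay small."""
    avg = mean(reference_runs)
    return pstdev(reference_runs) / avg <= max_rel_stdev

# Stable host: repeated runs of the same workload cluster tightly.
print(is_environment_stable([319, 322, 317, 320]))  # prints True
# After, say, a kernel upgrade, the spread (or the mean) jumps.
print(is_environment_stable([319, 352, 290, 331]))  # prints False
```

When the check fails, that is the signal to re-run the whole benchmark series rather than trust comparisons against older results.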
|
From: Felipe C. <FC...@tr...> - 2018-04-04 20:20:17
|
Hi, I really like to see those performance experiments going on. There are many perspectives in this subject. I like the idea of having multiple rule sets and versions. My rationale is that ModSecurity was not designed to run a single rule set, nor a single rule set version. It may be optimized for a specific rule set, but still, what are the consequences of the optimizations for the others.... At this point it makes sense to focus only on the public rule sets, but we might need to take other rule sets into consideration as well. Felipe “Zimmerle” Costa Security Researcher, Lead Developer ModSecurity. Trustwave | SMART SECURITY ON DEMAND www.trustwave.com <http://www.trustwave.com/> On 4/4/18, 8:15 AM, "Andrei Belov" <de...@ng...> wrote: > On 04 Apr 2018, at 11:58, Christian Folini <chr...@ne...> wrote: > > Hello Andrei, > > On Wed, Apr 04, 2018 at 11:29:18AM +0300, Andrei Belov wrote: >> I think that environment could be [relatively easily] extended to support >> Apache + ModSec 2.x, in addition to nginx + ModSec 3.x, in order to simplify >> "direct" comparison and provide reproducible, statistically significant results. > > Very cool. Thank you for sharing - and thanks for your contributions to > ModSecurity, namely 3.0.1. > > The conceptual problem I see is that it's more than one variable here. > Apache/ModSec2 vs. NGINX/ModSec3. I'm an Apache person, but when I stripped > the two of Modsec and let the bare minimum installations serve static > files, NGINX blew me away. > > So I kind of think that one would have to slow down NGINX to reach an Apache > level and then in a 2nd step add ModSec again to be able to measure ModSec2 vs > ModSec3. > > What is your take on this?
Well, ideally it would be awesome to have the following combos in [perf] tests:

a) Apache + ModSec 2.x + CRS 2.x
b) Apache + ModSec 3.x + CRS 3.x
c) nginx + ModSec 2.x + CRS 2.x
d) nginx + ModSec 3.x + CRS 3.x

(obviously, CRS component could be optional when one is going to measure "generic overhead") However, I have limited knowledge on the following:
- is ModSec 3.x has been ever targeted to support CRS < 3,
- is there a working Apache connector for ModSec 3.x.
Also I'm not sure whether ModSec 2.x has its own benchmarks (not related to any connector). If it does, then perhaps it would be good to compare "generic" ModSec 2.x vs "generic" ModSec 3.x as well. BTW, for those who are familiar with tools like gdb / perf / systemtap etc, there's the "debugenv" state in vagrant env: https://github.com/defanator/modsecurity-performance/blob/master/states/debugenv.sls It could be useful for some deeper investigations.
_______________________________________________
mod-security-users mailing list
mod...@li...
https://lists.sourceforge.net/lists/listinfo/mod-security-users
Commercial ModSecurity Rules and Support from Trustwave's SpiderLabs:
http://www.modsecurity.org/projects/commercial/rules/
http://www.modsecurity.org/projects/commercial/support/ |
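For what it's worth, the four combinations Andrei lists (plus a no-ruleset baseline per engine for the "generic overhead" case) can be generated mechanically when wiring up such tests; a trivial illustrative sketch (the labels are just strings, not tied to any harness):

```python
servers = ["apache", "nginx"]
# Each engine generation paired with its matching CRS generation.
pairs = [("ModSec 2.x", "CRS 2.x"), ("ModSec 3.x", "CRS 3.x")]

# None = no ruleset loaded, i.e. the "generic overhead" measurement.
matrix = [
    (server, engine, ruleset)
    for server in servers
    for engine, crs in pairs
    for ruleset in (None, crs)
]

for combo in matrix:
    print(combo)
```

That yields the combos a)-d) above plus their four rule-free baselines, eight runs in total per benchmark pass.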
|
From: Robert P. <rpa...@fe...> - 2018-04-04 21:05:29
|
Hi, On Wed, Apr 4, 2018 at 1:20 PM, Felipe Costa <FC...@tr...> wrote: > > I like the idea of having multiple rules set and versions. My rationale is > that > ModSecurity was not designed to run a single rule set, nor a rule set > version. > It may be optimized for a specific rule set, but still, what are the > consequences of > the optimizations for the others.... At this point it makes sense to focus > only the public > rules set, but we might need to take into consideration other rule sets as > well. > I wholeheartedly agree. This is one of the largest drawbacks of ModSecurity's design, IMO. Trying to separate *engine* performance from *rule* performance was a big focus while developing lua-resty-waf, and we still don't have it down right. Trying to write a highly optimized engine for an arbitrary DSL is a tall order. At this point, I don't think that there needs to be optimization/tuning done that is geared specifically toward any particular ruleset. Indeed, "ruleset" is a nebulous topic in its own right. I think we've clearly shown in this thread that there are hot paths within the engine itself that can use improvement. I do not mean to criticize any of the development team by this; I simply wish to highlight the nature of what we've found through some basic benchmarking and profiling. I suspect further investigative efforts will shed more light on areas where both community-backed rulesets, and the ModSecurity rule engine, can be improved. |
|
From: Felipe Z. <fe...@zi...> - 2018-04-05 13:33:41
|
Hi, On Wed, Apr 4, 2018 at 2:44 PM Robert Paprocki < rpa...@fe...> wrote: > Hi, > > On Wed, Apr 4, 2018 at 5:03 AM, Christian Folini < > chr...@ne...> wrote: >> >> >> > However, I have limited knowledge on the following: >> > - is ModSec 3.x has been ever targeted to support CRS < 3, >> >> See above. >> > > It would be great to here an official stance from the development team on > this. @Felipe can you comment? > > Feature wise it should support both. ModSecurity was not written to support OWASP CRS, but the SecRule Language. Consequently it supports OWASP CRS. There is a milestone for that here: https://github.com/SpiderLabs/ModSecurity/milestone/11 > >> > Also I'm not sure whether ModSec 2.x has its own benchmarks (not >> related to any connector). >> > If it does, then perhaps it would be good to compare "generic" ModSec >> 2.x >> > vs "generic" ModSec 3.x as well. >> >> Yes, that would be cool. But from what I understand, ModSec 2.9.x is >> deeply >> integrated into the webserver. >> > > > I don't think it is possible to split v2 from Apache. > Yeah, this is a bit of a pain to test. I modified one of the example > programs that comes with the v3/master ModSecurity source code as follows: > > https://gist.github.com/p0pr0ck5/9b2c414641c9b03d527679d0c8cb7d86 > > There is a benchmark utility in the repository: https://github.com/SpiderLabs/ModSecurity/tree/v3/master/test/benchmark > Note that this doesn't add any headers or body data (request or response) > to the transaction, and the included "basic_rules.conf" is unchanged from > what's in the example repo (and ignore the memory leak in not cleaning up > the transaction). 
So running this very light example: > > $ ./test > Rules: > Phase: 0 (0 rules) > Phase: 1 (0 rules) > Phase: 2 (2 rules) > Rule ID: 200000--0x1e2b6c0 > Rule ID: 200001--0x1e2bc90 > Phase: 3 (4 rules) > Rule ID: 200002--0x1e2c720 > Rule ID: 200003--0x1e2ea00 > Rule ID: 200004--0x1e2f200 > Rule ID: 200005--0x1e2fc10 > Phase: 4 (0 rules) > Phase: 5 (0 rules) > Phase: 6 (0 rules) > Phase: 7 (0 rules) > Did 9907 > Done! > > We see around 10k processes per second. Essentially all of our time is > spent waiting on memory allocations: > https://s3.amazonaws.com/p0pr0ck5-data/modsec-simple.svg > > Now consider the case where we include the full 3.0.0 CRS: > > $ ./test > Rules: > Phase: 0 (32 rules) > Rule ID: 0--0x22ca500 > Rule ID: 0--0x22d9480 > Rule ID: 0--0x22f9530 > Rule ID: 0--0x22f9f20 > [...snip several hundred lines...] > Did 319 > Done! > > So this is about the same throughput we saw from Nginx + libmodsec > integration. A flamegraph also highlights hot paths, particularly in > Rule::evaluate (also largely spent on allocation wait time at this point, > since each evaluation instantiates many new objects): > https://s3.amazonaws.com/p0pr0ck5-data/modsec-simple-crs.svg > > (I also performed the same tests on 3.0.0; flamegraphs are also in this s3 > bucket but I will avoid commentary here for now as this thread is gotten > long in the tooth as it is). > > A few initial takeaways: > > - There is a clearly definable minimum overhead needed to execute > libmodsecurity, based on its current architecture > - I noted the same memory leak that defanator noted in #1729. I have yet > to apply the patches noted. > - HTTP integrations will induce some overhead, which may or may not be > substantial. With smart memory pooling and good design, though, this should > be kept to a minimum. > - Leveraging the full CRS completely tanks throughput. It would take a > while more to dig into specifics but I suspect that Rule::evaluate > probably needs an overhaul. 
> - I ran one more test against a mock set of dumb rules (generated via > https://gist.github.com/p0pr0ck5/c99bca54734af7546d910db8d7c97ab3). Saw > about 1000 transactions processed per second, indicating (as should be > assumed) that linear growth in the size of the included ruleset results in > similar performance reduction. Flamegraphs again point to Rule::evaluate, > with Rule::getFinalVars taking up half those traces. I suspect there > could be some smarter behaviors about avoiding object creation if it's > unneeded on these hot paths. > > There are a few things in the code that can be improved. There are even "TODO:" markings. But at a certain point you may want to look at the rules. That is very important. > Hopefully this is of use to some folks. I'd be interested in writing some > automated scaffolding to execute this type of testing against every commit, > to maintain a continuous audit trail of performance impacts; if this sounds > interesting to the maintainers I'd be happy to chat further. > Andrei does that already. Not automagically, but frequently updated. Some of the results are posted here: https://github.com/defanator/modsecurity-performance/wiki |