> On 04 Apr 2018, at 20:44, Robert Paprocki <rpa...@fe...> wrote:
>
[..]
> A few initial takeaways:
>
> - There is a clearly definable minimum overhead needed to execute libmodsecurity, based on its current architecture
> - I noted the same memory leak that defanator noted in #1729. I have yet to apply the patches noted.
Let me know if those work for you.
> - HTTP integrations will induce some overhead, which may or may not be substantial. With smart memory pooling and good design, though, this should be kept to a minimum.
> - Leveraging the full CRS completely tanks throughput. It would take more time to dig into specifics, but I suspect that Rule::evaluate probably needs an overhaul.
> - I ran one more test against a mock set of dumb rules (generated via https://gist.github.com/p0pr0ck5/c99bca54734af7546d910db8d7c97ab3). I saw about 1000 transactions processed per second, indicating (as should be expected) that linear growth in the size of the included ruleset results in a proportional performance reduction. Flamegraphs again point to Rule::evaluate, with Rule::getFinalVars taking up half of those traces. I suspect there could be smarter behavior around avoiding object creation when it's not needed on these hot paths.
I observed the same hot spots in my earlier perf experiments.
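For anyone who wants to reproduce the linear-scaling test locally, here is a minimal sketch of what such a set of "dumb" rules might look like. Note this is not the referenced gist; the rule ids and patterns below are made up for illustration. Each rule matches a value that should never appear in real traffic, so the engine has to evaluate every rule on every request without any of them firing:

```python
#!/usr/bin/env python3
"""Generate N trivial ModSecurity rules to stress linear rule evaluation.

Illustrative sketch only -- ids, phase, and patterns are assumptions,
not taken from the gist linked above.
"""

def generate_rules(n, start_id=100000):
    rules = []
    for i in range(n):
        # Each rule gets a unique id, and matches a unique string that
        # should never be present in test traffic, so all n rules are
        # evaluated (and none fire) on every transaction.
        rules.append(
            'SecRule ARGS "@contains neverpresent%d" '
            '"id:%d,phase:2,pass,nolog"' % (i, start_id + i)
        )
    return "\n".join(rules)

if __name__ == "__main__":
    # Dump a small ruleset to stdout; redirect into a .conf file and
    # Include it from your test configuration.
    print(generate_rules(5))
```

Varying n (e.g. 100, 1000, 10000) and plotting transactions/sec against ruleset size should make the linear relationship visible directly.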
> Hopefully this is of use to some folks. I'd be interested in writing some automated scaffolding to execute this type of testing against every commit, to maintain a continuous audit trail of performance impacts; if this sounds interesting to the maintainers I'd be happy to chat further.
We used to maintain an automated benchmark system powered by codespeed [1] for exactly this kind of task.
The main issue is that you have to rely on the underlying hardware and OS to keep benchmark results consistent. If anything changes (e.g. a server, OS, or kernel upgrade), it's worth re-running the entire series of benchmark subsets, which would (and probably will) take a lot of time.
We ran those on bare-metal servers, and there were still some issues with inconsistency growing over time.
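The per-commit scaffolding itself does not have to be complicated, though. A rough sketch of the loop is below; the build command, benchmark command, and the "transactions/sec:" output format are all assumptions to be replaced by whatever your real harness produces:

```python
#!/usr/bin/env python3
"""Sketch: benchmark every commit in a range and record transactions/sec.

Assumptions (replace with your real harness): the benchmark tool prints
a line of the form "transactions/sec: <float>", and the project builds
with a single shell command.
"""

import json
import subprocess

def git(*args):
    """Run a git command and return its stripped stdout."""
    return subprocess.check_output(["git", *args], text=True).strip()

def parse_tps(bench_output):
    """Extract transactions/sec from benchmark output (assumed format)."""
    for line in bench_output.splitlines():
        if line.startswith("transactions/sec:"):
            return float(line.split(":", 1)[1])
    raise ValueError("no transactions/sec line found")

def bench_each_commit(rev_range, build_cmd, bench_cmd, out="results.json"):
    """Check out each commit in rev_range, build, benchmark, record tps."""
    commits = git("rev-list", "--reverse", rev_range).splitlines()
    results = {}
    for sha in commits:
        git("checkout", "--quiet", sha)
        subprocess.check_call(build_cmd, shell=True)  # e.g. "make -j4"
        output = subprocess.check_output(bench_cmd, shell=True, text=True)
        results[sha] = parse_tps(output)
    with open(out, "w") as f:
        json.dump(results, f, indent=2)
    return results
```

The hard part, as noted above, is not the loop but keeping the hardware and OS stable enough that results from different runs remain comparable.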
[1] https://github.com/tobami/codespeed