Here is more complete data on the test I performed; it
just so happened the weather was a little too nice on Friday ;-)
I wanted to find a way to improve the performance of a number of
operations on the snort database, specifically deletes. As a
postgres user, I wanted to see if I could tighten up the
database while at the same time improving performance. So I
considered a number of options: adding referential integrity,
using cascading deletes, and creating views into the database to
simplify client queries. For now I settled on the first two.
I generated 2 new schemas: one that took the original snort schema
and added referential integrity with cascading deletes, and another
that took that schema and modified the indexes to attempt to improve
the execution performance of the queries.
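To make the idea concrete, here is a cut-down, hypothetical sketch of the kind of constraint the new schemas add (not the attached schemas themselves; the table and column names are simplified from the snort layout):

```sql
-- Hypothetical, simplified pair of snort-style tables.
CREATE TABLE signature (
    sig_id   INTEGER PRIMARY KEY,
    sig_name VARCHAR(255) NOT NULL
);

CREATE TABLE event (
    sid       INTEGER NOT NULL,
    cid       INTEGER NOT NULL,
    signature INTEGER NOT NULL
        REFERENCES signature (sig_id) ON DELETE CASCADE,
    timestamp TIMESTAMP NOT NULL,
    PRIMARY KEY (sid, cid)
);
```

With the REFERENCES ... ON DELETE CASCADE constraint in place, deleting a row from signature automatically removes the dependent event rows, rather than leaving the client to clean up each table itself.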
The server is a 997MHz P3 with 2G RAM and a 10k RPM SCSI disk, running
FreeBSD 4.5. I am running postgresql 7.1.3 with fsync off, sort_mem =
1024, and shared_buffers = 4096. I am not sure if these are the best settings,
but I am including them here for completeness. The tests were performed local
to the server using psql. Attached are the 2 schemas.
Here are the steps I followed in performing the test.
1. Created 2 separate databases. Both had referential integrity and
   cascading deletes:
   ST1: was the first; it had no additional modifications to the indexes.
   ST2: was the second; it had modifications to the indexes.
   My focus was on the cid/sid of each table and the other key columns.
2. Modified the snort config to log to both ST1 and ST2.
3. Generated 2000 alerts by sending large icmp pkts at the test host to
   cause speedra alerts.
4. For the test I measured the time it took to complete the following
   query: "delete from signature where sig_id = '1';". This caused all
   event rows, iphdr rows, etc. that were related to a signature with
   sig_id of 1 to be deleted.
5. I performed the initial test by timing the execution of the test query
on ST1 then ST2.
6. ST1: 13.53 sec, ST2: 30.77 sec
7. Repeated after running vacuum and vacuum analyze.
8. ST1: 13.45 sec, ST2: 0.95 sec
9. Repeated with 25013 alerts, re-vacuumed and vacuum analyzed.
10. ST1: 1974.90 sec, ST2: 12.62 sec (ST1: 12.6 delps, ST2: 1982 delps)
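The vacuum step between runs is just the standard commands issued from psql against each database (shown here for completeness; the exact invocation I used matched the text above):

```sql
-- Run against each database between timing runs.
VACUUM;           -- reclaim space from rows deleted by the test query
VACUUM ANALYZE;   -- also refresh the statistics the query planner uses
```

As the numbers below show, the analyze half of this is what matters: it is what gives the planner accurate statistics to choose index scans over sequential scans.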
Without running vacuum analyze, ST1 saw 147.8 events deleted per
second (delps); ST2, on the other hand, provided a meager 64.9 delps.
Next I reran the tests after running vacuum analyze on both databases;
this time ST1 saw 138 delps and ST2 saw 2105.2 delps.
I was stunned not only by the slowness of ST2 in the first test but
even more by the improvement gained by vacuum analyzing the database. In
the last test I increased the number of events to 25013 and re-analyzed
the dbs; this time ST1 dropped to 12.63 delps while ST2 had only a slight
dropoff to 1982 delps.
This tells me that providing the query optimizer with reliable
stats makes postgresql very happy. As with anything, I would take these
numbers with a grain of salt, because the composition of my test databases
did not mimic what one might see in a typical database, i.e. I didn't have
dozens of unique signatures, a good mix of packet types, etc.
What else this doesn't tell me:
I haven't compared this to the way a delete works in the
current schema, so I don't yet know if this is any better than what we
currently have, though based on my initial experience I am going to guess
that it is faster.
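For reference, without the cascading constraints a client has to delete from each child table itself before touching the parent. A hypothetical sketch of that (not the actual snort client code, and using the simplified table names from earlier):

```sql
-- Hypothetical client-side delete when the schema has no cascades:
-- children must be removed before the parent row.
DELETE FROM iphdr
 WHERE (sid, cid) IN (SELECT sid, cid FROM event WHERE signature = 1);
DELETE FROM event WHERE signature = 1;
DELETE FROM signature WHERE sig_id = 1;
```

Comparing the timing of a sequence like this against the single cascading delete would be the fair test I haven't run yet.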
I don't have a good sense of what the modifications to the indexes will do
to the rate at which we can insert into the database. I didn't notice any
problems during my test, but I wasn't even doing a flood ping.
Anyhow, if anyone has any other questions I will attempt to answer them...