From: Tord H. <th...@ha...> - 2001-08-14 14:47:25
> Personally I'm more interested in the testing perspective, in that it
> allows us to check for problems, but I assume the figures also allow us
> to compare to other systems. Any idea how we fare?

This is hard to guess, since the small "databases" (like postgres, mysql, and so on) didn't run the TPC benchmarks. And the results at tpc.org use database sizes which are nearly impossible to handle with firebird. So I think it can only be used to stress test the engine, once I have found the reason why queries 3, 10, 12, 13 and 17 returned the wrong result set.

But back to the TPC-R: there are only two results submitted on the tpc.org homepage. I have downloaded the latest result, dated March 6, 2000. The benchmark ran at scale 1000, which means 1000 gigabytes worth of data.

With only one query executing (== power test), all queries finished after 3 hours 27 minutes.

In the second test (== throughput test) 7 parallel threads were executed. Each one ran the full set of 22 queries. At the same time, an 8th thread did 1,500,000 inserts into the orders table and about 5,250,000 inserts into the lineitem table; after that, the thread deleted 1,500,000 entries in the orders table together with the linked entries in the lineitem table. The 7 threads finished after about 6-9 hours, the 8th thread after 21 hours.

Load time for the database (including sending 1000 gigabytes to the database server and building all the indexes) was 84 hours 15 minutes. The database server had 16 nodes, each containing 4 pentium iii/550 CPUs and 4 GB RAM. The TCO for 5 years (including system price and support for 5 years) was about 13 million dollars :)

There are more results for the TPC-C benchmark, but this one is quite difficult to convert, since this benchmark simulates terminals (with think and key time). But FYI, I have downloaded the result from the best non-cluster system as well.
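As a quick back-of-envelope check on those figures (a sketch with my own assumptions: I read "1000 Gigabytes" as 10^6 MB and take the load window as exactly 84 h 15 min; neither is from the result file):

```python
# Sanity-check the refresh-stream and load figures quoted above.
orders_inserted = 1_500_000        # inserts into orders (from the result)
lineitems_inserted = 5_250_000     # inserts into lineitem (from the result)
ratio = lineitems_inserted / orders_inserted
print(f"lineitem rows per order: {ratio}")          # 3.5

# Assumption: 1000 GB taken as 10**6 MB, load window exactly 84 h 15 min.
load_seconds = 84 * 3600 + 15 * 60
load_rate_mb_s = 1_000_000 / load_seconds
print(f"sustained load rate: {load_rate_mb_s:.1f} MB/s")   # 3.3 MB/s
```

So the refresh data averages 3.5 lineitem rows per order, which is plausible for the orders/lineitem schema, and the load (including index builds) sustained only about 3.3 MB/s even on that hardware.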
In this test, a compaq AlphaServer with 32 CPUs, 256 GB memory and 22 terabytes of disk space could handle 27 million new-order (keying in a new order from a customer) and 26 million payment (committing payment of a bill) transactions in two hours.

Remember the bug with transaction numbers Ann fixed some days ago? With 1k pages the limit was about 130 million transactions; this beast would hit the limit in about 4 hours :))

A little update to my last message:

> >query  | execution time (seconds) | correct result
> >-------+--------------------------+---------------
> >    20 | still running            | n/a

After 4 hours, this query is still running at 100% cpu with nearly no disk activity.

Tord
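PS: the "about 4 hours" above works out roughly like this (a sketch; I am assuming the standard TPC-C mix, where new-order and payment together are about 88% of all transactions, and the ~130 million figure is the 1k-page limit from this thread):

```python
# Rough check of the "hits the transaction-ID limit in ~4 hours" claim.
new_order = 27_000_000     # new-order transactions in the 2-hour window
payment = 26_000_000       # payment transactions in the same window
measured_share = 0.45 + 0.43   # assumed TPC-C mix: ~45% new-order, ~43% payment
total_per_hour = (new_order + payment) / measured_share / 2
tx_limit = 130_000_000         # Firebird transaction-ID limit with 1k pages
hours_to_limit = tx_limit / total_per_hour
print(f"hours to hit the limit: {hours_to_limit:.1f}")   # 4.3
```

Scaling the two reported transaction types up to the full mix gives roughly 30 million transactions per hour, so the 130 million IDs would indeed be exhausted in a bit over 4 hours.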