[Assorted-commits] SF.net SVN: assorted:[1184] ydb/trunk/README
From: <yan...@us...> - 2009-02-15 03:22:41
Revision: 1184
          http://assorted.svn.sourceforge.net/assorted/?rev=1184&view=rev
Author:   yangzhang
Date:     2009-02-15 03:22:34 +0000 (Sun, 15 Feb 2009)

Log Message:
-----------
more progress/notes

Modified Paths:
--------------
    ydb/trunk/README

Modified: ydb/trunk/README
===================================================================
--- ydb/trunk/README	2009-02-15 03:22:07 UTC (rev 1183)
+++ ydb/trunk/README	2009-02-15 03:22:34 UTC (rev 1184)
@@ -30,6 +30,7 @@
 - [C++ Commons] svn r1082
 - [clamp] 153
 - [GCC] 4.3.2
+- [google-sparsehash] 1.4
 - [googletest] 1.2.1
 - [Lazy C++] 2.8.0
 - [Protocol Buffers] 2.0.0
@@ -39,6 +40,7 @@
 [C++ Commons]: http://assorted.sourceforge.net/cpp-commons/
 [clamp]: http://home.clara.net/raoulgough/clamp/
 [GCC]: http://gcc.gnu.org/
+[google-sparsehash]: http://code.google.com/p/google-sparsehash/
 [googletest]: http://code.google.com/p/googletest/
 [Lazy C++]: http://www.lazycplusplus.com/
 [Protocol Buffers]: http://code.google.com/p/protobuf/
@@ -280,7 +282,7 @@
   - what to do? limit parallelism? how?
   - include actual separate clients?
 
-Period: 2/5-
+Period: 2/5-2/12
 
 - DONE commit!!!
 - DONE google profiling
@@ -331,21 +333,55 @@
 - DONE make readmsg perform fewer syscalls (buffer opportunistically)
   - like magic: now can sustain 90 Ktps all the way up through 3 xacts!
 
-Period
+Period 2/12-2/19
 
 - DONE p2 prototype
   - some interesting performance bugs
     - forgot to make some sockets non-blocking, eg accepted client socket,
       eg the client's socket to server; everything still works with select
       - i was indeed forgetting to set this as well in ydb
+      - this made bcast-async irrelevant
     - was always inadvertently calling read() whenever i requested some #
       bytes
       - made a big diff in leveling the field between smallerish to
         largerish msg sizes
-      - this was hurting me only slightly in ydb, it seems
+      - this was hurting me only slightly in ydb
     - was not aggressively consuming as many msgs as i could, only 1 at a
      time (per return from select)
+      - not having this problem in ydb
 - DONE batch responses
   - made a marked difference; ~100 Ktps -> ~140 Ktps (for 1-4 reps)
+- found a possible perf bug: string.c_str() instead of .data()
+- DONE make regular bcastmsg use a single st_write call instead of two (by
+  serializing the len-prefix in)
+  - this maybe improved things only a teeny bit
+- DONE try introducing protobuf serialization into both wal and solo to see how
+  much of the perf degradation is due to ser
+  - lowered solo from 220 Ktps to 190 Ktps
+  - lowered wal from 180 Ktps to 170 Ktps
+- DONE try making process_txns also use st_reader
+  - didn't help much; in fact, seemed to hurt?!
+- DONE try lifting txnbatch in process_txns
+  - made a huge diff: 140 Ktps to 220 Ktps
+  - network is now faster than local!!!
+- DONE try lifting resbatch in handle_responses
+  - added a little bit more: 220 Ktps to 225 Ktps
+- DONE try reusing the serialized msgs
+  - easier than expected; just call .Clear()!
+  - lost the amazing new breakthrough above
+    - -1: 190 -> 240
+    - 0: 220 -> 320 Ktps
+    - 1: 220 -> <240
+    - 3: 220 -> <240
+- DONE try adding fake-execution
+  - made a huge difference
+    - -1: 680K
+    - 0: 2M
+    - 1: 730K
+    - 2: 600K
+    - 3: 450K
+- TODO commit
+- TODO remove extraneous copies; use custom buffer-backed data structures
+  designed for serialization/deserialization
 - TODO flushing
 - TODO make the logger a "single replica"
 - TODO oprofile
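
A few of the items above are worth spelling out. The sockets-left-blocking
bug is a classic: a socket returned by accept() does not inherit O_NONBLOCK
from the listening socket, so each accepted fd has to be marked non-blocking
explicitly before it joins a select() loop. A minimal POSIX sketch (ydb goes
through the State Threads wrappers, so this is illustrative rather than the
actual ydb code):

    #include <fcntl.h>

    /* Mark fd non-blocking; returns 0 on success, -1 on error.  Easy to
     * forget for fds returned by accept(), which start out blocking. */
    int make_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

Everything "still works with select" because select() only hands back fds
that are already readable; the loop only stalls if something speculatively
reads past the bytes that are actually available.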
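The readmsg items ("buffer opportunistically", consume as many msgs as
possible per wakeup) share one loop shape: issue one large read() per
readiness event, then peel off every complete length-prefixed message
sitting in the buffer, instead of two small read()s per message and one
message per select() return. A hedged sketch; the 4-byte host-order length
prefix and the class name are assumptions, not ydb's actual wire format:

    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <string>
    #include <vector>

    // Sketch: one read() per wakeup, then drain all complete messages.
    class BufferedReader {
    public:
        explicit BufferedReader(int fd) : fd_(fd) {}

        // Append whatever bytes are currently available (call on readiness).
        bool fill() {
            char tmp[65536];
            ssize_t n = read(fd_, tmp, sizeof tmp);
            if (n <= 0) return false;            // EOF, error, or EAGAIN
            buf_.insert(buf_.end(), tmp, tmp + n);
            return true;
        }

        // Extract every complete [u32 len][payload] message in the buffer.
        void drain(std::vector<std::string> &msgs) {
            size_t off = 0;
            while (buf_.size() - off >= 4) {
                uint32_t len;
                memcpy(&len, &buf_[off], 4);
                if (buf_.size() - off - 4 < len) break;   // partial; wait
                msgs.push_back(std::string(buf_.begin() + off + 4,
                                           buf_.begin() + off + 4 + len));
                off += 4 + len;
            }
            buf_.erase(buf_.begin(), buf_.begin() + off);
        }

    private:
        int fd_;
        std::vector<char> buf_;
    };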
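The single-st_write bcastmsg change is the same idea on the send side:
serialize the length prefix into one contiguous buffer with the payload and
make one write call instead of two. Sketched here with plain write(); the
actual code goes through st_write(), and the network-order u32 prefix is an
assumed framing:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <string>

    // Build [u32 length][payload] in one buffer and issue a single write().
    // A production version would loop on short writes.
    bool write_framed(int fd, const std::string &payload)
    {
        uint32_t len = htonl(payload.size());
        std::string frame(reinterpret_cast<const char *>(&len), sizeof len);
        frame += payload;
        return write(fd, frame.data(), frame.size())
            == static_cast<ssize_t>(frame.size());
    }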
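Batching responses (the ~100 Ktps -> ~140 Ktps item) composes with that
framing: append several framed responses into one buffer and flush them with
a single write, so the syscall (and packet) cost is paid once per batch
rather than once per response. Again a sketch under the same assumed framing:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <string>
    #include <vector>

    // Frame every response into one buffer, then flush with one write().
    bool write_batch(int fd, const std::vector<std::string> &responses)
    {
        std::string batch;
        for (size_t i = 0; i < responses.size(); i++) {
            uint32_t len = htonl(responses[i].size());
            batch.append(reinterpret_cast<const char *>(&len), sizeof len);
            batch += responses[i];
        }
        return write(fd, batch.data(), batch.size())
            == static_cast<ssize_t>(batch.size());
    }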
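Finally, "reusing the serialized msgs" and lifting txnbatch/resbatch out of
their loops are both instances of hoisting a protobuf message out of the hot
path: Clear() resets field values but keeps the message's heap allocations
(string capacity, nested message objects) around, so per-iteration allocation
mostly disappears. A sketch with a hypothetical Txn message standing in for
ydb's real type:

    #include <string>
    // #include "txn.pb.h"  // generated header for the hypothetical Txn message

    void serialize_loop(int niters)
    {
        Txn txn;              // constructed once, outside the loop
        std::string wire;
        for (int i = 0; i < niters; i++) {
            txn.Clear();      // resets fields, keeps allocated storage
            // ... populate txn's fields for this iteration ...
            wire.clear();
            txn.SerializeToString(&wire);
            // ... ship `wire`, e.g. via write_framed() above ...
        }
    }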