From: Bart V. A. <bar...@gm...> - 2006-08-27 14:59:25
Hello Julian,
You reported trouble when running knode under drd. When I tried to run
knode under drd with segment tracing enabled (inst/bin/valgrind --tool=drd
--trace-segment=yes knode), I observed the following behavior:
- As long as knode was running single-threaded, everything went fine (only
one segment was kept in memory).
- After knode created a second thread, the number of segments associated
with the first thread kept increasing, while there was only one segment
associated with the second thread.
Result: memory use kept increasing, and knode got killed by the OOM handler.
I'm not familiar with the source code of knode, but from the vector clocks I
can see that no synchronization actions are performed by the second thread
(the vector clock component for thread 2 remains at 1 the whole time). That
is why the number of segments for the first thread keeps increasing. This
effect is also explained in the papers on DIOTA. The solution is to change
thread_discard_ordered_segments() such that, if the number of segments
associated with a thread becomes too large, these segments are merged
(bitwise or) into a single segment. The result is that data race reports
become less precise but that memory consumption stays within reasonable
bounds. See also the paragraph called "Merging Segments" in the paper "An
Efficient Data Race Detector Backend for DIOTA".
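To illustrate the idea, here is a minimal C sketch of segment merging. It is
not the actual drd code: Segment, NTHREADS, MAX_SEGMENTS, segment_merge() and
merge_if_too_many() are all hypothetical names, and the access information is
simplified to a single 64-bit bitmap. The point is only the merge rule: take
the element-wise maximum of the vector clocks and the bitwise or of the
access information, so no access is lost but precision decreases.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NTHREADS 4        /* hypothetical fixed thread count */
#define MAX_SEGMENTS 8    /* hypothetical per-thread segment limit */

/* Simplified segment: a vector clock plus a toy access bitmap
   (one bit per "address" in a 64-byte region). */
typedef struct {
    uint32_t vc[NTHREADS];
    uint64_t accessed;
} Segment;

/* Merge src into dst: element-wise maximum of the vector clocks,
   bitwise or of the access bitmaps. Afterwards dst conservatively
   covers all accesses of both segments. */
static void segment_merge(Segment *dst, const Segment *src)
{
    for (int i = 0; i < NTHREADS; i++)
        if (src->vc[i] > dst->vc[i])
            dst->vc[i] = src->vc[i];
    dst->accessed |= src->accessed;
}

/* If a thread owns too many segments, collapse them all into the
   first one and return the new segment count. */
static int merge_if_too_many(Segment *segs, int n)
{
    if (n <= MAX_SEGMENTS)
        return n;
    for (int i = 1; i < n; i++)
        segment_merge(&segs[0], &segs[i]);
    return 1;
}
```

With this cap in place, a thread that never synchronizes can no longer cause
unbounded growth: its peers accumulate at most MAX_SEGMENTS segments before
they are folded into one.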