From: Julian S. <js...@ac...> - 2011-03-05 00:23:14
> 1,456,988,256 in 90,146: hg.ids.5 (univ_laog)

Urr, that's terrible. I had no idea that the laog mechanism could use so much memory.

Search for the string "hg.ids.5" -- this is the allocation tag used for allocating the word sets. The LAOG implementation is my first attempt to implement a lock order graph checker, and could probably use some improvement, or "rm -rf" followed by a new implementation.

These are sets of words -- 90146 sets, each set is a set of words, and each word is a Lock*. So in short these are sets of Lock*s.

I don't remember exactly how the LAOG works -- I would need to study the code. Basically it must build a directed graph of Lock*s and then check that later acquisitions of locks do not violate the ordering recorded in the graph.

The scaling problem that worries me is this: the application creates thousands of (e.g.) C++ objects, and each one has a lock (e.g., a pthread_mutex_t). That means Helgrind has to allocate the same number (thousands) of its own Lock structures, and so the laog gets very big.

Having said that ... if the application does pthread_mutex_destroy on a lock before deleting the object, then H should free up the associated Lock structure. Maybe the problem is that the WordSets (which are sets of Lock*s) are never GC'd. (Just guessing.)

In this situation it is useful to make a small test case which shows the problem. Maybe you can do that?

J