From: Pedot, W. <wp...@ha...> - 2004-04-05 08:54:35
Hi Richard,

> From: "Pedot, Wolfgang" <wp...@ha...>
>
> > [...] I noticed that the new build is awfully slow.
>
> For small repositories, the current build is *a bit* slower because of
> additional charts, and requires *much* more memory because we use a
> different collection class (tree-based instead of array-based).
>
> The larger memory footprint turns into a heavy performance hit when the
> JVM has to garbage collect all the time or starts swapping. I guess
> that's what's happening in your case.
>
> So here's what you can do (not very useful, I know ...):
>
> - buy more RAM
>
> - fiddle with the -mx setting to set the max amount of RAM for the JVM:
>   "java -mx512m -jar statcvs.jar ..."
>
> - use the -verbose switch of StatCvs to see if it's still running and
>   how much memory it needs (shown after finishing)
>
> Richard

I just tested what you said. Without the -mx512m option the JVM takes
~85 megs after 10 minutes, and the amount stops growing after that, so
I guess that's the default limit. With the -mx512m option it is now
at >110 megs after 100 minutes of runtime but still has not completed
the first kernel. My system is definitely not swapping the JVM; in
fact, the amount of used swap has dropped slightly since I started
StatCvs.

The main stats (LOC, authors page, ...) are generated pretty fast; what
takes ages is the generation of all the HTML and PNG files for every
subdirectory, especially in the kernel directories. Maybe that is
because the import message (with tons of links) of the whole kernel is
regenerated in most of the subdirectories: we only change small parts
of the kernel, so it stays in the recent commits forever.

So I guess I will stick with ignoring the kernels. I can't let StatCvs
run for so long because that would collide with the backup...

Thanks for your help,
Wolfgang Pedot
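
For reference, a minimal sketch of the invocation discussed above,
assuming the CVS log has already been dumped to a file (cvs.log here is
just a placeholder name) and StatCvs is run from the checked-out module
directory; the exact positional arguments may vary between StatCvs
versions, and newer JVMs spell the heap flag -Xmx512m instead of
-mx512m:

    cvs log > cvs.log
    java -mx512m -jar statcvs.jar -verbose cvs.log .

The -verbose switch makes StatCvs report its progress while running
and, after it finishes, how much memory it needed.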