I'm trying to run LXR 0.93 with swish-e on a very large code base, but it runs out of memory while traversing the files.
I was wondering whether there is a known workaround for this kind of problem.
I was thinking of partitioning the code base into smaller subsets while writing the indexing results to the same database. Would that work, or will LXR conclude that the rest of the code base has disappeared and remove it from the database?
I experienced a similar out-of-memory error on an extremely large code base; in my case it was caused by individual files being too large. Eliminating files larger than 30 MB fixed the problem.
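For what it's worth, a quick way to spot the offending files before indexing is a `find` with a size filter. This is just a sketch: `demo_src` is a throwaway fixture created for illustration, and the 30 MB threshold is the one that happened to work for me.

```shell
# Build a throwaway fixture: one 31 MB file and one small source file.
mkdir -p demo_src
dd if=/dev/zero of=demo_src/huge.bin bs=1M count=31 2>/dev/null
printf 'int main(void){return 0;}\n' > demo_src/small.c

# List files larger than 30 MB so they can be reviewed or excluded
# from the tree before running the indexer.
find demo_src -type f -size +30M -print
```

On your real tree, point `find` at the source root instead of `demo_src` and feed the resulting list to whatever exclusion mechanism you use.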