From: SourceForge.net <no...@so...> - 2009-03-26 17:31:32
Bugs item #559121, was opened at 2002-05-22 12:40
Message generated for change (Comment added) made by mbox
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=390117&aid=559121&group_id=27350

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: genxref
Group: v0.9.1
>Status: Closed
>Resolution: Works For Me
Priority: 5
Private: No
Submitted By: Andy Baskett (abaskett)
Assigned to: Nobody/Anonymous (nobody)
Summary: Out of memory errors during genxref

Initial Comment:
I'm not sure where the problem arises - I have spent hours (8+) attempting to debug the code and looked through CVS to see where previous leaks were fixed, but to no avail.

My MySQL "lxr.files" table currently contains 4400 files. My repository contains thousands of font files and other non-indexable files, and hundreds of files generated with old (9+ years) versions of RCS, which result in "co" errors.

Using Devel::Leak, I believe the biggest leak occurs when "co" fails - under those circumstances the FileHandle is not "undef" but actually contains the error message from "co". Perhaps parsing that output causes a problem?

In any case, Devel::Leak shows two leaks per file (even when a file is indexed successfully). I think %files in lib/LXR/Index/Mysql.pl causes one of them, since entries keep being appended to it - would it be possible to clear it with %files = () in the "empty_cache" function?

Other than that I am at a loss for ideas. I am running very old versions of perl and MySQL which I hope to update; they may be part of the cause, but I expect they are not the only reason for the problem.

I am running:
LXR 0.9.1
mysql 3.22.30
ectags 5.0.1
HP-UX 10.20

----------------------------------------------------------------------

>Comment By: Malcolm Box (mbox)
Date: 2009-03-26 17:31

Message:
Running against a very large repository works here, and several memory leaks have been fixed since the 0.9.1 release.

----------------------------------------------------------------------

Comment By: Nobody/Anonymous (nobody)
Date: 2002-07-01 15:31

Message:
Logged In: NO

Similar case here, also indexing a CVS tree. The "Out of memory!" error always occurs at the same point, within just a few minutes, after a few hundred files have been indexed.

OpenBSD 3.0
perl-5.6.1 (official obsd pkg)
LXR 0.9.1
mysql-3.23.37 (official obsd pkg, on remote host running OBSD 2.9)
ectags 5.2.3
swish-e 2.1-dev-25 (didn't get to the point of actually running it)

$ ulimit -a
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         65536
stack(kbytes)        4096
lockedmem(kbytes)    61366
memory(kbytes)       184100
nofiles(descriptors) 64
processes            64

The funny thing is that when run under the perl debugger (perl -d genxref ...) it kept working for many hours and indexed thousands of files. `ulimit -a` returns the same values no matter whether it is run inside the perl debugger or not.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=390117&aid=559121&group_id=27350
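
For the Devel::Leak measurements described in the initial comment, a minimal probe could be wrapped around genxref's per-file work. This is a sketch only: index_file() is a hypothetical stand-in for that work, while NoteSV and CheckSV are Devel::Leak's real SV-counting calls.

    use strict;
    use warnings;
    use Devel::Leak;

    # Run a piece of code and report how many scalar values (SVs) it leaked.
    sub leak_count {
        my ($code) = @_;
        my $handle;
        my $before = Devel::Leak::NoteSV($handle);   # snapshot currently live SVs
        $code->();
        my $after  = Devel::Leak::CheckSV($handle);  # recount; new SVs are dumped to stderr
        return $after - $before;
    }

    # Hypothetical usage around one file's indexing pass:
    # print leak_count(sub { index_file($pathname) }), " SVs leaked\n";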
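
The failing-"co" case could be handled defensively along these lines; $pathname and $revision are placeholders rather than LXR's actual variables. The point is to check $? after closing the pipe, so that co's error output is never parsed as if it were source text.

    use strict;
    use warnings;

    # Check out one revision via RCS co; return its text, or undef on failure.
    sub co_revision {
        my ($pathname, $revision) = @_;
        open(my $fh, '-|', 'co', '-q', "-p$revision", $pathname)
            or return undef;        # fork failed; an exec failure surfaces at close
        local $/;                   # slurp mode
        my $text = <$fh>;
        close($fh);
        return undef if $?;         # co exited non-zero: output is an error message
        return $text;
    }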
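
The empty_cache change proposed in the initial comment might look like the following, assuming %files is a package-level cache hash in LXR::Index::Mysql as the report describes. An empty-list assignment, not undef, is the idiomatic way to release a hash's entries.

    package LXR::Index::Mysql;

    our %files;    # assumed per-run cache of pathname -> file-id lookups

    sub empty_cache {
        %files = ();   # drop every cached entry so it can be garbage-collected
    }

    1;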