[Torch5-devel] Memory leaks
From: Clément F. <cle...@gm...> - 2010-07-12 16:23:15
Hi all,

I also noticed some severe memory leaks (lack of garbage collection) on tensor allocation. Try running:

> for i = 1,1000000 do a = torch.Tensor(500,500) end

(you can stop it before it consumes all your mem...). At the end of this for loop there is only one valid reference to a tensor, but nothing ever gets garbage collected. Even a manual call to the garbage collector doesn't clean it up. Maybe I'm mistaken and that kind of data cannot be handled by the garbage collector? If so, the workaround is to always pre-allocate tensors and only resize/fill them with new data in loops, as you would do in C... but that sort of defeats the purpose of a scripting language, right?

Another (less important) leak: the function qt.QImage.fromTensor(tensor) returns a freshly allocated QImage that doesn't seem to be garbage collectable. That effectively gives Qt displays a limited lifetime :-(, since every tensor has to be converted like that first. I've written another function that fills a QImage passed as an argument, to avoid this leak, but it would be better if the QImage were simply collected at some point.

Clement.
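For reference, the pre-allocation workaround mentioned above could look roughly like this. This is only a sketch: it assumes the torch.Tensor constructor from the snippet in the mail and a :fill() method for overwriting the tensor's contents, which may differ in the actual Torch5 API.

```lua
-- Sketch of the pre-allocation workaround (hypothetical Torch5 API):
-- allocate the tensor once, outside the loop, then reuse its storage
-- instead of constructing a fresh 500x500 tensor on every iteration.
local a = torch.Tensor(500, 500)   -- single allocation up front
for i = 1, 1000000 do
  a:fill(i)                        -- overwrite in place; no new allocation
end
```

The loop then touches only the one pre-allocated buffer, so memory use stays flat regardless of the iteration count, at the cost of the C-style manual buffer management the mail complains about.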