From: Norm P. <nj...@nj...> - 2005-04-21 15:24:55
Hello all, I'm designing an industrial data logging application, and hope to use PyTables as the data storage mechanism. Thanks, Francesc, Ivan, and everybody else for such an attractive product.

I have a question, though, about process memory usage with PyTables: specifically, it appears that the virtual memory size of my Python process increases constantly as I append rows to my tables, and never decreases. Is this a feature or a leak? Does PyTables (and/or the underlying HDF5 library) map the full table(s) into my process address space at all times?

Logging 2500 values per second (new rows to each of 10 tables), my process virtual size is over 100MB in an hour, and I run out of virtual memory before my expected 86400 rows per day are accumulated.

It appears to make some difference whether I open/close the file each scan, or just leave it open (flushing after each row is appended, of course); opening/closing seems to cause virtual size to increase faster.

I am running on Windows 2000, with HDF5 1.6.3 and PyTables 0.9.1, although I see the same behavior under Cygwin with HDF5 1.6.4 and the most recent PyTables snapshot I downloaded and built yesterday.

I'd appreciate anything you can tell me about memory usage (to save me having to dig directly into the sources ;-) ...

Thanks again for your fine work.

Regards,
Norm Petterson
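For readers following along, here is a minimal sketch of the append-and-flush loop the post describes. The column layout and file name are hypothetical (the real application writes to 10 tables at 2500 values/s), and it uses the modern PyTables API names; the 0.9-era equivalents were the camelCase `openFile`/`createTable`.

```python
# Hypothetical sketch of the logging loop described in the post.
# Column layout and file name are invented for illustration.
import tables

class Sample(tables.IsDescription):
    timestamp = tables.Float64Col()  # acquisition time
    value = tables.Float64Col()      # logged reading

fileh = tables.open_file("log.h5", mode="w")
table = fileh.create_table("/", "samples", Sample)

row = table.row
for t in range(100):          # one scan per iteration
    row["timestamp"] = float(t)
    row["value"] = 0.0
    row.append()
    table.flush()             # flush after each append, as in the post

print(table.nrows)
fileh.close()
```

Flushing after every single row, as above, forces frequent small I/O; whether it also affects the HDF5 chunk cache (and hence the process's virtual size) is exactly the question being asked here.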