From: David S. <da...@dr...> - 2004-05-26 18:15:23
Hello,

I am receiving a core dump when using PyTables. I have a simple script that adds entries to a single table; I am currently attempting to add a million rows. Each row consists of a Float32Col and an Int32Col. I am doing this to get some general transaction-rate and file-size estimates. The Python process runs along pretty well at about 30-50 percent CPU utilization and grows to about 518MB in size. At that point I receive the following message:

    Python in malloc(): error: allocation failed

The process continues to run for about 10-15 seconds, and then I see:

    Abort (core dumped)

At that point the process terminates. I have run this script both with and without pymalloc, and it fails the same way either time. If I run it again, I see the same results.

Are there limitations inherent to Python on how large a Python process can become, or is this indicative of a limitation in the PyTables API? Are there settings I can change to prevent or avoid this? I saw the expectedrows parameter when creating a new table. Is this purely an optimization, or does it also affect how many rows can reliably exist in a table? I know PyTables is built to keep as much data in memory as is available and then push data to the hard drive once memory becomes scarce. Can I control this aspect of PyTables?

I am running this on a dual 2.4GHz Xeon server with 1GB of RAM and RAID 5 across four 36GB 160 SCSI drives. I am running FreeBSD 5.2.1 with hyperthreading enabled, so the system acts as though it has four CPUs.

Any help would be appreciated. I am attempting to use PyTables to store log entries collected from multiple sources. This would be a long-running process that interacts with PyTables and would probably not be shut down during regular use. The application will trim the database to keep around 30 days of activity stored in the HDF5 file. Is PyTables a good backend for this type of solution?

Thank You,
David Sumner
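[For reference, a minimal sketch of the kind of append loop described above, written against the camelCase PyTables API of that era; the file name, table name, column names, and loop body are assumptions for illustration, not taken from the original message.]

    import tables

    # One row per log entry: the Float32Col and Int32Col mentioned above.
    # Column names here are hypothetical.
    class LogEntry(tables.IsDescription):
        value = tables.Float32Col()
        count = tables.Int32Col()

    h5file = tables.openFile("log.h5", mode="w")

    # expectedrows is the sizing hint referred to in the question.
    table = h5file.createTable(h5file.root, "entries", LogEntry,
                               "log entries", expectedrows=1000000)

    row = table.row
    for i in xrange(1000000):      # append one million rows
        row['value'] = float(i)
        row['count'] = i
        row.append()

    table.flush()
    h5file.close()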