From: Francesc A. <fa...@py...> - 2009-01-14 14:09:35
Hi Toby,

On Wednesday 14 January 2009, Toby Mathieson wrote:
> Hi there,
>
> I am trying to port a single large table (~150 million records) into
> a single HDF5 file using PyTables. While I am sure this is no problem
> to store in an HDF5 file, I am wondering if there is any way of
> getting this data directly from the MySQL database into the HDF5 file
> without having to scan through all the data using Python's MySQLdb
> module and then appending via PyTables to the HDF5 file? I tried this
> and ran out of memory whilst scanning through MySQLdb's cursor
> object ...

Can you show the code that you are using for doing this?

At any rate, have you tried querying with row restrictions, so that you read the MySQL table in chunks of rows and save each chunk to the PyTables file with Table.append()? I think that would be a sensible way to do it without running into memory issues.

Cheers,

-- 
Francesc Alted
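[A sketch of the chunked-transfer pattern suggested above. To keep the example self-contained and runnable it uses sqlite3 from the standard library as a stand-in for MySQLdb, and a plain callable as a stand-in for Table.append(); with a real MySQL connection and a PyTables table the LIMIT/OFFSET loop is the same. The table name `data`, its columns, and the helper `transfer_in_chunks` are hypothetical, not from the original thread.]

```python
import sqlite3

def transfer_in_chunks(conn, append, chunksize=10000):
    """Read rows in LIMIT/OFFSET chunks and hand each batch to `append`.

    With PyTables, `append` would be `table.append` on an open HDF5 table,
    so only `chunksize` rows are ever held in memory at once.
    """
    offset = 0
    cur = conn.cursor()
    while True:
        cur.execute(
            "SELECT id, value FROM data ORDER BY id LIMIT ? OFFSET ?",
            (chunksize, offset))
        rows = cur.fetchall()
        if not rows:          # no rows left: the whole table is transferred
            break
        append(rows)          # e.g. table.append(rows) with PyTables
        offset += len(rows)

# Demo with a small in-memory database standing in for the MySQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany("INSERT INTO data VALUES (?, ?)",
                 [(i, i * 0.5) for i in range(25)])

out = []                       # stands in for the HDF5 table
transfer_in_chunks(conn, out.extend, chunksize=10)
print(len(out))                # 25: all rows arrived, 10 at a time
```

Note that with very large offsets LIMIT/OFFSET queries can get slow on the database side; restricting on an indexed key range (e.g. `WHERE id > last_seen LIMIT n`) is a common variant of the same idea.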