From: Francesc A. <fa...@py...> - 2004-06-21 08:19:13
On Saturday 19 June 2004 02:49, Andrew Straw wrote:

> I am trying to save realtime video (640x480x100 fps uint8 grayscale
> data) using PyTables for scientific purposes (lossy compression is
> bad). So, first off, is using PyTables for this task a reasonable
> idea? Although I'm no expert, it seems the compression algorithms that
> PyTables offers may be ideal. It also may be nice to use HDF5 to
> incorporate some data.

I've never thought of such an application for PyTables, but for your
case (provided that you can't afford losing information) it may work
just fine.

> Using this code, I get approximately 4 MB/sec with no compression, and
> MB/sec with complevel=1 UCL. This is with an XFS filesystem on linux

Mmm... how much using UCL? Anyway, you may want to try LZO and ZLIB
(with different compression levels) as well, to see whether that
improves the speed.

> So, are there any suggestions for getting this to run faster?

A couple:

1.- Ensure that your bottleneck really is the call to the .append()
method, by commenting it out and doing the timings again.

2.- The EArray.append() method does many checks to ensure that the
object you pass is compatible with the EArray being saved. If you are
going to pass a *NumArray* object that you are sure is compliant with
the underlying EArray shape, you can save quite some time by calling
._append(numarrayObject) instead of .append(numarrayObject).

If suggestion 2 is not enough (although I doubt it), things can be
sped up further by optimizing the number of calls to the underlying
HDF5 library. However, that must be regarded as a commercial service
only (but you can always do it yourself, of course!).

Cheers,

--
Francesc Alted
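[Editor's note: the compression advice and the two suggestions above can be sketched as a small benchmark. This sketch uses today's PyTables/NumPy API (`tables.open_file`, `create_earray`) rather than the 2004-era numarray interface; the `append_rate` helper and its parameters are hypothetical names chosen for illustration, and `_append()` is an internal method whose availability and signature may vary between PyTables versions.]

```python
import os
import tempfile
import time

import numpy as np
import tables  # PyTables


def append_rate(complib=None, complevel=0, fast=False, n_frames=50):
    """Append n_frames 640x480 uint8 frames to an EArray; return MB/s.

    complib/complevel select the compression filter (e.g. "zlib", "lzo");
    fast=True uses the internal _append() to skip append()'s validity
    checks, as in suggestion 2 above.
    """
    filters = (tables.Filters(complevel=complevel, complib=complib)
               if complib else None)
    # One video frame, shaped (1, height, width) so it extends axis 0.
    frame = np.random.randint(0, 256, size=(1, 480, 640), dtype=np.uint8)
    path = os.path.join(tempfile.mkdtemp(), "frames.h5")
    with tables.open_file(path, mode="w") as h5:
        earr = h5.create_earray(h5.root, "frames", tables.UInt8Atom(),
                                shape=(0, 480, 640), filters=filters,
                                expectedrows=n_frames)
        t0 = time.perf_counter()
        for _ in range(n_frames):
            if fast:
                # Internal, unchecked append (suggestion 2); the array
                # must already match the EArray's dtype and shape.
                earr._append(frame)
            else:
                earr.append(frame)
        elapsed = time.perf_counter() - t0
    os.remove(path)
    return n_frames * frame.nbytes / elapsed / 1e6
```

Comparing, say, `append_rate()`, `append_rate("zlib", 1)` and `append_rate("lzo", 1)` on your own disk shows whether compression helps or hurts throughput; suggestion 1 amounts to replacing the append call with `pass` and timing again.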