From: Francesc A. <fa...@ca...> - 2005-12-08 09:35:34
|
Hi Beth,

On Wednesday 07 December 2005 at 16:39 -0700, Beth C Weekley wrote:
> I would like to create a pytables hdf file in either a c
> or c++ program. Does anyone have an example of this or
> know where I could get more information on doing this?

I have no examples. However, you can look at Appendix D (http://pytables.sourceforge.net/html-doc/usersguide11.html) for a list of the datasets currently supported by PyTables, so that you can use the HDF5 C API to create a truly native PyTables file.

Cheers,

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Beth C W. <be...@uc...> - 2005-12-07 23:39:45
|
Hello, I would like to create a pytables hdf file in either a C or C++ program. Does anyone have an example of this or know where I could get more information on doing this? Thanks, Beth Weekley |
From: Francesc A. <fa...@ca...> - 2005-12-07 18:29:58
|
On Wednesday 30 November 2005 11:41, Alain Fagot wrote:
> Unfortunately, due to software architecture I can not apply the first method.
> Users can create rows from the high-level API and then decide to flush after a
> number of rows have been created.
>
> So I applied the patch you proposed; the software runs OK now, but is slowed
> down. It seems that the effect of the patch is that a flush is automatically
> done after each row creation (or something like this).
> Ex. Create 100 000 rows and flush every 10 000:
> - runs in 50 seconds with pytables 1.1.1
> - runs in 1055 seconds with the patch in pytables 1.2

Mmmm, I don't quite understand why this is happening. Could you please send me a small example showing this slowdown in PyTables 1.2+patch? I'd like to investigate this more.

Cheers,

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Francesc A. <fa...@ca...> - 2005-12-07 11:13:18
|
Hi Remi,

Give me some days to digest your contributions and send my feedback back to you. An issue that concerns me a little bit is whether we should include your patches in the next stable version of PyTables or not. The problem here is that dimension scales will be implemented in the forthcoming HDF5 1.8, but not in HDF5 1.6.x, which is the current production version of HDF5. I think the best option would be to create a branch for dimension scales in the PyTables SVN repository, and continue the work there until HDF5 1.8 hits the street. However, there is currently no provision for offering snapshots of SVN branches :-(.

Anyway, this reminds us that we should start offering a public interface to the complete PyTables repository. We will probably choose Trac (http://projects.edgewall.com/trac/) for this, but feedback is welcome. I'll tell you more on this soon. Thanks!

On Tuesday 06 December 2005 17:34, rem...@gm... wrote:
> Hi,
> Please accept my apologies for spamming this list so heavily.
> I've made small improvements to my work:
> - added a new class in hdf5Extension to better distinguish between 'simple'
> Arrays and Dimension Scales
> - updated comments
> - removed the method get_name(), given that this method was merely used to get
> the attribute 'NAME'. I can bring it back if you wish.
> Similarly, I'm not convinced of the usefulness of the method get_label().
> - fixed some small bugs
> - added a new test file for the new attributes and the 'cleaning' functions
>
> I attach to this mail the patch to be applied on PyTables ver 1.2.
> I was too lazy to generate the patches to be applied on the already
> patched sources, but feel free to ask if you need them.
> Cheers
> --
> Remi

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Francesc A. <fa...@ca...> - 2005-12-07 10:59:51
|
Hi Dragan,

Sorry for the delay, but I was at the IX HDF workshop last week and have just come back. Here you have the original patch from Pete. If you manage to extend the original work to *Array objects, please contribute it back.

Cheers,

Francesc

---------- Forwarded message ----------
Subject: Re: [Pytables-users] Indexing Arrays: a la numarray
Date: Saturday 17 September 2005 02:22
From: travlr <vel...@gm...>
To: pyt...@li...

Here is the diff patch for pytables-1.1.1. The utility is the same as what I provided above for 1.0 and 1.1, but I've also included support for using any of the following object types as keys: Numarray array, Numeric array, List, Tuple ...(to be used as described in my prior post). I tried once again (unsuccessfully) to enable pytables.Array[key] functionality (as well as attempting to enable Table/Column/Array[key] setting functionality) for non-sequential index-array [keys]. |
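The "indexing a la numarray" that the patch aims at — accepting a list, tuple, or index array as the key, not just a single integer — can be illustrated with a small stand-alone sketch. The function name below is purely illustrative; it is not the PyTables API:

```python
def fancy_getitem(seq, key):
    # Emulate "indexing a la numarray" on a plain sequence:
    # an integer key returns one element; a sequence of indices
    # (list or tuple) returns the matching elements in order.
    if isinstance(key, int):
        return seq[key]
    return [seq[i] for i in key]

data = [10, 20, 30, 40, 50]
print(fancy_getitem(data, 2))          # -> 30
print(fancy_getitem(data, [0, 3, 4]))  # -> [10, 40, 50]
print(fancy_getitem(data, (4, 0)))     # -> [50, 10]
```

The patch discussed in the thread wires this kind of key handling into `Table.__getitem__` and `Column.__getitem__`; the setting counterpart (`obj[key] = values`) was the part that remained unfinished.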
From: dragan s. <sav...@ya...> - 2005-12-01 14:54:06
|
Hi!

Francesc: you posted a mail a few months ago with this content:

I'd like to include the patch sent by travlr, to be able to use indexing to get and set values in *Array, Table and Column objects (i.e. "indexing a la numarray"); but given that he sent a patch just for getting values in Table and Column objects, and that we are rather busy right now, we've decided not to incorporate this functionality until a future version, when all the objects would receive complete support for both getting and setting values "a la numarray".

Is it possible to get this patch/code? I am very interested in this feature! Thanks!

Regards, Dragan. |
From: Alain F. <ala...@fr...> - 2005-11-30 10:41:10
|
Hi Francesc,

Thanks for your quick answer. Unfortunately, due to software architecture I can not apply the first method. Users can create rows from the high-level API and then decide to flush after a number of rows have been created.

So I applied the patch you proposed; the software runs OK now, but is slowed down. It seems that the effect of the patch is that a flush is automatically done after each row creation (or something like this).
Ex. Create 100 000 rows and flush every 10 000:
- runs in 50 seconds with pytables 1.1.1
- runs in 1055 seconds with the patch in pytables 1.2

For the time being I have switched back to pytables 1.1.1, and will follow the evolution of pytables 1.2.

Thanks for your help, Best Regards

----- Original Message -----
From: "Francesc Altet" <fa...@ca...>
To: <pyt...@li...>
Cc: "Alain Fagot" <ala...@fr...>; "Waldemar Osuch" <wal...@gm...>
Sent: Tuesday, November 29, 2005 8:55 PM
Subject: Re: [Pytables-users] Problem flushing table with pytables 1.2

Hello, Yes, I was aware of this problem shortly after releasing 1.2 :-( Fortunately there is an easy workaround, which is flushing the table immediately after ending the append loop:

for i in range(...):
    ....
    row.append()
table.flush()  # Add this after append loops

Alternatively, you can apply the next patch:

--- pytables-1.2/tables/Table.py 2005-11-07 17:30:41.000000000 +0100
+++ tables/Table.py 2005-11-29 20:47:42.357539540 +0100
@@ -1861,6 +1861,8 @@
     def _g_cleanIOBuf(self):
         """Clean the I/O buffers."""
+        # Flush the buffers before to clean-up them
+        self.flush()
         if 'row' in self.__dict__:
             # Decrement pointers to I/O buffers in Row instance
             self.row._cleanIOBuf()

Cheers,

On Tuesday 29 November 2005 20:00, Alain Fagot wrote:
> Hello,
> I switched from pytables 1.1.1 to pytables 1.2 and have a problem when
> flushing tables. I tried to reproduce the problem I have in my application
> with tutorial1-1.py. (I have attached the modified tutorial1-1.py)
>
> I created two separate functions:
> - create_file: creates an hdf5 file and creates the "detector" group and
>   the "readout" table
> - fill_10: puts data in the "readout" table
> - _unittest: calls the two previous ones and tries to flush the
>   "readout" table and close the hdf5 file
>
> When running I get the following error:
> Traceback (most recent call last):
>   File "tutorial1-1.py", line 87, in ?
>     _unittest()
>   File "tutorial1-1.py", line 82, in _unittest
>     table.flush()  # Close the file
>   File "C:\Logiciels\Devt\Python241\Lib\site-packages\tables\Table.py", line 1880, in flush
>     self._saveBufferedRows()
>   File "C:\Logiciels\Devt\Python241\Lib\site-packages\tables\Table.py", line 1413, in _saveBufferedRows
>     self._open_append(self._v_wbuffer)
>   File "TableExtension.pyx", line 361, in TableExtension.Table._open_append
> AttributeError: 'NoneType' object has no attribute '_data'

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
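The failure mode being discussed — rows sitting in an in-memory I/O buffer until a flush writes them out — can be mimicked with a toy buffered table. This is a sketch only; the real PyTables buffering is considerably more involved:

```python
class ToyBufferedTable:
    """Minimal model of buffered row appends (not the real PyTables code)."""

    def __init__(self, bufsize=1000):
        self.bufsize = bufsize
        self._buffer = []      # rows waiting in memory
        self._stored = []      # stands in for rows written to disk

    def append(self, row):
        self._buffer.append(row)
        if len(self._buffer) >= self.bufsize:
            self.flush()       # automatic flush when the buffer fills up

    def flush(self):
        self._stored.extend(self._buffer)
        self._buffer = []

table = ToyBufferedTable(bufsize=1000)
for i in range(100):
    table.append(i)
table.flush()              # the workaround: one explicit flush after the loop
print(len(table._stored))  # -> 100; without the explicit flush, 0
```

The slowdown Alain reports would correspond to flushing on every `append` instead of once per loop; the correct behavior is one flush after the loop (or automatically when the buffer fills).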
From: Waldemar O. <wal...@gm...> - 2005-11-30 04:05:23
|
Thanks for the patch. It fixed the error. Waldemar |
From: Francesc A. <fa...@ca...> - 2005-11-29 20:12:53
|
Hi Rémi,

I'm away for a workshop on HDF in San Francisco. When I get back to Spain (next week), I'll check your patches in more detail and will give you my feedback shortly after. Maybe Ivan would like to have a look at them meanwhile.

Please remember to attach just your patches, not the entire code. Also, remember that you should not send attachments bigger than 40 KB to this list. If you want to make your patches publicly available, you can submit them to SF: http://sourceforge.net/tracker/?group_id=63486&atid=504146

Thank you very much for your contribution!

Francesc

On Monday 28 November 2005 18:40, rem...@gm... wrote:
> Hi,
> It's been quite a long time since I last sent you news about Dimension
> Scales. Don't worry, I have not stayed idle all this time.
> I attach to this mail an archive of what I've done so far. I've only put in
> this archive (as you may guess) the files from PyTables that I have modified.
> I should specify that the files I've modified are the ones from PyTables
> v1.2 (I started to work with a release candidate, and then I shifted -
> without any problems - to the latest release).
> If you encounter any problem in trying to make Dimension Scales work,
> feel free to ask me.
>
> Here is a link to the specification for Dimension Scales, from the HDF5
> team. It's not fully up to date, but to my mind it's interesting (it can
> help you understand the "philosophy" of HDF5 Dimension Scales).
> -> http://hdf.ncsa.uiuc.edu/RFC/H5DimScales/
>
> Here is a summary of what I've done:
>
> - I've added the functions of the DS API into "hdf5Extension.pyx",
> and then created new methods for the class hdf5Extension.Array which use
> these functions. That was rather easy.
>
> - Then I added new methods to the class Array of tables.Array, which
> call these (inherited) methods and add exception-handling stuff.
> Then I realized that an exception would always be raised when I tried to
> open an hdf5 file which contains a Dimension Scale.
> Actually the hdf5 class name didn't correspond to the '_c_classId' of the
> PyTables Python class (it's badly explained but I hope you'll get the idea).
>
> - So I created a new class on the Python side, with a new '_c_classId':
> 'DIMENSION_SCALE'. I had to put this class in the same file as
> tables.Array (i.e. "Array.py") to avoid a loop in the Python imports:
> DimensionScale should inherit from Array, and so should import it, but
> Array should also import DimensionScale.
> There I use a little trick. I needed some kind of a "cast" to turn an
> existing Array into a DimensionScale. The trick merely consists in
> modifying the attribute __class__. Simple and pretty efficient. It
> doesn't sound very clean, but I haven't found a different way to work
> around my problem.
>
> - Then I started tackling the special attributes for DS, which turned out
> to be the biggest difficulty in my work.
> I first needed to understand the code related to these attributes in the
> hdf5 code, and then add it into hdf5Extension.pyx. I was not sure whether I
> should declare them as system attributes, so in the end I declared them
> as public ones (but I protect them).
>
> Currently these attributes are read from hdf5 whenever one accesses them.
> I'll try to improve this in the coming days.
>
> Two of these attributes are "reference lists". "DIMENSION_LIST" is an
> attribute of an Array; it stores references to the Dimension Scales that
> are attached to it. Conversely, "REFERENCE_LIST" is an attribute of a
> Dimension Scale; it stores references to the Arrays it is attached to.
> These references consist of four-digit numbers (and another number to
> specify the index of the dimension concerned). The problem is that I've
> not managed to fully retrieve the target of a reference from the
> reference itself...
>
> I succeeded in retrieving the number of dimensions, the number of points,
> the type, etc. ...but the name of the target seems to be lost. I've asked
> one of the hdf5 team about this point, and he didn't know how I could get
> it back. Actually from the reference one can retrieve an ID, which is not
> the ID of the target but a new ID pointing to it. There is some info -
> including the name - that can't be obtained from this new ID...
>
> But for example, let's assume "(ref, idx)" is a tuple from the
> "REFERENCE_LIST" of a Dimension Scale "ds" ("ds" is attached to the
> dimension "idx" of the Array whose reference is "ref").
> From "ref" one can retrieve a new ID for this Array (using
> 'H5Rdereference'). Then from this new ID one can detach "ds" from the
> Array (but one can't get the name of this Array!). I've taken advantage
> of this to create two new methods which allow a Dimension Scale to detach
> itself from all the Arrays it was attached to, and to allow an Array to
> detach itself from all the Dimension Scales that were attached to it.
>
> In the end, it's maybe not that useful for a user to see these
> attributes (look at them and you'll understand ;-)). But they were very
> useful for me to test my work, and I spent so much time on them... but if
> you want me to hide them, it's not a problem.
>
> - If you don't understand all this stuff about attributes, play a little
> with DS and you'll understand.
> I've made two small test files (one for the DS and the other focused on
> the attributes of DS).
> I was too lazy to make test files which look like the ones from
> PyTables, but I could improve that.
>
> - Finally, I've thought a little about how I could extend DS to CArrays,
> EArrays, etc. It doesn't sound very difficult; I've only done it for
> CArray - but I haven't tested it so far. Actually I've done the easiest
> part. Indeed I don't know how to deal with the enlargeable dimension.
> I have some ideas, but I would be interested in knowing yours (and this
> mail seems long enough to me :-)).
>
> If you have any question about this mail or DS or whatever, feel free to
> ask me!
> Cheers
> --
> Remi

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
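The `__class__` "cast" trick Rémi describes — turning an existing Array instance into a DimensionScale without copying it — works like this in plain Python. The class names below only mirror the mail; they are a sketch, not the real PyTables classes:

```python
class Array:
    _c_classId = 'ARRAY'

    def __init__(self, name):
        self.name = name

class DimensionScale(Array):
    _c_classId = 'DIMENSION_SCALE'

    def set_label(self, label):
        # A DS-only method that the cast instance gains access to.
        self.label = label

arr = Array('x_axis')
arr.__class__ = DimensionScale  # the "cast": just rebind the class
arr.set_label('time')           # DS methods now work on the old object
print(arr._c_classId)           # -> 'DIMENSION_SCALE'
```

The rebinding is cheap because Python looks up methods and class attributes through the instance's `__class__` pointer at call time; the instance's own data (`arr.name`) is untouched. The usual caveat is that it only behaves sensibly when the new class is layout-compatible with the old one, as a subclass adding only methods is here.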
From: Francesc A. <fa...@ca...> - 2005-11-29 20:02:49
|
Hello,

Yes, I was aware of this problem shortly after releasing 1.2 :-( Fortunately there is an easy workaround, which is flushing the table immediately after ending the append loop:

for i in range(...):
    ....
    row.append()
table.flush()  # Add this after append loops

Alternatively, you can apply the next patch:

--- pytables-1.2/tables/Table.py 2005-11-07 17:30:41.000000000 +0100
+++ tables/Table.py 2005-11-29 20:47:42.357539540 +0100
@@ -1861,6 +1861,8 @@
     def _g_cleanIOBuf(self):
         """Clean the I/O buffers."""
+        # Flush the buffers before to clean-up them
+        self.flush()
         if 'row' in self.__dict__:
             # Decrement pointers to I/O buffers in Row instance
             self.row._cleanIOBuf()

Cheers,

On Tuesday 29 November 2005 20:00, Alain Fagot wrote:
> Hello,
> I switched from pytables 1.1.1 to pytables 1.2 and have a problem when
> flushing tables. I tried to reproduce the problem I have in my application
> with tutorial1-1.py. (I have attached the modified tutorial1-1.py)
>
> I created two separate functions:
> - create_file: creates an hdf5 file and creates the "detector" group and
>   the "readout" table
> - fill_10: puts data in the "readout" table
> - _unittest: calls the two previous ones and tries to flush the
>   "readout" table and close the hdf5 file
>
> When running I get the following error:
> Traceback (most recent call last):
>   File "tutorial1-1.py", line 87, in ?
>     _unittest()
>   File "tutorial1-1.py", line 82, in _unittest
>     table.flush()  # Close the file
>   File "C:\Logiciels\Devt\Python241\Lib\site-packages\tables\Table.py", line 1880, in flush
>     self._saveBufferedRows()
>   File "C:\Logiciels\Devt\Python241\Lib\site-packages\tables\Table.py", line 1413, in _saveBufferedRows
>     self._open_append(self._v_wbuffer)
>   File "TableExtension.pyx", line 361, in TableExtension.Table._open_append
> AttributeError: 'NoneType' object has no attribute '_data'

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Waldemar O. <wal...@gm...> - 2005-11-29 19:30:14
|
I have a very similar problem. After the upgrade my program started failing during the flush() operation with an almost identical traceback to Alain's. The test suite passes OK:

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
PyTables version: 1.2
HDF5 version: 1.6.5
numarray version: 1.4.1
Zlib version: 1.2.3
Python version: 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)]
Byte-ordering: little
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Performing the complete test suite!
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Numeric (version 24.2) is present. Adding the Numeric test suite.
Scientific.IO.NetCDF not found. Skipping HDF5 <--> NetCDF conversion tests.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

But the attached program fails with:

C:\WINDOWS\system32\cmd.exe /c test_tables.py
Traceback (most recent call last):
  File "E:\work\BatchJobs\regii\test_tables.py", line 98, in ?
    main()
  File "E:\work\BatchJobs\regii\test_tables.py", line 94, in main
    walkDB(fileh, grp, 'db')
  File "E:\work\BatchJobs\regii\test_tables.py", line 86, in walkDB
    fileh.flush()
  File "C:\python\Python24\Lib\site-packages\tables\File.py", line 1843, in flush
    leaf.flush()
  File "C:\python\Python24\Lib\site-packages\tables\Table.py", line 1880, in flush
    self._saveBufferedRows()
  File "C:\python\Python24\Lib\site-packages\tables\Table.py", line 1413, in _saveBufferedRows
    self._open_append(self._v_wbuffer)
  File "TableExtension.pyx", line 361, in TableExtension.Table._open_append
AttributeError: 'NoneType' object has no attribute '_data'

shell returned 1 |
From: Alain F. <ala...@fr...> - 2005-11-29 19:01:18
|
"""Small but quite comprehensive example showing the use of PyTables.=0A= =0A= The program creates an output file, 'tutorial1.h5'. You can view it=0A= with any HDF5 generic utility.=0A= =0A= """=0A= =0A= =0A= import sys=0A= from numarray import *=0A= from tables import *=0A= =0A= # Define a user record to characterize some kind of particles=0A= class Particle(IsDescription):=0A= name =3D StringCol(16) # 16-character String=0A= idnumber =3D Int64Col() # Signed 64-bit integer=0A= ADCcount =3D UInt16Col() # Unsigned short integer=0A= TDCcount =3D UInt8Col() # unsigned byte=0A= grid_i =3D Int32Col() # integer=0A= grid_j =3D IntCol() # integer (equivalent to Int32Col)=0A= pressure =3D Float32Col() # float (single-precision)=0A= energy =3D FloatCol() # double (double-precision)=0A= =0A= def create_file():=0A= =0A= print=0A= print '-**-**-**-**-**-**- file creation -**-**-**-**-**-**-**-'=0A= =0A= # The name of our HDF5 filename=0A= filename =3D "tutorial1.h5"=0A= =0A= print "Creating file:", filename=0A= =0A= # Open a file in "w"rite mode=0A= h5file =3D openFile(filename, mode =3D "w", title =3D "Test file")=0A= =0A= print=0A= print '-**-**-**-**-**- group and table creation = -**-**-**-**-**-**-**-'=0A= =0A= # Create a new group under "/" (root)=0A= group =3D h5file.createGroup("/", 'detector', 'Detector information')=0A= print "Group '/detector' created"=0A= =0A= # Create one table on it=0A= table =3D h5file.createTable(group, 'readout', Particle, "Readout = example")=0A= print "Table '/detector/readout' created"=0A= =0A= return h5file=0A= =0A= def fill_10(h5file):=0A= for group in h5file.walkGroups("/detector"):=0A= entity_g =3D group=0A= =0A= table =3D entity_g.readout=0A= # Get a shortcut to the record object in table=0A= particle =3D table.row=0A= =0A= # Fill the table with 10 particles=0A= for i in xrange(10):=0A= particle['name'] =3D 'Particle: %6d' % (i)=0A= particle['TDCcount'] =3D i % 256=0A= particle['ADCcount'] =3D (i * 256) % (1 << 16)=0A= 
particle['grid_i'] =3D i=0A= particle['grid_j'] =3D 10 - i=0A= particle['pressure'] =3D float(i*i)=0A= particle['energy'] =3D float(particle['pressure'] ** 4)=0A= particle['idnumber'] =3D i * (2 ** 34)=0A= # Insert a new particle record=0A= particle.append()=0A= =0A= =0A= def _unittest():=0A= h5file =3D create_file()=0A= =0A= fill_10(h5file)=0A= =0A= # Flush the buffers for table=0A= for group in h5file.walkGroups("/detector"):=0A= entity_g =3D group=0A= =0A= table =3D entity_g.readout=0A= table.flush()# Close the file=0A= h5file.close()=0A= print "File '"+filename+"' created"=0A= =0A= if __name__ =3D=3D '__main__':=0A= _unittest()=0A= =0A= |
From: Francesc A. <fa...@ca...> - 2005-11-22 20:00:54
|
Hi List,

Below is the official announcement for the new PyTables 1.2. Thanks to everybody who has contributed in one way or another to make this release a reality.

The release has been checked very thoroughly (in fact, so thoroughly that the mantra "release early/release often" is starting to suffer a bit :-/). However, I'm sure that some of you will soon come up with some nasty bug that has not been caught. Anyway, your feedback is still encouraged and, what's more, appreciated ;-)

Enjoy!

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-"

=========================
Announcing PyTables 1.2
=========================

The PyTables development team is happy to announce the availability of a new major version of the PyTables package.

This version sports a completely new in-memory tree implementation built around a *node cache system*. This system loads nodes only when needed and unloads them when they are rarely used. The new feature allows the opening and creation of HDF5 files with large hierarchies very quickly and with low memory consumption (the object tree is no longer completely loaded in memory), while retaining all the powerful browsing capabilities of the previous implementation of the object tree. You can read more about the bells and whistles of the new cache system at: http://www.carabos.com/downloads/pytables/NewObjectTreeCache.pdf

Also, Jeff Whitaker has kindly contributed a new module called tables.NetCDF. It is designed to be used as a drop-in replacement for Scientific.IO.NetCDF, with only minor modifications to existing code. Also, if you have the Scientific.IO.NetCDF module installed, it allows conversions between the HDF5 <--> NetCDF3 formats.

Go to the PyTables web site to download the beast: http://pytables.sourceforge.net/

If you want more info about this release, please check out the more comprehensive announcement message available at: http://www.carabos.com/downloads/pytables/ANNOUNCE-1.2.html

Acknowledgments
===============

Thanks to the users who provided feature improvements, patches, bug reports, support and suggestions. See the THANKS file in the distribution package for an (incomplete) list of contributors. Many thanks also to SourceForge, who have helped to make and distribute this package! And last but not least, a big thank you to THG (http://www.hdfgroup.org/) for sponsoring many of the new features recently introduced in PyTables.

---

**Enjoy data!** -- The PyTables Team |
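The node cache idea announced above — load nodes lazily on first access, unload the ones rarely used — is essentially an LRU cache keyed by node path. A minimal sketch of that idea (purely illustrative; the real PyTables implementation differs):

```python
from collections import OrderedDict

class ToyNodeCache:
    """LRU-style cache sketch: nodes load on first access, and the least
    recently used node is evicted when the cache is full (illustration
    only, not the PyTables internals)."""

    def __init__(self, maxnodes=2):
        self.maxnodes = maxnodes
        self._cache = OrderedDict()   # path -> node, oldest first

    def get_node(self, path, loader):
        if path in self._cache:
            self._cache.move_to_end(path)    # mark as recently used
            return self._cache[path]
        if len(self._cache) >= self.maxnodes:
            self._cache.popitem(last=False)  # unload least recently used
        self._cache[path] = loader(path)     # lazy load on first access
        return self._cache[path]

loads = []
def load(path):
    loads.append(path)     # record each real (expensive) load
    return path.upper()    # stand-in for a loaded node object

cache = ToyNodeCache(maxnodes=2)
cache.get_node('/a', load)
cache.get_node('/b', load)
cache.get_node('/a', load)  # hit: no new load
cache.get_node('/c', load)  # cache full: evicts '/b', loads '/c'
print(loads)                # -> ['/a', '/b', '/c']
```

This is why large hierarchies open quickly with low memory use: only the handful of nodes actually touched are ever in memory at once.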
From: Francesc A. <fa...@ca...> - 2005-11-14 11:05:59
|
On Monday 14 November 2005 at 11:13 +0100, Rémi Sassolas wrote:
> I've found how to fix my problem. In fact it was rather simple (I must
> admit that I'm a little ashamed...). When I looked more closely at what
> the setup.py does, I realized that during the linking process,
> only one library from HDF5 was included: "libhdf5".
> But the functions in the Dimension Scale API don't belong to this
> library, but to "libhdf5_hl". So I've modified the setup.py to add
> this missing library. And now I don't get any error message from
> Python.

Don't be ashamed. You have now discovered how to add more library dependencies to PyTables ;-)

> For the moment I've only modified the code for the Linux part, but it
> should be easy to extend it to the other OSes.

Yeah, I think so.

Cheers,

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: <rem...@gm...> - 2005-11-14 10:13:12
|
2005/11/14, Francesc Altet <fa...@ca...>:
> Oh, I see. I think this can be due to the fact that you forgot to point
> your LD_LIBRARY_PATH to your HDF5 1.7.x installation directory (most
> probably /the_path_to_your_hdf5-1.7.x/hdf5/lib).
>
> Hope that helps,

Hi,

I've found how to fix my problem. In fact it was rather simple (I must admit that I'm a little ashamed...). When I looked more closely at what the setup.py does, I realized that during the linking process, only one library from HDF5 was included: "libhdf5". But the functions in the Dimension Scale API don't belong to this library, but to "libhdf5_hl". So I've modified the setup.py to add this missing library. And now I don't get any error message from Python.

For the moment I've only modified the code for the Linux part, but it should be easy to extend it to the other OSes.

Now I can start to design the Python class for the Dimension Scale which can use these methods. I attach to this mail the modified setup.py.

--
Rémi |
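The fix Rémi describes amounts to adding one entry to the `libraries` list of the extension definition in setup.py. A hedged sketch of that change (the extension name and source path are placeholders, not the actual PyTables setup.py):

```python
from setuptools import Extension

# The H5DS* (Dimension Scale) functions live in the high-level library
# libhdf5_hl, so the extension must link against it in addition to libhdf5.
# Source path and extension name below are placeholders for illustration.
hdf5_extension = Extension(
    'tables.hdf5Extension',
    sources=['src/hdf5Extension.c'],
    libraries=['hdf5', 'hdf5_hl'],  # 'hdf5_hl' is the added library
)
print(hdf5_extension.libraries)     # -> ['hdf5', 'hdf5_hl']
```

Each name in `libraries` becomes an `-lhdf5` / `-lhdf5_hl` flag at link time, which is why the undefined `H5DSset_scale` symbol disappears once `hdf5_hl` is listed.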
From: Francesc A. <fa...@ca...> - 2005-11-14 09:38:58
|
On Monday 14 November 2005 at 10:10 +0100, Rémi Sassolas wrote:
> I may not have been clear. Nothing goes - at least seems to go - wrong
> when I try to compile the modified extension (using the "setup.py"
> just as you do).
> But when I try to import PyTables (or even directly the
> "hdf5Extension.so"), Python complains about an undefined symbol
> ("H5DSset_scale", which is the name of the first function I've tried
> to add from the Dimension Scale API).
> After having compiled the modified extension, have you tried to import
> PyTables, or the "hdf5Extension.so"?

Oh, I see. I think this can be due to the fact that you forgot to point your LD_LIBRARY_PATH to your HDF5 1.7.x installation directory (most probably /the_path_to_your_hdf5-1.7.x/hdf5/lib).

Hope that helps,

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: <rem...@gm...> - 2005-11-14 09:10:41
|
Hi,

2005/11/10, Francesc Altet <fa...@ca...>:
> Rémi,
>
> I've checked your version of the extension and it compiles fine for me.
> Are you directing the distutils to use the HDF5 1.7.x headers? I
> normally use something like:
>
> python setup.py build_ext --inplace --hdf5=/the_path_to_your_hdf5-1.7.x/hdf5
>
> and that's all.

I may not have been clear. Nothing goes - at least seems to go - wrong when I try to compile the modified extension (using the "setup.py" just as you do). But when I try to import PyTables (or even directly the "hdf5Extension.so"), Python complains about an undefined symbol ("H5DSset_scale", which is the name of the first function I've tried to add from the Dimension Scale API). After having compiled the modified extension, have you tried to import PyTables, or the "hdf5Extension.so"?

> BTW, try to send attachments below 40 KB to the list, as it is a
> netiquette rule.

Oops, I'm sorry about that, I didn't pay enough attention. |
From: Francesc A. <fa...@ca...> - 2005-11-10 17:35:48
|
Rémi,

On Thursday 10 November 2005 17:37, Rémi Sassolas wrote:
> The generation of the C file from the pyrex file works fine, and no
> errors are reported during the following compilation process (just
> some warnings which are not related to my additions).
> As for the linking process, no errors are reported either.
> But when I try to import the modified tables module, python complains
> about an undefined symbol: H5DSset_scale, which as you may have
> guessed is the name of the first function from the Dimension Scale C
> API I would like to add to PyTables.

I've checked your version of the extension and it compiles fine for me. Are you directing the distutils to use the HDF5 1.7.x headers? I normally use something like:

python setup.py build_ext --inplace --hdf5=/the_path_to_your_hdf5-1.7.x/hdf5

and that's all.

BTW, try to send attachments below 40 KB to the list, as it is a netiquette rule.

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Francesc A. <fa...@ca...> - 2005-11-09 10:58:58
|
On Saturday 05 November 2005 12:39, Francesc Altet wrote:
> On Friday 04 November 2005 at 13:14 -0700, Jeff Whitaker wrote:
> > I would expect that the default value for 'pressure' would be set to
> > -999.9. However, ptdump -v shows
> > ....
> > Why does it say dflt=None?
>
> Just because the default values are not saved permanently on-disk. The
> user defaults are only used at creation time of the table. If you close
> it and re-open for append, then the default values are lost (although
> sensible values are set as the new defaults). This is a flaw in the
> design that should be addressed in the future.

Well, after some work, I've added support for persistent defaults in PyTables :-) It will be out in the forthcoming PyTables 1.2 release. Meanwhile, you can test it by downloading the 1.2rc2 version available at: http://www.carabos.com/downloads/pytables/preliminary/pytables-1.2-rc2.tar.gz

Cheers,

-- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Francesc A. <fa...@ca...> - 2005-11-09 10:52:10
|
Oops, I forgot to say that a good starting point for modifying the
sources is to fetch the 1.2rc2 release at:

http://www.carabos.com/downloads/pytables/preliminary/pytables-1.2-rc2.tar.gz

which will hopefully become the definitive 1.2 version, bar some
arrangements to the array protocol for doing numarray <--> Numeric
conversions.

Cheers,

-- 
>0,0< Francesc Altet     http://www.carabos.com/
V V   Cárabos Coop. V.   Enjoy Data
 "-"
|
From: Francesc A. <fa...@ca...> - 2005-11-09 10:48:29
|
Hi Rémi,

On Wednesday 09 November 2005 at 10:59, you wrote:
> Hi,
> I'm currently planning to add Dimension Scales to PyTables. Actually
> Cyril Giraudon (do you remember him?) told me about this idea, which
> I think is a great one. Given that I've more time to spend working on
> it than Cyril, I've decided to make it happen.

Excellent news :-)

> I first plan to limit myself to one-dimensional Array objects, then
> I'll extend it. I've had a look at the PyTables sources, and to my
> mind adding these Dimension Scales shouldn't be too big a problem.

Sounds good.

> So my question is the following: how do you build "hdf5Extension.so"?
> Do you use a makefile, or is this build performed by the "setup.py"?

It's very easy. Just issue a:

python setup.py build_ext --inplace

and distutils will take charge of computing dependencies and recompiling
everything that is needed. Bear in mind that you need Pyrex installed
before doing this. The --inplace flag means that the extensions will be
put in the tables/ directory, so that you can run your tests without
needing to install the complete beast (remember to add your working
tables/ directory to your PYTHONPATH environment variable first).

Good luck!

-- 
>0,0< Francesc Altet     http://www.carabos.com/
V V   Cárabos Coop. V.   Enjoy Data
 "-"
|
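The build-and-test cycle described above can be sketched as a short shell session. The checkout path used here is hypothetical; adjust it to wherever your PyTables working copy lives.

```shell
# Hypothetical location of your PyTables checkout; adjust as needed.
PYTABLES_SRC="$HOME/src/pytables"

# Inside the checkout, build the extensions in place (Pyrex required):
#   cd "$PYTABLES_SRC" && python setup.py build_ext --inplace

# Make the in-tree tables/ package importable without installing it:
export PYTHONPATH="$PYTABLES_SRC:$PYTHONPATH"
echo "$PYTHONPATH" | cut -d: -f1
```

The final echo simply confirms that the working copy is first on the import path, so `import tables` picks up the freshly built extensions.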
From: Francesc A. <fa...@ca...> - 2005-11-05 11:22:28
|
On Friday, 4 November 2005 at 13:14 -0700, Jeff Whitaker wrote:
> I would expect that the default value for 'pressure' would be set to
> -999.9. However, ptdump -v shows
> ....
> Why does it say dflt=None?

Just because the default values are not saved permanently on-disk. The
user defaults are only used at creation time of the table. If you close
it and re-open for append, then the default values are lost (although
sensible values are set as the new defaults). This is a flaw in the
design that should be addressed in the future.

Cheers,

-- 
>0,0< Francesc Altet     http://www.carabos.com/
V V   Cárabos Coop. V.   Enjoy Data
 "-"
|
From: Jeff W. <Jef...@no...> - 2005-11-04 20:14:07
|
Consider this script:

from tables import *

class Particle(IsDescription):
    name        = StringCol(16, pos=1)            # 16-character string
    lati        = IntCol(pos=2)                   # integer
    longi       = IntCol(pos=3)                   # integer
    pressure    = Float32Col(dflt=-999.9, pos=4)  # float (single-precision)
    temperature = FloatCol(pos=5)                 # double (double-precision)

# Open a file in "w"rite mode
fileh = openFile("table1.h5", mode="w")
# Create a new group
group = fileh.createGroup(fileh.root, "newgroup")
# Create a new table in the newgroup group
table = fileh.createTable(group, 'table', Particle, "A table", Filters(1))
table.flush()
fileh.close()

I would expect that the default value for 'pressure' would be set to
-999.9. However, ptdump -v shows:

/ (Group) ''
  children := ['newgroup' (Group)]
/newgroup (Group) ''
  children := ['table' (Table)]
/newgroup/table (Table(0L,), shuffle, zlib(1)) 'A table'
  description := {
    "name": Col(dtype='CharType', shape=(16,), dflt=None, pos=0, indexed=0),
    "lati": Col(dtype='Int32', shape=1, dflt=None, pos=1, indexed=0),
    "longi": Col(dtype='Int32', shape=1, dflt=None, pos=2, indexed=0),
    "pressure": Col(dtype='Float32', shape=1, dflt=None, pos=3, indexed=0),
    "temperature": Col(dtype='Float64', shape=1, dflt=None, pos=4, indexed=0) }
  byteorder := big

Why does it say dflt=None?

-Jeff

-- 
Jeffrey S. Whitaker          Phone  : (303) 497-6313
Meteorologist                FAX    : (303) 497-6449
NOAA/OAR/CDC R/CDC1          Email  : Jef...@no...
325 Broadway                 Office : Skaggs Research Cntr 1D-124
Boulder, CO, USA 80303-3328  Web    : http://tinyurl.com/5telg
|
From: Francesc A. <fa...@ca...> - 2005-11-04 12:29:34
|
Hi Jeff,

On Friday 04 November 2005 at 04:15, Jeff Whitaker wrote:
> Is it possible to create tables whose entries are themselves tables?
> (I read section 3.6 of the tutorial, but couldn't quite figure out if
> it does what I want.) Specifically, I would like to have a table like
> this:
>
> Observation = {
>     "id": StringCol(length=16),
>     "lat": Float32Col(),
>     "lon": Float32Col(),
>     "value": Float32Col(),
>     "type": StringCol(length=4),
>     "qcflags": <another table whose structure depends on the value of "type">
> }

Nope. You cannot do this sort of thing. The nested capabilities of
PyTables mean that you are able to create nested records in the *same*
table, not references to other tables. You can nevertheless add the name
of the other table as a string column, like this:

Observation = {
    "id": StringCol(length=16),
    "lat": Float32Col(),
    "lon": Float32Col(),
    "value": Float32Col(),
    "type": StringCol(length=4),
    "qcflags": StringCol(length=256)  # put here the name of the other table
}

and then retrieve the actual table by issuing a:

for row in table:
    ...
    othertable = fileh.getNode(row['qcflags'])

Cheers,

-- 
>0,0< Francesc Altet     http://www.carabos.com/
V V   Cárabos Coop. V.   Enjoy Data
 "-"
|
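The store-the-path-then-dereference pattern suggested above can be sketched in plain Python, with a dict standing in for the open HDF5 file. Everything here is illustrative: get_node() merely mimics what fileh.getNode() would do, and the node paths and station names are made up.

```python
# A dict stands in for the HDF5 file: node path -> node contents.
file_nodes = {
    "/qc/temp_flags": [("range_check", True), ("spike_check", False)],
    "/qc/wind_flags": [("range_check", True)],
}

def get_node(fileh, path):
    """Dereference a stored node path, like fileh.getNode(path)."""
    return fileh[path]

# Each observation row stores only the *path* of its QC table in the
# "qcflags" string column; the per-"type" structure lives elsewhere.
rows = [
    {"id": "stn-001", "type": "temp", "qcflags": "/qc/temp_flags"},
    {"id": "stn-002", "type": "wind", "qcflags": "/qc/wind_flags"},
]

for row in rows:
    qc = get_node(file_nodes, row["qcflags"])
    print(row["id"], len(qc))
```

The indirection keeps each per-type QC table free to have its own column layout, which a single nested description could not express.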
From: Jeff W. <js...@fa...> - 2005-11-04 03:21:21
|
Hi all:

Is it possible to create tables whose entries are themselves tables? (I
read section 3.6 of the tutorial, but couldn't quite figure out if it
does what I want.) Specifically, I would like to have a table like this:

Observation = {
    "id": StringCol(length=16),
    "lat": Float32Col(),
    "lon": Float32Col(),
    "value": Float32Col(),
    "type": StringCol(length=4),
    "qcflags": <another table whose structure depends on the value of "type">
}

-Jeff

-- 
Jeffrey S. Whitaker          Phone  : (303) 497-6313
Meteorologist                FAX    : (303) 497-6449
NOAA/OAR/CDC R/CDC1          Email  : Jef...@no...
325 Broadway                 Office : Skaggs Research Cntr 1D-124
Boulder, CO, USA 80303-3328  Web    : http://tinyurl.com/5telg
|