Archived messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2002 | | | | | | | | | | | 5 | |
| 2003 | | 2 | | 5 | 11 | 7 | 18 | 5 | 15 | 4 | 1 | 4 |
| 2004 | 5 | 2 | 5 | 8 | 8 | 10 | 4 | 4 | 20 | 11 | 31 | 41 |
| 2005 | 79 | 22 | 14 | 17 | 35 | 24 | 26 | 9 | 57 | 64 | 25 | 37 |
| 2006 | 76 | 24 | 79 | 44 | 33 | 12 | 15 | 40 | 17 | 21 | 46 | 23 |
| 2007 | 18 | 25 | 41 | 66 | 18 | 29 | 40 | 32 | 34 | 17 | 46 | 17 |
| 2008 | 17 | 42 | 23 | 11 | 65 | 28 | 28 | 16 | 24 | 33 | 16 | 5 |
| 2009 | 19 | 25 | 11 | 32 | 62 | 28 | 61 | 20 | 61 | 11 | 14 | 53 |
| 2010 | 17 | 31 | 39 | 43 | 49 | 47 | 35 | 58 | 55 | 91 | 77 | 63 |
| 2011 | 50 | 30 | 67 | 31 | 17 | 83 | 17 | 33 | 35 | 19 | 29 | 26 |
| 2012 | 53 | 22 | 118 | 45 | 28 | 71 | 87 | 55 | 30 | 73 | 41 | 28 |
| 2013 | 19 | 30 | 14 | 63 | 20 | 59 | 40 | 33 | 1 | | | |
From: Norbert N. <Nor...@gm...> - 2005-11-03 16:02:48
|
Great work! I would definitely like to see this library in PyTables, especially in the future, when it is compatible with NetCDF-4.

Jeff Whitaker wrote:
> Francesc et al:
>
> I've written a module that emulates the Scientific.IO.NetCDF API, but
> saves the data in a pytables hdf5 file. Enclosed is a tarball that contains:
>
> 1) NetCDF.py is the Scientific.IO.NetCDF emulator for pytables. It should
> be installed in <sys.prefix>/lib/pythonX.Y/site-packages/tables. To use,
> 'import tables.NetCDF'.
>
> 2) test.py is a simple test script.
>
> 3) h5.py is a plugin for the python OPeNDAP module, available from
> http://opendap.oceanografia.org/. Just drop it in the dap/plugins
> directory before running setup.py. This allows files created with the
> tables.NetCDF API to be served over http with OPeNDAP, as long as the
> file extension is h5, H5, hdf5 or HDF5. Note: h5.py is pretty much a
> direct copy of the netcdf plugin (netcdf.py) with only 2 lines changed.
>
> 4) nctoh5.py is a new version of the nctoh5 utility that uses the nctoh5
> method of a tables.NetCDF.NetCDFFile class instance.
>
> Ultimately, I'd like this module to save the data in an hdf5 file that
> is compatible with netCDF 4
> (http://www.unidata.ucar.edu/software/netcdf/netcdf-4/) so that netCDF 4
> clients can read it. In the meantime, I think some users who are used
> to the Scientific.IO.NetCDF interface might find it useful, especially
> those who are interested in sharing their data over http with OPeNDAP.
>
> -Jeff
|
From: Francesc A. <fa...@ca...> - 2005-11-03 07:24:35
|
On Wednesday, 2 November 2005, at 16:20 -0700, Gerry Wiener wrote:
> When I run the attached example, I get the output below. What's going
> wrong in test2()? How can I output the numarray record array containing
> character data in test2()?

I think you misplaced the pos parameter in the description declaration. Try:

class hdf_vars2(tables.IsDescription):
    """Class for establishing record field types"""
    datum1 = tables.Float64Col(pos=0)
    site = tables.StringCol(2, pos=1)

and this should work.

Regards,

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
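The mismatch behind the original error is field ordering: two record layouts with the same fields in a different order are not interchangeable, which is why the pos= arguments matter. A small illustration with numpy structured dtypes (numpy stands in here for the numarray of the era):

```python
import numpy as np

# Same fields, different field order: these describe incompatible
# record layouts, so a record array built with one order cannot be
# appended to a table described with the other.
a = np.dtype([('datum1', 'f8'), ('site', 'S2')])
b = np.dtype([('site', 'S2'), ('datum1', 'f8')])
assert a != b                      # order is part of the layout
assert a.itemsize == b.itemsize    # even though the record sizes agree
```

Pinning each column with pos= removes any ambiguity about which order the table description uses.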
From: Gerry W. <ge...@uc...> - 2005-11-02 23:20:15
|
When I run the attached example, I get the output below. What's going wrong in test2()? How can I output the numarray record array containing character data in test2()?

Thanks, Gerry Wiener

Pytables version: 1.1

test1 record array: RecArray[(20.0, 1), (3.5, 2)]
<class 'numarray.numarraycore.NumArray'>
test1 succeeded

test2 record array: RecArray[(20.0, 'ab'), (3.5, 'cd')]
<class 'numarray.strings.CharArray'>
Traceback (most recent call last):
  File "recarray.py", line 56, in ?
    main()
  File "recarray.py", line 50, in main
    test2()
  File "recarray.py", line 42, in test2
    tab.append(r)
  File "/d1/local/lib/python2.4/site-packages/tables/Table.py", line 1252, in append
    raise ValueError, \
ValueError: rows parameter cannot be converted into a recarray object compliant with table '/data_records (Table(0,)) 'my hdf''. The error was: <buffer formats don't match those provided by the format specification>
|
From: Francesc A. <fa...@py...> - 2005-11-02 11:48:26
|
Hi Jeff,

First of all, thank you very much for your module. It seems very interesting and I'm looking forward to adding it to the forthcoming release of PyTables. However, there are some issues that should be addressed first.

1.- I've run test.py against PyTables 1.1.1 and 1.2 and I'm getting this error (using both numarray 1.3.3 and Numeric 24.0 (b2)):

[snipped...]
bar[0:4:2,0:3] = [[0 0 0] [2 2 2]]
Scientific.IO.NetCDF must be installed to convert to NetCDF
Scientific.IO.NetCDF must be installed to convert from NetCDF
Traceback (most recent call last):
  File "test_netcdf.py", line 96, in ?
    print file2
  File "/home/faltet/PyTables/pytables/trunk/test/NetCDF.py", line 240, in __repr__
    len_unlim = int(self.sync())
  File "/home/faltet/PyTables/pytables/trunk/test/NetCDF.py", line 221, in sync
    len_max = max(len_unlim_dims)
ValueError: min() or max() arg is an empty sequence

I don't have the Scientific.IO.NetCDF module installed. Is this normal?

2.- When using both numarray 1.4.1 and Numeric 24.0 (no beta), I'm getting the next error:

[snipped...]
[ 0. 1. 0.] [ 0. 1. 0.]]' cannot be converted into an array object compliant with Array: '/foo (Array(20, 3)) 'foo' type = Float32 stype = 'Float32' shape = (20, 3) itemsize = 4 nrows = 20 flavor = 'Numeric' byteorder = 'little'' The error was: <array sizes must be consistent.>

With these versions of Numeric and numarray all the PyTables tests pass. The error is the same using both PyTables 1.1.1 and 1.2. Could you look into this?

3.- It seems that tables.NetCDF only supports compression for extendable arrays (EArrays). Are you aware that CArrays also support compression, without allowing extendability?

Now, in order to integrate the module better in PyTables, it is strongly suggested to provide a test unit with a few tests, and also a document with a proper description and, if possible, examples, which could be integrated into the User's Guide as an appendix (or even as a chapter). Would you like to provide those? If you don't want to learn tbook, it doesn't matter: you can provide plain text (reStructuredText, if you want) and I'll format it into tbook.

Finally, I'm curious about your 'quantize' method. The HDF5 crew will be introducing a new filter called 'scale-offset' in the forthcoming 1.8.x series. From the release notes of HDF5 1.7.51:

Scale-Offset compression performs a scale and/or offset operation on each data value and truncates the resulting value to a minimum number of bits and then stores the data. The scale-offset filter supports floating-point and integer datatypes.

Do you think this is going to be compatible with quantize? If so, I'll try to give support for it in PyTables (when HDF5 1.8 is out, of course).

Regards,

On Tuesday, 1 November 2005, at 22:58, Jeff Whitaker wrote:
> [announcement snipped; see Jeff's original message below]

-- Francesc Altet
|
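For reference, the kind of lossy truncation being discussed can be sketched as follows. This is an assumed reconstruction in the spirit of a 'quantize' step and of the scale-offset idea, not the module's actual code: keep only enough binary precision for a requested number of decimal digits, so the trailing mantissa bits become zeros and compress well.

```python
import math

def quantize(value, least_significant_digit):
    # Round so that 'value' stays accurate to the given decimal place;
    # the discarded mantissa bits then compress very well.
    # Hypothetical sketch, not the tables.NetCDF implementation.
    bits = math.ceil(math.log2(10.0 ** least_significant_digit))
    scale = 2.0 ** bits
    return round(value * scale) / scale

print(quantize(3.14159, 2))  # -> 3.140625 (accurate to ~0.01)
```

The guaranteed error bound is 0.5 / scale, which is at most 0.5 * 10**-least_significant_digit.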
From: Jeff W. <Jef...@no...> - 2005-11-01 21:58:16
|
Francesc et al:

I've written a module that emulates the Scientific.IO.NetCDF API, but saves the data in a pytables hdf5 file. Enclosed is a tarball that contains:

1) NetCDF.py is the Scientific.IO.NetCDF emulator for pytables. It should be installed in <sys.prefix>/lib/pythonX.Y/site-packages/tables. To use, 'import tables.NetCDF'.

2) test.py is a simple test script.

3) h5.py is a plugin for the python OPeNDAP module, available from http://opendap.oceanografia.org/. Just drop it in the dap/plugins directory before running setup.py. This allows files created with the tables.NetCDF API to be served over http with OPeNDAP, as long as the file extension is h5, H5, hdf5 or HDF5. Note: h5.py is pretty much a direct copy of the netcdf plugin (netcdf.py) with only 2 lines changed.

4) nctoh5.py is a new version of the nctoh5 utility that uses the nctoh5 method of a tables.NetCDF.NetCDFFile class instance.

Ultimately, I'd like this module to save the data in an hdf5 file that is compatible with netCDF 4 (http://www.unidata.ucar.edu/software/netcdf/netcdf-4/) so that netCDF 4 clients can read it. In the meantime, I think some users who are used to the Scientific.IO.NetCDF interface might find it useful, especially those who are interested in sharing their data over http with OPeNDAP.

-Jeff

-- Jeffrey S. Whitaker, Meteorologist, NOAA/OAR/CDC R/CDC1, 325 Broadway, Boulder, CO, USA 80303-3328. Phone: (303)497-6313, FAX: (303)497-6449, Email: Jef...@no..., Office: Skaggs Research Cntr 1D-124, Web: http://tinyurl.com/5telg
|
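For readers unfamiliar with the API being emulated, the Scientific.IO.NetCDF calling convention looks roughly like this. The class below is a throwaway in-memory stand-in used only to show the call shapes (createDimension/createVariable); it is not the real tables.NetCDF or Scientific.IO.NetCDF implementation:

```python
class NetCDFFile:
    """In-memory stand-in illustrating the Scientific.IO.NetCDF API shape."""

    def __init__(self, filename, mode='r'):
        self.filename, self.mode = filename, mode
        self.dimensions = {}   # name -> size (None means unlimited)
        self.variables = {}    # name -> (typecode, dimension names)

    def createDimension(self, name, size):
        self.dimensions[name] = size

    def createVariable(self, name, typecode, dims):
        self.variables[name] = (typecode, dims)
        return self.variables[name]

    def close(self):
        pass

f = NetCDFFile('test.h5', 'w')
f.createDimension('time', None)            # the unlimited dimension
f.createVariable('temp', 'd', ('time',))   # a double variable over time
f.close()
```

A drop-in emulator keeps exactly these method names and semantics while storing the data in HDF5 underneath.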
From: Francesc A. <fa...@ca...> - 2005-10-28 18:45:17
|
Hi List,

We are in the last stages of checking PyTables 1.2. We recently added a new Row.update() method, very handy for modifying tables, and also fixed many memory leaks that showed up when dealing with files with a large number of tables (typically > 1000 tables per file). If you want to check PyTables 1.2 Release Candidate 1, please fetch it from:

http://www.carabos.com/downloads/pytables/preliminary/pytables-1.2-rc1.tar.gz

and tell us your feedback. Have a nice weekend!

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
From: Francesc A. <fa...@ca...> - 2005-10-28 10:36:40
|
Hi Naveen,

None that I'm aware of. Anyway, if you want to have a look at this, I'm open to including anything that would be necessary to support static libraries in the next version of PyTables.

Cheers,

On Thursday, 27 October 2005, at 00:57, nmi...@jh... wrote:
> Hi,
>
> I was wondering if there was any resolution to the issue of compiling hdf5
> statically on OS X (and maybe even other platforms). Thanks.
>
> Naveen Michaud-Agrawal
>
> On Wed, 5 Oct 2005 nmi...@jh... wrote:
> > I'm trying to install PyTables onto Mac OS X 10.4, and it compiles and
> > installs fine, but when I run the tests or any examples I always get
> > "HDF5-DIAG: Error detected in HDF5 library version: 1.6.4 thread 0.
> > Back trace follows." errors. The data seems to be saved properly into the
> > hdf5 file, I'm just not sure what the errors mean.

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
From: <nmi...@jh...> - 2005-10-26 22:57:32
|
Hi,

I was wondering if there was any resolution to the issue of compiling hdf5 statically on OS X (and maybe even other platforms). Thanks.

Naveen Michaud-Agrawal

On Wed, 5 Oct 2005 nmi...@jh... wrote:
> I'm trying to install PyTables onto Mac OS X 10.4, and it compiles and
> installs fine, but when I run the tests or any examples I always get
> "HDF5-DIAG: Error detected in HDF5 library version: 1.6.4 thread 0.
> Back trace follows." errors. The data seems to be saved properly into the
> hdf5 file, I'm just not sure what the errors mean.
>
> Naveen
>
> ---------------------------------------------------------------------
> Naveen Michaud-Agrawal
> Program in Molecular Biophysics
> Johns Hopkins University
> (410) 614 4435
>
> - the plural of 'anecdote' is not 'data'
|
From: Francesc A. <fa...@ca...> - 2005-10-25 19:09:13
|
Hi Stefan,

On Monday, 24 October 2005, at 08:59 -0700, Stefan Kuzminski wrote:
> If I run this simple loop, I get the output I expect..
>
> table = fp.root.table_0
> for row in table:
>     print row
>
> but when I call row.append(), the output changes and looks corrupted..
>
> table = fp.root.table_0
> for row in table:
>     print row
>     row.append()

Yes, the ability to modify tables in the middle of iterators was not implemented yet. I had caught situations where the user tries to do something like:

for row in table:
    row['somefield'] = 2

by issuing the next exception:

  File "test-append.py", line 42, in read_junk
    row['lati'] = 2
  File "TableExtension.pyx", line 1123, in TableExtension.Row.__setitem__
NotImplementedError: You cannot set Row fields when in middle of a table iterator. Use Table.modifyRows() or Table.modifyColumns() instead.

but forgot about append(). Now, I've caught this as well and a similar exception will be raised:

for row in table:
    row.append()

  File "test-append.py", line 43, in read_junk
    row.append()
  File "TableExtension.pyx", line 1038, in TableExtension.Row.append
NotImplementedError: You cannot append rows when in middle of a table iterator.

Anyway, if what you want is to modify columns on an existing table, you can use something like:

for nrow in xrange(tt.nrows):
    row = tt[nrow:nrow+1]
    row.field('lati')[0] = 2
    row.field('pressure')[0] = nrow
    tt.modifyRows(nrow, rows=row)

I'm working on implementing a variant of this:

for row in tt:
    row['lati'] = 2
    row['pressure'] = row.nrow()
    row.modify()

which looks prettier to my eyes. In addition, it's much faster (up to 20x faster than the former version). A preliminary version of this latter implementation is already in the PyTables repository and most probably will be included in PyTables 1.2 (provided that I have time to add the test units and document it). If this is what you want, you can have a try at the nightly tarball that will be automatically made tonight around 0:05 UTC at:

http://www.carabos.com/downloads/pytables/snapshots/

Cheers,

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
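The general rule behind both exceptions is the usual one for any container: don't grow or rewrite it while an iterator is walking over it. A plain-Python analogue of the safe pattern, for illustration only:

```python
# Iterate over a snapshot so that appending to the original container
# cannot disturb the traversal -- the list-level analogue of collecting
# changes and applying them through a separate modify/append call.
items = [0, 1, 2]
seen = []
for x in list(items):       # snapshot copy
    seen.append(x)
    items.append(x + 10)    # safe: the loop never sees these
assert seen == [0, 1, 2]
assert items == [0, 1, 2, 10, 11, 12]
```

Appending to `items` directly inside `for x in items:` would instead make the loop chase its own tail, which is the kind of corruption the Table iterator guards against.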
From: Stefan K. <pon...@ya...> - 2005-10-24 15:59:28
|
Hi,

If I run this simple loop, I get the output I expect..

table = fp.root.table_0
for row in table:
    print row

but when I call row.append(), the output changes and looks corrupted..

table = fp.root.table_0
for row in table:
    print row
    row.append()

Perhaps I am misunderstanding how row.append works. I want to iterate over the data a row at a time, set some values in row and have those values written back. Here is the output when it's ok (not calling append) and then when I do..

python tmp.py
(0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0)
(1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0)
(2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0)
(3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0)
(4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0)
(5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0)
(6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0)
(7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0)
(8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0)
(9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, 16.0, 17.0, 18.0)

python tmp.py
(0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0)
(0.0, 0.0, 5.1949684891682224e-76, 3.2962254796880672e-260, 0.0, 1.282718201874162e+45, -5.4060993840027115e-39, 0.0, 0.0, 0.0)
(0.0, 0.0, 0.0, 0.0, 2.6593181061630576e-293, -2.5144912217698657e-40, 0.0, 0.0, 0.0, 0.0)
(0.0, 0.0, 0.0, -1.2835448071228927e-247, 1.149435495237775e-260, 0.0, 0.0, 0.0, 1.4939062567974614e-309, 1.1862805280863329e-306)
(nan, 6.5191367783158677e+91, 1.6975966327722179e-313, 0.0, 1.8849466386307898e-260, 0.0, 2.1219957904712067e-314, 1.9097962134497552e-313, 1.8849900568353108e-260, 4.9406564584124654e-324)
(2.1219957904712067e-314, 1.9097962150307652e-313, 1.8850955010462896e-260, 0.0, 2.1219957904712067e-314, 1.9097962166117753e-313, 1.8851389192508103e-260, 0.0, 2.1219957904712067e-314, 1.9097962181927854e-313)
(1.8852071478579149e-260, 4.2439915829186759e-314, 4.9406564584124654e-324, 1.9097962197737954e-313, 1.8853249972701853e-260, 0.0, 0.0, 0.0, 0.0, 0.0)
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
(0.0, 0.0, 0.0, 0.0, 0.0, 2.1219957909652723e-314, 7.644680467577401e-270, 0.0, 4.9406564584124654e-324, 0.0)

thanks!

Stefan
|
From: travlr <vel...@gm...> - 2005-10-20 12:16:36
|
On 10/20/05, Francesc Altet <fa...@ca...> wrote:
> On Thursday, 20 October 2005, at 04:17, travlr wrote:
> > Hi All,
> >
> > I just wanted to mention that whichever way necessity dictates the
> > syntax and semantics of pyTables, great care should be taken to keep
> > pyTables as congruent as possible with numarray/scipy in those respects.
> >
> > I also hope that the pyTables roadmap would be to be able to seamlessly
> > integrate into scipy itself... That would be terrific.
>
> I agree. If you have some particular suggestions in this regard, apart
> from indexing "a la numarray" and "supporting scipy_core", I'd love to
> see them.
>
> Thanks,
>
> -- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"

Hi Francesc,

Could it be possible that, along the road of maturity for both pyTables and scipy, hdf5/pytables db functionality could be under scipy's hood? Some sort of optional behavior. What I mean is maybe array types could somehow be creatively integrated in both hdf's physical as well as scipy's virtual memory, through pointers/assignments/flags or some such manner, etc. Just thinking off the cuff here :-). I really don't know what I'm talking about technically; I'm only just now teaching myself ANSI C. In the near future, I should be able to be more helpful, in the open-source way. ;-)
|
From: Francesc A. <fa...@ca...> - 2005-10-20 10:32:05
|
On Thursday, 20 October 2005, at 08:53, Ashley Walsh wrote:
> Changing line 1774 of Tables.py from:
>
> if not self._v_wbuffer:
>
> to:
>
> if self._v_wbuffer is None:
>
> prevents a RuntimeError with numarray-1.3.3 and array.__nonzero__.

Ok. Fixed. Will be out in the next PyTables 1.2. Thanks!

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
From: Francesc A. <fa...@ca...> - 2005-10-20 10:25:04
|
On Thursday, 20 October 2005, at 04:17, travlr wrote:
> Hi All,
>
> I just wanted to mention that whichever way necessity dictates the
> syntax and semantics of pyTables, great care should be taken to keep
> pyTables as congruent as possible with numarray/scipy in those respects.
>
> I also hope that the pyTables roadmap would be to be able to seamlessly
> integrate into scipy itself... That would be terrific.

I agree. If you have some particular suggestions in this regard, apart from indexing "a la numarray" and "supporting scipy_core", I'd love to see them.

Thanks,

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
From: Nicola L. <ni...@te...> - 2005-10-20 09:32:42
|
>> Could you send me your dsc and diff files? I'm going to build an 1.1
>> package for Breezy right now.
> It looks like simply doing a "fakeroot dpkg-buildpackage" works. Right
> now, I am using a beta version.

I did just that. It built the four packages: three of them did install, but "python-tables" did not, because it expected v2.3 of Python as the default, while it's v2.4 on Breezy. So I changed the control and changelog files, using an 1.1.1-1ubuntu0 version, rebuilt the packages, and all of them installed fine. :-)

-- Nicola Larosa - ni...@te... When people with lots of weapons and training in violence feel cornered, it tends to not be a pretty picture. -- Kirby Urner, August 2005
|
From: Ashley W. <ash...@gm...> - 2005-10-20 06:54:00
|
Changing line 1774 of Tables.py from:

if not self._v_wbuffer:

to:

if self._v_wbuffer is None:

prevents a RuntimeError with numarray-1.3.3 and array.__nonzero__.

Cheers, Ashley
|
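The reason the identity test is needed: numerical arrays don't have a single well-defined boolean value, so `not buffer` can raise instead of testing for "unset". The same behavior is easy to reproduce with numpy (standing in here for the numarray of the era):

```python
import numpy as np

buf = np.zeros(3)
try:
    truthy = not buf        # __bool__ on a multi-element array
except ValueError:
    truthy = 'ambiguous'    # numpy refuses to guess: any()? all()?
assert truthy == 'ambiguous'

# Testing identity against None sidesteps the question entirely:
assert (buf is None) is False
```

This is why `if self._v_wbuffer is None:` is the robust way to check whether the write buffer has been initialized.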
From: Cournapeau D. <cou...@at...> - 2005-10-20 04:54:57
|
> Could you send me your dsc and diff files? I'm going to build an 1.1
> package for Breezy right now.

It looks like simply doing a "fakeroot dpkg-buildpackage" works. Right now, I am using a beta version. If you don't manage to build the package, drop me the problem, and I will build a package from the latest release.

David
|
From: travlr <vel...@gm...> - 2005-10-20 02:17:40
|
Hi All,

I just wanted to mention that whichever way necessity dictates the syntax and semantics of pyTables, great care should be taken to keep pyTables as congruent as possible with numarray/scipy in those respects.

I also hope that the pyTables roadmap would be to be able to seamlessly integrate into scipy itself... That would be terrific.
|
From: Francesc A. <fa...@ca...> - 2005-10-19 08:44:11
|
Ooops, a typo slipped in yesterday:

On Tuesday, 18 October 2005, at 13:57, Francesc Altet wrote:
> For example, the next works:
>
> vlarray = fp.createVLArray(fp.root,
>                            'test',
>                            ObjectAtom())  # or also UInt8Atom()
> vlarray.append(cPickle.dumps(['aaa' * 130], 2))

Actually, if you use an ObjectAtom(), you should do:

vlarray = fp.createVLArray(fp.root, 'test', ObjectAtom())
vlarray.append(['aaa' * 130])

And, if you rather prefer doing the serialization yourself:

vlarray = fp.createVLArray(fp.root, 'test', UInt8Atom())
vlarray.append(cPickle.dumps(['aaa' * 130], 2))

Cheers,

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
From: Francesc A. <fa...@ca...> - 2005-10-19 08:39:01
|
On Wednesday, 19 October 2005, at 07:53, Cournapeau David wrote:
> The HDF5 version in ubuntu breezy is 1.6.4, no problem here.

Ok, so Ubuntu can certainly be ahead of Debian testing. Interesting.

> However, as I noticed this morning, this does not work anymore with the
> last beta (which I use to investigate support of "raw" .h5 files with
> pytables). The documentation is using tbook now, which has no debian
> package, and looks like a nightmare because of its java dependency, and
> non-standard (and buggy) Makefile (I don't even want to talk about the
> fact that it is using some tex tools which do not exist in debian
> tetex...). I wasted 2 hours this morning trying to solve the issue, and
> I gave up.
>
> I simply removed the doc directory from the build dir for now.
>
> Francesc, it looks like you are using debian. How do you manage to build
> a deb package of tbook, or just even make it work?

PyTables has been using tbook from version 0.1, so I think this is not the problem. In fact, you don't need tbook installed at all in order to generate the docs, because on every public release of PyTables the HTML and PDF versions of the User's Manual come with the source tarball. Look at the debian/ directory in the PyTables distribution in order to see how the doc package is generated.

Cheers,

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
From: Nicola L. <ni...@te...> - 2005-10-19 07:12:56
|
>> Anyway, PyTables 1.1.1 depends on HDF5 1.6.4 and I'm not sure whether
>> this is in Ubuntu or not, but I strongly suspect that it is not (it
>> should be 1.6.2 instead). HDF5 1.6.4 is even having problems to enter
>> the testing distribution in Debian (they are waiting for a recent lam
>> package there).
> The HDF5 version in ubuntu breezy is 1.6.4, no problem here.
>
> Building an ubuntu package using the standard debian tools
> (dpkg-buildpackage) worked for me, but I had to do some dirty stuff, I
> cannot remember what exactly (some numarray version problem).
>
> I cannot check it, but it looks like right now, using the pytables
> sources, doing a fakeroot dpkg-buildpackage in the pytables root
> directory works at least up to pytables-1.2b6 (I have not tested the
> package thoroughly, but it builds on my powerpc machine and two PCs).
>
> I can send my deb packages to anyone interested (modulo the usual "don't
> complain if this destroys your house" stuff :) ).

Could you send me your dsc and diff files? I'm going to build an 1.1 package for Breezy right now.

> However, as I noticed this morning, this does not work anymore with the
> last beta (which I use to investigate support of "raw" .h5 files with
> pytables). The documentation is using tbook now, which has no debian
> package, and looks like a nightmare because of its java dependency, and
> non-standard (and buggy) Makefile (I don't even want to talk about the
> fact that it is using some tex tools which do not exist in debian
> tetex...). I wasted 2 hours this morning trying to solve the issue, and
> I gave up.

This is bad news.

> I simply removed the doc directory from the build dir for now.
>
> Francesc, it looks like you are using debian. How do you manage to build
> a deb package of tbook, or just even make it work?

-- Nicola Larosa - ni...@te... In the end, guys, you're just as driven by emotions as women. Trust us... testosterone SO does not enhance your powers of reason. -- Kathy Sierra, July 2005
|
From: Cournapeau D. <cou...@at...> - 2005-10-19 05:51:09
|
> Oh, really? This was completely unexpected for me. Perhaps this is due
> to the fact that I'm the maintainer for Debian, and because Ubuntu is so
> close to Debian, my name remains in the package as the original maintainer.
>
> Anyway, PyTables 1.1.1 depends on HDF5 1.6.4 and I'm not sure whether
> this is in Ubuntu or not, but I strongly suspect that it is not (it
> should be 1.6.2 instead). HDF5 1.6.4 is even having problems to enter
> the testing distribution in Debian (they are waiting for a recent lam
> package there).

The HDF5 version in ubuntu breezy is 1.6.4, no problem here.

Building an ubuntu package using the standard debian tools (dpkg-buildpackage) worked for me, but I had to do some dirty stuff, I cannot remember what exactly (some numarray version problem).

I cannot check it, but it looks like right now, using the pytables sources, doing a fakeroot dpkg-buildpackage in the pytables root directory works at least up to pytables-1.2b6 (I have not tested the package thoroughly, but it builds on my powerpc machine and two PCs).

I can send my deb packages to anyone interested (modulo the usual "don't complain if this destroys your house" stuff :) ).

However, as I noticed this morning, this does not work anymore with the last beta (which I use to investigate support of "raw" .h5 files with pytables). The documentation is using tbook now, which has no debian package, and looks like a nightmare because of its java dependency, and non-standard (and buggy) Makefile (I don't even want to talk about the fact that it is using some tex tools which do not exist in debian tetex...). I wasted 2 hours this morning trying to solve the issue, and I gave up.

I simply removed the doc directory from the build dir for now.

Francesc, it looks like you are using debian. How do you manage to build a deb package of tbook, or just even make it work?

David
|
From: Ivan V. i B. <iv...@ca...> - 2005-10-18 14:55:04
|
Ivan Vilata i Balaguer wrote:
> Please also have a look at the syntax used in the HDF5 RFC at
> http://hdf.ncsa.uiuc.edu/RFC/linkEncodings/Character_Encoding.pdf:
>
> dataset_id = H5Dcreate(dataspace, datatype, DCPL, DAPL, DXPL)
> H5Lcreate("dataset name", dataset_id, LCPL)
>
> Dataset creation and link (name/directory entry) creation are separated
> in the new interface, so maybe PyTables 2 would not have problems in
> *actually storing data* in unbound nodes.

The previous example would be translated to PyTables 2 like this:

>>> array = Array([1,2,3])
>>> group.myarray_ = array          # interactive interface, or...
>>> group.link(array, 'myarray')    # programmatic if. (methods), or...
>>> group['myarray'] = array        # programmatic if. (dictionary)

(``link()`` could also be named ``addChild()`` or similar.) But the HDF5 hierarchy is not properly a tree, but a graph. Then, we could follow with:

>>> group.samearray_ = array

with no problems on the HDF5 side. That reminds us of a truth in HDF5 and Un*x filesystems: files in themselves *do not have a name, nor a path*. Paths and names are only a concatenation of links in groups. Then, moving and renaming operations should be provided by groups, not nodes, i.e. no ``node.rename('newname')``, but ``group.rename('oldchildname', 'newchildname')``. This opens a new range of chances and problems, but I think we should not avoid them in order not to lag behind HDF5 functionality.

Ivan Vilata i Balaguer, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
From: Ivan V. i B. <iv...@ca...> - 2005-10-18 14:28:56
|
Norbert Nemec wrote:
> Ivan Vilata i Balaguer wrote:
>> The motto you quote was in fact what prevented me from accepting one
>> more interface. But if we were to make groups more dictionary-like,
>> then the ``getChild()``-like interface would be the one out of place! ;)
>>
>> So, do you see advantages in the dictionary-like interface?
>
> Well, why not? It could be made clear that __get/setitem__ and
> __get/setattr__ are basically the same thing, accessing the children of
> any node, except that item will use the unquoted original name from
> HDF5, while attr will quote the name in the style that you propose.
>
> For HDF5 attributes, the behavior should be the same, except that it
> does not work on the node itself, but on node.attrs.
>
> In general, use of operators should be handled with care, but in this
> case, since accessing and setting the children of a node is the one
> major purpose of any group, it makes sense to give it this special syntax.
>
> Of course, get/setChild should then go away, or at least its use should
> be discouraged.

Francesc and I have been giving this issue a thought, and some problems have immediately come to mind. In the first place, as you say, some operators would not make sense or would be ambiguous, e.g. does ``del group[childgroup]`` act recursively or not? Does ``for n in group`` yield names of nodes or nodes themselves? Does it recurse? What does ``group.pop()`` mean? How do you rename a child node? Some of the previous ambiguity goes away by using more descriptive methods, but then we end up implementing half a dictionary interface with lots of extra methods to complete it. Maybe there's no point in making groups dictionary-like, then. But maybe it's OK to implement only a part of the dictionary interface. In the second place, leaves will also have their own, very differently behaving item interface, so having an item interface for both leaves and groups may get confusing.

Now, I know we seem to oscillate from one side to the other with this question; maybe we need the ultimate advice! ;)

Ivan Vilata i Balaguer, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
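One way to make the trade-off concrete is to sketch the partial mapping interface under discussion. The class and method names below are hypothetical (not the actual PyTables API), and each ambiguity raised above is resolved by an explicit choice: iteration yields child names (like dict), deletion is a non-recursive unlink, and renaming lives on the group rather than the node:

```python
class Group:
    """Sketch of a partially dictionary-like group (illustrative only)."""

    def __init__(self):
        self._children = {}

    def __setitem__(self, name, node):   # link a node under this group
        self._children[name] = node

    def __getitem__(self, name):
        return self._children[name]

    def __delitem__(self, name):         # chosen: non-recursive unlink
        del self._children[name]

    def __iter__(self):                  # chosen: yield names, like dict
        return iter(self._children)

    def rename(self, old, new):          # renaming is a *group* operation
        self._children[new] = self._children.pop(old)

g = Group()
g['myarray'] = [1, 2, 3]
g['samearray'] = g['myarray']            # two links to the same node
g.rename('myarray', 'array1')
assert sorted(g) == ['array1', 'samearray']
assert g['array1'] is g['samearray']     # nodes have links, not names
```

Note how the two-links example mirrors the graph (not tree) structure of the HDF5 hierarchy discussed in the thread.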
From: Francesc A. <fa...@ca...> - 2005-10-18 11:57:54
|
Hi Stefan,

On Monday, 17 October 2005, at 21:07, Stefan Kuzminski wrote:
> I am trying to put a pickle string into a VLArray using VLStringAtom.
> If the string is larger than a certain amount, I get an error like this
> (the full error is below)
>
> ValueError: Problems when converting the object '[snip]' to the
> encoding 'utf-8'. The error was: 'ascii' codec can't decode byte 0x86
> in position 4: ordinal not in range(128)

Yes, this is due to the fact that Pickle in binary mode does not guarantee that the resulting string will be 'ascii' compliant, i.e. that all its elements are chars with ordinal < 128. I suggest you use 'ObjectAtom' to serialize objects (it uses cPickle internally). Also, if you want to keep doing the serialization yourself, then the recommended atom to keep the resulting data is UInt8Atom.

For example, the next works:

vlarray = fp.createVLArray(fp.root,
                           'test',
                           ObjectAtom())  # or also UInt8Atom()
vlarray.append(cPickle.dumps(['aaa' * 130], 2))

Cheers,

-- Francesc Altet, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|
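The failure mode is easy to demonstrate with the standard library alone: a binary-protocol pickle starts with the PROTO opcode byte 0x80, so it can never be decoded as ASCII. (Python 3 `pickle` shown here; the original thread used Python 2's `cPickle`, but the byte layout of protocol 2 is the same.)

```python
import pickle

data = ['aaa' * 130]
buf = pickle.dumps(data, 2)          # binary protocol 2, as in the thread

# The very first byte (the PROTO opcode) is already outside ASCII:
assert buf[0] == 0x80
try:
    buf.decode('ascii')
    decodable = True
except UnicodeDecodeError:
    decodable = False
assert not decodable                 # hence the "'ascii' codec" error

assert pickle.loads(buf) == data     # the bytes themselves are fine
```

This is why a text-oriented atom (VLStringAtom) is the wrong container for pickles, while a raw-byte atom (UInt8Atom) or ObjectAtom works.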
From: Ivan V. i B. <iv...@ca...> - 2005-10-18 11:17:57
|
Francesc Altet wrote:
> On Saturday, 15 October 2005, at 15:25, Norbert Nemec wrote:
>> Even if unbound nodes are still a far way off, there is nothing wrong
>> about following the design idea now. I think the idea of unbound nodes
>> is something very clear to understand for the user, even if - for the
>> time being - these nodes are seriously limited until they are actually
>> written to disk.
>>
>> Of course, checking whether a node is bound costs a tiny bit of
>> performance, but that certainly can be minimized.
>
> One can always use an "assert" instruction, and if maximum speed is
> needed, pass the -O option to python.
>
> What do other people think? Would implementing this improve readability
> of the code?

My opinion is starting to change, and I think it would. The following three threads may be interesting, because the same problems are touched in them (but keep in mind we are now talking of an entirely different release of PyTables):

* https://sourceforge.net/mailarchive/forum.php?thread_id=6361641&forum_id=13760
* https://sourceforge.net/mailarchive/forum.php?thread_id=6391459&forum_id=13760
* https://sourceforge.net/mailarchive/forum.php?thread_id=6412440&forum_id=13760

Please also have a look at the syntax used in the HDF5 RFC at http://hdf.ncsa.uiuc.edu/RFC/linkEncodings/Character_Encoding.pdf:

dataset_id = H5Dcreate(dataspace, datatype, DCPL, DAPL, DXPL)
H5Lcreate("dataset name", dataset_id, LCPL)

Dataset creation and link (name/directory entry) creation are separated in the new interface, so maybe PyTables 2 would not have problems in *actually storing data* in unbound nodes.

Ivan Vilata i Balaguer, http://www.carabos.com/, Cárabos Coop. V. "Enjoy Data"
|