From: <fa...@im...> - 2005-05-12 16:21:01
|
Salut Philippe, Are you sure you are using tables-1.0.win32-py2.x.exe instead of tables-1.0-LB.win32-py2.x.exe? The first one was compiled and linked without LZO or BZIP2 support, while the second has that support. Cheers, Francesc Quoting Philippe Collet <phi...@ho...>: > Hi list, > Thanks for the great job on pytables. > I upgraded tables0.9.1 to tables1.0. > Now when I execute my pytables program I have an error with the lzo1.dll missing. > I tried to uninstall tables1.0 and to reinstall tables0.9.1 and my pytables program works fine. > Is tables1.0 using this lzo1.dll and why? > I'm running tables on Python 2.3.5 on Windows XP. > cheers, > Philippe Collet > _________________________________________________________________ > MSN Hotmail : antivirus et antispam intégrés > http://www.msn.fr/newhotmail/Default.asp?Ath=f |
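For anyone hitting the same missing-DLL problem, the `tables.whichLibVersion()` helper that appears later in this thread can tell you whether a given build was compiled with a particular compression library. The snippet below is only an illustrative sketch of that check (the exact return value for a missing library, `None` as far as I recall, should be verified against the manual for your version):

```python
import tables

# whichLibVersion() reports the version of a library the extension was
# linked against; a build made without LZO/BZIP2 support simply lacks them.
for lib in ("hdf5", "zlib", "lzo", "bzip2"):
    print("%s: %s" % (lib, tables.whichLibVersion(lib)))
```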
From: Philippe C. <phi...@ho...> - 2005-05-12 16:07:30
|
Hi list, Thanks for the great job on pytables. I upgraded tables0.9.1 to tables1.0. Now when I execute my pytables program I have an error with the lzo1.dll missing. I tried to uninstall tables1.0 and to reinstall tables0.9.1 and my pytables program works fine. Is tables1.0 using this lzo1.dll and why? I'm running tables on Python 2.3.5 on Windows XP. cheers, Philippe Collet _________________________________________________________________ MSN Hotmail : antivirus et antispam intégrés http://www.msn.fr/newhotmail/Default.asp?Ath=f |
From: Waldemar O. <wal...@gm...> - 2005-05-12 14:55:48
|
On 5/9/05, Francesc Altet <fa...@ca...> wrote: > Hi List, > > Following is the announcement for PyTables 1.0. It would be very nice > if anybody would like to test the package on her system before a more > announcement would be made. > > Your feedback is very welcome! > [snip] It passes tests on WinXP with Python 2.4 Waldemar |
From: Valentino A. <val...@co...> - 2005-05-12 12:32:24
|
> -----Messaggio originale----- > Da: Francesc Altet [mailto:fa...@ca...] > Inviato: giovedì 12 maggio 2005 11.05 > A: Valentino Antonio > Cc: pyt...@li... > Oggetto: Re: R: [Pytables-users] Re: [ pytables-Patches-1192700 ] > Multi-dimensional HDF5 attributes > > > A Dijous 12 Maig 2005 09:16, vàreu escriure: > > No. The contrary. In my opinion, python integers and float should be > > pickled. > > Numarray objects, including scalars, should be saved as numeric HDF5 > > attributes. > > In this way numeric HDF5 attributes will be *always* loaded > into pytables > > as numarray objects, and numarray objects will be *always* > saved on file as > > numeric > > HDF5 attributes (H5T_NATIVE_INT, H5T_NATIVE_FLOAT and so on). > > Python objects, including int, float, complex, decimal.Decimal, list, > > tuple, dict, etc, should be saved as cpickle strings in order > to not have > > ambiguities with > > numeric attributes. > > In this way you can write code like this > > > > leaf.attrs.uchar = na.array(1, type=na.UInt8) # --> H5T_NATIVE_UCHAR > > integet = 1 > > leaf.attrs.int = integer # --> cpickle string > > leaf.attrs.int32 = na.asarray(integer) # --> H5T_NATIVE_INT > > > > and then retrieve them back unambiguously. > > Stated that extra code is needed in order to preserve the compatibility > > with old pytables version, I don't know if this approach can > araise other > > problems. > > I hope I have not been too much confusing. > > No, I understand clearly now. :))) > Well, after discussing this issue with > Ivan, we think that your suggestion is very consistent. And, as we can > distinguish between files made with PyTables 1.0 and earlier from > those made with PyTables 1.1 and after, I think there will be no > problem in implementing that. I'm happy to hear this. > May you provide the necessary patch based on the latest snapshot? If > you can't, I can do it my self. Of course I can. I will make it available this weekend (I hope). > Thanks, > > -- > >0,0< Francesc Altet http://www.carabos.com/ > V V Cárabos Coop. V. Enjoy Data > "-" > ciao -- Antonio Valentino |
From: Francesc A. <fa...@ca...> - 2005-05-12 11:39:08
|
A Dijous 12 Maig 2005 09:16, vàreu escriure: > No. The contrary. In my opinion, python integers and float should be > pickled. > Numarray objects, including scalars, should be saved as numeric HDF5 > attributes. > In this way numeric HDF5 attributes will be *always* loaded into pytables > as numarray objects, and numarray objects will be *always* saved on file as > numeric > HDF5 attributes (H5T_NATIVE_INT, H5T_NATIVE_FLOAT and so on). > Python objects, including int, float, complex, decimal.Decimal, list, > tuple, dict, etc, should be saved as cpickle strings in order to not have > ambiguities with > numeric attributes. > In this way you can write code like this > > leaf.attrs.uchar = na.array(1, type=na.UInt8) # --> H5T_NATIVE_UCHAR > integet = 1 > leaf.attrs.int = integer # --> cpickle string > leaf.attrs.int32 = na.asarray(integer) # --> H5T_NATIVE_INT > > and then retrieve them back unambiguously. > Stated that extra code is needed in order to preserve the compatibility > with old pytables version, I don't know if this approach can araise other > problems. > I hope I have not been too much confusing. No, I understand clearly now. Well, after discussing this issue with Ivan, we think that your suggestion is very consistent. And, as we can distinguish between files made with PyTables 1.0 and earlier from those made with PyTables 1.1 and after, I think there will be no problem in implementing that. May you provide the necessary patch based on the latest snapshot? If you can't, I can do it my self. Thanks, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Valentino A. <val...@co...> - 2005-05-12 07:14:20
|
> -----Messaggio originale----- > Da: Francesc Altet [mailto:fa...@ca...] > Inviato: mercoledì 11 maggio 2005 18.42 > A: Antonio Valentino > Cc: pyt...@li... > Oggetto: Re: [Pytables-users] Re: [ pytables-Patches-1192700 ] > Multi-dimensional HDF5 attributes > > > A Dimecres 11 Maig 2005 16:09, Antonio Valentino va escriure: > > > - When you save numarray scalars, they become saved as HDF5 scalars as > > > well. I'm hesitating here because I wonder myself whether saving them > > > as a pickle would be better. I really prefer the implemented solution > > > but, what do you think about? > > > > In my opinion saving numarray scalars as HDF5 scalars is the > best choice. > > The problem is that when you get it back from file you always have a > > Python_int or Python _float. > > Attributes can't have a "Flavor" > > That is why I suggested to map HDF5 numeric attributes onto numarray > > objects and handle all python objects using cpickle strings. > > This is absolutely unambiguous and allow the user to have the > full control > > over HDF5 attributes. > > Yeah, may be you are right. So, your proposal is to pickle numarray > scalars, so that they do not interfere with Python scalars > deserialization, isn't it? I'd like to hear more opinions on that > matter. No. The contrary. In my opinion, python integers and float should be pickled. Numarray objects, including scalars, should be saved as numeric HDF5 attributes. In this way numeric HDF5 attributes will be *always* loaded into pytables as numarray objects, and numarray objects will be *always* saved on file as numeric HDF5 attributes (H5T_NATIVE_INT, H5T_NATIVE_FLOAT and so on). Python objects, including int, float, complex, decimal.Decimal, list, tuple, dict, etc, should be saved as cpickle strings in order to not have ambiguities with numeric attributes. In this way you can write code like this leaf.attrs.uchar = na.array(1, type=na.UInt8) # --> H5T_NATIVE_UCHAR integet = 1 leaf.attrs.int = integer # --> cpickle string leaf.attrs.int32 = na.asarray(integer) # --> H5T_NATIVE_INT and then retrieve them back unambiguously. Stated that extra code is needed in order to preserve the compatibility with old pytables version, I don't know if this approach can araise other problems. I hope I have not been too much confusing. > > Ofcourse whatever solution you decide to adopt I will be happy > to help you. > > Let me know your decisions. > > Well, if you are willing to manage the case for scalar numarray > pickling, you are very welcome! > > Adéu, > > -- > >0,0< Francesc Altet http://www.carabos.com/ > V V Cárabos Coop. V. Enjoy Data > "-" ciao Antonio Valentino |
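As a concrete but purely illustrative restatement of the convention proposed above: array objects would map to native HDF5 attribute types, while plain Python objects would be pickled so the two can never be confused on the way back. The helper name below is made up, and numpy stands in for numarray only so the sketch runs today; it is not the actual patch discussed in the thread.

```python
import pickle
import numpy as np  # stands in for numarray, which the thread actually uses

def encode_attr(value):
    """Sketch of the proposed rule: array objects (including scalars) go to
    native HDF5 attribute types; plain Python objects (int, float, list,
    dict, ...) are pickled so they never collide with numeric attributes."""
    if isinstance(value, (np.ndarray, np.generic)):
        return "native", value              # e.g. uint8 -> H5T_NATIVE_UCHAR
    return "pickled", pickle.dumps(value)

print(encode_attr(np.array(1, dtype=np.uint8))[0])  # native
print(encode_attr(1)[0])                             # pickled
```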
From: Francesc A. <fa...@ca...> - 2005-05-11 16:42:35
|
A Dimecres 11 Maig 2005 16:09, Antonio Valentino va escriure: > > - When you save numarray scalars, they become saved as HDF5 scalars as > > well. I'm hesitating here because I wonder myself whether saving them > > as a pickle would be better. I really prefer the implemented solution > > but, what do you think about? > > In my opinion saving numarray scalars as HDF5 scalars is the best choice. > The problem is that when you get it back from file you always have a > Python_int or Python _float. > Attributes can't have a "Flavor" > That is why I suggested to map HDF5 numeric attributes onto numarray > objects and handle all python objects using cpickle strings. > This is absolutely unambiguous and allow the user to have the full control > over HDF5 attributes. Yeah, may be you are right. So, your proposal is to pickle numarray scalars, so that they do not interfere with Python scalars deserialization, isn't it? I'd like to hear more opinions on that matter. > Ofcourse whatever solution you decide to adopt I will be happy to help you. > Let me know your decisions. Well, if you are willing to manage the case for scalar numarray pickling, you are very welcome! Adéu, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Antonio V. <val...@co...> - 2005-05-11 14:06:32
|
Alle 15:10, mercoledì 11 maggio 2005, Francesc Altet ha scritto: > Antonio, > > I've uploaded your patch for dealing multidimensional attributes in > the SVN repository. I've modified it a bit so that: > > - When you save python scalars, they become saved as HDF5 scalars > (before, they were saved as arrays of shape (1,)), which is wrong IMO. I agree! I didn't modify that because of back compatibility reasons but I see that you solved this point. > - When you save numarray scalars, they become saved as HDF5 scalars as > well. I'm hesitating here because I wonder myself whether saving them > as a pickle would be better. I really prefer the implemented solution > but, what do you think about? In my opinion saving numarray scalars as HDF5 scalars is the best choice. The problem is that when you get it back from file you always have a Python_int or Python_float. Attributes can't have a "Flavor". That is why I suggested to map HDF5 numeric attributes onto numarray objects and handle all python objects using cpickle strings. This is absolutely unambiguous and allows the user to have full control over HDF5 attributes. > - For keeping backward compatibility, I added code so that for old > PyTables files so that an HDF5 attribute with shape (1,) will be > mapped to a Python scalar. At the moment you introduce extra code for keeping backward compatibility, you can add still a little more and have an attribute handling system complete and clear. Of course whatever solution you decide to adopt I will be happy to help you. Let me know your decisions. > With that I think all the requirements for multidimensional attributes > are fulfilled (except perhaps how to serialize numarray scalars on > disk). > > Wait until a snapshot will be made this midnight in the carabos > repository of pytables in order to have a try at final code. > > I'll look into your second patch when I have more time (I'm a bit busy > lately :-/). Thanks for it anyway! > > Ciao, > > Francesc ciao > A Dimarts 10 Maig 2005 23:55, Antonio Valentino va escriure: > > Il giorno mar, 10-05-2005 alle 14:08 +0200, Francesc Altet ha scritto: > > > Hola Antonio, > > > > > > Sorry for the late response, but I was very busy lately (PyTables 1.0 > > > was the responsible of that ;) > > > > > > A Dijous 05 Maig 2005 22:37, Antonio Valentino va escriure: > > > > Hi Francesc, > > > > the problem on scalar values seems to be fixed now. > > > > In the new patch is included your test suite with some changes. > > > > Let me know if it is good for you. > > > > > > Thanks. It seems cleaner to me indeed. Unfortunately, the patch does > > > not apply well against PyTables 1.0. Could you make the necessary > > > modifications to this? > > > > Hi Francesc, > > I have just uploaded the mdattr patch against PyTables 1.0. > > > > I also submitted a new patch that should provide a whider support for > > chunked layout datasets. > > Please take a look and let me know what do you think about. > > > > ciao > > > > [...] > > > > > Cheers, -- Antonio Valentino INNOVA - Consorzio per l'Informatica e la Telematica via della Scienza - Zona Paip I 75100 Matera (MT) Italy |
From: Francesc A. <fa...@ca...> - 2005-05-11 13:10:45
|
Antonio, I've uploaded your patch for dealing with multidimensional attributes in the SVN repository. I've modified it a bit so that: - When you save python scalars, they become saved as HDF5 scalars (before, they were saved as arrays of shape (1,)), which is wrong IMO. - When you save numarray scalars, they become saved as HDF5 scalars as well. I'm hesitating here because I wonder myself whether saving them as a pickle would be better. I really prefer the implemented solution but, what do you think about? - For keeping backward compatibility, I added code so that, for old PyTables files, an HDF5 attribute with shape (1,) will be mapped to a Python scalar. With that I think all the requirements for multidimensional attributes are fulfilled (except perhaps how to serialize numarray scalars on disk). Wait until a snapshot will be made this midnight in the carabos repository of pytables in order to have a try at the final code. I'll look into your second patch when I have more time (I'm a bit busy lately :-/). Thanks for it anyway! Ciao, Francesc A Dimarts 10 Maig 2005 23:55, Antonio Valentino va escriure: > Il giorno mar, 10-05-2005 alle 14:08 +0200, Francesc Altet ha scritto: > > Hola Antonio, > > > > Sorry for the late response, but I was very busy lately (PyTables 1.0 > > was the responsible of that ;) > > > > A Dijous 05 Maig 2005 22:37, Antonio Valentino va escriure: > > > Hi Francesc, > > > the problem on scalar values seems to be fixed now. > > > In the new patch is included your test suite with some changes. > > > Let me know if it is good for you. > > > > Thanks. It seems cleaner to me indeed. Unfortunately, the patch does > > not apply well against PyTables 1.0. Could you make the necessary > > modifications to this? > > Hi Francesc, > I have just uploaded the mdattr patch against PyTables 1.0. > > I also submitted a new patch that should provide a whider support for > chunked layout datasets. > Please take a look and let me know what do you think about. > > ciao > > [...] > > > Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Antonio V. <val...@co...> - 2005-05-10 21:55:33
|
Il giorno mar, 10-05-2005 alle 14:08 +0200, Francesc Altet ha scritto: > Hola Antonio, > > Sorry for the late response, but I was very busy lately (PyTables 1.0 > was the responsible of that ;) > > A Dijous 05 Maig 2005 22:37, Antonio Valentino va escriure: > > Hi Francesc, > > the problem on scalar values seems to be fixed now. > > In the new patch is included your test suite with some changes. > > Let me know if it is good for you. > > Thanks. It seems cleaner to me indeed. Unfortunately, the patch does > not apply well against PyTables 1.0. Could you make the necessary > modifications to this? > Hi Francesc, I have just uploaded the mdattr patch against PyTables 1.0. I also submitted a new patch that should provide wider support for chunked layout datasets. Please take a look and let me know what you think. ciao [...] > > Cheers, -- Antonio Valentino INNOVA - Consorzio per l'Informatica e la Telematica via della Scienza - Zona Paip I 75100 Matera (MT) Italy |
From: Antonio V. <val...@co...> - 2005-05-10 15:01:38
|
Alle 14:08, martedì 10 maggio 2005, Francesc Altet ha scritto: > Hola Antonio, Hi > Sorry for the late response, but I was very busy lately (PyTables 1.0 > was the responsible of that ;) > > A Dijous 05 Maig 2005 22:37, Antonio Valentino va escriure: > > Hi Francesc, > > the problem on scalar values seems to be fixed now. > > In the new patch is included your test suite with some changes. > > Let me know if it is good for you. > > Thanks. It seems cleaner to me indeed. Unfortunately, the patch does > not apply well against PyTables 1.0. Could you make the necessary > modifications to this? I don't have my laptop with me at this moment. I'll send you a patch against PyTables 1.0 as soon as I can. > > Anyway there are still a couple of things, concerning the attributes > > handling, I don't like too much in this solution (pytables snapshot > > 20050504 + mdattr patch). > > > > - there is no way to set an HDF5 scalar attribute or an attribute with > > shape (1,) having a data type different from H5T_NATIVE_INT or > > H5T_NATIVE_DOUBLE > > What about using H5T_ARRAY types for keeping multidimensional > attributes? That way you can distinguish between a regular > H5T_NATIVE_INT or H5T_NATIVE_DOUBLE and one with shape (1,). That > would be an elegant solution, IMO. I never used H5T_ARRAY. It sounds good anyway :) I will think about it. Unfortunately this should be a good solution for PyTables but not so good for me. :( I'm working on a project related to COSMO SkyMED http://www.aleniaspazio.it/earth_observation_page.aspx?IdProg=23 My problem is that I have to deal with hdf5 data that have a fixed structure (specified in a "product specification document"). For each node and leaf it is stated the number of attributes and their name, shape and datatype (and none of them is H5T_ARRAY :((( ). With the current mdattr-patch (together with another patch for chunked layout datasets that I will submit soon) I can "import" this kind of data in the PyTables format. I still have some problems with the "export" operation. I think that having to import/export generic HDF5 files is a situation not so unusual and I would like a lot that PyTables could have some mechanism to allow that. My doubt is that using exclusively H5T_ARRAY to manage multidimensional attributes could make more difficult the managing of generic HDF5 files. What is your opinion about that? > > - numeric HDF5 attributes with shape () or (1,) are always "imported" > > as python int or python float (the original data type and sign are > > lost) > > You can prevent this from happening if you use H5T_ARRAY types. > > > - scalar attributes are stored as HDF5 attributes with shape (1,) > > and not as HDF5 attributes with shape () > > Again, H5T_ARRAY types can save the day. Then scalar attributes can > still be saved as regular attributes with shape (1,). > > > I could try to implement a couple of additional methods for the > > AttributeSet class that could be used as workaround for the first > > two points: > > > > AttributeSet.setNAAttribute(self, name, value) > > AttributeSet.getNAAttribute(self, name) > > > > could be used in alternative to > > > > node._v_attrs.name = value > > value = node._v_attrs.name > > > > to force pytables to preserve datatype and shape into HDF5 numerical > > attributes. The mapping would be > > > > numarray.Int8 : H5T_NATIVE_CHAR > > numarray.Int16 : H5T_NATIVE_SHORT > > numarray.Int32 : H5T_NATIVE_INT > > numarray.Int64 : H5T_NATIVE_LLONG > > numarray.UInt8 : H5T_NATIVE_UCHAR > > numarray.UInt16 : H5T_NATIVE_USHORT > > numarray.UInt32 : H5T_NATIVE_UINT > > numarray.UInt64 : H5T_NATIVE_ULLONG > > numarray.Float32 : H5T_NATIVE_FLOAT > > numarray.Float64 : H5T_NATIVE_DOUBLE > > > > This is but an elegant solution in my opinion. > > H5T_ARRAY would be more elegant. > > > In alternative one could think to completely change the attribute > > handling mechanism and store all python objects including int and > > float as HDF5 strings using cpickle, and map only numarray objects > > on numerical HDF5 attributes. > > Of course this would break the backward compatibility but, in my > > opinion, will make the attribute handling mechanism more clear > > (also the relative pytables code will result a lot simpler). > > Did I mentioned the H5T_ARRAY solution? ;) > > > P.S. excuse me for my bad english. > > Don't worry, mine is not any better actually!. > > Cheers, ciao -- Antonio Valentino INNOVA - Consorzio per l'Informatica e la Telematica via della Scienza - Zona Paip I 75100 Matera (MT) Italy |
From: Francesc A. <fa...@ca...> - 2005-05-10 12:08:56
|
Hola Antonio, Sorry for the late response, but I was very busy lately (PyTables 1.0 was the responsible of that ;) A Dijous 05 Maig 2005 22:37, Antonio Valentino va escriure: > Hi Francesc, > the problem on scalar values seems to be fixed now. > In the new patch is included your test suite with some changes. > Let me know if it is good for you. Thanks. It seems cleaner to me indeed. Unfortunately, the patch does not apply well against PyTables 1.0. Could you make the necessary modifications to this? > Anyway there are still a couple of things, concerning the attributes > handling, I don't like too much in this solution (pytables snapshot > 20050504 + mdattr patch). > > - there is no way to set an HDF5 scalar attribute or an attribute with > shape (1,) having a data type different from H5T_NATIVE_INT or > H5T_NATIVE_DOUBLE What about using H5T_ARRAY types for keeping multidimensional attributes? That way you can distinguish between a regular H5T_NATIVE_INT or H5T_NATIVE_DOUBLE and one with shape (1,). That would be an elegant solution, IMO. > - numeric HDF5 attributes with shape () or (1,) are always "imported" > as python int or python float (the original data type and sign are > lost) You can prevent this from happening if you use H5T_ARRAY types. > - scalar attributes are stored as HDF5 attributes with shape (1,) > and not as HDF5 attributes with shape () Again, H5T_ARRAY types can save the day. Then scalar attributes can still be saved as regular attributes with shape (1,). > I could try to implement a couple of additional methods for the > AttributeSet class that could be used as workaround for the first > two points: > > AttributeSet.setNAAttribute(self, name, value) > AttributeSet.getNAAttribute(self, name) > > could be used in alternative to > > node._v_attrs.name = value > value = node._v_attrs.name > > to force pytables to preserve datatype and shape into HDF5 numerical > attributes. The mapping would be > > numarray.Int8 : H5T_NATIVE_CHAR > numarray.Int16 : H5T_NATIVE_SHORT > numarray.Int32 : H5T_NATIVE_INT > numarray.Int64 : H5T_NATIVE_LLONG > numarray.UInt8 : H5T_NATIVE_UCHAR > numarray.UInt16 : H5T_NATIVE_USHORT > numarray.UInt32 : H5T_NATIVE_UINT > numarray.UInt64 : H5T_NATIVE_ULLONG > numarray.Float32 : H5T_NATIVE_FLOAT > numarray.Float64 : H5T_NATIVE_DOUBLE > > This is but an elegant solution in my opinion. H5T_ARRAY would be more elegant. > In alternative one could think to completely change the attribute > handling mechanism and store all python objects including int and > float as HDF5 strings using cpickle, and map only numarray objects > on numerical HDF5 attributes. > Of course this would break the backward compatibility but, in my > opinion, will make the attribute handling mechanism more clear > (also the relative pytables code will result a lot simpler). Did I mention the H5T_ARRAY solution? ;) > P.S. excuse me for my bad english. Don't worry, mine is not any better actually! Cheers, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Antonio V. <val...@co...> - 2005-05-10 08:18:11
|
Thank you for PyTables 1.0. Here are test results on my PC ---------------------------------------------------------------------- Ran 1776 tests in 148.900s OK -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= PyTables version: 1.0 Extension version: $Id: hdf5Extension.pyx 821 2005-05-09 11:45:38Z faltet $ HDF5 version: 1.6.4 numarray version: 1.3.1 Zlib version: 1.1.4 BZIP2 version: 1.0.2 (30-Dec-2001) Python version: 2.4.1 (#1, May 4 2005, 11:47:31) [GCC 3.2.2] Platform: linux2-i686 Byte-ordering: little -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Performing only a light (yet comprehensive) subset of the test suite. If you have a big system and lots of CPU to waste and want to do a more complete test, try passing the --heavy flag to this script. The whole suite will take more than 10 minutes to complete on a relatively modern CPU and around 100 MB of main memory. -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Skipping Numeric test suite -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= ciao Alle 21:09, lunedì 9 maggio 2005, Francesc Altet ha scritto: > Hi List, > > Following is the announcement for PyTables 1.0. It would be very nice > if anybody would like to test the package on her system before a more > announcement would be made. > > Your feedback is very welcome! > > > ========================= > Announcing PyTables 1.0 > ========================= > [...] > > > Share your experience > ===================== > > Let us know of any bugs, suggestions, gripes, kudos, etc. you may > have. > > > ---- > > **Enjoy data!** > > -- The PyTables Team -- Antonio Valentino INNOVA - Consorzio per l'Informatica e la Telematica via della Scienza - Zona Paip I 75100 Matera (MT) Italy Tel.: +39 0835 309180 Fax: +39 0835 264705 Home Page: www.consorzio-innova.it Email: val...@co... |
From: Francesc A. <fa...@ca...> - 2005-05-09 19:09:34
|
Hi List, Following is the announcement for PyTables 1.0. It would be very nice if anybody would like to test the package on her system before a more public announcement would be made. Your feedback is very welcome! ========================= Announcing PyTables 1.0 ========================= The Carabos crew is very proud to announce the immediate availability of **PyTables release 1.0**. On this release you will find a series of exciting new features, being the most important the Undo/Redo capabilities, support for objects (and indexes!) with more than 2**31 rows, better I/O performance for Numeric objects, new time datatypes (useful for time-stamping fields), support for Octave HDF5 files and improved support for HDF5 native files. What it is ========== **PyTables** is a package for managing hierarchical datasets and designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high performance data storage and retrieval. It is built on top of the HDF5 library and the numarray package, and provides containers for both heterogeneous data (``Table``) and homogeneous data (``Array``, ``EArray``) as well as containers for keeping lists of objects of variable length (like Unicode strings or general Python objects) in a very efficient way (``VLArray``). It also sports a series of filters allowing you to compress your data on-the-fly by using different compressors and compression enablers. But perhaps the more interesting features are its powerful browsing and searching capabilities that allow you to perform data selections over heterogeneous datasets exceeding gigabytes of data in just tenths of second. Besides, the PyTables I/O is buffered, implemented in C and carefully tuned so that you can reach much better performance with PyTables than with your own home-grown wrappings to the HDF5 library. Changes more in depth ===================== Improvements: - New Undo/Redo feature (i.e. integrated support for undoing and/or redoing actions). This functionality lets you to put marks in specific places of your data operations, so that you can make your HDF5 file pop back (undo) to a specific mark (for example for inspecting how your data looked at that point). You can also go forward to a more recent marker (redo). You can even do jumps to the marker you want using just one instruction. - Reading Numeric objects from ``*Array`` and ``Table`` (Numeric columns) objects have a 50-100x speedup. With that, Louis Wicker reported that a speed of 350 MB/s can be achieved with Numeric objects (on a SGI Altix with a Raid 5 disk array) while with numarrays, this speed approaches 900 MB/s. This improvement has been possible mainly due to a nice recipe from Todd Miller. Thanks Todd! - With PyTables 1.0 you can finally create Tables, EArrays and VLArrays with more than 2**31 (~ 2 thousand millions) rows, as well as retrieve them. Before PyTables 1.0, retrieving data on these beasts was not well supported, in part due to limitations in some slicing functions in Python (that rely on 32-bit addressing). So, we have made the necessary modifications in these functions to support 64-bit indexes and integrated them into PyTables. As a result, our tests shows that this feature works just fine. - As a consequence of the above, you can now index columns of tables with more than 2**31 rows. For instance, indexes have been created for integer columns with 10**10 (yeah, 10 thousand million) rows in less than 1 hour using an Opteron @ 1.6 GHz system (~ 1 hour and half with a Xeon Intel32 @ 2.5 GHz platform). Enjoy! - Now PyTables supports the native HDF5 time types, both 32-bit signed integer and 64-bit fixed point timestamps. They are mapped to ``Int32`` and ``Float64`` values for easy manipulation. See the documentation for the ``Time32Col`` and ``Time64Col`` classes. - The opening and copying of files with large number of objects has been made faster by correcting a typo in ``Table._open()``. Thanks to Ashley Walsh for sending a patch for this. - Now, one can modify rank-0 (scalar) ``EArray`` datasets. Thanks to Norbert Nemec for providing a patch for this. - You are allowed from this version on to add non-valid natural naming names as node or attribute names. A warning is issued to warn (but not forbid) you in such a case. Of course, you have to use the ``getattr()`` function so as to access such invalid natural names. - The indexes of ``Table`` and ``*Array`` datasets can be of long type besides of integer type. However, indexes in slices are still restricted to regular integer type. - The concept of ``READ_ONLY`` system attributes has disappeared. You can change them now at your own risk! However, you still cannot remove or rename system attributes. - Now, one can do reads in-between write loops using ``table.row`` instances. This is thanks to a decoupling in I/O buffering: now there is a buffer for reading and other for writing, so that no collisions take place anymore. Fixes #1186892. - Support for Octave HDF5 output format. Even complex arrays are supported. Thanks to Edward C. Jones for reporting this format. Backward-incompatible changes: - The format of indexes has been changed and indexes in files created with PyTables pre-1.0 versions are ignored now. However, ``ptrepack`` can still save your life because it is able to convert your old files into the new indexing format. Also, if you copy the affected tables to other locations (by using ``Leaf.copy()``), it will regenerate your indexes with the new format for you. - The API has changed a little bit (nothing serious) for some methods. See ``RELEASE-NOTES.txt`` for more details. Bug fixes: - Added partial support for native HDF5 chunked datasets. They can be read now, and even extended, but only along the first extensible dimension. This limitation may be removed when multiple extensible dimensions are supported in PyTables. - Formerly, when the name of a column in a table was subsumed in another column name, PyTables crashed while retrieving information of the former column. That has been fixed. - A bug prevented the use of indexed columns of tables that were in other hierarchical level than root. This is solved now. - When a ``Group`` was renamed you were not able to modify its attributes. This has been fixed. - When whether ``Table.modifyColumns()`` or ``Table.modifyRows()`` were called, a subsequent call to ``Table.flush()`` didn't really flush the modified data to disk. This works as intended now. - Fixed some issues when iterating over ``*Array`` objects using the ``List`` or ``Tuple`` flavor. Important note for Python 2.4 and Windows users =============================================== If you are willing to use PyTables with Python 2.4 in Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win-net.ZIP Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0 available in: ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win.ZIP Where can PyTables be applied? ============================== PyTables is not designed to work as a relational database competitor, but rather as a teammate. If you want to work with large datasets of multidimensional data (for example, for multidimensional analysis), or just provide a categorized structure for some portions of your cluttered RDBS, then give PyTables a try. It works well for storing data from data acquisition systems (DAS), simulation software, network data monitoring systems (for example, traffic measurements of IP packets on routers), very large XML files, or for creating a centralized repository for system logs, to name only a few possible uses. What is a table? ================ A table is defined as a collection of records whose values are stored in fixed-length fields. All records have the same structure and all values in each field have the same data type. The terms "fixed-length" and "strict data types" seem to be quite a strange requirement for a language like Python that supports dynamic data types, but they serve a useful function if the goal is to save very large quantities of data (such as is generated by many scientific applications, for example) in an efficient manner that reduces demand on CPU time and I/O resources. What is HDF5? ============= For those people who know nothing about HDF5, it is a general purpose library and file format for storing scientific data made at NCSA. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic constructs, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs. Platforms ========= We are using Linux on top of Intel32 as the main development platform, but PyTables should be easy to compile/install on other UNIX machines. This package has also been successfully compiled and tested on a FreeBSD 5.4 with Opteron64 processors, a UltraSparc platform with Solaris 7 and Solaris 8, a SGI Origin3000 with Itanium processors running IRIX 6.5 (using the gcc compiler) and Microsoft Windows. In particular, it has been thoroughly tested on 64-bit platforms, like Linux-64 on top of an Intel Itanium or AMD Opteron (in 64-bit mode). Regarding Windows platforms, PyTables has been tested with Windows 2000 and Windows XP (using the Microsoft Visual C compiler), but it should also work with other flavors as well. Web site ======== Go to the PyTables web site for more details: http://pytables.sourceforge.net/ To know more about the company behind the PyTables development, see: http://www.carabos.com/ Share your experience ===================== Let us know of any bugs, suggestions, gripes, kudos, etc. you may have. ---- **Enjoy data!** -- The PyTables Team -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
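The Undo/Redo feature announced above can be exercised roughly as sketched below. The camelCase calls follow the 1.x API used throughout this thread, but the exact method names (enableUndo, mark, undo) are quoted from memory, so treat them as assumptions to check against the User's Guide rather than as definitive usage.

```python
import tables

fileh = tables.openFile("undo-demo.h5", mode="w")
fileh.enableUndo()                  # start recording actions (assumed API)

fileh.createGroup("/", "g1")
fileh.mark("after-g1")              # a named mark we can jump back to

fileh.createGroup("/", "g2")        # an action we may want to discard
fileh.undo("after-g1")              # pop back to the mark; /g2 is gone again
# at this point only /g1 should hang from the root group

fileh.close()
```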
From: Antonio V. <val...@co...> - 2005-05-05 20:40:51
|
Hi Francesc, the problem on scalar values seems to be fixed now. In the new patch is included your test suite with some changes. Let me know if it is good for you. Anyway there are still a couple of things, concerning the attributes handling, I don't like too much in this solution (pytables snapshot 20050504 + mdattr patch). - there is no way to set an HDF5 scalar attribute or an attribute with shape (1,) having a data type different from H5T_NATIVE_INT or H5T_NATIVE_DOUBLE - numeric HDF5 attributes with shape () or (1,) are always "imported" as python int or python float (the original data type and sign are lost) - scalar attributes are stored as HDF5 attributes with shape (1,) and not as HDF5 attributes with shape () I could try to implement a couple of additional methods for the AttributeSet class that could be used as workaround for the first two points: AttributeSet.setNAAttribute(self, name, value) AttributeSet.getNAAttribute(self, name) could be used in alternative to node._v_attrs.name = value value = node._v_attrs.name to force pytables to preserve datatype and shape into HDF5 numerical attributes. The mapping would be numarray.Int8 : H5T_NATIVE_CHAR numarray.Int16 : H5T_NATIVE_SHORT numarray.Int32 : H5T_NATIVE_INT numarray.Int64 : H5T_NATIVE_LLONG numarray.UInt8 : H5T_NATIVE_UCHAR numarray.UInt16 : H5T_NATIVE_USHORT numarray.UInt32 : H5T_NATIVE_UINT numarray.UInt64 : H5T_NATIVE_ULLONG numarray.Float32 : H5T_NATIVE_FLOAT numarray.Float64 : H5T_NATIVE_DOUBLE This is but an elegant solution in my opinion. In alternative one could think to completely change the attribute handling mechanism and store all python objects including int and float as HDF5 strings using cpickle, and map only numarray objects on numerical HDF5 attributes. Of course this would break the backward compatibility but, in my opinion, will make the attribute handling mechanism more clear (also the relative pytables code will result a lot simpler). I would like to know your (Francesc) opinion and the opinion of other pytables users about that. P.S. excuse me for my bad english. Ciao Il giorno mar, 03-05-2005 alle 20:31 +0200, Francesc Altet ha scritto: > Hi Antonio, > > First of all, thanks for your patch! > > I've tried the second version of the patch and seems to work quite > well. There are, though, some issues with shape of scalar arrays and > retrieved types that need to be addressed before commiting the changes > to the repository. > > I'm attaching a test suite for attributes that exposes these problems. > Run them in verbose mode to get more info on what is failing. If you > can address those flaws that would be very nice! > > Cheers, > > A Divendres 29 Abril 2005 22:30, vàreu escriure: > > Patches item #1192700, was opened at 2005-04-29 22:30 > > Message generated for change (Tracker Item Submitted) made by Item > > Submitter You can respond by visiting: > > https://sourceforge.net/tracker/?func=detail&atid=504146&aid=1192700&group_id=63486 > > > > Category: None > > Group: None > > Status: Open > > Resolution: None > > Priority: 5 > > Submitted By: v_antonio (v_antonio) > > Assigned to: Nobody/Anonymous (nobody) > > Summary: Multi-dimensional HDF5 attributes > > > > Initial Comment: > > I needed support to manage multidimensional hdf5 attributes so > > I implemented a patch for pytables-20050429 snapshot that > > adds this feature. > > With this patch I'm able to store numarray objects (not tuples or > > lists) with arbitrary shape (according with hdf5 limitations) into > > hdf5 attributes. > > Datatype supported are: > > > > numarray.Int8 : H5T_NATIVE_CHAR, > > numarray.Int16 : H5T_NATIVE_SHORT, > > numarray.Int32 : H5T_NATIVE_INT, > > numarray.Int64 : H5T_NATIVE_LLONG, > > numarray.UInt8 : H5T_NATIVE_UCHAR, > > numarray.UInt16 : H5T_NATIVE_USHORT, > > numarray.UInt32 : H5T_NATIVE_UINT, > > numarray.UInt64 : H5T_NATIVE_ULLONG, > > numarray.Float32 : H5T_NATIVE_FLOAT, > > numarray.Float64 : H5T_NATIVE_DOUBLE > > > > other datatypes are still stored into strings using cpickle. > > It is also possible to store mono-dimensional attributes > > having more control over the datatype (e.g. one can > > store H5T_NATIVE_UCHAR attributes). > > Note: a H5T_NATIVE_UCHAR attribute is still retrieved > > as python int. > > > > Attached files: > > pytables-20050429-mdattr.patch > > test_mdattr.py (a small test program) > > > > ---------------------------------------------------------------------- > > > > You can respond by visiting: > > https://sourceforge.net/tracker/?func=detail&atid=504146&aid=1192700&group_id=63486 -- Antonio Valentino INNOVA - Consorzio per l'Informatica e la Telematica via della Scienza - Zona Paip I 75100 Matera (MT) Italy |
From: Ivan V. i B. <iv...@ca...> - 2005-05-04 10:33:05
|
Antonio Valentino (el 2005-04-29 a les 15:02:02 +0200) va dir:: > still problems :(( > > $ python test_all.py -v > [...] > Traceback (most recent call last): > File "test_all.py", line 213, in ? > unittest.main( defaultTest='suite' ) > File "/usr/lib/python2.3/unittest.py", line 720, in __init__ > self.parseArgs(argv) > File "/usr/lib/python2.3/unittest.py", line 747, in parseArgs > self.createTests() > File "/usr/lib/python2.3/unittest.py", line 753, in createTests > self.module) > File "/usr/lib/python2.3/unittest.py", line 519, in loadTestsFromNames > suites.append(self.loadTestsFromName(name, module)) > File "/usr/lib/python2.3/unittest.py", line 493, in loadTestsFromName > obj = getattr(obj, part) > AttributeError: 'module' object has no attribute '' I have not been able to reproduce this error. Have you tried that with clean and up-to-date sources of PyTables? Cheers, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" |
From: Francesc A. <fa...@ca...> - 2005-05-04 10:02:36
|
Hi Hanneke, This is the pytables list, not an R list. Please, check http://www.r-project.org/mail.html for the different lists of the R-project. Cheers, A Dimecres 04 Maig 2005 11:51, sch...@ge... va escriure: > Dear R-users, > > I'm experiencing a segmentation fault when using > hdf5load(file,load=FALSE). Library(hdf5) loads without problems but when > loading a file, R crashes. I compiled R under Unix (Solaris for Sun). > There is nothing wrong with the files, as I can run the same script at > another place where R runs under Linux. > > Is it possible it has something to do with the hdf5 libraries where > package(hdf5) refers to? > > Regards, > > Hanneke -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: <sch...@ge...> - 2005-05-04 09:51:37
|
Dear R-users, I'm experiencing a segmentation fault when using hdf5load(file,load=FALSE). Library(hdf5) loads without problems but when loading a file, R crashes. I compiled R under Unix (Solaris for Sun). There is nothing wrong with the files, as I can run the same script at another place where R runs under Linux. Is it possible it has something to do with the hdf5 libraries where package(hdf5) refers to? Regards, Hanneke |
From: Francesc A. <fa...@ca...> - 2005-05-04 07:55:57
|
Hi Alexey, Which pytables version are you using? If it is 0.9.1, please, try with a recent snapshot (http://www.carabos.com/downloads/pytables/snapshots/) and tell me how it goes. I think you are facing a known (and solved) bug. Cheers, A Dimarts 03 Maig 2005 02:33, Alexey Goldin va escriure: > I am having the following problems trying to access data in hdf5 file > (created by pytables): > >>> hs=h5.root.errorsim.FAST.cols.hise[0:10] > HDF5-DIAG: Error detected in HDF5 library version: 1.6.2 thread -1209593728. Back trace follows. > #000: ../../../src/H5Tcompound.c line 327 in H5Tinsert(): unable to insert member > major(13): Datatype interface > minor(45): Unable to insert object > #001: ../../../src/H5Tcompound.c line 418 in H5T_insert(): member overlaps with another member > major(13): Datatype interface > minor(45): Unable to insert object > > The table was created from the following descriptor: > > import tables as t > class OrbitSolution(t.IsDescription): > """ class to create HDF5 table """ > hise = t.Int32Col() > solution_flag = t.StringCol(2,dflt='') > has_tycho_pm = t.StringCol(1,dflt='') > P = t.Float32Col() > T0 = t.Float32Col() > eks = t.Float32Col() > chi2 = t.Float32Col() > chi2_0 = t.Float32Col() > confidence = t.Float32Col() > ntries = t.Int32Col() > nmeas = t.Int32Col() > a0,om, OM, i = [t.Float32Col() for i in range(4)] > ccc = t.Float32Col(shape=(5)) > afbg = t.Float32Col(shape=(4)) > > Am I trying to use it properly? > > a0=h5.root.errorsim.FAST.cols.a0[0:10] works Ok.... -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Alexey G. <ale...@gm...> - 2005-05-03 19:20:05
|
I am having the following problems trying to access data in hdf5 file (created by pytables): >>> hs=h5.root.errorsim.FAST.cols.hise[0:10] HDF5-DIAG: Error detected in HDF5 library version: 1.6.2 thread -1209593728. Back trace follows. #000: ../../../src/H5Tcompound.c line 327 in H5Tinsert(): unable to insert member major(13): Datatype interface minor(45): Unable to insert object #001: ../../../src/H5Tcompound.c line 418 in H5T_insert(): member overlaps with another member major(13): Datatype interface minor(45): Unable to insert object The table was created from the following descriptor: import tables as t class OrbitSolution(t.IsDescription): """ class to create HDF5 table """ hise = t.Int32Col() solution_flag = t.StringCol(2,dflt='') has_tycho_pm = t.StringCol(1,dflt='') P = t.Float32Col() T0 = t.Float32Col() eks = t.Float32Col() chi2 = t.Float32Col() chi2_0 = t.Float32Col() confidence = t.Float32Col() ntries = t.Int32Col() nmeas = t.Int32Col() a0,om, OM, i = [t.Float32Col() for i in range(4)] ccc = t.Float32Col(shape=(5)) afbg = t.Float32Col(shape=(4)) Am I trying to use it properly? a0=h5.root.errorsim.FAST.cols.a0[0:10] works Ok.... |
From: Ivan V. i B. <iv...@ca...> - 2005-04-29 08:44:42
|
Antonio Valentino (el 2005-04-29 a les 09:38:44 +0200) va dir:: > Hi, > > I encountred this problem on my FC3 > > pytables-20050429 > > $ python test_all.py > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- > =-=-= > PyTables version: 1.0 (alpha) > Extension version: $Id: hdf5Extension.pyx 794 2005-04-26 15:16:10Z > ivilata $ > HDF5 version: 1.6.4 > numarray version: 1.3.1 > Zlib version: 1.2.1.2 > LZO version: 1.08 (Jul 12 2002) > Traceback (most recent call last): > File "test_all.py", line 176, in ? > tinfo = tables.whichLibVersion("ucl") > File "hdf5Extension.pyx", line 897, in hdf5Extension.whichLibVersion > ValueError: asked version of unsupported library ``ucl``; supported > library names are ``('hdf5', 'zlib', 'lzo', 'ucl', 'bzip2')`` Hi Antonio! You found a bug in the last commit of ``hdf5Extension.pyx``. It has been fixed in the repository and will be available by tomorrow in the `PyTables snapshots directory`_. For the moment, you can apply the attached diff to the ``pyx`` file and rebuild it. Thanks for your report! .. _PyTables snapshots directory: http://www.carabos.com/downloads/pytables/snapshots/ Cheers, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" |
From: Antonio V. <val...@co...> - 2005-04-29 07:47:39
|
Hi, I encountred this problem on my FC3 pytables-20050429 $ python test_all.py -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- =-=-= PyTables version: 1.0 (alpha) Extension version: $Id: hdf5Extension.pyx 794 2005-04-26 15:16:10Z ivilata $ HDF5 version: 1.6.4 numarray version: 1.3.1 Zlib version: 1.2.1.2 LZO version: 1.08 (Jul 12 2002) Traceback (most recent call last): File "test_all.py", line 176, in ? tinfo = tables.whichLibVersion("ucl") File "hdf5Extension.pyx", line 897, in hdf5Extension.whichLibVersion ValueError: asked version of unsupported library ``ucl``; supported library names are ``('hdf5', 'zlib', 'lzo', 'ucl', 'bzip2')`` ciao -- Antonio Valentino Consorzio Innova S.r.l. |
From: Waldemar O. <wal...@gm...> - 2005-04-26 19:37:19
|
On 4/26/05, Francesc Altet <fa...@ca...> wrote: > Ooops, I meant: > > http://www.carabos.com/downloads/pytables/snapshots/pytables-20050426.tar.gz > > In general, you should check: > > http://www.carabos.com/downloads/pytables/snapshots/pytables-latest.tar.gz > > Cheers, > [snip] I have run the tests and I can confirm that HDF5 related errors are gone. What is left are "Permission denied" errors that look more like a problem with the tests than with the extension, but then again what do I know :-). See attachment. Right now I'm running the tests whenever I come around to it, but if you are interested I could run them regularly. Let's say once a week until 1.0 is released. On the other hand I do not want to create unnecessary noise on the list. Waldemar |
From: Francesc A. <fa...@ca...> - 2005-04-26 08:50:46
|
Hi Dragan, I see. Well, the short answer is no: PyTables does not support multidimensional arrays of tables. HDF5 does support this as well as numarray. We can start thinking in implementing such a support sometime after PyTables 1.0 release. However, depending on what you are trying to do, you may want to check VLArrays of objects. You can create a numarray.RecArray object for each simulation run and save it as an entry in a VLArray. Regards, A Dilluns 25 Abril 2005 13:12, vàreu escriure: > Hello Francesc, > > I think I was not clear enough. My question was > related to having an multidimensional array of tables. > Each element of the array would therefore present one > table with data of one simulation run. > Does this make any sence to you? > > Thanks for replying. > > Greetings, Dragan. > > --- Francesc Altet <fa...@ca...> wrote: > > Hello Dragan, > > > > On Sunday 24 April 2005 17:59, dragan savic wrote: > > > A table is typically the result of a simulation run. > > > When more runs are done, each run has normally > > > different values for one of parameters. In my case I > > > have for example three parameters: switch size, buffer > > > size and load. I would like to have tables as elements > > > of an N dimensional array, where each parameter > > > represents a different dimension, and the number of > > > possible values of that parameter is the size of the > > > array along that dimension. Each dimension should > > > have a scalar or a string type. > > > Is this possible to achieve with Pytables? > > > > Well, if I have not misunderstood you, yes, I think > > you can achieve this goal using PyTables. Just declare the next > > description dictionary: > > > > tsim={'switch_size': UInt16Col(shape=max_size_switch), > > 'buffer_size': UInt32Col(shape=max_buffer_switch), > > 'load': UInt8Col(shape=max_size_load), > > } > > > > Then, create the table with compression enabled: > > > > table = fileh.createTable(group, 'table', tsim, "A table", Filters(1)) > > > > Please, take in account that, as you use compression > > (see the Filters(1) parameter), you don't waste too much > > space by booking columns with maximum possible sizes. > > > > If you want true variable length records, even with > > compression support (but not able to deal with heterogeneous > > data), you can use a combination of three VLArray objects (one per each > > parameter). See http://pytables.sourceforge.net/html/tut/vlarray2.html > > for an example of use. > > > > HTH, > > > > Francesc Altet > > __________________________________________________ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
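As a purely illustrative sketch of that suggestion (one object appended to a VLArray per simulation run), something along the following lines should work. The `ObjectAtom` name and the camelCase calls follow the 1.x API used elsewhere in this thread, and numpy record arrays stand in for `numarray.RecArray` so the example runs today; treat both as assumptions to check against the manual.

```python
import tables
import numpy as np   # stands in for numarray.records in the original discussion

fileh = tables.openFile("runs.h5", mode="w")
runs = fileh.createVLArray(fileh.root, "runs", tables.ObjectAtom(),
                           "one record array per simulation run")

for load in (0.1, 0.5, 0.9):
    # toy per-run result table with fields (switch_size, buffer_size, load)
    rec = np.rec.fromrecords([(16, 1024, load), (32, 2048, load)],
                             names="switch_size,buffer_size,load")
    runs.append(rec)              # each VLArray row stores one pickled object

print(runs[1]["load"])            # read back the second run's load column
fileh.close()
```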
From: Francesc A. <fa...@ca...> - 2005-04-25 07:37:34
|
Hello Dragan, On Sunday 24 April 2005 17:59, dragan savic wrote: > A table is typically the result of a simulation run. > When more runs are done, each run has normally > different values for one of parameters. In my case I > have for example three parameters: switch size, buffer > size and load. I would like to have tables as elements > of an N dimensional array, where each parameter > represents a different dimension, and the number of > possible values of that parameter is the size of the > array along that dimension. Each dimension should > have a scalar or a string type. > Is this possible to achieve with Pytables? Well, if I have not misunderstood you, yes, I think you can achieve this goal using PyTables. Just declare the next description dictionary: tsim={'switch_size': UInt16Col(shape=max_size_switch), 'buffer_size': UInt32Col(shape=max_buffer_switch), 'load': UInt8Col(shape=max_size_load), } Then, create the table with compression enabled: table = fileh.createTable(group, 'table', tsim, "A table", Filters(1)) Please, take in account that, as you use compression (see the Filters(1) parameter), you don't waste too much space by booking columns with maximum possible sizes. If you want true variable length records, even with compression support (but not able to deal with heterogeneous data), you can use a combination of three VLArray objects (one per each parameter). See http://pytables.sourceforge.net/html/tut/vlarray2.html for an example of use. HTH, Francesc Altet |
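To round the example off, filling such a table afterwards uses the standard Table.row protocol. The sizes below are made up for illustration and the camelCase calls match the 1.x API used in the message above; this is a sketch, not code from the thread.

```python
import tables

max_size_switch, max_buffer_switch, max_size_load = 4, 8, 16   # made-up sizes

tsim = {'switch_size': tables.UInt16Col(shape=max_size_switch),
        'buffer_size': tables.UInt32Col(shape=max_buffer_switch),
        'load':        tables.UInt8Col(shape=max_size_load)}

fileh = tables.openFile("sim.h5", mode="w")
table = fileh.createTable(fileh.root, "table", tsim, "A table",
                          tables.Filters(1))   # zlib level 1, as suggested

row = table.row
for run in range(3):
    row['switch_size'] = [run] * max_size_switch    # toy data per run
    row['buffer_size'] = [run] * max_buffer_switch
    row['load']        = [run] * max_size_load
    row.append()
table.flush()
fileh.close()
```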