From: Francesc A. <fa...@ca...> - 2005-01-31 16:38:37
On Monday 31 January 2005 12:26, Norbert Nemec wrote:

> Also, 'data' should probably be the first parameter and have no default.
> Since the method is renamed anyway, compatibility should not be an issue.

Agreed.

> Basically, it should first be decided which ways should exist for
> addressing rows in general. I would vote for the option of greatest
> flexibility, even if certain methods of addressing might not be very
> efficient.
>
> The most complete set of parameters would be something like
>     start=None, stop=None, step=1, rows=None, columns=None
> where 'rows' does what 'sequence' and 'coords' do up to now. 'columns'
> might not exist for all routines (e.g. remove) that can, in principle,
> only address whole rows.

That sounds reasonable. What about making rows --> rowlist and columns --> columnlist?

> I would suggest dropping the automatic sorting of sequences. Documenting
> that unsorted lists kill the performance should be enough. I think it is
> better if a user who is unaware of the issue gets bad performance than
> wrong results.

I disagree on this point. Sorting an object in memory is a relatively fast operation, while retrieving an unsorted sequence from disk can be a *killer*. The default should be the solution with the least impact on performance, and that is sorting by default. An optimization-conscious user can read the manual and try to disable sorting, if appropriate.

> > > It is really irritating if
> > > you first think you could use some addressing and then realize that,
> > > for some unknown reason, 'remove' does not support step=
>
> Anyway, it just looks much better if the interface is clean and complete
> and simply documents which features are not implemented yet, giving an
> error message. You can even place a comment in the documentation mentioning
> the difficulties and inviting people to help out.

I disagree again: if a parameter is not implemented, I think it is better not to allow it. If I were going to implement this feature in the short term, that could be different. But the reality is that I don't plan to address this for a long while, at least not the efficient version (the required effort would be much higher than the benefits it would bring). However, perhaps it could be useful to add 'step' to remove and implement it as a sequence of remove(start, stop) calls that fakes the intended behaviour. It would not be very efficient, but...

> > Yes. I agree that all of this needs a general, careful overhauling.
> > I'll try myself to address these issues. Moreover, if you want to
> > provide patches for any of that, they will be more than welcome!
>
> OK - I already considered it. As before, I cannot really promise when I
> might do it. Should we find some way of coordinating this effort? I would
> like to avoid the confusion we had some time before...

Well, neither Ivan nor I are going to address any of these issues for a while (at least a couple of weeks or so). So feel free to download a recent snapshot (preferably after tonight, as I've fixed a couple of things today in Table.py):

http://www.carabos.com/downloads/pytables/snapshots/

and work on that.

Cheers,

--
>qo<   Francesc Altet     http://www.carabos.com/
V  V   Cárabos Coop. V.   Enjoy Data
 ""
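[As a minimal illustration of the sorting argument above -- a sketch, not actual PyTables code; the helper name is invented and `table` stands for any object indexable by row number:]

    import numpy as np

    def read_rows_by_coords(table, coords, sort=True):
        # Sorting turns random disk access into a mostly sequential scan,
        # then restores the caller's original ordering afterwards.
        coords = np.asarray(coords)
        if not sort:
            return [table[i] for i in coords]     # seeks all over the file
        order = np.argsort(coords)                # cheap, in-memory sort
        rows = [table[i] for i in coords[order]]  # monotonically increasing reads
        undo = np.empty_like(order)
        undo[order] = np.arange(len(order))
        return [rows[i] for i in undo]            # hand back original order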
From: Norbert N. <Nor...@gm...> - 2005-01-31 11:29:48
On Monday 31 January 2005 10:17, Ivan Vilata i Balaguer wrote:

> Hopefully we will soon be moving to some kind of automatic build and
> install system such as A-A-P (www.a-a-p.org), so that updating,
> building, testing and installing the stable and development versions
> will be far easier than using CVS and setup.py.

Wow - thanks for the link! I have to check that out...

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Norbert N. <Nor...@gm...> - 2005-01-31 11:26:45
On Monday 31 January 2005 11:16, Francesc Altet wrote:

> > * the parameter names modifyRows(rows=...) and modifyColumns(columns=...)
> > are badly confusing. The name 'data' for this parameter would be much
> > more intuitive
>
> Yeah, especially with modifyColumns, where the columns parameter can
> play the role of columns or rows, depending on the object passed (a
> list of columns or a RecArray). 'data' can be a good name, yes.

Also, 'data' should probably be the first parameter and have no default. Since the method is renamed anyway, compatibility should not be an issue.

> > * why does one single routine 'read' support both start,stop,step and
> > coords, while iterators need two different names for that?
>
> You mean itersequence(), which has a parameter named "sequence"? Well,
> the truth is that coords was meant for internal use only, especially
> for indexing-related issues. However, I've made the error of
> documenting it in read (but just in the in-line docs).
>
> But this is a good point. The truth is that I don't know which name
> would represent its meaning better. Which do you prefer, 'coords' or
> 'sequence'? 'indexes' would also be a possibility, but that may
> interfere with indexing capabilities.

Basically, it should first be decided which ways should exist for addressing rows in general. I would vote for the option of greatest flexibility, even if certain methods of addressing might not be very efficient.

The most complete set of parameters would be something like

    start=None, stop=None, step=1, rows=None, columns=None

where 'rows' does what 'sequence' and 'coords' do up to now. 'columns' might not exist for all routines (e.g. remove) that can, in principle, only address whole rows.

I would suggest dropping the automatic sorting of sequences. Documenting that unsorted lists kill the performance should be enough. I think it is better if a user who is unaware of the issue gets bad performance than wrong results.

> > It is really irritating if
> > you first think you could use some addressing and then realize that, for
> > some unknown reason, 'remove' does not support step=
>
> Well, that may be irritating for you, but the sad truth behind this is
> that all this extended slicing support has been provided after *very
> hard work* on my part...

Neither I nor anyone else will blame you for features that are 'not implemented'. It is tremendous what you did already! Anyway, it just looks much better if the interface is clean and complete and simply documents which features are not implemented yet, giving an error message. You can even place a comment in the documentation mentioning the difficulties and inviting people to help out.

> Yes. I agree that all of this needs a general, careful overhauling.
> I'll try myself to address these issues. Moreover, if you want to
> provide patches for any of that, they will be more than welcome!

OK - I already considered it. As before, I cannot really promise when I might do it. Should we find some way of coordinating this effort? I would like to avoid the confusion we had some time before...

Ciao,
Nobbi

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Francesc A. <fa...@ca...> - 2005-01-31 11:03:46
Hi,

This should be solved in SVN now. Thanks!

On Saturday 29 January 2005 18:01, Norbert Nemec wrote:

> Hi there,
>
> just realized that Table.flush() does not flush the modifications done by
> Table.modifyRows(). If I flush the whole file instead, the changes are
> written.
>
> Ciao,
> Norbert

--
>qo<   Francesc Altet     http://www.carabos.com/
V  V   Cárabos Coop. V.   Enjoy Data
 ""
From: Antonio V. <val...@co...> - 2005-01-31 10:23:00
On Friday, 28-01-2005 at 09:19 +0100, Antonio Valentino wrote:

> On Thursday, 27-01-2005 at 15:16 +0100, Francesc Altet wrote:
> > With the new version, you should be able to read chunked datasets
> > without problems (at least I do with the sample you sent me). However,
> > there is limited support for multiple extendeable dimensions that
> > are set simultaneously, because PyTables does not support that. That
> > would mean that you can still extend these native HDF5 datasets, but
> > only along the *first* (counting from left to right)
> > extendeable dimension that is found in the dataset.
>
> Ok, this is enough for me. I don't have to extend arbitrary chunked arrays.
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It seems I spoke too soon :P

I tested the PyTables snapshot on my data, and indeed it works well on chunked datasets with at least one extendeable dimension (as were the ones I sent to you). Then I ran a test on chunked data with no extendeable dimensions (unfortunately I have no control over this kind of aspect, because the data come from an external team) and again I got a segmentation fault.

I fixed it by modifying hdf5Extension.pyx ($Id: hdf5Extension.pyx 520 2005-01-27 13:51:04Z faltet $) as follows:

    ##patch
    #2832,2833c2832,2834
    #< if (self.__class__.__name__ == "EArray" or
    #<     self.__class__.__name__ == "IndexArray"):
    #---
    #> #if (self.__class__.__name__ == "EArray" or
    #> #    self.__class__.__name__ == "IndexArray"):
    #> if self.extdim >= 0:

Of course, in this case the array is no longer mapped onto an EArray object, and an UnImplemented object is created instead.

ciao
Antonio

> Anyway I would like to have more control over the chunk size.
> Do you think you will add full support for chunked datasets in the future?
>
> [SNIP]
From: Francesc A. <fa...@ca...> - 2005-01-31 10:16:29
On Thursday 27 January 2005 19:42, Norbert Nemec wrote:

> just been working with tables a bit. The methods of the Table class and the
> parameters they take (especially the naming) utterly confuse me. Some
> examples:
>
> * as I understand it, modifyRows and modifyColumns do nearly the same
> thing, except that the latter variant gives the additional flexibility of
> modifying only certain columns. Why not drop the first variant and rename
> the latter to .modify(...) ?

Yes, it seems you are right. What happened is that I started implementing modify() for complete rows. Then I realized that some method for modifying specific columns would be nice as well, so I ended up implementing modifyColumns() and renaming modify() to modifyRows(). But now it seems that modifyColumns embeds all the functionality of modifyRows, so I'll take your advice and try to make modifyRows disappear.

> * the parameter names modifyRows(rows=...) and modifyColumns(columns=...)
> are badly confusing. The name 'data' for this parameter would be much more
> intuitive

Yeah, especially with modifyColumns, where the columns parameter can play the role of columns or rows, depending on the object passed (a list of columns or a RecArray). 'data' can be a good name, yes.

> * why is the parameter of read() called 'field'? - I think we are talking
> of columns.

Agreed. This is a reminiscence of PyTables 0.1 or so, when columns were called 'fields'. This has to change.

> * If modifyColumns() allows selecting an arbitrary number of columns, why
> is read() restricted to all or one? Hopefully, the parameters
> read(field=...) and modifyColumns(names=...) should be unified and given
> the same semantics.

Good suggestion as well! I'll try to do that, but it will require some work before being able to provide it.

> * why does one single routine 'read' support both start,stop,step and
> coords, while iterators need two different names for that?

You mean itersequence(), which has a parameter named "sequence"? Well, the truth is that coords was meant for internal use only, especially for indexing-related issues. However, I've made the error of documenting it in read (but just in the in-line docs).

But this is a good point. The truth is that I don't know which name would represent its meaning better. Which do you prefer, 'coords' or 'sequence'? 'indexes' would also be a possibility, but that may interfere with indexing capabilities.

> * all the routines addressing rows take some (seemingly) arbitrary subset
> of {start, stop, step, coords, sequence, condition}. Intuitively, I would
> expect a set of routines to read, modify, remove and iterate that take the
> same uniform addressing with the same semantics.

Agreed again!

> It is really irritating if
> you first think you could use some addressing and then realize that, for
> some unknown reason, 'remove' does not support step=

Well, that may be irritating for you, but the sad truth behind this is that all this extended slicing support has been provided after *very hard work* on my part. If there are bits that do not support a "step" parameter in 'remove' yet, it's not due to lack of willingness, but because it is very difficult to implement. However, if you want to address this and provide a patch, I'll include it.

> I could probably go on like this, but I think I'll better stop here. There
> certainly are good reasons for some of the points I mentioned, but I guess
> other newbies would wonder about exactly the same points, so it might yet
> be worth reconsidering them.

Yes. I agree that all of this needs a general, careful overhauling. I'll try myself to address these issues. Moreover, if you want to provide patches for any of that, they will be more than welcome!

Thanks for providing this feedback anyway. This will hopefully contribute to a more consistent API before releasing PyTables 1.0.

Cheers,

--
>qo<   Francesc Altet     http://www.carabos.com/
V  V   Cárabos Coop. V.   Enjoy Data
 ""
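[To make the uniform-addressing proposal in this thread concrete, a hypothetical sketch -- not actual PyTables code -- of one resolver that read, modify and remove could all share:]

    def resolve_rows(nrows, start=None, stop=None, step=1, rows=None):
        # One shared way to turn {start, stop, step, rows} into concrete
        # row numbers, so every addressing method accepts the same arguments.
        if rows is not None:                  # explicit coordinate list wins
            return [r % nrows for r in rows]  # allow negative indices
        start = 0 if start is None else start
        stop = nrows if stop is None else min(stop, nrows)
        return list(range(start, stop, step))

    # The same selection semantics, whatever the operation:
    print(resolve_rows(10, start=2, stop=8, step=2))  # -> [2, 4, 6]
    print(resolve_rows(10, rows=[7, 1, -1]))          # -> [7, 1, 9]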
From: Ivan V. i B. <iv...@ca...> - 2005-01-31 09:17:44
On Wed, Jan 26, 2005 at 03:47:48PM +0100, Norbert Nemec wrote:

> Public CVS access would be extremely helpful for those who want to track the
> current version. 'cvs update' is just one command, while downloading and
> unpacking has to be done by hand each time (or it would need some scripting
> that I'd like to avoid)

Hopefully we will soon be moving to some kind of automatic build and install system such as A-A-P (www.a-a-p.org), so that updating, building, testing and installing the stable and development versions will be far easier than using CVS and setup.py.

Regards,
Ivan

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.         V  V   Enjoy Data
                          ""
From: Norbert N. <No...@ne...> - 2005-01-29 17:02:20
Hi there,

just realized that Table.flush() does not flush the modifications done by Table.modifyRows(). If I flush the whole file instead, the changes are written.

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
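[A minimal reproduction sketch of the report; the file name and column are invented, the pre-1.0 API names from this thread are used, and the exact modifyRows signature shown is an assumption:]

    import tables

    h5file = tables.openFile('flushbug.h5', 'w')
    table = h5file.createTable('/', 'mytable', {'x': tables.FloatCol()})
    row = table.row
    for i in range(3):
        row['x'] = float(i)
        row.append()
    table.flush()

    table.modifyRows(start=0, rows=[(42.0,)])  # modify the first row in place
    table.flush()    # reported: the modification is not written by this flush
    h5file.flush()   # workaround: flushing the whole file does write it
    h5file.close()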
From: Antonio V. <val...@co...> - 2005-01-28 08:20:34
On Thursday, 27-01-2005 at 15:16 +0100, Francesc Altet wrote:

> Hi Antonio,

hi

> I've finally found some time to work on this issue. The problem with
> that was that native chunked datasets were not supported by PyTables.
> The good news is that I've been able to support and map them to EArray
> objects :)

thanks a lot

I did try to fix it by myself but I had no luck. The best I could do was to fix the segmentation fault, but still an UnImplemented object was created, and then:

    RuntimeError: maximum recursion depth exceeded

:((

May I ask what kind of tools you use to debug PyTables? How do you debug Pyrex extensions?

> With the new version, you should be able to read chunked datasets
> without problems (at least I do with the sample you sent me). However,
> there is limited support for multiple extendeable dimensions that
> are set simultaneously, because PyTables does not support that. That
> would mean that you can still extend these native HDF5 datasets, but
> only along the *first* (counting from left to right)
> extendeable dimension that is found in the dataset.

Ok, this is enough for me. I don't have to extend arbitrary chunked arrays. Anyway, I would like to have more control over the chunk size. Do you think you will add full support for chunked datasets in the future?

> As we do not offer public access to the SVN repository yet, you should
> wait until tomorrow to get the snapshot with this patch, which
> will appear at:
>
> http://www.carabos.com/downloads/pytables/snapshots/

saved ;)

> Please, tell me if this works for you.

Of course, I'll do it.

> Cheers,

Ciao
Antonio

P.S. excuse me for my bad english :)

> On Sunday 16 January 2005 16:19, Antonio Valentino wrote:
> > hi,
> > I'm not an expert user and I'm having some problems trying to open an
> > hdf5 file containing a chunked dataset.
> >
> > [SNIP]
From: Norbert N. <Nor...@gm...> - 2005-01-27 18:43:04
Hi there,

just been working with tables a bit. The methods of the Table class and the parameters they take (especially the naming) utterly confuse me. Some examples:

* as I understand it, modifyRows and modifyColumns do nearly the same thing, except that the latter variant gives the additional flexibility of modifying only certain columns. Why not drop the first variant and rename the latter to .modify(...) ?

* the parameter names modifyRows(rows=...) and modifyColumns(columns=...) are badly confusing. The name 'data' for this parameter would be much more intuitive

* why is the parameter of read() called 'field'? - I think we are talking of columns.

* If modifyColumns() allows selecting an arbitrary number of columns, why is read() restricted to all or one? Hopefully, the parameters read(field=...) and modifyColumns(names=...) should be unified and given the same semantics.

* why does one single routine 'read' support both start,stop,step and coords, while iterators need two different names for that?

* all the routines addressing rows take some (seemingly) arbitrary subset of {start, stop, step, coords, sequence, condition}. Intuitively, I would expect a set of routines to read, modify, remove and iterate that take the same uniform addressing with the same semantics. It is really irritating if you first think you could use some addressing and then realize that, for some unknown reason, 'remove' does not support step=

I could probably go on like this, but I think I'll better stop here. There certainly are good reasons for some of the points I mentioned, but I guess other newbies would wonder about exactly the same points, so it might yet be worth reconsidering them.

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Ivan V. i B. <iv...@ca...> - 2005-01-27 15:00:23
On Thu, Jan 27, 2005 at 12:17:01PM +0100, Norbert Nemec wrote:

> On Thursday 27 January 2005 10:16, Ivan Vilata i Balaguer wrote:
> > Actually, it is possible for Group to use _f_create* methods and have no
> > knowledge of which kinds of nodes it supports. One could register and
> > deregister new Node classes into Group in this way:
> >
> >     class MyNewLeaf(Leaf):
> >         ... class definition ...
> >     Group._f_registerNode(MyNewLeaf)
> >
> > The _f_create* method would be created on-the-fly....
>
> Maybe it is just me personally, but I find this kind of library design
> extremely confusing. Of course, Python allows an extremely flexible design
> of libraries that would never be possible in other languages (my personal
> background is mostly C++). This flexibility calls for a lot of
> self-discipline.

If I may say, I only see one text line of self-discipline. ;) For me, it is not confusing at all, and it leverages the power of Python introspection and dynamic binding.

> My usual idea of a library is that it is a fixed set of modules, containing
> a fixed set of classes offering a fixed set of methods. Any departure from
> this static picture means an additional step in understanding the library.
> It may be justified in many cases, but should still be avoided if there is
> no good reason for it.

Yes, you are completely right. However, when *extending* the library we are trading the library "black-box" approach (which is still valid for users) for a framework "white-box" approach (where developers have access to some internal aspects). Registering new classes in a framework is not that strange (and believe me, it could have been *much* worse).

> Anyway: this is just my personal view as someone relatively new to Python...

And it is a very interesting one. Please, don't hesitate to share your opinions about PyTables (that goes for everyone). They are very valuable to us.

Regards,
Ivan

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.         V  V   Enjoy Data
                          ""
From: Francesc A. <fa...@ca...> - 2005-01-27 14:17:06
Hi Antonio,

I've finally found some time to work on this issue. The problem with that was that native chunked datasets were not supported by PyTables. The good news is that I've been able to support and map them to EArray objects :)

With the new version, you should be able to read chunked datasets without problems (at least I do with the sample you sent me). However, there is limited support for multiple extendeable dimensions that are set simultaneously, because PyTables does not support that. That would mean that you can still extend these native HDF5 datasets, but only along the *first* (counting from left to right) extendeable dimension that is found in the dataset.

As we do not offer public access to the SVN repository yet, you should wait until tomorrow to get the snapshot with this patch, which will appear at:

http://www.carabos.com/downloads/pytables/snapshots/

Please, tell me if this works for you.

Cheers,

On Sunday 16 January 2005 16:19, Antonio Valentino wrote:
> hi,
> I'm not an expert user and I'm having some problems trying to open an
> hdf5 file containing a chunked dataset.
> Here is some info:
>
> Python 2.3.4
> [GCC 3.4.2 20041017 (Red Hat 3.4.2-6.fc3)] on linux2
> numarray v. 1.1.1
> tables v. 0.9.1
> hdf5 v. 1.6.3-patch
>
> this is the test program
>
> # BEGIN file test-uchar.py
> import tables
> h5file = tables.openFile('data.h5')
> print h5file
> # END file test-uchar.py
>
> and this is the data
>
> # chunk (128x128x2)
>
> [antonio@m6n h5]$ h5dump -A data.h5
> HDF5 "data.h5" {
> GROUP "/" {
>    DATASET "ComplexUCharArray" {
>       DATATYPE H5T_STD_U8LE
>       DATASPACE SIMPLE { ( 200, 150, 2 ) / ( 200, 150, 2 ) }
>    }
> }
> }
>
> When I run the test program I get a segfault
>
> [antonio@m6n h5]$ python test-uchar.py
> /usr/lib/python2.3/site-packages/tables/File.py:192: UserWarning:
> 'data.h5' does exist, is an HDF5 file, but has not a PyTables format.
> Trying to guess what's there using HDF5 metadata. I can't promise you
> getting the correct objects, but I will do my best!.
>   path, UserWarning)
> Segmentation fault
>
> If I try it with a *non* chunked dataset ...
>
> [antonio@m6n h5]$ python test-uchar.py
> /usr/lib/python2.3/site-packages/tables/File.py:192: UserWarning:
> 'data.h5' does exist, is an HDF5 file, but has not a PyTables format.
> Trying to guess what's there using HDF5 metadata. I can't promise you
> getting the correct objects, but I will do my best!.
>   path, UserWarning)
> Traceback (most recent call last):
>   File "test-uchar.py", line 6, in ?
>     print h5file
>   File "/usr/lib/python2.3/site-packages/tables/File.py", line 1000, in __str__
>     astring += str(leaf) + '\n'
>   File "/usr/lib/python2.3/site-packages/tables/Leaf.py", line 472, in __str__
>     title = self.attrs.TITLE
>   File "/usr/lib/python2.3/site-packages/tables/AttributeSet.py", line 166, in __getattr__
>     raise AttributeError, \
>
> [SNIP]
>
>   File "/usr/lib/python2.3/site-packages/tables/Leaf.py", line 189, in _get_attrs
>     return AttributeSet(self)
> RuntimeError: maximum recursion depth exceeded
>
> in this case the file seems to be correctly opened but some problem is
> met in the print statement.
>
> antonio

--
>qo<   Francesc Altet     http://www.carabos.com/
V  V   Cárabos Coop. V.   Enjoy Data
 ""
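[For reference, a sketch of what the fix enables, using the pre-1.0 API and the dataset from the report quoted above; the printed details are what one would expect, not verified output:]

    import tables

    h5file = tables.openFile('data.h5')   # native HDF5 file, not made by PyTables
    arr = h5file.root.ComplexUCharArray   # chunked dataset, now mapped to an EArray
    print(arr.shape)                      # (200, 150, 2), per the h5dump above
    h5file.close()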
From: Norbert N. <Nor...@gm...> - 2005-01-27 11:17:12
On Thursday 27 January 2005 10:16, Ivan Vilata i Balaguer wrote:

> Actually, it is possible for Group to use _f_create* methods and have no
> knowledge of which kinds of nodes it supports. One could register and
> deregister new Node classes into Group in this way:
>
>     class MyNewLeaf(Leaf):
>         ... class definition ...
>     Group._f_registerNode(MyNewLeaf)
>
> The _f_create* method would be created on-the-fly....

Maybe it is just me personally, but I find this kind of library design extremely confusing. Of course, Python allows an extremely flexible design of libraries that would never be possible in other languages (my personal background is mostly C++). This flexibility calls for a lot of self-discipline.

My usual idea of a library is that it is a fixed set of modules, containing a fixed set of classes offering a fixed set of methods. Any departure from this static picture means an additional step in understanding the library. It may be justified in many cases, but should still be avoided if there is no good reason for it.

Anyway: this is just my personal view as someone relatively new to Python...

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Ivan V. i B. <iv...@ca...> - 2005-01-27 09:39:52
On Thu, Jan 27, 2005 at 10:16:50AM +0100, Ivan Vilata i Balaguer wrote:

>     def _f_registerNode(cls, nodeClass):
[...]
-            createNode.__name__ = methodName
+            try:
+                createNode.__name__ = methodName
+            except TypeError:
+                pass  # Python < 2.4 does not support changing __name__
>             # Add the new method to the class.
>             setattr(cls, methodName, createNode)
>         _f_registerNode = classmethod(_f_registerNode)
[...]

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.         V  V   Enjoy Data
                          ""
From: Ivan V. i B. <iv...@ca...> - 2005-01-27 09:17:16
On Wed, Jan 26, 2005 at 03:43:19PM +0100, Norbert Nemec wrote:

> On Wednesday, 26 January 2005 10:52, Ivan Vilata i Balaguer wrote:
> > Having said that, I keep the opinion that node creation, removal and
> > renaming should belong in Group methods and nowhere else. Bye!
>
> For renaming and removal, I agree with you. For creation, however, I don't.
> Making node creation a group method makes the whole system non-extensible.
> The Group class has to contain a createSomething method for every kind of
> Node that it might contain. Currently, we already have Array, EArray, Table,
> Group and VLArray. In the long run, the number of possible Nodes will
> probably increase even more. Actually, it would make sense to allow users
> to define their own kinds of Nodes to extend PyTables. [...]

Actually, it is possible for Group to use _f_create* methods and have no knowledge of which kinds of nodes it supports. One could register and deregister new Node classes into Group in this way:

    class MyNewLeaf(Leaf):
        ... class definition ...
    Group._f_registerNode(MyNewLeaf)

The _f_create* method would be created on-the-fly. This can actually be done. Here is the code for a hypothetical implementation:

    class Group(Node):
        # ...

        # @classmethod
        def _f_registerNode(cls, nodeClass):
            nodeClassName = nodeClass.__name__
            if not issubclass(nodeClass, Node):
                raise TypeError(
                    "registered class is not a subclass of Node: %s"
                    % (nodeClassName,))
            # Define the _create<Class> method.
            methodName = '_f_create%s' % (nodeClassName,)
            def createNode(self, name, **nodeArgs):
                node = nodeClass(name, **nodeArgs)
                self._g_bind(name, node)
            createNode.__name__ = methodName
            # Add the new method to the class.
            setattr(cls, methodName, createNode)
        _f_registerNode = classmethod(_f_registerNode)

        # @classmethod
        def _f_deregisterNode(cls, nodeClass):
            nodeClassName = nodeClass.__name__
            methodName = '_f_create%s' % (nodeClassName,)
            if not hasattr(cls, methodName):
                raise ValueError(
                    "class is not registered: %s" % (nodeClassName,))
            delattr(cls, methodName)
        _f_deregisterNode = classmethod(_f_deregisterNode)

        # ...

    Group._f_registerNode(Group)

The only restriction on Node subclasses is that their __init__ method supports at least one argument (which already makes sense for «title»). Else, a **kwargs argument can be used and checked for no actual arguments.

I think this implementation is reasonably simple and places very little burden on the developer (namely, calling Group._f_registerNode). It should make Group completely independent of Node subclasses.

Regards,
Ivan

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.         V  V   Enjoy Data
                          ""
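[A usage sketch against the hypothetical implementation above; FilteredLeaf and its arguments are invented, somegroup stands for an existing Group instance, and _f_createFilteredLeaf is the method _f_registerNode would generate on the fly:]

    class FilteredLeaf(Leaf):
        # The first argument receives the node's name, per the createNode
        # sketch above; everything else arrives through **nodeArgs.
        def __init__(self, title, complevel=0):
            self.title = title
            self.complevel = complevel

    Group._f_registerNode(FilteredLeaf)
    somegroup._f_createFilteredLeaf('leafname', complevel=5)

    Group._f_deregisterNode(FilteredLeaf)  # and the factory method is gone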
From: Norbert N. <Nor...@gm...> - 2005-01-26 14:47:53
On Tuesday, 25 January 2005 18:59, Francesc Altet wrote:

> I take the opportunity to announce that we are leaving the CVS facilities
> at SourceForge for keeping the PyTables source versions. Currently, the new
> repository is not public yet, but we will try to offer publicly
> accessible snapshots nightly. Ivan, please try to make this happen as soon
> as possible, and send a separate announcement to this list.

Public CVS access would be extremely helpful for those who want to track the current version. 'cvs update' is just one command, while downloading and unpacking has to be done by hand each time (or it would need some scripting that I'd like to avoid).

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Norbert N. <No...@ne...> - 2005-01-26 14:43:26
On Wednesday, 26 January 2005 10:52, Ivan Vilata i Balaguer wrote:

> Having said that, I keep the opinion that node creation, removal and
> renaming should belong in Group methods and nowhere else. Bye!

For renaming and removal, I agree with you. For creation, however, I don't. Making node creation a group method makes the whole system non-extensible. The Group class has to contain a createSomething method for every kind of Node that it might contain. Currently, we already have Array, EArray, Table, Group and VLArray. In the long run, the number of possible Nodes will probably increase even more. Actually, it would make sense to allow users to define their own kinds of Nodes to extend PyTables.

Putting all these methods into Group is not extensible. The Group should offer everything that concerns groups, but it should not need to know what kinds of Nodes might exist. Now, we could demand a factory function for each kind of Node, but that would just mean one additional function whose functionality can simply be placed in the constructor.

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Norbert N. <Nor...@gm...> - 2005-01-26 10:31:19
On Tuesday, 25 January 2005 18:40, Francesc Altet wrote:

> On Monday 24 January 2005 20:51, Norbert Nemec wrote:
> > In short pseudonotation:
> > I want to store the result of a function
> >     f(x,y,time)
> > as well as another (time-independent) function
> >     g(x,y)
> > with
> >     x in range(0.0,2.0,0.1)
> >     y in range(-1.0,1.0,0.1)
> >     time in range(0.0,500.0,10.0)
> >
> > NetCDF allows me to first define the dimensions 'x','y' and 'time' and
> > then write datasets (arrays) that extend in a certain subset of these
> > dimensions.
>
> Uh, can you be a bit more explicit about which feature PyTables is missing?
> A single piece of metacode would be enough. [You know that EArrays allow
> extending the object along any (single) dimension you want, do you?]

Never mind. I thought about it a bit more and understand somewhat more now: NetCDF and HDF have different objectives. The latter is just a kind of container that helps you store and access your data but doesn't associate any meaning with it. NetCDF, on the other hand, tries to store data in a self-descriptive way. You don't just store a bunch of numbers, but you also specify their meaning to a certain extent in a machine-readable way.

The example above would mean that you store five arrays, x, y, time, f, g, in such a way that it is clear that x, y and time are coordinates, while f and g are data at those coordinates.

Of course, this can simply be done on top of HDF5 (which is what NetCDF4 is trying to do - previous versions defined their own file format). Everybody could find their own convention for storing this kind of relationship. The whole point of NetCDF is to fix one convention so that various tools can use this additional information to properly handle or display the data.

It probably is not the idea behind PyTables to follow this path. Maybe some day one can put something similar to NetCDF on top of PyTables, but it should probably always be kept as two distinct layers.

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
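[For illustration only, one made-up convention -- not NetCDF's actual format -- for layering that kind of self-description on top of PyTables attributes; pre-1.0 API names from this thread, with numpy standing in for the numarray arrays of the time:]

    import numpy
    import tables

    h5 = tables.openFile('selfdesc.h5', 'w')

    x = numpy.arange(0.0, 2.0, 0.1)
    y = numpy.arange(-1.0, 1.0, 0.1)
    time = numpy.arange(0.0, 500.0, 10.0)

    # Mark the coordinate arrays as such in a machine-readable way.
    for name, axis in [('x', x), ('y', y), ('time', time)]:
        node = h5.createArray('/', name, axis)
        node.attrs.role = 'coordinate'

    # Data arrays record which coordinate arrays span their dimensions.
    f = h5.createArray('/', 'f', numpy.zeros((len(x), len(y), len(time))))
    f.attrs.dimensions = ('x', 'y', 'time')   # f(x, y, time)

    g = h5.createArray('/', 'g', numpy.zeros((len(x), len(y))))
    g.attrs.dimensions = ('x', 'y')           # g(x, y), time-independent

    h5.close()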
From: Ivan V. i B. <iv...@ca...> - 2005-01-26 09:52:44
On Tue, Jan 25, 2005 at 08:00:28PM +0100, Norbert Nemec wrote:

> On Tuesday, 25 January 2005 17:39, Francesc Altet wrote:
[...]
> The usual behavior of objects in Python is that the constructor creates an
> object but does not link any handle to it, so if you don't preserve the
> reference that the constructor returns, the object is dead immediately.
[...]

However, one could see that PyTables nodes have the additional requirement that they must belong in an HDF5 file. A nice parallelism can be made with UNIX-like filesystems. There, files and directories do not properly belong to a directory, but are accessed via their inode instead. However, the user is not able to create files at the inode level, but has to use additional functions (like creat(), mkdir(), mknod()...) to create them and instantly (i.e. atomically) place them in some directory. This avoids creating files that are not accessible through the file system path hierarchy.

Giving the user the opportunity to go straight to Node constructor calling would be like letting Unix users create files that are never bound to a directory. It might seem more elegant sometimes, but it breaks the hierarchical concept by allowing a kind of "parallel hierarchies".

Having said that, I keep the opinion that node creation, removal and renaming should belong in Group methods and nowhere else. Bye!

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.         V  V   Enjoy Data
                          ""
From: Ivan V. i B. <iv...@ca...> - 2005-01-26 09:37:15
On Tue, Jan 25, 2005 at 06:59:15PM +0100, Francesc Altet wrote:
[...]
> I take the opportunity to announce that we are leaving the CVS facilities
> at SourceForge for keeping the PyTables source versions. Currently, the new
> repository is not public yet, but we will try to offer publicly
> accessible snapshots nightly. Ivan, please try to make this happen as soon
> as possible, and send a separate announcement to this list.

Nightly snapshots from the repository have been collected for some days. They are currently available at:

http://www.carabos.com/downloads/pytables/snapshots/

This directory is not yet linked to, but this will be fixed soon. You can access it, though.

Regards,
Ivan

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.         V  V   Enjoy Data
                          ""
From: Norbert N. <Nor...@gm...> - 2005-01-25 19:00:36
On Tuesday, 25 January 2005 17:39, Francesc Altet wrote:

> We have given this quite a deal of consideration. While we agree that
> allowing:
>
>     mygroup.mynode = Node(...)  # this can be done now, but not a good practice IMO!
>
> or
>
>     mygroup['mynode'] = Node(...)
>
> can ease the work of the programmer (or at least, of some programmers),
> the fact that a Node() in itself has no real utility (or at least, not the
> utility it was primarily designed for) may lead to unnecessary confusion
> for users. So, my opinion is that this should not be regarded (and hence,
> not documented) as a feature.

I don't follow that opinion at all. One point to consider is that this kind of node creation is the only one that allows the use of natural naming at the point of creation. I simply find it ugly having to switch between naming a node with a string and naming it by natural naming. Comparing the two snippets:

    mygroup.somenode = Table(...)
    row = mygroup.somenode.row
    ...

versus

    mygroup._f_createTable('somenode', ...)
    row = mygroup.somenode.row
    ...

The first one is just a lot more intuitive to read.

Furthermore: class constructors are a very pythonic concept. Using factories means that users have to remember yet another function name. Adding 'where' to the constructor does not seem like a very good idea to me: the usual behavior of objects in Python is that the constructor creates an object but does not link any handle to it, so if you don't preserve the reference that the constructor returns, the object is dead immediately.

> Temporary tables (or arrays) in memory would be nice, and of course, are
> an option for the future. And their existence alone would justify the
> mygroup.mynode = Table() notation. However, this would imply quite a lot
> of work, and it is not a priority for us right now.

Why do you need this as justification for the mygroup.mynode = Table() notation? It is clear that "unbound nodes" are an intuitive concept. Even if you cannot save any data in them (yet), it should not be hard to grasp what they are.

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Francesc A. <fa...@ca...> - 2005-01-25 17:59:28
On Monday 24 January 2005 20:01, Norbert Nemec wrote:

> Hi there,
>
> I noticed that ptdump shows much redundant information. The attached patch
> changes that by
>
> * not displaying system attributes in AttributeSet.__repr__

Yeah.

> * sticking with group.__str__ instead of group.__repr__ even for verbose
> mode (the latter only displays a list of children, which is dumped later on
> anyway)

I agree.

> There probably still is some room for optimization...

I hope so ;)

These changes are already in our new SVN repository at carabos.com.

I take the opportunity to announce that we are leaving the CVS facilities at SourceForge for keeping the PyTables source versions. Currently, the new repository is not public yet, but we will try to offer publicly accessible snapshots nightly. Ivan, please try to make this happen as soon as possible, and send a separate announcement to this list.

Cheers,

--
>qo<   Francesc Altet     http://www.carabos.com/
V  V   Cárabos Coop. V.   Enjoy Data
 ""
From: Francesc A. <fa...@ca...> - 2005-01-25 17:41:09
On Monday 24 January 2005 20:51, Norbert Nemec wrote:

> In short pseudonotation:
> I want to store the result of a function
>     f(x,y,time)
> as well as another (time-independent) function
>     g(x,y)
> with
>     x in range(0.0,2.0,0.1)
>     y in range(-1.0,1.0,0.1)
>     time in range(0.0,500.0,10.0)
>
> NetCDF allows me to first define the dimensions 'x','y' and 'time' and then
> write datasets (arrays) that extend in a certain subset of these
> dimensions.

Uh, can you be a bit more explicit about which feature PyTables is missing? A single piece of metacode would be enough. [You know that EArrays allow extending the object along any (single) dimension you want, do you?]

--
>qo<   Francesc Altet     http://www.carabos.com/
V  V   Cárabos Coop. V.   Enjoy Data
 ""
From: Francesc A. <fa...@ca...> - 2005-01-25 17:27:50
Hi Norbert,

The Table.row object is only defined as a means to allow fast appends to Table objects, and it was designed to work only with this goal. I agree with you that allowing:

    for row in mytable.iterrows():
        row['x'] = row['x'] + 0.5

would be nice, but the problem is that Row.__setitem__ only puts its value in a buffer that is intended to be written to disk afterwards. An easy workaround would be defining a new class, say RowMod (for Row Modification), whose __setitem__ method would work as you want. Once we have this, mytable.iterrows() would return instances of class RowMod, not Row. That way, the following would work:

    row = mytable.row
    for i in range(3):
        row['x'] = i
        row.append()
    mytable.flush()

but also:

    for row in mytable.iterrows():
        row['x'] = row['x'] + 0.5

Mmm, I'll think more about that, and if I find it feasible (and, more importantly, find the time), I'll try to implement it.

By the way, a good workaround for what you are trying to do is:

    for i in xrange(mytable.nrows):
        mytable.cols.x[i] = mytable.cols.x[i] + 0.5

Cheers,

On Tuesday 25 January 2005 15:51, Norbert Nemec wrote:

> Hi there,
>
> I would have assumed that the following works:
>
> -------------------------
> #!/usr/bin/env python
>
> from tables import *
>
> file = openFile('tryout.h5', 'w')
>
> desc = {}
> desc['x'] = FloatCol()
>
> file.root.mytable = Table(desc)
> mytable = file.root.mytable
>
> row = mytable.row
> for i in range(3):
>     row['x'] = i
>     row.append()
> mytable.flush()
>
> for row in mytable.iterrows():
>     row['x'] = row['x'] + 0.5
> mytable.flush()
>
> for row in mytable.iterrows():
>     print row['x']
>
> file.close()
> -------------------------
>
> but obviously, the writing is completely ignored. Is this intuitive
> approach wrong? Is it just not implemented yet? Is there some other way
> the same thing could be done?
>
> Ciao,
> Norbert

--
>qo<   Francesc Altet     http://www.carabos.com/
V  V   Cárabos Coop. V.   Enjoy Data
 ""
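[A toy model of the RowMod idea sketched above -- plain Python, not PyTables internals; ToyTable's list of dicts stands in for the on-disk table:]

    class RowMod:
        def __init__(self, table, nrow):
            self._table, self._nrow = table, nrow
            self._values = dict(table.data[nrow])  # local copy of the row
            self._dirty = False

        def __getitem__(self, col):
            return self._values[col]

        def __setitem__(self, col, value):
            self._values[col] = value
            self._dirty = True                     # remember to write back

    class ToyTable:
        def __init__(self, rows):
            self.data = rows                       # list of dicts, "disk"

        def iterrows(self):
            for n in range(len(self.data)):
                row = RowMod(self, n)
                yield row
                if row._dirty:                     # flush modified row back
                    self.data[n] = dict(row._values)

    t = ToyTable([{'x': 0.0}, {'x': 1.0}, {'x': 2.0}])
    for row in t.iterrows():
        row['x'] = row['x'] + 0.5
    print(t.data)   # [{'x': 0.5}, {'x': 1.5}, {'x': 2.5}]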
From: Ivan V. i B. <iv...@ca...> - 2005-01-25 16:59:49
On Tue, Jan 25, 2005 at 05:39:41PM +0100, Francesc Altet wrote:
[...]
> However, we plan to add a few methods to Group in order to ease the
> creation of nodes. Most probably, we will implement Group._f_createGroup,
> Group._f_create*Array and Group._f_createTable, which will do a similar
> thing to their counterparts in File. Besides, we plan to add support for
> Group.__getitem__('nodename') and Group.__delitem__('nodename'). With that,
> we think that node creation and referencing would be somewhat improved.
[...]

This is definitely a good move, since it moves node creation and deletion to the containing node instead of the File. A coherent move in the same direction would be to change Group._f_rename(newname), which changes a group's own name, to Group._f_rename(oldname, newname), which would change a *child's* name. The name of a node should only matter to the containing group. Also, Leaf._f_rename would go away. So,

    >>> group.oldname._f_rename('newname')

would change to:

    >>> group._f_rename('oldname', 'newname')

which is more coherent with the way file systems work.

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.         V  V   Enjoy Data
                          ""