From: Francesc A. <fa...@ca...> - 2004-12-10 08:40:44
On Friday 10 December 2004 08:23, Norbert Nemec wrote:
> > However, at a Python level, nodes are accessed as attributes (or
> > members), so the right exception should be AttributeError. If we want
> > to give this kind of access an additional meaning in PyTables, then
> > maybe a new subclass of AttributeError (e.g. NodeError) should be
> > defined and used instead. In addition, this would not break
> > compatibility with existing code where AttributeError was caught.
> > Well, this is just an alternative. Bye!
>
> I like this idea.

Yes, me too. However, that implies a bit more work, and perhaps a better
redesign of *many* other exceptions in PyTables. While this is a task that
should be done, I'm still very tempted to use a KeyError, just to
differentiate behaviours. Although perhaps it is still better to keep the
current LookupError (__getattr__ and __delattr__) and NameError
(__setattr__) until the big exception redesign happens.

> It is clear that there is a conflict in naming between HDF5 attributes
> and Python attributes. It is dangerous to implicitly say "When I talk of
> attributes, I mean ...-attributes." The conflict is there and will cause
> confusion for everyone if it is not addressed cleanly.
>
> There should be some explicit policy for distinguishing between the two
> types of attributes. Unfortunately, I have no idea yet what that might
> look like...

Well, the pointed-to objects are already different (a Leaf or Group for
HDF5 nodes and an AttributeSet for HDF5 attributes). Another step would be
providing different exception types.

--
Francesc Altet
Who's your data daddy?  PyTables
From: Norbert N. <Nor...@gm...> - 2004-12-10 07:27:35
On Friday, 10 December 2004 00:23, Ivan Vilata i Balaguer wrote:
> On Thu, Dec 09, 2004 at 08:34:07PM +0100, Francesc Altet wrote:
> > [...]
> > It just happens that "fileh.root.table" is the way that PyTables has
> > chosen to specify a node in the object tree (the so-called "natural
> > naming"). This is why I think there is an intrinsic difference between
> > a "node" and an "attribute", and why they should be treated differently
> > (even raise different exceptions).
>
> However, at a Python level, nodes are accessed as attributes (or
> members), so the right exception should be AttributeError. If we want
> to give this kind of access an additional meaning in PyTables, then
> maybe a new subclass of AttributeError (e.g. NodeError) should be
> defined and used instead. In addition, this would not break
> compatibility with existing code where AttributeError was caught.
> Well, this is just an alternative. Bye!

I like this idea.

It is clear that there is a conflict in naming between HDF5 attributes and
Python attributes. It is dangerous to implicitly say "When I talk of
attributes, I mean ...-attributes." The conflict is there and will cause
confusion for everyone if it is not addressed cleanly.

There should be some explicit policy for distinguishing between the two
types of attributes. Unfortunately, I have no idea yet what that might
look like...

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Ivan V. i B. <iv...@ca...> - 2004-12-09 23:23:52
On Thu, Dec 09, 2004 at 08:34:07PM +0100, Francesc Altet wrote:
> [...]
> It just happens that "fileh.root.table" is the way that PyTables has
> chosen to specify a node in the object tree (the so-called "natural
> naming"). This is why I think there is an intrinsic difference between a
> "node" and an "attribute", and why they should be treated differently
> (even raise different exceptions).

However, at a Python level, nodes are accessed as attributes (or members),
so the right exception should be AttributeError. If we want to give this
kind of access an additional meaning in PyTables, then maybe a new subclass
of AttributeError (e.g. NodeError) should be defined and used instead. In
addition, this would not break compatibility with existing code where
AttributeError was caught.

Well, this is just an alternative. Bye!

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.          V V   Enjoy Data
                          ""
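[Editor's note] The NodeError proposal above can be sketched in a few lines. The class and method bodies below are hypothetical, not PyTables' actual implementation; the point is only that, because NodeError subclasses AttributeError, existing handlers keep working:

```python
# Hedged sketch of the proposed NodeError (not PyTables' real code).
class NodeError(AttributeError):
    """Hypothetical: raised when a node lookup in the object tree fails."""

class Group:
    def __getattr__(self, name):
        # __getattr__ is only called when normal attribute lookup fails
        raise NodeError("group '/' has no child named %r" % name)

try:
    Group().missing
except AttributeError:   # old code catching AttributeError still works
    caught = True
```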
From: Francesc A. <fa...@ca...> - 2004-12-09 19:34:43
On Thursday 09 December 2004 19:55, Norbert Nemec wrote:
> True. Maybe one could have an option to deactivate this security measure?
> Just some library-wide option that you switch off if you want to take the
> risky route.

It's a possibility...

> On the other hand, overwriting elements like Arrays probably is not very
> efficient anyway, so it should be avoided. Making it necessary to delete
> an object before assigning it anew might be a way to let people think
> about that strategy again.

I definitely like this approach better.

> The question of dictionary vs. object has little to do with the question
> of overwriting elements. Currently, PyTables is built upon attributes. I
> think changing this is out of the question (compatibility!)
>
> As long as you are dealing with attribute exceptions, only AttributeError
> should be used. (docs: "Raised when an attribute reference or assignment
> fails.") Neither NameError nor KeyError have anything to do with this
> issue.

Well, I think we should expand somewhat the meaning of the "attribute"
concept here. To my eyes, an object like "fileh.root.table" is not an
attribute, but a Leaf (or more generally a "node"), which has a very
different meaning from an object like "fileh.root.table.attrs.attribute",
which is closer to the Python concept of "attribute". It just happens that
"fileh.root.table" is the way that PyTables has chosen to specify a node in
the object tree (the so-called "natural naming"). This is why I think there
is an intrinsic difference between a "node" and an "attribute", and why
they should be treated differently (even raise different exceptions).

Cheers,

--
Francesc Altet
Who's your data daddy?  PyTables
From: Norbert N. <Nor...@gm...> - 2004-12-09 18:56:04
On Thursday, 9 December 2004 11:10, Francesc Altet wrote:
> On Saturday 04 December 2004 12:01, Francesc Altet wrote:
> > On Saturday 04 December 2004 11:44, Norbert Nemec wrote:
> > > Now in pytables:
> > > -----------------
> > > from tables import *
> > > h5file = openFile("dummy.h5", 'w')
> > > h5file.root.asdf = Group()
> > > h5file.root.asdf = Group()  # NameError: '/' group already has
> > >                             # a child named 'asdf' in file 'dummy.h5'
> >
> > Yes, I wanted to protect somewhat the attributes so that a user has to
> > delete one before it can be overwritten. But you are probably right in
> > that PyTables should follow the python behaviour as carefully as
> > possible.
>
> Mmm, I'm afraid that I was a bit out of context when I wrote this. I
> thought that we were talking about attributes of nodes, but in fact we
> are talking about *children* of objects, and I do believe that PyTables
> should offer some protection against overwriting a node (or even a
> complete subtree), especially when used interactively (you don't want to
> lose lots of data because of a typing mistake, would you?).

True. Maybe one could have an option to deactivate this security measure?
Just some library-wide option that you switch off if you want to take the
risky route.

On the other hand, overwriting elements like Arrays probably is not very
efficient anyway, so it should be avoided. Making it necessary to delete an
object before assigning it anew might be a way to let people think about
that strategy again.

> I think we should regard the object tree more as a multi-level dictionary
> rather than an ordinary object with attributes. In that sense, I should
> change the NameError above to a KeyError. What do you think?

The question of dictionary vs. object has little to do with the question of
overwriting elements. Currently, PyTables is built upon attributes. I think
changing this is out of the question (compatibility!)

As long as you are dealing with attribute exceptions, only AttributeError
should be used. (docs: "Raised when an attribute reference or assignment
fails.") Neither NameError nor KeyError have anything to do with this
issue.

> On another hand, I've changed the RuntimeError raised during overwriting
> of read-only system attributes to AttributeError.
>
> > > del h5file.root.asdf
> > > del h5file.root.asdf  # this just does nothing
> >
> > Oops, this should raise AttributeError of course.
>
> For the same reason as stated before, it should raise a KeyError.

Again: no. This is dealing with attributes, not with keys of a dictionary.

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
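[Editor's note] A small illustration of why AttributeError is the documented choice here: the built-in hasattr() decides by catching AttributeError (in modern Python it swallows only that type; Python 2's hasattr was more permissive), so a __getattr__ that raises KeyError or NameError leaks through it. The class names below are invented for the demonstration:

```python
class GoodGroup:
    def __getattr__(self, name):
        # AttributeError: the documented exception for failed attribute access
        raise AttributeError(name)

class BadGroup:
    def __getattr__(self, name):
        # the wrong type for attribute access: hasattr() will not swallow it
        raise KeyError(name)

good = hasattr(GoodGroup(), 'x')   # False: AttributeError caught internally
try:
    hasattr(BadGroup(), 'x')
    leaked = False
except KeyError:
    leaked = True                  # the KeyError escaped hasattr()
```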
From: Francesc A. <fa...@ca...> - 2004-12-09 12:35:45
Hi Ashley,

Thanks for filling out the questionnaire. Find some notes intertwined in
the text below.

On Thursday 09 December 2004 09:20, Ashley Walsh wrote:
> > - Implement date support in Leaf objects (especially in Tables).
>
> +5, tables with a time column are our stock in trade.

Well, I think we will definitely be able to deliver that for 1.0.

> > - What feature do you like more (not necessarily listed before)?
>
> CSTables ;) Or more to the point, the ability to have one write-access
> point, but easy read-access from other processes.

Great, perhaps you will be interested in testing the beta version that we
will be releasing by the end of the month. Stay tuned.

> PS sourceforge is down at the moment or I'd put this in as a bug.
>
> Table.iterrows() should probably return an empty iterator (e.g. "return
> iter([])") for the case when start >= stop. Returning an empty RecArray
> raises a TypeError for:
>
>     iter(table)
>
> because iter() expects an iterator to be returned by table.__iter__

Good point. This is implemented in CVS (MAIN trunk) now.

Cheers,

--
Francesc Altet
Who's your data daddy?  PyTables
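[Editor's note] The iterator fix discussed above can be sketched with a toy stand-in for Table (MiniTable below is invented for illustration, not the PyTables API): __iter__ must return an iterator object in every case, so the empty case returns iter([]) rather than an empty sequence-like result.

```python
class MiniTable:
    """Toy stand-in for a Table, illustrating the iterrows() contract."""
    def __init__(self, rows):
        self._rows = rows

    def iterrows(self, start=0, stop=None):
        if stop is None:
            stop = len(self._rows)
        if start >= stop:
            return iter([])   # an empty *iterator*, so iter(table) still works
        return iter(self._rows[start:stop])

    def __iter__(self):
        # iter() requires this to return an iterator, never a bare sequence
        return self.iterrows()

empty = list(iter(MiniTable([])))   # no TypeError, just an empty list
```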
From: Francesc A. <fa...@ca...> - 2004-12-09 10:10:29
On Saturday 04 December 2004 12:01, Francesc Altet wrote:
> On Saturday 04 December 2004 11:44, Norbert Nemec wrote:
> > Now in pytables:
> > -----------------
> > from tables import *
> > h5file = openFile("dummy.h5", 'w')
> > h5file.root.asdf = Group()
> > h5file.root.asdf = Group()  # NameError: '/' group already has
> >                             # a child named 'asdf' in file 'dummy.h5'
>
> Yes, I wanted to protect somewhat the attributes so that a user has to
> delete one before it can be overwritten. But you are probably right in
> that PyTables should follow the python behaviour as carefully as
> possible.

Mmm, I'm afraid that I was a bit out of context when I wrote this. I
thought that we were talking about attributes of nodes, but in fact we are
talking about *children* of objects, and I do believe that PyTables should
offer some protection against overwriting a node (or even a complete
subtree), especially when used interactively (you don't want to lose lots
of data because of a typing mistake, would you?).

I think we should regard the object tree more as a multi-level dictionary
rather than an ordinary object with attributes. In that sense, I should
change the NameError above to a KeyError. What do you think?

On another hand, I've changed the RuntimeError raised during overwriting of
read-only system attributes to AttributeError.

> > del h5file.root.asdf
> > del h5file.root.asdf  # this just does nothing
>
> Oops, this should raise AttributeError of course.

For the same reason as stated before, it should raise a KeyError.

--
Francesc Altet
Who's your data daddy?  PyTables
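[Editor's note] The protection policy described above (overwriting an existing child fails; deleting a missing child fails) can be sketched as follows. ProtectedGroup and the exact exception types are illustrative only, since the thread has not settled on KeyError vs. AttributeError:

```python
class ProtectedGroup:
    """Illustrative only: a group that refuses to overwrite its children."""
    def __init__(self):
        # bypass our own __setattr__ when creating the child registry
        object.__setattr__(self, '_children', {})

    def __setattr__(self, name, value):
        if name in self._children:
            # protect existing nodes: require an explicit delete first
            raise KeyError("group already has a child named %r" % name)
        self._children[name] = value

    def __getattr__(self, name):
        try:
            return self._children[name]
        except KeyError:
            raise AttributeError("no child named %r" % name)

    def __delattr__(self, name):
        if name not in self._children:
            # deleting a missing child is an error, not a silent no-op
            raise AttributeError("no child named %r" % name)
        del self._children[name]
```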
From: Ashley W. <ash...@sy...> - 2004-12-09 08:20:52
On 30/11/2004, at 4:16 AM, Francesc Altet wrote:
> Hi List,
>
> Now that PyTables has passed its second birthday, we need your feedback
> again. So, please, if you are using PyTables and would like this effort
> to be continued and improved, take some time to respond to this
> questionnaire; it will help us decide what to do in the next few months.
>
> Following are our plans for the next few months, before releasing the
> 1.0 version. Could you respond to these questions by giving a score
> ranging from +5 (I absolutely need that) to 0 (I can pass without this)
> to these planned features?
>
> - Implement support for Variable Length values in Table columns (most
>   likely implemented in a brand-new object called VLTables).

+2, interesting

> - Implement relationships (apart from the existing hierarchical ones)
>   between objects in the object tree (kind of like symbolic links on a
>   filesystem), and pointers to dataset regions.

+2, interesting

> - Implement date support in Leaf objects (especially in Tables).

+5, tables with a time column are our stock in trade.

> - Full index-based searches in Tables (right now, you can only set one
>   condition in index-accelerated searches).

+3, useful, but I haven't done enough searching to know how useful yet.

> - Image support (in the sense stated in
>   http://hdf.ncsa.uiuc.edu/HDF5/doc/ADGuide/ImageSpec.html).

0, haven't stored an image yet

> Now, answer the next questions. If you don't want to respond to all of
> them, that is not necessary.
>
> - What feature do you like more (not necessarily listed before)?

CSTables ;) Or more to the point, the ability to have one write-access
point, but easy read-access from other processes.

> - What feature do you miss more (not necessarily listed before)?
>
> - Do you prefer to see PyTables become (even) faster or less memory
>   demanding?

Both?

> - Which present limitations do you find more annoying?
>
> - In which field of engineering, science or business are you using
>   PyTables?

Engineering operations management. Process analysis and optimisation.
Reports for operators and engineers.

> Ok, that's all. I hope some of you will take some of your precious time
> to fill out the questionnaire. Even only one answer would be much better
> than our sole opinion!
>
> Thanks,
>
> --
> Francesc Altet
> Who's your data daddy?  PyTables

Thank you.

Ashley Walsh

PS sourceforge is down at the moment or I'd put this in as a bug.

Table.iterrows() should probably return an empty iterator (e.g. "return
iter([])") for the case when start >= stop. Returning an empty RecArray
raises a TypeError for:

    iter(table)

because iter() expects an iterator to be returned by table.__iter__
From: Francesc A. <fa...@ca...> - 2004-12-08 18:27:40
Hi *Norbert* ;)

On Sunday 05 December 2004 13:59, Norbert Nemec wrote:
> I just realized that the branching in the CVS seems to be a bit mixed up:
> When I pull pytables from CVS without any sophisticated branching
> options, I get an older version than the 0.9.1 which I can download as
> .tar.gz
>
> For example, the last patch of AttributeSet.py (changing the behavior for
> non-existent attributes) is in the .tar.gz but not in the MAIN branch of
> the CVS.
>
> In general, I do not really see why the pytables CVS needs any branches
> at all. Branches are a nice thing if you want to fork off an experimental
> version that should be handled separately from the regular maintenance.

I've decided to create a branch for 0.9 updates called Release-0_9_patches,
and to continue the development in MAIN for 1.0. The normal thing is to do
some merges from the Release-0_9_patches branch to MAIN with some
frequency.

> For the rather regular and centralized development of pytables, one
> working trunk containing all the patches should IMO be enough and help
> avoid some confusion.

Well, I've just followed the recommendations in:

https://www.cvshome.org/docs/manual/cvs-1.11.16/cvs_5.html

which clearly advocate for the solution that I've recently adopted, and I
must say that I find this approach to be very useful.

Cheers,

--
Francesc Altet
Who's your data daddy?  PyTables
From: Norbert N. <Nor...@gm...> - 2004-12-05 12:59:35
Hi there,

I just realized that the branching in the CVS seems to be a bit mixed up:
When I pull pytables from CVS without any sophisticated branching options,
I get an older version than the 0.9.1 which I can download as .tar.gz

For example, the last patch of AttributeSet.py (changing the behavior for
non-existent attributes) is in the .tar.gz but not in the MAIN branch of
the CVS.

In general, I do not really see why the pytables CVS needs any branches at
all. Branches are a nice thing if you want to fork off an experimental
version that should be handled separately from the regular maintenance. For
the rather regular and centralized development of pytables, one working
trunk containing all the patches should IMO be enough and help avoid some
confusion.

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Ted H. <ted...@ea...> - 2004-12-05 07:40:13
Hi, I'm a new PyTables user. I am in the process of rearchitecting a simple
GIS program I wrote for mapping bike routes to use PyTables (the data is
currently in SDTS (iso8211) format and I have about 500MB of compressed
data).

> Hi List,
>
> Now that PyTables has passed its second birthday, we need your feedback
> again. So, please, if you are using PyTables and would like this effort
> to be continued and improved, take some time to respond to this
> questionnaire; it will help us decide what to do in the next few months.
>
> Following are our plans for the next few months, before releasing the
> 1.0 version. Could you respond to these questions by giving a score
> ranging from +5 (I absolutely need that) to 0 (I can pass without this)
> to these planned features?
>
> - Implement support for Variable Length values in Table columns (most
>   likely implemented in a brand-new object called VLTables).

+5 I was actually holding out for this feature after reading the SciPy
presentation, but when it wasn't in 0.9, I decided to just go ahead anyway.
Some of my data could be represented much more naturally with this feature.

> - Implement relationships (apart from the existing hierarchical ones)
>   between objects in the object tree (kind of like symbolic links on a
>   filesystem), and pointers to dataset regions.

+2 I would definitely use this feature, but it's fairly easy to work
around.

> - Implement date support in Leaf objects (especially in Tables).

+3 Normally I am a fanatic about dates (my day job is in the financial
industry), but I don't happen to need them for my current project.

> - Full index-based searches in Tables (right now, you can only set one
>   condition in index-accelerated searches).

+5 Since this is a GIS system, I need to query by rectangular regions
(maybe there is some way to do this already that I haven't thought of?).

> - Image support (in the sense stated in
>   http://hdf.ncsa.uiuc.edu/HDF5/doc/ADGuide/ImageSpec.html).

+0

> Now, answer the next questions. If you don't want to respond to all of
> them, that is not necessary.
>
> - What feature do you like more (not necessarily listed before)?

Automatic and fast handling of large data sets (the main feature).

> - What feature do you miss more (not necessarily listed before)?

Don't know yet.

> - Do you prefer to see PyTables become (even) faster or less memory
>   demanding?

Don't know yet.

> - Which present limitations do you find more annoying?

Don't know yet.

> - In which field of engineering, science or business are you using
>   PyTables?

Currently GIS, but might use it in financial modeling.

> Ok, that's all. I hope some of you will take some of your precious time
> to fill out the questionnaire. Even only one answer would be much better
> than our sole opinion!

I hope it's useful. Like I said, I'm new to this, so I could be off base.

> Thanks,

Thank you for making PyTables available!

> --
> Francesc Altet
> Who's your data daddy?  PyTables

--
Ted Horst
From: Francesc A. <fa...@ca...> - 2004-12-04 11:02:10
On Saturday 04 December 2004 11:44, Norbert Nemec wrote:
> just noticed a behavior that's just contrary to python convention:
>
> First the convention in python:
> ------------
> class D:
>     pass
> d = D()
> d.asdf = "asdf"
> d.asdf = "qwer"  # this just overwrites the old attribute
> del d.asdf
> del d.asdf  # AttributeError: D instance has no attribute 'asdf'
> ------------
>
> You see: reassignment overwrites the old field, deletion of nonexistent
> fields gives an exception
>
> Now in pytables:
> -----------------
> from tables import *
> h5file = openFile("dummy.h5", 'w')
> h5file.root.asdf = Group()
> h5file.root.asdf = Group()  # NameError: '/' group already has
>                             # a child named 'asdf' in file 'dummy.h5'

Yes, I wanted to protect somewhat the attributes so that a user has to
delete one before it can be overwritten. But you are probably right in that
PyTables should follow the python behaviour as carefully as possible.

> del h5file.root.asdf
> del h5file.root.asdf  # this just does nothing

Oops, this should raise AttributeError of course.

Well, I'll try to address that in the next releases.

Thanks for noting this!

--
Francesc Altet
Who's your data daddy?  PyTables
From: Norbert N. <No...@ne...> - 2004-12-04 10:52:05
On Thursday, 2 December 2004 19:47, Francesc Altet wrote:
> If you feel like having a general overhaul of wrong exceptions and
> providing pointers to the appropriate parts of the code, I'll look into
> that. And much better if you can provide the patches, of course!

Don't know when I'll find the time for it, but I've put it on my ToDo list.

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Norbert N. <Nor...@gm...> - 2004-12-04 10:45:13
Hi there,

just noticed a behavior that's just contrary to python convention:

First the convention in python:
------------
class D:
    pass
d = D()
d.asdf = "asdf"
d.asdf = "qwer"  # this just overwrites the old attribute
del d.asdf
del d.asdf  # AttributeError: D instance has no attribute 'asdf'
------------

You see: reassignment overwrites the old field, deletion of nonexistent
fields gives an exception.

Now in pytables:
-----------------
from tables import *
h5file = openFile("dummy.h5", 'w')
h5file.root.asdf = Group()
h5file.root.asdf = Group()  # NameError: '/' group already has
                            # a child named 'asdf' in file 'dummy.h5'
del h5file.root.asdf
del h5file.root.asdf  # this just does nothing
-----------------

So pytables just behaves in the opposite way from python standards. If this
is intentional, it should probably be documented with a good rationale.
Otherwise, one should probably fix it and accept the break of
compatibility.

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
From: Francesc A. <fa...@ca...> - 2004-12-02 18:48:08
Hi Robert,

On Thursday 02 December 2004 19:24, Norbert Nemec wrote:
> Hi there,
>
> checking through the code of pytables, I fail to find much of a concept
> for which exception is thrown where. All the different standard exception
> types have well defined meanings:
>
> http://www.python.org/dev/doc/devel/lib/module-exceptions.html
>
> but it seems to me that pytables does not really follow these too
> closely.

I wasn't really aware of that document. I usually follow the "Python
Essential Reference" by David Beazley, and I can see now that the document
you pointed out is much better regarding exception use cases. Thanks for
noting that!

> I'm mostly concerned about nonexistent objects. If I try to access/delete
> such an object in different contexts, I get all kinds of different
> exceptions, ranging from NameError over LookupError all the way to
> RuntimeError, where an AttributeError would seem most appropriate.
>
> Following the documentation:
> * LookupError should never be used at all, since it is just the base for
>   IndexError and KeyError.
> * NameError is meant for global or local symbols that do not exist.
> * RuntimeError is just a fallback in cases where nothing else seems to
>   fit.
>
> Is this matter just not defined too well yet, or is there a hidden
> concept behind the current state of affairs that I just fail to see?

To tell the truth, the first reason fits better in this case. Generally
speaking, I've tried to follow the "good use manners" for exceptions raised
from python code (although maybe in some cases this might not be so).
However, when an exception is raised from the Pyrex part of the code, it
normally goes out as a RuntimeError. I have no good reason for that (apart
from laziness), but it is like that.

If you feel like having a general overhaul of wrong exceptions and
providing pointers to the appropriate parts of the code, I'll look into
that. And much better if you can provide the patches, of course!

Cheers,

--
Francesc Altet
Who's your data daddy?  PyTables
From: Francesc A. <fa...@ca...> - 2004-12-02 18:24:38
Announcing PyTables 0.9.1
-------------------------

This release is mainly a maintenance version. In it, some bugs have been
fixed and a few improvements have been made. One important thing is that
chunk sizes in EArrays have been re-tuned to get much better performance
and compression ratios. Besides, it has been tested against the latest
Python 2.4 and all unit tests seem to pass fine.

Changes more in depth
---------------------

Improvements:

- The chunksize computation for EArrays has been re-tuned to allow better
  performance and *much* better compression ratios.

- New --unpackshort and --quantize flags have been added to the nctoh5
  script. --unpackshort unpacks short integer variables to float variables
  using the scale_factor and add_offset netCDF variable attributes.
  --quantize quantizes data to improve compression using the
  least_significant_digit netCDF variable attribute (not active by
  default). See
  http://www.cdc.noaa.gov/cdc/conventions/cdc_netcdf_standard.shtml for
  further explanation of what this attribute means. Thanks to Jeff
  Whitaker for providing this.

- Table.itersequence has received a new parameter called "sort". This
  allows disabling the sorting of the sequence in case the user wants so.

Backward-incompatible changes:

- Now, the AttributeSet class throws an AttributeError on __getattr__ for
  nonexistent attributes in it. Formerly, the routine returned None, which
  is pretty much against convention in Python and breaks the built-in
  hasattr() function. Thanks to Robert Nemec for noting this and offering
  a patch.

- VLArray.read() has changed its behaviour. Now, it always returns a list,
  as stated in the documentation, even when the number of elements to
  return is 0 or 1. This is much more consistent when representing the
  actual number of elements in a certain VLArray row.

API additions:

- A Row.getTable() has been added. It is an accessor for the associated
  Table object.

- A File.copyAttrs() has been added. It allows copying attributes from one
  leaf to another. Properly speaking, this was already there, but not
  documented :-/

Bug fixes:

- Now, the copy of hierarchies works even when there are scalar Arrays
  (i.e. Arrays whose shape is ()) in them. Thanks to Robert Nemec for
  providing a patch.

- Solved a memory leak regarding the Filters instance associated with the
  File object, which was not released after closing the file. Now, there
  are no known leaks in PyTables itself.

- Fixed a bug in Table.append() when the table was indexed. The problem
  was that if the table was in auto-indexing mode, some rows were lost in
  the indexing process and hence not indexed correctly.

- Improved security of node name checking. Closes #1074335.

Share your experience
---------------------

Let me know of any bugs, suggestions, gripes, kudos, etc. you may have.

Enjoy data!

--
Francesc Altet
Who's your data daddy?  PyTables
From: Norbert N. <Nor...@gm...> - 2004-12-02 18:24:18
Hi there,

checking through the code of pytables, I fail to find much of a concept for
which exception is thrown where. All the different standard exception types
have well defined meanings:

http://www.python.org/dev/doc/devel/lib/module-exceptions.html

but it seems to me that pytables does not really follow these too closely.

I'm mostly concerned about nonexistent objects. If I try to access/delete
such an object in different contexts, I get all kinds of different
exceptions, ranging from NameError over LookupError all the way to
RuntimeError, where an AttributeError would seem most appropriate.

Following the documentation:
* LookupError should never be used at all, since it is just the base for
  IndexError and KeyError.
* NameError is meant for global or local symbols that do not exist.
* RuntimeError is just a fallback in cases where nothing else seems to fit.

Is this matter just not defined too well yet, or is there a hidden concept
behind the current state of affairs that I just fail to see?

Ciao,
Norbert

--
_________________________________________
Norbert Nemec
Bernhardstr. 2 ... D-93053 Regensburg
Tel: 0941 - 2009638 ... Mobil: 0179 - 7475199
eMail: <No...@Ne...>
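[Editor's note] The hierarchy cited above can be checked directly from the interpreter; this is standard Python, not anything PyTables-specific:

```python
# LookupError is only a common base class for the two real lookup errors:
assert issubclass(IndexError, LookupError)
assert issubclass(KeyError, LookupError)

# AttributeError and NameError live in separate branches of the exception
# hierarchy, which is why the three are not interchangeable:
assert not issubclass(AttributeError, LookupError)
assert not issubclass(NameError, LookupError)
```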
From: Francesc A. <fa...@ca...> - 2004-12-02 08:15:58
|
Hi Andrew,

Yes, releasing the GIL when writing (or reading) data would be a very nice addition. I'll have to look into that in more depth. It will hopefully be included in version 1.0 of PyTables. Anyway, if you know how to add that kind of support, and test it, I'll be glad to include your code in the next version of PyTables.

Thanks for your time,

On Tuesday 30 November 2004 21:18, Andrew Straw wrote:
> Given that I'm an "interested and intermittent" user, not a "hardcore,
> must have" user, please don't rank my responses as highly as others'.
> Also, I've only answered the questions that I feel even remotely
> qualified or interested to answer. (You may take my other answers as "0".)
>
> >- Image support (in the sense stated in
> > http://hdf.ncsa.uiuc.edu/HDF5/doc/ADGuide/ImageSpec.html).
> >
> Sounds great +1
>
> In conjunction with releasing Python GIL on file write operations: +5
>
> This might make me a hardcore user if I convert my primitive home-brew
> "movie" code. But this would require releasing the Python GIL on writing
> the data to HDF5 files. This would allow a multi-threaded Python
> program to continue working while the (non-Python API using) C code
> completes. Refer to our previous conversation:
> http://sourceforge.net/mailarchive/forum.php?thread_id=4963045&forum_id=13760
> My email (at sourceforge time 2004-06-22 00:04) describes the (seemingly
> fairly simple) necessary changes. (Sorry, if you've done this already
> -- I haven't kept up with the changes in the last few months.)
>
> >- Full index-based searches in Tables (right now, you can only set one
> > condition in index-accelerated searches).
> >
> Sounds good, but not informed enough to judge (+1?)
>
> >- Which present limitations do you find more annoying?
> >
> (Sorry to keep kicking a dead horse.) Keeping the GIL in long file write
> operations where the Python API is not touched and therefore it could be
> released.
>
> >- In which field of engineering, science or business are you using
> > PyTables?
> >
> Neuroscience.
>
> -------------------------------------------------------
> SF email is sponsored by - The IT Product Guide
> Read honest & candid reviews on hundreds of IT Products from real users.
> Discover which products truly live up to the hype. Start reading now.
> http://productguide.itmanagersjournal.com/
> _______________________________________________
> Pytables-users mailing list
> Pyt...@li...
> https://lists.sourceforge.net/lists/listinfo/pytables-users

--
Francesc Altet
Who's your data daddy? PyTables |
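The behaviour Andrew is asking for can be demonstrated with stdlib threads alone. In this sketch, time.sleep stands in for a long C-level HDF5 write: sleep releases the GIL while it blocks, which is exactly what wrapping the HDF5 calls in Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS would achieve for PyTables. This is an illustration of the desired effect, not PyTables code; PyTables 0.9 itself did not yet release the GIL on writes.

```python
import threading
import time

progress = []

def background_work():
    # Pure-Python work that runs in short GIL-holding slices.
    deadline = time.time() + 0.25
    while time.time() < deadline:
        progress.append(time.time())
        time.sleep(0.01)

def fake_write():
    # Stand-in for a long C-level HDF5 write.  time.sleep() drops the
    # GIL while blocking, so other Python threads keep running --
    # the effect the proposed Py_BEGIN_ALLOW_THREADS change would give.
    time.sleep(0.25)

worker = threading.Thread(target=background_work)
worker.start()
fake_write()   # the "write" blocks, but the worker thread keeps running
worker.join()

# The worker made progress during the blocking "write".
assert len(progress) > 5
```

If the "write" held the GIL instead (as a C extension does by default), the worker thread would be starved for the whole duration of the call.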
From: Andrew S. <str...@as...> - 2004-11-30 20:13:23
|
Given that I'm an "interested and intermittent" user, not a "hardcore, must have" user, please don't rank my responses as highly as others'. Also, I've only answered the questions that I feel even remotely qualified or interested to answer. (You may take my other answers as "0".)

>- Image support (in the sense stated in
> http://hdf.ncsa.uiuc.edu/HDF5/doc/ADGuide/ImageSpec.html).
>

Sounds great +1

In conjunction with releasing Python GIL on file write operations: +5

This might make me a hardcore user if I convert my primitive home-brew "movie" code. But this would require releasing the Python GIL on writing the data to HDF5 files. This would allow a multi-threaded Python program to continue working while the (non-Python API using) C code completes. Refer to our previous conversation: http://sourceforge.net/mailarchive/forum.php?thread_id=4963045&forum_id=13760 My email (at sourceforge time 2004-06-22 00:04) describes the (seemingly fairly simple) necessary changes. (Sorry, if you've done this already -- I haven't kept up with the changes in the last few months.)

>- Full index-based searches in Tables (right now, you can only set one
> condition in index-accelerated searches).
>

Sounds good, but not informed enough to judge (+1?)

>- Which present limitations do you find more annoying?
>

(Sorry to keep kicking a dead horse.) Keeping the GIL in long file write operations where the Python API is not touched and therefore it could be released.

>- In which field of engineering, science or business are you using
> PyTables?
>

Neuroscience. |
From: Francesc A. <fa...@ca...> - 2004-11-29 18:16:48
|
Hi List,

Now that PyTables has passed its second birthday, we need your feedback again. So, please, if you are using PyTables and would like this effort to be continued and improved, take some time to respond to this questionnaire; it will help us decide what to do in the next few months.

Following are our plans for the next few months, before releasing the 1.0 version. Could you rate these planned features on a scale from +5 (I absolutely need that) to 0 (I can pass without this)?

- Implement support for variable-length values in Table columns (most likely implemented in a brand-new object called VLTables).
- Implement relationships (apart from the existing hierarchical ones) between objects in the object tree (a kind of symbolic link on a filesystem), and pointers to dataset regions.
- Implement date support in Leaf objects (especially in Tables).
- Full index-based searches in Tables (right now, you can only set one condition in index-accelerated searches).
- Image support (in the sense stated in http://hdf.ncsa.uiuc.edu/HDF5/doc/ADGuide/ImageSpec.html).

Now, the next questions. If you don't want to answer all of them, that's fine.

- What feature do you like most (not necessarily listed before)?
- What feature do you miss most (not necessarily listed before)?
- Would you prefer to see PyTables become (even) faster, or less memory demanding?
- Which present limitations do you find most annoying?
- In which field of engineering, science or business are you using PyTables?

Ok, that's all. I hope some of you will take some of your precious time to fill out the questionnaire. Even a single answer would be much better than our sole opinion!

Thanks,

--
Francesc Altet
Who's your data daddy? PyTables |
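On the "full index-based searches" item: until multi-condition indexed searches exist, two single-condition index searches can be combined by hand by intersecting the row coordinates they return. A minimal sketch with plain Python sets (the row lists below are made-up stand-ins for what index-accelerated searches would return; this is not PyTables API):

```python
# Simulated results of two single-condition, index-accelerated searches:
# each one returns the row numbers satisfying its condition.
rows_pressure = [2, 5, 7, 9, 12]    # e.g. rows where pressure > 10
rows_energy   = [1, 5, 9, 10, 12]   # e.g. rows where energy < 100

# Combining the two conditions is a set intersection on the row
# coordinates; the table then only needs to be read at the surviving rows.
both = sorted(set(rows_pressure) & set(rows_energy))

print(both)  # -> [5, 9, 12]
```

This does two index lookups instead of one, but each lookup stays index-accelerated, and only the intersected rows are actually fetched from disk.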
From: Francesc A. <fa...@py...> - 2004-11-19 08:23:58
|
Hi Gerry,

These errors do not seem to be serious, IMO. I think they are a consequence of a slight change (read: improvement) in numarray from 1.0 to 1.1. Upgrade to the latest numarray (1.1.1, I think) and I believe these errors will disappear.

Regarding the warnings, I don't know exactly where they come from. The only thing I know is that I was getting them some time ago, and now they have ceased to appear, but don't ask me why because I don't know. You can try updating HDF5 to 1.6.3-patch (get it from ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/src/patches).

It may also help to try the latest PyTables CVS release (branch tagged as Release-0_9_patches), where I've solved a small bug in indexing operations. You can get this CVS branch by issuing the following commands:

$ cvs -d:pserver:ano...@cv...:/cvsroot/pytables login
$ cvs -z3 -d:pserver:ano...@cv...:/cvsroot/pytables co -r Release-0_9_patches pytables

Please tell me how it goes,

Francesc

On Thursday 18 November 2004 21:12, you wrote:
> Hi Francesco,
>
> I ran the test suite and ran into some errors. I've included the
> original test output and the verbose output below.
>
> Thanks for any help you can offer.
>
> Best wishes,
>
> Gerry Wiener
> National Center for Atmospheric Research
>
> light:gerry> python test_all.py
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> PyTables version: 0.9
> Extension version: $Id: hdf5Extension.c,v 1.170 2004/11/05 15:34:42
> falted Exp $
> HDF5 version: 1.6.2
> numarray version: 1.0
> Zlib version: 1.2.1
> LZO version: 1.08 (Jul 12 2002)
> Python version: 2.3.4 (#1, Jun 29 2004, 13:40:38)
> [GCC 3.2.2]
> Platform: linux2-i686
> Byte-ordering: little
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Performing only a light (yet comprehensive) subset of the test
> suite. If you have a big system and lots of CPU to waste and want to
> do a more complete test, try passing the --heavy flag to this script.
> The whole suite will take more than 10 minutes to complete on a > relatively modern CPU and around 100 MB of main memory. > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > Numeric (version 23.3) is present. Adding the Numeric test suite. > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > /d2/gerry/local/src/pytables-0.9/tables/Leaf.py:90: UserWarning: ucl > compression library is not available. Using zlib instead!. > warnings.warn( \ > .............................................................................................................................................................................................................................E................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Warning: > Encountered invalid numeric result(s) in greater_equal > Warning: Encountered invalid numeric result(s) in less_equal > Warning: Encountered invalid numeric result(s) in greater_equal > Warning: Encountered invalid numeric result(s) in less > Warning: Encountered invalid numeric result(s) in greater > Warning: Encountered invalid numeric result(s) in less_equal > Warning: Encountered invalid numeric result(s) in greater > Warning: Encountered invalid numeric result(s) in less > .Warning: Encountered invalid numeric result(s) in less > Warning: Encountered invalid numeric result(s) in less_equal > 
Warning: Encountered invalid numeric result(s) in greater > Warning: Encountered invalid numeric result(s) in greater_equal > ..Warning: Encountered invalid numeric result(s) in less > Warning: Encountered invalid numeric result(s) in greater > Warning: Encountered invalid numeric result(s) in less_equal > Warning: Encountered invalid numeric result(s) in greater_equal > Warning: Encountered invalid numeric result(s) in less_equal > Warning: Encountered invalid numeric result(s) in greater_equal > ...............................................................................................................................................................................................................................................................................................................................................................................................................................E....E........................................................................................................................................................................................................................................................ 
> ====================================================================== > ERROR: Non supported lists object (numerical types) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/d2/gerry/local/src/pytables-0.9/test/test_lists.py", line 157, > in test01_types > WriteRead(a) > File "/d2/gerry/local/src/pytables-0.9/test/test_lists.py", line 29, > in WriteRead > fileh.createArray(root, 'somearray', a, "Some array") > File "/d2/gerry/local/src/pytables-0.9/tables/File.py", line 545, in > createArray > setattr(group, name, Object) > File "/d2/gerry/local/src/pytables-0.9/tables/Group.py", line 571, in > __setattr__ > value._g_putObjectInTree(name, self) > File "/d2/gerry/local/src/pytables-0.9/tables/Leaf.py", line 174, in > _g_putObjectInTree > self._create() > File "/d2/gerry/local/src/pytables-0.9/tables/Array.py", line 116, in > _create > raise type, value > error: setArrayFromSequence: invalid sequence. > > ====================================================================== > ERROR: Checking enlargeable array __setitem__ special method > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/d2/gerry/local/src/pytables-0.9/test/test_earray.py", line 539, > in test05_setitemEArray > object = object * 2 + 3 > File > "/d2/gerry/local/lib/python2.3/site-packages/numarray/numarraycore.py", > line 765, in __mul__ > return ufunc.multiply(self, operand) > error: copy4bytes: access beyond buffer. 
offset=3 buffersize=0 > > ====================================================================== > ERROR: Checking enlargeable array __setitem__ special method > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/d2/gerry/local/src/pytables-0.9/test/test_earray.py", line 539, > in test05_setitemEArray > object = object * 2 + 3 > File > "/d2/gerry/local/lib/python2.3/site-packages/numarray/numarraycore.py", > line 765, in __mul__ > return ufunc.multiply(self, operand) > error: copy4bytes: access beyond buffer. offset=3 buffersize=0 > > ---------------------------------------------------------------------- > Ran 1550 tests in 185.763s > > FAILED (errors=3) > 172.360u 13.860s 3:06.66 99.7% 0+0k 0+0io 1170pf+0w > > > > > Here's the verbose output for test_lists.py: > > light:gerry> python test_lists.py -v > Data integrity during recovery (character types) ... ok > Data integrity during recovery (numerical types) ... ok > Data integrity during recovery (character types) ... ok > Data integrity during recovery (numerical types) ... ok > Data integrity during recovery (character types) ... ok > Data integrity during recovery (numerical types) ... ok > Data integrity during recovery (character types) ... ok > Data integrity during recovery (numerical types) ... ok > Data integrity during recovery (character types) ... ok > Data integrity during recovery (numerical types) ... ok > Non suppported lists objects (character objects) ... ok > Non supported lists object (numerical types) ... ERROR > Data integrity during recovery (character types) ... ok > Data integrity during recovery (numerical types) ... ok > Data integrity during recovery (character types) ... ok > Data integrity during recovery (numerical types) ... ok > Single element access (character types) ... ok > Single element access (numerical types) ... ok > Range element access (character types) ... ok > Range element access (numerical types) ... 
ok > Range element access, strided (character types) ... ok > Range element access (numerical types) ... ok > Negative Index element access (character types) ... ok > Negative Index element access (numerical types) ... ok > Negative range element access (character types) ... ok > Negative range element access (numerical types) ... ok > Single element access (character types) ... ok > Single element access (numerical types) ... ok > Range element access (character types) ... ok > Range element access (numerical types) ... ok > Range element access, strided (character types) ... ok > Range element access (numerical types) ... ok > Negative Index element access (character types) ... ok > Negative Index element access (numerical types) ... ok > Negative range element access (character types) ... ok > Negative range element access (numerical types) ... ok > Single element access (character types) ... ok > Single element access (numerical types) ... ok > Range element access (character types) ... ok > Range element access (numerical types) ... ok > Range element access, strided (character types) ... ok > Range element access (numerical types) ... ok > Negative Index element access (character types) ... ok > Negative Index element access (numerical types) ... ok > Negative range element access (character types) ... ok > Negative range element access (numerical types) ... ok > Single element access (character types) ... ok > Single element access (numerical types) ... ok > Range element access (character types) ... ok > Range element access (numerical types) ... ok > Range element access, strided (character types) ... ok > Range element access (numerical types) ... ok > Negative Index element access (character types) ... ok > Negative Index element access (numerical types) ... ok > Negative range element access (character types) ... ok > Negative range element access (numerical types) ... ok > Testing generator access to Arrays, single elements (char) ... 
ok > Testing generator access to Arrays, multiple elements (char) ... ok > Testing generator access to Arrays, single elements (numeric) ... ok > Testing generator access to Arrays, multiple elements (numeric) ... ok > Testing generator access to Arrays, single elements (char) ... ok > Testing generator access to Arrays, multiple elements (char) ... ok > Testing generator access to Arrays, single elements (numeric) ... ok > Testing generator access to Arrays, multiple elements (numeric) ... ok > Testing generator access to Arrays, single elements (char) ... ok > Testing generator access to Arrays, multiple elements (char) ... ok > Testing generator access to Arrays, single elements (numeric) ... ok > Testing generator access to Arrays, multiple elements (numeric) ... ok > Testing generator access to Arrays, single elements (char) ... ok > Testing generator access to Arrays, multiple elements (char) ... ok > Testing generator access to Arrays, single elements (numeric) ... ok > Testing generator access to Arrays, multiple elements (numeric) ... ok > > ====================================================================== > ERROR: Non supported lists object (numerical types) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "test_lists.py", line 157, in test01_types > WriteRead(a) > File "test_lists.py", line 29, in WriteRead > fileh.createArray(root, 'somearray', a, "Some array") > File "/d2/gerry/local/src/pytables-0.9/tables/File.py", line 545, in > createArray > setattr(group, name, Object) > File "/d2/gerry/local/src/pytables-0.9/tables/Group.py", line 571, in > __setattr__ > value._g_putObjectInTree(name, self) > File "/d2/gerry/local/src/pytables-0.9/tables/Leaf.py", line 174, in > _g_putObjectInTree > self._create() > File "/d2/gerry/local/src/pytables-0.9/tables/Array.py", line 116, in > _create > raise type, value > error: setArrayFromSequence: invalid sequence. 
> > ---------------------------------------------------------------------- > Ran 72 tests in 0.422s > > FAILED (errors=1) > > Here's the verbose output for test_earray.py: > > light:gerry> > light:gerry> python test_earray.py -v > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ...
ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ERROR > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ERROR > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... 
ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... 
ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... ok > Checking enlargeable array iterator with (start, stop, step) ... ok > Checking read() of enlargeable arrays ... ok > Checking enlargeable array __getitem__ special method ... ok > Checking enlargeable array __setitem__ special method ... ok > Checking enlargeable array iterator ... 
ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ...
> /d2/gerry/local/src/pytables-0.9/tables/Leaf.py:90: UserWarning: ucl
> compression library is not available. Using zlib instead!.
>   warnings.warn( \
> ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking earray with offseted numarray strings appends ... ok
> Checking earray with strided numarray strings appends ... ok
> Checking earray with offseted numarray ints appends ... ok
> Checking earray with strided numarray ints appends ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking enlargeable array iterator ... ok
> Checking enlargeable array iterator with (start, stop, step) ... ok
> Checking read() of enlargeable arrays ... ok
> Checking enlargeable array __getitem__ special method ... ok
> Checking enlargeable array __setitem__ special method ... ok
> Checking EArray.copy() method ... ok
> Checking EArray.copy() method (where specified) ... ok
> Checking EArray.copy() method (Numeric flavor) ... ok
> Checking EArray.copy() method (Tuple flavor) ... ok
> Checking EArray.copy() method (List flavor) ... ok
> Checking EArray.copy() method (String flavor) ... ok
> Checking EArray.copy() method (CharArray flavor) ... ok
> Checking EArray.copy() method (checking title copying) ... ok
> Checking EArray.copy() method (user attributes copied) ... ok
> Checking EArray.copy() method (user attributes not copied) ... ok
> Checking EArray.copy() method ... ok
> Checking EArray.copy() method (where specified) ... ok
> Checking EArray.copy() method (Numeric flavor) ... ok
> Checking EArray.copy() method (Tuple flavor) ... ok
> Checking EArray.copy() method (List flavor) ... ok
> Checking EArray.copy() method (String flavor) ... ok
> Checking EArray.copy() method (CharArray flavor) ... ok
> Checking EArray.copy() method (checking title copying) ... ok
> Checking EArray.copy() method (user attributes copied) ... ok
> Checking EArray.copy() method (user attributes not copied) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
> Checking EArray.copy() method with indexes ... ok
> Checking EArray.copy() method with indexes (close file version) ... ok
>
> ======================================================================
> ERROR: Checking enlargeable array __setitem__ special method
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "test_earray.py", line 539, in test05_setitemEArray
>     object = object * 2 + 3
>   File "/d2/gerry/local/lib/python2.3/site-packages/numarray/numarraycore.py", line 765, in __mul__
>     return ufunc.multiply(self, operand)
> error: copy4bytes: access beyond buffer. offset=3 buffersize=0
>
> ======================================================================
> ERROR: Checking enlargeable array __setitem__ special method
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "test_earray.py", line 539, in test05_setitemEArray
>     object = object * 2 + 3
>   File "/d2/gerry/local/lib/python2.3/site-packages/numarray/numarraycore.py", line 765, in __mul__
>     return ufunc.multiply(self, operand)
> error: copy4bytes: access beyond buffer. offset=3 buffersize=0
>
> ----------------------------------------------------------------------
> Ran 228 tests in 16.424s
>
> FAILED (errors=2)
> light:gerry>
-- Francesc Altet
|
From: Francesc A. <fa...@py...> - 2004-11-19 08:09:06
|
Hi Andrea,

On Thursday 18 November 2004 at 18:42, you wrote:
> For many reasons that are not important here, I'd like to have PyTables
> compiled against Numeric and not numarray. Searching the PyTables site
> and docs, I've found the following lines in README.txt:
[snip]
> > # Finally, check for numarray
> > try:
> >     import numarray
> > except:
> >     print """\
> > Can't find a local numarray
> > Python installation.
> > Please, read carefully the README and remember
> > that PyTables needs the numarray package to
> > compile and run."""
>
> Could you explain this a little further? If I change the setup.py
> script to check against Numeric instead of against numarray, will
> PyTables compile anyway?

No, you *need* numarray installed. numarray objects are supported
natively, while Numeric is supported through conversions, i.e. when
saving, the Numeric arrays are converted to numarray objects. This
conversion tries to be as efficient as possible, and for contiguous
Numeric objects, the same buffer area is used:

    naarr = numarray.array(buffer(Narr), type=Narr.typecode(),
                           shape=Narr.shape)

However, when the Numeric object is non-contiguous, a memory copy is
made:

    naarr = numarray.array(buffer(Narr.copy()), type=Narr.typecode(),
                           shape=Narr.shape)

When retrieving the object, the FLAVOR attribute is checked, and if the
object saved was Numeric, then a conversion from numarray back to
Numeric is done. In this case, a memory copy is always made, though:

    Narr = Numeric.array(naarr.tolist(), typecode=naarr.typecode())

Hope that helps you understand what happens behind the scenes,

-- Francesc Altet
|
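numarray and Numeric are long obsolete, but the zero-copy trick described above (wrapping the source array's buffer rather than copying it, falling back to a copy for non-contiguous data) is easy to illustrate with modern NumPy. This is only a sketch of the idea, not the actual PyTables 0.9 code; all names here are illustrative:

```python
import numpy as np

# Contiguous source array: wrap its buffer directly, so no data is copied
# (analogous to numarray.array(buffer(Narr), ...) above).
src = np.arange(6, dtype=np.int32)
wrapped = np.frombuffer(src.data, dtype=src.dtype).reshape(src.shape)
assert np.shares_memory(src, wrapped)   # same buffer, zero-copy

# Non-contiguous source (a strided view): it must be made contiguous
# first, which implies a memory copy -- analogous to Narr.copy() above.
strided = src[::2]
copied = np.frombuffer(np.ascontiguousarray(strided).data,
                       dtype=strided.dtype)
assert not np.shares_memory(src, copied)  # independent buffer
```

The design point is the same as in the message: sharing the buffer is free but only possible when the source layout already matches what the destination expects; otherwise a copy is unavoidable.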
From: Ivan V. i B. <iv...@ca...> - 2004-11-18 15:07:55
|
Hi all!

In the process of adding write ('r+': overwriting, not appending)
support to tables.nodes.FileNode, I have come across a difficulty with
the split between newNode() and openNode(). In the beginning it seemed
a good idea, since the kinds of objects passed as arguments were quite
different (an h5file and creation arguments in the first function, only
a node and a mode in the second one). However, the possibility of
creating a new file node in either append or write mode would make it
necessary to add a 'mode' argument.

It would be fairly easy to add a second 'mode = 'x'' argument to
newNode(), which would not break interface compatibility. Semantic
compatibility would imply 'x' being 'a+'. However, I have pondered
adding support for the 'w', 'a' and 'w+' modes in addition to the
supported 'r' and 'a+', and the aforementioned 'r+'.

Right now, RWFileNode.__init__() does all node creation tasks (with an
admittedly ugly argument list treatment). The addition of those new
modes would imply more cases of node creation, as well as some cases of
node replacement or emptying. This would complicate the __init__()
methods of the new classes.

To avoid the problems of __init__() argument diversity and
implementation complexity, I have considered moving all node creation
and emptying into newNode() and openNode(). In this way, the new
classes would only have to handle existing nodes. Using some mix-in
classes for the readable, writable and appendable behaviours of the
file methods, the FileNode classes would be fairly simple to implement.
The new classes should not cause compatibility problems, since they
keep the same file interface, and the current __init__() methods of the
existing FileNode classes should not have been used directly (in
theory). However, moving node creation into newNode() and openNode()
would most probably cause duplicated behaviour: both functions would
end up calling the same function.
In this way I have got to the idea of dropping newNode() and using a
new definition of openNode() (in a similar way to file() or open()).
The new signature would be:

    openNode(h5file, path, mode = 'r', title = '', filters = None,
             expectedsize = None)

By taking out the arguments only related to FileNode, the signature
could be reduced to:

    openNode(h5file, path, title = '', options = FileNodeOptions())

where 'path' would be the full, absolute path of the (maybe new) node.
This signature could be shared with future tables.nodes subpackages
(of course, changing the default value for 'options'). XXXOptions
would be a simple container object for node-dependent options ('mode',
'filters' and 'expectedsize' for a FileNode object).

If name clashing is to be avoided, the function could be named
openFileNode() for the FileNode object. In any case, a new function
tables.nodes.openNode() could be used to open any kind of *existing*
node defined in tables.nodes, by mapping its '_type' attribute to its
opening function.

There is also the question of emptying or replacing the node in the
file when truncating ('w', 'a' and 'w+'). Is replacing the node with a
new one a valid solution? Emptying the file without replacing it is
not possible right now, since EArray objects cannot be truncated
(ummm, this would be a _very_ useful addition!).

Sorry for the long message. It also helped me to sort some things out.
So, what's your opinion on the new openNode() function and the
truncation issue?

Thank you,
Ivan

from ivan.opinion import disclaimer

--
Ivan Vilata i Balaguer   >qo<   http://www.carabos.com/
Cárabos Coop. V.          V V   Enjoy Data
                          ""
|
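To make the mode discussion above concrete, here is a small, purely hypothetical sketch of how a unified openNode() might classify the file-like modes mentioned ('r', 'r+', 'w', 'w+', 'a', 'a+') into actions on the underlying node. None of this is actual PyTables API; every name is illustrative, and a plain dict stands in for the HDF5 file:

```python
# Hypothetical mapping of mode strings onto node actions.
_MODE_TABLE = {
    #       read,  write, append, may-create, truncate-existing
    'r':  (True,  False, False, False, False),
    'r+': (True,  True,  False, False, False),
    'w':  (False, True,  False, True,  True),
    'w+': (True,  True,  False, True,  True),
    'a':  (False, True,  True,  True,  False),
    'a+': (True,  True,  True,  True,  False),
}

def parse_mode(mode):
    """Return (readable, writable, appendable, create, truncate)."""
    try:
        return _MODE_TABLE[mode]
    except KeyError:
        raise ValueError("invalid mode: %r" % (mode,))

def open_node(nodes, path, mode='r'):
    """Toy stand-in for the proposed openNode(): 'nodes' is a dict
    playing the role of the HDF5 file, mapping paths to node contents."""
    readable, writable, appendable, create, truncate = parse_mode(mode)
    if path not in nodes:
        if not create:
            raise IOError("no such node: %s" % path)
        nodes[path] = []        # create an empty node
    elif truncate:
        nodes[path] = []        # replace/empty the node, the very
                                # question raised for 'w'/'w+' above
    return nodes[path]
```

The interesting design point is that 'a'/'a+' create but never truncate, while 'w'/'w+' always truncate, which is exactly where the node-replacement question at the end of the message comes in.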
From: Francesc A. <fa...@py...> - 2004-11-11 10:54:10
|
On Wednesday 10 November 2004 at 15:55, Norbert Nemec wrote:
> find enclosed two tiny patches:
>
> * the first to throw an AttributeError on __getattr__ for nonexistent
> attributes in AttributeSet. Formerly, the routine returned "None",
> which is pretty much against convention in Python and breaks the
> builtin "hasattr" routine

Ok. I see your point. This will cause a backward-incompatible change,
although I think the advantages it brings largely compensate for that.
In addition, I've had to rewrite some internals in PyTables that were
based on the former behaviour. But all has been done and committed to
CVS.

> * the second to correct the behavior for shape=() arrays. These worked
> fine before, except for copying of trees.

That has been committed too.

> PS: In case this list is not the appropriate place for tiny patch
> submissions, please point me to the correct address.

You can also file a bug at the PyTables project site:

http://sourceforge.net/tracker/?group_id=63486&atid=504144

Do as you prefer, although during this stage of development I would
rather see the patches and bug reports on this list, so that I can
discuss them with you.

Thanks for your contribution!

-- Francesc Altet
|
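The hasattr() breakage Norbert mentions is easy to demonstrate with a minimal class. This is a generic illustration, not the actual AttributeSet code:

```python
class ReturnsNone:
    """Mimics the old behaviour: __getattr__ silently returns None."""
    def __getattr__(self, name):
        return None

class RaisesError:
    """Mimics the patched behaviour: missing attributes raise."""
    def __getattr__(self, name):
        raise AttributeError(name)

# hasattr() works by catching AttributeError, so the first class
# claims to have *every* attribute, while the second behaves correctly.
print(hasattr(ReturnsNone(), 'no_such_attr'))   # True  -- misleading
print(hasattr(RaisesError(), 'no_such_attr'))   # False -- correct
```

This is why returning None from __getattr__ is considered broken: any code that probes for optional attributes with hasattr() (or getattr() with a default) silently gets the wrong answer.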
From: Francesc A. <fa...@py...> - 2004-11-11 10:27:59
|
On Wednesday 10 November 2004 at 15:50, Jeff Whitaker wrote:
> >> BTW: I've added a couple more command line options to nctoh5 - the
> >> new version is at http://whitaker.homeunix.org/~jeff/nctoh5. The
> >> new switches are:
>
> Sorry - it's there now.

Ok, so I've checked in your improvements to the nctoh5 utility.

Cheers,

-- Francesc Altet
|