From: Antonio V. <v_a...@us...> - 2005-07-16 10:39:51

Hi Francesc, hi Ivan,

I encountered a problem trying to build the RPM package for PyTables 1.1:

    $ python setup.py bdist_rpm
    [...]
    building RPMs
    [...]
    + env 'CFLAGS=-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m32 -march=i386 -mtune=pentium4 -fasynchronous-unwind-tables' python setup.py build
    Found numarray 1.3.3 package installed
    Traceback (most recent call last):
      File "setup.py", line 46, in ?
        VERSION = open('VERSION').read().strip()
    IOError: [Errno 2] No such file or directory: 'VERSION'
    error: Bad exit status from /var/tmp/rpm-tmp.16679 (%build)

    RPM build errors:
        Bad exit status from /var/tmp/rpm-tmp.16679 (%build)
    error: command 'rpmbuild' failed with exit status 1
    $

I solved it by adding VERSION to the MANIFEST.in file, but I'm not an expert in this kind of question, so maybe this is not the best way to fix the problem.

ciao

P.S. I'm running Fedora Core 4

--
Antonio Valentino
Consorzio Innova
via della Scienza - Zona Paip I
75100 Matera (MT)
Italy

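For reference, the change is just one extra line in MANIFEST.in so that the VERSION file gets shipped with the source distribution (an untested sketch; the exact position of the line in the file should not matter):

    # MANIFEST.in -- make sure the VERSION file ends up in the source tarball
    include VERSION
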
From: Francesc A. <fa...@ca...> - 2005-07-14 13:07:53

Hi List,

After quite a few testing iterations, I'm happy to announce the immediate availability of the newest PyTables 1.1. Many thanks to the people on this list for contributing not only bug reports, fixes and feedback but also code implementing new features (most especially Antonio Valentino). Here follows the official announcement. Enjoy!

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"

=========================
Announcing PyTables 1.1
=========================

In this version you will find support for a nice set of new features, like nested datatypes, enumerated datatypes, nested iterators, support for native multidimensional attributes, a new object for dealing with compressed arrays (CArray), bzip2 compression support and more. Many bugs have been addressed as well.

What it is
==========

**PyTables** is a package for managing hierarchical datasets, designed to efficiently cope with extremely large amounts of data (with support for full 64-bit file addressing). It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code, makes it a very easy-to-use tool for high-performance data storage and retrieval. Perhaps its most interesting feature is that it optimizes memory and disk resources so that data takes much less space (a factor of 3 to 5, and more if the data is compressible) than other solutions such as relational or object-oriented databases. Besides, PyTables I/O for table objects is buffered, implemented in C and carefully tuned, so that you can reach much better performance with PyTables than with your own home-grown wrappings of the HDF5 library.

Changes more in depth
=====================

Improvements:

- ``Table``, ``EArray`` and ``VLArray`` objects now support enumerated types. ``Array`` objects support opening existing HDF5 enumerated arrays. Enumerated types are restricted sets of ``(name, value)`` pairs. Use the ``Enum`` class to easily define new enumerations that will be saved along with your data.

- Now, the HDF5 library is responsible for doing data conversions when datasets are written on a machine with a different byte ordering than the machine that reads them. With this, all the data is converted on the fly and you always get native datatypes in memory. I think this approach is more convenient in terms of CPU consumption when using these datasets. Right now, this only works for tables, though.

- Support for nested datatypes is in place. You can now make table columns that host other columns, to an unlimited depth (well, theoretically; in practice, until the Python recursion limit is reached). Convenient NestedRecArray objects have been implemented as data containers. The Cols and Description accessors have been improved so that you can navigate the type hierarchy very easily (natural naming has been implemented for the task). A small sketch follows this list.

- Added support for native HDF5 multidimensional attributes. Now you can load native HDF5 files that contain fully multidimensional attributes; these attributes will be mapped to NumArray objects. Also, when you save NumArray objects as attributes, they get saved as native HDF5 attributes (before, NumArray attributes were pickled).

- A brand-new class, called CArray, has been introduced. It's mainly like an Array class (i.e. non-enlargeable), but with compression capabilities enabled. The existence of CArray also allows PyTables to read native HDF5 chunked, non-enlargeable datasets.

- The bzip2 compressor is supported. Such support was already in PyTables 1.0, but we forgot to announce it.

- The new LZO2 (http://www.oberhumer.com/opensource/lzo/lzonews.php) compressor is supported. The installer now recognizes whether LZO1 or LZO2 is installed and adapts to it automatically. If both are installed on your system, then LZO2 is chosen. LZO2 claims to be fully compatible (both backward and forward) with LZO1, so you should not experience any problems during this transition.

- The old limit of 256 columns in a table has been lifted. Now you can have tables with any number of columns, although if you use a very high number (i.e. > 1024) you will start to consume a lot of system resources. You have been warned!

- The limit on the length of column names has been lifted as well.

- Nested iterators for reading in tables are supported now.

- A new tutorial section about how to modify values in tables and arrays has been added to the User's Manual.
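As a quick taste of the nested datatypes, here is a small, untested sketch (the file name and column names are made up for illustration; see the User's Manual for the definitive API):

    from tables import openFile, IsDescription, StringCol, IntCol, FloatCol

    class Particle(IsDescription):
        name = StringCol(16)               # a fixed-length string column
        idnumber = IntCol()                # a plain integer column
        class position(IsDescription):     # a nested column with two sub-columns
            x = FloatCol()
            y = FloatCol()

    h5file = openFile("nested-demo.h5", mode="w")
    table = h5file.createTable("/", "particles", Particle, "nested datatype demo")
    # Natural naming lets you walk the nested type hierarchy:
    print table.description.position       # shows the x and y sub-columns
    h5file.close()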
Backward-incompatible changes:

- None.

Bug fixes:

- VLArray now correctly updates its internal number-of-rows counter when opening an existing VLArray object. Now you can add new rows to existing VLArrays without problems.

- Tuple flavor for VLArrays now works as intended, i.e. reading VLArray objects will always return tuples, even in the case of multidimensional Atoms. Before, these operations returned a mix of tuples and lists.

- If a column cannot be indexed because it has too few entries, then _whereInRange is called instead of _whereIndexed. Fixes #1203202.

- You can now call Row.append() in the middle of Table iterators without resetting loop counters. Fixes #1205588.

- PyTables used to give a segmentation fault when removing the last row of a table with the Table.removeRows() method. This is due to a limitation of the HDF5 library. Until this gets fixed in HDF5, a NotImplemented error is raised when trying to do that. Addresses #1201023.

- You can safely break a loop over an iterator returned by Table.where(). Fixes #1234637.

- When removing a Group with hidden child groups, those are now effectively closed.

- Now there is a distinction between shapes 1 and (1,) in tables. The former represents a scalar, and the latter a 1-D array with just one element. This follows the numarray convention for records and makes more sense as well. Before 1.1, both shapes 1 and (1,) were represented by a scalar on disk.

Known bugs:

- Classes inheriting from IsDescription subclasses do not inherit columns defined in the superclass. See SF bug #1207732 for more info.

- Time datatypes are not portable between big-endian and little-endian architectures. This is ultimately a consequence of an HDF5 limitation. See SF bug #1234709 for more info.

Important note for MacOSX users
===============================

The UCL compressor seems to work badly on MacOSX platforms. Until the problem is isolated and eventually solved, UCL will not be compiled by default on MacOSX platforms, even if the installer finds it on the system. However, if you still want UCL support on MacOSX, you can use the --force-ucl flag in setup.py.
Important note for Python 2.4 and Windows users
===============================================

If you want to use PyTables with Python 2.4 on Windows platforms, you will need to get the HDF5 library compiled for MSVC 7.1, aka .NET 2003. It can be found at:
ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win-net.ZIP

Users of Python 2.3 on Windows will have to download the version of HDF5 compiled with MSVC 6.0, available at:
ftp://ftp.ncsa.uiuc.edu/HDF/HDF5/current/bin/windows/5-164-win.ZIP

Where can PyTables be applied?
==============================

PyTables is not designed to work as a relational database competitor, but rather as a teammate. If you want to work with large datasets of multidimensional data (for example, for multidimensional analysis), or just provide a categorized structure for some portions of your cluttered RDBMS, then give PyTables a try. It works well for storing data from data acquisition systems (DAS), simulation software, network data monitoring systems (for example, traffic measurements of IP packets on routers), very large XML files, or for creating a centralized repository for system logs, to name only a few possible uses.

What is a table?
================

A table is defined as a collection of records whose values are stored in fixed-length fields. All records have the same structure and all values in each field have the same data type. The terms "fixed-length" and "strict data types" may seem a strange requirement for a language like Python, which supports dynamic data types, but they serve a useful function if the goal is to save very large quantities of data (such as is generated by many scientific applications, for example) in an efficient manner that reduces demand on CPU time and I/O resources.

What is HDF5?
=============

For those who know nothing about HDF5, it is a general-purpose library and file format for storing scientific data, made at NCSA. HDF5 can store two primary objects: datasets and groups. A dataset is essentially a multidimensional array of data elements, and a group is a structure for organizing objects in an HDF5 file. Using these two basic constructs, one can create and store almost any kind of scientific data structure, such as images, arrays of vectors, and structured and unstructured grids. You can also mix and match them in HDF5 files according to your needs.

Platforms
=========

We are using Linux on top of Intel32 as the main development platform, but PyTables should be easy to compile/install on other UNIX machines. This package has also been successfully compiled and tested on FreeBSD 5.4 with Opteron64 processors, an UltraSparc platform with Solaris 7 and Solaris 8, an SGI Origin3000 with Itanium processors running IRIX 6.5 (using the gcc compiler), Microsoft Windows and MacOSX (10.2, although 10.3 should work fine as well). In particular, it has been thoroughly tested on 64-bit platforms, like Linux-64 on top of an Intel Itanium, AMD Opteron (in 64-bit mode) or PowerPC G5 (in 64-bit mode), where all the tests pass successfully. Regarding Windows platforms, PyTables has been tested with Windows 2000 and Windows XP (using the Microsoft Visual C compiler), but it should also work with other flavors.
Web site
========

Go to the PyTables web site for more details:
http://pytables.sourceforge.net/

To know more about the company behind the PyTables development, see:
http://www.carabos.com/

Share your experience
=====================

Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.

----

**Enjoy data!**

-- The PyTables Team

From: travlr <vel...@gm...> - 2005-07-14 05:30:03

Why is it that dumb mistakes are caught just after sending the email? I had "r"ead... instead of "a"ppend. Sorry, and thanks just the same.

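In other words, the file has to be opened with write intent before renameNode() can work, i.e. something like:

    >>> f = ta.openFile('~/data/h5/vel3/es050105.h5', 'a')   # append mode, not 'r'
    >>> f.renameNode('/', 'es050105', 'es010505_')
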
From: travlr <vel...@gm...> - 2005-07-14 05:23:00

Hi guys,

Little problem here renaming nodes. File permissions are good, so I'm not sure what's wrong here. Thanks

    /usr/bin/python -u -i
    Python 2.4.1 (#1, Jun 17 2005, 03:48:59)
    [GCC 3.4.4 (Gentoo 3.4.4, ssp-3.4.4-1.0, pie-8.7.8)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import tables as ta
    >>> f = ta.openFile('~/data/h5/vel3/es050105.h5','r')
    >>> print f
    /home/stella/data/h5/vel3/es050105.h5 (File) ''
    Last modif.: 'Tue Mar 22 02:39:14 2005'
    Object Tree:
    / (RootGroup) ''
    /es010505_ (Group) 'es010505_'
    /es010505_/VEL (Table(126414L,), zlib(1)) 'VELes010505_'
    >>> f.renameNode('/','es050105','es010505_')
    HDF5-DIAG: Error detected in HDF5 library version: 1.6.4 thread 0.
    Back trace follows.
      #000: H5G.c line 700 in H5Gmove2(): unable to change object name
        major(10): Symbol table layer
        minor(29): Unable to initialize object
      #001: H5G.c line 3243 in H5G_move(): unable to register new name for object
        major(10): Symbol table layer
        minor(29): Unable to initialize object
      #002: H5G.c line 2591 in H5G_link(): unable to create new name/link for object
        major(10): Symbol table layer
        minor(29): Unable to initialize object
      #003: H5G.c line 2281 in H5G_insert(): already exists
        major(10): Symbol table layer
        minor(49): Object already exists
      #004: H5G.c line 1548 in H5G_namei(): unable to insert name
        major(10): Symbol table layer
        minor(53): Unable to insert object
      #005: H5Gstab.c line 242 in H5G_stab_insert(): unable to insert entry
        major(10): Symbol table layer
        minor(53): Unable to insert object
      #006: H5B.c line 965 in H5B_insert(): unable to insert key
        major(09): B-tree layer
        minor(29): Unable to initialize object
      #007: H5B.c line 1330 in H5B_insert_helper(): can't insert maximum leaf node
        major(09): B-tree layer
        minor(53): Unable to insert object
      #008: H5Gnode.c line 1121 in H5G_node_insert(): unable to insert symbol name into heap
        major(10): Symbol table layer
        minor(53): Unable to insert object
      #009: H5HL.c line 961 in H5HL_insert(): no write intent on file
        major(11): Heap layer
        minor(25): Write failed
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/usr/lib/python2.4/site-packages/tables/File.py", line 781, in renameNode
        obj._f_rename(newname)
      File "/usr/lib/python2.4/site-packages/tables/Node.py", line 438, in _f_rename
        self._f_move(newname = newname)
      File "/usr/lib/python2.4/site-packages/tables/Node.py", line 506, in _f_move
        self._g_move(newparent, newname)
      File "/usr/lib/python2.4/site-packages/tables/Group.py", line 413, in _g_move
        super(Group, self)._g_move(newParent, newName)
      File "/usr/lib/python2.4/site-packages/tables/Node.py", line 427, in _g_move
        self._v_parent._g_moveNode(oldPathname, self._v_pathname)
      File "hdf5Extension.pyx", line 1104, in hdf5Extension.Group._g_moveNode
    tables.exceptions.HDF5ExtError: Problems moving the node /es010505_ to /es050105

From: Francesc A. <fa...@ca...> - 2005-07-13 12:37:44

On Wednesday 13 July 2005 14:13, you wrote:
> Thanks a lot Francesc, that was exactly what I needed for tables.
> As I also modify Arrays, what about modifying a row in an array?

Similar thing:

    array[1] = 1
    array[1:3] = [1,3]
    array[1:10:3] = [1,2,3]
    array[1:10, :, ..., 1, 1:2] = [[[[[...]]]]]

So you can use fully extended slices, with the exception that negative values for the step (the third parameter in slice construction) are not allowed. See the reference section for the Array object. By the way, I'm adding a tutorial section on array and table modifications to the User's Manual. It will hopefully appear in the forthcoming 1.1 release.

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"

From: Francesc A. <fa...@ca...> - 2005-07-13 10:17:59

On Wednesday 13 July 2005 11:23, Philou wrote:
> Now, I was wondering whether it was possible to modify only a value in a
> table.
>
> I looked in the PyTables tutorial and found a way to append a row but not
> to modify a value in an existing row.

Yeah, I agree that an example (as well as a section in the tutorial) about how to modify values in tables would be great. Anyway, the information you are looking for is in the reference section. Look at the Table.modifyRows and Table.modifyColumns methods [1] as well as the Table.__setitem__ special method [2]. Look also at the examples for Column.__setitem__ [3].

> If it is not possible, is it possible to remove a row and append another
> row at the same position? (Does deleting a row leave empty space?)

You can delete row(s) using Table.removeRows().

Cheers,

[1] http://pytables.sourceforge.net/html-doc/x2488.html#subsection4.6.2
[2] http://pytables.sourceforge.net/html-doc/x2488.html#subsection4.6.3
[3] http://pytables.sourceforge.net/html-doc/x2928.html

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"

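For instance, an untested sketch along these lines (the description and the values are made up just for illustration; check the reference manual for the exact signatures):

    from tables import openFile, IsDescription, IntCol, FloatCol

    class Reading(IsDescription):      # hypothetical table layout for the demo
        tdc = IntCol()
        pressure = FloatCol()

    h5 = openFile("modify-demo.h5", mode="w")
    table = h5.createTable("/", "readings", Reading, "demo table")
    row = table.row
    for i in range(10):                # fill a few rows first
        row['tdc'] = i
        row['pressure'] = float(i)
        row.append()
    table.flush()

    table.cols.pressure[3] = 99.5                 # modify a single cell (Column.__setitem__)
    table.cols.pressure[5:8] = [1.5, 2.5, 3.5]    # modify a slice of a column
    table.removeRows(0, 2)                        # delete rows 0 and 1 (Table.removeRows)
    table.flush()
    h5.close()
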
From: Philou <Kal...@wa...> - 2005-07-13 09:23:40

Hi list,

I successfully load an HDF5 file, display values in a spreadsheet, modify the spreadsheet, and get the modified value back. Now, I was wondering whether it is possible to modify only a single value in a table.

I looked in the PyTables tutorial and found a way to append a row, but not to modify a value in an existing row.

If it is not possible, is it possible to remove a row and append another row at the same position? (Does deleting a row leave empty space?)

Thanks a lot for your help,

Philippe Collet

From: Francesc A. <fa...@ca...> - 2005-07-06 16:48:41

Hi,

What is the problem with defining a table like:

    class Pressure(IsDescription):
        coords = FloatCol(shape=4)
        pressure = FloatCol()

and assigning values to the coords column (the four dimensions) and to pressure? Please note that if you are worried about data redundancy and ending up with a large dataset, you can activate compression (LZO if you need high-speed access to the data, ZLIB or BZIP2 if you want to compress more). You will be pleased to see how small the resulting data file can be in such a case.

Cheers,

Francesc

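For example, an untested sketch along these lines (the file name, filter settings and values are made up for illustration):

    from tables import openFile, IsDescription, FloatCol, Filters

    class Pressure(IsDescription):
        coords = FloatCol(shape=4)      # (t, x, y, z)
        pressure = FloatCol()

    h5 = openFile("pressure.h5", mode="w")
    filters = Filters(complevel=5, complib="zlib")    # or "lzo" / "bzip2"
    table = h5.createTable("/", "pressure", Pressure, "pressure(t,x,y,z)",
                           filters=filters)
    row = table.row
    for t in range(4):
        for x in range(4):
            for y in range(4):
                for z in range(4):
                    row['coords'] = (float(t), float(x), float(y), float(z))
                    row['pressure'] = 0.0    # whatever your measured value is
                    row.append()
    table.flush()
    h5.close()
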
From: <phi...@ho...> - 2005-07-06 15:48:41

Hi list,

I want to thank you a lot, because I have made real progress towards my aim, and it's due to the PyTables capabilities and the developer community.

I have a problem organising my data in the HDF5 file. The problem is that the data are linked together. For example, I have a pressure depending on the time and space coordinates. I can easily store time in an HDF5 array. What about the space coordinates? Is it better to store them in a table with each column corresponding to a dimension (x, y, z), or to have independent arrays (one for x values, one for y values, and one for z values)?

Furthermore, here is the complex aspect: pressure depends on 4 dimensions (time, x, y, z). I want to store it in a table but I really don't know how, because it is the combination of a value for time, a value for x, a value for y and a value for z that gives me a pressure value. If I have 4 values for time, 4 values for x, 4 values for y and 4 values for z, that makes 1024 values to store, with hierarchy dependencies.

Does someone have an idea about this problem, or perhaps a solution?

Regards,

Philippe Collet

From: <fa...@xo...> - 2005-07-04 17:16:06

On Mon, Jul 04, 2005 at 04:25:33PM +0200, Antonio Valentino wrote:
> +1 the pytables-20050702 snapshot works perfectly on a fresh Fedora Core 4
> installation.

Good. I'll try to release PyTables 1.1 as soon as I can, because it seems quite free of bugs. I'm a bit worried about the UCL problems on MacOSX, though.

> I just realized that you included my name on the "PyTables User's Guide" front
> page. Thank you very very much :))) That was unexpected for me!

Well, you have contributed quite interesting bits, like the CArray objects, a more consistent treatment of attributes, and native HDF5 support for multidimensional attributes. Besides, you have delivered very nice documentation about your patches. So your inclusion in the list of authors of the manual is totally deserved :)

> I read it again from the beginning and found a couple of little inconsistencies.
> Please take a look at the attached file.

Of course, I'll add your suggestions as soon as possible.

Thanks!

Francesc

From: Antonio V. <v_a...@us...> - 2005-07-04 14:37:04

At 21:11 on Friday 24 June 2005, Francesc Altet wrote:
> Hi List,

Hi

> After one week of testing and fixing (many of the fixes reported by
> the users), the beta3 version, a PyTables 1.1 release candidate 1
> version is available for the next round of tests.
[...]

+1, the pytables-20050702 snapshot works perfectly on a fresh Fedora Core 4 installation.

I just realized that you included my name on the "PyTables User's Guide" front page. Thank you very, very much :))) That was unexpected for me!

I read it again from the beginning and found a couple of little inconsistencies. Please take a look at the attached file.

Ciao

--
Antonio Valentino
Consorzio Innova
via della Scienza - Zona Paip I
75100 Matera (MT)
Italy

From: Francesc A. <fa...@ca...> - 2005-06-30 17:24:08

Hi Joost,

On Thursday 30 June 2005 at 14:42 +0200, joost wrote:
> while compiling the 30 June pytables snapshot I get an error. I
> suppose it has something to do with swig but I don't know swig. Can
> anybody help me?

Yes, apply the following patch:

http://lists.copyleft.no/pipermail/pyrex/2004-December/001084.html

to the *Pyrex* sources.

Francesc

From: joost <j.g...@ew...> - 2005-06-30 12:43:06

While compiling the 30 June PyTables snapshot I get an error. I suppose it has something to do with SWIG, but I don't know SWIG. Can anybody help me?

Greetings,
Joost

    building 'tables.hdf5Extension' extension
    Traceback (most recent call last):
      File "setup.py", line 552, in ?
        scripts = ['utils/ptdump', 'utils/ptrepack', 'utils/nctoh5'],
      File "/usr/local/lib/python2.4/distutils/core.py", line 149, in setup
        dist.run_commands()
      File "/usr/local/lib/python2.4/distutils/dist.py", line 946, in run_commands
        self.run_command(cmd)
      File "/usr/local/lib/python2.4/distutils/dist.py", line 966, in run_command
        cmd_obj.run()
      File "/usr/local/lib/python2.4/distutils/command/build.py", line 112, in run
        self.run_command(cmd_name)
      File "/usr/local/lib/python2.4/distutils/cmd.py", line 333, in run_command
        self.distribution.run_command(command)
      File "/usr/local/lib/python2.4/distutils/dist.py", line 966, in run_command
        cmd_obj.run()
      File "/usr/local/lib/python2.4/distutils/command/build_ext.py", line 279, in run
        self.build_extensions()
      File "/usr/local/lib/python2.4/distutils/command/build_ext.py", line 405, in build_extensions
        self.build_extension(ext)
      File "/usr/local/lib/python2.4/distutils/command/build_ext.py", line 442, in build_extension
        sources = self.swig_sources(sources, ext)
    TypeError: swig_sources() takes exactly 2 arguments (3 given)

--
joost <j.g...@ew...>

From: Francesc A. <fa...@ca...> - 2005-06-29 20:18:15

Hi,

If what you need is a series of entries in the same order that the File iterator returns, this would work:

    i = 0
    for node in h5file:
        type_node = node._v_attrs.CLASS
        if type_node == "Group":
            components_list.append({'id': i, 'name': node._v_hdf5name,
                                    'type': type_node, 'shape': '', 'nbrows': '',
                                    'filename': filename, 'fixingValue': "",
                                    'valueSelection': ""})
        elif type_node == "Table":
            components_list.append({'id': i, 'name': node._v_hdf5name,
                                    'type': type_node, 'shape': node.shape,
                                    'nbrows': node.nrows, 'filename': filename,
                                    'fixingValue': "", 'valueSelection': ""})
        i += 1

or something similar, depending on your exact needs.

Cheers,

On Monday 27 June 2005 16:36, phi...@ho... wrote:
> Hi list,
> I asked this question a month ago, but nobody answered it.
> I try again, we never know :).
>
> The print method for an HDF5 file object displays the object tree.
> I would like to create a list containing the names of the HDF5 components
> in the same order as in the HDF5 file object.
> I tried the walkGroups and walkNodes methods, but without success.
>
> Has someone ever tried and succeeded?
> Or do you have any suggestions?
>
> Here is the code I used:
> 1.
>     for node in h5file:
>         components_list.append({'name': node._v_name,
>                                 'type': node._v_attrs.CLASS})
> 2.
>     i = 0
>     for group in self.__h5file.walkGroups("/"):
>         components_list.append({'id': i, 'name': group._v_hdf5name, 'type': "Group",
>                                 'shape': '', 'nbrows': '', 'filename': filename,
>                                 'fixingValue': "", 'valueSelection': ""})
>         for array in self.__h5file.listNodes(group, classname='Array'):
>             components_list.append({'id': i, 'name': array._v_hdf5name, 'type': "Array",
>                                     'shape': array.shape, 'nbrows': array.type,
>                                     'filename': filename, 'fixingValue': "",
>                                     'valueSelection': ""})
>             i = i + 1
>         for table in self.__h5file.listNodes(group, "Table"):
>             components_list.append({'id': i, 'name': table._v_hdf5name, 'type': "Table",
>                                     'shape': table.shape, 'nbrows': table.nrows,
>                                     'filename': filename, 'fixingValue': "",
>                                     'valueSelection': ""})
>             i = i + 1
>         i = i + 1

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"

From: <phi...@ho...> - 2005-06-27 14:36:50

Hi list,

I asked this question a month ago, but nobody answered it. I try again, we never know :).

The print method for an HDF5 file object displays the object tree. I would like to create a list containing the names of the HDF5 components in the same order as in the HDF5 file object. I tried the walkGroups and walkNodes methods, but without success.

Has someone ever tried and succeeded? Or do you have any suggestions?

Here is the code I used:

1.

    for node in h5file:
        components_list.append({'name': node._v_name,
                                'type': node._v_attrs.CLASS})

2.

    i = 0
    for group in self.__h5file.walkGroups("/"):
        components_list.append({'id': i, 'name': group._v_hdf5name, 'type': "Group",
                                'shape': '', 'nbrows': '', 'filename': filename,
                                'fixingValue': "", 'valueSelection': ""})
        for array in self.__h5file.listNodes(group, classname='Array'):
            components_list.append({'id': i, 'name': array._v_hdf5name, 'type': "Array",
                                    'shape': array.shape, 'nbrows': array.type,
                                    'filename': filename, 'fixingValue': "",
                                    'valueSelection': ""})
            i = i + 1
        for table in self.__h5file.listNodes(group, "Table"):
            components_list.append({'id': i, 'name': table._v_hdf5name, 'type': "Table",
                                    'shape': table.shape, 'nbrows': table.nrows,
                                    'filename': filename, 'fixingValue': "",
                                    'valueSelection': ""})
            i = i + 1
        i = i + 1

From: Francesc A. <fa...@ca...> - 2005-06-24 19:12:05

Hi List,

After one week of testing and fixing the beta3 version (many of the fixes reported by the users), a PyTables 1.1 release candidate 1 is available for the next round of tests.

The Windows platform has now been tested, and it passes all the tests (at least using Python 2.4 and Win XP). MacOSX still seems to have some problem with the UCL compressor, although I cannot check it myself.

You can get the package at:

http://www.carabos.com/downloads/pytables/preliminary/

or

http://www.carabos.com/downloads/pytables/snapshots/

if you can wait until after midnight (UTC).

Thanks!

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"

From: Francesc A. <fa...@ca...> - 2005-06-23 16:05:31

On Tuesday 21 June 2005 20:51, dragan savic wrote:
> Is it possible to add fields to an already created table
> and if it is, how can I do that?

Well, no and yes. No, because there is no high-level method to do that. Yes, because you can do it by manipulating the objects a bit. Look at the attachment, where I wrote a simple example of doing that. The output looks like:

    $ python add-column.py
    Contents of the original table:
    NestedRecArray[
    ('Particle: 10', 10, 0, 100.0, 100.0),
    ('Particle: 11', 11, -1, 121.0, 121.0),
    ('Particle: 12', 12, -2, 144.0, 144.0)
    ]
    Contents of the table with column added:
    NestedRecArray[
    ('Particle: 10', 10, 0, 100.0, 100.0, 0),
    ('Particle: 11', 11, -1, 121.0, 121.0, 0),
    ('Particle: 12', 12, -2, 144.0, 144.0, 1)
    ]

In the future, it would be interesting to create a high-level method to ease such an operation.

Cheers,

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"

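For the list archive, the gist of the approach is roughly this untested sketch (the column names, default value and exact steps are illustrative; the actual attached script may differ): define a new description that includes the extra column, create a temporary table with it, copy the rows across, then swap the tables.

    from tables import openFile, IsDescription, StringCol, IntCol, FloatCol

    class ParticleExt(IsDescription):       # the old columns plus the new 'flag' column
        name = StringCol(16)
        idnumber = IntCol()
        speed = FloatCol()
        flag = IntCol()                     # the column being added

    h5 = openFile("particles.h5", mode="a")
    old = h5.root.particles                 # the existing table
    new = h5.createTable("/", "particles_tmp", ParticleExt, "particles + flag")
    row = new.row
    for r in old.iterrows():                # copy the old rows, filling the new column
        for colname in old.colnames:
            row[colname] = r[colname]
        row['flag'] = 0                     # default value for the new field
        row.append()
    new.flush()
    h5.removeNode("/", "particles")                    # drop the old table...
    h5.renameNode("/", "particles", "particles_tmp")   # ...and rename the new one
    h5.close()
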
From: Francesc A. <fa...@ca...> - 2005-06-23 12:07:15

On Thursday 23 June 2005 13:50, phi...@ho... wrote:
> Hi list,
> My question is purely pythonic but is linked to pytables.
>
> I allow the user to select data from an array by letting him type the
> expression of the list comprehension.
>
> For example,
>
> x['pressure'] for x in table.iterrows() if x['TDCcount'] > 3 and
> 20 <= x['pressure'] < 50
>
> I wanted to create a Python list comprehension from the previous string.
>
> Is it possible?

Yes:

    eval("[%s]" % userstring)

However, this practice is not recommended because the user may introduce malicious code (which can be dangerous if, for example, you are developing an app on the public web).

> Is there another solution?

Yes. Parse the info that the user wants, pass it to a function, and return the list of results. This represents much more work, though, so you may want to stick with the previous solution to start with.

Cheers,

-- Francesc

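To make the first option a bit more concrete, here is an untested sketch (the userstring and the table are just placeholders); passing a restricted namespace to eval() limits, but does not eliminate, what the expression can reach:

    userstring = ("x['pressure'] for x in table.iterrows() "
                  "if x['TDCcount'] > 3 and 20 <= x['pressure'] < 50")

    # Build the list from the user's expression, exposing only the names we
    # want the expression to see (this is still not a real sandbox!).
    namespace = {'__builtins__': {}, 'table': table}
    results = eval("[%s]" % userstring, namespace)
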
From: <phi...@ho...> - 2005-06-23 11:52:28

Hi list,

My question is purely pythonic but is linked to PyTables.

I allow the user to select data from an array by letting him type the expression of the list comprehension.

For example,

    x['pressure'] for x in table.iterrows() if x['TDCcount'] > 3 and 20 <= x['pressure'] < 50

I wanted to create a Python list comprehension from the previous string. Is it possible? Is there another solution?

Thanks in advance,

Philippe Collet

From: Francesc A. <fa...@ca...> - 2005-06-23 11:24:55

Ashley,

On Thursday 23 June 2005 04:44, you wrote:
> I tested the 20050622 snapshot. Here are a few changes I made.
>
> AttributeSet now preloads all the attributes. I've been pickling
> classes in the attributes that may not be available when the file is
> opened (it's a bad design -- I'll change it to use class names one
> day), and this raises an exception in AttributeSet.__init__. My
> solution was to add ImportError to the "allowable exceptions" in
> AttributeSet.__getattr__. I did notice that the documentation for
> pickling says that unpickling may raise errors other than
> UnpicklingError.
>
> --- pytables/tables/AttributeSet.py   2005-06-16 12:43:41.000000000 +1000
> +++ pytables22/tables/AttributeSet.py 2005-06-23 12:20:33.000000000 +1000
> @@ -205,7 +205,7 @@
>          if type(value) is str and value and value[-1] == ".":
>              try:
>                  retval = cPickle.loads(value)
> -            except cPickle.UnpicklingError:
> +            except (cPickle.UnpicklingError, ImportError):
>                  retval = value
>          else:
>              retval = value

Yup, your suggestion has been added. Perhaps something like a bare try/except would be better? Ivan, do you see anything wrong with this?

> The other two are just simple mistakes:
>
> --- pytables/tables/File.py   2005-06-18 12:36:31.000000000 +1000
> +++ pytables22/tables/File.py 2005-06-23 12:23:16.000000000 +1000
> @@ -912,7 +912,7 @@
>          """
>          srcObject = self.getNode(where, name=name)
>          dstObject = self.getNode(dstnode)
> -        object._v_attrs._f_copy(dstnode)
> +        srcObject._v_attrs._f_copy(dstnode)
>      def copyChildren(self, srcgroup, dstgroup,

Oops. Solved, and added a couple of tests to check File.copyNodeAttrs(), just in case.

> --- pytables/tables/Table.py   2005-06-21 13:25:17.000000000 +1000
> +++ pytables22/tables/Table.py 2005-06-23 12:21:52.000000000 +1000
> @@ -1666,9 +1666,9 @@
>              index = self._v_file.getNode(indexpathname)
>              # Get the filters values
>              self.indexprops.filters = self._g_getFilters()
> -            description[colname].indexed = 1
> +            getattr(description, colname).indexed = 1
>          except NodeError:
> -            description[colname].indexed = 0
> +            getattr(description, colname).indexed = 0
>          # Add a possible IndexProps property to that
>          if hasattr(self, "indexprops"):

Indeed, you are right. Fixed!

> The tests pass with the exception of a segmentation fault that occurs
> intermittently.
>
> The platform is:
>
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> PyTables version:  1.1-beta4
> Extension version: $Id: utilsExtension.pyx 1019 2005-06-20 12:54:50Z faltet $
> HDF5 version:      1.6.4
> numarray version:  1.4.0
> Zlib version:      1.2.2
> LZO version:       1.08 (Jul 12 2002)
> UCL version:       1.03 (Jul 20 2004)
> BZIP2 version:     1.0.2 (30-Dec-2001)
> Python version:    2.3.5 (#1, Mar 20 2005, 20:38:20)
> [GCC 3.3 20030304 (Apple Computer, Inc. build 1809)]
> Platform:          darwin-Power Macintosh
> Byte-ordering:     big
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>
> I get occasional failures from ``test_earray`` when run inside ``test_all``::
>
>     Checking enlargeable array iterator ... Segmentation fault
>
> It's not a reproducible error. If I run only ``test_earray`` with ``heavy=1``, it succeeds::
[...]
> The crash log is::
>
>     Exception:  EXC_BAD_ACCESS (0x0001)
>     Codes:      KERN_INVALID_ADDRESS (0x0001) at 0x024f8000
>
>     Thread 0 Crashed:
>     0   utilsExtension.so  0x016b4ae4 ucl_nrv2e_decompress_safe_8 + 684 (n2e_d.c:122)
>     1   utilsExtension.so  0x016af480 ucl_deflate + 320 (H5Zucl.c:

Mmm, and is it always this UCL call that crashes? Perhaps this might be a consequence of a bug in UCL for PowerPC platforms, or perhaps (more likely) a bad implementation of the UCL support on my part. I've slightly modified the size of the compression buffer: can you give it a try to see whether this symptom has improved? If problems with the UCL compression library remain on PowerPC, I'll have to disable support for it, at least until I get a fix :-/

> Hope this helps.

Of course it does! PowerPC had not been tested yet for PyTables 1.1. Thanks!

-- Francesc

From: dragan s. <sav...@ya...> - 2005-06-21 18:52:17

Hello!

Is it possible to add fields to an already created table and, if it is, how can I do that?

Thanks!

Best regards,
Dragan.

From: Francesc A. <fa...@ca...> - 2005-06-20 07:21:27

Hi Antonio,

On Saturday 18 June 2005 19:28, Antonio Valentino wrote:
> a really good job ;)

Well, part of it was possible thanks to your efforts ;)

> the test fails on my laptop (see the attached file).
> It seems that the "itemsize" in the _createEArray method is never
> initialized. This should fix the problem:
>
> --- hdf5Extension.pyx (Id 1015 2005-06-17 17:55:14Z)
> +++ hdf5Extension.pyx
> @@ -1319,6 +1319,7 @@
>      complib = PyString_AsString(self.filters.complib)
>      version = PyString_AsString(self._v_version)
>      class_ = PyString_AsString(self._c_classId)
> +    itemsize = atom.itemsize
>      fill_value = <void *>malloc(<size_t> itemsize)
>      if(fill_value):
>        for i from 0 <= i < itemsize:

Of course. The strange thing is that this bug was not exposed on my Debian systems (neither sarge nor sid). Fortunately, your Fedora system revealed it. Well, thanks anyway.

Adéu!

--
Francesc

From: Antonio V. <val...@co...> - 2005-06-18 17:41:34

Hi Francesc,

On Friday, 17-06-2005 at 21:15 +0200, Francesc Altet wrote:
> Hi List,
>
> We are about to release PyTables 1.1. On this version you will find
> support for a nice set of new features, like nested datatypes,
> enumerated datatypes, nested iterators, support for native
> multidimensional attributes, a new object for dealing with compressed
> arrays, bzip2 compression support and more. Many bugs has been
> addressed as well.

a really good job ;)

> If you want to contribute by checking that everything is ok (included
> documentation!), please, download a snapshot from:
>
> http://www.carabos.com/downloads/pytables/snapshots/
>
> after midnight (UTC) of today, June 17th.
>
> Remember that, in order to be able to compile the beast from the
> snapshot you will need Pyrex:
>
> http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/
>
> Thanks!

the test fails on my laptop (see the attached file). It seems that the "itemsize" in the _createEArray method is never initialized. This should fix the problem:

    --- hdf5Extension.pyx (Id 1015 2005-06-17 17:55:14Z)
    +++ hdf5Extension.pyx
    @@ -1319,6 +1319,7 @@
         complib = PyString_AsString(self.filters.complib)
         version = PyString_AsString(self._v_version)
         class_ = PyString_AsString(self._c_classId)
    +    itemsize = atom.itemsize
         fill_value = <void *>malloc(<size_t> itemsize)
         if(fill_value):
           for i from 0 <= i < itemsize:

ciao

--
Antonio Valentino
INNOVA - Consorzio per l'Informatica e la Telematica
via della Scienza - Zona Paip I
75100 Matera (MT)
Italy

From: Francesc A. <fa...@ca...> - 2005-06-18 09:58:30

Hi Elias,

Quoting eli...@gu...:
> Is it possible to read a "hyperslab" from a multidimensional EArray?
> How would I do this with pytables?
>
> For my 1D EArrays I do something like:
>
> >>> offsets = h5file.getNode("/OEF1", "offsets")
> >>> offsetSlice = offsets.read(start=17568, stop=17570)
>
> which works fine.

For bidimensional slices, just use extended slicing:

    offsets[17568:17570, 23:25]

or, for general multidimensional slices:

    offsets[17568:17570, 23:25, ..., 2, 34:100:3]

In general, you can use any extended slice idiom listed in:

http://www.python.org/doc/2.3.4/whatsnew/section-slices.html

except using negative steps.

--
Francesc Altet

From: Francesc A. <fa...@ca...> - 2005-06-17 19:16:12

Hi List,

We are about to release PyTables 1.1. In this version you will find support for a nice set of new features, like nested datatypes, enumerated datatypes, nested iterators, support for native multidimensional attributes, a new object for dealing with compressed arrays, bzip2 compression support and more. Many bugs have been addressed as well.

If you want to contribute by checking that everything is OK (including the documentation!), please download a snapshot from:

http://www.carabos.com/downloads/pytables/snapshots/

after midnight (UTC) of today, June 17th.

Remember that, in order to be able to compile the beast from the snapshot, you will need Pyrex:

http://www.cosc.canterbury.ac.nz/~greg/python/Pyrex/

Thanks!

--
>0,0<   Francesc Altet     http://www.carabos.com/
V  V    Cárabos Coop. V.   Enjoy Data
 "-"
