From: Francesc A. <fa...@ca...> - 2006-03-17 08:52:23
Hi Elias,
On Thursday, 16 March 2006 at 23:42, eli...@gu... wrote:
> I am working with large finite element models and results data from
> hundreds of loading conditions. The amount of data is taxing our resources
> and what I'm doing probably wouldn't be possible without HDF5, nor easy
> without PyTables.
>
> There may be cases where different sets of load conditions could be in
> separate files, and yet they would all share the same finite element (FE)
> model data, i.e., geometry, materials, etc. Instead of each file containing
> its own Group of FE data, I could just H5Fmount the common geometry from a
> single file to be shared by any number of results files.
Yeah, your point is very valid. As I said before, we plan to look into that.
>
> Having given this more thought, I can see how I would be able to accomplish
> this by using an attribute in each results file to store a pathname of the
> file containing the common data. However, the H5Fmount would be nice
> because it is transparent and only one file handle would be needed.
Yes, that's another possibility. Something like:
file1 = tables.openFile('file1.h5')
# Now, let's suppose that the path of the file you want to share is stored
# in root._v_attrs.common. You can access it easily with:
filecommon = tables.openFile(file1.root._v_attrs.common)
# Now, you have all the shared data available in the 'filecommon' handle.
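As a fuller sketch of this attribute-based approach (hypothetical file and node names; newer PyTables releases spell openFile as open_file):

```python
import tables  # PyTables; modern versions use open_file instead of openFile

# Create the shared FE geometry file (hypothetical contents).
with tables.open_file('common.h5', 'w') as f:
    f.create_array('/', 'geometry', [1.0, 2.0, 3.0])

# Create a results file that records, in an attribute, where the shared
# data lives.
with tables.open_file('results.h5', 'w') as f:
    f.create_array('/', 'stresses', [10.0, 20.0])
    f.root._v_attrs.common = 'common.h5'

# A consumer follows the attribute to reach the shared geometry.
with tables.open_file('results.h5') as results:
    with tables.open_file(results.root._v_attrs.common) as common:
        geometry = common.root.geometry.read()
```

Any number of results files can carry the same attribute and so share one geometry file, at the cost of a second file handle per reader.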
What you are suggesting, I think, is something like:
mountpoint = file1.mountFile('/common', file1.root._v_attrs.common)
which would give access to the shared file under file1.root.common.
Mmm, now that I think more about it, I'm not so sure that the latter
approach would be much better than the former. Do you envision some
important advantage to the "mount thing"?
Regards,
--
>0,0<  Francesc Altet     http://www.carabos.com/
 V V   Cárabos Coop. V.   Enjoy Data
"-"