Thanks for your reply. I have no great answer. I wrote a script that assembles the tiles into global arrays in memory and then writes them to disk. It is about 5x slower than simply copying the files on the filesystem, so it is not too bad, but probably not too good either. A colleague is comparing it to the standard Python tool for doing this (with MOM6 files), so I should know how it stacks up in a few days.
Hello Charlie, I have a bunch of small files containing the results of a big computation, and each of the small files contains data from a rectangular patch of the domain, a tile. Thus, each file more or less corresponds to a chunk, to use the terminology of netCDF compression. I would like to merge the small files into a single large file. My current approach involves reading all tiles, assembling a large global array in memory, and then writing the global array to a freshly created netCDF file....
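(For concreteness, a minimal sketch of that assemble-in-memory approach with netCDF4-python. It assumes 2-D tiles whose position in the global grid can be recovered, here from per-file offset attributes; the filenames, variable name, and offset attributes are all hypothetical, not taken from the thread.)

```python
# Sketch: read every tile, place it in a global array, write once.
# Grid size, filenames, variable name, and offset attributes are assumed.
import glob
import numpy as np
import netCDF4

NY, NX = 1024, 1024          # global grid size (assumed known)
VAR = "sst"                  # variable to merge (hypothetical name)

global_arr = np.empty((NY, NX), dtype=np.float64)

for path in glob.glob("tile_*.nc"):
    with netCDF4.Dataset(path) as tile:
        j0 = tile.getncattr("tile_j_offset")    # hypothetical offset attrs
        i0 = tile.getncattr("tile_i_offset")
        patch = tile.variables[VAR][:]          # read the whole tile
        ny, nx = patch.shape
        global_arr[j0:j0 + ny, i0:i0 + nx] = patch

# Write the assembled array to a freshly created netCDF-4 file in one shot.
with netCDF4.Dataset("merged.nc", "w", format="NETCDF4") as out:
    out.createDimension("y", NY)
    out.createDimension("x", NX)
    v = out.createVariable(VAR, "f8", ("y", "x"))
    v[:] = global_arr
```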
I see, thanks. This is not highly important, but it is something I run into occasionally....
Hi Charlie, Is there a way to force ncrcat and friends to nf_sync their intermediate...
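(For context: nf_sync is the Fortran binding of the netCDF library's sync call, nc_sync in C and Dataset.sync() in netCDF4-python. Whether ncrcat exposes this is not settled in the thread; the sketch below only illustrates the idea of syncing periodically during a long write, with hypothetical file and variable names.)

```python
# Sketch: periodically flush a long-running netCDF write to disk, the
# netCDF4-python analogue of calling nf_sync between records.
# File and variable names are hypothetical.
import numpy as np
import netCDF4

with netCDF4.Dataset("long_output.nc", "w", format="NETCDF4") as out:
    out.createDimension("time", None)           # unlimited record dimension
    v = out.createVariable("field", "f8", ("time",))

    for rec in range(100_000):
        v[rec] = np.random.rand()               # append one record
        if rec % 1000 == 999:
            out.sync()                          # flush library buffers so the
                                                # on-disk file reflects progress
```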
Hi Charlie, Your suggestion seems to work. The ncrcat is still slow; it is producing...
Thanks a lot. I'll try converting the files using nccopy -3 in.nc4 out.nc and then...
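(The post is truncated here; one plausible continuation, offered purely as an assumption, is to concatenate the classic-format copies with ncrcat and convert the result back to netCDF-4 afterwards. A sketch of that round-trip, with illustrative filenames:)

```python
# Hypothetical round-trip: convert netCDF-4 inputs to classic format,
# concatenate, then convert back. Filenames are illustrative; nccopy
# and ncrcat must be on PATH.
import subprocess

inputs = ["a.nc4", "b.nc4"]

classic = []
for f in inputs:
    out = f.replace(".nc4", "_c3.nc")
    subprocess.run(["nccopy", "-3", f, out], check=True)   # netCDF-4 to classic
    classic.append(out)

# Concatenate the classic files along the record dimension.
subprocess.run(["ncrcat"] + classic + ["merged_c3.nc"], check=True)

# Convert the merged result back to netCDF-4 if desired.
subprocess.run(["nccopy", "-4", "merged_c3.nc", "merged.nc4"], check=True)
```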
Hi Charlie and friends, I have two large (6GB and 1GB) files I am trying to merge...