I've got a 94GB file that I'd like to compute anomalies for. I've successfully computed the mean using "ncra" but when I try to run "ncbo", I get:
ncbo: ERROR nco_malloc() unable to allocate 10952873100 bytes
ncbo: INFO NCO has reported a malloc() failure. malloc() failures usually indicate that your machine does not have enough free memory (RAM+swap) to perform the requested operation. As such, malloc() failures result from the physical limitations imposed by your hardware. Read http://nco.sf.net/nco.html#mmr for a description of NCO memory usage. There are two workarounds in this scenario. One is to process your data in smaller chunks. The other is to use a machine with more free memory.
I take it there is no way to tell ncbo to process the input file one record at a time, right? (it has an unlimited time dimension).
So is the only way to physically split the input netcdf file into multiple netcdf files? (I can't move to another machine with more memory - I've got 396 of these 94GB files to process!)
> I take it there is no way to tell ncbo to process the input file one record at a time, right?
> So is the only way to physically split the input netcdf file into multiple netcdf files?
that's the kludgiest way. a slightly more sophisticated (and disk-friendly) way is to difference smaller hyperslabs of the files you have, i.e., use ncbo -d time,0,10 in.nc obs.nc diff1.nc; ncbo -d time,11,20 in.nc obs.nc diff2.nc; …
so you write a shell script to loop over time and complete the differencing in pieces, then use ncrcat to glue the results together.
to be more clear, the script should call NCO something like this…
ncbo -d time,0,10 in.nc obs.nc diff01.nc
ncbo -d time,11,20 in.nc obs.nc diff02.nc
…
ncbo -d time,91,100 in.nc obs.nc diff10.nc
ncrcat diff??.nc diff.nc
(zero-padding the chunk number keeps the diff??.nc glob in numerical order, so ncrcat concatenates the chunks in the right sequence)
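the loop above can be sketched as a small POSIX shell script. this is a minimal sketch, not a tested recipe: it assumes 100 total records, a chunk size of 10, and the file names in.nc/obs.nc from the example above; adjust ntime to your actual record count. it prints each ncbo command as a dry run — change echo to eval (or drop it) to actually execute.

```shell
#!/bin/sh
# Difference a large file against a climatology in chunks along the
# record (time) dimension, then glue the pieces back together.
ntime=100   # total number of records (assumption -- set to your file's count)
chunk=10    # records per chunk
srt=0       # starting record index of the current chunk
idx=1       # chunk counter, used to number the output files

while [ "$srt" -lt "$ntime" ]; do
  end=$((srt + chunk - 1))
  [ "$end" -ge "$ntime" ] && end=$((ntime - 1))
  # zero-pad the index so the diff??.nc glob later sorts numerically
  out=$(printf 'diff%02d.nc' "$idx")
  cmd="ncbo -d time,$srt,$end in.nc obs.nc $out"
  echo "$cmd"   # dry run: print the command; use eval "$cmd" to run it
  srt=$((end + 1))
  idx=$((idx + 1))
done

echo "ncrcat diff??.nc diff.nc"   # final step: concatenate the chunk diffs
```

with ntime=100 and chunk=10 this emits ten ncbo commands (diff01.nc … diff10.nc) followed by the ncrcat step; for your 396 files you'd wrap the whole thing in an outer loop over the input file names.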