The netCDF operators NCO version 4.0.5 are ready.
http://dust.ess.uci.edu/nco (Homepage "mirror")
This is a "brown paper bag" release intended to deliver important
bugfixes affecting ncks and ncra.
Also worth mentioning is that the latest Debian (Sid) and Ubuntu
(Maverick) packages finally support DAP, netCDF4, and UDUnits2.
Those executables are now comparable to what the developers use.
(RedHat/Fedora RPMs have supported these features for many years).
Work on NCO 4.0.6 is underway.
Areas of improvement include DAP transparency and more chunking rulesets.
"New stuff" in 4.0.5 details:
A. Versions 4.0.3--4.0.4 of ncks contain a bug that triggers a
core dump when hyperslabbing (along a non-record dimension) a
netCDF4-format input file into a netCDF4-format output file, e.g.,
ncks -d lat,0,1 in4.nc out4.nc
This bug does not affect netCDF3-format files.
Two workarounds that do not require an NCO upgrade (or downgrade) are
to specify ncks chunking explicitly with, e.g.,
ncks --cnk_plc=all -d lat,0,1 in4.nc out4.nc
or to use ncea instead of ncks for hyperslabbing, e.g.,
ncea -d lat,0,1 in4.nc out4.nc
This works because ncea does a no-op when there is only one input file.
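The hyperslab semantics used above can be sketched in plain Python (a hedged illustration only: the toy grid and the hyperslab() helper are invented here, and NCO's real implementation is in C):

```python
# Minimal sketch of NCO-style hyperslabbing: -d lat,0,1 keeps
# indices 0 through 1 (inclusive) along the "lat" dimension.
def hyperslab(data, dim_index, lo, hi, stride=1):
    """Return the subset of nested-list `data` along one axis.

    NCO's -d dim,lo,hi[,stride] selects indices lo..hi inclusive.
    `dim_index` says which axis corresponds to the named dimension.
    """
    if dim_index == 0:
        return data[lo:hi + 1:stride]
    return [hyperslab(row, dim_index - 1, lo, hi, stride) for row in data]

# A toy 4x3 array: 4 "lat" rows, 3 "lon" columns.
grid = [[r * 10 + c for c in range(3)] for r in range(4)]

# Rough equivalent of: ncks -d lat,0,1 in4.nc out4.nc (lat is axis 0)
subset = hyperslab(grid, 0, 0, 1)
print(subset)  # first two lat rows
```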
B. Fixed a bug where ncra incorrectly treated the record variable as a
fixed variable if it was named in the "coordinates" attribute
of any variable in a file processed with CCM/CCSM/CF metadata
conventions. This bug caused core dumps and even weirder
behavior, such as creating imaginary time slices in the output.
C. There is a known problem triggered by using the stride argument
when accessing a file through the DAP protocol. We are working
to identify and fix the cause of this problem.
ncks -O -F -D 9 -v weasdsfc -d time,100,110,5
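The stride selection in that command can be sketched in Python (a hedged illustration; the helper below is invented, and with -F the indices are 1-based):

```python
def stride_indices(lo, hi, stride):
    """Indices selected by NCO's -d dim,lo,hi,stride (inclusive bounds)."""
    return list(range(lo, hi + 1, stride))

# -d time,100,110,5 selects every 5th record between records 100 and 110,
# i.e. time indices 100, 105, 110 (1-based here because of -F).
print(stride_indices(100, 110, 5))
```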
J. All operators support netCDF4 chunking options:
These options can improve performance on large datasets.
Large file users: Send us suggestions on useful chunking patterns!
More useful chunking patterns may be implemented in NCO 4.0.6.
ncks -O -4 --cnk_plc=all in.nc out.nc
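As a rough illustration of what a chunking policy decides (the policy and the chunk-size cap below are invented for this sketch; NCO's actual --cnk_plc rules are described in its manual):

```python
def chunk_shape(dim_sizes, max_chunk=4096):
    """Toy 'chunk everything' policy: halve the largest dimension's
    chunk length until the whole chunk holds at most `max_chunk`
    elements. Real chunking policies are more sophisticated."""
    shape = list(dim_sizes)

    def count(s):
        n = 1
        for x in s:
            n *= x
        return n

    while count(shape) > max_chunk:
        i = shape.index(max(shape))
        shape[i] = (shape[i] + 1) // 2
    return tuple(shape)

# e.g. a 1000 x 64 variable gets chunks of at most 4096 values each
print(chunk_shape((1000, 64)))
```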
K. Pre-built, up-to-date Debian Sid & Ubuntu Lucid packages:
L. Pre-built Fedora and CentOS RPMs:
M. Did you try SWAMP (Script Workflow Analysis for MultiProcessing)?
SWAMP efficiently schedules/executes NCO scripts on remote servers:
SWAMP can also handle command-line analysis scripts that use operators other than NCO.
If you must transfer lots of data from a server to your client
before you analyze it, then SWAMP will likely speed things up.
N. NCO support for netCDF4 features is tracked at
NCO supports netCDF4 atomic data types, compression, and chunking.
NCO 4.0.5 was built and tested with HDF5 hdf5-1.8.4-patch1 and with
netCDF4. NCO may not build with earlier, and should build with later,
netCDF4 releases. This is particularly true since NCO 4.0.5 takes
advantage of an internal change made to the netCDF
nc_def_var_chunking() API in June 2009.
export NETCDF4_ROOT=/usr/local/netcdf4 # Set netCDF4 location
cd ~/nco;./configure --enable-netcdf4 # Configure mechanism -or-
cd ~/nco/bld;make NETCDF4=Y allinone # Old Makefile mechanism
O. Have you seen the NCO logo candidates by Tony Freeman, Rich
Signell, Rob Hetland, and Andrea Cimatoribus?
Tell us what you think...
GSL functions added to ncap2 in 4.0.5:
(Most desired GSL functions have now been merged.)
Charlie Zender, Department of Earth System Science
University of California, Irvine (949) 891-2429 :)