I noticed another odd behavior with NCO and uncompressed files. I'm using version 4.4.0.
If I try to change the chunking on an uncompressed file I get an
"ERROR NC_EINVAL Invalid argument" error
ncks --cnk_dmn lon,90 inputfile outputfile throws that error if the file is uncompressed but works fine if the file is compressed.
Can you send a link to the file you are using? I don't get that behavior with our test files.
I'm sorry, it turns out I was trying to uncompress and chunk at the same time. For example, I was doing ncks -L 0 --cnk_dmn lon,90 input output. I can do ncks -L n --cnk_dmn lon,90 input output,
and it works as long as n > 0.
Thank you for reporting this bug. We will try to fix it. The workaround for now is to avoid uncompressing files at the same time that you are chunking them.
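Concretely, the workaround amounts to splitting the single command into two passes (filenames here are placeholders, not from the original report):

```shell
# Two-pass workaround: uncompress first, then chunk the
# already-uncompressed file in a separate invocation.
ncks -L 0 input.nc tmp.nc                  # pass 1: uncompress only
ncks --cnk_dmn lon,90 tmp.nc output.nc     # pass 2: chunk only
```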
The current snapshot has a patch that we think fixes the problem you found.
Now NCO should be able to chunk and uncompress at the same time.
Please try it if you can and give us some feedback.
By snapshot do you mean the latest development version you get with
cvs -z3 -d:pserver:firstname.lastname@example.org:/cvsroot/nco co -kk nco
We ran that command and built the resulting checkout, but that didn't seem to fix the problem:
ncks --hst -4 -L 0 --cnk_dmn lon,576 --cnk_dmn lat,361 --cnk_dmn lev,72 inputfile outputfile
where inputfile is compressed at deflate level 2
still fails with the ERROR NC_EINVAL Invalid argument error.
Yes, thank you for trying the snapshot and giving feedback.
You are using a case we never tested for.
Apparently your variables had the shuffle filter set while compressed, but NCO was not unsetting the shuffle filter while uncompressing the variables (i.e., when compressing them to level 0). So, although the variables were uncompressed, they were still shuffled. I didn't know this was possible; I thought the netCDF library might take care of it. In any case, one cannot unchunk a shuffled variable. NCO now turns off shuffling when uncompressing, and the problem goes away. Please retry the latest snapshot on your datasets and let us know if this fixes it.
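For reference, in the netCDF C API the shuffle and deflate filters are set through the same call, which is why clearing only the deflate level can leave a variable shuffled. This is a sketch of the kind of call involved, not NCO's actual code; the helper name is hypothetical:

```c
#include <netcdf.h>

/* Hypothetical helper: fully uncompress one variable in an open dataset.
 * nc_def_var_deflate() takes independent shuffle and deflate flags, so
 * both must be cleared; setting deflate_level to 0 alone would leave
 * the shuffle filter in place. */
int uncompress_var(int ncid, int varid)
{
    return nc_def_var_deflate(ncid, varid,
                              /* shuffle = */ 0,
                              /* deflate = */ 0,
                              /* deflate_level = */ 0);
}
```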