I ran into a similar problem using LabView a couple of years ago.  I don't remember all the specifics of how it was resolved, but here are a few things I can tell you.
1) I loaded your images into Matlab (v7.2.0.232 (R2006a)) and all were read as uint16, 520x696, with the following max values:
i7 = 37152
i8 = 41120
i9 = 37962
i10 = 37156
i11 = 57692
i12 = 35176
i13 = 34036
I next loaded your images using IDL v6.1 and all were read as uint, 696x520, with the following max values:
i7 = 72
i8 = 160
i9 = 296
i10 = 580
i11 = 1802
i12 = 2198
i13 = 4254
The values that Matlab reports are far too high for a 12 bit camera (which can produce at most 4095), so it is clearly not reading the images correctly. IDL seems to get it right.
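Comparing the two sets of numbers suggests what is going on: each Matlab maximum looks like the corresponding IDL maximum left-shifted up to the top of the 16-bit range (for example 37152 >> 9 == 72 and 41120 >> 8 == 160), i.e. Matlab appears to return the raw scaled samples while IDL applies the significant-bits information. A minimal C++ sketch of the recovery, assuming a plain left-shift encoding and a known significant-bit count (the function name is mine, not from any library):

```cpp
#include <cassert>
#include <cstdint>

// Recover the original sample from a PNG sample that was scaled to the
// full 16-bit range by a plain left shift, given the number of
// significant bits (e.g. from the sBIT chunk). Assumes shift encoding,
// not the bit-replication scaling some encoders use.
std::uint16_t descale(std::uint16_t scaled, int significant_bits)
{
    return static_cast<std::uint16_t>(scaled >> (16 - significant_bits));
}
```

Applied to the maxima above, this gives 37152 -> 72 (7 significant bits), 41120 -> 160 (8 bits), up through 34036 -> 4254 (13 bits), matching the IDL values exactly, which suggests each of your images was written with a different sBIT value.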
Here are some notes I dredged up from two years ago:
"To simplify decoders, PNG specifies that only certain sample depths can be used, and further specifies that sample values should be scaled to the full range of possible values at the sample depth. However, the sBIT chunk is provided in order to store the original number of significant bits. This allows decoders to recover the original data losslessly even if the data had a sample depth not directly supported by PNG. We recommend that an encoder emit an sBIT chunk if it has converted the data from a lower sample depth." (see the PNG specification's description of the sBIT chunk)
Now, returning to LabView, I find that the function which writes out the PNG image files has a parameter called "Use bit depth". Regarding this parameter, the documentation says:
"Use bit depth? (false) When saving a signed 16-bit image to a PNG file, IMAQ Vision must convert the data to an unsigned format and shift the data so that most significant bit is always the leftmost bit. Set this parameter to TRUE to use the bit depth information attached to image to perform these conversions. Set this parameter to FALSE to bias the image by adding a constant value to all the pixels in the image such that the lowest negative pixel value in the image maps to zero, and then shifting the image data based on the highest pixel value in the image. "
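As I read that description, the two settings amount to something like the following sketch (the exact IMAQ behavior isn't documented beyond the quote above, so both function names and the details of the FALSE-case shift are my assumptions):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// "Use bit depth = TRUE" (hypothetical reconstruction): shift each sample
// so the most significant bit of the known bit depth becomes bit 15.
std::vector<std::uint16_t> convert_with_bit_depth(const std::vector<std::int16_t>& in,
                                                  int bit_depth)
{
    std::vector<std::uint16_t> out;
    for (std::int16_t v : in)
        out.push_back(static_cast<std::uint16_t>(v) << (16 - bit_depth));
    return out;
}

// "Use bit depth = FALSE" (hypothetical reconstruction): bias so the lowest
// pixel maps to zero, then shift until the highest pixel's top bit is the
// leftmost bit.
std::vector<std::uint16_t> convert_by_range(const std::vector<std::int16_t>& in)
{
    std::int16_t lo = *std::min_element(in.begin(), in.end());
    std::int16_t hi = *std::max_element(in.begin(), in.end());
    std::uint16_t span = static_cast<std::uint16_t>(hi - lo);
    int shift = 0;
    while (shift < 15 && (span << shift) < 0x8000)  // move MSB to bit 15
        ++shift;
    std::vector<std::uint16_t> out;
    for (std::int16_t v : in)
        out.push_back(static_cast<std::uint16_t>(v - lo) << shift);
    return out;
}
```

If that reading is right, the FALSE case is exactly what produces a different effective bit depth per image, since the shift depends on each image's own dynamic range.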
After some discussion of how the images were being loaded by VXL, Amitha Perera implemented a change (see the CVS log entry for 5/12/2006 for vxl/core/vil/file_formats/vil_png.cxx).
I'm not sure if this leads you to a solution, but it could be that with the proper setting in LabView you can get the images written the way you want.
Glen Brooksby

From: Tobias Wood []
Sent: Monday, October 06, 2008 2:43 PM
Subject: [Vxl-users] vil_image_view and PNGs

Hi,

I've been using vil to load and manipulate .png files saved from a LabView program that uses the IMAQ camera controls attached to a 12-bit CCD. The .pngs saved by this are a little weird - they have a variable bit depth depending on the dynamic range of the image, and so use the 'SignificantBits' part of the PNG header. They are weird enough that Matlab loads them incorrectly, and we had to write our own wrapper code around the Matlab load functions.

Until today vil did not seem to have a problem with them, but the following assert fails when loading the latest round of images I captured with the LabView program:

In vil_image_view.txx, line 76: assert(mem_chunk->size() >= n_planes*n_i*n_j*sizeof(T));

When I debug this, mem_chunk->size() is equal to n_planes*n_i*n_j. Examining the 'this' pointer indicates that the pixel_format_ is VIL_PIXEL_FORMAT_BYTE. However, some further digging indicates that sizeof(T) returns 2, hence the failed assert. Further examination of the pngs in Matlab indicates that they are only 7-bit pngs. Usually the camera gives us back more than 8 significant bits, but this set of images is very dark.

Does anyone with more experience of vil and the PNG file format have an idea as to what is going on? For the record, I am loading the files with:

vil_image_view<double> image = vil_convert_cast(double(), vil_load(fileName.c_str()));

and the assert fails within vil_load. I am happy to send a suspect png file to anyone who can help if they want to replicate exactly my situation.

Thanks in advance,
Toby Wood