Command 110 deals with DDS memory. I don't actually know what "DDS" stands for, but whatever it is, it's the kind of memory inside "smart" Vernier sensors: the ones that store their own calibration equations and so on.
On LabPro devices, you can dump the contents of this memory by sending Command 110. (Interestingly, I2C is used to communicate with the sensor behind the scenes; see the VernierLib Arduino library code.) The syntax is s{110,<channel number>,-1}. I don't know what the -1 means; maybe it means "raw"? In any case, it's what Logger Lite (and apparently Logger Pro) send. Although the contents are returned in the usual scientific notation, they are actually the raw bytes of the DDS memory, each interpreted as an 8-bit integer. Here's what was returned when Logger Lite queried a pH probe connected to channel 3 (line breaks and comments added for clarity):
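As a sketch of how a host program might build that query string (build_dds_command is my own name, not anything from Vernier's documentation):

```python
# Sketch: build the Command 110 query string for a given channel.
# build_dds_command is a hypothetical helper of my own.
def build_dds_command(channel: int) -> str:
    # -1 is what Logger Lite sends; its exact meaning is unknown.
    return "s{110,%d,-1}" % channel

print(build_dds_command(3))  # s{110,3,-1}

# To actually send it you would need a serial link to the LabPro, e.g. with
# pyserial (untested sketch; the port name and the "\r" terminator are my
# assumptions):
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 38400) as port:
#       port.write(build_dds_command(3).encode("ascii") + b"\r")
```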
s{110,3,-1}
{
  +1.00000E+00,                                              # DDS memory layout version
  +2.00000E+01,                                              # Sensor number (20 = pH)
  +1.60000E+02, +1.20000E+02, +1.00000E+00,                  # Sensor serial number
  +3.00000E+00, +2.10000E+01,                                # Sensor lot code
  +0.00000E+00,                                              # Manufacturer ID
  +8.00000E+01, +7.20000E+01, +0.00000E+00, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Long sensor name
  +8.00000E+01, +7.20000E+01, +0.00000E+00, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Short sensor name
  +1.00000E+00,                                              # Uncertainty (not sure what this means)
  +6.90000E+01,                                              # Significant figures (not sure what this means either; Go! IO SDK says to ignore it)
  +7.00000E+00,                                              # Current requirement of sensor (milliamps)
  +1.00000E+00,                                              # "Averaging" (not sure what that means either; maybe 1 = "take an average because readings fluctuate"?)
  +0.00000E+00, +0.00000E+00, +1.28000E+02, +6.30000E+01,    # Little-endian float, minimum sample period, seconds
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +6.40000E+01,    # Little-endian float, typical sample period, seconds
  +6.00000E+01, +0.00000E+00,                                # Typical number of samples (little-endian 16-bit int)
  +3.00000E+01, +0.00000E+00,                                # Warm-up time, seconds (little-endian 16-bit int)
  +1.00000E+00,                                              # Experiment type for LoggerPro
  +1.40000E+01,                                              # Operation type, for Command 1
  +1.00000E+00,                                              # Conversion equation type, for Command 4
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Suggested Y-axis min value (float)
  +0.00000E+00, +0.00000E+00, +9.60000E+01, +6.50000E+01,    # Suggested Y-axis max value (float)
  +1.40000E+01,                                              # Suggested Y-axis scale
  +0.00000E+00,                                              # Highest valid calibration page index
  +0.00000E+00,                                              # Currently active calibration page
  +3.10000E+01, +1.33000E+02, +9.10000E+01, +6.50000E+01,    # Coefficient A of calibration page 0
  +2.03000E+02, +1.61000E+02, +1.17000E+02, +1.92000E+02,    # Coefficient B of calibration page 0
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Coefficient C of calibration page 0
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00,                  # Units for calibration page 0
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Coefficient A of calibration page 1
  +0.00000E+00, +0.00000E+00, +1.28000E+02, +6.30000E+01,    # Coefficient B of calibration page 1
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Coefficient C of calibration page 1
  +4.00000E+01, +8.60000E+01, +4.10000E+01, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00,                  # Units for calibration page 1
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Coefficient A of calibration page 2
  +0.00000E+00, +0.00000E+00, +1.28000E+02, +6.30000E+01,    # Coefficient B of calibration page 2
  +0.00000E+00, +0.00000E+00, +0.00000E+00, +0.00000E+00,    # Coefficient C of calibration page 2
  +4.00000E+01, +8.60000E+01, +4.10000E+01, +0.00000E+00,
  +0.00000E+00, +0.00000E+00, +0.00000E+00,                  # Units for calibration page 2
  +5.90000E+01                                               # Parity (XOR of all the previous bytes)
}
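Since each returned value is just a byte rendered in scientific notation, converting a reply back to raw bytes is a matter of parsing floats. Here's a minimal sketch (parse_dds_response is my own name; it assumes the brace-and-comma format of the raw reply, without my added comments):

```python
def parse_dds_response(text: str) -> bytes:
    """Turn a reply like "{ +1.00000E+00, +2.00000E+01 }" into raw bytes."""
    inner = text.strip().strip("{}")
    # Each field is a byte value written in scientific notation.
    return bytes(int(float(field)) for field in inner.split(","))

reply = "{ +1.00000E+00, +2.00000E+01, +1.60000E+02 }"
print(list(parse_dds_response(reply)))  # [1, 20, 160]
```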
Luckily, Vernier has kindly (though obscurely) documented this information in the Go! IO SDK. The first byte is the version of the data format. To my (not very extensive) knowledge, version 1 is the only version in use.
The next byte is the sensor number (20 in this case), which matches the sensor number returned by Command 80. The Go! IO SDK says that if the sensor ID is ≥ 20, then it's a "smart" sensor (i.e. one with DDS memory). However, there are exceptions, such as the colorimeter (COL-BTA), which has an ID of 54. This number is also referred to as BaseID in the sensormap.xml file shipped with Logger Lite and Logger Pro.
The next 3 bytes form the serial number; it's a 3-byte little-endian integer, so you have to convert each 8-bit number listed here to binary, invert the order they're listed in, and put them together to get the 24-bit integer for the serial number.
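For this sensor those three bytes are 160, 120, and 1; reversing their order and concatenating them gives the 24-bit serial number. In Python, int.from_bytes does the reordering for you:

```python
# Serial-number bytes exactly as they appear in the dump.
serial_bytes = bytes([160, 120, 1])
serial_number = int.from_bytes(serial_bytes, "little")
print(serial_number)  # 96416, i.e. 1*256**2 + 120*256 + 160
```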
The next two bytes give the year and week of manufacturing. I think this says that it was manufactured in week 21 of year 3 (whatever that means—2003?).
The next byte gives the manufacturer ID, which is zero in this case, so I think we can assume that ID 0 = Vernier.
We then have 20 bytes that store the sensor name. If you look at an ASCII table, you'll notice that 80 and 72 correspond to the letters "PH", the same name returned by Command 116. The rest of the field is null bytes, since the sensor name doesn't take up all 20 bytes. Then there are 12 bytes to store the short sensor name (what's returned by Command 117). In this case, it's the same as the long sensor name: "PH".
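Decoding the names is just ASCII plus stripping the null padding. For example, for the 20-byte long-name field:

```python
# 20-byte long-name field: "PH" followed by 18 bytes of null padding.
long_name_bytes = bytes([80, 72] + [0] * 18)
long_name = long_name_bytes.rstrip(b"\x00").decode("ascii")
print(long_name)  # PH
```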
The name strings are followed by 1 byte indicating the "uncertainty" of the sensor. I have no idea what that means, and the number conflicts with what sensormap.xml seems to indicate should have been burned into the sensor at the factory, so I think it's safe to ignore this number.
The uncertainty byte is followed by a byte for "Significant figures." The Go! IO SDK says that this field is only understood properly by TI calculators, and that it is better to count significant figures by considering the resolution of the 5V analog-to-digital converter.
There is then a byte for the sensor's current requirements, in milliamps, followed by a byte labeled only as "averaging." I think a 1 here means that the programmer should take several readings and find an average to avoid errors. Another possibility is that it is related to the number of points used to calculate a derivative.
Next are the minimum and typical sample periods (the time between samples). These are each 32-bit floats; see below for how to parse them. For some reason, the minimum sample period given here conflicts with sensormap.xml: the sensormap says the minimum time between samples is 0.25 seconds, while this sensor says it is 1 second.
The next two bytes are the typical number of samples, 60 in this case. Remember, it's little-endian, so the first byte is the least significant. This is followed by another little-endian 16-bit int, the warm-up time for the sensor (30 seconds here).
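Reassembling those 16-bit fields, least-significant byte first:

```python
# 16-bit little-endian fields from the dump.
typical_samples = int.from_bytes(bytes([60, 0]), "little")
warm_up_seconds = int.from_bytes(bytes([30, 0]), "little")
print(typical_samples, warm_up_seconds)  # 60 30

# The high byte only matters above 255; e.g. a hypothetical [44, 1]
# would mean 44 + 1*256 = 300.
```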
Next is a byte for the LoggerPro experiment type. I'm not sure exactly what this means. It's followed by a byte for the Command 1 operation type, which here is 14 (which means voltage 0–5 V). There is also a byte for the conversion equation type (1 in this case), which for LabPro is a polynomial.
Next are the suggested Y-axis parameters for graphing, stored as floating-point values. The floats are the complicated part, since each one is chopped up into four 8-bit ints. Here's how to work through them (from a paper-and-pencil perspective):
1. The floats are little-endian, which means the first byte is the least significant. The LabPro already hands us each byte as an ordinary integer; it's the byte order across the four of them that's reversed. So we take the binary representation of the final 8-bit int and move it to the front.
2. We continue the pattern (the second-to-last int's binary representation follows the final int's, and so on). If done properly, you should get the following in binary: 0 10000010 110 0000 0000 0000 0000 0000.
3. The float is in the IEEE 754 biased format, so we convert it to decimal as follows: The first bit is a zero, so the number overall is positive. The next 8 bits are the exponent with the bias taken into account, and in decimal equal 130. The bias is 127 (IEEE 754 uses a bias so that negative exponents are possible), so when we subtract, we get an exponent of 130 - 127 = 3.
4. Next, we consider the remaining 23 bits. In IEEE 754, the mantissa (a fancy word for the thing that gets multiplied by a power of 2) is assumed to start with a 1, so that leading bit is left out of the actual float. If we were to write out the mantissa of the above float as a binary fraction, we would have 1.11. But remember, the exponent tells us to move the point three places to the right! So the actual binary representation of the number is 1110, which in decimal equals 14. Coincidence that that's the (typical) max value of the pH scale? I think not!
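The pencil-and-paper procedure in steps 1–4 is exactly what Python's struct.unpack does when given the little-endian 32-bit float format "<f":

```python
import struct

# The four bytes of the suggested Y-axis maximum, in the order LabPro listed them.
raw = bytes([0, 0, 96, 65])
(y_max,) = struct.unpack("<f", raw)
print(y_max)  # 14.0
```

The same call decodes the sample-period fields above; for instance, the minimum sample period bytes [0, 0, 128, 63] come out as 1.0 second.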
Then there is the recommended Y-axis scale of 14, which, at least on TI calculators, means there would be a tick mark on the graph every 14 units. In my opinion, that's not a great default for the pH scale, but it matches the BurnParams in sensormap.xml, so at least we know we're interpreting it correctly.
Next is the "highest valid calibration page index" byte. In this case, it's zero even though this pH sensor has room for 3 calibration pages. I think that's because no other calibration pages have been set up for pH from the factory (see below). It is followed by a byte that stores the index (0 for the first calibration, 1 for the second, 2 for the third) of the currently active calibration page.
Following the "Current calibration index" byte are the three calibration pages. Each calibration page consists of three floats, labeled as Coefficient A, Coefficient B, and Coefficient C in the Go! IO SDK, but probably more familiar as the parameters K0, K1, and K2 for Command 4. Each calibration page also includes 7 bytes to store the units. Calibration page 0 in this case has no units, because pH is "just a number." But note that calibration pages 1 and 2 have units. The three nonzero bytes correspond to the characters "(V)", and the fact that (if you work through the floating-point conversion) Coefficient A = 0 and Coefficient B = 1 indicates that these calibrations allow raw voltage measurements.
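Running the page 0 coefficient bytes through the same float conversion (struct is standard; the dict layout and names are mine) gives the familiar linear pH calibration, which would make the conversion roughly pH = 13.72 − 3.838 × volts:

```python
import struct

# Calibration page 0 coefficient bytes, as listed in the dump.
page0 = {
    "A": bytes([31, 133, 91, 65]),
    "B": bytes([203, 161, 117, 192]),
    "C": bytes([0, 0, 0, 0]),
}
coeffs = {name: struct.unpack("<f", raw)[0] for name, raw in page0.items()}
print({name: round(value, 3) for name, value in coeffs.items()})
# {'A': 13.72, 'B': -3.838, 'C': 0.0}
```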
The last byte is the parity byte (assuming the Go! IO SDK is correct). I didn't bother to actually check whether it is. :/
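Assuming the SDK is right that the final byte is the XOR of everything before it, a check would look something like this (dds_parity is my own helper; I haven't verified it against a real dump):

```python
def dds_parity(payload: bytes) -> int:
    """XOR all the bytes together."""
    result = 0
    for b in payload:
        result ^= b
    return result

# To verify a dump: XOR every byte except the last and compare
# with the last byte. Tiny illustrative example: 1 ^ 20 ^ 160 = 181.
print(dds_parity(bytes([1, 20, 160])))  # 181
```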