From: SourceForge.net <no...@so...> - 2003-05-05 14:40:23
Read and respond to this message at: https://sourceforge.net/forum/message.php?msg_id=2001746

By: beanmf

I am by no means an expert on computer representation of numbers, but I tested Tom's assertion about the number 10 in C#.NET:

    double test = 10;
    double newTest = test;

The value held in newTest was displayed as 10.0, which is the same value as 10 but shown with a different precision.

In our past discussions about precision, we distinguished between measurement precision and representation precision. If we acquire data with an ADC, realistic measurement precision may be hard to assess from a graph without further information; measurement precision was therefore considered to lie outside the domain of mere data representation. I cannot see why we would care whether a number is displayed as 10.0 or as 10, so long as the measurement precision is accessible somewhere.

Numbers represented as strings can be awkward to hold in arrays in languages such as C and its derivatives: the array element size must be known at compile time, so every number string must fit the same fixed width. That, along with space, is one of the chief reasons for adopting binary rather than character representations.