From: Thomas P. <tom...@li...> - 2013-07-18 17:08:16
> Specific concerns:
>
> (1) Under normal circumstances you'll be taking an integer time from the
> system clock (via [clock seconds]) or from a database, then converting
> to floating point, then running it through [clock format]. The reverse
> route is to scan a date/time string using [clock scan] to get a floating
> point value, then converting that value to an int for storage or precise
> manipulation. I don't see any need to have the floating point step in
> there. It's the wrong approach.

? I do not get your "normal circumstances". My use cases (a small sketch of
use-case 1 follows at the end of this mail):

Use-case 1): Get the system clock via [clock milliseconds] or read it from a
database, then make a graph/plot or display the values in the GUI.

Use-case 2): Get the system clock via [clock milliseconds], store it in a
database, and later make a graph/plot or display the values in the GUI.

> (2) As Kevin points out, the format specifier suggests an equivalent
> scan specifier:
>
> On 2013/07/18 04:00 AM, Kevin Kenny wrote:
> > If we go this route, we should have the same format group recognized
> > by [clock scan], and cause the result to be expressed as a float.
>
> So, based on the value of the -format parameter, [clock scan] may return
> an int or it may return a float. Bad!

Ok - understood. But consider [expr 10/3.]: there is a floating-point result
only if one of the operands has a dot. Something similar could be done in the
[clock] operations.

> Imagine that I have an application that allows the user to specify a
> date/time format string in a config file, and parses inputs using [clock
> scan] and the configured format string. Upgrade to a TIP#423-supporting
> interp and you'll blow up my app by giving me a float when I expect an int.

You have to check the input from the user anyhow. Or what do you do when he
configures an unknown format specifier?

> Counter-proposal:
>
> Add to [clock format], [clock scan] and [clock add] an additional
> optional parameter that identifies the scale/magnitude/resolution of the
> time (formerly 'seconds') argument, one of:
> -millis true|false
> -micros true|false
> -scale 1|1000|1000000 (default 1)
> -resolution seconds|millis|micros (default seconds)
> ... or something similar.

Do you mean that %S would output a float depending on such a switch?

Thx
Thomas
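
For reference, a minimal sketch of use-case 1 as it has to be written with
today's [clock] commands (without TIP#423 or any -resolution switch): the
millisecond timestamp is split by hand into whole seconds for [clock format]
and a millisecond remainder that is re-attached to the label. The timestamp
value and format string are only illustrative.

    # Use-case 1 with the commands that exist today (Tcl 8.5+):
    # take a millisecond timestamp, split it into whole seconds for
    # [clock format] and a millisecond remainder, and glue the two
    # back together into a label a plot or GUI could display.
    set ms    [clock milliseconds]        ;# e.g. 1374167296123
    set secs  [expr {$ms / 1000}]         ;# integer seconds
    set frac  [expr {$ms % 1000}]         ;# leftover milliseconds
    set label [format {%s.%03d} \
                  [clock format $secs -format {%Y-%m-%d %H:%M:%S} -gmt 1] \
                  $frac]
    puts $label   ;# prints "2013-07-18 17:08:16.123" for the example value

Under the [expr 10/3.] analogy drawn in the mail, [clock scan] would
presumably return a float only when the scanned string itself carries a
fractional part, so existing calls that never see fractions would keep
returning plain integers; whether that behaviour is acceptable is exactly the
point still under discussion above.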