I'm seeing a significant discrepancy between the logged TimeStamps and the logged duration of the timed function. The following simple test program has a DurationLogger class whose ctor issues a START log (and registers the Duration attribute) and whose dtor issues a STOP log. The class has a logger member variable to which the START/STOP logs are sent:
Based on the TimeStamps above, the expected Durations are:
1. 621019 - 620317 = 702us (rather than the indicated 503us)
2. 620741 - 620526 = 215us (rather than the indicated 86us)
I see even worse discrepancies when running multithreaded, but I have deliberately simplified the problem.
NOTE: I always see that timer1 takes longer than timer2, but I assume that's because timer1 waits for timer2 to complete its logging.
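The full program is still held in moderation, so for context here is a minimal sketch of the kind of class being described (not the actual code; it assumes the Duration attribute is boost::log::attributes::timer, uses a placeholder severity, and omits the sink/formatter setup):

#include <string>
#include <boost/log/common.hpp>
#include <boost/log/trivial.hpp>
#include <boost/log/attributes/timer.hpp>
#include <boost/log/sources/severity_channel_logger.hpp>

namespace logging = boost::log;
namespace attrs = boost::log::attributes;

// Hypothetical sketch: START is logged from the ctor, then the "Duration"
// attribute (attrs::timer) is registered; STOP is logged from the dtor and
// carries the elapsed time via that attribute.
class DurationLogger
{
    logging::sources::severity_channel_logger_mt<logging::trivial::severity_level, std::string> lg_;

public:
    explicit DurationLogger(const std::string& channel) :
        lg_(logging::keywords::channel = channel)
    {
        BOOST_LOG_SEV(lg_, logging::trivial::info) << "START";
        // The timer starts counting here, i.e. only after the START record
        // above has already been handled.
        lg_.add_attribute("Duration", attrs::timer());
    }

    ~DurationLogger()
    {
        // The "Duration" attribute is attached to this record; the formatter
        // (not shown) renders it alongside the TimeStamp.
        BOOST_LOG_SEV(lg_, logging::trivial::info) << "STOP";
    }
};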
I've been looking at "Post awaiting moderation." for 24 hours. Anybody home?
The discrepancy is caused by multiple reasons:
1. The timer starts counting when the attribute is created. In your code this happens after the first log record has been fully processed, which means the time spent processing that first record is mostly included in the timestamp span but not in the timer. If you swap the attribute creation and the first log record, you will see the timers start to register longer durations.
2. The timestamps and the timer value are obtained separately. While this happens relatively close in time, Linux clocks are precise enough to register the different time points.
3. Your test time span is very short. If you increase the sleep duration, you will see that the timestamps and timers indicate roughly the same durations (or rather, the discrepancy becomes insignificant compared to the measured duration). Sub-millisecond durations are difficult to measure reliably on a non-realtime system because many factors are involved, such as memory allocation, thread contention and IO latency.
If you're trying to benchmark some code using Boost.Log, consider either increasing the workload (to increase the measured durations and make the error insignificant) or using more specialized tools, such as measuring CPU cycles instead of reading system clocks.
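For illustration, a minimal sketch of the CPU-cycle approach (assuming x86/x86-64 with GCC or Clang; __rdtsc comes from <x86intrin.h>, and relating cycles to wall time still requires knowing the TSC frequency):

#include <cstdint>
#include <iostream>
#include <x86intrin.h>   // __rdtsc (GCC/Clang on x86/x86-64)

int main()
{
    const std::uint64_t begin = __rdtsc();

    // ... code being benchmarked ...

    const std::uint64_t end = __rdtsc();

    // Raw cycle count only; results are meaningful only with an invariant TSC
    // and when the thread stays on one core (or the TSCs are synchronized).
    std::cout << "elapsed cycles: " << (end - begin) << '\n';
}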
The code presented in this post [which still cannot be seen, as it remains in quarantine] was contrived to be simple and yet demonstrate the discrepancy. In the actual application where these discrepancies were first identified, the durations are longer.
Rather than use Boost.Log's Duration attribute, I have achieved satisfactory results with the following RAII approach:
#include <chrono>
#include <cstdint>
#include <string>
#include <boost/log/common.hpp>
#include <boost/log/trivial.hpp>
#include <boost/log/sources/severity_channel_logger.hpp>

// Logs "<message> BEGIN SCOPE" on construction and "<message> END SCOPE, Elapsed=<N>us"
// on destruction, measuring the interval with std::chrono::steady_clock.
class ScopedLogger
{
    ScopedLogger() = delete;
    ScopedLogger(const ScopedLogger&) = delete;

    boost::log::sources::severity_channel_logger_mt<boost::log::trivial::severity_level, std::string> lg_;
    const boost::log::trivial::severity_level level_;
    const std::string message_;
    const std::chrono::steady_clock::time_point start_time_;

public:
    ScopedLogger(const std::string& channel,
                 const boost::log::trivial::severity_level level,
                 const std::string& message) :
        lg_(boost::log::keywords::channel = channel),
        level_(level),
        message_(message),
        start_time_(std::chrono::steady_clock::now())
    {
        BOOST_LOG_SEV(lg_, level_) << message_ << " BEGIN SCOPE";
    }

    ~ScopedLogger()
    {
        std::chrono::steady_clock::time_point end_time = std::chrono::steady_clock::now();
        uint64_t elapsed_us = std::chrono::duration_cast<std::chrono::microseconds>(end_time - start_time_).count();
        BOOST_LOG_SEV(lg_, level_) << message_ << " END SCOPE, Elapsed=" << elapsed_us << "us";
    }
};
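A usage sketch matching the output below (the channel name "test" and the severity are placeholders):

// Two nested scopes on one thread, as in the log output that follows.
void run_timers()
{
    ScopedLogger timer1("test", boost::log::trivial::info, "timer1");
    {
        ScopedLogger timer2("test", boost::log::trivial::info, "timer2");
        // ... work measured by timer2 ...
    } // "timer2 END SCOPE" is logged here
    // ... work measured only by timer1 ...
}     // "timer1 END SCOPE" is logged here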
Using the above ScopedLogger, the calculated duration is typically within 150us of the TimeStamp delta -- even when substituting ScopedLogger for DurationLogger in the posted example:
$ ./logtest
2017-11-16T19:08:23.093979 <0x00007f6e2cf66740> timer1 BEGIN SCOPE
2017-11-16T19:08:23.094184 <0x00007f6e2cf66740> timer2 BEGIN SCOPE
2017-11-16T19:08:23.094419 <0x00007f6e2cf66740> timer2 END SCOPE, Elapsed=232us
2017-11-16T19:08:23.094499 <0x00007f6e2cf66740> timer1 END SCOPE, Elapsed=679us
Functionally, your code is equivalent to the original one with timers. You can see it also has a discrepancy in durations, though of slightly lower magnitude due to a bit less overhead. Enabling optimization may reduce the difference.
PS: Oddly, I'm not receiving notifications of posts requiring moderation. It may be that this only happens to posts with code samples. I'm not using SF actively, but it wasn't like this before, so something has changed. Not sure how to fix it.
Thanks for your comments; they're much appreciated.
If not SF, which forum would you recommend for Boost.Log-related questions?
Posting here is fine if it is convenient for you. I'm also following the Boost developers mailing list, where you can post your questions with a "[log]" tag in the subject. You can also post on StackOverflow.