From: Stefano M. <mo...@ic...> - 2016-05-31 09:50:43
> On 30 May 2016, at 17:27, Guy COLIN <guy...@gm...> wrote:
>
> Also I think that monitoring the directories isn't the best method.
> I'm going to use a new script to read the devices and check if some are
> missing. It will take some time, no problem, I'm patient ;-)
> I'll keep updating here.

My 1 cent of advice: I have a (Python) script that has been monitoring a network of sensors (DS2438) for years. Of course your mileage may vary, but here are my choices.

1) The list of sensors is hardwired in a configuration file; there is no device listing or discovery at runtime.

2) Data is logged with rrdtool in RRD databases (one for each sensor) with step=60s.

3) Every 30s all sensors are read and the corresponding RRDs updated. If a read fails, don't worry, just skip it.

By doing so you will end up with an RRD database that records, every 60s, the average of two sensor samples. In the case of a failed read, the logged value will be that of a single sample, possibly from a previous time step. (Actually the story is a little more complicated: every 60s a primary data point is generated by averaging the valid samples, typically two. What actually gets recorded in the database depends on the consolidation function used, but that is another story.)

Read errors are so infrequent that I simply stopped looking at the logs. RRD is not an easy tool, but properly implemented and tuned it becomes very robust; on eyeball inspection the graphs generated from the RRD databases are flawless: no missing points, no spikes, no outliers.

Stefano
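The scheme above (points 1-3) can be sketched in a few lines of Python. This is a hypothetical illustration, not Stefano's actual script: the sensor names and the `read_sensor` stub are invented, and the averaging loop only mimics what rrdtool itself would do with step=60 and a GAUGE data source fed every 30 s.

```python
import random

# Hardwired sensor list -- stands in for the configuration file;
# no device discovery at runtime (the names below are made up).
SENSORS = ["DS2438-attic", "DS2438-cellar"]

def read_sensor(sensor_id):
    """Stand-in for a real DS2438 read; returns None on a read error."""
    if random.random() < 0.05:                # occasional failed read
        return None
    return 20.0 + random.random()             # fake temperature reading

def poll_one_step(sensors, samples_per_step=2):
    """Collect the samples of one 60 s step (two 30 s reads) and average them."""
    samples = {s: [] for s in sensors}
    for _ in range(samples_per_step):         # one pass every 30 s
        for s in sensors:
            value = read_sensor(s)
            if value is not None:             # failed read: just skip it
                samples[s].append(value)
    # One primary data point per sensor: the average of the valid samples,
    # or None when every read in the step failed (rrdtool would log UNKNOWN).
    return {s: sum(v) / len(v) if v else None for s, v in samples.items()}

step = poll_one_step(SENSORS)
```

In the real setup each averaged value would instead be pushed to rrdtool with an `update`, and the RRD itself does the averaging; creating the database with a heartbeat of 120 s (twice the 30 s sample interval, with room to spare) is what lets a single missed read go unnoticed.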