
Logging - Dating of logs

2026-01-12
2026-01-15
  • andrew bear

    andrew bear - 2026-01-12

    Hi all,

    For log management purposes it would be useful to have an option to create a new log each day and append a date suffix. I note that the current dating option only creates new dated logs under certain conditions, and this isn't one of them.

    The above option would enable an external program to delete old logs after a certain time.

    Even better would be an internal option to delete old log files after X days.

    Another good option would be a limit Z on the size of the log directory, to prevent bringing the OS to a halt through lack of disk space if a script or cyclic script goes astray. Once the limit has been reached, no new log entries would be written other than a "log size limit reached" message (an indicator of this in the App would be good too). Whilst it would be nice in this case to automatically delete old log files to make room, important data logged before the script or cyclic script went astray may be deleted.
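    For illustration, the requested behaviour boils down to something like the following plain JavaScript sketch (the function names are illustrative only, not ScriptCommunicator API; the actual file writing and deleting are omitted):

    ```javascript
    // Build a log file name with a date suffix, e.g. "log_2026-01-12.txt".
    function datedLogName(base, date) {
      return base + "_" + date.toISOString().slice(0, 10) + ".txt";
    }

    // Pick the dated logs older than maxAgeDays, i.e. the ones an
    // external cleaner (or an inbuilt option) could safely delete.
    function logsOlderThan(names, today, maxAgeDays) {
      var cutoff = today.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
      return names.filter(function (name) {
        var m = name.match(/_(\d{4}-\d{2}-\d{2})\.txt$/);
        return m !== null && Date.parse(m[1]) < cutoff;
      });
    }
    ```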

    In regard to this request, I note this can be done in scripts, as mentioned in the following forum post, but inbuilt options would suit more users:

    https://sourceforge.net/p/scriptcommunicator/discussion/featurerequests/thread/ffe0357420/

    Cheers
    Andy

     

    Last edit: andrew bear 2026-01-12
  • andrew bear

    andrew bear - 2026-01-15

    Note I have nearly finished a script to create a new timestamped log file each day and also limit the log directory size. However, even though the functionality can be implemented in a script, it's a rather slow and technically complex way (with a few headaches) to achieve something I think would be worthwhile building into the app.

    My intention is to share this script here because there are parts in it that many will find useful (directory size reading, daily log files, execution of external commands, and a useful exception handler).

    The only problem I've had so far is getting the directory size in a speedy way.

    The inbuilt ScriptCommunicator function scriptFile.getFileSize is very slow on first read as it caches the entire file (it uses Qt's QFile::size() via QJSEngine, the JavaScript engine used by ScriptCommunicator). For a small 30 kB directory of 5 files it takes only 3 ms. However, on a large 319 MB directory with 1525 files it takes 50 seconds for the first directory size read and approx. 150 ms thereafter.
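    For reference, the size-summing loop in my script reduces to something like this (plain JavaScript; the size reader is passed in as a parameter so the loop can be shown on its own — in the real script it would wrap scriptFile.getFileSize):

    ```javascript
    // Sum the sizes of the given files with a caller-supplied reader.
    // In the actual script the reader wraps scriptFile.getFileSize,
    // which is what makes the first read so slow on large directories.
    function directorySize(fileNames, getFileSize) {
      var total = 0;
      for (var i = 0; i < fileNames.length; i++) {
        total += getFileSize(fileNames[i]);
      }
      return total;
    }
    ```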

    I tried using an external program via scriptThread.createProcessAsynchronous, which is much faster for the first directory read at 140-150 ms, with most of this being the overhead of just calling the external program (approx. 130 ms).

    My program is a VBScript, so there may be significant program startup time even when not reading directory sizes (judging by the response time when the directory code was commented out, the directory size reading itself only took about 10 ms). I will eventually try other file types to see if that has a significant effect on the times. I may try SysInternals du.exe, which is said to be very fast at reading directories.
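    If du.exe works out, its summary output would still need parsing in the script. Assuming it prints a line of the form "Size: 334,233,600 bytes" (an assumption — check against the actual du output), a small parser could look like:

    ```javascript
    // Pull the byte count out of Sysinternals du.exe style output.
    // The "Size: N bytes" line format is an assumption; verify it
    // against real du output before relying on this.
    function parseDuSize(output) {
      var m = output.match(/^Size:\s+([\d,]+) bytes/m);
      return m ? parseInt(m[1].replace(/,/g, ""), 10) : -1;
    }
    ```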

    Whilst this directory read timing is fine for me, others may need a faster response. Any suggestions?

    Cheers
    Andy

     
  • Stefan Zieker

    Stefan Zieker - 2026-01-15

    Hi Andy,

    I was not aware that the built-in function is so slow. I think createProcessAsynchronous with du.exe is the way to go.

    Cheers
    Stefan

     
    • andrew bear

      andrew bear - 2026-01-15

      Hi Stefan,

      The built-in function (which uses QFile::size()) is fast once the files are all in the MS Windows file cache, but filling that cache takes ages for large directories (50 seconds in my case). It blocks in the meantime, and it actually slows the rest of the operating system down overall, since everything else is also fighting for the disk cache.

      I've looked at the Qt source code (see below) but got lost at the lower levels. QFileInfo::size() is probably much faster, as it primarily seems to use fstat (see below), which reads the MFT (Master File Table); according to Google AI (see below) the MFT is generally cached in memory. Of course, the only way to find out for sure is to test it.

      Cheers
      Andy

      **QFile::size()**

      https://codebrowser.dev/qt5/qtbase/src/corelib/io/qfile.cpp.html

      qint64 QFile::size() const
      {
          return QFileDevice::size(); // for now
      }

      **QFileDevice::size()**

      qint64 QFileDevice::size() const
      {
          Q_D(const QFileDevice);
          if (!d->ensureFlushed())
              return 0;
          d->cachedSize = d->engine()->size();
          return d->cachedSize;
      }

      **QFileInfo::size()**

      The following fast path returns metadata that is generally filled by fstat when the QFileInfo is constructed for each file.

      "{ return d->metaData.size(); }"

      The fallback "d->fileEngine->size()" is what QFile::size() uses and, as the code suggests, it uses cached file data.

      https://codebrowser.dev/qt5/qtbase/src/corelib/io/qfileinfo.cpp.html#1394

      qint64 QFileInfo::size() const
      {
          Q_D(const QFileInfo);
          return d->checkAttribute<qint64>(
              QFileSystemMetaData::SizeAttribute,
              [d]() { return d->metaData.size(); },
              [d]() {
                  if (!d->getCachedFlag(QFileInfoPrivate::CachedSize)) {
                      d->setCachedFlag(QFileInfoPrivate::CachedSize);
                      d->fileSize = d->fileEngine->size();
                  }
                  return d->fileSize;
              });
      }

      **From Google AI (which may not be correct as it hashes databases)**
      

      The speed of fstat() on Windows is generally very fast for retrieving file metadata (an O(1) operation).

      Speed is for Metadata: fstat() retrieves metadata (like file size, timestamps, permissions) which is typically stored in the file system's Master File Table (MFT) and can be accessed quickly in memory. It is not a measure of data transfer speed.

      Overhead Varies: The actual time taken can be a few milliseconds or less per call. The overhead becomes noticeable if a program performs a large number of calls in a tight loop.

       
  • andrew bear

    andrew bear - 2026-01-15

    Hi Stefan,

    Forgot to say, my tests did conclusively reveal that for your function it is the combined size of the directory, rather than the number of files, that determines the great majority of the response time. This makes sense considering it takes time to copy all these files from disk to the cache.

    Also note that, using RamMap from SysInternals, I have confirmed your function does cache the files whereas my faster one doesn't.

    https://learn.microsoft.com/en-gb/sysinternals/downloads/rammap

    You could try RamMap with QFileInfo::size() to confirm if it doesn't use cached files.

    Hint: Use menu "Empty->Empty Standby List" to clear the cache, File->Refresh to refresh the display, try your app, then view what files are cached in the "File Summary" tab which can be ordered by path.

    Cheers
    Andy

     
  • andrew bear

    andrew bear - 2026-01-15

    The last bit should actually be reordered as follows:

    Hint: Use menu "Empty->Empty Standby List" to clear the cache, try your app, use File->Refresh to refresh the display, then view what files are cached in the "File Summary" tab, which can be ordered by path.

     
