System is openSUSE 13.1, kernel version 3.11.10, e2fsprogs version 1.42.8.
My data integrity check policy is smartmontools self-tests on the hard drives, data scrubs on the mirrored RAID, and fsck on the ext4 filesystems. I monitor the time since the last fsck and warn if it exceeds 30 days, and I use tune2fs to set the check interval to 1 day on all ext4 filesystems. This works well for all my filesystems with one exception: the root filesystem, which is created on an LVM volume whose physical volume is on RAID1. On reboot, all filesystems are checked, including the root filesystem, but only the root filesystem fails to have its "Last checked" field updated. Have I missed something obvious? Thanks.
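The 30-day warning part of the monitoring can be sketched as below. It parses the "Last checked" date in the ctime-style format that `tune2fs -l` prints; the function names and the 30-day threshold are just illustrations of my setup, not a standard tool:

```python
from datetime import datetime

def days_since_last_check(last_checked, now=None):
    """Days since the last fsck, given the date string from `tune2fs -l`,
    e.g. "Mon Jan  6 08:15:30 2014"."""
    checked = datetime.strptime(last_checked, "%a %b %d %H:%M:%S %Y")
    now = now or datetime.now()
    return (now - checked).days

def needs_warning(last_checked, now=None, limit=30):
    """True if the filesystem has gone more than `limit` days unchecked."""
    return days_since_last_check(last_checked, now) > limit
```

In practice the date string comes from something like `tune2fs -l /dev/sda1 | grep 'Last checked'` with the label stripped off.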
Of course, the root filesystem is mounted read-only when the check is done, so the "Last checked" field cannot be updated. Duh.
Further investigation turns up some interesting points. For ext4 filesystems, clean or not, a full check is done once the check interval has passed, provided it is not the root filesystem. To force a check of the root filesystem, set the maximum mount count equal to or lower than the current mount count. In my case, I'm happy with a forced check on every boot, so I set the maximum mount count to 1.
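Concretely, that is a single tune2fs call; the device path below is a placeholder for the actual LVM root volume, so adjust it to your setup:

```shell
# Force a check on every boot: with a maximum mount count of 1, the
# current mount count always reaches the maximum.
# /dev/system/root is a placeholder; substitute your LVM root volume.
tune2fs -c 1 /dev/system/root

# Verify the new setting and the current mount count.
tune2fs -l /dev/system/root | grep -i 'mount count'
```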