Forced file sync "getFD().sync()".

Created: 2004-07-14
Updated: 2013-06-03

  • Artem Grinblat - 2004-07-14

    Can somebody review my patch (in the "Patches" section on SourceForge), which adds an option to turn off file syncing? My supposition is that turning syncing off abolishes the "consistency" guarantee, but the database will still be durable; that is, the internal structures still can't be corrupted by a sudden power-off, and the database can be used as usual thereafter (supposing we can ignore the loss of some recent updates). Is that true? And if not, how does the syncing help, given that a power failure (or a disk-full condition, or whatever) can still happen at exactly the moment when the OS is syncing file contents to the disk, leaving half of the blocks not updated?
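
    For reference, the patch essentially makes the sync call conditional, along these lines (a minimal sketch, not the actual patch; the class name and the "syncOnFlush" flag are made up for illustration):

        import java.io.IOException;
        import java.io.RandomAccessFile;

        // Sketch of a block writer with an optional forced sync.
        class BlockWriter {
            private final RandomAccessFile file;
            private final boolean syncOnFlush;

            BlockWriter(String path, boolean syncOnFlush) throws IOException {
                this.file = new RandomAccessFile(path, "rw");
                this.syncOnFlush = syncOnFlush;
            }

            void writeBlock(long offset, byte[] data) throws IOException {
                file.seek(offset);
                file.write(data);
            }

            void flush() throws IOException {
                if (syncOnFlush) {
                    // Force buffered pages down to the disk before returning.
                    file.getFD().sync();
                }
                // With syncOnFlush == false, the OS is free to reorder and
                // delay the physical writes.
            }
        }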

     
    • Alex Boisvert - 2004-07-14

      Artem,

      In a nutshell, your patch isn't safe. If you want to guarantee consistency in the face of OS crashes, you need to sync() the log. The net of it is that the OS and hard-disk subsystem don't guarantee ordered writes to the disk. The only way to ensure that your log is consistent AND the database file isn't corrupted is to sync().

      The flaw "window" here is that the database file may start to be updated (down to the disk) before all writes have completed for the log file. In that case, you risk losing part of the last transaction, and your database file would be inconsistent if a failure happened at that point.
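
      To make the required ordering concrete, a write-ahead-log commit has to look roughly like this (a sketch, not JDBM's actual code; the class and parameter names are invented for illustration):

          import java.io.IOException;
          import java.io.RandomAccessFile;

          // Sketch of the write ordering a transaction log must enforce.
          class WalCommit {
              private final RandomAccessFile log;
              private final RandomAccessFile db;

              WalCommit(RandomAccessFile log, RandomAccessFile db) {
                  this.log = log;
                  this.db = db;
              }

              void commit(byte[] logRecord, long dbOffset, byte[] dbBlock)
                      throws IOException {
                  // 1. Append the transaction record to the log.
                  log.seek(log.length());
                  log.write(logRecord);

                  // 2. Sync the log BEFORE touching the database file.
                  //    Without this barrier the OS may write the db block
                  //    first; a crash in between leaves a half-applied
                  //    transaction with no log record to recover it from.
                  log.getFD().sync();

                  // 3. Only now is it safe to update the database file
                  //    (it is itself synced later, before the log is trimmed).
                  db.seek(dbOffset);
                  db.write(dbBlock);
              }
          }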

      That said, your patch provides an optimization that may appeal to users who are willing to (slightly) increase the risk of data corruption. The usual performance-versus-reliability compromise applies here. Caveat emptor.

      One last disclaimer: please keep in mind that, as with all data management software, backups are the only viable way to safeguard data against catastrophic failures.

      regards,
      alex

       
      • Artem Grinblat - 2004-10-16

        But to maintain the correct order, that is, to have the log file saved before writing to the data file, the sync is only required in TransactionManager#synchronizeLogFromMemory (or in TransactionManager#close). The usual commit will only put new records into the log and will not touch the data file, so a sync is not required on every transaction.

        It would be a good optimization to sync the log only before the data file begins to be modified (considering that the size of the log is configurable), and the source modifications to achieve this are few; see the sketch below.
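
        Sketched in code (the method bodies are illustrative, not the real JDBM source; only the method names follow the TransactionManager methods mentioned above):

            import java.io.IOException;
            import java.io.RandomAccessFile;

            // Sketch of deferring the sync to the log-replay path.
            class DeferredSyncTxnManager {
                private final RandomAccessFile logFile;
                private final RandomAccessFile dataFile;

                DeferredSyncTxnManager(RandomAccessFile logFile,
                                       RandomAccessFile dataFile) {
                    this.logFile = logFile;
                    this.dataFile = dataFile;
                }

                // Ordinary commit: append to the log, but do NOT sync.
                // A crash may lose this transaction, yet the data file is
                // untouched and stays consistent.
                void commit(byte[] logRecord) throws IOException {
                    logFile.seek(logFile.length());
                    logFile.write(logRecord);
                }

                // Called when the (size-bounded) log is applied to the
                // data file -- the one place a barrier is truly required.
                void synchronizeLogFromMemory(long offset, byte[] block)
                        throws IOException {
                    logFile.getFD().sync();   // log safely on disk first
                    dataFile.seek(offset);    // then modify the data file
                    dataFile.write(block);
                    dataFile.getFD().sync();  // make the data durable
                    logFile.setLength(0);     // finally clear the log
                }
            }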

        The database will still be consistent (will not be damaged), although some of the last transactions might be lost after a system failure.

        Please verify whether I am right (about consistency).

         
        • Alex Boisvert - 2004-10-22

          Transactional systems typically provide all ACID properties (atomicity, consistency, isolation and durability).

          You are right in that sync'ing only when the main database file is updated will result in consistency. But as you point out, it's possible to lose the last transactions that have been written to the log but not yet synchronized to the database file.

          This is generally considered as breaking the "durability" contract. In short, durability implies that after you commit, your data has been made durable: if you accepted an order for a customer and added it to a JDBM data structure, then you're guaranteed the order won't be lost after you call commit().

          Some systems reduce the overhead of sync'ing for every transaction and instead adopt a 'group commit' policy where multiple concurrent (and therefore independent) transactions will be written to the log and synchronized together. This approach can reduce the number of I/O writes as well as increase throughput because sync is performed less often.
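
          A group commit might look roughly like this (an illustrative sketch, not JDBM code): each committer appends its record, then either piggybacks on a sync already in flight or performs one sync that covers the whole batch.

              import java.io.IOException;
              import java.io.RandomAccessFile;

              // Sketch of a group-commit log (names are illustrative).
              class GroupCommitLog {
                  private final RandomAccessFile log;
                  private long appended;   // bytes written into the log
                  private long synced;     // bytes known to be on disk
                  private boolean syncing; // some thread is inside sync()

                  GroupCommitLog(RandomAccessFile log) throws IOException {
                      this.log = log;
                      this.appended = log.length();
                      this.synced = this.appended;
                  }

                  void commit(byte[] record)
                          throws IOException, InterruptedException {
                      long myEnd;
                      long target;
                      synchronized (this) {
                          log.seek(appended);
                          log.write(record);
                          appended += record.length;
                          myEnd = appended;
                          // Piggyback on a sync that is already running.
                          while (synced < myEnd && syncing) {
                              wait();
                          }
                          if (synced >= myEnd) {
                              return;   // another thread's sync covered us
                          }
                          syncing = true;
                          target = appended; // our record plus any batched
                      }
                      // Sync outside the lock so other committers can keep
                      // appending; one sync makes the whole batch durable.
                      boolean ok = false;
                      try {
                          log.getFD().sync();
                          ok = true;
                      } finally {
                          synchronized (this) {
                              if (ok) {
                                  synced = target;
                              }
                              syncing = false;
                              notifyAll(); // wake piggybacking committers
                          }
                      }
                  }
              }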

          alex

           
          • Artem Grinblat - 2004-10-22

            Thanks, this is what I thought.
            To me, consistency after a failure is in most places far more important than durability.
            Are you interested in a patch providing this optional (configurable) optimization?

            > Some systems reduce the overhead of sync'ing
            > for every transaction and instead adopt a 'group
            > commit' policy where multiple concurrent (and
            > therefore independent) transactions will be
            > written to the log and synchronized together.
            > This approach can reduce the number of I/O
            > writes as well as increase throughput because
            > sync is performed less often.

            Not applicable in my case, as I have a lot of non-concurrent databases (I don't even need the Jdbm classes to be thread-safe, as I synchronize outside).

             
            • Alex Boisvert - 2004-10-22

              If you have a patch that applies cleanly to the current CVS code, please post it and I'll commit it to CVS.

              thanks!
              alex

               
