
Protecting n equal sized files

2004-04-27
  • Nobody/Anonymous

    Hi.

    I'd like to use PAR2 to protect backup files written to CDR, but I am not sure if the tool is right for the job.

    The files I'd like to back up are all roughly the same size (over 200 MB, up to 700 MB; the files belonging to one backup group are the same size to within 10 MB).

    So I've got a given number of these files, say 5, and I'd like to protect against loss of a single complete file (CD becoming unreadable, for example).

    Is there a way to tell the par2 command line tool to protect against this case?

    Or do I have to calculate the numbers myself, and pass them to par2 manually?

    Thanks.

    • Peter C

      Peter C - 2004-04-27

      You will have to work out the number of recovery blocks you need by hand.

      The procedure is:

      1) Pick a block size to use (make sure it is a multiple of 2048 bytes).

      2) Divide the size of the largest data file by the chosen block size (and round UP any fraction).

      3) Use the result of the division as the number of recovery blocks to create.
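      The three steps above can be sketched with shell arithmetic; the block size and file size here are made-up example values:

      ```shell
      # Step 1: pick a block size (a multiple of 2048 bytes).
      BLOCKSIZE=65536
      # Size in bytes of the largest data file (hypothetical example).
      LARGEST=1150000
      # Step 2: divide the largest file size by the block size, rounding up
      # any fraction (integer ceiling division).
      BLOCKS=$(( (LARGEST + BLOCKSIZE - 1) / BLOCKSIZE ))
      # Step 3: use the result as the number of recovery blocks to create.
      echo "$BLOCKS"   # prints 18
      ```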

    • Nobody/Anonymous

      Say I'd like to use a block size of 65536 bytes, and I need 18 blocks to cover my largest file (example only, obviously).

      How do I pass this to par2? The command line utility does not accept both block size and block count.

      • Peter C

        Peter C - 2004-04-27

        To specify that you want a block size of 65536 you would use:

        -b65536

        To specify that you wanted to create 18 recovery blocks you would use:

        -c18

        These options are all documented in the README file, and they are also displayed when you run par2 with no command line parameters.

        • Nobody/Anonymous

          OK, I found it.

          The problem was that I was using version 0.3 (which is what you get if you select par2cmdline in the PAR client list on the parchive main web site). -c was added in 0.4, it seems.

          Thanks.
