Clarity needed on G4L's logic for backing up devices

  • venki

    venki - 2013-12-29


    I would like to know which factors affect the time taken and space consumed by G4L when taking hard-disk backups of RHEL on HP blade servers.

    Here is the procedure I am following for taking backups:
    a. Insert the G4L CD, and reboot the node:


    b. Select the correct boot image:
    If you are using G6 hardware, then choose the following:
    A: bz38.8 386 build 06-03-2011
    If you are using Gen8 hardware, then choose the following:
    I: bz3.6.7 386 build 3.6.7 11-17-2012
    If G4L has been installed on root disk, perform the steps in section Using G4L from the root disk.
    5. Create the recovery image:
    a. Configure the local IP address:

    ifconfig eth0 <node-local-IP-address>

    b. Configure the network interface parameters, if they are not correct:

    ethtool -s eth0 speed 1000 duplex full autoneg off

    c. Start G4L:

    g4l <FTP-server-IP(DS1|DS2|EXT)> <hostname>.img <user>:<password> <local-IP> /d/backup/recovery-images eth0 reboot

    • <FTP-server-IP(DS1|DS2|EXT)> is the IP address of the FTP server used. This can be Data Server-1, Data Server-2, or even an external FTP server.
    • <hostname> is the hostname of the node for which you are creating the recovery image
    • <user> is the username of the user for FTP transfer. This cannot be the root user.
    • <password> is the user password
    • <local-IP> is the IP address of the node for which you are creating the recovery image

    d. In the Information dialog, click Yes.
    e. In the Main menu, select RAW Mode and click the OK button.
    f. In the RAW Mode dialog, select Network use and click the OK button.
    g. In the Network Use dialog, select H: Backup and click the OK button.
    h. In the Backup dialog, select the whole disk cciss/c0d0 and click the OK button. In v0.43 the disk is the one that has multiple sub-partitions.
    i. In the Confirmation dialog, click Yes.
    j. Remove the CD from the CD drive.
    Wait until the backup is finished and the server has rebooted automatically and is up and running.
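
Collapsed into one place, steps a-c amount to the command sequence below. This is a dry-run sketch that only prints the commands; the addresses, hostname, and credentials are all placeholder values, not taken from the procedure:

```shell
# dry run of steps a-c: the run() helper prints each command instead of
# executing it; every address and credential below is a placeholder
local_ip="10.0.0.21"                  # node's local IP (example value)
ftp_ip="10.0.0.5"                     # FTP server: DS1, DS2, or external (example)
host="node01"; user="backup"; pass="secret"
run() { echo "$@"; }                  # change the body to "$@" to really execute
run ifconfig eth0 "$local_ip"
run ethtool -s eth0 speed 1000 duplex full autoneg off
run g4l "$ftp_ip" "${host}.img" "${user}:${pass}" "$local_ip" \
    /d/backup/recovery-images eth0 reboot
```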

    Scenario 1 :
    When I perform this procedure on G6 HP hardware (72 GB hard disk) with the partitioning below, it takes 20-30 minutes per server and the image size is around 20 GB per server.

    [root@tn2cs1 ~]# df -kh
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/cciss/c0d0p3     25G   13G   11G  56% /
    /dev/cciss/c0d0p1    145M   26M  112M  19% /boot
    none                  12G     0   12G   0% /dev/shm
    /dev/cciss/c0d0p6   1012M   38M  923M   4% /tmp
    /dev/cciss/c0d0p2     35G  4.6G   28G  15% /var

    Scenario 2 :

    But when I perform this procedure on G8 hardware (300 GB hard disk) with the partitioning below, it takes 2-2.5 hours per server and the image size is around 200+ GB.

    [root@wb2ds1 ~]# df -kh
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/cciss/c0d0p5     95G  8.7G   82G  10% /
    /dev/cciss/c0d0p6    9.5G  151M  8.9G   2% /tmp
    /dev/cciss/c0d0p3    151G   23G  121G  16% /var
    /dev/cciss/c0d0p1    487M   31M  431M   7% /boot
    tmpfs                 32G  4.0K   32G   1% /dev/shm

    I can see that the G8 image size has increased by a huge factor compared to G6. Can anyone explain the logic G4L uses, which factors the time and space consumed depend on, and why both have increased many-fold in the G8 scenario?

    Thanks in advance !!

  • venki

    venki - 2013-12-29

    Attaching the procedure as an attachment, since some formatting is missing in the post above.

  • Michael Setzer II

    First, that would seem to be a very old version, but in regards to the explanation it shouldn't matter much.

    G4L with the regular process is a bit-level backup: it backs up every bit of the disk, which includes all space, used or unused. That is why the clearing process is needed to greatly decrease the size of the image.

    Example: Long ago did a full install of Fedora on an 80G hard disk. Then did an image, and it created a 12G image file. Used the clean process on partitions to make all unused sectors contain nulls. Then redid the image, and it created a 2.5G image of the same disk.
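
That clearing step can be done with plain dd: fill the free space of each mounted filesystem with a file of nulls, sync, then delete the filler. Here is a miniature, self-contained version that writes 64 KiB into a temp directory instead of a real mount point:

```shell
# miniature "clearing" pass: fill (some) free space with nulls, then delete
# the filler; on a real node the target would be each mounted filesystem
target=$(mktemp -d)                     # stand-in for a mount point like /var
dd if=/dev/zero of="$target/zerofile" bs=1024 count=64 2>/dev/null
sync
size=$(stat -c %s "$target/zerofile")   # 65536 bytes, all nulls
rm -f "$target/zerofile"
rmdir "$target"
echo "wrote and removed $size bytes of nulls"
```

On a real system the same idea is `dd if=/dev/zero of=/mountpoint/zerofile bs=1M; sync; rm /mountpoint/zerofile`, repeated for each filesystem, before taking the image.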

    The time to read the disk is tied to the real (physical) speed of the disk, not the buffered speed, since the buffer is quickly filled and after that all reads and writes happen at physical speed.

    Example: My current computer reports a buffered disk speed of 2300 MB/s, but a physical speed of only 84 MB/s (as reported by hdparm -tT /dev/sda).
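
That physical rate alone accounts for most of the Gen8 timing in the question. A back-of-envelope check, using the 300 GB disk from above and an assumed 84 MB/s physical read rate:

```shell
# floor on backup time: one full sequential read of the disk at its
# physical (unbuffered) rate, ignoring compression and network entirely
disk_mb=300000                  # ~300 GB disk (the Gen8 case above)
rate_mb_s=84                    # assumed physical rate, as hdparm -t might report
secs=$(( disk_mb / rate_mb_s ))
echo "one full read: ${secs} s (~$(( secs / 60 )) min)"
```

That is roughly an hour just to read the disk once, before compression or the FTP transfer, versus about 14 minutes for the 72 GB G6 disk at the same rate.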

    The compression process is also affected by the CPU and the compression tool.

    Again, old test using an old 100M boot partition.
    no compression took 10 seconds
    lzop compression took 3 seconds
    gzip compression took 6 seconds
    bzip2 compression took 18 seconds

    So, lzop is the fastest and puts the lowest load on the CPU, but it does make images about 10% larger than gzip. bzip2 was about 10% better than gzip, but the time difference makes it impractical for anything other than very small partitions.
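
The size side of that trade-off is easy to reproduce on any Linux box with gzip and bzip2 installed (lzop often is not by default). Using a file of nulls as a stand-in for cleared disk space:

```shell
# compare compressed sizes of 1 MiB of nulls -- the best case, and exactly
# what cleared (zero-filled) disk space looks like to the imaging tool
f=$(mktemp)
head -c 1048576 /dev/zero > "$f"
gzip  -c "$f" > "$f.gz"
bzip2 -c "$f" > "$f.bz2"
orig=$(stat -c %s "$f"); gz=$(stat -c %s "$f.gz"); bz=$(stat -c %s "$f.bz2")
echo "original=$orig gzip=$gz bzip2=$bz"
rm -f "$f" "$f.gz" "$f.bz2"
```

Both tools shrink the null-filled megabyte to around a kilobyte or less, which is why zeroing free space before imaging makes such a dramatic difference to image size.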

    A more powerful CPU and a faster hard disk might bring the speeds closer together.

    But clearing the space will greatly reduce image size.

    As for speed, with Windows partitions the ntfsclone option backs up only used space, but it works only for NTFS partitions. fsarchiver is a file-level backup tool as well, but I am not sure it backs up all the extra file data, and as with ntfsclone it requires that the partitions already exist before a restore.
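
For reference, invocations of those two alternatives might look like the following. The device names and image paths are placeholders, and nothing is executed here (the commands are only assembled and printed), so check the flags against your installed versions:

```shell
# sketch only: placeholder devices and paths, commands printed rather than run
dev_ntfs="/dev/sda1"                    # hypothetical NTFS partition
dev_linux="/dev/sda2"                   # hypothetical Linux partition
echo "ntfsclone --save-image --output /backup/win.img $dev_ntfs"
echo "fsarchiver savefs /backup/root.fsa $dev_linux"
```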

