File too Large

Help
2006-10-21
2013-05-20
  • Peter Dominey
    2006-10-21

    I'm getting the following error when I attempt to do a dump of /dev/rootvg/homelv; details below:

    Error message:
      DUMP: 89.61% done at 784 kB/s, finished in 1:04
      DUMP: 90.23% done at 783 kB/s, finished in 1:01
      DUMP: 90.85% done at 781 kB/s, finished in 0:57
      DUMP: 91.67% done at 782 kB/s, finished in 0:52
      DUMP: write error 27471100 blocks into volume 1: File too large

    The LV details are:

    [root@twelth local]# lvdisplay /dev/rootvg/homelv
      --- Logical volume ---
      LV Name                /dev/rootvg/homelv
      VG Name                rootvg
      LV UUID                YfnCz6-AorP-B0jU-mLCS-lLzh-KSXH-qNGPTY
      LV Write Access        read/write
      LV Status              available
      # open                 1
      LV Size                37.91 GB
      Current LE             1213
      Segments               5
      Allocation             inherit
      Read ahead sectors     0
      Block device           253:1

    The dump takes place to an NFS-mounted filesystem:

    nineth:/backups on /mnt/nineth/Backups type nfs (rw,addr=192.168.1.9)

    The command used is:

    dump 0 -L`date +%d%b%y` -j -u -A /Backups/twelth/_dev_rootvg_homelv_level_0_dump.log -f /Backups/twelth/_dev_rootvg_homelv_level_0.dump /dev/rootvg/homelv
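
    For reference, the options here break down as follows (per dump(8)):

      # 0   dump level 0, i.e. a full dump
      # -L  volume label, here set to the current date
      # -j  compress every block written, using bzlib (bzip2)
      # -u  record the dump in /etc/dumpdates on success
      # -A  write a table of contents of the dump to the named archive file
      # -f  write the dump to the named file instead of the default tape device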

    Although it actually runs from a script that does all of the filesystems, this is the only one that seems to encounter an error. I believe everything was fine until I extended the size of homelv. But /usr/local (locallv) is close in size (26GB) and seems to have no problems.

    Any help would be appreciated.

    • Stelian Pop
      2006-10-23

      The problem is not in the source volume but in writing to the destination file. "File too large" means that your destination filesystem (the NFS one) does not support files over 2GB in size. This might be a limitation of your remote filesystem or the NFS protocol itself.
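
      One quick way to confirm this, assuming the mount shown above ("bigfile.test" is just a scratch name), is to try writing a block just past the 2GB boundary, and to check which NFS protocol version was negotiated (NFSv2 caps individual files at 2GB):

        # attempt a 1 MB write at an offset beyond 2 GB;
        # failing here with "File too large" (EFBIG) confirms the limit
        dd if=/dev/zero of=/mnt/nineth/Backups/bigfile.test bs=1M seek=2049 count=1
        # show the negotiated protocol version for the mount (look for vers=)
        nfsstat -m
        rm -f /mnt/nineth/Backups/bigfile.test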

      Stelian.

    • Peter Dominey
      2006-10-23

      Thanks for getting back to me.

      I assume you mean an individual file within the dump file, as most of the 'dumped' filesystem files are in excess of 2GB?

      e.g.

      -rw-r--r--  1 root     root     2.7G Oct 22 02:10 _dev_rootvg_usrlv_level_0.dump
      -rw-r--r--  1 root     root     4.2G Oct 23 02:14 _dev_rootvg_locallv_level_1.dump

      Tks

      Peter

      • Stelian Pop
        2006-10-24

        Hmmm, no, I meant the output files. The fact that you have already been able to create output files over 2GB in length puzzles me.

        That error comes directly from a write() syscall, and I cannot imagine why it would work sometimes and not other times.
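
        If you want to see exactly which write() is failing, one approach (just a sketch, with throwaway file names) is to run the dump under strace and look for EFBIG, the errno behind "File too large":

          strace -f -o /tmp/dump.trace dump 0 -f /Backups/twelth/strace_test.dump /dev/rootvg/homelv
          grep EFBIG /tmp/dump.trace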

        What version of dump/restore are you using? Does "ulimit -a" report any limit on the "file size" attribute?
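
        Something like this should show both ("rpm -q" assumes an RPM-based distribution, which your prompt suggests):

          rpm -q dump      # installed version of the dump/restore package
          ulimit -a        # per-process limits, including "file size"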