
#312 Support for firmware RAID

Milestone: testing_clonezilla
Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2019-02-01
Created: 2019-01-18
Creator: Elliott
Private: No

Would it be possible to add support for firmware RAID devices? In this case I have an Intel RST RAID 1 which appears in /dev/mapper. After running kpartx -a I am able to mount the partition successfully, but ocs-sr restoredisk says it's an unknown hard drive device and terminates.
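For reference, a rough sketch of that mapping/mount step (the device name is the one used later in this thread; the partition suffix and mount point are guesses, not exact):

# map the partitions of the firmware RAID device so they show up under /dev/mapper
sudo kpartx -a /dev/mapper/isw_iafgiffdf_Volume0
# mount the root partition to verify the data is readable ("p2" and /mnt are illustrative)
sudo mount /dev/mapper/isw_iafgiffdf_Volume0p2 /mnt
ls /mnt
sudo umount /mnt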

Discussion

  • Steven Shiau

    Steven Shiau - 2019-01-18

    Which version of Clonezilla live did you try? Could you please describe all the steps you took and the error messages shown on the screen? It's better to take some photos and post them. Thanks.

    Steven

     
  • Elliott

    Elliott - 2019-01-18

    I'm using Clonezilla 2.6.0-37. In the BIOS, I configure Intel RST as RAID 1 called Volume0.
    I boot to the shell with Clonezilla live, configure my network interface, mount my NFS share, then run ocs-sr -g auto -e1 auto -e2 -r -j2 -c -scr -p choose restoredisk Ubuntu_AAF /dev/mapper/isw_iafgiffdf_Volume0. Photo attached.
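    Roughly, the shell steps look like this (the interface name, NFS server, and export path are placeholders; the ocs-sr line is the exact one above):

    # bring up the network, e.g. with DHCP (interface name is a placeholder)
    sudo dhclient eth0
    # mount the NFS share holding the saved images on Clonezilla's image directory
    sudo mount -t nfs 192.168.1.10:/exports/clonezilla /home/partimag
    # restore the saved image onto the firmware RAID device
    sudo ocs-sr -g auto -e1 auto -e2 -r -j2 -c -scr -p choose restoredisk Ubuntu_AAF /dev/mapper/isw_iafgiffdf_Volume0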

    Note: When Clonezilla first starts, I get this error, but I just ignore it and I think this is unrelated:
    Error: attempt to read or write outside of disk hd0

     

    Last edit: Elliott 2019-01-18
  • Steven Shiau

    Steven Shiau - 2019-01-18

    Did you try to use the normal Clonezilla dialog wizard to save and restore? That is, run: sudo clonezilla
    Any differences?
    Thanks.

    Steven

     
    • Elliott

      Elliott - 2019-01-18

      Yes, I tried the GUI first. It does not show the RAID set; it only shows each member disk separately. So I created the image using just one member as the source. I can restore the image to one member successfully, but then Intel RST shows the second disk as no longer part of the RAID set. After I re-add it in the BIOS, the OS no longer boots. I'm using Ubuntu 18.04, by the way.
      From what I've read online, Clonezilla does not support Intel RST (Fake RAID), which is why I filed this as a feature request. Do you think this is supposed to be supported already?

       
  • Steven Shiau

    Steven Shiau - 2019-01-18

    It was not well tested or supported. IIRC, a FakeRAID mapping device name like /dev/md126 is supported.

    Steven

     
    • Elliott

      Elliott - 2019-01-18

      Ah, so it sounds like you support MD RAID, not DM RAID. That makes sense, because MD RAID is the newer method. So the question is: how do I get the kernel to recognize this RAID set using mdadm instead of dmraid? If I boot with the "nodmraid" kernel option, the /dev/mapper/isw_* device is no longer created, but nothing shows up as /dev/md0 either. I think that if I can get the RAID set assembled as /dev/md0 using mdadm, then Clonezilla might be able to use it.
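      A quick way to check whether mdadm can see the Intel metadata at all (a sketch; the member disk names here are placeholders):

      # look for Intel IMSM metadata on the raw member disks (device names are placeholders)
      sudo mdadm --examine /dev/sdb /dev/sdc
      # see whether the kernel has already assembled any md arrays
      cat /proc/mdstat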

       
  • Elliott

    Elliott - 2019-01-18

    I made some progress. Boot with "nodmraid", then run mdadm --assemble --scan, and it creates two devices, /dev/md126 and /dev/md127. These disks now appear in Clonezilla, but I can't use them as a source; it says "No input device!" I don't know why there are two. The results are strange; see the output below from fdisk and mdadm.

    fdisk shows that this device has only one partition. It's missing the EFI partition. I have read that when booting from a RAID 1, the EFI partition should only be on one of the drives. So I'm not sure how to handle this. Clonezilla would have to be smart enough to recognize the source as a bootable RAID1 with an EFI partition, then recognize whether the destination is also a RAID1, then write the EFI and other partitions accordingly.
    To explain the long fdisk output, this system has an 8TB hardware RAID0, two 240GB SSDs in fake RAID1 booting Ubuntu, a 512GB SSD booting CentOS, and a 32GB stick running Clonezilla.
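    The exact sequence was roughly this (the /proc/mdstat check is an extra step, just to see what got created):

    # boot Clonezilla live with the "nodmraid" kernel option, then:
    sudo mdadm --assemble --scan
    # md127 is the IMSM container, md126 the RAID1 volume inside it (see mdadm --detail below)
    cat /proc/mdstat
    ls -l /dev/md12*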

    fdisk -l

    Disk /dev/sda: 7.5 TiB, 8189160456192 bytes, 15994454016 sectors
    Disk model: MR9361-8i       
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 262144 bytes / 262144 bytes
    Disklabel type: gpt
    Disk identifier: 29E621BA-0A22-4140-8D22-82F3FD59C5F6
    
    Device     Start         End     Sectors  Size Type
    /dev/sda1   2048 15994451967 15994449920  7.5T Linux filesystem
    
    
    Disk /dev/sdb: 223.6 GiB, 240057409536 bytes, 468862128 sectors
    Disk model: INTEL SSDSC2KB24
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 4B2C22A8-B637-439E-BCB8-75C2A48327E8
    
    Device       Start       End   Sectors   Size Type
    /dev/sdb1     2048   1050623   1048576   512M EFI System
    /dev/sdb2  1050624 445407231 444356608 211.9G Linux filesystem
    
    
    Disk /dev/sdc: 223.6 GiB, 240057409536 bytes, 468862128 sectors
    Disk model: INTEL SSDSC2KB24
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 4B2C22A8-B637-439E-BCB8-75C2A48327E8
    
    Device       Start       End   Sectors   Size Type
    /dev/sdc1     2048   1050623   1048576   512M EFI System
    /dev/sdc2  1050624 445407231 444356608 211.9G Linux filesystem
    
    
    Disk /dev/sdd: 477 GiB, 512110190592 bytes, 1000215216 sectors
    Disk model: Samsung SSD 850 
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: B07D5D26-4D63-40C1-BF7F-D663EC3A393E
    
    Device       Start        End   Sectors   Size Type
    /dev/sdd1     2048     411647    409600   200M EFI System
    /dev/sdd2   411648    2508799   2097152     1G Microsoft basic data
    /dev/sdd3  2508800 1000214527 997705728 475.8G Linux LVM
    
    
    Disk /dev/sde: 28.9 GiB, 31004295168 bytes, 60555264 sectors
    Disk model: DataTraveler 3.0
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x495d5b6a
    
    Device     Boot Start    End Sectors  Size Id Type
    /dev/sde1  *       64 534527  534464  261M 17 Hidden HPFS/NTFS
    
    
    Disk /dev/loop0: 215.8 MiB, 226258944 bytes, 441912 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/mapper/cl-swap: 4 GiB, 4294967296 bytes, 8388608 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/mapper/cl-home: 421.8 GiB, 452842225664 bytes, 884457472 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/mapper/cl-root: 50 GiB, 53687091200 bytes, 104857600 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    
    Disk /dev/md126: 212.4 GiB, 228048502784 bytes, 445407232 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: dos
    Disk identifier: 0x00000000
    
    Device       Boot Start       End   Sectors   Size Id Type
    /dev/md126p1          1 445407231 445407231 212.4G ee GPT
    

    **mdadm --detail /dev/md126**

    /dev/md126:
             Container : /dev/md/imsm0, member 0
            Raid Level : raid1
            Array Size : 222703616 (212.39 GiB 228.05 GB)
         Used Dev Size : 222703616 (212.39 GiB 228.05 GB)
          Raid Devices : 2
         Total Devices : 2
    
                 State : clean 
        Active Devices : 2
       Working Devices : 2
        Failed Devices : 0
         Spare Devices : 0
    
    Consistency Policy : resync
    
    
                  UUID : 3e28c0fd:bc279dde:e494b6e0:4fbffc8b
        Number   Major   Minor   RaidDevice State
           1       8       16        0      active sync   /dev/sdb
           0       8       32        1      active sync   /dev/sdc
    

    **mdadm --detail /dev/md127**

    /dev/md127:
               Version : imsm
            Raid Level : container
         Total Devices : 2
    
       Working Devices : 2
    
    
                  UUID : c86f6047:808db2c0:ebf8a424:bcb7d633
         Member Arrays : /dev/md/Volume0_0
    
        Number   Major   Minor   RaidDevice
    
           -       8       32        -        /dev/sdc
           -       8       16        -        /dev/sdb
    

    mdadm --examine /dev/md126

    /dev/md126:
       MBR Magic : aa55
    Partition[0] :    445409279 sectors at            1 (type ee)
    

    mdadm --examine /dev/md127

    /dev/md127:
              Magic : Intel Raid ISM Cfg Sig.
            Version : 1.1.00
        Orig Family : 3005c91f
             Family : 3005c91f
         Generation : 00000029
         Attributes : All supported
               UUID : c86f6047:808db2c0:ebf8a424:bcb7d633
           Checksum : 90105d85 correct
        MPB Sectors : 1
              Disks : 2
       RAID Devices : 1
    
      Disk00 Serial : YF847604DE240AGN
              State : active
                 Id : 00000000
        Usable Size : 468851726 (223.57 GiB 240.05 GB)
    
    [Volume0]:
               UUID : 3e28c0fd:bc279dde:e494b6e0:4fbffc8b
         RAID Level : 1
            Members : 2
              Slots : [UU]
        Failed disk : none
          This Slot : 0
        Sector Size : 512
         Array Size : 445407232 (212.39 GiB 228.05 GB)
       Per Dev Size : 445409280 (212.39 GiB 228.05 GB)
      Sector Offset : 0
        Num Stripes : 1739872
         Chunk Size : 64 KiB
           Reserved : 0
      Migrate State : idle
          Map State : normal
        Dirty State : clean
         RWH Policy : off
    
      Disk01 Serial : YF847604NZ240AGN
              State : active
                 Id : 00000001
        Usable Size : 468851726 (223.57 GiB 240.05 GB)
    
     

    Last edit: Elliott 2019-01-18
  • Steven Shiau

    Steven Shiau - 2019-01-18

    Could you please show the result of this command:
    cat /proc/partitions
    Thanks.

    Steven

     
    • Elliott

      Elliott - 2019-01-18

      cat /proc/partitions

      major minor  #blocks  name
      
         8        0 7997227008 sda
         8        1 7997224960 sda1
         8       16  234431064 sdb
         8       17     524288 sdb1
         8       18  222178304 sdb2
         8       32  234431064 sdc
         8       33     524288 sdc1
         8       34  222178304 sdc2
         8       48  500107608 sdd
         8       49     204800 sdd1
         8       50    1048576 sdd2
         8       51  498852864 sdd3
         8       64   30277632 sde
         8       65     267232 sde1
         7        0     220956 loop0
       254        0    4194304 dm-0
       254        1  442228736 dm-1
       254        2   52428800 dm-2
         9      126  222703616 md126
      
       
  • Steven Shiau

    Steven Shiau - 2019-02-01

    I have to find a machine with a fake RAID card to test this. However, this is not easy to implement in Clonezilla.

    Steven

     
