I have an Intel motherboard with 2 disks configured in the BIOS as RAID0. Linux uses mdadm to recognize and access the partitions on the array. If I start up Rescuezilla, open a terminal window and issue
ls /dev
it shows
/dev/md126
/dev/md126p1
/dev/md126p2
/dev/md126p3
etc.
However, when I click on the Backup button, Rescuezilla only shows /dev/sda and /dev/sdb, which are the component drives of the RAID0 array. I can't select /dev/md*.
Clonezilla allows me to select the /dev/md* partitions; in fact, it doesn't even list /dev/sda or /dev/sdb as drives that I can select for backup. It would be nice if Rescuezilla did the same.
Charles Bailey
Last edit: Charles Bailey 2022-04-12
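The behavior being requested can be sketched as a simple filtering rule. This is a hypothetical illustration (not Rescuezilla's or Clonezilla's actual code): when an assembled md array is present, offer the array and its partitions for backup and hide the raw component drives.

```python
import re

# Hypothetical helper: given names from "ls /dev", pick out Linux md RAID
# arrays (md126) and their partitions (md126p1, md126p2, ...) as the
# devices a backup tool should offer, instead of the component drives.
MD_DEVICE = re.compile(r"^md\d+$")         # whole array, e.g. md126
MD_PARTITION = re.compile(r"^md\d+p\d+$")  # partition on it, e.g. md126p1

def backup_candidates(dev_names):
    arrays = [n for n in dev_names if MD_DEVICE.match(n)]
    if arrays:
        # An assembled array exists: offer it and its partitions, and
        # hide the raw component drives, as Clonezilla does.
        return arrays + [n for n in dev_names if MD_PARTITION.match(n)]
    return dev_names  # no RAID: fall back to the plain drives

print(backup_candidates(["sda", "sdb", "md126", "md126p1", "md126p2"]))
# -> ['md126', 'md126p1', 'md126p2']
```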
Yep, my top task currently is further usability testing and refinement of Linux mdadm RAID. Will have more soon. Last I checked, md device nodes could be backed up exactly as Clonezilla does.
By the way, see here for more recent Linux md RAID discussion.
I have detected exactly the same issue: the underlying Linux system assembles the BIOS RAID correctly, showing /dev/md126 in the lsblk output, but Rescuezilla's GUI does not allow me to choose this block device. At https://beilagen.dreael.ch/Diverses/Rescuezilla_RAID1_Problem/ you will find a screenshot of the GUI not showing the /dev/md126 block device, as well as lsblk and lshw output from the PC.
Last edit: dreael 2023-09-26
OK, I spent a few hours trying to reproduce this issue today (side note: Rescuezilla v2.5 is coming together quite nicely, should be released in late-October 2023).
I continue to successfully back up and restore Linux md RAID environments containing CentOS on /dev/md126 -- I was unable to directly reproduce the "missing device in list" error.
However, I did find that Rescuezilla v2.4.2 (and almost certainly earlier versions) has an issue creating backup images of Linux md RAID devices that contain MBR partition tables. I created a writeup here: https://github.com/rescuezilla/rescuezilla/issues/448 -- basically, I'm able to start a backup as normal, but a confusing error prevents the backup image from completing.
The issue does not occur on disks with GPT partition tables.
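For context, the MBR/GPT distinction a tool has to make can be sketched as follows. This is an illustration only (not Rescuezilla's actual detection code): GPT keeps the 8-byte signature "EFI PART" at the start of LBA 1, while a plain MBR disk only has the 0x55AA boot signature at the end of sector 0. Assumes 512-byte sectors; the synthetic disks are hand-built for demonstration.

```python
SECTOR = 512

def partition_table_type(first_two_sectors: bytes) -> str:
    # GPT header lives at LBA 1 and starts with b"EFI PART".
    if first_two_sectors[SECTOR:SECTOR + 8] == b"EFI PART":
        return "gpt"
    # Plain MBR: boot signature 0x55 0xAA at bytes 510-511 of sector 0.
    if first_two_sectors[510:512] == b"\x55\xaa":
        return "mbr"
    return "unknown"

# Synthetic disks, for demonstration only:
mbr_disk = bytearray(2 * SECTOR); mbr_disk[510:512] = b"\x55\xaa"
gpt_disk = bytearray(2 * SECTOR); gpt_disk[510:512] = b"\x55\xaa"
gpt_disk[SECTOR:SECTOR + 8] = b"EFI PART"

print(partition_table_type(bytes(mbr_disk)))  # -> mbr
print(partition_table_type(bytes(gpt_disk)))  # -> gpt
```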
The good news is that the (very simple) fix for the underlying problem above will be in Rescuezilla v2.5, and it may also solve the "missing in UI" problem that it sounds like you both are experiencing.
The bad news is that dreael's logs weren't enough for me to identify the cause for sure: Rescuezilla follows Clonezilla in combining output, in a very complex way, from not just lsblk but also other command-line tools like sfdisk and parted, so I don't have enough information to see exactly what's going on.
Since Rescuezilla v2.5 is still a few weeks from being released, I have two suggestions to help make sure v2.5 works for you both:
1. Please load up the Rescuezilla v2.4.2 live environment, close Rescuezilla, open a terminal, and run the following command: sed --in-place 's*json"]$*json", "--merge"]*g' /usr/lib/python3/dist-packages/rescuezilla/drive_query.py. Then reopen Rescuezilla and let me know whether you can now see the Linux md RAID devices in the Rescuezilla graphical environment.
2. Separately, if that doesn't work, please simply run the following from the terminal: /usr/sbin/rescuezilla | tee /home/ubuntu/Desktop/log.txt, and send the log.txt file. This will contain the information I need to identify where the problem is.
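The shape of the problem can be illustrated like this. With Linux md RAID, lsblk's JSON nests the array under *both* component drives, so code that looks only at the top-level "blockdevices" list sees just sda and sdb; a recursive walk with de-duplication also surfaces md126 and its partitions. The sample JSON below is hand-written for illustration, not captured from a real system, and this is not Rescuezilla's actual drive_query.py logic.

```python
import json

sample = json.loads("""
{"blockdevices": [
  {"name": "sda", "type": "disk",
   "children": [{"name": "md126", "type": "raid0",
                 "children": [{"name": "md126p1", "type": "part"}]}]},
  {"name": "sdb", "type": "disk",
   "children": [{"name": "md126", "type": "raid0",
                 "children": [{"name": "md126p1", "type": "part"}]}]}
]}
""")

def all_devices(nodes, seen=None):
    # Walk the lsblk device tree depth-first, de-duplicating nodes that
    # appear under more than one parent (md arrays do exactly that).
    seen = set() if seen is None else seen
    names = []
    for node in nodes:
        if node["name"] not in seen:
            seen.add(node["name"])
            names.append(node["name"])
        names += all_devices(node.get("children", []), seen)
    return names

print([d["name"] for d in sample["blockdevices"]])  # -> ['sda', 'sdb']
print(all_devices(sample["blockdevices"]))
# -> ['sda', 'md126', 'md126p1', 'sdb']
```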
Thanks for your investigations. I tried out your sed command modifying drive_query.py and was then able to successfully back up the RAID1 volume. For details see https://beilagen.dreael.ch/Diverses/Rescuezilla_RAID1_Problem/Update_20231018/ -- I collected all kinds of logs as well as several screenshots for you.
Note: I also know of some older systems (32-bit CPU only) using a Promise FastTrak controller for RAID1, where the correct block devices are called /dev/mapper/pdc_xxxxxxx. So you could consider extending drive_query.py to identify such source drives correctly as well.
I can do a test using rescuezilla-2.4.2-32bit.bionic.iso on such old hardware for you.
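A hypothetical sketch of recognizing such firmware RAID sets assembled by device-mapper (dmraid): old Promise FastTrak sets show up as /dev/mapper/pdc_*, Intel firmware sets typically as isw_*, SNIA DDF sets as ddf1_*. The prefix list and this helper are illustrative assumptions, not Rescuezilla's actual drive_query.py logic.

```python
import re

# Match dmraid-assembled firmware RAID sets under /dev/mapper by vendor
# prefix: pdc_ (Promise), isw_ (Intel), ddf1_ (SNIA DDF).
DMRAID_SET = re.compile(r"^/dev/mapper/(pdc|isw|ddf1)_\w+$")

def dmraid_sets(device_paths):
    # Keep only device-mapper nodes that look like firmware RAID sets;
    # /dev/mapper/control and plain disks are filtered out.
    return [p for p in device_paths if DMRAID_SET.match(p)]

print(dmraid_sets(["/dev/sda",
                   "/dev/mapper/pdc_xxxxxxx",
                   "/dev/mapper/control"]))
# -> ['/dev/mapper/pdc_xxxxxxx']
```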
Great, glad the patch worked for you!
I can take a look at adding support for the 32-bit-only Promise FastTrak controller. If you boot up the 32-bit bionic build, please collect the exact same rescuezillapy logs (no need to worry about the screenshots). As long as you start Backup mode and go, I believe, one page in, all the information around lsblk and blkid should be there.
But I can't promise support for that custom hardware; my main worry is how widely used it is. I already have limited time and energy to develop Rescuezilla, so I like to focus on the highest-impact bugs and features. But I'll take a look.
The fix for this issue is available now as part of the "Weekly" Rolling Release (2023-11-26), so you can use it without needing the manual workaround.
The changes in this 2023-11-26 rolling release have already been thoroughly tested to the level of a normal Rescuezilla release, and would have been released as Rescuezilla v2.5, but there are a few preexisting bugs impacting Rescuezilla v2.4.2 and earlier that I'm trying to fix before the actual v2.5 is released.