I downloaded the latest Rescuezilla and created a system disk clone of my Rocky Linux 9.2 system. When the clone was ready I simply swapped the disks, attempting to boot from the clone... and then swapped back to the original.
As a result, I am facing serious trouble: both the clone and the source(!!!) refuse to boot, reporting problems and ending up in emergency mode.
The problems claimed are:
- x86/CPU: SGX disabled by BIOS
- i8042: can't read CTR while initializing i8042
- integrity: Problem loading X.509 certificate -126
You are in emergency mode...
Examining the system journal reveals many more errors than the message above shows. Hence my questions:
1. Does the cloning procedure make any writes to the source drive?
2. Are the source and the clone absolutely identical, i.e. can their behavior differ when swapped?
Thank you!
Mike
There are two emergency modes for Linux: GRUB rescue mode (when the GRUB bootloader can't find the partition it needs during the initial stages of booting) and systemd emergency mode (when Linux can't find a partition it needs to continue booting).
It sounds like you may be facing the latter here, given the Linux kernel warnings. On Linux, systemd reads the file /etc/fstab to determine which partitions to mount and where.
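To make that concrete, here is a rough, illustrative sketch of what /etc/fstab typically looks like on a default Rocky Linux 9 install with LVM; the UUIDs and the rl volume group name below are placeholders, not values read from your system:

/dev/mapper/rl-root                         /          xfs   defaults                    0 0
UUID=AAAA-BBBB                              /boot/efi  vfat  umask=0077,shortname=winnt  0 2
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   /boot      xfs   defaults                    0 0
/dev/mapper/rl-home                         /home      xfs   defaults                    0 0
/dev/mapper/rl-swap                         none       swap  defaults                    0 0

If any of those entries refers to a UUID or device that is not actually present at boot, systemd drops to emergency mode.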
The Rescuezilla frequently asked questions page (https://rescuezilla.com/help) has some more information on fstab:
For modern systemd Linux environments, the partitions listed in the /etc/fstab file must be present for the system to complete booting. This means if the hard drive environment does not match exactly, the system will enter Rescue Mode until the /etc/fstab is modified to only refer to the drives which are present in the system.
Fixing a machine in this situation requires logging in at the Rescue Mode prompt, remounting the root partition as read/write, modifying /etc/fstab to remove the line of the partition which is no longer present, then remounting the root partition as read-only before hard rebooting the machine. There are plenty of guides on the internet to help fix this particular issue. If you need further help, post on the support forum and you will receive detailed assistance for as long as it takes to fix your issues.
Please note, in some older Linux distributions it's not the partition's UUID that is used by /etc/fstab but the partition's device node (e.g. /dev/sdc1). This means the Linux boot is expecting a hard drive connected to an identically numbered SATA port! But don't worry, any problems can be resolved by modifying /etc/fstab. As mentioned, if you need more help please ask for it on the support forum.
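For reference, a minimal sketch of that procedure from the emergency-mode prompt might look like the following; the editor and the exact entries to fix will differ on your machine, so treat it as illustrative only:

mount -o remount,rw /     # make the root filesystem writable
blkid                     # list the UUIDs of the partitions that are actually present
vi /etc/fstab             # remove or correct any entry that blkid does not account for
mount -o remount,ro /     # put root back to read-only
reboot -f                 # hard reboot, as the FAQ describes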
Disk swaps causing an /etc/fstab mismatch are a relatively common failure mode for Linux users. Rescuezilla doesn't change UUIDs (unique identifiers) on either the source or the destination during backup, clone or restore.
Any modification of the source disk during a backup or clone operation is considered a bug. Rescuezilla v2.4.2 does modify NTFS partitions during backup and clone operations, but I can't imagine that corner case impacting a Linux environment, and it does not appear to be the root cause of the critical Windows bug described in #446, which is unlikely to be related to your issue here anyway.
If a clone fails to boot, I generally recommend booting Rescuezilla again, opening GParted, and taking a cursory look at your disks to confirm all partitions on the source and destination look healthy; then closing Rescuezilla and opening the partitions in the file manager to confirm everything looks good. Side note: RedHat/CentOS/Rocky Linux environments can use advanced "LVM" disk layouts, which Rescuezilla supports with logic very similar to Clonezilla's.
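If you're comfortable in a terminal, a quick equivalent sanity check from the Rescuezilla live environment is something like the following (lsblk and blkid are standard util-linux tools, so they should be available there):

lsblk -f       # tree of disks and partitions with filesystem types, labels and UUIDs
sudo blkid     # UUIDs and filesystem types as the kernel sees them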
I suspect those Linux kernel warnings were already being emitted by your kernel before the clone, but perhaps not displayed for long when your system boots normally.
Hello, and thank you for your detailed response.
It's been several days since I received it - unfortunately I was so busy with other projects that I could not return to this one. So today I would like to try to fix things. However, since I am new to this, I might (and most certainly will) ask stupid questions, sorry!
As far as I understood, I have to boot Rescuezilla again, mount the root partition as R/W and fix the fstab file. I am a bit confused, because you said one would need to remove the line for the partition that is not present, but since the disk I am trying to boot from is the clone, shouldn't it rather be "replacing the previous UUIDs with the new ones" - so as to preserve the disk structure?
The remaining actions (remounting the root as read-only and rebooting) are trivial...
Thank you for your help!
Mike Faynberg
Thank you for the response! I am still confused (sorry - all this is really new to me, so I have to learn each and every step from scratch :( :()
As far as I can understand, I have to edit the clone's /etc/fstab file, replacing the UUIDs currently present (copied from the source HDD) with the actual UUIDs belonging to the clone disk.
What I did was boot from the Rescuezilla flash drive and look at the relevant (as I understand it) information on the clone disk. What I found is that the clone contains three partitions:
- FileSystem Partition 1 (629 MB, FAT32);
- FileSystem Partition 2 (1.1 GB, XFS);
- Partition 3 (999 GB, LVM2 PV)
I will attach the corresponding screenshots. While I could mount Partition 2 and see its UUID (as you can see in the screenshot), I do not see a way to mount Partition 3...
Just in case, I am also attaching the file manager's view of Partition 2 - I am curious whether those initramfs* files could be a problem.
Another screenshot shows the terminal window with the output of the "mount" command (is it useful?).
At this point I am stuck and feel lost. Could you please give me detailed (as much as possible, given my incompetence level :( ) instructions on how to proceed with making this clone bootable?
Thank you so much!
Mike
For each disk there are two partitions, plus partition 3, which is the LVM ("Logical Volume Manager") container partition.
The second partition appears healthy, assuming there is some free space available. The multiple initramfs, vmlinuz etc. files correspond to the different Linux kernels that are installed.
A bunch of old entries is totally expected when a long-lived Linux machine gets updated over the months and years. They correspond to the GRUB boot menu options (maybe hidden behind 'Advanced') that allow you to boot into older Linux kernels if something goes wrong during an update. But that's not the problem here.
From your SDD1.PNG screenshot I can see your system displays the LVM logical volumes of both cloned disks:
/dev/rl00/swap (the swap volume -- you won't be able to see files in here; it just matters that it exists)
/dev/rl00/root 75GB (containing your Rocky Linux root filesystem -- so folders like bin, opt)
/dev/rl00/home 907GB (containing your home folder, which is likely mounted at /home in your running system)
I additionally see /dev/rl/swap, /dev/rl/root and /dev/rl/home.
/dev/rl00 corresponds to your cloned disk and /dev/rl is your original. The 00 is something Linux adds to make the names unique.
This is enough evidence to suggest Rescuezilla successfully copied your LVM (Logical Volume Manager) structures across. After Rescuezilla does this, it copies the filesystems.
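If you want to double-check that from a terminal, the LVM command-line tools in the Rescuezilla live environment can list the volume groups and logical volumes directly (the rl / rl00 names are simply the ones visible in your screenshots):

sudo vgs     # one row per volume group, e.g. rl and rl00
sudo lvs     # the logical volumes (root, home, swap) and their sizes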
That step may not have worked, because the file manager in "FileManager.PNG" weirdly shows two 1.1GB partitions but no 629MB FAT partition, and doesn't show the 75GB and 907GB filesystems. This is strange.
It may be worth using your operating system to validate that you can mount each partition. You can do this from a terminal, for example:
mount /dev/rl/home /mnt   # mount the logical volume at /mnt
ls /mnt                   # check the expected files and folders are there
umount /mnt               # unmount before moving on
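As a sketch of a fuller check on the clone -- assuming its volume group really is rl00 and the 629MB FAT (EFI) partition is, say, /dev/sda1; substitute whatever lsblk shows on your machine:

sudo vgchange -ay rl00            # activate the clone's volume group so /dev/rl00/* paths exist
sudo mount /dev/rl00/root /mnt    # root filesystem: expect bin, etc, opt, var ...
ls /mnt && sudo umount /mnt
sudo mount /dev/rl00/home /mnt    # home filesystem: expect your user's folder
ls /mnt && sudo umount /mnt
sudo mount /dev/sda1 /mnt         # the 629MB FAT (EFI) partition: expect an EFI folder
ls /mnt && sudo umount /mnt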
I need to mention that booting a system with two identical disks carrying identical UUIDs is not recommended. Linux can usually handle it, but you're generally safest booting with one disk at a time, ideally in the same SATA or M.2 slot so it enumerates in an identical way.
I'm not yet convinced the issue you're experiencing is a UUID issue. It's possible the clone failed (it would have displayed an error message).
You mentioned your source disk didn't boot at the start? I recommend as a first step removing the cloned disk and confirming that booting the source disk works. Then replace it with the cloned disk.
Boot Rescuezilla and validate that the home (907GB) and root (75GB) filesystems can be mounted, and ideally that 629MB FAT partition too. Then try booting from the cloned disk.
If the clone succeeded and you're simply doing a direct swap of the disks, you shouldn't need to make any changes to the UUIDs in the /etc/fstab file.
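If you do want to verify that rather than take it on faith, one rough way (run from the Rescuezilla live environment with only the clone attached, using whichever volume group name lvs reports) is to compare the clone's fstab against the UUIDs it actually carries:

sudo mount /dev/rl00/root /mnt    # or /dev/rl/root, depending on what lvs shows
grep -v '^#' /mnt/etc/fstab       # what the clone expects to mount
sudo blkid                        # what is actually present
sudo umount /mnt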
Thank you! I will try to read your recommendations carefully and follow them. However, I would first like to clear up a few misconceptions.
1. The system is pretty new - it was installed in October 2023 (or, if I am not being precise, it could have been September or November). Therefore the abundance of "remnants" still looks strange to me.
2. The second disk you can see in the SDD1 to SDD3 screenshots is an unrelated HDD. It is present in the system and occupies the second slot (hence being sdb), but is otherwise unused. I realize now that, to make things more visible, I should have removed it. But now you know what you are looking at: at the time I tried to boot from the clone - and took all those screenshots - I had two HDDs present: the clone and the, let's call it, "unused" disk.
3. What I also noticed (and I suspect this is how it should work): when I have both the source HDD and the clone HDD installed (that is, with 3 HDDs present), the system does not boot. I have to remove the clone in order to make it work - is that the intended behavior?
Thank you once again,
Mike
Thanks for the clarifications.
The Linux kernel boot file 'remnants' don't strike me as strange. Based on the timestamps in your FileManager.PNG screenshot there has been about one update each month, and importantly the version changes are very incremental -- it's still Linux kernel 5.14.0, but the part after the dash in the version string goes from 362.13.1 to 362.18.1 to 362.24.1. I'm fairly sure this is the normal release cadence for your Rocky Linux environment, but you can check that project for more information.
What I also noticed (and I suspect this is how it should work): when I have both the source HDD and the clone HDD installed (that is, with 3 HDDs present), the system does not boot. I have to remove the clone in order to make it work - is that the intended behavior?
Yes, if the cloned source disk and the destination are both inserted at the same time, it may cause issues, because the UUID unique identifiers can conflict and confuse the operating system or bootloader. Rescuezilla doesn't yet have a checkbox to regenerate UUIDs and reconfigure the clone automatically.
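A quick way to see such a conflict for yourself (a sketch; run it from any Linux live environment with both disks attached) is to list the filesystem UUIDs and look for values that appear twice:

sudo blkid -s UUID -o value | sort | uniq -d    # prints any UUID that occurs more than once
lsblk -o NAME,SIZE,FSTYPE,UUID                  # shows which partition carries which UUID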
This is how the Disks window looks when I leave only the clone inserted...
So now that you have removed the source disk and left the clone in, does it boot?
It's a bit weird to me that I see /dev/rl00/root, /dev/rl00/home and /dev/rl00/swap.
Based on your previous screenshot I would have expected these to show up as /dev/rl/root, /dev/rl/home and /dev/rl/swap.
It's possible that your clone has been created with rl00 instead of just rl. I don't recall ever seeing such an issue during a clone. If that's the case, it would be a bug in Rescuezilla around cloning these advanced LVM disks -- a nice find, if proven.
If so, one fix would be to edit /etc/fstab on the cloned disk and update any /dev/rl references to /dev/rl00 (if they exist -- the file may well use UUIDs rather than the dev paths). You may also need to update grub.cfg and then re-run update-grub (or your distribution's equivalent).
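As a rough sketch of what that route involves on a RHEL-family system such as Rocky Linux (where the update-grub equivalent is grub2-mkconfig): the device names below are hypothetical, it assumes the clone's volume group really is rl00, and it would be run from the Rescuezilla live environment with only the clone attached.

sudo vgchange -ay rl00                     # activate the clone's volume group
sudo mount /dev/rl00/root /mnt
sudo mount /dev/sda2 /mnt/boot             # the 1.1GB XFS /boot partition (confirm device names with lsblk)
sudo mount /dev/sda1 /mnt/boot/efi         # the 629MB FAT EFI partition
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys  /mnt/sys
sudo chroot /mnt
vi /etc/fstab                              # change any rl entries to rl00 (e.g. /dev/mapper/rl-root -> /dev/mapper/rl00-root), if present
vi /etc/default/grub                       # likewise fix any rd.lvm.lv=rl/... kernel arguments, if present
grub2-mkconfig -o /boot/grub2/grub.cfg     # regenerate the GRUB configuration
exit                                       # leave the chroot, then unmount everything and reboot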
But given how in-depth the fstab / GRUB route is, and given your report that the source disk works when no clone disk is inserted (as expected), another, easier fix would be to rerun the clone with a different tool (e.g. Clonezilla), which may not have the bug you may have discovered.
Let me know how it goes.