I'm using imDisk to create a RAM disk on Hyper-V as storage for VMs. This works (mostly) fine, but I'm having a couple of issues:
1) When I use a RAM disk on Hyper-V Server 2019, checkpoints cannot be created for VMs located on the RAM disk. The error is not helpful at all: "An error occurred while attempting to checkpoint the selected virtual machine(s). Checkpoint operation for 'test' failed. (Virtual machine ID <vm_guid>)".
This works fine on Hyper-V Server 2016.
2) The RAM disk persists in some weird state through host reboots. I'm not using the /P switch, so I assumed it shouldn't persist through a reboot, but it does: it's still eating RAM (driver-locked) but is not accessible, and I'm forced to unmount and recreate it first. On the other hand, when the /P switch is used, the drive persists, but also in an inaccessible state. This is the info after a reboot:
Is there a way to get more information about the disk when standard disk management tools do not work with it?
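When the Disk Management snap-in can't see the device, imdisk's own command-line listing may still report it. A minimal sketch (the unit number 0 is an assumption — take it from the output of the first command):

```shell
rem List all ImDisk device numbers known to the driver.
imdisk -l
rem Show details (size, options, image file if any) for a specific unit.
imdisk -l -u 0
```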
3) When a VM is moved to the RAM drive, it has to be started first in order for Hyper-V to be able to create a checkpoint (Hyper-V Server 2016), since Hyper-V sees the storage (VHDX) as located at \Device\ImDisk0\<path> instead of under the assigned drive letter (R:). Once the VM starts, the path gets updated to R:\<path>. I guess this is a Hyper-V issue, though?
The RAM disk is created using this command:
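(The command itself is missing from this copy of the post; purely as an illustration — sizes, drive letter and options below are placeholders, not the poster's actual values — an imdisk invocation of this shape might look like:)

```shell
rem Hypothetical example only -- the original command was not preserved.
rem -a attaches a new device, -s sets the size, -m assigns the mount point,
rem -o awe allocates locked physical memory, -p passes format parameters.
imdisk -a -s 20G -m R: -o awe -p "/fs:ntfs /q /y"
```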
Any way to fix or work around these problems?
Last edit: Abe 2020-01-08
I have never used Hyper-V, so it will be difficult for me to help you.
About the persistent ramdisk: it seems you are using the Fast Startup feature of Windows. With it, all the content of the kernel and of session 0 is written to disk during a shutdown (not a reboot) and restored at the next startup. That's why the ramdisks are saved.
The ramdisk content should be saved as well. I ran several tests, and even when using AWE, the content is preserved, so I cannot explain why your ramdisk is corrupted.
If you want to try to disable the Windows fast startup, use this command in an elevated command prompt:
powercfg /h off
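To check whether Fast Startup is actually enabled before disabling hibernation, the HiberbootEnabled registry value can be inspected first (assuming the standard location for this flag; 1 = enabled, 0 = disabled):

```shell
rem Query the Fast Startup flag, then disable hibernation,
rem which also turns Fast Startup off.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled
powercfg /h off
```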
My testing Hyper-V host is actually a virtual machine, so it doesn't even support hibernation, but I ran the powercfg /h off command just to be sure; the disk still persists through reboots.
There's also issue 4): when booting VMs located on the RAM disk, the boot gets stuck at 10% for about 20-60 s (normally this takes <10 s), then proceeds and boots normally. The 10% mark is described as: "Starting – 10% is when we try to allocate memory for a virtual machine. If you see a virtual machine spend a long time at 10% when starting – your system is running low on memory and it is taking us a while to gather all the memory for the virtual machine to start". However, this also happens on a server with >40 GB of free RAM (a physical Hyper-V host). Could it be caused by the way imdisk allocates RAM when using AWE?
If you're willing to spend some time on it, I can perform any diagnostic steps you need.
Last edit: Abe 2020-01-08
I've been doing this for years now and have never had any problems with it on any OS from 2008R2 to 2019, barring the obvious (RamDisk full, out of memory, etc.). You cannot completely run Hyper-V VMs from a RamDisk directly: first of all, it's not supported (of course), and secondly, it's not much use unless you very carefully persist the RamDisk through reboots. v77 is talking about Fast Startup, not hibernation.
Having said this, what you can do, and what actually works without problems, is to create a checkpoint while the VM is off, then redirect the snapshot and checkpoint files, including the resulting AVHDX, to the RamDisk. Doing this means the VM will read from disk, but writes go to the AVHDX on the RamDisk. For this to work without loss, you have to delete the checkpoint BEFORE any reboot/shutdown; otherwise you lose the AVHDX and it won't work again after the reboot, even if it's still there on the RamDisk.
If your Hyper-V host is a VM, i.e. you're using nested Hyper-V, you're much better off applying the above trick to the Hyper-V host VM itself rather than to the VMs inside it. That way you won't need to run anything from a RamDisk inside the VM, because the host's disk is coming from the AVHDX, which in turn is on the RamDisk. This gives you the best performance, much better than doing it from inside the VM. Secondly, since your Hyper-V host VM must use a fixed amount of memory (no dynamic memory), you won't be wasting any valuable RAM inside that VM, so you can use all of its memory for actually running the VMs themselves.
Best of all is to use a dynamic RamDisk on the physical machine, create a checkpoint of the Hyper-V host VM, and redirect the AVHDX to that dynamic RamDisk. This way you only "waste" space as the Hyper-V host VM writes to the AVHDX, so it "costs" you only as much memory on the physical machine as the Hyper-V host VM has written.
You shouldn't use AWE for a dynamic RamDisk, since you need an "overrun" buffer for when the RamDisk is about to become full, so the page file can jump in and help prevent the message that the RamDisk cannot write because it's full. Just don't forget to delete or revert the checkpoint before shutdown/reboot, otherwise you lose the host and whatever VMs are inside it.
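A sketch of that workflow with the standard Hyper-V PowerShell cmdlets (the VM name and R:\ paths are placeholders; note that on some Hyper-V versions the AVHDX is created next to the parent VHDX rather than in the configured checkpoint folder, so verify where it actually lands):

```shell
# Point the VM's checkpoint files at the RAM disk, take the checkpoint
# while the VM is off, and merge it back before any reboot/shutdown.
Set-VM -VMName "test" -SnapshotFileLocation "R:\Checkpoints"
Checkpoint-VM -Name "test" -SnapshotName "ram-base"
# ... run the VM; writes accumulate in the AVHDX on the RAM disk ...
Remove-VMCheckpoint -VMName "test" -Name "ram-base"   # merges changes back to disk
```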
Wizard, I understand your way of running it, but we're simply doing something different. The test Hyper-V VM is not for production; it's just a test machine where I did some imdisk-related tests.
The real machines are of course physical, and I don't actually need RAM disk persistence through host reboots; I was just trying to get clarification on the inconsistent behavior.
The RAM disk hosts many small, short-lived VMs that are frequently reset (to a base checkpoint) and often discarded and recreated, so it's preferable to have everything in RAM. It's also well managed (monitored), so the RAM disk won't run out of space.
To summarize, I'd of course prefer it if v77 could fix all the issues described above (where possible from imDisk's side), but the two issues that matter most to me are the HV2019 checkpoint failure and the slow boot. I cannot change how the service is designed, so while I appreciate your input, it doesn't help me much.
Hyper-V requires a real disk to run VMs from, so in your case you'd be better off using a different RamDisk, like StarWind's. It provides a ramdisk that is presented as a real (SCSI) drive, and Hyper-V will have fewer issues with it in your scenario.