[Heavy I/O on one storage server.]
How does disk I/O performance compare when VMs use LVM storage natively on the hypervisor, versus VMs accessing LVM over NFS or even a dedicated iSCSI-type storage server?
[WAN and LAN Speeds?]
How does the time it takes a VM to answer an HTTP request from native hypervisor LVM compare to a VM reading from an iSCSI drive before answering the request?
[My Case Scenario]
I figure that running 40 VMs, each with its own LVM VG on its own hypervisor, is better than 40 VMs all accessing the same single LVM/NFS or iSCSI storage server?
Can someone elaborate? Is there a decrease in performance using iSCSI versus native storage, and if so, by how much? Obviously I didn't take into account NetApp filers, Fibre Channel, and super-expensive storage hardware. I am talking about modest enterprise server hardware, say entry-level IBM and HP blades?
I do not have numbers for you, but I am happy to share my experiences, especially with network-booted systems.
Performance basically depends on your storage type; e.g. with NFS, every bit (read/write) goes over the wire immediately.
Using e.g. iSCSI with a regular filesystem on it (ext3/4), lots of reads will actually be served from the system's memory because of caching. Performance of network-attached storage of course always depends on network bandwidth and latency.
Another factor is whether you are using an underlying cluster filesystem or something like DRBD. Some of these will not acknowledge a write until the bits are also stored on the second box. This may cause especially write delays.
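The caching and write-commit effects above are easy to demonstrate with a minimal sketch (local temp files only; fsyncing after every write here stands in for a storage layer that must confirm each write remotely before returning, while the buffered case stands in for writes absorbed by the OS cache):

```python
import os
import tempfile
import time

def timed_writes(path, n, block, sync_each):
    """Write n blocks to path; if sync_each is True, fsync after every
    write, mimicking storage that must commit each write before returning."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(block)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())
    return time.perf_counter() - start

block = b"x" * 4096  # one 4 KiB block
with tempfile.TemporaryDirectory() as tmp:
    buffered = timed_writes(os.path.join(tmp, "a"), 200, block, False)
    synced = timed_writes(os.path.join(tmp, "b"), 200, block, True)
    print(f"buffered: {buffered:.4f}s, per-write fsync: {synced:.4f}s")
```

On typical hardware the per-write fsync run is orders of magnitude slower, which is exactly the gap you feel when every write has to cross the network before it is acknowledged.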
- prefer block-device storage (e.g. iSCSI or SAN) over network filesystems
- avoiding complexity in your storage layer helps you scale
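To put rough numbers on the latency question, here is a back-of-the-envelope sketch; the 0.1 ms disk service time and 0.2 ms LAN round trip are illustrative assumptions, not measurements:

```python
def max_sync_iops(service_ms, network_rtt_ms=0.0):
    """Ceiling on synchronous (queue-depth-1) IOPS when every I/O
    pays the device service time plus one network round trip."""
    return 1000.0 / (service_ms + network_rtt_ms)

# Assumed figures for illustration only:
local = max_sync_iops(0.1)        # local LVM: no network hop
iscsi = max_sync_iops(0.1, 0.2)   # same disk behind iSCSI on a 0.2 ms LAN
print(f"local: {local:.0f} IOPS, iscsi: {iscsi:.0f} IOPS")
```

Even a fast LAN hop can dominate the per-I/O cost for small synchronous writes, although higher queue depths and caching narrow the gap considerably in practice.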
many thanks + have a nice day,
Project Manager openQRM