vmimagemanager is a Python script for managing a small cluster of libvirt virtual machines on a single host, driven by single commands from the Unix command line.
It is intended for scripted VM manipulation: it automates launching and shutting down virtual machine images, mounting the VM file systems, extracting and inserting artifacts, and snapshotting and restoring images. This style of operating-system virtualization has been useful at work in high-throughput server deployment.
vmimagemanager was originally written for Xen, prior to 0.1.0, when it was ported to libvirt controlling KVM; in the future it may support more open platforms, including Xen, KVM and VirtualBox.
The user interface was designed to complement the xm series of commands, but it will be extended, since the xm commands are Xen-specific.
virt-manager can be used alongside vmimagemanager, and the two complement each other: virt-manager for one-off work, vmimagemanager for clusters.
So far it is optimized for repeatedly restarting a virtual machine slot with a reset image, using a single command line.
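As a sketch of what such a reset involves at the libvirt level, the following uses the libvirt Python bindings; the function name reset_slot and the domain name "slot1" are illustrative, not part of vmimagemanager's actual interface:

```python
def reset_slot(conn, name):
    """Force a domain off if it is running, then boot it again.

    `conn` is a libvirt connection object; `name` is the domain name.
    destroy() is a hard power-off, which is acceptable here because
    the image is about to be reset anyway.
    """
    dom = conn.lookupByName(name)
    if dom.isActive():
        dom.destroy()      # hard power-off
    dom.create()           # boot the domain again
    return dom

# Typical use, assuming a running libvirtd with a "slot1" domain defined:
#   import libvirt
#   conn = libvirt.open("qemu:///system")
#   reset_slot(conn, "slot1")
```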
- Works with KVM through libvirt.
- Works with local storage.
- Simple Slot and Virtual Machine model of deployment.
- Per-slot and per-host configuration with defaulting, so you only need to specify the image directory once.
- Speeds up importing and exporting images to tgz archives or rsync directories for archival.
- Retrieves artifacts from, or inserts artifacts into, the file system (e.g. x509 certificates, SSH and Apache keys).
- Speedy frequent redeploys with rsync.
- Extracts or overlays archived directories onto images.
- Defaults and per-slot configuration.
- Manages mounting a domain's file systems while the VM is down, and unmounting them when the domain is launched.
- Manages exporting images in tar.gz format.
- Lists Free Slots.
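The tar.gz import/export step can be sketched with Python's standard tarfile module; the function names and paths below are illustrative, not vmimagemanager's actual API:

```python
import tarfile
from pathlib import Path

def export_image(image_path, archive_path):
    """Pack a VM disk image into a .tar.gz archive for off-line storage."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(image_path, arcname=Path(image_path).name)

def import_image(archive_path, dest_dir):
    """Unpack an archived image back into a slot's image directory."""
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(dest_dir)
```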
- Through libvirt, support could be extended to the Xen hypervisor on Linux and Solaris hosts, and on Linux to QEMU, KVM, LXC, OpenVZ, User Mode Linux, and VirtualBox.
- Only works with libvirt/KVM at the moment.
- Only works with local storage at the moment.
- Image extraction and insertion work when each VM has a RAW virtual disk that can be read with kpartx.
- Image extraction and insertion also work when each VM's image is a block device (a hard disk partition or logical volume); this is a forthcoming feature to support Xen para-virtualization.
- Works with xfs/ext3 at the moment, and possibly other file systems.
- Requires extra work on misconfigured systems.
- Only tested on Debian Lenny and Sid, Fedora, and Scientific Linux (binary-compatible with Red Hat Enterprise Linux).
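For RAW images, the kpartx-based mounting that extraction and insertion rely on boils down to a few commands. The sketch below only builds the command lists (running them needs root and a real image), and the /dev/mapper/loop0pN device name is illustrative, since kpartx decides which loop device to allocate:

```python
def mount_commands(image, mountpoint, partition=1):
    """Commands (root required) to expose the partitions of a RAW disk
    image via kpartx and mount one of them.  Built but not executed;
    the loop0 device name depends on what kpartx actually allocates."""
    return [
        ["kpartx", "-av", image],                              # map partitions
        ["mount", "/dev/mapper/loop0p%d" % partition, mountpoint],
    ]

def umount_commands(image, mountpoint):
    """Reverse the steps above: unmount, then drop the partition mappings."""
    return [
        ["umount", mountpoint],
        ["kpartx", "-dv", image],
    ]
```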
Virtualization hosts run a fixed number of "slots" upon which virtual machines can be started. The rationale for this is based on the work of many groups in HTC (High Throughput Computing (1)): the number of jobs per host seems at best N+1:N proportional to the number of CPUs/cores, though this varies with the I/O and CPU requirements of the jobs. Secure services require certificates, and fully automatic secure certificate generation seemed too complex; on this reasoning the slot and virtual machine model was adopted. This script is never going to be a complete solution to virtualization on a batch queue; jobs that require low-latency communication or heavy multi-threaded optimization might be better suited to waiting for virtualization to mature in these areas.
In testing, timings for shutting down, reinstalling with rsync, and rebooting a vanilla Scientific Linux system (a Red Hat Enterprise Linux binary-compatible OS) were in the region of 30-40 seconds for SL3 and 40-50 seconds for SL4; your performance may vary. At home I also run an ultra-portable laptop whose slower USB-attached hard drives cannot come near this I/O performance, so the shutdown, rsync, boot-up cycle takes around 70-80 seconds. I am sure that, even with appropriate (and user-friendly) error handling, and even with enhancements like upstart in Ubuntu, boot-up would still dominate execution time.
Virtualization penalties seem to lie in disk I/O for both Xen and KVM (todo: reference the HEPIX talks). Multi-core CPU performance may even be enhanced; there are no convincing explanations for this phenomenon yet. I have seen next to zero data on virtualized network latency for KVM or Xen, but Xen's bandwidth seems near native, and KVM has a slight outbound overhead while inbound network connectivity runs at native speed.
vmimagemanager should always work alongside other virtualization tools.
vmimagemanager is meant to be one part of a larger system. It will never be a complete virtualization solution with x509 certificate generation and network management, such as cloud computing, but rather something that makes writing such a system easy.
The aim is to be as user-friendly as possible while providing a simple command-line interface to KVM virtualization, and in the future to regain Xen compatibility on Linux and potentially Solaris.
This script is also intended to be placed on an LHC worker node alongside SGE (Sun Grid Engine).
vmimagemanager may be extended to support iSCSI, VirtualBox, and Solaris, but stability and functionality will be placed before diversity of platforms.
The batch queue integration scripts may need a separate SourceForge project, and will hopefully be completely rewritten, as at the moment they are not very intelligent.
Other plans include a platform- and application-flexible virtualization abstraction for a build system, so that software projects can easily build RPMs or Debian debs with virtual build servers, and so trace buildability bugs (such a thing already exists for Debian).
What we are not doing with vmimagemanager.
Why use the vmimagemanager script to manage virtual machines?
It is useful for backing up virtual machine systems off-line, extracting and inserting directories, and resetting the operating system using rsync: just the sort of thing you might do on a batch queue that runs jobs in virtual machines, or when destroying and reinstalling virtual machines to test complex server applications for installation bugs.
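The rsync-based reset amounts to mirroring a pristine master tree over the slot's file system. A minimal sketch, with hypothetical paths and flags (vmimagemanager's actual invocation may differ):

```python
def rsync_reset_cmd(master_dir, slot_dir):
    """Build the rsync invocation for resetting a slot's file system from
    a pristine master copy; --delete removes anything the previous job
    left behind.  The trailing slash on the source makes rsync copy the
    directory's contents rather than the directory itself."""
    return ["rsync", "-aH", "--delete", master_dir.rstrip("/") + "/", slot_dir]

# e.g., with the slot's image mounted at /mnt/slot1:
#   import subprocess
#   subprocess.check_call(rsync_reset_cmd("/images/masters/sl4", "/mnt/slot1"))
```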
This is a simple script, meant to be useful and user-friendly, and to save time doing the sort of things you need to do with virtual machines when you have one virtual server.
This tool was written to be combined with the xm tools from Xen; it depends on them and complements them until a better solution for end users arrives.
This tool is also meant to be used by other automation tools, such as automated build and test setups, where simplicity of managing virtual machine slots from your own scripts is the priority, not a GUI.
(1) HTC and HPC: High Performance Computing became well defined and included low-latency communication (2) between different computers' CPUs; from this definition came the need to define High Throughput Computing, which differs in the number of synchronization operations the computers need to perform.
(2) Xen used to increase TCP connection latency over Ethernet by 60 us through its bridge; if this is addressed by a low-latency networking standard, or potentially a PCI-E bus-sharing system, near-native speed should make it suitable for HPC job platform isolation as well.