From: <ope...@li...> - 2010-03-25 02:34:22
|
Greetings,

----- ope...@li... wrote:
> I'm failing to create any VMs using KVM, but this is my first time creating a KVM VM, so be gentle.
>
> I've installed OpenNode on two Dell 710 servers, then added the Clustering and Cluster Storage groups via yum, followed by adding them into a cluster using luci/ricci. This all worked fine. I've just ensured that they are both up to date with the repos using "yum update".
>
> I ran the following command (similar to that suggested on the OpenNode web site); "samba" is a local host that has CentOS on it and our kickstart profiles.
>
> [root@prod1 ~]# virt-install --connect=qemu:///session --name=admin1 \
>     --ram=4092 --arch=x86_64 --vcpus=4 --file=/vz/admin1.img --file-size=8 \
>     --bridge=vmbr0 --vnc --os-type=linux --os-variant=rhel5 \
>     --location=http://samba/kickstart/CentOS/5/os/x86_64 \
>     --extra-args="ks=http://samba/kickstart/profile/ogweb.cf" --accelerate --hvm
> ERROR    Host does not support virtualization type 'hvm' for arch 'x86_64'
>
> I have tried variations on that; basically it doesn't appear to be of the opinion that KVM is present and works.
>
> I guess I'm missing something really obvious? But I can't see what.

First verify that your CPU has hardware support for virtualization per the KVM FAQ:
http://www.linux-kvm.org/page/FAQ#How_can_I_tell_if_I_have_Intel_VT_or_AMD-V.3F

Assuming so, make sure that VT is enabled in the BIOS. I'm guessing it might be turned off. If you still have a problem after enabling it, check for a BIOS update.

TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
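The KVM FAQ check referenced above can be scripted directly; a minimal sketch (Linux-only, since it reads /proc/cpuinfo, and written so it reports rather than errors when no flag is found):

```shell
# Check /proc/cpuinfo for the hardware virtualization flags KVM needs:
# vmx = Intel VT-x, svm = AMD-V. An empty result means either the CPU
# lacks the feature, it is disabled in the BIOS, or this is not Linux.
flag=$(grep -E -o -m1 'vmx|svm' /proc/cpuinfo 2>/dev/null | head -n1)
if [ -n "$flag" ]; then
    echo "hardware virtualization flag present: $flag"
else
    echo "no vmx/svm flag found - enable VT in the BIOS or check the CPU model"
fi
```

If the flag is present but virt-install still reports that 'hvm' is unsupported, the next thing to check is the BIOS setting, as suggested above.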
From: <ope...@li...> - 2010-03-25 01:06:44
|
Hi,

I'm failing to create any VMs using KVM, but this is my first time creating a KVM VM, so be gentle.

I've installed OpenNode on two Dell 710 servers, then added the Clustering and Cluster Storage groups via yum, followed by adding them into a cluster using luci/ricci. This all worked fine. I've just ensured that they are both up to date with the repos using "yum update".

I ran the following command (similar to that suggested on the OpenNode web site); "samba" is a local host that has CentOS on it and our kickstart profiles.

[root@prod1 ~]# virt-install --connect=qemu:///session --name=admin1 \
    --ram=4092 --arch=x86_64 --vcpus=4 --file=/vz/admin1.img --file-size=8 \
    --bridge=vmbr0 --vnc --os-type=linux --os-variant=rhel5 \
    --location=http://samba/kickstart/CentOS/5/os/x86_64 \
    --extra-args="ks=http://samba/kickstart/profile/ogweb.cf" --accelerate --hvm
ERROR    Host does not support virtualization type 'hvm' for arch 'x86_64'

I have tried variations on that; basically it doesn't appear to be of the opinion that KVM is present and works.

I guess I'm missing something really obvious? But I can't see what.

Cheers, Ed.
From: <ope...@li...> - 2010-03-25 00:48:45
|
Hi,

It appears that our squid must have got its knickers in a twist and cached different versions of the repomd.xml containing the hashes for the files. I had to manually purge the cache of all the relevant files, then everything was OK again.

Cheers, Ed.

On 24/03/2010, at 11:14 PM, ope...@li... wrote:
> If this happened yesterday (or at least before 8:00 this morning) it might have been resolved by the repository update performed this morning.
>
> Danel Ahman
>
> On 24.03.2010, at 14:48, ope...@li... wrote:
>> Hi Ed,
>>
>> I'm currently unable to reproduce the problem. I installed build 24 from the ISO and did yum clean all and yum update - it worked for me. If you could provide the exact steps leading to the problem, I could investigate this further.
>>
>> Anybody else having this issue?
>>
>> Cheers,
>>
>> ----------------------------------------------
>> Andres Toomsalu, an...@ac...
From: <ope...@li...> - 2010-03-24 13:14:44
|
If this happened yesterday (or at least before 8:00 this morning) it might have been resolved by the repository update performed this morning.

Danel Ahman

On 24.03.2010, at 14:48, ope...@li... wrote:
> Hi Ed,
>
> I'm currently unable to reproduce the problem. I installed build 24 from the ISO and did yum clean all and yum update - it worked for me. If you could provide the exact steps leading to the problem, I could investigate this further.
>
> Anybody else having this issue?
>
> Cheers,
>
> ----------------------------------------------
> Andres Toomsalu, an...@ac...
>
> ope...@li... wrote:
>> Hi,
>>
>> I'm trialling OpenNode on a couple of servers, all looking pretty good so far, and have noticed that the OpenNode yum metadata appears to be broken.
>>
>> If you try to clean out the existing metadata and then do an update, yum will fail with a complaint that the primary.xml.gz metadata file does not match its checksum.
>>
>> I have tried to clean the existing metadata (yum clean metadata, as well as yum clean all) to no avail.
>>
>> FYI, I'm getting OpenNode working with clustering so that I can share my SAN iSCSI filesystem across two servers.
>>
>> Cheers, Ed.
>>
>> ------------------------------------------------------------------------------
>> Download Intel® Parallel Studio Eval
>> Try the new software tools for yourself. Speed compiling, find bugs
>> proactively, and fine-tune applications for parallel performance.
>> See why Intel Parallel Studio got high marks during beta.
>> http://p.sf.net/sfu/intel-sw-dev
>> _______________________________________________
>> OpenNode-users mailing list
>> Ope...@li...
>> https://lists.sourceforge.net/lists/listinfo/opennode-users
From: <ope...@li...> - 2010-03-24 12:52:46
|
Hi Ed,

I'm currently unable to reproduce the problem. I installed build 24 from the ISO and did yum clean all and yum update - it worked for me. If you could provide the exact steps leading to the problem, I could investigate this further.

Anybody else having this issue?

Cheers,

----------------------------------------------
Andres Toomsalu, an...@ac...

ope...@li... wrote:
> Hi,
>
> I'm trialling OpenNode on a couple of servers, all looking pretty good so far, and have noticed that the OpenNode yum metadata appears to be broken.
>
> If you try to clean out the existing metadata and then do an update, yum will fail with a complaint that the primary.xml.gz metadata file does not match its checksum.
>
> I have tried to clean the existing metadata (yum clean metadata, as well as yum clean all) to no avail.
>
> FYI, I'm getting OpenNode working with clustering so that I can share my SAN iSCSI filesystem across two servers.
>
> Cheers, Ed.
From: <ope...@li...> - 2010-03-24 06:30:44
|
Hi Ed,

Could you please look up and report three things:

a) OpenNode release number -> cat /etc/opennode-release
b) OpenNode rpm version -> yum list installed | grep opennode
c) What's in /etc/yum.repos.d -> ls -l /etc/yum.repos.d/

Cheers,
--
----------------------------------------------
Andres Toomsalu, an...@ac...

On 24.03.2010, at 4:20, ope...@li... wrote:
> Hi,
>
> I'm trialling OpenNode on a couple of servers, all looking pretty good so far, and have noticed that the OpenNode yum metadata appears to be broken.
>
> If you try to clean out the existing metadata and then do an update, yum will fail with a complaint that the primary.xml.gz metadata file does not match its checksum.
>
> I have tried to clean the existing metadata (yum clean metadata, as well as yum clean all) to no avail.
>
> FYI, I'm getting OpenNode working with clustering so that I can share my SAN iSCSI filesystem across two servers.
>
> Cheers, Ed.
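The three lookups above can be gathered in one short script; a sketch that degrades gracefully on hosts where the files or tools are absent (the paths are exactly the ones requested, nothing extra assumed):

```shell
# Gather the requested diagnostics in one pass; anything missing on this
# host is reported instead of aborting, so the script is safe anywhere.
report=""
for f in /etc/opennode-release /etc/redhat-release; do
    if [ -r "$f" ]; then
        report="$report$f: $(head -n1 "$f"); "
    else
        report="$report$f: not present; "
    fi
done
echo "$report"
# The rpm and repo-config checks only make sense on the node itself:
if command -v yum >/dev/null 2>&1; then
    yum list installed 2>/dev/null | grep -i opennode || true
fi
ls -l /etc/yum.repos.d/ 2>/dev/null || echo "/etc/yum.repos.d/ not found"
```

Pasting the combined output back to the list gives all three answers in one message.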
From: <ope...@li...> - 2010-03-24 02:36:05
|
Hi,

I'm trialling OpenNode on a couple of servers, all looking pretty good so far, and have noticed that the OpenNode yum metadata appears to be broken.

If you try to clean out the existing metadata and then do an update, yum will fail with a complaint that the primary.xml.gz metadata file does not match its checksum.

I have tried to clean the existing metadata (yum clean metadata, as well as yum clean all) to no avail.

FYI, I'm getting OpenNode working with clustering so that I can share my SAN iSCSI filesystem across two servers.

Cheers, Ed.
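The checksum complaint means the downloaded primary.xml.gz no longer hashes to the value recorded in the repo's repomd.xml (typically a stale cached copy, as later messages in this thread confirm). A self-contained sketch of the comparison yum performs, using a throwaway file rather than a real repository:

```shell
# Sketch: reproduce yum's integrity check by hashing a file and comparing
# it with a recorded value, the way repomd.xml stores a checksum for
# primary.xml.gz. A throwaway temp file stands in for the real download.
sample=$(mktemp)
printf 'stand-in for primary.xml.gz\n' > "$sample"
recorded=$(sha256sum "$sample" | awk '{print $1}')   # what repomd.xml would record
actual=$(sha256sum "$sample" | awk '{print $1}')     # what was actually downloaded
if [ "$recorded" = "$actual" ]; then
    result="checksum OK"
else
    result="checksum MISMATCH - stale or corrupted copy (e.g. from a caching proxy)"
fi
echo "$result"
rm -f "$sample"
```

On a real repo you would hash the downloaded primary.xml.gz and compare it against the checksum entry in repomd.xml; a mismatch after `yum clean all` points at something between you and the repo, such as a caching proxy.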
From: <ope...@li...> - 2010-03-08 18:47:02
|
Hi,

Kernel sources are available from the OpenVZ project - http://wiki.openvz.org/Download/kernel/rhel5/028stab068.3

We do not alter this kernel in any way - we just provide a copy of the kernel rpm in the OpenNode repo for a more controlled upgrade cycle (updating ovzkernel on OpenNode once the kvm module build is done).

Regards,
--
----------------------------------------------
Andres Toomsalu, an...@ac...

On 08.03.2010, at 19:27, ope...@li... wrote:
> hi,
>
> I just visited http://opennode.activesys.org/CentOS/5/opennode/SRPMS/ but I cannot see the kernel sources. Where do I get them?
>
> thanks,
> tm
From: <ope...@li...> - 2010-03-08 17:27:20
|
Hi,

I just visited http://opennode.activesys.org/CentOS/5/opennode/SRPMS/ but I cannot see the kernel sources. Where do I get them?

Thanks,
tm
From: <ope...@li...> - 2010-03-05 20:15:26
|
--
----------------------------------------------
Andres Toomsalu, an...@ac...

On 04.03.2010, at 0:26, ope...@li... wrote:
> As long as we are making a wish list,
>
> Another thing Proxmox does not have is the ability to do high availability. RHEV does have this ability.

VM high availability is definitely on our future roadmap.

> The backup capabilities of Proxmox are OK; the ability to make a snapshot of a running box is helpful, but it needs to be paired with another tool like BackupPC to be useful. The snapshots are great, but if you have a large amount of data to be backed up the snapshots take forever. To get around this I take a snapshot once a week and use BackupPC to fill in the gaps.

We are going to include Safekeep backup with OpenNode soon (safekeep.sf.net). For exporting VMs, the Proxmox vzdump utility is already included.

> Although if you implement image management (Scott's first suggestion) you should be able to just copy what has changed on each snapshot, saving time and disk space, and possibly eliminate the need for BackupPC - especially if those backup images could be mounted read-only so you could restore specific files.
>
> As for storage, shared storage seems to be the way to go. You could just make an OpenFiler box as a virtual machine and connect via iSCSI to that if you did not already have iSCSI or something else in place.

We want to provide local storage management first - shared storage is possible, but for now you have to configure it manually. Shared storage is not always good - it can be too slow or too complicated - so we want to offer simple alternatives (such as local storage) as well.

> Thanks,
> _
> /-\ ndrew
>
> On Wed, Mar 3, 2010 at 2:55 PM, <ope...@li...> wrote:
>> [...]
>
> --
> _
> /-\ ndrew Niemantsverdriet
> Academic Computing
> (406) 238-7360
> Rocky Mountain College
> 1511 Poly Dr.
> Billings MT, 59102
From: <ope...@li...> - 2010-03-05 20:07:05
|
--
----------------------------------------------
Andres Toomsalu, an...@ac...

On 03.03.2010, at 23:55, ope...@li... wrote:
> Greetings,
>
> I'm curious as to what the feature set of OpenNode's upcoming web-based management system will be. I'm looking for something between that of Proxmox VE and Red Hat Enterprise Linux Virtualization.

FuncMan (the upcoming web-based management system) and the main idea of OpenNode is to keep things as simple as Proxmox, but to use the stable RHEL/CentOS codebase and implement some things that Proxmox is missing (for example softraid, a stable RHEL kernel, management through libvirt where possible, etc.).

> === Proxmox VE ===
> It is dead easy to install and use... and it works well... but it lacks the following features:
>
> 1) KVM VM image management - It would be nice if one could create a VM and use the storage as a base image for other VMs.

We are already working on KVM templates to use with OpenNode. We had to create our own template package format (it will be a simple tar) to contain, besides the VM filesystem, other needed things - like a VM configuration template.

> 2) Tiered user access - It would be nice to give ownership of a VM to a user and let them have some web-based management of it

This can happen in the future, but it should probably be implemented as a separate web panel for VM users. It can be done after FuncMan is ready.

> 3) Power management - It would be nice to have some power policies so one could load balance VMs across multiple physical nodes, or consolidate VMs if one wanted to conserve power and migrate machines to turn off unneeded physical machines when load is low

We currently have no plans for automated VM migration/consolidation - it's a complicated feature with many pros and cons and not on our near-future roadmap, unless there is a lot of interest in it.

> 4) VDI features - It would be nice to have a connection broker for VDI users. Red Hat open sourced SPICE in January but so far no one has been able to incorporate it.

Mhm - I need to get more familiar with this.

> == RHEV for Servers ===
> It claims to have all of the features missing in Proxmox VE BUT it has a number of major annoyances:
>
> 1) Windows-based management app - To install their management app you have to put it on a Windows 2000 Server. It uses Microsoft SQL Server as a storage back end, Microsoft IIS as the web server, and .NET technology for the management app. It requires Microsoft Internet Explorer to connect to the management app. Red Hat is in the process of completely rewriting the management app as a JBoss application, but it will be some time before that is available.

I thought that Red Hat was cooking oVirt as their next-gen manager. Still no OpenVZ support.

> 2) The management app is very error prone and buggy - Granted, the hardware configuration / network I used it on wasn't ideal, but it should have worked much better than it did. I was using NFS storage and I was constantly having issues with storage disappearing. Talking to friends, it seems to be pretty painful to use with iSCSI targets too. The current management app is just too fragile and error prone to be useful... at least for me.
>
> 3) The feature set is there, but the complexity it adds seems to almost require a very expensive hardware configuration to take advantage of those features.

oVirt also has somewhat too-complex requirements to start using in smaller datacenters - complex separate storage pool setups, etc.

> === OpenNode ===
>
> KVM and LVM provide a lot of functionality... but how does the shared storage need to be implemented to use it? Will one need a clustered filesystem in a clustered machine environment... or will something as open as DRBD work?

The first goal is to implement local storage management (LVM LVs, file-based images) on OpenNode. After that, probably NFS, iSCSI, etc. for shared storage. It should already be possible to manually configure OpenNode for NFS, iSCSI, or SAN - like any other RHEL/CentOS host.

> What features will OpenNode offer that Proxmox VE is missing and that RHEV has? Red Hat has not released RHEV for Desktops yet but that is on the roadmap. Will OpenNode offer anything in the way of VDI? Will it offer SPICE remote display capability?

OpenNode is currently concentrating on server virtualization, as VDI is still a very much evolving concept. Also, KVM IO performance is still not ready for serious production use - imagine several Windows VMs used as desktops... I think a maximum of 2 VMs per CPU core is not really appealing, and disk IO is way too slow in our tests. For virtualizing Linux desktops we use an LTSP OpenVZ template, for example.

One thing we have in mind is to use FuncMan not only for VM (virtualization) management but also for managing services inside VMs or physical hosts (Samba, Apache, DHCP, etc.). So FuncMan could serve as a general management interface for the whole server infrastructure.

> I'm sure a lot of people want to know the answers to these questions. :)
>
> Thanks in advance and TYL,
> --
> Scott Dowdle
> 704 Church Street
> Belgrade, MT 59714
> (406)388-0827 [home]
> (406)994-3931 [work]
From: <ope...@li...> - 2010-03-04 00:04:03
|
Added some instructions on how to manage KVM and OpenVZ VMs locally on OpenNode. The OpenVZ section is still far from complete and does not yet cover all the steps for a fully working CT configuration (vzsplit etc.) - coming soon, probably.

--
----------------------------------------------
Andres Toomsalu, an...@ac...
From: <ope...@li...> - 2010-03-03 23:19:17
|
Greetings,

----- ope...@li... wrote:
> As there are 2 different hypervisors on OpenNode - then just to be sure execute:
>
> virsh --connect qemu:///session start kvmdemo1
>
> You can now also use virt-manager - as the bridge conf is stored in the domain xml file already.

Yes, passing the proper domain to virsh and virt-viewer and everything else I have tried seems to work. With virt-manager I did not need to pass the domain... it appears to assume qemu and connects.

The problem is that virt-manager lists no devices to pick from on the Network step. By default it selects "Virtual network" with the default (which is for NAT only). You can select "Shared physical device" rather than "Virtual network"... but in that case there are no devices listed in the device dropdown menu... and there is no way to manually type in vmbr0. I'm not sure what is causing that functionality to break. If anyone figures it out, please let me know.

TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
From: <ope...@li...> - 2010-03-03 22:51:50
|
As long as we are making a wish list,

Another thing Proxmox does not have is the ability to do high availability. RHEV does have this ability.

The backup capabilities of Proxmox are OK; the ability to make a snapshot of a running box is helpful, but it needs to be paired with another tool like BackupPC to be useful. The snapshots are great, but if you have a large amount of data to be backed up the snapshots take forever. To get around this I take a snapshot once a week and use BackupPC to fill in the gaps. Although if you implement image management (Scott's first suggestion) you should be able to just copy what has changed on each snapshot, saving time and disk space, and possibly eliminate the need for BackupPC - especially if those backup images could be mounted read-only so you could restore specific files.

As for storage, shared storage seems to be the way to go. You could just make an OpenFiler box as a virtual machine and connect via iSCSI to that if you did not already have iSCSI or something else in place.

Thanks,
_
/-\ ndrew

On Wed, Mar 3, 2010 at 2:55 PM, <ope...@li...> wrote:
> [...]

--
_
/-\ ndrew Niemantsverdriet
Academic Computing
(406) 238-7360
Rocky Mountain College
1511 Poly Dr.
Billings MT, 59102
From: <ope...@li...> - 2010-03-03 22:14:20
|
Danel,

Maybe we should include xorg-x11-xinit in the next OpenNode release - just to provide a better virt-manager / virt-viewer experience? I will document virt-viewer usage through X-over-SSH on the OpenNode website.

--
----------------------------------------------
Andres Toomsalu, an...@ac...

On 04.03.2010, at 0:03, ope...@li... wrote:
> Andres,
>
> Oh, one last comment... because X11 apps tunneled over SSH work after installing xorg-x11-xinit (or maybe it is xorg-x11-xauth, since both got installed)... I was able to leave off the --noautoconsole and virt-viewer just popped up on my local machine... allowing me to skip the ssh / vnc setup that is a little more complicated. Other users might find that helpful.
>
> TYL,
> --
> Scott Dowdle
> 704 Church Street
> Belgrade, MT 59714
> (406)388-0827 [home]
> (406)994-3931 [work]
From: <ope...@li...> - 2010-03-03 22:06:24
|
Danel,

----- ope...@li... wrote:
> Try to run
>
> virsh -c qemu:///system start kvmdemo1
>
> This should select the correct libvirt driver to connect to. Without giving the driver name, virsh might try to connect to other drivers.

Thanks, that is exactly what I needed. I was busy writing more emails to the mailing list and hadn't gotten that far. :) Works now!

TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
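To avoid typing the -c flag on every invocation, libvirt also honours the LIBVIRT_DEFAULT_URI environment variable. A minimal sketch; which URI is correct (qemu:///system vs qemu:///session) depends on where the guest was defined, and the kvmdemo1 name is simply the one used in this thread:

```shell
# Point virsh/virt-install at the system-level qemu driver by default,
# so a plain "virsh start kvmdemo1" no longer needs -c/--connect.
export LIBVIRT_DEFAULT_URI=qemu:///system
echo "default libvirt URI: $LIBVIRT_DEFAULT_URI"
# One-off equivalent, as used in the thread:
#   virsh -c qemu:///system start kvmdemo1
```

Putting the export line in the shell profile makes it persistent for the management user.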
From: <ope...@li...> - 2010-03-03 22:03:59
|
Andres,

Oh, one last comment... because X11 apps tunneled over SSH work after installing xorg-x11-xinit (or maybe it is xorg-x11-xauth, since both got installed)... I was able to leave off the --noautoconsole and virt-viewer just popped up on my local machine... allowing me to skip the ssh / vnc setup that is a little more complicated. Other users might find that helpful.

TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
From: <ope...@li...> - 2010-03-03 21:55:49
|
Greetings,

I'm curious as to what the feature set of OpenNode's upcoming web-based management system will be. I'm looking for something between that of Proxmox VE and Red Hat Enterprise Linux Virtualization.

=== Proxmox VE ===
It is dead easy to install and use... and it works well... but it lacks the following features:

1) KVM VM image management - It would be nice if one could create a VM and use the storage as a base image for other VMs.

2) Tiered user access - It would be nice to give ownership of a VM to a user and let them have some web-based management of it.

3) Power management - It would be nice to have some power policies so one could load balance VMs across multiple physical nodes, or consolidate VMs if one wanted to conserve power and migrate machines to turn off unneeded physical machines when load is low.

4) VDI features - It would be nice to have a connection broker for VDI users. Red Hat open sourced SPICE in January but so far no one has been able to incorporate it.

=== RHEV for Servers ===
It claims to have all of the features missing in Proxmox VE, BUT it has a number of major annoyances:

1) Windows-based management app - To install their management app you have to put it on a Windows 2000 Server. It uses Microsoft SQL Server as the storage back end, Microsoft IIS as the web server, and .NET technology for the management app. It requires Microsoft Internet Explorer to connect to the management app. Red Hat is in the process of completely rewriting the management app as a JBoss application, but it will be some time before that is available.

2) The management app is very error prone and buggy - Granted, the hardware configuration / network I used it on wasn't ideal, but it should have worked much better than it did. I was using NFS storage and I was constantly having issues with storage disappearing. Talking to friends, it seems to be pretty painful to use with iSCSI targets too. The current management app is just too fragile and error prone to be useful... at least for me.

3) The feature set is there, but the complexity it adds seems to almost require a very expensive hardware configuration to take advantage of those features.

=== OpenNode ===

KVM and LVM provide a lot of functionality... but how does the shared storage need to be implemented to use it? Will one need a clustered filesystem in a clustered machine environment... or will something as open as DRBD work?

What features will OpenNode offer that Proxmox VE is missing and that RHEV has? Red Hat has not released RHEV for Desktops yet, but that is on the roadmap. Will OpenNode offer anything in the way of VDI? Will it offer SPICE remote display capability?

I'm sure a lot of people want to know the answers to these questions. :)

Thanks in advance and TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
From: <ope...@li...> - 2010-03-03 21:32:40
|
As there are 2 different hypervisors on OpenNode - just to be sure, execute:

virsh --connect qemu:///session start kvmdemo1

You can now also use virt-manager - as the bridge conf is stored in the domain xml file already.
--
----------------------------------------------
Andres Toomsalu, an...@ac...

On 03.03.2010, at 23:26, ope...@li... wrote:

> Andres,
>
> ----- ope...@li... wrote:
>> I have done KVM VM installs through virt-install, specifying
>> the bridge (vmbr0 that is) - this has worked so far - so I
>> expect it to work out for you also. I was looking into virt-manager
>> not offering the vmbr0 choice - it seems to boil down to hald and dbus
>> problems - but so far I have not cracked the issue yet.
>
> Thanks for the reply.
>
> Just to make sure my changes to the system didn't put it in an unknown state, I did a fresh install of OpenNode from the 1.0 beta media. I did a yum upgrade and then I installed a few packages that make me happy like mc, screen, xorg-x11-xinit (so GUI X apps can easily be run over an ssh tunnel), etc. I did not mess with the network configuration.
>
> As expected, virt-manager will run but it still can't see the vmbr0 device and only offers NAT-based network devices. That's a shame, so if you do figure out how to fix it, let me know. If it doesn't involve adding too many additional packages, I'd say it would be a worthwhile addition to the functionality.
>
> Per your advice, I used the virt-install method. I was able to install the machine, and giving it the vmbr0 device, it was able to use a public IP just fine. Mission accomplished.
>
> Now though, post-install... after the VM is shut down, I'm not sure how to start it back up. virsh start vmname doesn't work. virsh gives the following error:
>
> [root@rhev-h2 ~]# virsh start kvmdemo1
> error: failed to get domain 'kvmdemo1'
> error: Domain not found
>
> I'm used to virsh just working on Fedora 12, but I'm guessing it is because Fedora assumes qemu whereas with OpenNode it can be either an OpenVZ container or a KVM VM... so it doesn't assume a particular domain and that has to be specified?
>
> I'm going through the man page on virsh to see if I can figure it out... but I thought it didn't hurt to ask. I'm sure it is something pretty easy.
>
> TYL,
> --
> Scott Dowdle
> 704 Church Street
> Belgrade, MT 59714
> (406)388-0827 [home]
> (406)994-3931 [work]
>
> ------------------------------------------------------------------------------
> Download Intel® Parallel Studio Eval
> Try the new software tools for yourself. Speed compiling, find bugs
> proactively, and fine-tune applications for parallel performance.
> See why Intel Parallel Studio got high marks during beta.
> http://p.sf.net/sfu/intel-sw-dev
> _______________________________________________
> OpenNode-users mailing list
> Ope...@li...
> https://lists.sourceforge.net/lists/listinfo/opennode-users
|
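Since OpenNode ships two hypervisors, one quick way to see which libvirt driver knows about a given domain is to list domains under each connection URI explicitly. A sketch, using the VM name from this thread (the openvz:/// URI assumes the libvirt OpenVZ driver is built in, as it is on OpenNode):

```shell
# KVM/QEMU domains (running and shut off) under the system-wide driver:
virsh --connect qemu:///system list --all

# ...and under the per-user session driver:
virsh --connect qemu:///session list --all

# OpenVZ containers live under a separate driver:
virsh --connect openvz:///system list --all

# Once the right URI is known, start the domain through it:
virsh --connect qemu:///session start kvmdemo1
```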
From: <ope...@li...> - 2010-03-03 21:32:04
|
Hi Scott,

Try to run:

virsh -c qemu:///system start kvmdemo1

This should select the correct libvirt driver to connect to. Without giving the driver name, virsh might try to connect to other drivers.

Danel

On 03.03.2010, at 23:26, ope...@li... wrote:

> Andres,
>
> [quoted message trimmed - Scott's full message appears elsewhere in this archive]
|
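If typing the connection URI every time gets tedious, virsh also honors an environment variable for the default connection, which makes the plain `virsh start` form work as expected. A sketch:

```shell
# Make the system-wide QEMU/KVM driver the default for this shell:
export VIRSH_DEFAULT_CONNECT_URI=qemu:///system

# Plain virsh commands now go to that driver without -c/--connect:
virsh list --all
virsh start kvmdemo1
```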
From: <ope...@li...> - 2010-03-03 21:26:50
|
Andres,

----- ope...@li... wrote:
> I have done KVM VM installs through virt-install, specifying
> the bridge (vmbr0 that is) - this has worked so far - so I
> expect it to work out for you also. I was looking into virt-manager
> not offering the vmbr0 choice - it seems to boil down to hald and dbus
> problems - but so far I have not cracked the issue yet.

Thanks for the reply.

Just to make sure my changes to the system didn't put it in an unknown state, I did a fresh install of OpenNode from the 1.0 beta media. I did a yum upgrade and then I installed a few packages that make me happy like mc, screen, xorg-x11-xinit (so GUI X apps can easily be run over an ssh tunnel), etc. I did not mess with the network configuration.

As expected, virt-manager will run but it still can't see the vmbr0 device and only offers NAT-based network devices. That's a shame, so if you do figure out how to fix it, let me know. If it doesn't involve adding too many additional packages, I'd say it would be a worthwhile addition to the functionality.

Per your advice, I used the virt-install method. I was able to install the machine, and giving it the vmbr0 device, it was able to use a public IP just fine. Mission accomplished.

Now though, post-install... after the VM is shut down, I'm not sure how to start it back up. virsh start vmname doesn't work. virsh gives the following error:

[root@rhev-h2 ~]# virsh start kvmdemo1
error: failed to get domain 'kvmdemo1'
error: Domain not found

I'm used to virsh just working on Fedora 12, but I'm guessing it is because Fedora assumes qemu whereas with OpenNode it can be either an OpenVZ container or a KVM VM... so it doesn't assume a particular domain and that has to be specified?

I'm going through the man page on virsh to see if I can figure it out... but I thought it didn't hurt to ask. I'm sure it is something pretty easy.

TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
|
From: <ope...@li...> - 2010-03-03 21:06:37
|
Scott,

I have done KVM VM installs through virt-install, specifying the bridge (vmbr0 that is) - this has worked so far - so I expect it to work out for you also. I was looking into virt-manager not offering the vmbr0 choice - it seems to boil down to hald and dbus problems - but so far I have not cracked the issue yet.

Regards,
--
----------------------------------------------
Andres Toomsalu, an...@ac...

On 03.03.2010, at 20:23, ope...@li... wrote:

> Danel,
>
> ----- ope...@li... wrote:
>> Just to mention. The libvirt community has a stand that scripts/configs
>> under /etc/libvirt should be edited through libvirt methods (virsh for
>> console access).
>>
>> Default networking can be changed with virsh the following way:
>> virsh -c qemu:///system
>> net-edit default
>> #change bridge name in the xml configuration that appears
>>
>> This should help with consistency as there are probably syntax checks
>> after editing and saving.
>
> Editing the default doesn't seem to help at all. Changing the bridge name would save fine but it would revert right back to the original after saving. The default also includes the NAT setup... and DHCP settings... so just changing the bridge device doesn't fix it.
>
> I'll try the virt-install method... but since it is using libvirt just like virt-manager, I'm not expecting any different results... but we'll see.
>
> TYL,
> --
> Scott Dowdle
> 704 Church Street
> Belgrade, MT 59714
> (406)388-0827 [home]
> (406)994-3931 [work]
|
From: <ope...@li...> - 2010-03-03 18:23:53
|
Danel,

----- ope...@li... wrote:
> Just to mention. The libvirt community has a stand that scripts/configs
> under /etc/libvirt should be edited through libvirt methods (virsh for
> console access).
>
> Default networking can be changed with virsh the following way:
> virsh -c qemu:///system
> net-edit default
> #change bridge name in the xml configuration that appears
>
> This should help with consistency as there are probably syntax checks
> after editing and saving.

Editing the default doesn't seem to help at all. Changing the bridge name would save fine but it would revert right back to the original after saving. The default also includes the NAT setup... and DHCP settings... so just changing the bridge device doesn't fix it.

I'll try the virt-install method... but since it is using libvirt just like virt-manager, I'm not expecting any different results... but we'll see.

TYL,
--
Scott Dowdle
704 Church Street
Belgrade, MT 59714
(406)388-0827 [home]
(406)994-3931 [work]
|
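A different approach worth noting for the stubborn "default" network: rather than editing it, remove it entirely, so virt-manager stops offering the NAT setup. A sketch; this assumes no existing VMs still reference the default network:

```shell
# Stop the NAT network that owns virbr0:
virsh -c qemu:///system net-destroy default

# Keep it from being recreated at the next libvirtd start:
virsh -c qemu:///system net-autostart --disable default

# Optionally remove its definition altogether:
virsh -c qemu:///system net-undefine default
```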
From: <ope...@li...> - 2010-03-03 12:09:27
|
Hi,

Just to mention: the libvirt community has a stand that scripts/configs under /etc/libvirt should be edited through libvirt methods (virsh for console access).

Default networking can be changed with virsh the following way:

virsh -c qemu:///system
net-edit default
#change bridge name in the xml configuration that appears

This should help with consistency as there are probably syntax checks after editing and saving.

Danel

On 03.03.2010, at 12:54, ope...@li... wrote:

> Hi Scott,
>
> No - don't edit /etc/sysconfig/network-scripts/ifcfg-venet0 - as there
> are probably better alternatives.
>
> You should be able to change the bridge on an existing VM also - just edit
> the VM's xml definition file:
> nano -w /etc/libvirt/qemu/VMNAME.xml
>
> #Replace the bridge name like this in the VM domain xml definition
> <source bridge='vmbr0'/>
>
> It seems that the default KVM bridge is defined here:
> nano -w /etc/libvirt/qemu/networks/default.xml
>
> You could try to replace the bridge name there also and see if a new KVM VM
> install with virt-manager now provides the right bridge.
> In virt-manager you should select something like "Shared physical
> device" (instead of "Virtual network") and there should be eth0 (vmbr0)
> then to select.
>
> If it's not possible to instruct virt-manager to use vmbr0 instead of
> virbr0 then better to use the virt-install cli utility to start the KVM VM
> install for now (in this example it's openfiler - replace the iso, disk image and
> VM name to suit your needs):
>
> virt-install --connect qemu:///session --name openfiler23 --ram 512
> --disk path=/storage/images/openfiler23.img,size=2 --network
> bridge:vmbr0 --vnc --os-type=linux --os-variant=rhel5 --cdrom
> /storage/iso/openfiler-2.3-x86_64-disc1.iso --accelerate --hvm
> --noautoconsole
>
> For attaching a vnc viewer do:
>
> #setup ssh tunnel for vnc
> ssh -L 5555:127.0.0.1:5900 ro...@vi...
>
> #on your desktop machine (this is an ubuntu example - change the vnc viewer
> name to what you have installed)
> xtightvncviewer localhost::5555
>
> Regards,
> ----------------------------------------------
> Andres Toomsalu, an...@ac...
>
> ope...@li... wrote:
>> Greetings,
>>
>> I'm trying to use virt-manager to create and manage KVM VMs on OpenNode. virt-manager doesn't seem to be aware that bridged networking is set up and defaults to providing private DHCP addresses to the VMs one creates. It seems that virt-manager is expecting one bridge setup in order to do public/static IPs and OpenNode sets up the bridging differently. Perhaps it is just a matter of using a particular bridge device name. On my OpenNode machine I have two bridge devices:
>>
>> vmbr0 - Has the public IP assigned by OpenNode
>> virbr0 - Has the private IP (192.168.122.1) set up by virt-manager
>>
>> At least I think that is what is going on.
>>
>> I realize that the OpenNode developers are working on a web-based management system... and that there are alternative management tools other than virt-manager... but it is installed and it would be nice if it was pre-configured in a working fashion.
>>
>> I was able to create a KVM VM just fine with virt-manager, it just has semi-functional networking because only private IPs and NATing work.
>>
>> I don't know enough about the underlying config of virt-manager to fix it... other than creating a bridge device named br0 and giving it the public IP of the machine... bypassing the bridge device that OpenNode has created/configured. I haven't done that yet but I assume it would work since that's what I do on other Red Hat-based distros with KVM/virt-manager (RHEL 5.4 and Fedora 9, 10, 11, and 12).
>>
>> I also created an OpenVZ container with my own OS Template (rather than using those provided by OpenNode) and it worked fine. I realize that if I try the fix mentioned in the previous paragraph I'd have to edit the /etc/sysconfig/network-scripts/ifcfg-venet0 file to tell it to use br0 instead of venet0.
>>
>> Any suggestions?
>>
>> TYL,
|
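The domain-XML edit Andres describes boils down to the `<interface>` element, and it can be done through libvirt itself rather than editing the file under /etc/libvirt directly, which keeps libvirt's in-memory view consistent (Danel's point above). A sketch; VMNAME is a placeholder and the MAC address shown is an example:

```shell
# Open the domain definition through libvirt (uses $EDITOR, validates on save):
virsh -c qemu:///system edit VMNAME

# The fragment to look for / change - 'vmbr0' is the OpenNode bridge:
#   <interface type='bridge'>
#     <mac address='52:54:00:00:00:01'/>
#     <source bridge='vmbr0'/>
#   </interface>
```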
From: <ope...@li...> - 2010-03-03 11:19:04
|
Hi Scott,

No - don't edit /etc/sysconfig/network-scripts/ifcfg-venet0 - as there are probably better alternatives.

You should be able to change the bridge on an existing VM also - just edit the VM's xml definition file:

nano -w /etc/libvirt/qemu/VMNAME.xml

#Replace the bridge name like this in the VM domain xml definition
<source bridge='vmbr0'/>

It seems that the default KVM bridge is defined here:

nano -w /etc/libvirt/qemu/networks/default.xml

You could try to replace the bridge name there also and see if a new KVM VM install with virt-manager now provides the right bridge. In virt-manager you should select something like "Shared physical device" (instead of "Virtual network") and there should be eth0 (vmbr0) then to select.

If it's not possible to instruct virt-manager to use vmbr0 instead of virbr0 then better to use the virt-install cli utility to start the KVM VM install for now (in this example it's openfiler - replace the iso, disk image and VM name to suit your needs):

virt-install --connect qemu:///session --name openfiler23 --ram 512 --disk path=/storage/images/openfiler23.img,size=2 --network bridge:vmbr0 --vnc --os-type=linux --os-variant=rhel5 --cdrom /storage/iso/openfiler-2.3-x86_64-disc1.iso --accelerate --hvm --noautoconsole

For attaching a vnc viewer do:

#setup ssh tunnel for vnc
ssh -L 5555:127.0.0.1:5900 ro...@vi...

#on your desktop machine (this is an ubuntu example - change the vnc viewer name to what you have installed)
xtightvncviewer localhost::5555

Regards,
----------------------------------------------
Andres Toomsalu, an...@ac...

ope...@li... wrote:
> Greetings,
>
> I'm trying to use virt-manager to create and manage KVM VMs on OpenNode. virt-manager doesn't seem to be aware that bridged networking is set up and defaults to providing private DHCP addresses to the VMs one creates. It seems that virt-manager is expecting one bridge setup in order to do public/static IPs and OpenNode sets up the bridging differently. Perhaps it is just a matter of using a particular bridge device name. On my OpenNode machine I have two bridge devices:
>
> vmbr0 - Has the public IP assigned by OpenNode
> virbr0 - Has the private IP (192.168.122.1) set up by virt-manager
>
> At least I think that is what is going on.
>
> I realize that the OpenNode developers are working on a web-based management system... and that there are alternative management tools other than virt-manager... but it is installed and it would be nice if it was pre-configured in a working fashion.
>
> I was able to create a KVM VM just fine with virt-manager, it just has semi-functional networking because only private IPs and NATing work.
>
> I don't know enough about the underlying config of virt-manager to fix it... other than creating a bridge device named br0 and giving it the public IP of the machine... bypassing the bridge device that OpenNode has created/configured. I haven't done that yet but I assume it would work since that's what I do on other Red Hat-based distros with KVM/virt-manager (RHEL 5.4 and Fedora 9, 10, 11, and 12).
>
> I also created an OpenVZ container with my own OS Template (rather than using those provided by OpenNode) and it worked fine. I realize that if I try the fix mentioned in the previous paragraph I'd have to edit the /etc/sysconfig/network-scripts/ifcfg-venet0 file to tell it to use br0 instead of venet0.
>
> Any suggestions?
>
> TYL,
|
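Before pointing VMs at vmbr0, it is worth confirming that the bridge actually exists and holds the host's IP; device names below follow the messages in this thread:

```shell
# Show all bridges and their enslaved interfaces; vmbr0 should list eth0,
# while virbr0 is libvirt's private NAT bridge (192.168.122.1):
brctl show

# Confirm the public IP sits on vmbr0, not on eth0 itself:
ip addr show vmbr0
```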