From: <ope...@li...> - 2015-12-20 02:10:57
|
Hi, I have installed OpenNode 6 but I cannot find any option for downloading a KVM template or selecting the KVM VM type. Any hint on how I can download a KVM template or select the KVM VM type will be appreciated. With Regards, Anselme N. |
From: <ope...@li...> - 2015-12-19 13:42:12
|
Hi, I'm guessing that the reason KVM templates are not shown there is that KVM support was not detected. Are you sure your host has hardware virtualization support enabled? http://www.cyberciti.biz/faq/linux-xen-vmware-kvm-intel-vt-amd-v-support/ But please note that KVM VM template support in OpenNode was a bit underdeveloped - we only offered some simple KVM templates, available from http://opennodecloud.com/templates/kvm/ However, you should be able to create your own templates with the opennode console (TUI). One more thing to note: we have no plans to develop and support OpenNode OS further (at least not without commercial contracts). Kind regards, -- Andres Toomsalu, an...@op... Founder, OpenNode LLC http://www.opennodecloud.com |
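As a quick check of the prerequisite mentioned above, the CPU feature flags can be inspected directly. A minimal sketch (the helper name `has_hw_virt` is ours, for illustration only):

```shell
# has_hw_virt reads cpuinfo-style text on stdin and succeeds if the
# CPU advertises vmx (Intel VT-x) or svm (AMD-V).
has_hw_virt() { grep -Eq '\b(vmx|svm)\b'; }

# On a real host:
#   has_hw_virt < /proc/cpuinfo && echo "hardware virtualization available"
# Even with the flag present, KVM is only usable once /dev/kvm exists
# (i.e. the kvm_intel or kvm_amd kernel module is loaded).
```

If the flag is missing, hardware virtualization is usually disabled in the BIOS/UEFI rather than absent from the CPU.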
From: <ope...@li...> - 2015-12-19 09:30:45
|
Hi, I have installed OpenNode 6 but I cannot find any option for downloading a KVM template or selecting the KVM VM type. As you can see in the pictures below, I am getting OpenVZ only. Any hint on how I can download a KVM template or select the KVM VM type will be appreciated. [image: Inline image 2] [image: Inline image 3] With Regards, Anselme N. |
From: <ope...@li...> - 2014-11-15 16:02:28
|
I guess that the TUI wants to read something from the CT file tree - but because of ploop it can't, as the CT filesystem is now encapsulated inside a disk image.

> ope...@li...
> 13. november 2014 18:25
> Never mind. I forgot to stop the container first. It /does/ show up as
> an option to create from, however once you fill out all of the new
> template data and get down to the OS template field, the cursor exits to
> the far right of the screen and you cannot input anything in the OS
> template field. Then the TUI says that it cannot create a template with
> white space:
>
> Ostemplate should not include white spaces. Got: 'Opening delta
> /storage/lo <-- believe this part of the message is truncated
> Adding delta dev=/dev/ploop53459
> img=/storage/local/vz/private/[Container ID]/root.hdd
>
> ope...@li...
> 13. november 2014 16:48
> That is correct. The VM's container does not even show up as an option
> to create from.
>
> Andres Toomsalu
> 13. november 2014 7:20
> Im pretty sure that creating template out of VM with TUI does not work
> with ploop - could you please try that also?
>
> ope...@li...
> 13. november 2014 0:18
> It would appear, without doing a ton of testing, that it worked as
> expected. The method was as follows:
>
> * prior to creating container with Opennode TUI, vi /etc/vz/vz.conf and
> change VE_LAYOUT=simfs to VE_LAYOUT=ploop
> * start the TUI and create a container using the CentOS 6 template
> * after the container is created and deployed, vzctl enter [Container
> ID] and did a yum update to ensure full network connectivity, which
> worked fine
> * exit out of the running container and check the contents of
> /storage/local/vz/private/[Container ID]; rather than a complete listing
> of the filesystem's directories, I see a root.hdd directory with the
> following contents:
>
> drwx------ 3 root root 4096 Nov 12 14:52 .
> drwxr-xr-x 3 root root 4096 Nov 12 14:51 ..
> -rw-r--r-- 1 root root 791 Nov 12 14:51 DiskDescriptor.xml
> -rw------- 1 root root 0 Nov 12 14:51 DiskDescriptor.xml.lck
> -rw------- 1 root root 1462763520 Nov 12 14:52 root.hdd
> drwx------ 2 root root 4096 Nov 12 14:51 root.hdd.mnt
>
> I have to fire up another server at home to test migration, but the
> above would indicate, I think, that the TUI did not balk at the ploop
> filesystem format.
>
> When I can, I'll fire up the other test server at home and do the test
> migrate and report back on that as well.
>
> Anything else that you would like to know about the results, please let
> me know.
>
> Thanks,
>
> ope...@li...
> 12. november 2014 23:46
> Sure. I've got a test machine on my home network that I can do this with
> and I'll report back.
-- Andres Toomsalu, an...@op... |
From: <ope...@li...> - 2014-11-14 17:52:08
|
On 11/14/2014 12:53 AM, ope...@li... wrote:
> We have been using shorewall iptables firewall on host side - and it
> has been working ok so far.
> Here is the howto:
> http://opennodecloud.com/opennode-os/2013/01/01/howto-firewall-support.html
> I think doing direct iptables rules were a bit too complicated and it
> was much easier to use shorewall rules compiler - than setup iptables
> rules manually.

So for others who actually want to use straight iptables on their hardware node, here is the process for doing so. In our case we /want/ to use straight iptables for the containers as well, so that we can manage both the hardware node AND the individual containers separately. While I appreciate the solution suggested, it was something we had already seen and discounted, because our knowledge base is in iptables itself.

The problem we ran into was that connection tracking would not work on the hardware node. This is because OpenVZ disables it, ruling it a performance issue for the hardware node. In our experience with other (Xen) virtualization platforms, the overhead of connection tracking is minimal. It /may/ be an issue on larger servers running several dozen containers, but not on a server running, say, 8 to 12 containers, which is our goal.

So, to the solution. Evaluate this based on your own environment; in other words, your mileage may vary:

* Edit /etc/modprobe.d/openvz.conf
* Change ip_conntrack_disable_ve0=1 to ip_conntrack_disable_ve0=0
* Reboot

Our iptables firewall looks something like this, with the DMZ and other specific IPs removed:

#!/bin/sh
#
#IPTABLES=/sbin/iptables
# Unless specified, the default for OUTPUT is ACCEPT
# The default for FORWARD and INPUT is DROP
#
echo " clearing any existing rules and setting default policy.."
iptables -F INPUT
iptables -P INPUT DROP
iptables -A INPUT -p udp -m udp --sport 123 --dport 123 -j ACCEPT
iptables -A INPUT -p tcp -m tcp -s [DMZ or management IP] --dport 22 -j ACCEPT
iptables -A INPUT -p icmp -m icmp -s 69.20.200.19 -j ACCEPT
# storage array below this line
iptables -A INPUT -p tcp -m tcp -s [some ip we use for backup] --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -m tcp -s [another ip we use for backup] --dport 22 -j ACCEPT
iptables -A INPUT -p udp -m udp -s [your dns server] --sport 53 -d 0/0 -j ACCEPT
iptables -A INPUT -p udp -m udp -s [your dns server 2] --sport 53 -d 0/0 -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset
iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable
# turn on this logging feature if you think something bad is happening
# logs to syslog
#iptables -A INPUT -j LOG --log-prefix "FIREWALL-bad input:"
# turn off explicit congestion notification
if [ -e /proc/sys/net/ipv4/tcp_ecn ]
then
    echo 0 > /proc/sys/net/ipv4/tcp_ecn
fi

--
Steven G. Spencer, Network Administrator
KSC Corporate - The Kelly Supply Family of Companies
Office 308-382-8764 Ext. 1131 Mobile 402-765-8010 |
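The three-step module-option change above can be scripted. A hedged sketch (the helper name `enable_ve0_conntrack` is ours; verify the config file path on your install before relying on it):

```shell
# Hypothetical helper: flip ip_conntrack_disable_ve0 from 1 to 0 in a
# modprobe config file so connection tracking works on the hardware
# node (VE0). A reboot is still required for the option to take effect.
enable_ve0_conntrack() {
    # $1 is the modprobe config file, e.g. /etc/modprobe.d/openvz.conf
    sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' "$1"
}

# Usage on a real host (then reboot):
#   enable_ve0_conntrack /etc/modprobe.d/openvz.conf
```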
From: <ope...@li...> - 2014-11-14 06:53:40
|
We have been using the Shorewall iptables firewall on the host side, and it has been working OK so far. Here is the howto: http://opennodecloud.com/opennode-os/2013/01/01/howto-firewall-support.html I found writing direct iptables rules a bit too complicated; it was much easier to use the Shorewall rules compiler than to set up iptables rules manually.

> ope...@li...
> 14. november 2014 3:00
> All,
>
> OK, First, let me say I've read all of the firewall posts for openvz and
> opennode. I have a specific goal. I want to run a straight iptables
> firewall on the hardware node and then run individual straight iptables
> firewalls on the containers. I've done this on our LAN opennode server,
> as it did not require a firewall on the hardware node, just on the
> container. I modprobed ip_conntrack and xt_state on the hardware node,
> passed the specific modules needed to the configuration file for the
> container I was working on, setup the iptables rules and was up and
> running.
>
> Now, I'm simply trying to use a straight iptables script on an opennode
> server with a publicly facing IP address, and I cannot get connection
> tracking to work at all. It shows the ESTABLISHED, RELATED rule when I
> do an iptables -L -n, but it does not pass traffic to even be able to
> ping google, let alone download templates with the TUI. The container on
> the inside will be used by an individual in our company who is used to
> making his own firewall changes, and I don't want to upset the apple
> cart by setting up the hardware node as a virtual hardware firewall for
> all of the containers.
>
> My default INPUT policy is DROP, which I would expect to work with the
> ESTABLISHED, RELATED rule. I should also say that if I change the
> default input policy to ACCEPT, I can ping google and download
> templates, but that obviously leaves the server vulnerable.
>
> Any ideas what I'm missing here?
>
> Thanks in advance!
-- Andres Toomsalu, an...@op... http://www.opennodecloud.com |
From: <ope...@li...> - 2014-11-13 23:00:19
|
All,

OK, first let me say I've read all of the firewall posts for OpenVZ and OpenNode. I have a specific goal: I want to run a straight iptables firewall on the hardware node and then run individual straight iptables firewalls on the containers. I've done this on our LAN OpenNode server, as it did not require a firewall on the hardware node, just on the container. I modprobed ip_conntrack and xt_state on the hardware node, passed the specific modules needed to the configuration file for the container I was working on, set up the iptables rules, and was up and running.

Now I'm simply trying to use a straight iptables script on an OpenNode server with a publicly facing IP address, and I cannot get connection tracking to work at all. It shows the ESTABLISHED,RELATED rule when I do an iptables -L -n, but it does not pass traffic - I cannot even ping Google, let alone download templates with the TUI. The container on the inside will be used by an individual in our company who is used to making his own firewall changes, and I don't want to upset the apple cart by setting up the hardware node as a virtual hardware firewall for all of the containers.

My default INPUT policy is DROP, which I would expect to work with the ESTABLISHED,RELATED rule. I should also say that if I change the default INPUT policy to ACCEPT, I can ping Google and download templates, but that obviously leaves the server vulnerable.

Any ideas what I'm missing here?

Thanks in advance!

--
Steven G. Spencer, Network Administrator
KSC Corporate - The Kelly Supply Family of Companies
Office 308-382-8764 Ext. 1131 Mobile 402-765-8010 |
From: <ope...@li...> - 2014-11-13 14:25:49
|
On 11/13/2014 07:48 AM, ope...@li... wrote: > On 11/12/2014 10:20 PM,ope...@li... wrote: >> >Im pretty sure that creating template out of VM with TUI does not work >> >with ploop - could you please try that also? > That is correct. The VM's container does not even show up as an option > to create from. Never mind. I forgot to stop the container first. It /does/ show up as an option to create from, however once you fill out all of the new template data and get down to the OS template field, the cursor exits to the far right of the screen and you cannot input anything in the OS template field. Then the TUI says that it cannot create a template with white space: Ostemplate should not include white spaces. Got: 'Opening delta /storage/lo <-- believe this part of the message is truncated Adding delta dev=/dev/ploop53459 img=/storage/local/vz/private/[Container ID]/root.hdd -- -- Steven G. Spencer, Network Administrator KSC Corporate - The Kelly Supply Family of Companies Office 308-382-8764 Ext. 1131 Mobile 402-765-8010 |
From: <ope...@li...> - 2014-11-13 13:48:15
|
On 11/12/2014 10:20 PM, ope...@li... wrote: > Im pretty sure that creating template out of VM with TUI does not work > with ploop - could you please try that also? That is correct. The VM's container does not even show up as an option to create from. -- -- Steven G. Spencer, Network Administrator KSC Corporate - The Kelly Supply Family of Companies Office 308-382-8764 Ext. 1131 Mobile 402-765-8010 |
From: <ope...@li...> - 2014-11-13 04:20:55
|
Im pretty sure that creating template out of VM with TUI does not work with ploop - could you please try that also? > ope...@li... > <mailto:ope...@li...> > 13. november 2014 1:18 > It would appear, without doing a ton of testing, that it worked as > expected. The method was as follows: > > * prior to creating container with Opennode TUI, vi /etc/vz/vz.conf and > change VE_LAYOUT=simfs to VE_LAYOUT=ploop > * start the TUI and create a container using the CentOS 6 template > * after the container is created and deployed, vzctl enter [Container > ID] and did a yum update to ensure full network connectivity, which > worked fine > * exit out of the running container and check the contents of > /storage/local/vz/private/[Container ID] rather than a complete listing > of the filesystems directories, I see a root.hdd directory with the > following contents: > > drwx------ 3 root root 4096 Nov 12 14:52 . > drwxr-xr-x 3 root root 4096 Nov 12 14:51 .. > -rw-r--r-- 1 root root 791 Nov 12 14:51 DiskDescriptor.xml > -rw------- 1 root root 0 Nov 12 14:51 DiskDescriptor.xml.lck > -rw------- 1 root root 1462763520 Nov 12 14:52 root.hdd > drwx------ 2 root root 4096 Nov 12 14:51 root.hdd.mnt > > I have to fire up another server at home to test migration, but the > above would indicate, I think, that the TUI did not balk at the ploop > filesystem format. > > When I can, I'll fire up the other test server at home and do the test > migrate and report back on that as well. > > Anything else that you would like to know about the results, please let > me know. > > Thanks, > > ope...@li... > <mailto:ope...@li...> > 12. november 2014 23:46 > > Sure. I've got a test machine on my home network that I can do this with > and I'll report back. > > Andres Toomsalu <mailto:an...@op...> > 12. november 2014 23:05 > Hi Steven, > > Ploop package is included in the OpenNode 6 yum repos for some time > now - and also some work was done to support veth devices management > in case of ploop VMs. 
> However - TUI has not been fully validated with ploop by us yet - not > sure if there are any serious issues or not. You could enable ploop in > /etc/vz/vz.conf and give it a try - and give some feedback about it :) > > > > Cheers, > ope...@li... > <mailto:ope...@li...> > 12. november 2014 21:45 > Greetings, > > I've been reading a bit on ploop vs simfs. Does anyone have any > experience with this on OpenNode and if so, what are the caveats, > pit-falls or reasons for or against using ploop? > > Thanks, > -- <http://www.getpostbox.com>---------------------------------------------- Andres Toomsalu,an...@op... <mailto:an...@op...> |
From: <ope...@li...> - 2014-11-12 21:18:13
|
On 11/12/2014 02:46 PM, ope...@li... wrote:
> On 11/12/2014 02:05 PM, ope...@li... wrote:
>> Hi Steven,
>>
>> Ploop package is included in the OpenNode 6 yum repos for some time
>> now - and also some work was done to support veth devices management
>> in case of ploop VMs.
>> However - TUI has not been fully validated with ploop by us yet - not
>> sure if there are any serious issues or not. You could enable ploop in
>> /etc/vz/vz.conf and give it a try - and give some feedback about it :)
> Sure. I've got a test machine on my home network that I can do this with
> and I'll report back.

It would appear, without doing a ton of testing, that it worked as expected. The method was as follows:

* prior to creating the container with the OpenNode TUI, vi /etc/vz/vz.conf and change VE_LAYOUT=simfs to VE_LAYOUT=ploop
* start the TUI and create a container using the CentOS 6 template
* after the container is created and deployed, vzctl enter [Container ID] and do a yum update to ensure full network connectivity, which worked fine
* exit out of the running container and check the contents of /storage/local/vz/private/[Container ID]; rather than a complete listing of the filesystem's directories, I see a root.hdd directory with the following contents:

drwx------ 3 root root 4096 Nov 12 14:52 .
drwxr-xr-x 3 root root 4096 Nov 12 14:51 ..
-rw-r--r-- 1 root root 791 Nov 12 14:51 DiskDescriptor.xml
-rw------- 1 root root 0 Nov 12 14:51 DiskDescriptor.xml.lck
-rw------- 1 root root 1462763520 Nov 12 14:52 root.hdd
drwx------ 2 root root 4096 Nov 12 14:51 root.hdd.mnt

I have to fire up another server at home to test migration, but the above would indicate, I think, that the TUI did not balk at the ploop filesystem format.

When I can, I'll fire up the other test server at home, do the test migrate, and report back on that as well.

Anything else that you would like to know about the results, please let me know.

Thanks,

--
Steven G. Spencer, Network Administrator
KSC Corporate - The Kelly Supply Family of Companies
Office 308-382-8764 Ext. 1131 Mobile 402-765-8010 |
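The root.hdd check described above can be turned into a one-line test. A sketch, assuming the stock private-area layout from this thread (the helper name `is_ploop_ct` is ours):

```shell
# is_ploop_ct succeeds if a container's private area uses the ploop
# layout, i.e. contains root.hdd/DiskDescriptor.xml rather than a
# plain simfs directory tree.
is_ploop_ct() {
    test -f "$1/root.hdd/DiskDescriptor.xml"
}

# Usage on a real host:
#   is_ploop_ct /storage/local/vz/private/101 && echo "ploop layout"
```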
From: <ope...@li...> - 2014-11-12 20:46:49
|
On 11/12/2014 02:05 PM, ope...@li... wrote: > Hi Steven, > > Ploop package is included in the OpenNode 6 yum repos for some time > now - and also some work was done to support veth devices management > in case of ploop VMs. > However - TUI has not been fully validated with ploop by us yet - not > sure if there are any serious issues or not. You could enable ploop in > /etc/vz/vz.conf and give it a try - and give some feedback about it :) Sure. I've got a test machine on my home network that I can do this with and I'll report back. -- -- Steven G. Spencer, Network Administrator KSC Corporate - The Kelly Supply Family of Companies Office 308-382-8764 Ext. 1131 Mobile 402-765-8010 |
From: <ope...@li...> - 2014-11-12 20:30:28
|
Hi Steven, The ploop package has been included in the OpenNode 6 yum repos for some time now, and some work was also done to support veth device management for ploop VMs. However, the TUI has not been fully validated with ploop by us yet - not sure if there are any serious issues or not. You could enable ploop in /etc/vz/vz.conf and give it a try - and give some feedback about it :)

> ope...@li...
> 12. november 2014 22:45
> Greetings,
>
> I've been reading a bit on ploop vs simfs. Does anyone have any
> experience with this on OpenNode and if so, what are the caveats,
> pit-falls or reasons for or against using ploop?
>
> Thanks,

Cheers, -- Andres Toomsalu, an...@op... |
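The vz.conf change suggested above can be scripted. A hedged sketch (the helper name `set_ploop_layout` is ours; on a real host the file is /etc/vz/vz.conf, and the change only affects containers created afterwards):

```shell
# Hypothetical helper: switch the default container layout from simfs
# to ploop in an OpenVZ global config file.
set_ploop_layout() {
    sed -i 's/^VE_LAYOUT=simfs/VE_LAYOUT=ploop/' "$1"
}

# Usage on a real host, before creating containers with the TUI:
#   set_ploop_layout /etc/vz/vz.conf
```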
From: <ope...@li...> - 2014-11-12 19:12:15
|
Greetings, I've been reading a bit on ploop vs simfs. Does anyone have any experience with this on OpenNode and if so, what are the caveats, pit-falls or reasons for or against using ploop? Thanks, -- -- Steven G. Spencer, Network Administrator KSC Corporate - The Kelly Supply Family of Companies Office 308-382-8764 Ext. 1131 Mobile 402-765-8010 |
From: <ope...@li...> - 2014-09-08 17:36:46
|
That's the case with OpenNode currently - it's more of a lightweight "lego" platform to build your customized solutions on top of, as regards higher-level management and orchestration. The OMS idea itself was nice - it was designed as a stateless controller with two-way sync, reflecting actual system state and (also manual) system changes, rather than being database-centric and non-coherent with reality. Sadly, the sync layer underneath was a bit error-prone and needs updating/stabilizing.

Actually, if you look deeper, OMS can do much more in the CLI (ssh session to port 6022) than through the WUI. Some CLI documentation exists in the wiki:

* https://support.opennodecloud.com/wiki/doku.php?id=usrdoc:oms:omsh
* https://support.opennodecloud.com/wiki/doku.php?id=usrdoc:oms:omshcommands

We are lacking proper docs - but there are some bits and pieces... As for debugging OMS: https://support.opennodecloud.com/wiki/doku.php?id=devdoc:oms:omsdebug

> ope...@li...
> 8. september 2014 19:52
> Andres,
>
> Well that is good enough for me. Thanks very much for the quick reply.
> We will either hold off running OMS for now or live with the bugs.
> Again, not a huge deal, as OpenNode is fully functional without it.
>
> Thanks again!
> Steve
>
> On 09/08/2014 10:44 AM, ope...@li... wrote:
>
> ------------------------------------------------------------------------------
> Want excitement?
> Manually upgrade your production database.
> When you want reliability, choose Perforce
> Perforce version control. Predictably reliable.
> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk
> _______________________________________________
> OpenNode-users mailing list
> Ope...@li...
> https://lists.sourceforge.net/lists/listinfo/opennode-users
>
> Andres Toomsalu
> 8. september 2014 18:44
> Hi Steven,
>
> The current situation with OMS is that its not anymore in active
> development - and in its current form it might need some tweaking in
> order to get it running - as we never released fully stable OMS for
> OpenNode 6. However we still have it running on couple of production
> installations - just we didnt had time or resources to stabilize and
> package OMS for general availability. Lot of issues were related to
> saltstack version we were using back then - and it hasnt been updated
> to latest saltstack - which might perform better now with OMS (but
> probably needs some dev work for update). We are working actively on
> OMS successor - but its currently only for early adopters with
> targeted dev projects - not sure when it reaches GA.
>
> So in short - I would not recommend to rely on OMS - unless you get it
> running :-) We have OMS running with WHMCS billing for simple Cloud
> Provider setup - but it probably needs a bit our support hours in
> order to replicate or package/stabilize for GA release - ie we dont
> have free resources for that unless its going through our commercial
> support channel.
>
> Or if there would be anybody from community who would be interested of
> updating and stabilizing OMS - we could provide some OMS internals
> know-how support in commentary form - in order to assist with that
> task. Hint: there are some OMS internals info in OpenNode wiki which
> might help.
>
> Regarding your OMS symptoms - I would say its probably OMS/Salt sync
> action with host - that got stuck for some reason. OMS VM restart does
> not help? salt-minion restart on OpenNode OS host?
>
> Regards,
> Andres Toomsalu, an...@op...
> http://www.opennodecloud.com
>
> ope...@li...
> 8. september 2014 18:01
> All,
>
> I'm new to the opennode list, but have been doing virtualization with
> Xen for more than 7 years. We are a Linux-only shop, and the low
> overhead of openvz containers was a draw for switching to that
> platform, and we chose opennode for its ease of implementation and use.
>
> We have our first opennode server up and running and I'm already
> extremely impressed with the performance and the command line tools
> are stellar. We don't really need the opennode manger web gui, but
> since it was available to install within opennode, we thought we would
> give that a try.
>
> We installed it and attached the VM's to the console and everything
> was fine for a time, then two things have happened in rapid succession
> to each other:
>
> * First, the opennode manager host machine just shows spinning as if
> trying to update in the VM listing, but the VM's still showed
> * Second, the opennode manager host machine shows spinning as if
> trying to update but now NO VM's are showing.
>
> All of the VM containers are up and working so it is not mission
> critical at all, but we would like to get the web gui working as well
> if we can.
>
> Has anyone else run into this? We /have/ updated the OMS via the
> command line:
>
> /opt/oms/update.sh && service oms restart
>
> but that has done nothing to fix the issue.
>
> Thanks,
>
> --
> Steven G. Spencer, Network Administrator
> KSC Corporate - The Kelly Supply Family of Companies
> Office 308-382-8764 Ext. 1131
> Mobile 402-765-8010 |
From: <ope...@li...> - 2014-09-08 16:54:24
|
Andres, Well that is good enough for me. Thanks very much for the quick reply. We will either hold off running OMS for now or live with the bugs. Again, not a huge deal, as OpenNode is fully functional without it. Thanks again! Steve On 09/08/2014 10:44 AM, ope...@li... wrote: > Hi Steven, > > The current situation with OMS is that its not anymore in active > development - and in its current form it might need some tweaking in > order to get it running - as we never released fully stable OMS for > OpenNode 6. However we still have it running on couple of production > installations - just we didnt had time or resources to stabilize and > package OMS for general availability. Lot of issues were related to > saltstack version we were using back then - and it hasnt been updated > to latest saltstack - which might perform better now with OMS (but > probably needs some dev work for update). We are working actively on > OMS successor - but its currently only for early adopters with > targeted dev projects - not sure when it reaches GA. > > So in short - I would not recommend to rely on OMS - unless you get it > running :-) We have OMS running with WHMCS billing for simple Cloud > Provider setup - but it probably needs a bit our support hours in > order to replicate or package/stabilize for GA release - ie we dont > have free resources for that unless its going through our commercial > support channel. > > Or if there would be anybody from community who would be interested of > updating and stabilizing OMS - we could provide some OMS internals > know-how support in commentary form - in order to assist with that > task. Hint: there are some OMS internals info in OpenNode wiki which > might help. > > Regarding your OMS symptoms - I would say its probably OMS/Salt sync > action with host - that got stuck for some reason. OMS VM restart does > not help? salt-minion restart on OpenNode OS host? 
> > Regards, > -- > <http://www.getpostbox.com>---------------------------------------------- > > Andres Toomsalu,an...@op... <mailto:an...@op...> > > http://www.opennodecloud.com <http://www.opennodecloud.com/> > > > >> ope...@li... >> <mailto:ope...@li...> >> 8. september 2014 18:01 >> All, >> >> I'm new to the opennode list, but have been doing virtualization with >> Xen for more than 7 years. We are a Linux-only shop, and the low >> overhead of openvz containers was a draw for switching to that >> platform, and we chose opennode for its ease of implementation and use. >> >> We have our first opennode server up and running and I'm already >> extremely impressed with the performance and the command line tools >> are stellar. We don't really need the opennode manger web gui, but >> since it was available to install within opennode, we thought we >> would give that a try. >> >> We installed it and attached the VM's to the console and everything >> was fine for a time, then two things have happened in rapid >> succession to each other: >> >> * First, the opennode manager host machine just shows spinning as if >> trying to update in the VM listing, but the VM's still showed >> * Second, the opennode manager host machine shows spinning as if >> trying to update but now NO VM's are showing. >> >> All of the VM containers are up and working so it is not mission >> critical at all, but we would like to get the web gui working as well >> if we can. >> >> Has anyone else run into this? We /have/ updated the OMS via the >> command line: >> >> /opt/oms/update.sh && service oms restart >> but that has done nothing to fix the issue. >> >> Thanks, >> >> -- >> -- >> Steven G. Spencer, Network Administrator >> KSC Corporate - The Kelly Supply Family of Companies >> Office 308-382-8764 Ext. 1131 >> Mobile 402-765-8010 >> ------------------------------------------------------------------------------ >> Want excitement? >> Manually upgrade your production database. 
>> When you want reliability, choose Perforce >> Perforce version control. Predictably reliable. >> http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk >> _______________________________________________ >> OpenNode-users mailing list >> Ope...@li... >> https://lists.sourceforge.net/lists/listinfo/opennode-users > > > > > ------------------------------------------------------------------------------ > Want excitement? > Manually upgrade your production database. > When you want reliability, choose Perforce > Perforce version control. Predictably reliable. > http://pubads.g.doubleclick.net/gampad/clk?id=157508191&iu=/4140/ostg.clktrk > > > _______________________________________________ > OpenNode-users mailing list > Ope...@li... > https://lists.sourceforge.net/lists/listinfo/opennode-users |
From: <ope...@li...> - 2014-09-08 16:16:15
|
Hi Steven,

The current situation with OMS is that it's no longer in active development, and in its current form it might need some tweaking to get it running, as we never released a fully stable OMS for OpenNode 6. However, we still have it running on a couple of production installations - we just didn't have the time or resources to stabilize and package OMS for general availability. A lot of the issues were related to the SaltStack version we were using back then; it hasn't been updated to the latest SaltStack, which might perform better now with OMS (but probably needs some dev work for the update). We are working actively on an OMS successor - but it's currently only for early adopters with targeted dev projects - not sure when it reaches GA.

So in short, I would not recommend relying on OMS - unless you get it running :-) We have OMS running with WHMCS billing for a simple cloud-provider setup, but it probably needs a bit of our support hours to replicate or package/stabilize for a GA release - i.e. we don't have free resources for that unless it goes through our commercial support channel.

Or, if there is anybody from the community who would be interested in updating and stabilizing OMS, we could provide some OMS-internals know-how support in commentary form to assist with that task. Hint: there is some OMS internals info in the OpenNode wiki which might help.

Regarding your OMS symptoms - I would say it's probably the OMS/Salt sync action with the host that got stuck for some reason. An OMS VM restart does not help? A salt-minion restart on the OpenNode OS host?

Regards, -- Andres Toomsalu, an...@op... http://www.opennodecloud.com

> ope...@li...
> 8. september 2014 18:01
> All,
>
> I'm new to the opennode list, but have been doing virtualization with
> Xen for more than 7 years. We are a Linux-only shop, and the low
> overhead of openvz containers was a draw for switching to that
> platform, and we chose opennode for its ease of implementation and use.
>
> We have our first opennode server up and running and I'm already
> extremely impressed with the performance and the command line tools
> are stellar. We don't really need the opennode manger web gui, but
> since it was available to install within opennode, we thought we would
> give that a try.
>
> We installed it and attached the VM's to the console and everything
> was fine for a time, then two things have happened in rapid succession
> to each other:
>
> * First, the opennode manager host machine just shows spinning as if
> trying to update in the VM listing, but the VM's still showed
> * Second, the opennode manager host machine shows spinning as if
> trying to update but now NO VM's are showing.
>
> All of the VM containers are up and working so it is not mission
> critical at all, but we would like to get the web gui working as well
> if we can.
>
> Has anyone else run into this? We /have/ updated the OMS via the
> command line:
>
> /opt/oms/update.sh && service oms restart
>
> but that has done nothing to fix the issue.
>
> Thanks,
>
> --
> Steven G. Spencer, Network Administrator
> KSC Corporate - The Kelly Supply Family of Companies
> Office 308-382-8764 Ext. 1131
> Mobile 402-765-8010 |
From: <ope...@li...> - 2014-09-08 15:01:19
|
All,

I'm new to the opennode list, but have been doing virtualization with Xen for more than 7 years. We are a Linux-only shop, and the low overhead of openvz containers was a draw for switching to that platform, and we chose opennode for its ease of implementation and use.

We have our first opennode server up and running and I'm already extremely impressed with the performance, and the command line tools are stellar. We don't really need the opennode manager web gui, but since it was available to install within opennode, we thought we would give that a try.

We installed it and attached the VM's to the console and everything was fine for a time; then two things happened in rapid succession:

* First, the opennode manager host machine just shows spinning as if trying to update in the VM listing, but the VM's still showed
* Second, the opennode manager host machine shows spinning as if trying to update, but now NO VM's are showing.

All of the VM containers are up and working, so it is not mission critical at all, but we would like to get the web gui working as well if we can.

Has anyone else run into this? We /have/ updated the OMS via the command line:

/opt/oms/update.sh && service oms restart

but that has done nothing to fix the issue.

Thanks,

--
Steven G. Spencer, Network Administrator
KSC Corporate - The Kelly Supply Family of Companies
Office 308-382-8764 Ext. 1131
Mobile 402-765-8010
|
From: <ope...@li...> - 2014-07-24 13:19:24
|
This stable OS release has been rebased to CentOS 6.5 and introduces Openvswitch as the host bridge networking stack, with support for VETH-based VLANs and network bandwidth throttling.

The OpenNode OS 6 Update 3 ISO installer can be downloaded from the OpenNode SourceForge project page: https://sourceforge.net/projects/opennode/files/OpenNode_6_Update_3/

Changelog:

* Rebased OpenNode base system to CentOS 6.5 - please see the RHEL/CentOS 6.5 release notes: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.5_Release_Notes/
* Latest RHEL6 OpenVZ kernel (042stab092.2) with updated OpenVZ and KVM virtualization support
* Updated vzctl (4.7.2) and ploop (1.11) package versions
* Legacy linux bridge kernel module replaced with Openvswitch (OVS), which has numerous new features - please see http://openvswitch.org/
* Standard 802.1Q VLANs and network bandwidth throttling support on the VM VETH interface through the custom ON_NETIF parameter in the VEID.conf file - please see VETH configuration examples here: https://support.opennodecloud.com/wiki/doku.php?id=usrdoc:os:network&#create_veth_interfaces

For updating existing OpenNode OS 6 installations please run:

yum update
service libvirt restart

NB! If you are currently running a vzkernel version older than 042stab061.1 - you must install the newer vzkernel package first and then run yum update for the OS update, like this:

yum install vzkernel
reboot
yum update
service libvirt restart

NB! The VM stop/start default behaviour on host reboot has now changed to suspend/resume mode. It can be reverted back to stop/start mode by issuing:

sed -i '/^VE_STOP_MODE=/ c VE_STOP_MODE=stop' /etc/vz/vz.conf

Wishing nice summer vacations,
Team OpenNode
|
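The "older than 042stab061.1" check in the announcement above can be sketched in Python. This is only an illustration of how vzkernel version strings order; `vzkernel_key` and `needs_kernel_first` are made-up names, not part of opennode-tui or vzctl.

```python
import re

def vzkernel_key(version):
    """Turn a vzkernel version string such as '042stab061.1' into a
    tuple of integers that sorts in release order."""
    m = re.match(r"(\d+)stab(\d+)\.(\d+)$", version)
    if not m:
        raise ValueError("unrecognized vzkernel version: %s" % version)
    return tuple(int(g) for g in m.groups())

def needs_kernel_first(running, threshold="042stab061.1"):
    """True when the running vzkernel is older than the threshold,
    meaning 'yum install vzkernel' plus a reboot must come before
    the general 'yum update'."""
    return vzkernel_key(running) < vzkernel_key(threshold)
```

On a host, the running version could be taken from `uname -r` (trimming any trailing suffix) before deciding which update path to follow.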
From: <ope...@li...> - 2013-10-18 16:29:41
|
TurnKey Linux Appliance Library contains 100+ Debian-based VM appliances that integrate and polish the very best of open source software into ready-to-use solutions. They are now also available as OpenNode OpenVZ container templates.

We have been working closely with the TurnKey Linux team (www.turnkeylinux.org) in order to make the TKL virtual appliance library available as OpenNode OpenVZ container templates - downloadable and deployable directly from the OpenNode TUI management utility. The TurnKey Linux appliance library provides the most popular pre-packaged open-source solutions - which are very easy to deploy - together with appliance backup and migration tools. TKL appliances are easy to maintain and are auto-updated daily with security patches.

There are two ways to activate the TKL appliance library on an OpenNode host:

* Update the opennode-tui rpm package to version 20131017200837 or later:

yum --enablerepo=opennode-dev update opennode-tui

* or add the TKL appliance repo description manually to the /etc/opennode/opennode-tui.conf file - like this:

<example>
[apps-ovz-tkl-repo]
url = sourceforge.net/projects/turnkeylinux/files/opennode/
type = openvz
name = TurnKey Linux Appliances

[general]
repo-groups = default-kvm, default-openvz, apps-ovz-tkl
</example>

Please see more detailed info about the TurnKey Linux Appliance Library at: www.turnkeylinux.org

Cheers,
--
----------------------------------------------
Team OpenNode
http://www.opennodecloud.com
|
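The repo snippet above is plain INI, so it can be sanity-checked with Python's standard configparser before editing the live file. Whether opennode-tui itself parses the file this way is an assumption; `parse_repo_conf` is an illustrative helper, not part of the tool.

```python
import configparser

# The repo definition quoted above, as it would appear in
# /etc/opennode/opennode-tui.conf.
REPO_SNIPPET = """\
[apps-ovz-tkl-repo]
url = sourceforge.net/projects/turnkeylinux/files/opennode/
type = openvz
name = TurnKey Linux Appliances

[general]
repo-groups = default-kvm, default-openvz, apps-ovz-tkl
"""

def parse_repo_conf(text):
    """Parse the INI-style snippet; return the active repo groups and
    the settings of every non-[general] repo section."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    groups = [g.strip() for g in cp["general"]["repo-groups"].split(",")]
    repos = {name: dict(cp[name]) for name in cp.sections() if name != "general"}
    return groups, repos
```

A quick check like `parse_repo_conf(open("/etc/opennode/opennode-tui.conf").read())` would catch a malformed section header or a missing repo-groups entry before the TUI trips over it.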
From: <ope...@li...> - 2013-09-22 19:18:07
|
OpenNode has made its way into the Web Summit 2013 "featured early stage startups" list and we are going to have an exhibition stand at the conference. You are very welcome to drop by and meet us face to face. More info about Web Summit: www.websummit.net

Despite the news/release silence during the summer, we have actually been pretty busy behind the scenes - so here is a shortlist of what we have been up to:

* stabilizing OpenNode Management Server (OMS) - our central management controller with an ajax web UI - and adding basic QEMU/KVM VM management into the mix (stable OMS release pending)
* implementing popular WHMCS billing software (www.whmcs.com) integration with OMS, plus WHMCS customizations - in order to provide a full client/service portal solution on top of OpenNode for cloud service providers/hosters - please see screenshots here: http://opennodecloud.com/about/whmcs-integration/
* preparing for the OpenNode OS Update 3 release - which replaces the linux bridge with the Openvswitch networking stack (openvswitch.org) and provides a foundation for VLAN support and SDN networking (available from the opennode-dev repo for testers)
* researching possible Software Defined Networking (SDN) solutions for inclusion into future releases of OpenNode OS - still waiting for some maturity from open-source SDN solutions before inclusion
* researching GlusterFS for possible inclusion into OpenNode OS - mainly waiting for libgfapi-based GlusterFS storage type support to land in the RHEL6 QEMU backport
* moving VM template/appliance downloads to sourceforge.net mirrors and preparing for TurnKey Linux appliance library (www.turnkeylinux.org) inclusion/support

Kind regards,
Team OpenNode
|
From: <ope...@li...> - 2013-09-22 19:07:10
|
Launching Amazon EC2 instances with OpenNode OS is now a breeze, as we have released official OpenNode AMIs. Please look up the official AMI IDs for all EC2 regions here: https://support.opennodecloud.com/wiki/doku.php?id=usrdoc:os:aws

Launching OpenNode OS inside Amazon EC2 Cloud instances enables you to:

* Partition large EC2 instances further - launch multiple linux/application containers inside EC2 instances with additional resource limits/guarantees
* Package your applications into standard linux containers and run them everywhere
* Run and migrate containers across private and public cloud
* Enjoy easy management - use a single set of tools

Team OpenNode
|
From: <ope...@li...> - 2013-04-25 08:32:27
|
This stable OS release has been rebased to CentOS 6.4 and includes numerous updated packages that provide some new features as well.

The OpenNode OS 6 Update 2 ISO installer can be downloaded from the OpenNode SourceForge project page: https://sourceforge.net/projects/opennode/files/OpenNode_6_Update_2/

Release highlights:

* Rebased to the CentOS 6.4 release (newer qemu/kvm, libvirt, etc) - please see the RHEL/CentOS 6.4 release notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.4_Release_Notes/
* Includes the RHEL6 042stab076.5 vzkernel
* vzctl 4.2 and ploop 1.6 packages
* Basic veth management support through the custom ON_NETIF parameter in the VEID.conf file (TUI support is in the works)
* Bind mounts support in TUI (for OpenVZ)
* Disk IO priorities support in TUI (for OpenVZ)
* NB! QEMU/KVM VM templates disk controller/bus type is hard-coded to virtio from now on (was IDE) - please keep that in mind when creating KVM VM templates (guests must support virtio - i.e. install virtio drivers in Windows before creating a template from it; linux26 kernels include virtio support by default)
* VM template (metadata) editing support in TUI (change template name, requirements)
* The ISO installer won't create a separate LVM logical volume for the vz filesystem anymore - instead /vz is symlinked under /storage/local/vz from now on (allows using free disk space more efficiently)
* The Func management framework has been replaced by Saltstack - http://saltstack.com/community.html (required by the current OMS v2)
* Included the system-config-firewall package (GUI mode can be run over ssh X forwarding) - for managing iptables the RHEL way if needed - more info: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sect-Security_Guide-Firewalls-Basic_Firewall_Configuration.html
* Included koan for QEMU/KVM VM network kickstarting/installs with Cobbler

For updating existing OpenNode OS 6 installations please run:

yum update
service libvirt restart

NB! If you are currently running a vzkernel version older than 042stab061.1 - you must install the newer vzkernel package first and then run yum update for the OS update, like this:

yum install vzkernel
reboot
yum update
service libvirt restart

About basic veth management support

It allows describing veth device parameters inside the VEID.conf file and is able to auto-create the container's OS network configuration files on VM start. There is no opennode-tui support yet - this is still work in progress. Manual configuration can be done by adding an ON_NETIF parameter string into the VEID.conf file.

Parameter example inside the VEID.conf file (for manual usage):

ON_NETIF="ifname=eth0,ipaddr=192.168.100.10,mask=255.255.255.0,gw=192.168.100.1,managed=yes,default=yes;ifname=eth1,dhcp=yes,managed=yes"

This is currently limited to creating (overwriting) the container's (veth) network device configuration. You must set "managed=yes" in the ON_NETIF parameter string in order to activate configuration creation for a particular device. "default=yes" sets gw as the container VM's default gateway. Also, you must activate veth devices for the container with "vzctl set $VEID --netif_add ethX --save" before you can use this feature.

Wishing nice springtime,
--
----------------------------------------------
Team OpenNode
|
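The ON_NETIF format described above (interfaces separated by ';', settings within an interface as ','-separated key=value pairs) can be sketched as a small parser. This is only an illustration of the documented syntax, not the actual opennode-tui implementation; `parse_on_netif` is a hypothetical name.

```python
def parse_on_netif(value):
    """Split an ON_NETIF parameter string into one settings dict per
    interface: interfaces are ';'-separated, and settings within an
    interface are ','-separated 'key=value' pairs."""
    devices = []
    for chunk in value.split(";"):
        chunk = chunk.strip()
        if not chunk:
            continue
        settings = {}
        for pair in chunk.split(","):
            key, _, val = pair.strip().partition("=")
            settings[key] = val
        devices.append(settings)
    return devices

# The example value from the VEID.conf snippet above.
EXAMPLE = ("ifname=eth0,ipaddr=192.168.100.10,mask=255.255.255.0,"
           "gw=192.168.100.1,managed=yes,default=yes;"
           "ifname=eth1,dhcp=yes,managed=yes")
```

Running `parse_on_netif(EXAMPLE)` yields two dicts, one per veth device - eth0 statically configured and marked as the default gateway device, eth1 on DHCP - which mirrors how the host would generate the container's network configuration files.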
From: <ope...@li...> - 2013-04-05 22:23:08
|
As we are preparing for the stable OpenNode 6 Update 2 release, we have put out the ISO installer beta1 for public download and testing.

The OpenNode 6 Update 2 ISO installer beta1 can be downloaded from the OpenNode SourceForge project page: https://sourceforge.net/projects/opennode/files/OpenNode_6_Update_2/

Release highlights:

* Rebased to the CentOS 6.4 release (newer qemu/kvm, libvirt, etc) - please see the RHEL/CentOS 6.4 release notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.4_Release_Notes/
* Includes the RHEL6 042stab076.5 vzkernel
* vzctl 4.2 and ploop 1.6 packages
* Basic veth management support through the custom ON_NETIF parameter in the VEID.conf file (TUI support is in the works)
* Bind mounts support in TUI (for OpenVZ)
* Disk IO priorities support in TUI (for OpenVZ)
* NB! QEMU/KVM VM templates disk controller/bus type is hard-coded to virtio from now on (was IDE) - please keep that in mind when creating KVM VM templates (guests must support virtio - i.e. install virtio drivers in Windows before creating a template from it; linux26 kernels include virtio support by default)
* VM template (metadata) editing support in TUI (change template name, requirements)
* The ISO installer won't create a separate LVM logical volume for the vz filesystem anymore - instead /vz is symlinked under /storage/local/vz from now on (allows using free disk space more efficiently)
* The Func management framework has been replaced by Saltstack - http://saltstack.com/community.html (required by the current OMS)
* Included the system-config-firewall package (GUI mode can be run over ssh X forwarding) - for managing iptables the RHEL way if needed - more info: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sect-Security_Guide-Firewalls-Basic_Firewall_Configuration.html
* Included koan for QEMU/KVM VM network kickstarting/installs with Cobbler

For updating existing OpenNode 6 installations please run yum update against the opennode-test repository (until we move its contents to the stable repo).

NB! If you are currently running a vzkernel version older than 042stab061.1 - you must update the vzkernel package first and then run yum update for the OS update, like this:

yum update vzkernel
yum update

About basic veth management support

It allows describing veth device parameters inside the VEID.conf file and is able to auto-create the container's OS network configuration files on VM start. There is no opennode-tui support yet - this is still work in progress. Manual configuration can be done by adding an ON_NETIF parameter string into the VEID.conf file.

Parameter example inside the VEID.conf file (for manual usage):

ON_NETIF="ifname=eth0,ipaddr=192.168.100.10,mask=255.255.255.0,gw=192.168.100.1,managed=yes,default=yes;ifname=eth1,dhcp=yes,managed=yes"

This is currently limited to creating (overwriting) the container's (veth) network device configuration. You must set "managed=yes" in the ON_NETIF parameter string in order to activate configuration creation for a particular device. "default=yes" sets gw as the container VM's default gateway. Also, you must activate veth devices for the container with "vzctl set $VEID --netif_add ethX --save" before you can use this feature.

Happy betatesting,
Team OpenNode
--
----------------------------------------------
Andres Toomsalu, an...@op...
Founder, OpenNode LLC
http://www.opennodecloud.com
|