
#4040 Command opsaddimage failed silently with exit status 0 and no error message

Milestone: 2.8.4
Status: pending
Owner: Ling
Labels: general
Priority: 5
Updated: 2015-02-22
Created: 2014-04-03
Creator: GONG Jie
Private: No

This is an OpenStack baremetal-related problem. The testing environment is RHEL 6.4 on x86-64.

The OpenStack all-in-one controller was built following the Red Hat RDO Quickstart guide:
http://openstack.redhat.com/Quickstart

yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
yum install -y openstack-packstack
packstack --allinone

source ~/keystonerc_admin

After that, the controller was modified based on the xCAT document below.
https://sourceforge.net/apps/mediawiki/xcat/index.php?title=Using_xCAT_in_OpenStack_Baremetal_Node_Deployment

The following RPM packages were installed and used.

[root@dx360m4n05 ~(keystone_admin)]# rpm -qa | grep openstack | sort
kernel-2.6.32-358.123.2.openstack.el6.x86_64
kernel-firmware-2.6.32-358.123.2.openstack.el6.noarch
openstack-ceilometer-alarm-2013.2.2-1.el6.noarch
openstack-ceilometer-api-2013.2.2-1.el6.noarch
openstack-ceilometer-central-2013.2.2-1.el6.noarch
openstack-ceilometer-collector-2013.2.2-1.el6.noarch
openstack-ceilometer-common-2013.2.2-1.el6.noarch
openstack-ceilometer-compute-2013.2.2-1.el6.noarch
openstack-cinder-2013.2.2-1.el6.noarch
openstack-dashboard-2013.2.2-1.el6.noarch
openstack-glance-2013.2.2-2.el6.noarch
openstack-keystone-2013.2.2-1.el6.noarch
openstack-neutron-2013.2.2-2.el6.noarch
openstack-neutron-openvswitch-2013.2.2-2.el6.noarch
openstack-nova-api-2013.2.2-1.el6.noarch
openstack-nova-cert-2013.2.2-1.el6.noarch
openstack-nova-common-2013.2.2-1.el6.noarch
openstack-nova-compute-2013.2.2-1.el6.noarch
openstack-nova-conductor-2013.2.2-1.el6.noarch
openstack-nova-console-2013.2.2-1.el6.noarch
openstack-nova-novncproxy-2013.2.2-1.el6.noarch
openstack-nova-scheduler-2013.2.2-1.el6.noarch
openstack-packstack-2013.2.1-0.34.dev989.el6.noarch
openstack-selinux-0.1.3-2.el6ost.noarch
openstack-swift-1.10.0-2.el6.noarch
openstack-swift-account-1.10.0-2.el6.noarch
openstack-swift-container-1.10.0-2.el6.noarch
openstack-swift-object-1.10.0-2.el6.noarch
openstack-swift-plugin-swift3-1.7-1.el6.noarch
openstack-swift-proxy-1.10.0-2.el6.noarch
openstack-utils-2013.2-2.el6.noarch
python-django-openstack-auth-1.1.2-1.el6.noarch

The nova configuration file was modified as follows.

[root@dx360m4n05 ~(keystone_admin)]# grep -v '^$' /etc/nova/nova.conf | grep -v '^#'
[DEFAULT]
notification_driver=ceilometer.compute.nova_notifier
notification_driver=nova.openstack.common.notifier.rpc_notifier
state_path=/var/lib/nova
enabled_apis=ec2,osapi_compute,metadata
ec2_listen=0.0.0.0
osapi_compute_listen=0.0.0.0
osapi_compute_workers=32
metadata_listen=0.0.0.0
service_down_time=60
instance_usage_audit_period=hour
rootwrap_config=/etc/nova/rootwrap.conf
auth_strategy=keystone
use_forwarded_for=False
service_neutron_metadata_proxy=True
neutron_metadata_proxy_shared_secret=7b1785a3a2c04431
neutron_default_tenant_id=default
novncproxy_host=0.0.0.0
novncproxy_port=6080
instance_usage_audit=True
reserved_host_memory_mb=0
glance_api_servers=9.114.34.243:9292
network_api_class=nova.network.neutronv2.api.API
metadata_host=9.114.34.243
neutron_url=http://9.114.34.243:9696
neutron_url_timeout=30
neutron_admin_username=neutron
neutron_admin_password=2a02ac0eafac4ad0
neutron_admin_tenant_name=services
neutron_region_name=RegionOne
neutron_admin_auth_url=http://9.114.34.243:35357/v2.0
neutron_auth_strategy=keystone
neutron_ovs_bridge=br-int
neutron_extension_sync_interval=600
security_group_api=neutron
lock_path=/var/lib/nova/tmp
debug=False
verbose=True
use_syslog=False
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=9.114.34.243
qpid_port=5672
qpid_username=guest
qpid_password=guest
qpid_heartbeat=60
qpid_protocol=tcp
qpid_tcp_nodelay=True
scheduler_host_manager=nova.scheduler.baremetal_host_manager.BaremetalHostManager
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.0
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter
compute_driver=xcat.openstack.baremetal.driver.xCATBareMetalDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
libvirt_type=kvm
libvirt_inject_partition=-1
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
novncproxy_base_url=http://9.114.34.243:6080/vnc_auto.html
vncserver_listen=9.114.34.243
vncserver_proxyclient_address=9.114.34.243
vnc_enabled=True
volume_api_class=nova.volume.cinder.API
qpid_reconnect_interval=0
qpid_reconnect_interval_min=0
qpid_reconnect=True
sql_connection=mysql://nova:414c4f1cd30e4b4e@9.114.34.243/nova
qpid_reconnect_timeout=0
image_service=nova.image.glance.GlanceImageService
logdir=/var/log/nova
qpid_reconnect_interval_max=0
qpid_reconnect_limit=0
osapi_volume_listen=0.0.0.0
connection_type=libvirt
[hyperv]
[zookeeper]
[osapi_v3]
[conductor]
[keymgr]
[cells]
[database]
connection=mysql://nova:414c4f1cd30e4b4e@9.114.34.243/nova
[image_file_url]
[baremetal]
sql_connection=mysql://admin:cluster@9.114.34.243/nova_bm?charset=utf8
instance_type_extra_specs=cpu_arch:x86_64
tftp_root=/tftpboot
[rpc_notifier2]
[matchmaker_redis]
[ssl]
[trusted_computing]
[upgrade_levels]
[matchmaker_ring]
[vmware]
[spice]
[keystone_authtoken]
admin_tenant_name=services
admin_user=nova
admin_password=b7acfbb13ae04d76
auth_host=9.114.34.243
auth_port=35357
auth_protocol=http
auth_uri=http://9.114.34.243:5000/
[xcat]
deploy_timeout=0
deploy_checking_interval=10
reboot_timeout=0
reboot_checking_interval=5

It seems the command opsaddimage failed silently with exit status 0 and no error message.

[root@dx360m4n05 ~(keystone_admin)]# glance image-list
+--------------------------------------+--------+-------------+------------------+----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------+-------------+------------------+----------+--------+
| a055c894-ded2-47ae-85d5-03eea72fc8d5 | cirros | qcow2       | bare             | 13147648 | active |
+--------------------------------------+--------+-------------+------------------+----------+--------+
[root@dx360m4n05 ~(keystone_admin)]# opsaddimage compute_1408a -c dx360m4n05.pok.stglabs.ibm.com
[root@dx360m4n05 ~(keystone_admin)]# echo $?
0
[root@dx360m4n05 ~(keystone_admin)]# glance image-list
+--------------------------------------+--------+-------------+------------------+----------+--------+
| ID                                   | Name   | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------+-------------+------------------+----------+--------+
| a055c894-ded2-47ae-85d5-03eea72fc8d5 | cirros | qcow2       | bare             | 13147648 | active |
+--------------------------------------+--------+-------------+------------------+----------+--------+
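Until the exit status can be trusted, the only reliable signal is whether the image actually shows up in glance afterwards. The sketch below is a hypothetical workaround, not xCAT code: `image_exists` greps a captured listing so the example is self-contained, but on a live controller you would feed it `glance image-list` output instead.

```shell
#!/bin/sh
# Hypothetical workaround: verify the effect of opsaddimage instead of
# trusting its (misleading) exit status of 0.
image_exists() {
    # usage: image_exists <image-name> <listing-file>
    grep -qw "$1" "$2"
}

listing=$(mktemp)
printf 'cirros\n' > "$listing"    # simulated `glance image-list` output

# opsaddimage compute_1408a -c dx360m4n05.pok.stglabs.ibm.com   # real call
if image_exists compute_1408a "$listing"; then
    echo "image registered"
else
    echo "opsaddimage did not register the image" >&2
fi
rm -f "$listing"
```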

Discussion

  • GONG Jie

    GONG Jie - 2014-04-03
    • summary: Command opsaddimage failed silently with exit status 0 no error message --> Command opsaddimage failed silently with exit status 0 and no error message
    • Description has changed:

    Diff:

    --- old
    +++ new
    @@ -159,7 +159,7 @@
         reboot_timeout=0
         reboot_checking_interval=5
    
    -It seems the command opsaddimage failed silently with exit status 0 no error message.
    +It seems the command opsaddimage failed silently with exit status 0 and no error message.
    
         [root@dx360m4n05 ~(keystone_admin)]# glance image-list
         +--------------------------------------+--------+-------------+------------------+----------+--------+
    
     
  • GONG Jie

    GONG Jie - 2014-04-03
    • Description has changed:

    Diff:

    --- old
    +++ new
    @@ -1,6 +1,7 @@
     This is openstack baremetal related problem. The testing environment is RHEL6.4 on x86-64.
    
     The OpenStack all-in-one controllor was rebuit based on the Red Hat RDO Quickstart guide.
    +http://openstack.redhat.com/Quickstart
    
         :::bash
         yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
    
     
  • Ling

    Ling - 2014-04-03
    • status: open --> pending
     
  • Ling

    Ling - 2014-04-03

    There are two problems:

    1. The ~/openrc file does not contain the correct info. I have updated it for you.
      All the OpenStack commands that xCAT calls will source ~/openrc first to set up the environment variables.
    2. The command did not return the correct return code. I have fixed that in the code and have updated /opt/xcat/lib/perl/xCAT_plugin/openstack.pm on your management node.

    I have also added a verbose flag -V that you can use to get more details.
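
    The second fix can be sketched generically. This is a hypothetical shell illustration, not the actual Perl change in openstack.pm: the wrapper runs a command, reports any failure on stderr, and returns the same status, so `echo $?` no longer shows a misleading 0.

```shell
#!/bin/sh
# Hypothetical sketch of the fix: run a command and propagate its real
# exit status instead of swallowing it. `true`/`false` stand in for the
# OpenStack calls opsaddimage makes internally.
run_wrapped() {
    "$@"
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "ERROR: '$*' failed with exit status $rc" >&2
    fi
    return "$rc"    # callers now see the real status in $?
}

run_wrapped true && echo "ok"
run_wrapped false 2>/dev/null || echo "caught failure"
```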

     
  • GONG Jie

    GONG Jie - 2014-04-04

    It seems all the xCAT calls rely on the file ~/openrc. I don't think that is a good idea. It would be great if all the commands could use the environment variables already present in the user's context, just like the OpenStack commands themselves do.
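
    The suggested behaviour could look like this sketch (`load_os_credentials` is a hypothetical helper, not an xCAT function; the variable names follow the standard OS_* convention): prefer credentials already exported in the caller's environment, and only fall back to sourcing an openrc file when they are absent.

```shell
#!/bin/sh
# Hypothetical helper: use existing OS_* environment variables if set,
# otherwise fall back to sourcing an openrc file (path overridable via
# $OPENRC so the fallback can be pointed elsewhere).
load_os_credentials() {
    if [ -n "$OS_USERNAME" ] && [ -n "$OS_AUTH_URL" ]; then
        return 0                      # caller's environment wins
    fi
    rcfile="${OPENRC:-$HOME/openrc}"
    if [ -r "$rcfile" ]; then
        # shellcheck disable=SC1090
        . "$rcfile"
        return 0
    fi
    echo "no OpenStack credentials found (set OS_* or provide $rcfile)" >&2
    return 1
}
```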

     
  • Ling

    Ling - 2014-04-04

    GONG Jie, great suggestion! I have checked in new fixes as per your suggestion, and I have put the new openstack.pm file on your xCAT MN. I will also modify the doc to remove the openrc requirement.
    Thanks.

     