INFO nova.compute.manager: updating host status
I followed the manual installation according to the Cactus document. If anyone has resolved this issue it would be helpful. The "euca-authorize -P icmp -t -1:-1 default" step gives "[Errno 111] Connection refused".
"euca-authorize -P icmp -t -1:-1 default" step gives"[Errno 111] Connection refused" .
Another possibility is something's not quite right with your message queue settings.
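One way to sanity-check that is to compare the rabbit settings across nodes. A small sketch, assuming the options live under [oslo_messaging_rabbit] in each node's nova.conf; the paths in the usage example are assumptions:

```python
import configparser

RABBIT_OPTIONS = ("rabbit_host", "rabbit_userid", "rabbit_password")

def rabbit_settings(path):
    """Read the rabbit-related options from one node's nova.conf."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return {opt: cfg.get("oslo_messaging_rabbit", opt, fallback=None)
            for opt in RABBIT_OPTIONS}

def mismatches(controller_conf, volume_conf):
    """Return the options whose values differ between the two nodes."""
    a = rabbit_settings(controller_conf)
    b = rabbit_settings(volume_conf)
    return {opt: (a[opt], b[opt])
            for opt in RABBIT_OPTIONS if a[opt] != b[opt]}

# Example (paths are illustrative):
# print(mismatches("/etc/nova/nova.conf", "/tmp/volume-node-nova.conf"))
```

An empty dict means the two nodes agree on the message-queue settings; anything else is a candidate cause.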
Main thing here would again be name resolution, and that your message queue is actually matched up (i.e. the settings on the controller match those on your volume node).

    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/oslo/rootwrap/cmd.py", line 107, in main
    filters = wrapper.load_filters(config.filters_path)
  File "/usr/lib/python2.7/dist-packages/oslo/rootwrap/wrapper.py", line 119, in load_filters
    for (name, value) in filterconfig.items("Filters"):
  File "/usr/lib/python2.7/ConfigParser.py", line 347, in items
    raise NoSectionError(section)
ConfigParser.NoSectionError

I had another controller set up, with which I could move forward. Thanks for the suggestions.

I am having the same issue. I followed the manual installation according to the Cactus document. If anyone has resolved this issue it would be helpful.

security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = firewall.NoopFirewallDriver

[oslo_messaging_rabbit]
rabbit_host = 192.168.100.2
rabbit_userid = openstack
rabbit_password = openstack

[keystone_authtoken]
auth_uri =
auth_url =
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = nova

dhcpbridge_flagfile=/etc/nova/
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
libvirt_use_virtio_for_bridges=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/
enabled_apis=osapi_compute,metadata

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.100.3
novncproxy_base_url =
host = 192.168.100.2

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[neutron]
url =
auth_url =
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

[cinder]
os_region_name = RegionOne

# /etc/nova/
[DEFAULT]
compute_driver=libvirt.

After configuring the compute node I could see that the hypervisor list is empty. Please find the above details as per my observation and suggest any possible solution.

It should be the same for all euca- commands, so it is strange that it isn't working. Are you running from the same place you ran euca-run-instances etc.?

Both machines are in the same subnet and are able to ping each other.

Could you indicate whether it was migrate or resize that caused the error, and whether any other commands worked. It looks like pylxd is failing to get a client connection to LXD as part of the error (about half way up), so it's not clear whether it's the migration or the resize that's failing, or something more fundamental.
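On the rootwrap traceback earlier in the thread: NoSectionError raised from items("Filters") means oslo.rootwrap read a file under filters_path (e.g. /etc/nova/rootwrap.d/) that has no [Filters] section header, so a stray, empty, or truncated filter file will abort nova-rootwrap. As a sketch, each filter file is an INI file shaped like this (the specific filter entry is illustrative, not taken from this thread):

```ini
[Filters]
# Each entry is: name: FilterClass, command, run-as-user
chown: CommandFilter, chown, root
```

Checking every file in the rootwrap.d directory for that section header is usually enough to find the offender.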