When libvirt loses a storage pool, for example because libvirt was restarted, the KVM Agent
re-creates the storage pool in libvirt.
The object returned internally was missing essential information such as the sourceDir
(i.e. the Ceph pool), the monitor IPs and the cephx credentials.
Because of this, the first operation on the newly created pool would fail; all operations
afterwards would succeed.
(cherry picked from commit 48899e4c81)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
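For context, a minimal sketch (not the agent's code) of the information a re-defined RBD pool has to carry so that the first operation already works; the pool name, monitor address, user and secret UUID below are placeholders, and storagePoolDefineXML/create are the libvirt-java calls such a definition would be handed to.

```java
import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;

public class RbdPoolRecreateSketch {

    // Build a libvirt RBD pool definition that carries the Ceph pool name,
    // a monitor address and the cephx secret UUID. All values are placeholders.
    static String rbdPoolXml(String uuid, String cephPool, String monHost,
                             String authUser, String secretUuid) {
        return "<pool type='rbd'>"
             +   "<name>" + uuid + "</name>"
             +   "<uuid>" + uuid + "</uuid>"
             +   "<source>"
             +     "<host name='" + monHost + "' port='6789'/>"
             +     "<name>" + cephPool + "</name>"
             +     "<auth username='" + authUser + "' type='ceph'>"
             +       "<secret uuid='" + secretUuid + "'/>"
             +     "</auth>"
             +   "</source>"
             + "</pool>";
    }

    public static void main(String[] args) throws LibvirtException {
        Connect conn = new Connect("qemu:///system");
        String xml = rbdPoolXml("pool-uuid-placeholder", "cloudstack", "10.0.0.1",
                "admin", "secret-uuid-placeholder");
        // Define and start the pool again; with the source details present,
        // the very first operation on the pool can succeed.
        StoragePool pool = conn.storagePoolDefineXML(xml, 0);
        pool.create(0);
    }
}
```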
Look up the zone_id field in the legacy_zones table when searching the table for a legacy zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
(cherry picked from commit 71f76edf71)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/dao/LegacyZoneDaoImpl.java
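A hedged sketch of the lookup described above, in CloudStack's usual SearchBuilder/DAO style; the class and method names and the LegacyZoneVO accessor are assumptions for illustration, not the exact code in LegacyZoneDaoImpl.

```java
import com.cloud.hypervisor.vmware.LegacyZoneVO;
import com.cloud.utils.db.GenericDaoBase;
import com.cloud.utils.db.SearchBuilder;
import com.cloud.utils.db.SearchCriteria;
import com.cloud.utils.db.SearchCriteria.Op;

// Illustrative only; the real class is LegacyZoneDaoImpl in the vmware plugin.
public class LegacyZoneLookupSketch extends GenericDaoBase<LegacyZoneVO, Long> {

    private final SearchBuilder<LegacyZoneVO> zoneIdSearch;

    public LegacyZoneLookupSketch() {
        // Search on the zone_id column, not the primary key, so the table is
        // queried by the zone the legacy entry maps to.
        zoneIdSearch = createSearchBuilder();
        zoneIdSearch.and("zoneId", zoneIdSearch.entity().getZoneId(), Op.EQ);
        zoneIdSearch.done();
    }

    // Assumed helper name; returns the legacy zone row for a given zone id.
    public LegacyZoneVO findByZoneId(Long zoneId) {
        SearchCriteria<LegacyZoneVO> sc = zoneIdSearch.create();
        sc.setParameters("zoneId", zoneId);
        return findOneBy(sc);
    }
}
```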
Also increased the RAM size of the internal load balancer VM service offering.
Backported from a fix by Harikrishna Patnala <harikrishna.patnala@citrix.com>
https://reviews.apache.org/r/17941/
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This should fix our inability to gather CPU statistics on hypervisors running Ubuntu
releases newer than 12.04.
(cherry picked from commit 69ee01af9d)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
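The patch itself lives in LibvirtComputingResource; purely as an illustration of the statistic involved, here is a self-contained sketch of computing host CPU utilization from /proc/stat (standard field layout assumed), independent of the Ubuntu release. It is not the actual fix.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class HostCpuStatSketch {

    // Returns { total jiffies, idle jiffies } from the aggregate "cpu " line.
    private static long[] readCpuLine() throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/stat"))) {
            if (line.startsWith("cpu ")) {
                String[] f = line.trim().split("\\s+");
                long total = 0;
                for (int i = 1; i < f.length; i++) {
                    total += Long.parseLong(f[i]);
                }
                long idle = Long.parseLong(f[4]); // user nice system [idle] ...
                return new long[] { total, idle };
            }
        }
        throw new IOException("no aggregate cpu line in /proc/stat");
    }

    public static void main(String[] args) throws Exception {
        long[] first = readCpuLine();
        Thread.sleep(1000);
        long[] second = readCpuLine();
        long totalDelta = second[0] - first[0];
        long idleDelta = second[1] - first[1];
        double used = totalDelta == 0 ? 0.0
                : 100.0 * (totalDelta - idleDelta) / totalDelta;
        System.out.printf("host cpu utilization: %.2f%%%n", used);
    }
}
```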
The proxy-arp add/del is done on firewall rule add/del.
The proxy-arp rule is deleted only when no static NAT or destination NAT rule is using the IP.
When static NAT or PF rules and a firewall rule exist for the same IP:
a. Deleting the firewall rule skips deleting the proxy-arp rule because the IP is still used by the static NAT rule.
b. After the firewall rule is deleted, disabling static NAT leaves no way to delete the proxy-arp rule.
On VM expunge we delete firewall rules first and static NAT rules afterwards, which caused stale proxy-arp
rules.
With this fix, the proxy-arp rule is added/deleted on static NAT/PF rule add/del.
(cherry picked from commit 19668713ed)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
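The ordering problem can be illustrated with a small, self-contained sketch (not the router code itself): the proxy-ARP entry has to follow the static NAT/PF rules that need it, not the firewall rules, so deletion order during expunge no longer matters.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustration only: tracks which NAT-style rules reference a public IP and
// ties the proxy-ARP entry to them, so deleting firewall rules in any order
// can never strand a stale proxy-ARP entry.
public class ProxyArpTrackerSketch {

    private final Map<String, Set<String>> natRulesByIp = new HashMap<>();
    private final Set<String> proxyArpEntries = new HashSet<>();

    // Called when a static NAT or port-forwarding rule is added for an IP.
    public void onNatRuleAdd(String ip, String ruleId) {
        natRulesByIp.computeIfAbsent(ip, k -> new HashSet<>()).add(ruleId);
        if (proxyArpEntries.add(ip)) {
            System.out.println("add proxy-arp for " + ip);
        }
    }

    // Called when a static NAT or port-forwarding rule is removed.
    public void onNatRuleDelete(String ip, String ruleId) {
        Set<String> rules = natRulesByIp.getOrDefault(ip, new HashSet<>());
        rules.remove(ruleId);
        if (rules.isEmpty() && proxyArpEntries.remove(ip)) {
            natRulesByIp.remove(ip);
            System.out.println("del proxy-arp for " + ip);
        }
    }

    // Firewall rule add/delete no longer touches proxy-ARP at all.
}
```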
2.8.7 to 3.3.5
3.3.5 is the latest stable version of the AMQP client, which is also
backward compatible. Successfully tested with the updated client library.
(cherry picked from commit ff797dfa59)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
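As a sanity check of that backward compatibility, a small standalone publish example using only calls that exist in both 2.8.7 and 3.3.5 of the RabbitMQ Java client; the host, credentials, exchange and routing key are placeholders, not CloudStack's configuration.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AmqpClientSmokeTest {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");      // placeholder broker
        factory.setUsername("guest");
        factory.setPassword("guest");

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Same calls compile and behave identically against 2.8.7 and 3.3.5.
        channel.exchangeDeclare("cloudstack-events", "topic", true);
        channel.basicPublish("cloudstack-events", "management-server.test", null,
                "hello".getBytes("UTF-8"));

        channel.close();
        connection.close();
    }
}
```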
Otherwise an RBDException will be thrown with the message that the snapshot
isn't protected.
modified: plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
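A hedged sketch of the guard this describes, assuming the rados-java bindings already used by LibvirtStorageAdaptor (snapIsProtected/snapUnprotect/snapRemove); the cluster key, pool, image and snapshot names are placeholders.

```java
import com.ceph.rados.IoCTX;
import com.ceph.rados.Rados;
import com.ceph.rbd.Rbd;
import com.ceph.rbd.RbdImage;

public class RbdUnprotectSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder cluster and auth values.
        Rados rados = new Rados("admin");
        rados.confSet("mon_host", "10.0.0.1");
        rados.confSet("key", "cephx-key-placeholder");
        rados.connect();

        IoCTX io = rados.ioCtxCreate("cloudstack");
        Rbd rbd = new Rbd(io);
        RbdImage image = rbd.open("volume-placeholder");

        String snapName = "cloudstack-base-snap";
        // Only unprotect when the snapshot is actually protected; otherwise
        // librbd raises the "snapshot is not protected" error this fix avoids.
        if (image.snapIsProtected(snapName)) {
            image.snapUnprotect(snapName);
        }
        image.snapRemove(snapName);

        rbd.close(image);
        rados.ioCtxDestroy(io);
    }
}
```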
Permission error on Hyper-V: fixed the path with which a softlink is created for a
downloaded volume or template. Also addressed the issue in other places where the path
field wasn't set correctly when a volume was migrated or a template was created from
a volume.
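Only as an illustration of the symlink point (not the SSVM code): the link has to be created from the store's mount root plus the stored path field, otherwise it ends up dangling or unreadable from the Hyper-V side; every path below is invented.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SoftlinkPathSketch {
    public static void main(String[] args) throws Exception {
        // Invented example paths: the template lives under the secondary
        // storage mount, and the link is published under the web root.
        Path mountRoot = Paths.get("/mnt/SecStorage/6f1ab");
        Path installPath = Paths.get("template/tmpl/2/201/volume.vhd"); // stored path field
        Path webRoot = Paths.get("/var/www/html/userdata");

        Path target = mountRoot.resolve(installPath);          // absolute file on the store
        Path link = webRoot.resolve(installPath.getFileName()); // published link

        Files.createDirectories(webRoot);
        // Link to the full target path; a link built from a wrong or relative
        // path field points at nothing and triggers a permission/read error.
        if (Files.notExists(link)) {
            Files.createSymbolicLink(link, target);
        }
        System.out.println(link + " -> " + target);
    }
}
```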
[Baremetal] After deploying a VM on a Baremetal host, the BM host fails to be PXE booted
(cherry picked from commit 793a6a7177)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
Update the volume path of every volume belonging to the VM to the corresponding new disk filename.
(cherry picked from commit 8cb03ddb23)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
If a volume has been renamed upon migration, update its volumePath to the new disk filename.
(cherry picked from commit c652ff45df)
Signed-off-by: Animesh Chaturvedi <animesh@apache.org>
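A hedged sketch of the read-back step both of the commits above rely on, using the standard vim25 device types (VirtualDisk, VirtualDiskFlatVer2BackingInfo); the real code in the VMware resource layer differs in detail, and the helper name here is an assumption.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.vmware.vim25.VirtualDevice;
import com.vmware.vim25.VirtualDisk;
import com.vmware.vim25.VirtualDiskFlatVer2BackingInfo;

public class MigratedVolumePathSketch {

    // After a storage migration on vCenter 5.5, read back the VM's virtual
    // disks and map each device key to the (possibly renamed) VMDK file name,
    // e.g. "[datastore1] i-2-5-VM/i-2-5-VM_1.vmdk". The caller then updates
    // the volume's path field when the base name has changed.
    public static Map<Integer, String> diskFileNamesByKey(List<VirtualDevice> devices) {
        Map<Integer, String> result = new HashMap<>();
        for (VirtualDevice device : devices) {
            if (device instanceof VirtualDisk) {
                VirtualDisk disk = (VirtualDisk) device;
                if (disk.getBacking() instanceof VirtualDiskFlatVer2BackingInfo) {
                    String fileName =
                            ((VirtualDiskFlatVer2BackingInfo) disk.getBacking()).getFileName();
                    result.put(disk.getKey(), fileName);
                }
            }
        }
        return result;
    }
}
```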