Two issues still exist after Edison's commits.
(1) Migration from new hosts to old hosts fails.
The bridge name on the old host is set to cloudVirBr* if network.bridge.name.schema is set to 3.0 in /etc/cloudstack/agent/agent.properties, but the actual bridge name is breth*-* after running cloudstack-agent-upgrade.
(2) All ports of VMs (Basic zone, or Advanced zone with security groups) on old hosts are open, because the iptables rules are bound to the device (bridge) name, which is changed by cloudstack-agent-upgrade.
After this fix, the KVM upgrade steps are:
a. Install the 4.2 cloudstack-agent on each KVM host.
b. Run "cloudstack-agent-upgrade". This script upgrades all existing bridge names to the new naming scheme and updates the related firewall rules (see the sketch after these steps).
c. Install the libvirt hook:
c1. mkdir /etc/libvirt/hooks
c2. cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
c3. chmod +x /etc/libvirt/hooks/qemu
c4. service libvirtd restart
c5. service cloudstack-agent restart
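For illustration, a minimal sketch of the kind of bridge renaming the upgrade script performs. The class and the nicIndex parameter are hypothetical, not the actual agent code; the naming patterns come from the issue description above.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical sketch of mapping the old schema (cloudVirBr<vlan>)
    // to the new schema (breth<nic>-<vlan>).
    public class BridgeNameMapper {
        private static final Pattern OLD_NAME = Pattern.compile("cloudVirBr(\\d+)");

        public static String toNewName(String oldBridgeName, int nicIndex) {
            Matcher m = OLD_NAME.matcher(oldBridgeName);
            if (!m.matches()) {
                return oldBridgeName; // not an old-style name, leave it untouched
            }
            return "breth" + nicIndex + "-" + m.group(1);
        }
    }

For example, toNewName("cloudVirBr100", 0) would yield "breth0-100". The real script must also rewrite the iptables rules that reference the old device name, otherwise the security group rules no longer match (issue 2 above).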
The code path here is excessively complicated and convoluted:
DisassociateIP ->
  Revoke rules -> {FW, PF (incl. SNAT), LB, RA VPN} ->
    -> Send IpAssoc(false) to VR
  Send all config to VR again
    -> Send IpAssoc(false) to VR again  <---- fails here, since the VLAN for the IP cannot be found (it is already gone)
  -> Mark IP as released
The workaround fix is to not throw an exception in CitrixResourceBase if the operation is a disassociate and the VLAN does not exist on the XenServer host.
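A minimal sketch of that guard; the helper names (findVlanNetwork, isAssociate) are illustrative, not the actual CitrixResourceBase code:

    // Hypothetical sketch: tolerate a missing VLAN when tearing down an IP.
    Network vlanNetwork = findVlanNetwork(conn, vlanId); // illustrative lookup
    if (vlanNetwork == null) {
        if (!isAssociate) {
            // Disassociate: the VLAN is already gone from the XS host,
            // so there is nothing left to clean up; treat it as success.
            s_logger.warn("VLAN for IP not found during disassociate; ignoring");
            return; // report success instead of failing the release
        }
        throw new CloudRuntimeException("Unable to find VLAN network for IP association");
    }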
Signed-off-by: Chiradeep Vittal <chiradeep@apache.org>
When secondary storage is mounted read-only, changing the permissions of files on it will fail. But we should still stick to the current mount point instead of returning the wrong mount point /mnt/sec.
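A sketch of the intended behavior; the helper names here are illustrative, not the actual mount code:

    import java.io.File;

    // Hypothetical sketch: keep the real mount point even when chmod fails.
    private String mountSecondaryStorage(String nfsPath, String mountPoint) {
        mount(nfsPath, mountPoint); // illustrative mount helper
        File dir = new File(mountPoint);
        if (!dir.setWritable(true, false)) {
            // Read-only mount: the permission change fails, but the mount
            // itself is valid, so return it rather than falling back to /mnt/sec.
            s_logger.warn("Unable to change permissions on " + mountPoint + "; read-only mount?");
        }
        return mountPoint;
    }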
Since introducing the pool of session contexts, we no longer have a dedicated context for each VMware hypervisor host. Hence the VSM credentials stored in the session context cannot always be retrieved correctly. The fix is to register the VSM credentials after fetching a context, as the context gets recycled after use.
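A sketch of the fix, assuming a context pool API and CloudStack's VmwareContext stock-object mechanism; the pool method names and the "vsmcredentials" key are approximate:

    // Hypothetical sketch: re-register the VSM credentials on every fetch,
    // because pooled contexts are shared and recycled across hosts.
    VmwareContext context = contextPool.getContext(vCenterAddress); // illustrative pool API
    context.registerStockObject("vsmcredentials", vsmCredentials);
    try {
        // ... perform this host's operation using the context ...
    } finally {
        contextPool.returnContext(context); // context is recycled after use
    }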
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Libvirt reports:
org.libvirt.LibvirtException: Storage volume not found: no storage vol with matching name
in some cases when the volume is created on one KVM host while it is accessed from another host.
This is possibly due to concurrent (read/write) access to the storage.
The current fix is to retry several times, waiting 30 seconds between retries.
If the issue is still there, we will need to synchronize storage pool access.
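A sketch of the retry using the org.libvirt Java bindings; the retry count and method structure are illustrative:

    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;
    import org.libvirt.StorageVol;

    // Hypothetical sketch: retry the volume lookup to ride out concurrent access.
    StorageVol lookupVolumeWithRetry(StoragePool pool, String volName) throws LibvirtException {
        final int maxRetries = 10; // illustrative value
        LibvirtException lastError = null;
        for (int i = 0; i < maxRetries; i++) {
            try {
                return pool.storageVolLookupByName(volName);
            } catch (LibvirtException e) {
                lastError = e;
                pool.refresh(0); // re-scan the pool before the next attempt
                try {
                    Thread.sleep(30000); // wait 30 seconds for each retry
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        throw lastError;
    }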
This failed due to a RAW -> QCOW2 conversion (again).
The current code still makes too many assumptions about everything always being QCOW2, while that is not always true.
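A sketch of the direction of the fix: check the actual source format instead of assuming QCOW2. QemuImg/QemuImgFile are CloudStack's qemu-img wrappers, but the surrounding names (detectFormat, copy) are illustrative, and exception handling is omitted:

    // Hypothetical sketch: only convert when the source format really differs.
    PhysicalDiskFormat srcFormat = detectFormat(srcPath); // illustrative, e.g. via qemu-img info
    QemuImgFile srcFile = new QemuImgFile(srcPath, srcFormat);
    QemuImgFile destFile = new QemuImgFile(destPath, PhysicalDiskFormat.QCOW2);
    if (srcFormat != PhysicalDiskFormat.QCOW2) {
        new QemuImg(timeout).convert(srcFile, destFile); // real conversion needed, e.g. RAW -> QCOW2
    } else {
        copy(srcPath, destPath); // same format: a plain copy suffices (illustrative)
    }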