From a functionality point of view, the following are the changes:
-> The threshold will not be applied to VMs undergoing the HA process (for example, during a host-down event).
-> The threshold will not be applied to VMs undergoing maintenance.
-> The threshold will apply to a stop-start VM "after" skip.counting.hours
(the time for which we reserve capacity for the stopped VM).
The reason is that stop-start is a user-initiated action, and we give a time
window (skip.counting.hours) during which the VM is guaranteed to start in the same
cluster.
After that time the threshold check applies, because we still want to reserve
capacity for urgent situations such as HA or putting a host into maintenance, rather than have it
consumed by such user-initiated actions.
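A minimal sketch of the intended decision, assuming hypothetical names (DeployReason, applyDisableThreshold) that are not the actual CloudStack code:

    // Hypothetical sketch, not CloudStack code: shows when the cluster disable
    // threshold should be enforced according to the rules above.
    public class CapacityThresholdSketch {

        enum DeployReason { HA, HOST_MAINTENANCE, USER_START }

        /**
         * @param reason              why the VM is being (re)started
         * @param hoursSinceVmStopped only relevant for a user stop-start
         * @param skipCountingHours   value of skip.counting.hours
         */
        static boolean applyDisableThreshold(DeployReason reason,
                                             long hoursSinceVmStopped,
                                             long skipCountingHours) {
            // HA and host-maintenance restarts are never blocked by the threshold.
            if (reason == DeployReason.HA || reason == DeployReason.HOST_MAINTENANCE) {
                return false;
            }
            // Within skip.counting.hours the capacity is still reserved for the
            // stopped VM, so the start is guaranteed; after that the threshold applies.
            return hoursSinceVmStopped > skipCountingHours;
        }
    }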
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Added a check to ensure that a VLAN id used for a public IP range is not used
for a portable IP range: the public range is checked to see whether the same
VLAN exists in a portable IP range.
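A rough sketch of the kind of check added, using made-up types (VlanRange, validateNewPublicRange) rather than the real validation code:

    import java.util.List;

    // Hypothetical sketch, not the actual CloudStack validation.
    public class VlanReuseCheckSketch {

        record VlanRange(String vlanId, String startIp, String endIp) { }

        // Reject a new public IP range whose VLAN is already used by a portable range.
        static void validateNewPublicRange(VlanRange newPublicRange, List<VlanRange> portableRanges) {
            for (VlanRange portable : portableRanges) {
                if (portable.vlanId().equalsIgnoreCase(newPublicRange.vlanId())) {
                    throw new IllegalArgumentException("VLAN " + newPublicRange.vlanId()
                            + " is already in use by a portable IP range");
                }
            }
        }
    }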
The cluster-wide and zone-wide storage pool allocators returned shared pools even for volumes meant to be on a local storage pool.
If the VM uses a local disk, the cluster and zone storage allocators should not handle it and should return null or an empty list.
Also fixed the deployment planner to avoid a cluster if:
a. the list of pools returned by the storage pool allocators is empty, OR
b. all local or shared pools in the cluster are in the avoid state.
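A sketch of both ideas, assuming simplified hypothetical types rather than the real allocator and planner interfaces:

    import java.util.Collections;
    import java.util.List;

    // Hypothetical sketch, not the real allocator/planner code.
    public class LocalStorageAllocationSketch {

        static class DiskProfile { boolean useLocalStorage; }
        static class StoragePool { }

        // Zone/cluster-wide allocators serve only shared storage; for a local
        // disk they step aside and return an empty list.
        static List<StoragePool> allocateSharedOnly(DiskProfile profile, List<StoragePool> sharedPools) {
            if (profile.useLocalStorage) {
                return Collections.emptyList();
            }
            return sharedPools;
        }

        // Planner side: skip the cluster when the allocators found no usable pool
        // or every pool in the cluster is already in the avoid set.
        static boolean avoidCluster(List<StoragePool> poolsFromAllocators,
                                    List<StoragePool> allClusterPools,
                                    List<StoragePool> avoidSet) {
            return poolsFromAllocators.isEmpty() || avoidSet.containsAll(allClusterPools);
        }
    }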
Added checks to restrict the same IP range from being configured as both a public IP
range and a portable IP range, even when defined in different VLANs across public
and portable ranges.
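A hypothetical helper showing the overlap test implied here; it is not the CloudStack code:

    public class IpRangeOverlapSketch {

        // Convert a dotted-quad IPv4 address to a comparable long.
        static long ipToLong(String ip) {
            long value = 0;
            for (String octet : ip.split("\\.")) {
                value = (value << 8) | Integer.parseInt(octet);
            }
            return value;
        }

        // True if [start1, end1] and [start2, end2] share at least one address.
        static boolean rangesOverlap(String start1, String end1, String start2, String end2) {
            return ipToLong(start1) <= ipToLong(end2) && ipToLong(start2) <= ipToLong(end1);
        }
    }

A new public range would then be rejected if it overlaps any existing portable range (and vice versa), regardless of the VLANs the ranges are defined in.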
This patch resets the priority in the following condition:
1. All redundant routers are stopped, e.g. due to network GC.
2. The user starts one VM in the network.
3. The routers are brought up with the priority reset (100 & 99).
This resolves the issue where network GC causes the lower limit of the redundant router priority to be reached.
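A sketch of the reset step, with hypothetical names; this is not the actual redundant-router code:

    import java.util.List;

    // Hypothetical sketch of the priority reset.
    public class RvrPriorityResetSketch {

        static final int MASTER_PRIORITY = 100;
        static final int BACKUP_PRIORITY = 99;

        static class Router {
            boolean stopped;
            int priority;
        }

        // When every redundant router of the network is stopped (e.g. after network
        // GC), reset the priorities before bringing the pair back up instead of
        // continuing to decrement from their previous values.
        static void resetIfAllStopped(List<Router> routers) {
            boolean allStopped = routers.stream().allMatch(r -> r.stopped);
            if (allStopped && routers.size() == 2) {
                routers.get(0).priority = MASTER_PRIORITY;
                routers.get(1).priority = BACKUP_PRIORITY;
            }
        }
    }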
Removed log_test_exceptions, which did not add any value.
Skipped a few tests that are incomplete. Added timeout logic
and a wait for the router to boot.
(cherry picked from commit a65f1ebefc)
Signed-off-by: Sangeetha <sangeetha.hariharan@citrix.com>
The tests used to wait forever for the VM to reach the Running state.
Added a timeout so they do not get into an infinite loop.
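The actual tests are Python/marvin; the sketch below only illustrates the bounded-wait pattern with hypothetical names:

    // Hypothetical sketch of a bounded wait, not the real test code.
    public class BoundedWaitSketch {

        interface VmStateSource {
            String currentState();
        }

        // Poll until the VM reports Running, giving up after maxAttempts
        // instead of looping forever.
        static boolean waitForRunning(VmStateSource vm, int maxAttempts, long intervalMillis)
                throws InterruptedException {
            for (int attempt = 0; attempt < maxAttempts; attempt++) {
                if ("Running".equals(vm.currentState())) {
                    return true;
                }
                Thread.sleep(intervalMillis);
            }
            return false; // the caller fails the test instead of hanging
        }
    }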
Signed-off-by: venkataswamybabu budumuru <venkataswamybabu.budumuru@citrix.com>
Renamed the test case and also initialised _cleanup so that
it does not break on a non-NS CloudStack setup.
Signed-off-by: venkataswamybabu budumuru <venkataswamybabu.budumuru@citrix.com>
(cherry picked from commit c5e1c4725c)
Even though the volume may get migrated from shared to local storage, it is not possible to update the disk offering.
The fix is to disallow migration from shared to local storage.
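A minimal sketch of the guard, assuming hypothetical types; it is not the actual migrate-volume validation:

    public class MigrateVolumeGuardSketch {

        static class StoragePool { boolean local; }

        // Reject a migration whose destination is local storage while the source
        // is shared, because the volume's disk offering cannot be updated to match.
        static void validateMigration(StoragePool source, StoragePool destination) {
            if (!source.local && destination.local) {
                throw new IllegalArgumentException(
                        "Migrating a volume from shared to local storage is not supported");
            }
        }
    }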
The list clusters/pods/zones APIs were not accounting for reserved capacity in the used capacity percentage.
Fixed the listCapacity cmd to account for reserved capacity as well.
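The intended arithmetic, shown with hypothetical field names (reserved capacity counts as consumed):

    public class CapacityPercentageSketch {

        // Used percentage should include reserved capacity in the numerator.
        static double usedPercentage(long usedCapacity, long reservedCapacity, long totalCapacity) {
            if (totalCapacity == 0) {
                return 0.0;
            }
            return 100.0 * (usedCapacity + reservedCapacity) / totalCapacity;
        }
    }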
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
VM deployment is fine; the issue is in attach volume, where not all possible scenarios are handled.
The following needs to be the attach-volume logic (sketched below):
1. if the data volume scope is zone, then allow the attach (this is already there)
2. if the data volume scope is cluster
a. if the root volume scope is host, allow if both are in the same cluster (already there)
b. if the root volume scope is zone, allow if the VM and the data volume are in the same cluster (fixed as part of this commit)
3. if the data volume scope is host, allow if the VM and the data volume are on the same host (fixed as part of this commit)
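A simplified sketch of those rules, with hypothetical names; it is not the real attach-volume code:

    public class AttachVolumeScopeSketch {

        enum Scope { HOST, CLUSTER, ZONE }

        static boolean canAttach(Scope dataVolScope,
                                 long dataVolClusterId, long dataVolHostId,
                                 long vmClusterId, long vmHostId) {
            switch (dataVolScope) {
                case ZONE:
                    // 1. Zone-wide data volume: always attachable.
                    return true;
                case CLUSTER:
                    // 2. Cluster-scoped data volume: whether the root volume is on
                    //    host-scoped or zone-wide storage, the VM must be in the same cluster.
                    return dataVolClusterId == vmClusterId;
                case HOST:
                    // 3. Host-scoped data volume: the VM must be on the same host.
                    return dataVolHostId == vmHostId;
                default:
                    return false;
            }
        }
    }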
I don't think the host kernel version has any bearing on it. The original code
was tested with CentOS 6.3 and 6.4, but it seems to succeed or fail per host,
e.g. a fast host might work and a slow host might not. I was getting intermittent
failures with Ubuntu 12.04.3 prior to this patch.