XS 6.1/6.2 introduce a new virtual platform, so there are now two virtual platforms. The Windows PV driver version must match the virtual platform,
so this patch tracks the PV driver version in the VM details and template details.
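A minimal sketch of how this could look, assuming a hypothetical detail key and an in-memory detail store rather than the real CloudStack detail DAOs:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: record the PV driver version as a detail on VMs and
    // templates so the two virtual platforms can be matched against each other.
    public class PvDriverVersionTracker {
        static final String PV_DRIVER_VERSION = "hypervisor.pv.driver.version"; // hypothetical key

        private final Map<Long, Map<String, String>> vmDetails = new HashMap<>();
        private final Map<Long, Map<String, String>> templateDetails = new HashMap<>();

        void recordVmVersion(long vmId, String version) {
            vmDetails.computeIfAbsent(vmId, k -> new HashMap<>()).put(PV_DRIVER_VERSION, version);
        }

        void recordTemplateVersion(long templateId, String version) {
            templateDetails.computeIfAbsent(templateId, k -> new HashMap<>()).put(PV_DRIVER_VERSION, version);
        }

        boolean versionsMatch(long vmId, long templateId) {
            String vmVersion = vmDetails.getOrDefault(vmId, Map.of()).get(PV_DRIVER_VERSION);
            String templateVersion = templateDetails.getOrDefault(templateId, Map.of()).get(PV_DRIVER_VERSION);
            return vmVersion != null && vmVersion.equals(templateVersion);
        }
    }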
Anthony
From a functionality point of view, the following are the changes (a decision sketch follows the list):
-> The threshold will not be applied to VMs undergoing the HA process (say, during a host-down event).
-> The threshold will not be applied to VMs undergoing maintenance.
-> The threshold will apply to a stop-start VM "after" skip.counting.hours
(the time for which we reserve capacity for the stopped VM).
The reason is that stop-start is a user-initiated action and we give a time
window (skip.counting.hours) during which the VM is guaranteed to start in the
same cluster.
After that time the threshold check applies, because we still want to reserve
the capacity for urgent situations like HA or putting a host into maintenance,
rather than have it consumed by such user-initiated actions.
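A minimal sketch of that decision, using assumed names (a DeployReason enum and skip.counting.hours passed in as a number of hours), not the actual planner code:

    import java.time.Duration;
    import java.time.Instant;

    // Illustrative decision table for when the capacity threshold is enforced.
    public class ThresholdPolicy {
        enum DeployReason { HA, HOST_MAINTENANCE, USER_START }

        private final long skipCountingHours; // corresponds to the skip.counting.hours setting

        ThresholdPolicy(long skipCountingHours) {
            this.skipCountingHours = skipCountingHours;
        }

        // Returns true when the threshold check should be applied to this deployment.
        boolean thresholdApplies(DeployReason reason, Instant vmStopTime, Instant now) {
            switch (reason) {
                case HA:
                case HOST_MAINTENANCE:
                    return false; // never block HA or host-maintenance restarts
                case USER_START:
                default:
                    // capacity stays reserved for skip.counting.hours after a user stop;
                    // once that window expires, the threshold check applies again
                    Duration stopped = Duration.between(vmStopTime, now);
                    return stopped.toHours() >= skipCountingHours;
            }
        }
    }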
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Check the public range to see whether the same VLAN exists in a portable IP range:
added a check to ensure a VLAN ID used for a public IP range is not used
for a portable IP range.
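A minimal sketch of such a validation, assuming the set of VLANs already used by portable ranges is available to the caller (the real check goes through the CloudStack VLAN tables):

    import java.util.Set;

    // Illustrative validation: a VLAN proposed for a public IP range must not
    // already be in use by a portable IP range.
    public class VlanOverlapValidator {
        void checkPublicRangeVlan(String proposedVlan, Set<String> portableRangeVlans) {
            if (portableRangeVlans.contains(proposedVlan)) {
                throw new IllegalArgumentException(
                    "VLAN " + proposedVlan + " is already used by a portable IP range");
            }
        }
    }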
The cluster-wide and zone-wide storage pool allocators returned shared pools even for volumes meant to be on local storage.
If the VM uses a local disk, the cluster and zone storage allocators should not handle it and should return null or an empty list (a sketch of this guard follows the list below).
Also fixed the deployment planner to avoid a cluster if:
a. the avoid set returned by the storage pool allocators is empty, OR
b. all local or shared pools in the cluster are in the avoid state
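A minimal sketch of the allocator-side guard, with simplified stand-in types instead of the real DiskProfile/StoragePool classes:

    import java.util.Collections;
    import java.util.List;

    // Illustrative guard: cluster/zone wide allocators only deal with shared
    // storage, so a volume whose disk offering asks for local storage is
    // handed back untouched (empty result) for the local-storage allocator.
    public class SharedOnlyAllocator {
        record DiskProfile(boolean useLocalStorage) {}
        record StoragePool(String name, boolean shared) {}

        List<StoragePool> allocate(DiskProfile profile, List<StoragePool> candidates) {
            if (profile.useLocalStorage()) {
                return Collections.emptyList(); // not this allocator's job
            }
            return candidates.stream().filter(StoragePool::shared).toList();
        }
    }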
The same IP range could be defined in different VLANs across public and portable ranges;
added checks to restrict the same IP range from being configured as both a public IP
range and a portable IP range.
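A minimal sketch of an overlap check of the kind described, with IPs represented as plain longs (assumed helper types, not the actual CloudStack validation code):

    import java.util.List;

    // Illustrative only: reject a portable range that overlaps an existing
    // public range; the same check runs in the other direction when a new
    // public range is added.
    public class IpRangeOverlapValidator {
        record IpRange(long startIp, long endIp) {} // IPs as unsigned 32-bit values

        static boolean overlaps(IpRange a, IpRange b) {
            return a.startIp() <= b.endIp() && b.startIp() <= a.endIp();
        }

        static void checkNewPortableRange(IpRange portable, List<IpRange> publicRanges) {
            for (IpRange pub : publicRanges) {
                if (overlaps(portable, pub)) {
                    throw new IllegalArgumentException(
                        "IP range is already configured as a public IP range");
                }
            }
        }
    }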
This patch resets the priority under the following conditions:
1. All redundant routers are stopped, e.g. due to network GC.
2. A user starts a VM in the network.
3. The routers are then brought up with reset priorities (100 & 99).
This resolves the issue where network GC causes the lower limit of the redundant router priority to be reached.
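A minimal sketch of the idea, using stand-in router objects rather than the real DomainRouterVO/VirtualNetworkApplianceManager code:

    import java.util.List;

    // Illustrative sketch: when every redundant router in the network is
    // stopped (e.g. after network GC), reset priorities to the defaults
    // instead of continuing to decrement them on every restart.
    public class RedundantRouterPriority {
        static final int DEFAULT_MASTER_PRIORITY = 100;
        static final int DEFAULT_BACKUP_PRIORITY = 99;

        static class Router {
            boolean running;
            int priority;
        }

        void resetIfAllStopped(List<Router> routersInNetwork) {
            if (routersInNetwork.stream().noneMatch(r -> r.running)) {
                for (int i = 0; i < routersInNetwork.size(); i++) {
                    routersInNetwork.get(i).priority =
                            (i == 0) ? DEFAULT_MASTER_PRIORITY : DEFAULT_BACKUP_PRIORITY;
                }
            }
        }
    }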
Even though a volume can be migrated from shared to local storage, it is not possible to update the disk offering afterwards.
The fix is to disallow migration from shared to local storage.
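A minimal sketch of such a pre-check, with a simplified pool type standing in for the real storage pool objects:

    // Illustrative pre-check for volume migration: because the disk offering
    // cannot be updated after the move, a shared-to-local migration is
    // rejected up front.
    public class VolumeMigrationCheck {
        record Pool(boolean shared) {}

        void validate(Pool source, Pool destination) {
            if (source.shared() && !destination.shared()) {
                throw new UnsupportedOperationException(
                    "Migration of a volume from shared to local storage is not supported");
            }
        }
    }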
The list clusters/pods/zones APIs do not account for reserved capacity in the used-capacity percentage.
Fix the listCapacity command to account for reserved capacity as well.
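A minimal sketch of the intended calculation (illustrative only; the real code works on the stored capacity aggregates):

    // Used-capacity percentage reported by listCapacity and the
    // list clusters/pods/zones responses, counting reserved capacity as used.
    public class CapacityPercent {
        static double usedPercentage(long usedCapacity, long reservedCapacity, long totalCapacity) {
            if (totalCapacity == 0) {
                return 0.0;
            }
            return ((double) (usedCapacity + reservedCapacity) / totalCapacity) * 100.0;
        }
    }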
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
VM deployment is fine; the issue is in attach volume, where not all possible scenarios are handled.
The logic of attach volume needs to be the following (see the sketch after this list):
1. if the data volume scope is zone, allow the attach (this is already there)
2. if the data volume scope is cluster
a. if the root volume scope is host, allow if both are in the same cluster (already there)
b. if the root volume scope is zone, allow if the VM and the data volume are in the same cluster (fixed as part of this commit)
3. if the data volume scope is host, allow if the VM and the data volume are on the same host (fixed as part of this commit)
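A minimal sketch of those rules, with an assumed Scope enum and placement record instead of the real CloudStack types:

    // Illustrative attach-volume placement check following the rules above.
    public class AttachVolumeCheck {
        enum Scope { ZONE, CLUSTER, HOST }

        record Placement(Scope scope, Long clusterId, Long hostId) {}

        boolean canAttach(Placement dataVolume, Long vmClusterId, Long vmHostId) {
            switch (dataVolume.scope()) {
                case ZONE:
                    return true;                                        // rule 1
                case CLUSTER:
                    return dataVolume.clusterId().equals(vmClusterId);  // rules 2a/2b
                case HOST:
                    return dataVolume.hostId().equals(vmHostId);        // rule 3
                default:
                    return false;
            }
        }
    }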
- variables inlined
- CPU utilization is no longer cast from double to float
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Signed-off-by: Min Chen <min.chen@citrix.com>
Primitive double values do not need a Double object to be cast to long.
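A tiny illustrative before/after (assumed method and variable names):

    public class CastExample {
        static long toLong(double value) {
            // a Double wrapper is unnecessary; a primitive cast is enough
            return (long) value; // instead of new Double(value).longValue()
        }
    }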
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Signed-off-by: Min Chen <min.chen@citrix.com>
Applying the short-term fix of forcefully cleaning up if the answer received from the StartCommand is not valid.
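A minimal sketch of what "force clean up on an invalid answer" can look like, using a stand-in Answer interface rather than the real command/answer classes:

    // Illustrative handling: if the answer that came back for the StartCommand
    // is missing or reports failure, force a cleanup of the half-started VM
    // instead of leaving it in a transitional state.
    public class StartResultHandler {
        interface Answer { boolean getResult(); }

        void handleStartResult(Answer answer, Runnable forceCleanup) {
            if (answer == null || !answer.getResult()) {
                forceCleanup.run(); // short-term fix: clean up aggressively
            }
        }
    }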
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
The issue happens when more than one thread processes connect for a host simultaneously. The VM full sync is not designed to work in this scenario, and as a result user VMs may get stopped incorrectly.
The direct agent scan task runs at regular intervals (direct.agent.scan.interval, which defaults to 90 seconds) and identifies hosts that need to be processed for connect. In a normal scenario hosts mostly get connected within that interval and there are no issues. But if for some reason the connect process takes longer and is not completed by the time the next agent scan runs, then, based on the DB state, the same hosts may get picked up again, and more than one thread ends up processing connect for the same host.
The fix is to check whether a thread is already processing connect for a host; if so, all subsequent threads for that host simply bail out. There may also be a scenario where one thread has already completed processing connect but another thread was scheduled before that completion and would repeat the same work; this is also prevented by appropriate checks (see the sketch below).
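A minimal sketch of the single-processor guard, using a plain concurrent set keyed by host id (illustrative only, not the actual AgentManager code):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative guard: only one thread at a time may process connect for a
    // given host; later threads bail out, and the host id is always released
    // afterwards so a future scan can retry if the connect did not finish.
    public class HostConnectGuard {
        private final Set<Long> hostsBeingConnected = ConcurrentHashMap.newKeySet();

        boolean processConnect(long hostId, Runnable connectWork) {
            if (!hostsBeingConnected.add(hostId)) {
                return false; // another thread is already handling this host
            }
            try {
                // a second check against the host's current state would also catch
                // the case where an earlier thread already finished the connect
                connectWork.run();
                return true;
            } finally {
                hostsBeingConnected.remove(hostId);
            }
        }
    }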
This reverts commit 5a8a2a259e.
We will fix this in another way, since the mgmt server may get the state updated in
time.
Conflicts:
server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java