XS 6.1/6.2 introduces a new virtual platform, so there are now two virtual platforms, and the Windows PV driver version must match the virtual platform.
This patch tracks PV driver versions in VM details and template details.
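A minimal sketch of the idea, with hypothetical names (PvDriverVersionTracker, PV_DRIVER_VERSION) rather than the actual CloudStack detail keys:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: record the PV driver version in a VM's details so it can be
// matched against a template's virtual platform. All names here are
// illustrative, not the real CloudStack classes or detail keys.
public class PvDriverVersionTracker {
    // Hypothetical detail key; the real patch may use a different name.
    static final String PV_DRIVER_VERSION = "pv.driver.version";

    private final Map<Long, Map<String, String>> vmDetails = new HashMap<>();

    public void recordVersion(long vmId, String version) {
        vmDetails.computeIfAbsent(vmId, id -> new HashMap<>())
                 .put(PV_DRIVER_VERSION, version);
    }

    /** A VM should only run on the platform whose PV driver version it carries. */
    public boolean matchesPlatform(long vmId, String platformVersion) {
        Map<String, String> details = vmDetails.get(vmId);
        return details != null
            && platformVersion.equals(details.get(PV_DRIVER_VERSION));
    }
}
```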
Anthony
From a functionality point of view, the changes are as follows (a sketch of the decision logic follows the list):
-> The threshold is not applied to VMs undergoing HA (e.g. during a host-down event).
-> The threshold is not applied to VMs being moved off a host entering maintenance.
-> The threshold does apply to a user-initiated stop-start of a VM, but only *after* skip.counting.hours
(the time for which we reserve capacity for the stopped VM).
The reason is that stop-start is a user-initiated action, and we give a time
window (skip.counting.hours) during which the VM is guaranteed to start in the same
cluster.
After that window, the threshold check applies because we still want to reserve
capacity for urgent situations like HA or putting a host into maintenance, rather than have it consumed by such user-initiated
actions.
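A minimal sketch of the decision described above; the enum, class, and method names are illustrative stand-ins, not the actual deployment-planner code:

```java
import java.time.Duration;
import java.time.Instant;

public class CapacityThresholdCheck {
    // Hypothetical stand-ins for the deployment reasons discussed above.
    public enum DeployReason { HA, HOST_MAINTENANCE, USER_START }

    private final Duration skipCountingHours; // the skip.counting.hours window

    public CapacityThresholdCheck(Duration skipCountingHours) {
        this.skipCountingHours = skipCountingHours;
    }

    /**
     * The disable threshold is skipped for HA and host maintenance, and for a
     * user-initiated stop-start only while the stopped VM's capacity is still
     * reserved (i.e. within skip.counting.hours of the stop).
     */
    public boolean thresholdApplies(DeployReason reason, Instant vmStoppedAt, Instant now) {
        switch (reason) {
            case HA:
            case HOST_MAINTENANCE:
                return false; // urgent: never blocked by the threshold
            case USER_START:
                // Applies only once the reservation window has elapsed.
                return Duration.between(vmStoppedAt, now).compareTo(skipCountingHours) > 0;
            default:
                return true;
        }
    }
}
```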
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Applying the short-term fix of forcefully cleaning up if the answer received from the StartCommand is not valid.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
There are three issues in the resource_count table (a fix sketch follows the list):
(1) Expunging a VM decrements the public_ip count, which becomes -1 in a basic zone.
(2) Recovering a VM incorrectly increments the volume count.
(3) Restoring a VM incorrectly decrements the volume count.
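A minimal sketch of the intended invariant, using a hypothetical ResourceCounter: counts are adjusted only when a resource is genuinely allocated or freed, and a decrement never drives a count negative (the public_ip symptom above):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class ResourceCounter {
    private final ConcurrentHashMap<String, AtomicLong> counts = new ConcurrentHashMap<>();

    /** Call only when a resource is actually allocated; a recover (which
     *  keeps the existing root volume) must not increment the volume count. */
    public void increment(String type) {
        counts.computeIfAbsent(type, t -> new AtomicLong()).incrementAndGet();
    }

    /** Call only when a resource is actually freed; a restore swaps the root
     *  volume one-for-one and must leave the count unchanged overall. */
    public void decrement(String type) {
        AtomicLong c = counts.get(type);
        if (c != null) {
            // Guard against driving a count negative, as seen with public_ip.
            c.updateAndGet(v -> Math.max(0, v - 1));
        }
    }

    public long get(String type) {
        AtomicLong c = counts.get(type);
        return c == null ? 0 : c.get();
    }
}
```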
Updating the fix to cover one more scenario, where the user directly calls the migrateVirtualMachineWithVolume API.
If the current pool is accessible to the destination host, skip calling the allocators and move on to the next volume.
This means that if a user calls the migrateVirtualMachineWithVolume API and all of the VM's volumes are accessible on the specified target host,
the API fails because no storage migration is involved; the user should call the migrateVirtualMachine API instead.
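A minimal sketch of the volume loop described above, with simplified stand-in types (VolumeMigrationPlanner, Volume, StoragePool) rather than the actual CloudStack interfaces:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class VolumeMigrationPlanner {
    interface StoragePool { boolean isAccessibleFrom(long hostId); }
    interface Volume { StoragePool currentPool(); }

    /**
     * Returns the subset of volumes that need a new pool on the target host.
     * The allocator function stands in for the real pool-selection logic.
     */
    public Map<Volume, StoragePool> planStorageMigration(
            List<Volume> volumes, long destHostId,
            Function<Volume, StoragePool> allocator) {
        Map<Volume, StoragePool> toMigrate = new LinkedHashMap<>();
        for (Volume volume : volumes) {
            if (volume.currentPool().isAccessibleFrom(destHostId)) {
                // Current pool is visible to the destination host: no storage
                // migration needed, so skip the allocators for this volume.
                continue;
            }
            toMigrate.put(volume, allocator.apply(volume));
        }
        if (toMigrate.isEmpty()) {
            // Nothing to migrate: migrateVirtualMachineWithVolume is the wrong
            // API here; the caller should use migrateVirtualMachine instead.
            throw new IllegalArgumentException(
                "No storage migration required; use migrateVirtualMachine instead");
        }
        return toMigrate;
    }
}
```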
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
This is caused by a VM on zone-wide primary storage not requiring storage migration while migrating across clusters.
Detect the storage pool type before allowing normal migration (without storage live migration) of a VM across clusters.
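A minimal sketch of the scope check, with illustrative names (MigrationScopeCheck, ScopeType) rather than the real CloudStack types:

```java
public class MigrationScopeCheck {
    enum ScopeType { HOST, CLUSTER, ZONE }
    interface StoragePool { ScopeType scope(); }

    /**
     * A VM whose volumes sit on zone-wide primary storage can migrate across
     * clusters without storage migration, since the pool is visible from
     * every host in the zone; a cluster-scoped pool cannot.
     */
    static boolean requiresStorageMigration(StoragePool pool, boolean crossCluster) {
        if (pool.scope() == ScopeType.ZONE) {
            return false; // zone-wide pool: plain migration is enough
        }
        return crossCluster; // cluster-scoped pool crossing clusters needs storage migration
    }
}
```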
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Changes:
- Implicit creation of the 'ExplicitDedication' affinity group during resource dedication
- Only one such group per account or per domain
- ListDedicatedResources supports filtering by affinityGroup
- Deployment should consider only the dedicated resources associated with the group
- Deleting the affinity group should release the dedicated resources
- Releasing the dedicated resources should remove the associated group if no resources remain (see the sketch below)
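A minimal sketch of the implicit create/remove lifecycle, using hypothetical names (DedicationGroups, findOrCreateGroup), not the actual affinity-group manager:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DedicationGroups {
    public static final String TYPE = "ExplicitDedication";
    // One group per owner (account or domain), keyed by owner id.
    private final Map<Long, String> groupByOwner = new ConcurrentHashMap<>();

    /** Returns the owner's dedication group, creating it implicitly on first use. */
    public String findOrCreateGroup(long ownerId) {
        return groupByOwner.computeIfAbsent(ownerId, id -> "DedicatedGrp-" + id);
    }

    /** Releasing the last dedicated resource removes the associated group. */
    public void onResourcesReleased(long ownerId, int remainingResources) {
        if (remainingResources == 0) {
            groupByOwner.remove(ownerId);
        }
    }
}
```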
Changes:
- The 'ExplicitDedication' type of group can be created/deleted by the root admin only
- Users can no longer create this type of affinity group
- The root admin can create this type of affinity group at the domain level; such a domain-level group is available to all accounts in that domain for listing and for use during deployVM
- The domain-level affinity group should be visible to the users in that domain, to domain admins, and to the root admin (see the access-rule sketch below)
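A minimal sketch of these access rules, with simplified stand-ins (ExplicitDedicationAcl, Role) rather than CloudStack's actual account types:

```java
public class ExplicitDedicationAcl {
    enum Role { USER, DOMAIN_ADMIN, ROOT_ADMIN }

    /** Only the root admin may create or delete an 'ExplicitDedication' group. */
    static boolean canCreateOrDelete(Role role) {
        return role == Role.ROOT_ADMIN;
    }

    /** A domain-level group is visible to all accounts in its domain,
     *  to that domain's admins, and to the root admin. */
    static boolean canSee(long callerDomainId, Role callerRole, long groupDomainId) {
        return callerRole == Role.ROOT_ADMIN || callerDomainId == groupDomainId;
    }
}
```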
Add the simulator to the seemingly strict filter, which should really happen within
the hypervisor resource and not the virtualmachine :/
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Marked the new system template as dynamicallyScalable.
- Handled the upgrade case (an upgrade sketch follows this list)
- Moved the "dynamicallyScalable" flag from the user_vm_details table to the vm_instance table to support dynamic scaling of system VMs
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Alerts are generated for VM migration in the following cases (a sketch of the rule follows the list):
1) The source host is dedicated and the destination host is not.
2) The source host is not dedicated and the destination host is dedicated.
3) Both hosts are dedicated, but to different accounts/domains.
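A minimal sketch of the alert rule; Host here is a simplified stand-in, not the actual CloudStack host type:

```java
public class DedicationMigrationAlert {
    static class Host {
        final boolean dedicated;
        final Long dedicatedOwnerId; // account/domain the host is dedicated to, or null

        Host(boolean dedicated, Long dedicatedOwnerId) {
            this.dedicated = dedicated;
            this.dedicatedOwnerId = dedicatedOwnerId;
        }
    }

    /** True exactly in the three cases listed above. */
    static boolean shouldAlert(Host src, Host dest) {
        if (src.dedicated != dest.dedicated) {
            return true; // cases 1 and 2: only one side is dedicated
        }
        // Case 3: both dedicated, but to different accounts/domains.
        return src.dedicated
            && !src.dedicatedOwnerId.equals(dest.dedicatedOwnerId);
    }
}
```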
Changes:
- Pass the avoid set generated by the first pass of deployment to the second try (see the sketch below).
- The second try, which searches over the entire zone again, is performed when the first pass using a reserved plan fails to deploy on the reserved host.
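A minimal sketch of the two-pass flow, with illustrative names (TwoPassDeployer, Planner); the planner is assumed to add hosts that failed for it to the avoid set it receives, so the second pass does not retry them:

```java
import java.util.HashSet;
import java.util.Set;

public class TwoPassDeployer {
    interface Planner {
        // Returns the chosen host id, or null if deployment failed;
        // expected to add failed candidates to the mutable avoid set.
        Long deploy(long zoneId, Long reservedHostId, Set<Long> avoid);
    }

    public Long deployWithRetry(Planner planner, long zoneId, Long reservedHostId) {
        Set<Long> avoid = new HashSet<>();
        // First pass: honor the reserved plan (e.g. the VM's last host).
        Long host = planner.deploy(zoneId, reservedHostId, avoid);
        if (host != null) {
            return host;
        }
        // Second pass: search the entire zone, carrying the avoid set forward.
        return planner.deploy(zoneId, null, avoid);
    }
}
```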