Changes:
- Implicit creation of the 'ExplicitDedication' affinity group during resource dedication
- Only one group per account or per domain will be present
- ListDedicatedResources by affinityGroup
- Deployment should consider only the dedicated resources associated with the group
- Deleting the affinity group should release the dedicated resources
- Releasing the dedicated resources should remove the associated group if no resources remain (see the sketch below)
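A minimal sketch of this lifecycle, using hypothetical DAO helpers (affinityGroupDao, dedicatedResourceDao and their methods are illustrative, not the actual CloudStack names):

    // Implicitly create the 'ExplicitDedication' group when a resource is dedicated,
    // and drop it again once the last dedicated resource is released.
    public AffinityGroup dedicateResource(long resourceId, long accountId) {
        AffinityGroup group = affinityGroupDao.findByAccountAndType(accountId, "ExplicitDedication");
        if (group == null) {
            // Created implicitly; at most one such group exists per account (or per domain).
            group = affinityGroupDao.persist(new AffinityGroup(accountId, "ExplicitDedication"));
        }
        dedicatedResourceDao.persist(new DedicatedResource(resourceId, group.getId()));
        return group;
    }

    public void releaseResource(long resourceId, long groupId) {
        dedicatedResourceDao.removeByResource(resourceId);
        // Releasing the last dedicated resource removes the implicit group as well.
        if (dedicatedResourceDao.countByGroup(groupId) == 0) {
            affinityGroupDao.remove(groupId);
        }
    }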
Changes:
- 'ExplicitDedication' type of group can be created/deleted by Root admin only
- Users can no longer create this type of affinity group
- Root admin can create this type of affinity group at the domain level. Such a domain-level group is available to all accounts in that domain for listing and for use during deployVM.
- The domain-level affinity group should be visible to the users in that domain, domain admins and Root admin.
Update the system template URLs for the existing templates to the new 4.2 URLs during the upgrade to 4.2.
If the new 4.2 system template is registered before the upgrade, mark the old templates as removed during the upgrade.
Description:
Define upgrade paths from 4.1.0 to 4.1.1, 4.1.1 to 4.1.2, and
4.1.2 to 4.2.0. These new paths replace the existing 4.1.0 to
4.2.0 path. This is required to allow upgrades from 4.1.2
installations to 4.2.0, since a 4.1.2 installation will have
that db version when installed from the 4.1 branch. Please
note that these new upgrade paths are empty and don't make
any SQL schema modifications.
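As a minimal sketch, one of these empty steps could look roughly like the following, assuming an upgrade class in the style of CloudStack's DbUpgrade implementations (method names are an approximation, not taken from the actual patch):

    import java.sql.Connection;

    // Hypothetical empty upgrade step from 4.1.1 to 4.1.2: it advances the recorded
    // db version but performs no schema changes.
    public class Upgrade411to412 {
        public String[] getUpgradableVersionRange() {
            return new String[] {"4.1.1", "4.1.2"};
        }

        public String getUpgradedVersion() {
            return "4.1.2";
        }

        public void performDataMigration(Connection conn) {
            // Intentionally empty: this path makes no SQL schema modifications.
        }
    }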
Deployment time increased due to the newly added dedicated resources feature. During regular VM deployment, all dedicated resources are put in the avoid list so that they are not considered for deployment.
Currently the way the list of dedicated resources is computed is not optimal, and performance deteriorates in an environment with a lot of pods, clusters and hosts because the logic queries the db for each such resource.
The fix is to optimize the logic so that it does not loop through all resources but instead gets the list of each resource type in a single query.
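A rough illustration of the optimization, with hypothetical DAO and avoid-list methods; the point is one query per resource type instead of one query per resource:

    // Before (slow): one db query per pod/cluster/host to decide whether it is dedicated.
    // After (sketch): fetch all dedicated ids of each resource type in a single query
    // and build the avoid list in memory.
    List<Long> dedicatedPodIds     = dedicatedResourceDao.listAllDedicatedPodIds();
    List<Long> dedicatedClusterIds = dedicatedResourceDao.listAllDedicatedClusterIds();
    List<Long> dedicatedHostIds    = dedicatedResourceDao.listAllDedicatedHostIds();

    avoid.addPodIds(dedicatedPodIds);
    avoid.addClusterIds(dedicatedClusterIds);
    avoid.addHostIds(dedicatedHostIds);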
Introduce a global lock on the template and pool id so that concurrent threads won't insert the same row in the DB and hit MySQLIntegrityConstraintViolationException.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
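A sketch of the locking pattern described above, assuming CloudStack's GlobalLock utility; the lock name, timeout and DAO calls are illustrative rather than the exact fix:

    // Serialize concurrent work on the same (template, pool) pair so that only one
    // thread inserts the row; the others find it already present instead of hitting
    // MySQLIntegrityConstraintViolationException.
    GlobalLock lock = GlobalLock.getInternLock("template_spool_ref." + templateId + "." + poolId);
    try {
        if (lock.lock(120)) {   // wait up to 120 seconds; the timeout is illustrative
            try {
                VMTemplateStoragePoolVO ref = templatePoolDao.findByPoolTemplate(poolId, templateId);
                if (ref == null) {
                    ref = templatePoolDao.persist(newTemplatePoolRef(poolId, templateId));
                }
            } finally {
                lock.unlock();
            }
        }
    } finally {
        lock.releaseRef();
    }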
The snapshot object is being accessed even when it is null. If the snapshot is not present in the backup store, the code should return after cleaning up the db entry.
Also noticed a discrepancy in the upgraded db setup but couldn't fully verify it, so added logging in the upgrade logic for templates/snapshots etc.
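A minimal sketch of the guard being described; the factory, DAO and logger names are hypothetical:

    // If the snapshot is missing from the backup (image) store, clean up the stale
    // db entry and return instead of dereferencing a null snapshot object.
    SnapshotInfo snapshot = snapshotFactory.getSnapshot(snapshotId, DataStoreRole.Image);
    if (snapshot == null) {
        s_logger.debug("Snapshot " + snapshotId + " not found in the backup store; removing the db entry");
        snapshotStoreDao.remove(snapshotId);   // clean up the dangling reference
        return;
    }
    // Safe to use the snapshot object from here on.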
Track the datacenter of the previous cluster correctly while going through each cluster in the zone to check whether two clusters are from different DCs/vCenters.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
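A self-contained sketch of the corrected loop: the previous cluster's datacenter is carried forward on every iteration so that each comparison sees the right value (names are illustrative):

    // Walk the clusters in the zone and flag the case where two clusters point at
    // different vCenter datacenters. The fix is to update prevDc as each cluster
    // is visited rather than leaving it at its initial value.
    String prevDc = null;
    for (ClusterVO cluster : clustersInZone) {
        String currentDc = getVcenterDatacenter(cluster);   // hypothetical helper
        if (prevDc != null && !prevDc.equals(currentDc)) {
            throw new CloudRuntimeException("Zone spans different vCenter datacenters: "
                    + prevDc + " and " + currentDc);
        }
        prevDc = currentDc;   // track the datacenter of the cluster just visited
    }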
The system template upgrade is not required during the 4.0 upgrade since the same is handled during the 4.2 upgrade, so the system template update is removed from the 4.0 upgrade.
KVM.snapshot.enabled was lowercased by f025db95 to keep the configs
uniformly lower-case, but that change missed the upgrade script and the
references in SnapshotManagerImpl. This commit fixes the issue in all
locations.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Private templates now get copied to only one of the image stores, chosen randomly, as was the case earlier. Don't throw an exception for uploading volumes when there are multiple image stores; instead, choose one of them randomly.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
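A self-contained sketch of the selection described above, picking one image store at random instead of failing when several are available (class and method names are illustrative):

    import java.util.List;
    import java.util.Random;

    public class ImageStorePicker {
        private static final Random RAND = new Random();

        // Pick a single image store at random from the candidates instead of
        // throwing an exception when more than one is present.
        static <T> T chooseRandomStore(List<T> imageStores) {
            if (imageStores == null || imageStores.isEmpty()) {
                throw new IllegalStateException("No image store available");
            }
            return imageStores.get(RAND.nextInt(imageStores.size()));
        }
    }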