Changes:
- Consider whether the VM requires local storage, shared storage, or both for its disks.
- Accordingly, pool selection in the cluster should consider local pools, shared pools, or both.
Conflicts:
server/src/com/cloud/agent/manager/allocator/HostAllocator.java
The cluster- and zone-wide storage pool allocators returned shared pools even for volumes meant to be on a local storage pool.
If the VM uses a local disk, the cluster and zone storage allocators should not handle it and should return null or an empty list.
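A minimal sketch of that guard, assuming the allocator can read a local/shared flag off the disk profile (class and method names below are illustrative stand-ins, not the exact ACS types):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Illustrative stand-ins for the real DiskProfile / StoragePool types.
    class DiskProfile {
        private final boolean useLocalStorage;
        DiskProfile(boolean useLocalStorage) { this.useLocalStorage = useLocalStorage; }
        boolean useLocalStorage() { return useLocalStorage; }
    }

    class SharedStoragePool { }

    public class ClusterScopeAllocatorSketch {
        // Cluster- and zone-wide allocators only know shared pools, so a request
        // for a local disk is not theirs to satisfy: return an empty list and let
        // the local storage allocator pick it up.
        List<SharedStoragePool> select(DiskProfile dskCh, List<SharedStoragePool> sharedPoolsInScope) {
            if (dskCh.useLocalStorage()) {
                return Collections.emptyList();
            }
            return new ArrayList<>(sharedPoolsInScope);
        }
    }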
Also fixed the deployment planner to avoid a cluster if
a. the avoid set returned by the storage pool allocators is empty, OR
b. all local or shared pools in the cluster are in the avoid state
Conflicts:
engine/storage/src/org/apache/cloudstack/storage/allocator/ClusterScopeStoragePoolAllocator.java
engine/storage/src/org/apache/cloudstack/storage/allocator/ZoneWideStoragePoolAllocator.java
Introduction of a new Transaction API that is more consistent with the style
of Spring's transaction management. The existing Transaction class was renamed
to TransactionLegacy. All of the non-DAO code in the management server has been
updated to use the new Transaction API.
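A usage sketch of the new style, assuming the callback interfaces follow Spring's TransactionCallback pattern (the stand-in types below only mirror the new API; they are not copied from it):

    // Illustrative stand-ins mirroring the Spring-style callback pattern; the
    // real classes live in com.cloud.utils.db and their exact signatures may differ.
    interface TransactionStatus { }

    interface TransactionCallback<T> {
        T doInTransaction(TransactionStatus status);
    }

    class Transaction {
        static <T> T execute(TransactionCallback<T> callback) {
            // The real implementation opens, commits or rolls back the DB
            // transaction around the callback; this sketch just invokes it.
            return callback.doInTransaction(new TransactionStatus() { });
        }
    }

    public class TransactionApiSketch {
        Long createRecord() {
            // The unit of work is a callback; commit/rollback is handled by the
            // framework instead of explicit txn.start()/txn.commit() calls.
            return Transaction.execute(new TransactionCallback<Long>() {
                @Override
                public Long doInTransaction(TransactionStatus status) {
                    // ... persist entities via DAOs here ...
                    return 42L;
                }
            });
        }
    }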
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context. This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
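A sketch of what a registered listener could look like (the interface shape below is an assumption based on the description above, not the framework's actual signature):

    // Assumed listener shape, derived from the description above; the real
    // interface in org.apache.cloudstack.managed.context may differ.
    interface ManagedContextListener {
        void onEntry();
        void onLeave();
    }

    // Example listener that sets up the CallContext on entry and, on leave,
    // tears it down and checks that no DB transaction leaked out of the thread.
    class CallContextCheckListener implements ManagedContextListener {
        @Override
        public void onEntry() {
            // register the CallContext for the current thread
        }

        @Override
        public void onLeave() {
            // unregister the CallContext and verify no transaction is still open
        }
    }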
- Change AccountService::isRootAdmin(short) to isRootAdmin(long accountId)
- Change all callers
- Change all places that check account.getType() directly to call AccountManagerImpl instead
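An illustrative sketch of the call-site change (the types below are stand-ins for the real com.cloud.user interfaces):

    // Minimal sketch of the new call pattern; Account and AccountService here
    // are illustrative stand-ins, not the real interfaces.
    interface Account {
        long getId();
        short getType();
    }

    interface AccountService {
        boolean isRootAdmin(long accountId); // was isRootAdmin(short accountType)
    }

    class CallerCheck {
        private final AccountService accountService;

        CallerCheck(AccountService accountService) {
            this.accountService = accountService;
        }

        boolean callerIsRootAdmin(Account caller) {
            // Callers no longer compare caller.getType() against admin constants;
            // the decision is delegated to the account service by account id.
            return accountService.isRootAdmin(caller.getId());
        }
    }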
Changes:
- Implicit creation of the 'ExplicitDedication' Affinity group during resource dedication
- Only one group per account or per domain will be present
- ListDedicatedResources by affinityGroup
- Deployment should consider only the dedicated resources associated with the group
- Deleting the affinity group should release the dedicated resources
- Releasing the dedicated resources should remove the associated group if there are no more resources.
Conflicts:
plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
plugins/dedicated-resources/test/org/apache/cloudstack/dedicated/manager/DedicatedApiUnitTest.java
server/src/com/cloud/configuration/ConfigurationManagerImpl.java
Deployment time increased due to the newly added dedicated resources feature. During regular VM deployment, all dedicated resources are put in the avoid list so that they are not considered for deployment.
The current way of computing the list of dedicated resources is not optimal, and performance deteriorates in an environment with a lot of pods, clusters and hosts because the logic queries the DB for each such resource.
The fix is to optimize the logic: instead of looping through all resources, get the list of each resource type in a single query.
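A sketch of the optimized shape, assuming a DAO that can list all dedicated ids of one resource type in a single query (the method names below are hypothetical):

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical DAO facade; the real code goes through the dedicated-resource DAOs.
    interface DedicatedResourceDao {
        List<Long> listAllDedicatedPodIds();
        List<Long> listAllDedicatedClusterIds();
        List<Long> listAllDedicatedHostIds();
    }

    class DedicatedAvoidListBuilder {
        private final DedicatedResourceDao dao;

        DedicatedAvoidListBuilder(DedicatedResourceDao dao) {
            this.dao = dao;
        }

        // One query per resource type instead of one DB query per pod/cluster/host.
        Set<Long> podsToAvoid()     { return new HashSet<>(dao.listAllDedicatedPodIds()); }
        Set<Long> clustersToAvoid() { return new HashSet<>(dao.listAllDedicatedClusterIds()); }
        Set<Long> hostsToAvoid()    { return new HashSet<>(dao.listAllDedicatedHostIds()); }
    }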
Conflicts:
server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
Alerts are generated for VM migration when:
1) The source host is dedicated and the destination host is not.
2) The source host is not dedicated and the destination host is dedicated.
3) Both hosts are dedicated to different accounts/domains.
Changes:
- Locking the group and saving the reservation is done by the DPM
- Added an admin operation to clean up VM reservations
- DPM will also clean up VM reservations on startup
using affinity group even if the zone is dedicated to an account. The check to make sure that
explicit resources are not picked up for non-explicit deployment was present only at the domain
level for zones. Added a check at account level too.
Issues:
In the Implicit planner, resource usage is fixed to "Dedicated". It should be Dedicated/Shared depending upon the Implicit planner's strict/preferred mode and host availability.
Fixed:
The issue is fixed by determining the resource usage as "Dedicated/Shared" depending upon the Implicit planner's strict/preferred mode and the host availability for the planner.
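A sketch of that decision (the mode flags and the availability check below are assumptions for illustration, not the planner's exact logic):

    enum PlannerResourceUsage { DEDICATED, SHARED }

    class ImplicitPlannerUsageSketch {
        // Strict mode always demands dedicated resources; preferred mode falls
        // back to shared resources when no implicitly dedicated host is available.
        PlannerResourceUsage resourceUsage(boolean strictMode, boolean dedicatedHostAvailable) {
            if (strictMode || dedicatedHostAvailable) {
                return PlannerResourceUsage.DEDICATED;
            }
            return PlannerResourceUsage.SHARED;
        }
    }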
Patch 2 for https://reviews.apache.org/r/11379/
Created for files server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java, server/test/com/cloud/vm/DeploymentPlanningManagerImplTest.java, server/test/org/apache/cloudstack/affinity/AffinityApiUnitTest.java
- Changes merged from planner_reserve branch
- Exposing deploymentplanner as an optional parameter while creating a service offering
- Changes to DeploymentPlanningManagerImpl to make sure host reserve-release happens between conflicting planner usages when deploying a VM
BUG-ID: CLOUDSTACK-2281
Bugfix-for: 4.2
Reviewed-by: Prachi Damle
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1367280909 -0600
Adding the zone, cluster, and account level parameters
Parameters at a given scope (zone/cluster/pool/account) can be updated via the updateConfiguration API with the additional parameter zoneid/clusterid/accountid/storagepoolid.
Whenever these scoped parameters are used in CS, the value is read from the corresponding details table; if it is not defined there, the global parameter value is used.
The same applies to the listConfiguration API with the additional parameter zoneid/clusterid/accountid/storagepoolid.
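A sketch of the value-resolution rule described above (the map-backed tables below are only illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Sketch of the fallback rule; the maps are illustrative stand-ins for the
    // per-scope details tables and the global configuration table.
    class ScopedConfigSketch {
        private final Map<String, String> globalConfig = new HashMap<>();
        private final Map<String, Map<Long, String>> zoneDetails = new HashMap<>();

        // Resolution for a zone-scoped lookup: use the zone override when present,
        // otherwise fall back to the global value (same pattern applies for
        // cluster/account/storage pool scopes).
        String valueInZone(String name, long zoneId) {
            return Optional.ofNullable(zoneDetails.get(name))
                    .map(byZone -> byZone.get(zoneId))
                    .orElse(globalConfig.get(name));
        }
    }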
Changes:
- Regular plugin/adapter components should usually be loaded at run level RUNLEVEL_COMPONENT(5)
- HypervisorVmPlannerSelector was at level 0 while configurationServer was at level 2, causing the config to not be loaded for the HypervisorVmPlannerSelector
Changes:
- DeployPlannerSelector was newly introduced for the BareMetal feature. It had the planner name hardcoded.
- Change it to decide the planner by referring to the global config value vm.allocation.algorithm
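A sketch of deciding the planner from vm.allocation.algorithm (the mapping shown is an illustration, not the selector's exact code):

    class PlannerSelectorSketch {
        // Decide the planner from the global config value instead of a hardcoded
        // name; the mapping below is illustrative.
        String selectPlanner(String vmAllocationAlgorithm) {
            switch (vmAllocationAlgorithm) {
                case "userdispersing":
                    return "UserDispersingPlanner";
                case "userconcentratedpod_random":
                case "userconcentratedpod_firstfit":
                    return "UserConcentratedPodPlanner";
                case "random":
                case "firstfit":
                default:
                    return "FirstFitPlanner";
            }
        }
    }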
Supporting kickstart in CloudStack baremetal
Able to start a VM.
Conflicts:
client/tomcatconf/componentContext.xml.in
server/src/com/cloud/baremetal/BareMetalTemplateAdapter.java
server/src/com/cloud/baremetal/BareMetalVmManagerImpl.java
server/src/com/cloud/vm/UserVmManagerImpl.java
The corresponding getter/setter is renamed too.
The reason is that GenericDao does not update a field unless the accessor method name matches the field name; the setter of this VO was one such case.
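An illustrative example of the naming rule (the VO and field below are hypothetical, not the renamed class from this change):

    // GenericDao matches accessors to fields by name: for a field "displayName"
    // it expects getDisplayName()/setDisplayName(). A setter named anything else
    // would silently never persist updates to this field.
    public class ExampleVO {
        private String displayName;

        public String getDisplayName() {
            return displayName;
        }

        // Named to match the field so GenericDao picks the change up on update.
        public void setDisplayName(String displayName) {
            this.displayName = displayName;
        }
    }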
un-allocated space is insufficient on primary storage
check the availability of un-allocated primary storage space during the
planning stage, for the multiple-volume VM creation scenario
modification in StorageManagerImpl.java and StorageManager.java:
add a new method storagePoolHasEnoughSpace(List<Volume>, StoragePool) that
checks whether the storage pool has enough space for all requested volumes (see the sketch below)
modification in FirstFitPlanner.findPotentialDeploymentResources:
handle the multiple-volume case, keep track of volumes allocated to pools,
and call storagePoolHasEnoughSpace to check space availability
modification in AbstractStoragePoolAllocator.java:
extract the capacity computation logic into a new method in
StorageManagerImpl
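A hedged sketch of the new check with simplified stand-in types (the real method in StorageManagerImpl also factors in over-provisioning and existing allocations):

    import java.util.List;

    // Simplified stand-ins for the real Volume / StoragePool types.
    class VolumeSketch {
        private final long sizeBytes;
        VolumeSketch(long sizeBytes) { this.sizeBytes = sizeBytes; }
        long getSizeBytes() { return sizeBytes; }
    }

    class PoolSketch {
        private final long capacityBytes;
        private final long allocatedBytes;
        PoolSketch(long capacityBytes, long allocatedBytes) {
            this.capacityBytes = capacityBytes;
            this.allocatedBytes = allocatedBytes;
        }
        long getCapacityBytes() { return capacityBytes; }
        long getAllocatedBytes() { return allocatedBytes; }
    }

    class StorageCapacitySketch {
        // True only if the pool can take all requested volumes together, so a
        // multi-volume VM is not planned onto a pool that fits just one of them.
        boolean storagePoolHasEnoughSpace(List<VolumeSketch> volumes, PoolSketch pool) {
            long requested = 0;
            for (VolumeSketch v : volumes) {
                requested += v.getSizeBytes();
            }
            return pool.getAllocatedBytes() + requested <= pool.getCapacityBytes();
        }
    }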
RB: https://reviews.apache.org/r/6028/
Send-by: mice_xia@tcloudcomputing.com
Bug 14006 - Admin could not create a VM when the cluster is Disabled
Changes:
- For the Root admin, the planner will not filter out disabled pods or clusters from the resource list
Reviewed-by: Sheng Yang
Changes:
- Do not check if allocation_state is 'Enabled' in planner if the caller is Root Admin.
- This should let Root Admin create a VM in a disabled Zone.
Reviewed-By: Alex
Changes:
- Reuse the same storage pool where the volume is ready on each retry of VM deployment, as long as the cluster holding the volume has capacity
- After that cluster is out of capacity, we look in other clusters and find a new storage pool.
- At this point, if the volume is recreatable on the new storage pool, deployment will succeed provided everything else goes through
- But if the volume is not recreatable and its cluster is out of capacity, we will still fail to deploy the VM
Changes:
To migrate systems using 'use.user.concentrated.pod.allocation' as true and 'vm.allocation.algorithm' as true, we need to
add the following changes:
- There will be 5 values for 'vm.allocation.algorithm': 'random', 'firstfit', 'userdispersing', 'userconcentratedpod_random', 'userconcentratedpod_firstfit'
- 'userconcentratedpod_random' means we apply user concentration to pods and clusters; for hosts and pools we use random ordering.
- 'userconcentratedpod_firstfit' means we apply user concentration to pods and clusters; for hosts and pools we use firstfit ordering.
Changes:
- Once the HostAllocators have listed suitable hosts, the planner should not reshuffle the list, since that would lose the prioritization applied by the HostAllocators.
- E.g.: HostAllocators first choose the host that matches the guest OS category. If the planner shuffles the list, that preference is lost.
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- The planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to any of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, which shuffles the resource lists.
- Now the Admin can choose whether the deployment heuristic should be applied starting at the cluster or the pod level. This can be done by using the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, by default as earlier, the planner starts at clusters directly.
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters in case this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner
- It reorders the capacity-ordered clusters/pods such that pods having more Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters that have fewer Running VMs for the given account
- The Admin can provide weights to capacity and user dispersion so that both parameters get considered when reordering the pods/clusters. This can be done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1. Thus, if this planner is chosen, by default the ordering is done only by the number of Running VMs, unless the weight is changed (see the sketch below).
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of Running VMs / Ready Volumes, respectively, for the given account. Thus, within a cluster, we try to first choose the host or pool with fewer VMs for the account.
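A sketch of how the dispersion weight could combine the two orderings (the scoring formula below is illustrative, not the planner's exact computation):

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    class UserDispersionOrderingSketch {
        // Orders cluster ids by a weighted score: with weight 1.0 only the
        // account's Running VM count matters (fewest first); with weight 0.0 only
        // free capacity matters (most first). The blend is an assumption for
        // illustration.
        List<Long> order(List<Long> clusterIds,
                         Map<Long, Long> runningVmsPerCluster,
                         Map<Long, Double> freeCapacityRatio,
                         double dispersionWeight) {
            return clusterIds.stream()
                    .sorted(Comparator.comparingDouble(id ->
                            dispersionWeight * runningVmsPerCluster.getOrDefault(id, 0L)
                                    - (1.0 - dispersionWeight) * freeCapacityRatio.getOrDefault(id, 0.0)))
                    .collect(Collectors.toList());
        }
    }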
Move all listXxx interfaces from HostDao to the managers (ResourceManager, SecondaryStorageVmManager, etc.) with descriptive names, using SearchCriteria2,
or call SearchCriteria2 directly on demand
Changes:
- We were ordering clusters based on the capacity of the first-fit host found in each cluster. Due to this, there were cases where we deployed VMs to one cluster instead of balancing across clusters.
- Now we order the list of clusters by aggregate capacity and, in this order, choose the ones that have enough capacity for the required VM.
- This should balance the load between clusters instead of bombarding one.
Conflicts:
server/src/com/cloud/capacity/dao/CapacityDao.java
server/src/com/cloud/capacity/dao/CapacityDaoImpl.java
Changes:
To make sure migration does not attempt to pick a host that is already running more VMs than the max guest VM limit:
- Changed manual migration to call the host allocators to return a list of hosts suitable for migration. The host allocators check the max guest VM limit.
- Earlier we returned hosts with enough capacity, but now the host allocators make other checks along with capacity. So the hosts returned are those that have enough capacity AND satisfy all other conditions like host tags, max guests limit, etc. In other words, the allocators don't return hosts that don't satisfy all conditions, even if they have capacity.
- Therefore, the list of hosts returned for manual migration is now marked as 'suitable' hosts instead of 'hasenoughCapacity' in the HostResponse.
- HA migration already calls allocators, so no change is needed there.
Bug 11186 - Cannot restart an existing VM if the cluster is disabled after the
VM has been created
Changes:
- We should not check the cluster 'allocation_state' while starting an
existing VM, provided it has storage already allocated. But if volumes are
deleted and new storage needs to be allocated, then we will not allow the VM
to start.
- However, we should still prohibit adding new VMs to that cluster.