The corresponding getter/setter is renamed too.
The reason is that GenericDao does not update a field unless the accessor
method name matches the field name; the setter of this VO was one such case.
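For context, the convention at play (a hypothetical VO for illustration;
GenericDao is real, the class below is not):

    // GenericDao maps accessors to columns by name, so the get/set pair
    // must match the field name for updates to be persisted.
    public class ExampleVO {
        private String hostName;

        public String getHostName() { return hostName; }             // matches
        public void setHostName(String name) { this.hostName = name; }

        // A setter named differently (e.g. setHost) would compile, but
        // GenericDao would silently skip the field on update.
    }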
Un-allocated space is insufficient on primary storage
Check the availability of un-allocated primary storage space during the
planning stage, for the multiple-volume VM creation scenario.
Modification in StorageManagerImpl.java and StorageManager.java:
add a new method storagePoolHasEnoughSpace(List<Volume>, StoragePool) that
checks whether the storage pool has enough space for all requested volumes
(sketched below).
Modification in FirstFitPlanner.findPotentialDeploymentResources:
handle the multiple-volume case, keep track of the volumes allocated to each
pool, and call storagePoolHasEnoughSpace to check space availability.
Modification in AbstractStoragePoolAllocator.java:
extract the capacity computation logic into a new method in
StorageManagerImpl.
RB: https://reviews.apache.org/r/6028/
Sent-by: mice_xia@tcloudcomputing.com
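As a rough illustration of the new check (a sketch only; the field and
method names below are assumptions, not the actual CloudStack code), the
idea is to sum the sizes of all requested volumes and compare the total
against the pool's un-allocated capacity:

    // Hedged sketch of storagePoolHasEnoughSpace; names are illustrative.
    public boolean storagePoolHasEnoughSpace(List<Volume> volumes, StoragePool pool) {
        long requestedBytes = 0;
        for (Volume v : volumes) {
            requestedBytes += v.getSize();   // bytes requested by this volume
        }
        long unAllocatedBytes = pool.getCapacityBytes() - pool.getAllocatedBytes();
        return requestedBytes <= unAllocatedBytes;
    }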
Bug 14006 - Admin could not create a VM when the cluster is Disabled
Changes:
- For Root Admin, the planner will not filter out disabled pods or clusters
from the resource list.
Reviewed-by: Sheng Yang
Changes:
- Do not check whether allocation_state is 'Enabled' in the planner if the
caller is Root Admin.
- This lets Root Admin create a VM in a disabled Zone (the check is sketched
below).
Reviewed-By: Alex
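The intended check, roughly (a sketch with assumed helper names, not the
actual code path):

    // Only non-root callers are restricted to 'Enabled' groupings.
    boolean allowedByAllocationState(Grouping grouping, Account caller) {
        if (isRootAdmin(caller)) {           // assumed helper
            return true;  // Root Admin may deploy into a Disabled zone/cluster
        }
        return grouping.getAllocationState() == Grouping.AllocationState.Enabled;
    }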
Changes:
- Reuse the same storagepool where the Volume is ready on each retry of VM
deployment, as long as the cluster holding the volume has capacity (see the
sketch below).
- Once that cluster is out of capacity, we look in other clusters and find a
new storagepool.
- At this point, if the volume is recreatable on the new storagepool,
deployment will succeed provided everything else goes through.
- But if the volume is not recreatable and its cluster is out of capacity, we
will still fail to deploy the VM.
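The retry policy above, as a sketch (method names are assumptions):

    // Prefer the pool where the volume is already ready; fall back to a
    // fresh allocation only once that pool's cluster is out of capacity.
    StoragePool choosePool(Volume vol) {
        StoragePool current = vol.getPool();
        if (vol.isReady() && clusterHasCapacity(current.getClusterId())) {
            return current;                  // reuse on every retry
        }
        // Only succeeds if the volume is recreatable on the new pool.
        return allocateFromOtherClusters(vol);
    }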
Changes:
- Once the HostAllocators have listed suitable hosts, the planner should not
reshuffle the list, since that would lose the prioritization applied by the
HostAllocators.
- E.g., HostAllocators choose first the host that matches the guest OS
category. If the planner shuffles the list, that preference is lost.
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and
'UserConcentratedPodPlanner', to the DeploymentPlanners.
- The planner can be chosen by setting the global config variable
'vm.allocation.algorithm' to one of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random',
FirstFitPlanner is invoked as before, which shuffles the resource lists.
- Now the Admin can choose whether the deployment heuristic should be applied
starting at the cluster or the pod level, using the global config variable
'apply.allocation.algorithm.to.pods', which is false by default. Thus, by
default, the planner starts at clusters directly, as before.
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters when this
heuristic was chosen.
- Now a separate planner does this, and it is applied only when
'vm.allocation.algorithm' is set to this planner.
- It reorders the capacity-ordered clusters/pods so that pods having a higher
number of Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not
to hosts or storagepools within a cluster.
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by
the number of 'Running' VMs for the given account, in ascending order. The aim
is to first choose those pods/clusters that have fewer Running VMs for the
given account.
- Admin can give weights to capacity and user dispersion so that both
parameters are considered when reordering the pods/clusters. This can be done
by setting the global config parameter 'vm.user.dispersion.weight'. The
default value is 1. Thus, if this planner is chosen, by default the ordering
is by number of Running VMs only, unless the weight is changed. A sketch of
one possible weighting follows this list.
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in
ascending order of the number of Running VMs / Ready Volumes, respectively,
for the given account. Thus, within a cluster, we try to choose the host or
pool with fewer VMs for the account.
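One way the weight could fold user dispersion into the capacity ordering (an
illustrative sketch; the formula and names are assumptions, not the planner's
exact arithmetic):

    // weight = vm.user.dispersion.weight (default 1).
    double reorderScore(int capacityRank, long runningVmsForAccount, double weight) {
        return weight * runningVmsForAccount + (1 - weight) * capacityRank;
    }
    // Pods/clusters are sorted by ascending score, so with weight=1 the
    // ordering is purely by Running-VM count, i.e. maximum dispersion first.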
Changes:
- We were ordering clusters based on the capacity of the first-fit host found
in each cluster. Due to this, there were cases where we deployed VMs to one
cluster instead of balancing across clusters.
- Now we order the list of clusters by aggregate capacity and, in this order,
choose the ones that have enough capacity for the required VM (sketched
below).
- This should balance the load between clusters instead of bombarding one.
Conflicts:
server/src/com/cloud/capacity/dao/CapacityDao.java
server/src/com/cloud/capacity/dao/CapacityDaoImpl.java
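The new ordering, sketched (helper names are assumptions):

    // Rank clusters by their aggregate free capacity rather than by the
    // capacity of the first-fit host found inside each cluster.
    List<Long> orderByAggregateCapacity(List<Long> clusterIds) {
        clusterIds.sort((c1, c2) -> Long.compare(
                aggregateFreeCapacity(c2), aggregateFreeCapacity(c1)));
        return clusterIds;                   // descending aggregate capacity
    }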
Changes:
To make sure migration does not attempt to pick a host that is running more
VMs than the max guest VM limit:
- Changed manual migration to call the host allocators to return a list of
hosts suitable for migration. Host allocators check the max guest VM limit. A
sketch of the change follows this list.
- Earlier we returned hosts with enough capacity, but now the host allocators
make other checks along with capacity. So the hosts returned are those that
have enough capacity AND satisfy all other conditions like host tags, the max
guests limit, etc. In other words, allocators don't return hosts that fail
any condition, even if they have capacity.
- Therefore, the list of hosts returned for manual migration is now marked as
'suitable' hosts instead of 'hasEnoughCapacity' in the HostResponse.
- HA migration already calls allocators, so no change is needed there.
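Conceptually the migration change looks like this (a sketch; the real
signatures differ):

    // Let the HostAllocators build the list so that every condition
    // (capacity, host tags, max guests limit, ...) is applied.
    List<Host> listHostsForMigrationOf(VirtualMachine vm, ExcludeList avoid) {
        List<Host> suitable = new ArrayList<>();
        for (HostAllocator allocator : hostAllocators) {
            suitable.addAll(allocator.allocateTo(vm, avoid));
        }
        return suitable;  // flagged 'suitable' in the HostResponse
    }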
Bug 11186- Cannot restart existing VM if the cluster is disabled after the
VM has been created
Changes:
- We should not check the cluster 'allocation_state' while starting an
existing VM, provided it already has storage allocated. But if the volumes are
deleted and new storage needs to be allocated, then we will not allow the VM
to start (sketched below).
- However, we should still prohibit adding new VMs to that cluster.
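In effect (a sketch; helper names are assumed):

    // Enforce the cluster allocation_state only when new storage must be
    // allocated for the VM.
    boolean canStartInCluster(VirtualMachine vm, Cluster cluster) {
        if (vmHasAllocatedStorage(vm)) {     // assumed helper
            return true;  // restart allowed even in a Disabled cluster
        }
        return cluster.getAllocationState() == Grouping.AllocationState.Enabled;
    }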
This patch enables redundant virtual routers.
1. To enable this feature, the db needs to be updated using the following SQL
for now (we would get a UI way later):
UPDATE network_offerings SET redundant_router=1 WHERE guest_type="Virtual" AND
system_only=0;
2. The system would try to start the two routers on different hosts. But if
there is only one host in the zone, the system would start both routers on it.
3. The failover part uses keepalived, and the connection tracking part uses
conntrackd. There would be one master router and one backup router. The status
of a router (master or backup) can be queried from the database table
domain_router now. The management server would update the status every 30s by
default.
4. The routers for the same zone would use the same external NIC (same IP and
MAC). The script used for failover would ensure that only one external NIC is
present in the network at any time.
5. Currently the management server doesn't have the ability to stop one of the
routers if both of them report as master. This feature is on the todo list.
After the two routers start up, disconnecting either of them shouldn't affect
the guest network, and established connections (http, ssh, etc.) should still
work. The failover on the gateway part should take 3~4 seconds.
Currently the patch works with KVM. VMware and XenServer would be dealt with
soon.
Changes:
- When the ROOT volume of a VM is found to be READY, the planner is changed to
reuse the pool for every volume (root or data) that is READY and whose pool is
not in maintenance and not in avoid state (see the sketch below).
- If the ROOT volume is not ready, we don't care about the DATA disk; both
would get re-allocated.
- When a pool is reused for a ready volume, the planner does not call the
storagepool allocators, and such volumes are not assigned a pool in the
deployment destination returned by the planner. Accordingly,
StorageManager::prepare won't recreate these volumes, since they are not
mentioned in the destination.
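The per-volume decision, sketched (names are assumptions, not the actual
planner code):

    // When ROOT is READY, reuse each READY volume's pool unless it is in
    // maintenance or in the avoid set; skip the allocators for those
    // volumes so StorageManager::prepare will not recreate them.
    for (Volume vol : volumesOf(vm)) {
        StoragePool pool = getPool(vol);
        if (rootVolumeIsReady && vol.isReady() && pool != null
                && !pool.isInMaintenance() && !avoid.shouldAvoid(pool)) {
            continue;  // reused: left out of the DeployDestination
        }
        callStoragePoolAllocators(vol, plan, avoid);
    }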