Changes:
- Consider whether the VM requires local storage, shared storage, or both for its disks.
- Accordingly, consider the local pools, the shared pools, or both in the cluster.
From a functionality point of view, the changes are as follows:
-> The threshold will not be applied to VMs undergoing the HA process (e.g., during a host-down event).
-> The threshold will not be applied to VMs undergoing host maintenance.
-> The threshold will apply to a user stop-start of a VM "after" skip.counting.hours
(the time until which we reserve capacity for the stopped VM).
The reason is that stop-start is a user-initiated action, and we give a time
window (skip.counting.hours) within which the VM is guaranteed to start in the same
cluster.
After that time, the threshold check applies because we still want to reserve
capacity for urgent situations like HA or putting a host into maintenance, rather
than have it consumed by such user-initiated actions (sketched below).
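A minimal sketch of this decision, assuming hypothetical names (DeployReason, skipCountingWindow); the actual CloudStack capacity-manager types differ:

    import java.time.Duration;
    import java.time.Instant;

    public class ThresholdPolicy {
        enum DeployReason { HA, HOST_MAINTENANCE, USER_START }

        private final Duration skipCountingWindow; // skip.counting.hours as a Duration

        ThresholdPolicy(Duration skipCountingWindow) {
            this.skipCountingWindow = skipCountingWindow;
        }

        /** Decide whether the disable-threshold check applies to this deployment. */
        boolean thresholdApplies(DeployReason reason, Instant vmStoppedAt, Instant now) {
            switch (reason) {
                case HA:
                case HOST_MAINTENANCE:
                    return false; // never block HA or maintenance on the threshold
                case USER_START:
                    // user stop-start: start is guaranteed within the window,
                    // the threshold applies only after it expires
                    return now.isAfter(vmStoppedAt.plus(skipCountingWindow));
                default:
                    return true;
            }
        }
    }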
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
The cluster- and zone-wide storage pool allocators returned shared pools even for volumes meant to be on a local storage pool.
If the VM uses a local disk, the cluster and zone storage allocators should not handle it and should return null or an empty list (see the sketch below).
Also fixed the deployment planner to avoid a cluster if
a. the list of suitable pools returned by the storage pool allocators is empty, OR
b. all local or shared pools in the cluster are in the avoid state
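A hedged sketch of the early-return guard, using simplified stand-ins for the real DiskProfile/StoragePool types:

    import java.util.Collections;
    import java.util.List;

    // Simplified stand-ins for the real CloudStack types.
    class DiskProfile { boolean useLocalStorage; }
    class StoragePool { }

    class SharedScopeAllocator {
        /** Shared-scope allocators must not handle local-storage volumes. */
        List<StoragePool> allocate(DiskProfile disk) {
            if (disk.useLocalStorage) {
                return Collections.emptyList(); // leave it to the local storage allocator
            }
            return findSharedPools(disk);
        }

        private List<StoragePool> findSharedPools(DiskProfile disk) {
            return Collections.emptyList(); // actual pool lookup elided
        }
    }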
Changes:
- Implicit creation of the 'ExplicitDedication' affinity group during resource dedication (find-or-create sketched below)
- Only one group per account or per domain will be present
- ListDedicatedResources by affinityGroup
- Deployment should consider only the dedicated resources associated with the group
- Deleting the affinity group should release the dedicated resources
- Releasing the dedicated resources should remove the associated group if no resources remain
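A minimal sketch of the find-or-create behavior, assuming a per-account map in place of the real affinity group DAO:

    import java.util.HashMap;
    import java.util.Map;

    public class ExplicitDedicationGroups {
        static class AffinityGroup {
            final long accountId;
            final String type = "ExplicitDedication";
            AffinityGroup(long accountId) { this.accountId = accountId; }
        }

        private final Map<Long, AffinityGroup> byAccount = new HashMap<>();

        /** Called during resource dedication: reuse the single group or create it. */
        AffinityGroup findOrCreate(long accountId) {
            return byAccount.computeIfAbsent(accountId, AffinityGroup::new);
        }

        /** Called after releasing dedicated resources. */
        void removeGroupIfUnused(long accountId, int remainingDedicatedResources) {
            if (remainingDedicatedResources == 0) {
                byAccount.remove(accountId); // last resource released: drop the group
            }
        }
    }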
VM deployment time increased due to the newly added dedicated-resources feature. During regular VM deployment, all dedicated resources are put in the avoid list so that they are not considered for deployment.
The current way of computing the list of dedicated resources is not optimal, and performance deteriorates in an environment with many pods, clusters, and hosts, because the logic queries the database for each such resource.
The fix is to optimize the logic so that it does not loop through all resources but fetches the list of each resource type in a single query (see the sketch below).
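A hedged sketch of the DAO change, with hypothetical method names:

    import java.util.List;

    // Hypothetical DAO illustrating the optimization.
    interface DedicatedResourceDao {
        // Before (slow): called once per pod/cluster/host while building the avoid list.
        boolean isDedicated(long resourceId, String resourceType);

        // After (fast): all dedicated IDs of one type are fetched in a single query,
        // e.g. SELECT resource_id FROM dedicated_resources WHERE resource_type = ?
        List<Long> listDedicatedIds(String resourceType); // "Pod", "Cluster", "Host"
    }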
Alerts are generated for a VM migration when (see the sketch below):
1) the source host is dedicated and the destination host is not;
2) the source host is not dedicated and the destination host is;
3) both hosts are dedicated to different accounts/domains.
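A compact sketch of the alert condition, where the owner is the account/domain a host is dedicated to (null when the host is not dedicated); names are hypothetical:

    public class DedicationAlerts {
        /** True when a migration crosses a dedication boundary (cases 1-3 above). */
        static boolean shouldAlert(Long srcOwner, Long destOwner) {
            if (srcOwner == null && destOwner == null) {
                return false;               // neither host is dedicated
            }
            if (srcOwner == null || destOwner == null) {
                return true;                // cases 1 and 2: only one side is dedicated
            }
            return !srcOwner.equals(destOwner); // case 3: dedicated to different owners
        }
    }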
Changes:
- Group locking and reservation saving are handled by the DPM (DeploymentPlanningManager)
- Added an admin operation to clean up VM reservations (sketched below)
- The DPM will also clean up VM reservations on startup
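A minimal sketch of the cleanup path shared by the admin operation and DPM startup; the DAO interface is hypothetical:

    import java.util.List;

    public class VmReservationCleaner {
        interface ReservationDao {
            List<Long> listStaleReservations(); // reservations whose VM is no longer deploying
            void remove(long reservationId);
        }

        private final ReservationDao dao;

        VmReservationCleaner(ReservationDao dao) { this.dao = dao; }

        /** Invoked by the admin cleanup operation and by the DPM on startup. */
        void cleanup() {
            for (long id : dao.listStaleReservations()) {
                dao.remove(id);
            }
        }
    }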
Fixed: a zone could be used for deployment without
using an affinity group even if the zone was dedicated to an account. The check to make sure that
explicit resources are not picked up for non-explicit deployments was present only at the domain
level for zones. Added a check at the account level too (see the sketch below).
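A hedged sketch of the guard with the added account-level condition; parameter names are hypothetical:

    public class DedicatedZoneGuard {
        /**
         * True when a dedicated zone must be avoided because the deployment
         * does not use the matching explicit affinity group.
         */
        static boolean avoidZone(Long zoneDomainId, Long zoneAccountId, boolean usesExplicitGroup) {
            if (usesExplicitGroup) {
                return false;
            }
            // Previously only the domain-level dedication was checked;
            // the account-level check is the new addition.
            return zoneDomainId != null || zoneAccountId != null;
        }
    }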
Issues:
In the implicit planner, resource usage is fixed to "Dedicated". It should be Dedicated or Shared depending on the implicit planner's strict/preferred mode and host availability.
Fixed:
The issue is fixed by determining the resource usage as "Dedicated" or "Shared" depending on the implicit planner's strict/preferred mode and the availability of hosts for the planner (sketched below).
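A small sketch of the decision, using hypothetical enums for the mode and usage:

    public class ImplicitPlannerUsage {
        enum Mode { STRICT, PREFERRED }
        enum ResourceUsage { DEDICATED, SHARED }

        static ResourceUsage resourceUsage(Mode mode, boolean implicitlyDedicatedHostAvailable) {
            if (mode == Mode.STRICT) {
                return ResourceUsage.DEDICATED; // strict: only implicitly dedicated hosts
            }
            // preferred: take a dedicated host when one exists, otherwise fall back to shared
            return implicitlyDedicatedHostAvailable ? ResourceUsage.DEDICATED : ResourceUsage.SHARED;
        }
    }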
Patch 2 for https://reviews.apache.org/r/11379/
Created for files:
- server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
- server/test/com/cloud/vm/DeploymentPlanningManagerImplTest.java
- server/test/org/apache/cloudstack/affinity/AffinityApiUnitTest.java
- Changes merged from the planner_reserve branch
- Exposing the deployment planner as an optional parameter while creating a service offering
- Changes to DeploymentPlanningManagerImpl to make sure host reserve/release happens between conflicting planner usages when deploying a VM (sketched below)
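A hedged sketch of the reserve/release rule between conflicting usages; the enum mirrors the Shared/Dedicated usages described above:

    public class HostReservation {
        enum PlannerUsage { SHARED, DEDICATED }

        /** A host reserved for one usage must be released before a conflicting usage may claim it. */
        static boolean canUseHost(PlannerUsage reservedAs, PlannerUsage requested) {
            return reservedAs == null || reservedAs == requested;
        }
    }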
BUG-ID: CLOUDSTACK-2281
Bugfix-for: 4.2
Reviewed-by: Prachi Damle
Signed-off-by: Marcus Sorensen <marcus@betterservers.com>
Adding zone-, cluster-, and account-level configuration parameters.
Parameters scoped to zone/cluster/storage pool/account can be updated via the updateConfiguration API with an additional parameter: zoneid, clusterid, accountid, or storagepoolid.
Whenever these scoped parameters are used in CloudStack, the value is taken from the corresponding details table; if it is not defined there, the value of the global parameter is used (see the sketch below).
The same applies to the listConfiguration API with the additional parameter zoneid, clusterid, accountid, or storagepoolid.
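A minimal sketch of the lookup order, with plain maps standing in for the configuration and details tables:

    import java.util.Map;

    public class ScopedConfig {
        private final Map<String, String> globals;              // global configuration table
        private final Map<String, Map<Long, String>> details;   // key -> (scope id -> value)

        public ScopedConfig(Map<String, String> globals, Map<String, Map<Long, String>> details) {
            this.globals = globals;
            this.details = details;
        }

        /** Value at the given scope (zone/cluster/pool/account id), else the global value. */
        public String get(String key, Long scopeId) {
            Map<Long, String> scoped = details.get(key);
            if (scoped != null && scopeId != null && scoped.containsKey(scopeId)) {
                return scoped.get(scopeId);
            }
            return globals.get(key);
        }
    }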
Changes:
- Regular plugin/adapter components should usually be loaded at run level RUNLEVEL_COMPONENT(5)
- HypervisorVmPlannerSelector was at run level 0, while the ConfigurationServer is at level 2, so the configuration was not yet loaded when HypervisorVmPlannerSelector was initialized (see the sketch below)
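A sketch of the ordering constraint, with illustrative run-level constants matching the commit's numbering:

    // Components start in ascending run-level order, so anything that needs
    // configuration must start after the ConfigurationServer (level 2 here).
    interface ComponentLifecycle {
        int RUN_LEVEL_SYSTEM = 0;      // too early: config is not populated yet
        int RUNLEVEL_COMPONENT = 5;    // regular plugins/adapters belong here
        int getRunLevel();
    }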
Changes:
- DeployPlannerSelector was newly introduced for the baremetal feature and had the planner name hardcoded.
- Changed it to decide the planner by referring to the value of the global config vm.allocation.algorithm (sketched below).
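A sketch of the lookup; the mapping from algorithm values to planner names is illustrative:

    import java.util.Map;

    public class PlannerFromConfig {
        /** Derive the planner from vm.allocation.algorithm instead of hardcoding it. */
        static String selectPlanner(Map<String, String> globalConfig) {
            String algorithm = globalConfig.getOrDefault("vm.allocation.algorithm", "random");
            switch (algorithm) {
                case "userdispersing":
                    return "UserDispersingPlanner";
                case "userconcentratedpod_random":
                case "userconcentratedpod_firstfit":
                    return "UserConcentratedPodPlanner";
                default:
                    return "FirstFitPlanner"; // "random" and "firstfit"
            }
        }
    }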
Supporting kickstart in CloudStack baremetal;
the VM can now be started.
Conflicts:
client/tomcatconf/componentContext.xml.in
server/src/com/cloud/baremetal/BareMetalTemplateAdapter.java
server/src/com/cloud/baremetal/BareMetalVmManagerImpl.java
server/src/com/cloud/vm/UserVmManagerImpl.java
The corresponding getter/setter is renamed too.
The reason is that GenericDao does not update a field unless the accessor method name matches the field name; the setter of this VO was one such case (see the illustration below).
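An illustration of the naming rule the fix relies on; the field is hypothetical:

    // A DAO that resolves columns from accessor names will silently skip a setter
    // whose name does not match the field, so updates drop the value.
    public class ExampleVO {
        private String haPlanner;

        // Matching names: GenericDao-style mapping works.
        public String getHaPlanner() { return haPlanner; }
        public void setHaPlanner(String haPlanner) { this.haPlanner = haPlanner; }

        // A mismatched setter such as setPlanner(String) would not be recognized,
        // and the field would never be updated through the DAO.
    }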