Bug 11964 - You are allowed to migrate a virtual machine onto the host that it already exists on
Changes:
Validations were missing in the MigrateVmCmd API, since from the UI the validations are always done while listing hosts for migration as the first step.
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- The planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to one of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, shuffling the resource lists.
- Now the admin can choose whether the deployment heuristic should be applied starting at the cluster or the pod level. This can be done by using the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, by default, as earlier, the planner starts directly at clusters.
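A minimal sketch, not the actual CloudStack wiring, of how the 'vm.allocation.algorithm' value could map to a planner choice (the Planner interface here is an assumption):

    // Hypothetical sketch: dispatch on the global config value.
    interface Planner { /* plans a VM deployment */ }
    class FirstFitPlanner implements Planner {}            // 'random'/'firstfit'
    class UserDispersingPlanner implements Planner {}      // spread an account's VMs out
    class UserConcentratedPodPlanner implements Planner {} // pack an account's VMs together

    class PlannerSelector {
        static Planner select(String algorithm) {
            if ("userdispersing".equals(algorithm)) {
                return new UserDispersingPlanner();
            } else if ("userconcentratedpod".equals(algorithm)) {
                return new UserConcentratedPodPlanner();
            }
            // 'random' (the default) and 'firstfit' both go through FirstFitPlanner;
            // 'random' shuffles the resource lists inside it, as noted above.
            return new FirstFitPlanner();
        }
    }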
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters when this heuristic was chosen.
- Now this is done by a separate planner, and it is applied only when 'vm.allocation.algorithm' is set to this planner.
- It reorders the capacity-ordered clusters/pods such that pods with more running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.
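A minimal sketch of that reordering, assuming the per-pod running-VM counts for the account are already available (the lookup map is hypothetical):

    import java.util.*;

    class ConcentratePods {
        // Reorder a capacity-ordered pod list so pods with more running VMs for
        // the account come first; the sort is stable, so pods with equal counts
        // keep their capacity order.
        static List<Long> reorder(List<Long> capacityOrderedPodIds,
                                  Map<Long, Long> runningVmsByPod) {
            List<Long> reordered = new ArrayList<>(capacityOrderedPodIds);
            reordered.sort(Comparator.comparingLong(
                    (Long podId) -> runningVmsByPod.getOrDefault(podId, 0L)).reversed());
            return reordered;
        }
    }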
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters that have fewer running VMs for the given account.
- The admin can provide weights for capacity and user dispersion so that both parameters are considered when reordering the pods/clusters (sketched after this list). This can be done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1. Thus, if this planner is chosen, the ordering is by default done only by the number of running VMs, unless the weight is changed.
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of running VMs / Ready volumes, respectively, for the given account. Thus they try to choose the host or pool within a cluster that has the fewest VMs for the account.
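A sketch of the weighted reordering, under the assumption that the two orderings are blended as (1 - w) * capacityRank + w * dispersionRank with w = 'vm.user.dispersion.weight'; the exact formula used by the planner may differ:

    import java.util.*;

    class DisperseClusters {
        static List<Long> reorder(List<Long> capacityOrderedIds,
                                  Map<Long, Long> runningVmsById,
                                  double w) {
            // rank by ascending running-VM count for the account
            List<Long> byVmCount = new ArrayList<>(capacityOrderedIds);
            byVmCount.sort(Comparator.comparingLong(
                    (Long id) -> runningVmsById.getOrDefault(id, 0L)));
            // blend the two ranks; with the default w = 1 only dispersion matters
            Map<Long, Double> score = new HashMap<>();
            for (Long id : capacityOrderedIds) {
                score.put(id, (1 - w) * capacityOrderedIds.indexOf(id)
                            + w * byVmCount.indexOf(id));
            }
            List<Long> reordered = new ArrayList<>(capacityOrderedIds);
            reordered.sort(Comparator.comparingDouble(score::get));
            return reordered;
        }
    }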
* if aclType is Account - only the owner of the network can access it; if it's Domain - all accounts in the domain and the domain's children can have access.
* aclType replaces two old fields: isShared and isDomainSpecific.
* All 2.2.x account-specific networks will have aclType=Account; 2.2.x domain-specific networks - aclType=Domain; 2.2.x zone-level networks - aclType=Domain with domainId = root domain id
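A sketch of the access rule implied by aclType; the enum and method here are illustrative, not the exact CloudStack types:

    enum ACLType { Account, Domain }

    class NetworkAccessCheck {
        static boolean canAccess(ACLType aclType, long ownerAccountId,
                                 long callerAccountId, boolean callerInOwnerDomainTree) {
            if (aclType == ACLType.Account) {
                return callerAccountId == ownerAccountId; // only the owner
            }
            // Domain: any account in the domain or its child domains
            return callerInOwnerDomainTree;
        }
    }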
1) If a snapshot was originally created from a root volume, allow only CreateTemplateCommand from the snapshot.
2) If a snapshot was originally created from a data volume, allow only CreateVolumeCommand from the snapshot.
Reviewed-by: Kishan
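A minimal sketch of the two rules, assuming a ROOT/DATADISK distinction on the snapshot's source volume (names illustrative):

    enum VolumeType { ROOT, DATADISK }

    class SnapshotActionCheck {
        static void validate(VolumeType sourceVolumeType, String command) {
            if (sourceVolumeType == VolumeType.ROOT
                    && !"CreateTemplateCommand".equals(command)) {
                throw new IllegalArgumentException(
                        "Only template creation is allowed from a root-volume snapshot");
            }
            if (sourceVolumeType == VolumeType.DATADISK
                    && !"CreateVolumeCommand".equals(command)) {
                throw new IllegalArgumentException(
                        "Only volume creation is allowed from a data-volume snapshot");
            }
        }
    }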
status 11428: resolved fixed
* can be specified for Shared networks only
* if not specified for a Shared network, try to locate it based on the zoneId and tags. If tags is not null, pick the first physicalNetwork from the zone that has matching tags. If tags is null and there are zero or more than one physical networks in the zone, error out.
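A sketch of that lookup; the inputs stand in for the actual zone/physical-network query:

    import java.util.*;

    class PhysicalNetworkLocator {
        static long locate(List<Long> zonePhysicalNetworkIds,
                           Map<Long, String> tagsByNetworkId, String tags) {
            if (tags != null) {
                // pick the first physical network in the zone with matching tags
                for (Long id : zonePhysicalNetworkIds) {
                    if (tags.equals(tagsByNetworkId.get(id))) {
                        return id;
                    }
                }
                throw new IllegalStateException("no physical network with matching tags");
            }
            if (zonePhysicalNetworkIds.size() != 1) {
                // zero or more than one network and no tags to disambiguate
                throw new IllegalStateException("unable to pick a physical network for the zone");
            }
            return zonePhysicalNetworkIds.get(0);
        }
    }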
Added PortForwardingServiceProvider and StaticNatServiceProvider; renamed
PasswordServiceProvider to UserDataServiceProvider (may rename to a better name
later).
Added related functions for the service providers.
2) Re-apply all existing firewall rules as part of the implement call. TODO: Clean up all existing rules from the backend (leave them in the DB) as part of the shutdown call.
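A sketch of re-applying persisted rules on implement; the store/provider interfaces are placeholders for the real DAO and element types:

    import java.util.List;

    class FirewallReapply {
        interface RuleStore { List<String> listByNetwork(long networkId); }
        interface FirewallProvider { void applyRules(long networkId, List<String> rules); }

        // On implement(), push every rule kept in the DB back to the backend so
        // the device state matches the DB again.
        static void onImplement(long networkId, RuleStore store, FirewallProvider fw) {
            fw.applyRules(networkId, store.listByNetwork(networkId));
            // per the TODO above: shutdown() should remove the rules from the
            // backend but leave them in the DB for a later implement().
        }
    }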
1. Able to restore a VM from its original template
2. Restore is only allowed when the VM is Running or Stopped
3. After restoring, the VM state does not change, e.g. a running VM is still running
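A minimal sketch of the restore constraints; State stands in for the VirtualMachine.State enum:

    enum State { Running, Stopped, Starting, Stopping, Migrating }

    class RestoreVmCheck {
        static void validate(State vmState) {
            // restore is permitted only while the VM is Running or Stopped
            if (vmState != State.Running && vmState != State.Stopped) {
                throw new IllegalStateException("VM must be Running or Stopped to restore");
            }
            // note: the VM keeps whatever state it had once the restore completes
        }
    }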
status 9949: resolved fixed
TODO:
- Still leaving the provider columns in the data_center schema as-is for CloudKit and BareMetal
- ExternalNetworkDeviceMgrImpl still needs to fix the dataCenter.setProviders calls and the externalNetworkAppliance usage checks that determine whether a zone has external networking.
In the past, the NetworkElement would cover almost all the functionality that
e.g. a virtual router can cover: firewall, source NAT, static NAT, password,
VPN... So anyone wanting to implement a NetworkElement would have to implement
all of these services' specific methods, even if it didn't support them. Also, if we wanted
to find e.g. a FirewallServiceProvider, we had to go through all the current
network service providers and call a method on each to learn whether it supports such a service.
That is neither an elegant nor a scalable way to do it.
As the first step, this patch separates each ServiceProvider from NetworkElement
(some interfaces are already outside NetworkElement, so this patch slightly
modifies them too), so that only a class that implements the corresponding interface
has the ability to provide those services.
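A sketch of the separation: each service becomes its own interface, an element implements only what it supports, and finding a provider becomes a type check instead of a capability query on every element. Names are illustrative, not the exact CloudStack signatures:

    interface NetworkElement { String getName(); }
    interface FirewallServiceProvider extends NetworkElement { void applyFirewallRules(); }
    interface StaticNatServiceProvider extends NetworkElement { void applyStaticNat(); }

    // implements only the services it actually supports
    class VirtualRouterElement implements FirewallServiceProvider, StaticNatServiceProvider {
        public String getName() { return "VirtualRouter"; }
        public void applyFirewallRules() { /* program the router */ }
        public void applyStaticNat() { /* program the router */ }
    }

    class ProviderLookup {
        static FirewallServiceProvider firstFirewallProvider(Iterable<NetworkElement> elements) {
            for (NetworkElement e : elements) {
                if (e instanceof FirewallServiceProvider) {
                    return (FirewallServiceProvider) e;
                }
            }
            return null; // no element in the network offers firewalling
        }
    }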
Changes:
- VirtualMachineMgr puts in the constraint that if the root volume is already READY, we provide the clusterId in the plan to the deployment planner. The planner then searches for resources only under that cluster.
- If no deployment could be found, deploying the VM fails.
- Fixed this such that, in case the root volume is recreatable, we call the planner again after removing the cluster constraint (sketched below). The planner will then search for resources in other clusters.
- Works for system VMs (SSVM, console proxy, virtual routers).
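A sketch of the fallback; Plan and the planner call are stand-ins for DataCenterDeployment and the real planner API:

    class DeployWithFallback {
        static class Plan { final Long clusterId; Plan(Long clusterId) { this.clusterId = clusterId; } }
        interface Planner { Object plan(Plan p); } // returns null when nothing fits

        static Object deploy(Planner planner, Long rootVolumeClusterId,
                             boolean rootVolumeRecreatable) {
            // first pass: constrain the search to the cluster of the READY root volume
            Object dest = planner.plan(new Plan(rootVolumeClusterId));
            if (dest == null && rootVolumeRecreatable) {
                // root volume can be recreated elsewhere: retry across all clusters
                dest = planner.plan(new Plan(null));
            }
            return dest; // still null => deployment fails
        }
    }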
Changes:
- Added a new API 'migrateSystemVm', backed by MigrateSystemVMCmd.java, to migrate system VMs (SSVM, console proxy, domain routers (router, LB, DHCP))
- This is an admin-only action
- The existing API 'migratevirtualmachine' is only for user VMs
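Illustrative only - the constraint above amounts to rejecting user VMs (and non-admin callers); the enum and check are hypothetical stand-ins:

    enum VmType { User, DomainRouter, ConsoleProxy, SecondaryStorageVm }

    class MigrateSystemVmCheck {
        static void validate(VmType type, boolean callerIsAdmin) {
            if (!callerIsAdmin) {
                throw new SecurityException("migrateSystemVm is an admin-only API");
            }
            if (type == VmType.User) {
                throw new IllegalArgumentException("use migratevirtualmachine for user VMs");
            }
        }
    }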
1) Introduced new managers - ProjectManager and DomainManager. Moved all domain-related code from AccountManager to DomainManager.
2) Moved some code from ManagementServerImpl to the correct managers.
3) New resource limit for Domain - Project
Reviewed-by: Alex
Changes:
- When the management server starts, it goes through all the pending work items from the op_it_work table and schedules HA work for each (see the sketch below). It used to mark each item as done; instead we should keep the item as pending and let it get marked as Done after the HA work is done.
- Changes in VirtualMachineMgr::advanceStop():
a) if we find a VM with a null hostId, we stop the VM only if the stop is forced.
b) if the VM state transition to Stopping fails: for the states Starting and Migrating we try to find the pending work item and then clean up the VM. In case the state is Stopping, we can clean up directly.
c) We proceed with releasing all resources only if the state transitioned to 'Stopping'.
- Changes in HA:
a) Depend on VirtualMachineMgr::advanceStop() to do the VM cleanup in case the host is not found
- When the VM state between the mgmt server and the agent syncs from Starting -> Running, mark any pending work item as Done.
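A sketch of the start-up behavior described above: pending items stay Pending and flip to Done only when the HA work (or a Starting -> Running sync) completes. The store/HA interfaces are placeholders for op_it_work and the HA manager:

    import java.util.List;

    class PendingWorkRecovery {
        static class WorkItem { long vmId; }
        interface WorkStore { List<WorkItem> listPending(); void markDone(WorkItem w); }
        interface HaManager { void scheduleWork(long vmId); }

        static void onManagementServerStart(WorkStore store, HaManager ha) {
            for (WorkItem item : store.listPending()) {
                ha.scheduleWork(item.vmId); // schedule HA work, do NOT mark Done here
                // the item stays Pending; the HA work itself calls
                // store.markDone(item) once it finishes
            }
        }
    }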
Conflicts:
server/src/com/cloud/vm/VirtualMachineManagerImpl.java
reviewed-by: Alex/Kelven
Changes:
1. UserVmManagerImpl :: finalizeStart()
Added a null check for the cmds.getAnswers() object. Return 'true' if null.
2. VirtualMachineManagerImpl :: advanceStart()
Moved the line that sets the podId on the VM being started above the state transition where the hostId gets set, so that podId is not null in case the management server goes down while the VM starts on the agent. On restart, podId is not updated during full sync, so this prevents podId from remaining null.
vm.setPodId(dest.getPod().getId());
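The null check from change (1), sketched in isolation (Commands here only loosely mirrors the CloudStack type):

    class FinalizeStartSketch {
        interface Commands { Object[] getAnswers(); }

        static boolean finalizeStart(Commands cmds) {
            Object[] answers = cmds.getAnswers();
            if (answers == null) {
                return true; // nothing came back to validate; treat as success
            }
            // ... inspect each answer as before ...
            return true;
        }
    }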
Changes:
To make sure migration does not attempt to pick a host that has more running VMs than the max guest VM limit:
- Changed manual migration to call the host allocators to return a list of hosts suitable for migration (sketched below). The host allocators check for the max guest VM limit.
- Earlier we returned hosts with enough capacity, but now the host allocators make other checks along with capacity. So the hosts returned are those that have enough capacity AND satisfy all other conditions, like host tags, the max guests limit, etc. In other words, the allocators don't return hosts that don't satisfy all the conditions, even if they have capacity.
- Therefore, we now mark the list of hosts returned for manual migration as 'suitable' hosts instead of 'hasenoughCapacity' in the HostResponse.
- HA migration already calls the allocators, so no change is needed there.
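A sketch of the allocator-driven host list for manual migration; the allocator interface is illustrative, and combining multiple allocators' results by union is an assumption here:

    import java.util.*;

    class MigrationHostList {
        interface HostAllocator { List<Long> listSuitableHosts(long vmId, long clusterId); }

        static Set<Long> suitableHosts(long vmId, long clusterId,
                                       List<HostAllocator> allocators) {
            Set<Long> suitable = new LinkedHashSet<>();
            for (HostAllocator allocator : allocators) {
                // each allocator applies capacity AND the other conditions
                // (host tags, max guests limit, ...) together
                suitable.addAll(allocator.listSuitableHosts(vmId, clusterId));
            }
            return suitable; // marked 'suitable' in the HostResponse
        }
    }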