Changes:
- VirtualMachineMgr imposes the constraint that if the root volume is already READY, we provide the clusterId in the plan to the deployment planner. The planner then searches for resources only under that cluster.
- If no deployment destination could be found, deploying the VM failed.
- Fixed this so that, in case the root volume is recreatable, we call the planner again with the cluster constraint removed. The planner will then search for resources in other clusters.
- Works for system VMs (SSVM, console proxy, virtual routers); a sketch of the retry follows this list.
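A minimal sketch of the retry, with hypothetical names (Planner, findHost, and the boolean flag are illustrative stand-ins, not the actual CloudStack planner signatures):

    class DeployRetrySketch {
        // Hypothetical stand-in for the deployment planner interface.
        interface Planner {
            // Returns a host id, or null if no host fits under the given cluster
            // constraint (a null clusterId means "search all clusters").
            Long findHost(long vmId, Long clusterId);
        }

        static Long deploy(Planner planner, long vmId, Long readyRootVolumeClusterId,
                boolean rootVolumeRecreatable) {
            // First pass: honor the cluster of the already-READY root volume.
            Long host = planner.findHost(vmId, readyRootVolumeClusterId);
            if (host == null && rootVolumeRecreatable) {
                // The root volume can be recreated elsewhere, so retry with
                // the cluster constraint removed.
                host = planner.findHost(vmId, null);
            }
            return host; // null still means the deployment fails
        }
    }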
Changes:
- We were ordering clusters based on the capacity of the first-fit host found in each cluster. Due to this, there were cases where we deployed VMs to one cluster instead of balancing across clusters.
- Now we order the list of clusters by aggregate capacity and, in that order, choose the ones that have enough capacity for the required VM.
- This should balance the load between clusters instead of bombarding one; see the sketch after this list.
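A sketch of the new ordering, assuming a hypothetical per-cluster capacity record (the fields and the sort key are illustrative; the real code aggregates host capacity per cluster):

    import java.util.Comparator;
    import java.util.List;

    class ClusterOrderSketch {
        // Hypothetical aggregate free-capacity figures for one cluster.
        record Cluster(long id, long freeCpuMhz, long freeRamMb) {}

        // Keep only clusters that can fit the VM, ordered by aggregate free
        // capacity (descending) rather than by the capacity of the first-fit
        // host found in each cluster.
        static List<Cluster> order(List<Cluster> clusters, long cpuMhz, long ramMb) {
            return clusters.stream()
                    .filter(c -> c.freeCpuMhz() >= cpuMhz && c.freeRamMb() >= ramMb)
                    .sorted(Comparator.comparingLong(Cluster::freeCpuMhz).reversed())
                    .toList();
        }
    }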
e.g.: command=configuresimulator&name=SecurityIngressRulesCmd&zoneid=1&value=enabled:true|timeout=30 means: enable the command SecurityIngressRulesCmd for zone 1, and wait for 30 seconds.
Changes:
- Added a new API 'migrateSystemVm', backed by MigrateSystemVMCmd.java, to migrate system VMs (SSVM, console proxy, domain routers (router, LB, DHCP)); an example invocation follows this list.
- This is an admin-only action.
- The existing API 'migratevirtualmachine' is only for user VMs.
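An example invocation in the same query style as the simulator example above (the parameter names follow the usual CloudStack conventions and are an assumption, as are the placeholder IDs):

    command=migrateSystemVm&virtualmachineid=<system-vm-id>&hostid=<destination-host-id>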
Conflicts:
api/src/com/cloud/api/ResponseGenerator.java
api/src/com/cloud/api/commands/ListHostsCmd.java
api/src/com/cloud/server/ManagementService.java
api/src/com/cloud/vm/UserVmService.java
server/src/com/cloud/api/ApiResponseHelper.java
server/src/com/cloud/server/ManagementServerImpl.java
This bug also happens in XenServer 5.6 FP1, but there it is hidden by bug 11564.
There are two issues here:
1. In XenServer 5.6 FP1/SP2, when a slave host is down (NIC unplugged) and before the master detects that the slave is down (which takes about 10 minutes),
template.createClone sometimes does not work: it complains that it cannot contact the slave host. It looks like XenServer tries to start the VM on that slave host. We use VM.create instead; since we can specify the host in the VM.create API, it will not try to contact the slave.
2. In XenServer 5.6 FP1/SP2, a host tag is introduced into the VDI. However, in the HA case, VM.reset_powerstate and VM.destroy do not remove this host tag, so when CloudStack tries to start the VM on another host, XenServer complains that the VDI is already in use (or mounted as RW). CloudStack will manually remove the host tag from the VDI in FenceCommand; see the sketch after this list.
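A minimal sketch of that cleanup with the XenServer Java SDK (the flow is a reconstruction; the "host_" key prefix and the NULL-ref handling are assumptions about the actual CloudStack code):

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.VBD;
    import com.xensource.xenapi.VDI;
    import com.xensource.xenapi.VM;
    import java.util.Map;

    class FenceTagCleanupSketch {
        // Strip stale host tags from every VDI attached to the fenced VM so
        // another host can attach the disks afterwards.
        static void removeHostTags(Connection conn, VM vm) throws Exception {
            for (VBD vbd : vm.getVBDs(conn)) {
                VDI vdi = vbd.getVDI(conn);
                if (vdi == null) {
                    // e.g. an empty CD drive; real code must also check
                    // XenAPI's NULL opaque ref here
                    continue;
                }
                Map<String, String> smConfig = vdi.getSmConfig(conn);
                for (String key : smConfig.keySet()) {
                    if (key.startsWith("host_")) {
                        vdi.removeFromSmConfig(conn, key);
                    }
                }
            }
        }
    }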
status 11552: resolved fixed
Fixes the issue where the password file is destroyed along with the
restartNetwork command. If the password was in fact not set, the user can use
"ResetPassword" to try again. But this will mostly not happen, because it is
only possible if restartNetwork runs between the user starting the VM and
setting the new password.
Reviewed-by: Keshav
status 11518: resolved fixed