List of changes:
1. Created a separate thread pool for handling cron and ping tasks. The size of the pool is based on direct.agent.pool.size. The existing direct agent pool runs all commands other than cron and ping (see the sketch after this list).
2. For normal tasks (generated as part of user/admin API calls), if the throttle limit is reached, tasks get queued up for subsequent execution once threads are available.
3. For cron and ping tasks (internally generated by the MS, like ping, VM sync etc.), if the throttle limit is reached, they get rejected. Since they are internally generated, they can be rejected without any issues.
(cherry picked from commit 120da605b0)
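A minimal sketch of this two-pool throttling scheme using plain java.util.concurrent; the class and field names below are illustrative stand-ins, not the actual CloudStack direct agent code:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class DirectAgentPools {
        // Stand-in for the direct.agent.pool.size setting.
        static final int POOL_SIZE = 500;

        // Normal tasks (user/admin API calls): an unbounded queue holds
        // tasks past the throttle limit until threads free up.
        final ThreadPoolExecutor normalPool = new ThreadPoolExecutor(
                POOL_SIZE, POOL_SIZE, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());

        // Cron/ping tasks: a SynchronousQueue never holds tasks, so once
        // all threads are busy, new submissions are rejected outright.
        final ThreadPoolExecutor cronPingPool = new ThreadPoolExecutor(
                POOL_SIZE, POOL_SIZE, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());

        void submitCronOrPing(Runnable task) {
            try {
                cronPingPool.execute(task);
            } catch (RejectedExecutionException e) {
                // Internally generated (ping, VM sync, ...); safe to drop,
                // the next scheduled run will try again.
            }
        }
    }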
store requires storage migration, resulting in failure of the VM migration. This also
improves the hostsformigration API. Previously we listed all hosts, found suitable
storage pools for all volumes, and only then checked whether VM migration to a host
required storage migration. The process is now updated: we check only those volumes
which are not on a zone-wide primary store, comparing the volume's pool's cluster
(volume -> poolid -> clusterid) to the host's cluster id. If the volume uses local
storage or the cluster ids differ, we then verify whether the host has suitable
storage pools for the volumes of the VM to be migrated to.
(cherry picked from commit 64153a4371)
Conflicts:
engine/orchestration/src/com/cloud/vm/VirtualMachineManagerImpl.java
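The per-volume test described above amounts to roughly the following sketch; the DAO and helper names (volumeDao, poolDao, hasSuitablePool) are hypothetical stand-ins, not the actual VirtualMachineManagerImpl code:

    // Hedged sketch: does this volume force storage migration when its
    // VM moves to the given host?
    boolean volumeNeedsStorageMigration(Volume vol, Host host) {
        StoragePool pool = poolDao.findById(vol.getPoolId());
        if (pool.getScope() == ScopeType.ZONE) {
            return false; // zone-wide primary store: visible to every host
        }
        // Compare volume -> poolid -> clusterid with the host's cluster id.
        return pool.isLocal() || !host.getClusterId().equals(pool.getClusterId());
    }

    // Only volumes passing this test need a suitable pool on the target
    // host (hasSuitablePool(host, vol)); hosts lacking one are dropped
    // as migration candidates.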
Changes:
The PodId in which the router should get started was not being saved to the DB because the VO's setter method did not follow the setXXX format. So when the planner loaded the router from the DB, it always got podId as null, which allowed the planner to deploy the router in any pod. If the router happened to start in a different pod than the user VM, the VM failed to start since the DHCP service check fails.
Fixed the VO's setPodId method, which was causing the DB save operation to fail.
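For illustration, the pattern involved is roughly the following; the VO below is a hypothetical reduction, and the persistence mechanics are assumed (a reflection-based layer keyed on JavaBean-style accessor names) rather than quoted from the code:

    public class RouterVO {
        private Long podId;

        public Long getPodId() {
            return podId;
        }

        // Before the fix the setter did not follow the setXXX naming
        // convention, so the accessor-driven VO persistence never mapped
        // the field and podId was saved/loaded as null.
        public void setPodId(Long podId) {
            this.podId = podId;
        }
    }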
Added a new flag 'checkBeforeCleanup' to StopCommand, based on which a check is done to see if the VM is running on the HV host.
If the VM is running, it is not stopped and the operation bails out.
Also modified the MS code to call StopCommand with the appropriate value for the flag based on the context.
Currently it is only set to 'true' when called from the new vmsync logic based on the power state of the VM. For the rest it
is set to 'false', meaning no change in behaviour.
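A hedged sketch of the flag's flow; the real StopCommand and hypervisor resource classes carry more context than shown here, and isVmRunningOnHost/stopVm are assumed helpers:

    public class StopCommand {
        private final String vmName;
        private final boolean checkBeforeCleanup;

        public StopCommand(String vmName, boolean checkBeforeCleanup) {
            this.vmName = vmName;
            this.checkBeforeCleanup = checkBeforeCleanup;
        }

        public String getVmName() { return vmName; }
        public boolean isCheckBeforeCleanup() { return checkBeforeCleanup; }
    }

    // On the resource side, the check happens before any cleanup:
    //
    //     if (cmd.isCheckBeforeCleanup() && isVmRunningOnHost(cmd.getVmName())) {
    //         return failure(cmd, "VM is still running on host; bailing out");
    //     }
    //     return stopVm(cmd);  // normal stop path, unchanged when flag is false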
While a template is downloading, template_store_ref can be left with an entry
that is not in the ready state; when creating a VM from that template, the code
checks neither the zone id nor the template_store_ref state.
Conflicts:
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
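A minimal sketch of the missing validation, assuming hypothetical DAO, type, and state names (templateStoreDao, TemplateStoreRef, State.Ready):

    void checkTemplateReadyInZone(long templateId, long zoneId) {
        // Look up the template_store_ref row for this template/zone and
        // refuse deployment unless the entry exists and is Ready; this
        // rejects leftovers from a still-running or failed download.
        TemplateStoreRef ref = templateStoreDao.findByTemplateZone(templateId, zoneId);
        if (ref == null || ref.getState() != State.Ready) {
            throw new CloudRuntimeException("Template " + templateId
                    + " is not ready in zone " + zoneId);
        }
    }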
And when the flag is updated on the resource, generate usage events again accordingly.
Also, when the display flag is false in the deployvm cmd, it should be false for the volumes associated with the VM as well.
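Sketch of the described propagation, with hypothetical DAO and event-helper names (userVmDao, volumeDao, generateUsageEvent):

    void updateDisplayFlag(UserVmVO vm, boolean displayVm) {
        if (vm.isDisplayVm() != displayVm) {
            vm.setDisplayVm(displayVm);
            userVmDao.update(vm.getId(), vm);
            // Re-emit usage events so usage records track the new flag.
            generateUsageEvent(vm, displayVm);
        }
        // deployvm with displayvm=false must hide the VM's volumes too.
        for (VolumeVO vol : volumeDao.findByInstance(vm.getId())) {
            vol.setDisplayVolume(displayVm);
            volumeDao.update(vol.getId(), vol);
            generateUsageEvent(vol, displayVm);
        }
    }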
CLOUDSTACK-4762 : Enabling VGPU support for XenServer.
This feature enables GPU-passthrough and vGPU functionality. With its
help, admins/users will be able to leverage the power of the GPU
graphics unit by deploying a virtual machine with GPU or vGPU support,
or by changing the service offering of an existing VM at any later
point in time. These GPU/vGPU-enabled VMs are able to run graphical
applications.
For now, this feature is only supported with the XenServer hypervisor
but can be extended to add support for other hypervisors.