Applying the short-term fix of force cleaning up if the answer received from StartCommand is not valid
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Changes:
Expected behavior:
The API should return the list of suitable/unsuitable hosts.
Added a fix that creates a deep copy of the variable allHosts, preventing a faulty host list from being returned; a sketch of the pattern follows.
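A minimal sketch of the defensive-copy pattern, using illustrative names (HostFilter, requiredTag) rather than CloudStack's actual code:

    import java.util.ArrayList;
    import java.util.List;

    class HostFilter {
        // Before the fix, filtering allHosts in place mutated the caller's
        // shared list, so later responses saw a truncated host list.
        static List<String> suitableHosts(List<String> allHosts, String requiredTag) {
            List<String> copy = new ArrayList<>(allHosts); // defensive copy
            copy.removeIf(h -> !h.contains(requiredTag));  // filter the copy only
            return copy;                                   // allHosts is untouched
        }
    }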
(cherry picked from commit 6354604eed)
Signed-off-by: animesh <animesh@apache.org>
Change:
- Also add a check in the migrateSystemVM API to verify that the source and destination host are in the same cluster (sketched below)
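A minimal sketch of such a check, with Host as an illustrative stand-in type:

    class MigrationChecks {
        record Host(long id, long clusterId) {}

        // Reject cross-cluster migrations for system VMs up front.
        static void validateSameCluster(Host src, Host dest) {
            if (src.clusterId() != dest.clusterId()) {
                throw new IllegalArgumentException(
                        "Source and destination hosts must be in the same cluster");
            }
        }
    }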
(cherry picked from commit b24e9a6dd5)
Signed-off-by: animesh <animesh@apache.org>
Changes:
- listHosts returns hosts within the same cluster for migration of system and router VMs
(cherry picked from commit 52f4683099)
Signed-off-by: animesh <animesh@apache.org>
Unable to find any snapshot OVA/OVF when we have multiple secondary
storage stores for a zone.
(cherry picked from commit 4ba68e3b3f)
Signed-off-by: animesh <animesh@apache.org>
Deployment fails with the error "Message: Invalid configuration for
device '2'."
Ensure that the direct network guru assigns a MAC address for the NIC
that it designs.
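A sketch of the idea under assumed names (DirectGuruDesign, MacAllocator); the point is only that the design step must never leave the MAC unset:

    class DirectGuruDesign {
        interface MacAllocator { String nextMac(long networkId); }

        // VMware rejects a NIC without a MAC ("Invalid configuration for
        // device"), so assign one whenever the profile lacks it.
        static String ensureMac(String existingMac, long networkId, MacAllocator allocator) {
            return existingMac != null ? existingMac : allocator.nextMac(networkId);
        }
    }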
(cherry picked from commit 47fa6d9561)
Signed-off-by: animesh <animesh@apache.org>
Improve the error message shown when no SSH key pair is assigned to the VM while fetching the encrypted password, and add a check for password-enabled templates
Signed-off-by: Koushik Das <koushik@apache.org>
(cherry picked from commit 37d500d2a6)
Signed-off-by: animesh <animesh@apache.org>
VMware - Unable to fetch userdata from guest VMs using http://<router-address>/latest/user-data
(cherry picked from commit 697cc2e397)
Signed-off-by: animesh <animesh@apache.org>
During host connect, multiple listeners get invoked; one of them is the download listener.
As part of the processConnect() method, it checks whether templates need to be downloaded to the
secondary store for a particular HV type. As part of that check it computes the list of HVs
present in the zone. The earlier logic was to query all hosts (excluding the current one) and
iterate over them to build the list. This is not optimal and is bound to add latency as the
number of hosts increases.
Optimized the logic by querying the list of HVs directly from the DB instead of iterating over
all hosts in the zone.
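A minimal sketch of the single-query approach using plain JDBC; the table and column names here are assumptions based on the description above, not verified schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    class ZoneHypervisors {
        // One DISTINCT query replaces iterating every host in the zone.
        static List<String> distinctHypervisorTypes(Connection conn, long zoneId)
                throws SQLException {
            String sql = "SELECT DISTINCT hypervisor_type FROM host "
                       + "WHERE data_center_id = ? AND removed IS NULL";
            List<String> types = new ArrayList<>();
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, zoneId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        types.add(rs.getString(1));
                    }
                }
            }
            return types;
        }
    }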
Fix VM migration across clusters when the source and destination host do not share storage
pools. The migrationRequired flag introduced in a recent commit was always set to false for
XenServer, which caused the destination host to be flagged as not requiring storage motion.
Fixed the scope of the boolean and defaulted it to true; other checks validate whether storage
motion is required for XenServer.
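A simplified sketch of the corrected default, with assumed names: the flag starts as true and is only cleared when a check proves motion is unnecessary:

    import java.util.Set;

    class StorageMotionCheck {
        // Default to true: assume storage motion is needed. A later check may
        // clear it, e.g. when the destination can already reach every pool.
        static boolean isStorageMotionRequired(Set<Long> volumePoolIds,
                                               Set<Long> poolsReachableFromDest) {
            boolean migrationRequired = true;
            if (poolsReachableFromDest.containsAll(volumePoolIds)) {
                migrationRequired = false;
            }
            return migrationRequired;
        }
    }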
The locking code in the network implement/shutdown path was not efficient. The lock was acquired even just to check the current state of the network, which is not required. This caused delays in deploy VM, as can be seen from the attached logs where the code waited on the lock merely to check whether the network was implemented.
As part of the fix, the code that checks whether the network is already implemented or shut down was moved outside the lock.
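A sketch of the resulting pattern, with a plain ReentrantLock standing in for CloudStack's network lock: read the state first, and take the lock only when work is actually needed:

    import java.util.concurrent.locks.ReentrantLock;

    class NetworkImplementor {
        private final ReentrantLock networkLock = new ReentrantLock();
        private volatile boolean implemented;

        void implementIfNeeded(Runnable implementWork) {
            if (implemented) {
                return; // cheap state check, no lock taken just to read
            }
            networkLock.lock();
            try {
                if (!implemented) { // re-check under the lock
                    implementWork.run();
                    implemented = true;
                }
            } finally {
                networkLock.unlock();
            }
        }
    }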
(cherry picked from commit 5528ba4b20)
Signed-off-by: animesh <animesh@apache.org>
Ensure that on network implement/restart/shutdown an IP assoc is sent so
that the source NAT IP is associated with the source NAT service provider.
(cherry picked from commit a0f23d0f94)
Signed-off-by: animesh <animesh@apache.org>
For some scenarios, like prepare NIC, all network service providers are checked, which is not efficient and also introduces unnecessary dependencies.
The check to use only the required providers already exists for the implement and shutdown operations on a network. Added the same check for all the missing cases; a sketch of the filtering follows.
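A sketch of the provider filtering; the interfaces here are illustrative, not CloudStack's actual API:

    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    class ProviderSelection {
        interface Provider {
            boolean supportsService(String service);
        }

        // Keep only providers that back a service this network actually uses,
        // instead of invoking every registered provider.
        static List<Provider> requiredProviders(List<Provider> all,
                                                Set<String> networkServices) {
            return all.stream()
                      .filter(p -> networkServices.stream().anyMatch(p::supportsService))
                      .collect(Collectors.toList());
        }
    }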
Updated the fix to cover one more scenario, where the user directly calls the migrateVirtualMachineWithVolume API.
If currentPool is accessible to the destination host, skip calling the allocators and move on to the next volume to process.
This means that if the user calls the migrateVirtualMachineWithVolume API where all volumes of the VM are accessible on the specified target host,
the API fails, as there is no storage migration involved. Instead, the user should call the migrateVirtualMachine API.
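A sketch of the per-volume decision, with all names assumed for illustration:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Function;

    class VolumePlacement {
        record Volume(long id, long currentPoolId) {}

        // For each volume: keep it in place if its current pool is accessible
        // on the destination host; otherwise fall back to the allocators.
        static Map<Long, Long> chooseTargetPools(List<Volume> volumes,
                                                 Set<Long> poolsAccessibleOnDest,
                                                 Function<Volume, Long> allocator) {
            Map<Long, Long> target = new HashMap<>();
            for (Volume vol : volumes) {
                if (poolsAccessibleOnDest.contains(vol.currentPoolId())) {
                    target.put(vol.id(), vol.currentPoolId()); // skip allocators
                } else {
                    target.put(vol.id(), allocator.apply(vol));
                }
            }
            return target;
        }
    }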
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
listHostsForMigrationOfVM is called when a user attempts to move a VM to another host. It tries to find the list of suitable storage pools attached to each of the suitable hosts for the VM.
Currently the selection of target storage pools for each volume of the VM is left to the storage pool allocators.
But a user might want to leave a volume unmoved/intact if it is on a zone-wide storage pool.
This is more efficient while migrating the VM, as storage live migration is not required and the VM continues to use volumes on the same storage pool as before.
Hence the idea is to set the same storage pool as the target pool for each volume that is already on a zone-wide storage pool.
A comparison of each volume's source pool against its target pool yields whether storage migration is required for the VM to move to the target host.
Based on that information, the appropriate API, migrateVirtualMachine or migrateVirtualMachineWithVolume, can be chosen.
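A sketch of that comparison, with illustrative names: zone-wide volumes keep their pool, and storage migration is required only if some volume's target differs from its source:

    import java.util.List;
    import java.util.Map;

    class MigrationPlanner {
        record Volume(long id, long poolId, boolean onZoneWidePool) {}

        static boolean storageMigrationRequired(List<Volume> volumes,
                                                Map<Long, Long> allocatorChoice) {
            boolean required = false;
            for (Volume v : volumes) {
                long target = v.onZoneWidePool() ? v.poolId()            // stay put
                                                 : allocatorChoice.get(v.id());
                if (target != v.poolId()) {
                    required = true; // use migrateVirtualMachineWithVolume
                }
            }
            return required; // false => plain migrateVirtualMachine suffices
        }
    }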
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
If you shut down the agent while VMs are running, the management
server assumes that the VMs are continuing to run. You can then
delete the host while it is in the 'disconnected' state, and those VMs
will be unusable, stuck forever in the Running state. They can't
change state because the host no longer exists. This patch checks for
any VMs that may have been tied to the removed host and resets their
state so that CloudStack can continue to manage them.
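A sketch of the cleanup as a direct SQL update; the vm_instance table and column names are assumptions drawn from the description, not a verified schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class HostRemovalCleanup {
        // Detach VMs from the removed host and mark them Stopped so the
        // management server can manage them again.
        static int releaseVmsOfRemovedHost(Connection conn, long hostId)
                throws SQLException {
            String sql = "UPDATE vm_instance SET state = 'Stopped', host_id = NULL "
                       + "WHERE host_id = ? AND state = 'Running' AND removed IS NULL";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setLong(1, hostId);
                return ps.executeUpdate(); // number of VMs recovered
            }
        }
    }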