- variables inlined
- CPU utilization is no longer cast from double to float
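A minimal sketch of the kind of change described, with hypothetical names rather than the actual CloudStack code:

    // Illustrative only: keep CPU utilization as a double instead of narrowing
    // it to float; the class and field names here are made up.
    class CpuStatsSketch {
        private final double cpuUtilization;

        CpuStatsSketch(double cpuUtilization) {
            this.cpuUtilization = cpuUtilization;
        }

        double getCpuUtilization() {
            // Previously something like "(float) cpuUtilization" dropped
            // precision for no benefit; the cast is simply removed.
            return cpuUtilization;
        }
    }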
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Signed-off-by: Min Chen <min.chen@citrix.com>
Double values do not need to be boxed into a Double object before being cast to long.
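A small illustration of the point (illustrative code, not the patched class):

    class CastSketch {
        // A primitive double can be cast to long directly; wrapping it in a
        // Double first (e.g. Double.valueOf(d).longValue()) is unnecessary.
        static long toLong(double d) {
            return (long) d;
        }
    }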
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Signed-off-by: Min Chen <min.chen@citrix.com>
Applying the short-term fix of forcing a cleanup if the answer received from the StartCommand is not valid.
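A rough sketch of the short-term fix, assuming a hypothetical Answer type and cleanup hook (not the actual CloudStack code):

    class StartAnswerSketch {
        interface Answer { boolean isSuccess(); }
        interface Cleaner { void forceCleanup(String vmName); }

        void handleStartAnswer(Answer answer, String vmName, Cleaner cleaner) {
            // Short-term fix: if the answer to the start command is missing or
            // not successful, force a cleanup instead of leaving the VM in an
            // inconsistent state.
            if (answer == null || !answer.isSuccess()) {
                cleaner.forceCleanup(vmName);
            }
        }
    }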
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
The issue happens when more than one thread processes connect for a host simultaneously. VM full sync is not designed to work in this scenario, and as a result user VMs may get stopped incorrectly.
The direct agent scan task runs at regular intervals (direct.agent.scan.interval, defaulting to 90 seconds) and identifies hosts that need to be processed for connect. In a normal scenario hosts get connected within that interval and there are no issues. But if for some reason the connect process takes longer and is not completed by the time the next agent scan runs, the same hosts may get picked up again based on the DB state, and more than one thread ends up processing connect for the same host.
The fix is to check whether a thread is already processing connect for a host; in that case all subsequent threads for that host simply bail out. There may also be a scenario where one thread has already completed processing connect but another thread was scheduled before that and would repeat the same work; appropriate checks prevent this as well.
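A minimal sketch of such a guard, keyed by host id and using a concurrent set; names and structure are hypothetical, not the actual implementation:

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class HostConnectGuardSketch {
        // Hosts for which connect is currently being processed.
        private final Set<Long> connectsInProgress = ConcurrentHashMap.newKeySet();

        void processConnect(long hostId, Runnable connectTask) {
            // add() returns false if another thread already registered this
            // host, so later threads simply bail out. A check against the
            // host's persisted state would additionally catch the case where
            // connect already completed before this task got scheduled.
            if (!connectsInProgress.add(hostId)) {
                return;
            }
            try {
                connectTask.run();
            } finally {
                connectsInProgress.remove(hostId);
            }
        }
    }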
This reverts commit 5a8a2a259e.
We will fix it in another way, since the management server may get the state updated in time.
Conflicts:
server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
Expected behavior:
The API should return the list of suitable/unsuitable hosts.
Changes:
Added a fix that creates a deep copy of the allHosts variable, preventing a faulty host list from being returned.
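A minimal sketch of the copy, with hypothetical names; copying the list itself is shown here, while the actual change may copy more deeply:

    import java.util.ArrayList;
    import java.util.List;

    class HostListCopySketch {
        // Work on a copy so that filtering suitable/unsuitable hosts does not
        // mutate the shared allHosts list and corrupt the returned result.
        static <T> List<T> copyOf(List<T> allHosts) {
            return new ArrayList<>(allHosts);
        }
    }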
There are three issues in the resource_count table:
(1) Expunging a VM: the public_ip count decreases and becomes -1 in a basic zone.
(2) Recovering a VM: the volume count increases.
(3) Restoring a VM: the volume count decreases.
Deployment fails with error "Message: Invalid configuration for device '2'."
Ensuring that the direct network guru assigns a MAC address to the NIC that it designs.
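A hedged sketch of the idea, with illustrative types only (not the actual guru code):

    class DirectNetworkGuruSketch {
        interface Nic { String getMacAddress(); void setMacAddress(String mac); }
        interface MacAllocator { String allocateMac(long networkId); }

        // When the guru designs a NIC, make sure it ends up with a MAC address;
        // without one the hypervisor rejects the device configuration.
        void designNic(Nic nic, long networkId, MacAllocator allocator) {
            if (nic.getMacAddress() == null) {
                nic.setMacAddress(allocator.allocateMac(networkId));
            }
        }
    }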
Improving the error message that says no SSH key pair is assigned to the VM for retrieving the encrypted password, and adding a check for a password-enabled template.
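An illustrative sketch of the added check, with hypothetical names:

    class EncryptedPasswordCheckSketch {
        interface Vm { String getSshKeyPairName(); }
        interface Template { boolean isPasswordEnabled(); }

        // Returns an error message, or null if the encrypted password can be
        // returned for this VM. Names are made up for illustration.
        String validate(Vm vm, Template template) {
            if (!template.isPasswordEnabled()) {
                return "The template of the VM is not password enabled";
            }
            if (vm.getSshKeyPairName() == null) {
                return "No SSH key pair is assigned to the VM; a key pair is "
                        + "required to return the encrypted password";
            }
            return null;
        }
    }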
Signed-off-by: Koushik Das <koushik@apache.org>
During host connect multiple listeners get invoked, one of them being the download listener. As part of its processConnect() method, it checks whether templates need to be downloaded to secondary storage for a particular hypervisor type. As part of that check it computes the list of hypervisor types present in the zone. The earlier logic was to query all hosts (excluding the current one) and iterate over them to build the list. This is not optimal and is bound to add latency as the number of hosts increases.
Optimized the logic by querying the list of hypervisor types from the DB directly instead of iterating over all hosts in the zone.
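A hedged sketch of the optimization, assuming a DAO method that returns the distinct hypervisor types directly (names are illustrative):

    import java.util.List;

    class HypervisorTypeLookupSketch {
        interface Host { String getHypervisorType(); }
        interface HostDao {
            // Old approach: load every host in the zone and iterate over them.
            List<Host> listByZone(long zoneId);
            // New approach: let the database return the distinct hypervisor
            // types in the zone (conceptually SELECT DISTINCT hypervisor_type).
            List<String> listDistinctHypervisorTypes(long zoneId);
        }

        List<String> hypervisorsInZone(HostDao hostDao, long zoneId) {
            return hostDao.listDistinctHypervisorTypes(zoneId);
        }
    }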
When a virtual machine is migrated across clusters and the source and destination hosts do not share storage pools, storage motion is required. The migrationRequired flag introduced in a recent commit was always set to false for XenServer, which caused the destination host to be flagged as not requiring storage motion. Fixed the scope of the boolean and defaulted it to true; other checks validate whether storage motion is required for XenServer.
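A minimal sketch of the flag handling described, shape only (hypothetical types, not the XenServer code path):

    class StorageMotionFlagSketch {
        interface Volume { long getPoolId(); }
        interface Host { boolean canAccessPool(long poolId); }

        boolean isStorageMotionRequired(Iterable<Volume> volumes, Host destination) {
            // Declared once at method scope and defaulted to true: assume the
            // destination needs storage motion unless every volume turns out to
            // be accessible from it. Further hypervisor-specific checks apply.
            boolean migrationRequired = true;
            boolean allAccessible = true;
            for (Volume volume : volumes) {
                if (!destination.canAccessPool(volume.getPoolId())) {
                    allAccessible = false;
                    break;
                }
            }
            if (allAccessible) {
                migrationRequired = false;
            }
            return migrationRequired;
        }
    }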
The locking code in the implement/shutdown network path was not efficient. The lock was acquired even just to check the current state of the network, which is not required. This resulted in delays in VM deployment, as can be seen from the attached logs where the code waited on the lock just to check whether the network is implemented.
As part of the fix, the code that checks whether the network is already implemented or shut down was moved outside the lock.
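A minimal sketch of moving the state check outside the lock (illustrative names, not the actual manager code):

    import java.util.concurrent.locks.ReentrantLock;

    class NetworkImplementSketch {
        enum State { ALLOCATED, IMPLEMENTING, IMPLEMENTED, SHUTDOWN }
        interface Network { State getState(); }

        private final ReentrantLock lock = new ReentrantLock();

        void implementNetwork(Network network, Runnable doImplement) {
            // Check the current state without taking the lock; most callers
            // find the network already implemented and return immediately.
            if (network.getState() == State.IMPLEMENTED) {
                return;
            }
            lock.lock();
            try {
                // Re-check under the lock in case another thread implemented
                // the network between the check above and acquiring the lock.
                if (network.getState() != State.IMPLEMENTED) {
                    doImplement.run();
                }
            } finally {
                lock.unlock();
            }
        }
    }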
For some scenarios, like prepare NIC, all network service providers are checked, which is not efficient and also introduces unnecessary dependencies.
The check to use only the required providers already exists for the implement and shutdown operations on a network. Added the same check for all missing cases.
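A hedged sketch of restricting which providers get invoked (hypothetical types):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    class ProviderFilterSketch {
        interface Provider { String getName(); }

        // Instead of consulting every registered network service provider, keep
        // only the providers the network's offering actually uses.
        static List<Provider> providersToInvoke(List<Provider> allProviders,
                Set<String> providersInOffering) {
            List<Provider> result = new ArrayList<>();
            for (Provider provider : allProviders) {
                if (providersInOffering.contains(provider.getName())) {
                    result.add(provider);
                }
            }
            return result;
        }
    }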
Updating the fix to cover one more scenario, when the user directly calls the migrateVirtualMachineWithVolume API.
If the current pool is accessible to the destination host, skip calling the allocators and move on to the next volume to process. This means that if the user calls the migrateVirtualMachineWithVolume API and all volumes of the VM are accessible on the specified target host, the API fails because no storage migration is involved; the user should call the migrateVirtualMachine API instead.
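A rough sketch of the described behaviour (hypothetical types, not the actual API code):

    import java.util.List;

    class MigrateWithVolumeSketch {
        interface Volume { long getPoolId(); }
        interface Host { boolean canAccessPool(long poolId); }

        void checkStorageMigrationNeeded(List<Volume> volumes, Host destination) {
            boolean storageMigrationNeeded = false;
            for (Volume volume : volumes) {
                if (destination.canAccessPool(volume.getPoolId())) {
                    // The current pool is reachable from the destination host:
                    // skip the allocators and move on to the next volume.
                    continue;
                }
                storageMigrationNeeded = true;
            }
            if (!storageMigrationNeeded) {
                throw new IllegalArgumentException(
                        "All volumes are accessible on the target host; use the "
                        + "migrateVirtualMachine API instead of "
                        + "migrateVirtualMachineWithVolume");
            }
        }
    }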
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
listHostsForMigrationOfVM is called when the user attempts to move a VM to another host. It tries to find the list of suitable storage pools attached to each of the suitable hosts for the VM.
Currently the selection of target storage pools for each volume of the VM is left to the storage pool allocators.
But the user might want to leave a volume unmoved/intact if it is on a zone-wide storage pool. This is more efficient while migrating the VM, as storage live migration is not required and the VM continues to use volumes on the same storage pool as before.
Hence the idea is to set the same storage pool as the target pool for each volume that is already on a zone-wide storage pool.
Comparing the source pool of each volume against its target pool tells whether storage migration is required for the VM to move to the target host. Based on that information, the appropriate API, migrateVM or migrateVmWithVolume, can be chosen.
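A hedged sketch of the target pool selection and the migration decision (hypothetical types, not the actual allocator code):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class TargetPoolSelectionSketch {
        interface StoragePool { long getId(); boolean isZoneWide(); }
        interface Volume { long getId(); StoragePool getPool(); }
        interface Allocator { StoragePool allocateFor(Volume volume); }

        // Keep a volume on its current pool when that pool is zone wide;
        // otherwise ask the allocators for a target pool.
        Map<Long, StoragePool> chooseTargetPools(List<Volume> volumes, Allocator allocator) {
            Map<Long, StoragePool> targetByVolume = new HashMap<>();
            for (Volume volume : volumes) {
                StoragePool current = volume.getPool();
                targetByVolume.put(volume.getId(),
                        current.isZoneWide() ? current : allocator.allocateFor(volume));
            }
            return targetByVolume;
        }

        // If any volume ends up on a different pool, storage migration is
        // required and migrateVmWithVolume applies; otherwise plain migrateVM
        // is enough.
        boolean storageMigrationRequired(List<Volume> volumes, Map<Long, StoragePool> targetByVolume) {
            for (Volume volume : volumes) {
                if (targetByVolume.get(volume.getId()).getId() != volume.getPool().getId()) {
                    return true;
                }
            }
            return false;
        }
    }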
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>