This happens when concurrent operations run on the same VM. Earlier this
was not seen because all VM operations were synchronized at the agent
layer. Setting the execute.in.sequence global config to false removed
that restriction. In the latest code, operations on a single VM are
synchronized by maintaining a job queue. In some scenarios the destroy
VM operation was not going through this job queue mechanism, resulting
in failures due to simultaneous operations.
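A minimal sketch of the per-VM job queue idea (class and method names
here are hypothetical, not the actual CloudStack job queue code):
operations targeting the same VM id are serialized through a
single-threaded executor, while operations on different VMs still run
concurrently.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Hypothetical illustration: serialize operations per VM id so that,
    // e.g., a destroy cannot interleave with a migrate on the same VM.
    public class VmJobQueue {
        private final Map<Long, ExecutorService> queues = new ConcurrentHashMap<>();

        public Future<?> submit(long vmId, Runnable vmOperation) {
            // One single-threaded executor per VM id; jobs on the same VM
            // run strictly in submission order, different VMs in parallel.
            ExecutorService queue = queues.computeIfAbsent(
                    vmId, id -> Executors.newSingleThreadExecutor());
            return queue.submit(vmOperation);
        }
    }

Under such a scheme, routing destroy through the same queue as every
other VM operation is what removes the race described above.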
Instead of injecting a VolumeOrchestrationService object into
VmwareResource, we now populate the command object (MigrateVolumeCommand
here) with the required information. Thus the resource does not need the
volume orchestration service to query that information.
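A rough sketch of the pattern (the extra fields shown are assumptions
for illustration, not the exact MigrateVolumeCommand members): the
sender fills in everything up front, so the resource can act on the
command alone.

    // Hypothetical simplified command: populated fully by the sender so
    // the resource never calls back into VolumeOrchestrationService.
    public class MigrateVolumeCommand {
        private final long volumeId;
        private final String volumePath;
        private final String targetPool; // assumed field for illustration
        private final String chainInfo;  // assumed field for illustration

        public MigrateVolumeCommand(long volumeId, String volumePath,
                                    String targetPool, String chainInfo) {
            this.volumeId = volumeId;
            this.volumePath = volumePath;
            this.targetPool = targetPool;
            this.chainInfo = chainInfo;
        }

        public long getVolumeId() { return volumeId; }
        public String getVolumePath() { return volumePath; }
        public String getTargetPool() { return targetPool; }
        public String getChainInfo() { return chainInfo; }
    }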
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
vmware.reserve.cpu/mem are set to true. Instead of setting the
overcommit values to one on upgrade, we populate them from the global
values.
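A hedged sketch of the upgrade step (the SQL and the detail keys
cpuOvercommitRatio/memoryOvercommitRatio are assumptions for
illustration): read the global overprovisioning factors and seed each
cluster's details with them instead of hard-coding one.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical upgrade step: seed per-cluster overcommit ratios from
    // the global settings rather than resetting them to one.
    public class OvercommitUpgradeStep {
        void populateOvercommitRatios(Connection conn) throws SQLException {
            String cpu = readGlobalConfig(conn, "cpu.overprovisioning.factor");
            String mem = readGlobalConfig(conn, "mem.overprovisioning.factor");
            insertClusterDetail(conn, "cpuOvercommitRatio", cpu);
            insertClusterDetail(conn, "memoryOvercommitRatio", mem);
        }

        private void insertClusterDetail(Connection conn, String name,
                String value) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO cluster_details (cluster_id, name, value) "
                            + "SELECT id, ?, ? FROM cluster")) {
                ps.setString(1, name);
                ps.setString(2, value);
                ps.executeUpdate();
            }
        }

        private String readGlobalConfig(Connection conn, String name)
                throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT value FROM configuration WHERE name = ?")) {
                ps.setString(1, name);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : "1";
                }
            }
        }
    }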
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Link local IP addresses do not get released, eventually resulting in all
the link local IP addresses being consumed.
The fix ensures that NICs with reservation strategy 'Start' go through
the release phase of the NIC life cycle, so that release is performed
before the NIC is removed, avoiding resource leaks.
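A minimal sketch of the intended ordering (type and method names here
are hypothetical, not the actual network orchestration code): release
runs before removal so the guru can return the link local IP to the
pool.

    // Hypothetical illustration of the fixed NIC teardown ordering:
    // release first (returning the link local IP), then remove.
    enum ReservationStrategy { Create, Start, Managed }

    interface Nic { ReservationStrategy getReservationStrategy(); }
    interface NetworkGuru { void release(Nic nic); }

    class NicLifecycle {
        void destroyNic(Nic nic, NetworkGuru guru) {
            if (nic.getReservationStrategy() == ReservationStrategy.Start) {
                // Previously skipped in some paths, leaking the IP.
                guru.release(nic);
            }
            remove(nic);
        }

        void remove(Nic nic) { /* delete the NIC record */ }
    }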
brought back up after being down for a few hours, snapshot jobs do not
get triggered, with the reason "there is other active snapshot tasks on
the instance to which the volume is attached".
service and not used for LB.
The fix adds a boolean flag to the addNetscalerLoadBalancer API, which
marks the added NetScaler for exclusive GSLB service. A NetScaler marked
as an exclusive GSLB service provider is not picked as any guest
network's LB provider.
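A sketch of the selection logic under the new flag (the class and field
names are assumptions for illustration): devices marked GSLB-exclusive
are filtered out when picking an LB device for a guest network.

    import java.util.List;
    import java.util.Optional;

    // Hypothetical illustration: a NetScaler added with the exclusive
    // GSLB flag is never chosen as a guest network's LB device.
    class NetscalerPicker {
        static class Device {
            final String name;
            final boolean exclusiveGslbProvider; // new boolean flag
            Device(String name, boolean exclusiveGslbProvider) {
                this.name = name;
                this.exclusiveGslbProvider = exclusiveGslbProvider;
            }
        }

        Optional<Device> pickLbDevice(List<Device> devices) {
            return devices.stream()
                    .filter(d -> !d.exclusiveGslbProvider)
                    .findFirst();
        }
    }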
Conflicts:
engine/schema/src/com/cloud/network/dao/ExternalLoadBalancerDeviceVO.java
plugins/network-elements/f5/src/com/cloud/network/element/F5ExternalLoadBalancerElement.java
plugins/network-elements/netscaler/src/com/cloud/api/commands/AddNetscalerLoadBalancerCmd.java
plugins/network-elements/netscaler/src/com/cloud/api/response/NetscalerLoadBalancerResponse.java
plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManager.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManagerImpl.java
setup/db/db/schema-421to430.sql
1. Egress default policy rules are sent to the firewall provider; it is
   up to the provider to configure the rules (see the sketch after this
   list).
2. The default policy rules are sent for both allow and deny default
   policies.
3. On network shutdown, the rules are sent marked for delete.
4. VR and SRX deny traffic by default, so no explicit default rule to
   deny traffic is required.
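A hedged sketch of that flow (the provider interface and method names
are illustrative, not the actual CloudStack firewall element API):

    import java.util.List;

    // Hypothetical illustration of the default egress policy flow.
    class EgressPolicyHandler {
        interface FirewallProvider {
            void applyEgressRules(long networkId, List<String> rules,
                                  boolean defaultPolicyAllow);
            void revokeEgressRules(long networkId, List<String> rules);
        }

        // The default-policy rule is sent for both allow and deny
        // defaults; the provider decides how to program it. VR and SRX
        // already deny by default, so they can skip an explicit deny.
        void applyDefaultPolicy(FirewallProvider provider, long networkId,
                boolean defaultPolicyAllow, List<String> rules) {
            provider.applyEgressRules(networkId, rules, defaultPolicyAllow);
        }

        // On network shutdown the rules are sent again, marked for delete.
        void onNetworkShutdown(FirewallProvider provider, long networkId,
                List<String> rules) {
            provider.revokeEgressRules(networkId, rules);
        }
    }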
Scaling up VMs was not considering the parameter
cluster.(memory/cpu).allocated.capacity.disablethreshold. Fixed it.
Also added overprovisioning factor retrieval at the cluster level for
the host capacity check.
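A hedged sketch of the corrected check (the helper and its parameters
are assumptions for illustration): before scaling up, the allocated
capacity plus the requested delta is compared against the disable
threshold, using the cluster-level overprovisioning factor.

    // Hypothetical illustration of the fixed scale-up capacity check.
    class ScaleUpCapacityChecker {
        /**
         * @param allocated capacity already allocated in the cluster
         * @param total     raw total capacity of the cluster's hosts
         * @param delta     extra capacity requested by the scale-up
         * @param factor    cluster-level overprovisioning factor
         *                  (falling back to the global value when unset)
         * @param disableThreshold value of
         *        cluster.(memory/cpu).allocated.capacity.disablethreshold
         */
        boolean allowScaleUp(long allocated, long total, long delta,
                             double factor, double disableThreshold) {
            double usable = total * factor;
            // Reject if the scale-up would cross the disable threshold.
            return (allocated + delta) <= usable * disableThreshold;
        }
    }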