For VMware VMs, during root volume preparation, if we are switching to a new volume, force expunge the root disk that was created from the old template.
Otherwise the storage garbage collector will later try to expunge the old disk marked for expunge and fail with a 'Cannot delete file' exception,
since in VMware the new root vmdk has the same name and is now in use.
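A minimal, self-contained sketch of that ordering, assuming a hypothetical volume-switch helper (RootVolumeSwitchSketch and its methods are illustrative, not the actual CloudStack code):

    // Toy model of the flow above: the old root vmdk must be expunged
    // immediately because the new root disk reuses the same file name,
    // so a deferred GC pass would fail with "Cannot delete file".
    class RootVolumeSwitchSketch {
        static String switchRootVolume(String oldRootVmdk) {
            forceExpunge(oldRootVmdk);   // delete now, not via the storage GC later
            return oldRootVmdk;          // new root disk is created under the same name
        }

        static void forceExpunge(String vmdkName) {
            System.out.println("expunging " + vmdkName + " before the name is reused");
        }

        public static void main(String[] args) {
            System.out.println("new root: " + switchRootVolume("ROOT-42.vmdk"));
        }
    }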
Introduces a force option in delete network to forcefully delete a
network. This comes in handy in rare cases where a network fails to implement
and is left in the Shutdown state, but the network shutdown that rolls back
the implement process fails as well.
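A rough sketch of the forced-delete path, assuming an invented ForcedNetworkDeleteSketch type (the real API and manager classes differ):

    class ForcedNetworkDeleteSketch {
        enum State { IMPLEMENTED, SHUTDOWN, ALLOCATED }

        static boolean deleteNetwork(State state, boolean forced) {
            boolean shutdownOk = shutdown(state);
            if (!shutdownOk && !forced) {
                // Normal path: a failed shutdown/rollback aborts the delete and
                // the network stays stuck in Shutdown state.
                return false;
            }
            // Forced path: proceed with cleanup even though shutdown failed.
            return cleanup();
        }

        static boolean shutdown(State state) {
            // A network already stuck in Shutdown after a failed implement
            // may fail to shut down again.
            return state != State.SHUTDOWN;
        }

        static boolean cleanup() { return true; }

        public static void main(String[] args) {
            System.out.println(deleteNetwork(State.SHUTDOWN, false)); // false: stuck
            System.out.println(deleteNetwork(State.SHUTDOWN, true));  // true: forced through
        }
    }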
Create two storage pools, one with storage tag X, one with storage tag Y.
Create a service offering with storage tag X.
Create a disk offering with storage tag Y.
Attempt to deploy a virtual machine with a data disk using the given offerings; it fails.
The deployment planner keeps a global 'avoid' object. It loops through each volume to
be created, asking the storage allocators for matching pools and passing this avoid object.
The first disk matches a pool (or pools), the allocator adds ALL other pools to the avoid
object, then the deployment planner attaches the matching pools to a list for that disk.
The second disk matches a pool, the allocator adds all other pools to the avoid object, and
then the deployment planner says "wait, the matching pool is in avoid, can't use it". Oops.
In fact, at this point ALL pools are in avoid (unless there are other pools that have both tags).
We need to remove the matching pool from the avoid set during each select phase.
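A self-contained sketch of that allocation loop and the fix; plain strings and sets stand in for the allocators and the avoid object, so the names here are illustrative only:

    import java.util.*;

    class AvoidSetSketch {
        // Allocator pass for one disk: pools whose tag matches are returned,
        // every other pool is added to the shared avoid set.
        static List<String> selectPools(List<String> pools, String tag, Set<String> avoid) {
            List<String> matching = new ArrayList<>();
            for (String pool : pools) {
                if (pool.endsWith(":" + tag)) {
                    matching.add(pool);
                } else {
                    avoid.add(pool);
                }
            }
            // The fix: a pool that matches this disk may have been put into the
            // global avoid set while allocating an earlier disk; remove it again
            // during this disk's select phase.
            avoid.removeAll(matching);
            return matching;
        }

        // Planner step: rejects any selected pool that sits in the avoid set.
        static List<String> plan(List<String> selected, Set<String> avoid) {
            List<String> usable = new ArrayList<>(selected);
            usable.removeAll(avoid);
            return usable;
        }

        public static void main(String[] args) {
            List<String> pools = List.of("pool1:X", "pool2:Y");
            Set<String> avoid = new HashSet<>();                             // one global object
            System.out.println(plan(selectPools(pools, "X", avoid), avoid)); // [pool1:X]
            System.out.println(plan(selectPools(pools, "Y", avoid), avoid)); // [pool2:Y]; [] without the fix
        }
    }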
This happens when concurrent operations run on the same VM. Earlier this was not seen, as all VM operations were synchronized at the agent layer; with the execute.in.sequence
global config set to false, that restriction is no longer there. In the latest code, operations on a single VM are synchronized by maintaining a job queue. In some scenarios the destroy VM operation
was not going through this job queue mechanism and so resulted in failures due to simultaneous operations.
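A sketch of the per-VM job queue idea, assuming invented class names (the real code uses the async job framework, not a bare executor):

    import java.util.Map;
    import java.util.concurrent.*;

    class VmJobQueueSketch {
        private final Map<Long, ExecutorService> queues = new ConcurrentHashMap<>();

        // Every VM gets exactly one single-threaded queue; submitting through it
        // serializes start/stop/destroy for that VM.
        Future<?> submit(long vmId, Runnable job) {
            ExecutorService q = queues.computeIfAbsent(vmId,
                    id -> Executors.newSingleThreadExecutor());
            return q.submit(job);
        }

        public static void main(String[] args) throws Exception {
            VmJobQueueSketch jobs = new VmJobQueueSketch();
            jobs.submit(1L, () -> System.out.println("stop VM 1"));
            // The bug was destroy bypassing this queue; routed through it,
            // destroy waits for the stop above to finish.
            jobs.submit(1L, () -> System.out.println("destroy VM 1")).get();
            jobs.queues.values().forEach(ExecutorService::shutdown);
        }
    }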
Instead of injecting a VolumeOrchestrationService object into VmwareResource, we now populate the command object (MigrateVolumeCommand here) with the required information. Thus we don't need the volume orchestration service in the resource to query that information.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
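A sketch of the pattern, with made-up field names standing in for whatever MigrateVolumeCommand actually carries: the caller pre-fetches the data, so the resource never calls back into an orchestration service.

    class MigrateVolumeCommandSketch {
        static class MigrateVolumeCommand {
            final long volumeId;
            final String volumePath;
            final String targetPool;
            final String chainInfo; // pre-fetched by the management server

            MigrateVolumeCommand(long volumeId, String volumePath, String targetPool, String chainInfo) {
                this.volumeId = volumeId;
                this.volumePath = volumePath;
                this.targetPool = targetPool;
                this.chainInfo = chainInfo;
            }
        }

        // The resource side only reads what the command carries.
        static void execute(MigrateVolumeCommand cmd) {
            System.out.println("migrating volume " + cmd.volumeId + " (" + cmd.volumePath
                    + ") to " + cmd.targetPool + " using " + cmd.chainInfo);
        }

        public static void main(String[] args) {
            execute(new MigrateVolumeCommand(7L, "[ds1] i-2-7/ROOT-7.vmdk", "ds2", "chain-info"));
        }
    }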
vmware.reserve.cpu/mem are set to true. Instead of setting
the overcommit values to one on upgrade, we populate them
from the global values.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
local IP addresses do not get released, resulting in all the link-local
IP addresses being consumed eventually.
The fix ensures that Nics with reservation strategy 'Start' go through the
release phase in the Nic life cycle, so that release is performed before the
Nic is removed, to avoid resource leaks.
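A toy illustration of that lifecycle rule, with invented types; the reserved set stands in for the link-local IP pool:

    import java.util.HashSet;
    import java.util.Set;

    class NicReleaseSketch {
        enum ReservationStrategy { Start, Create }

        static final Set<String> reservedLinkLocalIps = new HashSet<>();

        static void reserve(String ip) { reservedLinkLocalIps.add(ip); }

        static void removeNic(String ip, ReservationStrategy strategy) {
            if (strategy == ReservationStrategy.Start) {
                // The fix: release before removal so the address returns to the pool.
                reservedLinkLocalIps.remove(ip);
            }
            // ... then delete the Nic itself
        }

        public static void main(String[] args) {
            reserve("169.254.0.10");
            removeNic("169.254.0.10", ReservationStrategy.Start);
            System.out.println("leaked addresses: " + reservedLinkLocalIps); // []
        }
    }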
brought back up after being down for a few hours, snapshot jobs do not get
triggered, with the reason "there is other active snapshot tasks on the
instance to which the volume is attached".
service and not used for LB
The fix adds a boolean flag to the addNetscalerLoadBalancer API, which
marks the added NetScaler for exclusive GSLB service. A NetScaler marked
as an exclusive GSLB service provider is not picked as any guest network's
LB provider.
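A sketch of the selection rule, using an invented record instead of the real ExternalLoadBalancerDeviceVO schema:

    import java.util.List;
    import java.util.Optional;

    class GslbProviderSketch {
        record NetscalerDevice(String name, boolean exclusiveGslbProvider) {}

        static Optional<NetscalerDevice> pickLbDevice(List<NetscalerDevice> devices) {
            return devices.stream()
                    .filter(d -> !d.exclusiveGslbProvider()) // GSLB-only devices are never picked for guest LB
                    .findFirst();
        }

        public static void main(String[] args) {
            List<NetscalerDevice> devices = List.of(
                    new NetscalerDevice("ns-gslb", true),   // added with the new flag set
                    new NetscalerDevice("ns-lb", false));
            System.out.println(pickLbDevice(devices).map(NetscalerDevice::name).orElse("none")); // ns-lb
        }
    }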
Conflicts:
engine/schema/src/com/cloud/network/dao/ExternalLoadBalancerDeviceVO.java
plugins/network-elements/f5/src/com/cloud/network/element/F5ExternalLoadBalancerElement.java
plugins/network-elements/netscaler/src/com/cloud/api/commands/AddNetscalerLoadBalancerCmd.java
plugins/network-elements/netscaler/src/com/cloud/api/response/NetscalerLoadBalancerResponse.java
plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManager.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManagerImpl.java
setup/db/db/schema-421to430.sql
1. The egress default policy rules are sent to the firewall provider. It is up to the
provider to configure the rules.
2. The default policy rules are sent for both the allow and deny default policies.
3. On network shutdown, the rules are sent for deletion.
4. For VR and SRX, traffic is denied by default, so no default rule to deny traffic is required.
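A sketch of points 1-4, with illustrative names only; the interesting part is the skip for providers that already deny egress by default:

    class EgressDefaultPolicySketch {
        enum Policy { ALLOW, DENY }

        static void applyDefaultEgress(String provider, Policy policy, boolean revoke) {
            boolean deniesByDefault = provider.equals("VR") || provider.equals("SRX");
            if (policy == Policy.DENY && deniesByDefault) {
                return; // nothing to program or revoke: the device drops egress traffic anyway
            }
            System.out.println((revoke ? "delete " : "add ")
                    + "default egress " + policy + " rule on " + provider);
        }

        public static void main(String[] args) {
            applyDefaultEgress("SRX", Policy.DENY, false);  // no-op (point 4)
            applyDefaultEgress("SRX", Policy.ALLOW, false); // add on implement (points 1-2)
            applyDefaultEgress("SRX", Policy.ALLOW, true);  // delete on network shutdown (point 3)
        }
    }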
Scaling up VMs was not considering the parameter cluster.(memory/cpu).allocated.capacity.disablethreshold. Fixed it.
Also added overprovisioning factor retrieval at the cluster level for the host capacity check.
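A sketch of the corrected capacity check, with made-up numbers; the cluster-level overprovisioning factor scales the raw capacity before the disable threshold is applied:

    class ScaleUpThresholdSketch {
        static boolean allowsScaleUp(long allocated, long requested, long total,
                                     double overprovisioningFactor, double disableThreshold) {
            double effectiveTotal = total * overprovisioningFactor;
            return (allocated + requested) <= effectiveTotal * disableThreshold;
        }

        public static void main(String[] args) {
            // 0.85 disable threshold, 2x CPU overprovisioning on this cluster
            System.out.println(allowsScaleUp(14000, 2000, 10000, 2.0, 0.85)); // true
            System.out.println(allowsScaleUp(16500, 2000, 10000, 2.0, 0.85)); // false, scale-up rejected
        }
    }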
Changes:
- During VM migration, while finding a new host within the cluster, we need to set the storage pool Id in the deployment plan too.
- This indicates to the planner that the volumes are ready and there is no need to find a new pool.
- This in turn prevents the threshold check done during pool allocation. That step is not needed since no pools need to be newly allocated.
- Thus the migration won't fail because the threshold check fails.
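A sketch of that planner shortcut, with simplified stand-ins for the real plan and planner types:

    class MigrationPlanSketch {
        record DeploymentPlan(long clusterId, Long poolId) {}

        static String planVolumes(DeploymentPlan plan) {
            if (plan.poolId() != null) {
                // Migration within the cluster: volumes stay where they are, so no
                // new allocation runs and no capacity-threshold check can fail here.
                return "reuse pool " + plan.poolId();
            }
            return "run storage allocators (threshold check applies)";
        }

        public static void main(String[] args) {
            System.out.println(planVolumes(new DeploymentPlan(1L, 42L)));
            System.out.println(planVolumes(new DeploymentPlan(1L, null)));
        }
    }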
Changes:
- Added a 'virtualmachineid' parameter to the createVolume API to specify a VM for the volume. The VM should be in the 'Running' or 'Stopped' state.
- This parameter is used only when the createVolume API is called with the snapshotid parameter.
- When this parameter is set, the volume is created from the snapshot in the pod/cluster of the VM, and the volume is then attached to the VM in the same request.
- If the attach fails but the create has succeeded, the API errors out but the created volume remains available; the user may attach the same volume later.
- When a VM is provided but no storage pool is available in the VM's pod/cluster, the volume is not created and the API fails.
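A sketch of that flow; the helper names and the Vm record are invented for illustration, only the parameter semantics come from the list above:

    import java.util.Optional;

    class CreateVolumeFromSnapshotSketch {
        record Vm(long id, String state, long clusterId) {}

        static String createVolume(long snapshotId, Optional<Vm> vm) {
            if (vm.isEmpty()) {
                return "volume-from-snapshot-" + snapshotId; // old behaviour, no VM given
            }
            Vm target = vm.get();
            if (!target.state().equals("Running") && !target.state().equals("Stopped")) {
                return "error: VM must be Running or Stopped";
            }
            if (!poolAvailableInCluster(target.clusterId())) {
                return "error: no storage pool in the VM's pod/cluster, volume not created";
            }
            String volume = "volume-from-snapshot-" + snapshotId; // created near the VM
            if (!attach(volume, target)) {
                return "error: attach failed, but " + volume + " remains available";
            }
            return volume + " attached to VM " + target.id();
        }

        static boolean poolAvailableInCluster(long clusterId) { return true; }
        static boolean attach(String volume, Vm vm) { return true; }

        public static void main(String[] args) {
            System.out.println(createVolume(5L, Optional.of(new Vm(9L, "Stopped", 1L))));
        }
    }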
The volume create usage event and resource count weren't getting registered. Check the VM's type rather than whether it is a UserVm, since the code is coming from VirtualMachineManager.
listAlerts: introduced a new parameter "name" in the alertResponse
Added a new Admin API, generateAlert; available to ROOT admin only
listAlerts: implemented search by alert name
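A sketch of the two additions, with an in-memory list standing in for the alerts table; only the parameter names mirror the description above, the rest is illustrative:

    import java.util.ArrayList;
    import java.util.List;

    class AlertApiSketch {
        record Alert(String name, String description) {}

        static final List<Alert> alerts = new ArrayList<>();

        // generateAlert (ROOT admin only in the real API)
        static void generateAlert(String name, String description) {
            alerts.add(new Alert(name, description));
        }

        // listAlerts with the new "name" parameter
        static List<Alert> listAlerts(String name) {
            return alerts.stream().filter(a -> a.name().equals(name)).toList();
        }

        public static void main(String[] args) {
            generateAlert("ALERT.USAGE", "usage job failed");
            System.out.println(listAlerts("ALERT.USAGE"));
        }
    }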
Also add a new table: snapshotdetails
Conflicts:
plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java