service and not used for LB
The fix adds a boolean flag to the addNetscalerLoadBalancer API that marks the added NetScaler
for exclusive GSLB service. A NetScaler marked as an exclusive GSLB
service provider is never picked as any guest network's
LB provider (a sketch of this check follows the conflict list below).
Conflicts:
engine/schema/src/com/cloud/network/dao/ExternalLoadBalancerDeviceVO.java
plugins/network-elements/f5/src/com/cloud/network/element/F5ExternalLoadBalancerElement.java
plugins/network-elements/netscaler/src/com/cloud/api/commands/AddNetscalerLoadBalancerCmd.java
plugins/network-elements/netscaler/src/com/cloud/api/response/NetscalerLoadBalancerResponse.java
plugins/network-elements/netscaler/src/com/cloud/network/element/NetscalerElement.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManager.java
server/src/com/cloud/network/ExternalLoadBalancerDeviceManagerImpl.java
setup/db/db/schema-421to430.sql
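A minimal sketch of the selection behavior described above, assuming a boolean "exclusive GSLB provider" flag on the device record; the class, field, and method names below are simplified stand-ins for illustration, not the actual CloudStack code:

import java.util.List;

public class LbDeviceSelectorSketch {

    // Minimal stand-in for the load balancer device record, carrying the
    // new "exclusive GSLB provider" flag added by this fix.
    static class LbDevice {
        final String name;
        final boolean exclusiveGslbProvider;
        LbDevice(String name, boolean exclusiveGslbProvider) {
            this.name = name;
            this.exclusiveGslbProvider = exclusiveGslbProvider;
        }
    }

    // Pick an LB device for a guest network; devices dedicated to GSLB
    // service are never considered.
    static LbDevice pickLbDeviceForGuestNetwork(List<LbDevice> devices) {
        for (LbDevice d : devices) {
            if (d.exclusiveGslbProvider) {
                continue; // reserved for GSLB only, skip for guest network LB
            }
            return d;
        }
        return null; // no eligible device found
    }

    public static void main(String[] args) {
        List<LbDevice> devices = List.of(
            new LbDevice("ns-gslb", true),
            new LbDevice("ns-lb", false));
        System.out.println(pickLbDeviceForGuestNetwork(devices).name); // prints ns-lb
    }
}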
1. The egress default policy rules are sent to the firewall provider. It is up to the
provider to configure the rules.
2. The default policy rules are sent for both the allow and the deny default policy.
3. On network shutdown, the rules are sent for deletion.
4. The VR and SRX deny traffic by default, so no default rule to deny traffic is required for them.
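A rough sketch of the rule flow described above, with illustrative names rather than CloudStack's own classes:

import java.util.ArrayList;
import java.util.List;

public class DefaultEgressPolicySketch {

    enum Provider { VIRTUAL_ROUTER, SRX, OTHER }

    static class EgressRule {
        final boolean allow;
        final boolean revoke; // true when sent during network shutdown
        EgressRule(boolean allow, boolean revoke) {
            this.allow = allow;
            this.revoke = revoke;
        }
    }

    // Build the default egress policy rules to push to the provider.
    static List<EgressRule> defaultEgressRules(Provider provider,
                                               boolean defaultPolicyAllow,
                                               boolean networkShutdown) {
        List<EgressRule> rules = new ArrayList<>();
        // The VR and SRX already deny traffic by default, so no explicit
        // default-deny rule needs to be sent to them.
        boolean providerDeniesByDefault =
            provider == Provider.VIRTUAL_ROUTER || provider == Provider.SRX;
        if (!defaultPolicyAllow && providerDeniesByDefault) {
            return rules;
        }
        // Otherwise send the default policy rule (allow or deny); on network
        // shutdown the same rule is sent marked for deletion.
        rules.add(new EgressRule(defaultPolicyAllow, networkShutdown));
        return rules;
    }
}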
Scaling up VMs was not considering the cluster.(memory/cpu).allocated.capacity.disablethreshold parameter. Fixed it.
Also added over-provisioning factor retrieval at the cluster level for the host capacity check.
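A sketch of the intended capacity check, assuming the threshold value and the cluster-level over-provisioning factor have already been looked up; the names here are illustrative, not the actual CapacityManager code:

public class ScaleUpCapacityCheckSketch {

    // Returns true when scaling the VM up would push the cluster's allocated
    // capacity past cluster.(memory/cpu).allocated.capacity.disablethreshold.
    static boolean exceedsDisableThreshold(long allocated,
                                           long requestedIncrement,
                                           long totalCapacity,
                                           float clusterOverprovisioningFactor,
                                           float disableThreshold) {
        // The over-provisioning factor is now resolved at the cluster level
        // before the host capacity check.
        double effectiveTotal = totalCapacity * clusterOverprovisioningFactor;
        double usedRatio = (allocated + requestedIncrement) / effectiveTotal;
        return usedRatio > disableThreshold;
    }

    public static void main(String[] args) {
        // Example: 28 GiB allocated, asking for 4 GiB more on a 32 GiB
        // cluster with 1.5x over-provisioning and a 0.85 threshold.
        System.out.println(exceedsDisableThreshold(28, 4, 32, 1.5f, 0.85f)); // false
    }
}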
Changes:
- During VM migration, while finding a new host within the cluster, we need to set the storage pool id in the deployment plan as well.
- This indicates to the planner that the volumes are ready and there is no need to find a new pool.
- This in turn skips the threshold check done during pool allocation; that step is not needed since no pools need to be newly allocated.
- Thus the migration won't fail because the threshold check fails.
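A sketch of the idea, using a simplified deployment plan rather than CloudStack's DataCenterDeployment: when the plan already carries a pool id, the planner skips pool allocation and therefore the threshold check.

public class MigrationPlanSketch {

    static class DeploymentPlan {
        final long clusterId;
        final Long poolId; // non-null means the volumes are already placed

        DeploymentPlan(long clusterId, Long poolId) {
            this.clusterId = clusterId;
            this.poolId = poolId;
        }

        // The planner only allocates (and threshold-checks) pools when the
        // plan does not already carry a pool id.
        boolean needsPoolAllocation() {
            return poolId == null;
        }
    }

    public static void main(String[] args) {
        // Migration within the cluster: reuse the volume's current pool, so
        // no new pool allocation and no threshold check.
        DeploymentPlan migrationPlan = new DeploymentPlan(5L, 42L);
        System.out.println(migrationPlan.needsPoolAllocation()); // false
    }
}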
Changes:
- Added a 'virtualmachineid' parameter to the createVolume API to specify a VM for the volume. The VM must be in the 'Running' or 'Stopped' state.
- This parameter is used only when the createVolume API is called with the snapshotid parameter.
- When this parameter is set, the volume is created from the snapshot in the pod/cluster of the VM, and the volume is then attached to the VM in the same request.
- If attaching the volume fails but creation has succeeded, the API errors out but the created volume remains available; the user may attach the same volume later.
- When a VM is provided but no storage pool is available in the VM's pod/cluster, the volume is not created and the API fails.
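A sketch of the control flow described in the list above; the helper methods (storagePoolAvailableInPod, createFromSnapshotInPod, attach) are hypothetical stand-ins for the storage and attach subsystems, not real CloudStack calls:

public class CreateVolumeWithVmSketch {

    static class Volume { final String name; Volume(String name) { this.name = name; } }

    static class Vm {
        final String state;   // must be "Running" or "Stopped"
        final long podId;     // the volume is created in this pod/cluster
        Vm(String state, long podId) { this.state = state; this.podId = podId; }
    }

    // Create the volume from the snapshot in the VM's pod, then attach it.
    static Volume createVolumeFromSnapshot(long snapshotId, Vm vm) {
        if (!vm.state.equals("Running") && !vm.state.equals("Stopped")) {
            throw new IllegalStateException("VM must be Running or Stopped");
        }
        if (!storagePoolAvailableInPod(vm.podId)) {
            // No pool in the VM's pod/cluster: fail without creating anything.
            throw new IllegalStateException("No suitable storage pool in the VM's pod");
        }
        Volume volume = createFromSnapshotInPod(snapshotId, vm.podId);
        try {
            attach(volume, vm);
        } catch (RuntimeException e) {
            // Attach failed: the API call errors out, but the created volume
            // stays available so the user can attach it later.
            throw new IllegalStateException("Volume created but attach failed", e);
        }
        return volume;
    }

    // Hypothetical helpers standing in for the real subsystems.
    static boolean storagePoolAvailableInPod(long podId) { return true; }
    static Volume createFromSnapshotInPod(long snapshotId, long podId) { return new Volume("vol-from-snap-" + snapshotId); }
    static void attach(Volume v, Vm vm) { /* no-op in this sketch */ }

    public static void main(String[] args) {
        Vm vm = new Vm("Stopped", 7L);
        System.out.println(createVolumeFromSnapshot(101L, vm).name); // vol-from-snap-101
    }
}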
The volume create usage event and resource count weren't getting registered. Check the VM's type rather than whether it is a UserVm, since the code is coming from VirtualMachineManager.
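A sketch of the type-based check, with simplified stand-in classes rather than CloudStack's VirtualMachine and UserVm types:

public class VolumeUsageEventSketch {

    enum VmType { User, DomainRouter, ConsoleProxy, SecondaryStorageVm }

    static class VirtualMachine {
        final VmType type;
        VirtualMachine(VmType type) { this.type = type; }
        VmType getType() { return type; }
    }

    // VirtualMachineManager hands over the base VirtualMachine, so an
    // "instanceof UserVm" check never matches; check the declared type instead.
    static boolean shouldPublishVolumeUsage(VirtualMachine vm) {
        return vm.getType() == VmType.User;
    }

    public static void main(String[] args) {
        System.out.println(shouldPublishVolumeUsage(new VirtualMachine(VmType.User)));         // true
        System.out.println(shouldPublishVolumeUsage(new VirtualMachine(VmType.DomainRouter))); // false
    }
}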
listAlerts: introduced a new parameter, "name", in the alertResponse
Added a new Admin API, generateAlert; available to the ROOT admin only
listAlerts: implemented search by alert name
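A sketch of the name-based filtering, using simplified stand-ins for the alert records and response; the alert names used in the example are made up for illustration:

import java.util.List;
import java.util.stream.Collectors;

public class ListAlertsByNameSketch {

    static class Alert {
        final String name;        // now exposed in the alert response
        final String description;
        Alert(String name, String description) { this.name = name; this.description = description; }
    }

    // listAlerts: when "name" is supplied, return only matching alerts.
    static List<Alert> listAlerts(List<Alert> all, String name) {
        if (name == null) {
            return all;
        }
        return all.stream()
                  .filter(a -> a.name.equals(name))
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Alert> alerts = List.of(
            new Alert("ALERT.MANAGEMENT", "management server alert"),
            new Alert("ALERT.USAGE", "usage server alert"));
        System.out.println(listAlerts(alerts, "ALERT.USAGE").size()); // 1
    }
}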
Also added a new table: snapshotdetails
Conflicts:
plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java
This is the output of the program; the developer documentation may be a better place for it.
-----
When developing against the CloudStack Orchestration Platform, you must follow the following rules:
API Rule: Always be prepared to handle RuntimeExceptions.
When writing APIs, you must follow these rules:
API Rule: You may think you're the greatest developer in the world, but every change to the API must be reviewed and approved.
API Rule: Every API must have unit tests written against it. And not its unit tests
API Rule:
-----
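For the one concrete rule above (handling RuntimeExceptions), a minimal sketch of defensive handling around an orchestration call; the Orchestrator interface and its startVm method are hypothetical, not a real CloudStack API:

public class RuntimeExceptionHandlingSketch {

    interface Orchestrator {
        void startVm(long vmId); // may throw unchecked exceptions at any point
    }

    static boolean tryStartVm(Orchestrator orchestrator, long vmId) {
        try {
            orchestrator.startVm(vmId);
            return true;
        } catch (RuntimeException e) {
            // The orchestration platform reports many failures as unchecked
            // exceptions, so callers must not assume the happy path.
            System.err.println("startVm failed for VM " + vmId + ": " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        Orchestrator alwaysFails = vmId -> { throw new IllegalStateException("insufficient capacity"); };
        System.out.println(tryStartVm(alwaysFails, 100L)); // false
    }
}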
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>