This happens when concurrent operations run on the same VM. Earlier this was not seen because all VM operations were synchronized at the agent layer; with the execute.in.sequence
global config set to false that restriction no longer applies. In the latest code, operations on a single VM are synchronized by maintaining a job queue. In some scenarios the destroy VM operation
was not going through this job queue mechanism, resulting in failures due to simultaneous operations.
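A minimal sketch of the per-VM job queue idea described above, using a plain executor per VM id (names are illustrative, not CloudStack's actual job framework):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative per-VM job queue: every operation on a VM, including destroy,
    // is submitted to the same single-threaded executor, so jobs on one VM never overlap.
    public class VmJobQueue {
        private final Map<Long, ExecutorService> queues = new ConcurrentHashMap<>();

        public void submit(long vmId, Runnable vmOperation) {
            queues.computeIfAbsent(vmId, id -> Executors.newSingleThreadExecutor())
                  .submit(vmOperation);
        }
    }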
Instead of injecting a VolumeOrchestrationService object into VmwareResource, we now populate the command object (MigrateVolumeCommand here) with the required information. Thus we don't need the volume orchestration service to query that information from the resource.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Adding the missing file
During HA and maintenance, call different planners (if the original planners are not able to find capacity) which skip some heuristics
Removing resource leaks from UsageSanityChecker and
refactoring it (encapsulation, removal of copy and paste, constants...)
Modularize the static method for closing Statements in TransactionLegacy
and reuse this new method from other classes (Upgrade2214to30)
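A small sketch of the kind of reusable close helper described here (the actual method in TransactionLegacy may differ in name and signature):

    import java.sql.SQLException;
    import java.sql.Statement;

    // Sketch of a "close quietly" helper so callers don't repeat try/catch blocks.
    public final class DbCloseUtil {
        public static void closeStatement(Statement stmt) {
            if (stmt == null) {
                return;
            }
            try {
                stmt.close();
            } catch (SQLException e) {
                // log and continue; a failure while closing should not mask the original error
            }
        }
    }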
Create Unit and Integration Tests for UsageSanityChecker
Add DBUnit cases and an integration profile for integration tests as
a base for future DB tests
Changes:
- The vmprofile owner passed in to the planner should be the VM's account and not the caller
- Do not do the access check for Root Admin
Conflicts:
server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
Link local IP addresses do not get released, resulting in all the link local
IP addresses being consumed eventually.
Fix ensures NICs with reservation strategy 'Start' go through the
release phase in the NIC life cycle, so that release is performed before
the NIC is removed, avoiding resource leaks.
brought back up after being down for a few hours, snapshot jobs do not get
triggered, with the reason "there is other active snapshot tasks on the
instance to which the volume is attached".
When DB encryption is enabled, the server expects all
secure/hidden fields in encrypted form. Moved the insert statements which have
default values to Java, and populated encrypted values if encryption is
enabled.
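A hedged sketch of inserting such a default from Java, encrypting the value first when DB encryption is enabled; the encrypt() call below is a placeholder for CloudStack's DB encryption utility, not its exact API:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public final class DefaultValueInserter {
        // Insert a secure/hidden configuration default from Java so it can be stored
        // encrypted when DB encryption is enabled.
        public static void insertSecureDefault(Connection conn, String name, String plainValue,
                boolean encryptionEnabled) throws SQLException {
            String value = encryptionEnabled ? encrypt(plainValue) : plainValue;
            try (PreparedStatement pstmt = conn.prepareStatement(
                    "INSERT INTO configuration (name, value) VALUES (?, ?)")) {
                pstmt.setString(1, name);
                pstmt.setString(2, value);
                pstmt.executeUpdate();
            }
        }

        private static String encrypt(String plain) {
            // placeholder for the real DB encryption call
            return plain;
        }
    }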
1. The egress default policy rules are sent to the firewall provider. It is up to the
provider to configure the rules.
2. The default policy rules are sent for both the allow and deny default policies.
3. On network shutdown, rules for deletion are sent.
4. VR and SRX deny the traffic by default, so no default rule to deny traffic is required.
service and not used for LB
The fix adds a boolean flag to the addNetscalerLoadBalancer API, which
marks the added NetScaler for exclusive GSLB service. A NetScaler marked
as an exclusive GSLB service provider is not picked as any guest network's
LB provider.
Scaling up VMs was not considering the parameter cluster.(memory/cpu).allocated.capacity.disablethreshold. Fixed it.
Also added overprovisioning factor retrieval at the cluster level for the host capacity check.
Changes:
- During VM migration, while finding a new host within the cluster, we need to set the storage pool id in the deployment plan too.
- This indicates to the planner that the volumes are ready and there is no need to find a new pool.
- This in turn prevents the threshold check done during pool allocation. This step is not needed since there is no need to allocate pools newly.
- Thus the migration won't fail because the threshold check fails.
Changes:
- Added a 'virtualmachineid' parameter to the createVolume API to specify a VM for the volume. The VM should be in the 'Running' or 'Stopped' state.
- This parameter is used only when the createVolume API is called with the snapshotid parameter.
- When this parameter is set, the volume is created from the snapshot in the pod/cluster of the VM. The volume is then attached to the VM in the same request.
- If attach volume fails but create has succeeded, the API errors out but the created volume remains available. The user may attach the same volume later.
- When a VM is provided but no storage pool is available in the VM's pod/cluster, the volume is not created and the API fails.
This patch adds support for trust chains in the NetScaler.
I initially planned on using the 10.1 API's "bundle" feature, but during
my testing I found that it was not working, so I am doing the chain linking
myself. Also, NS can have only one entity per certificate, i.e. if
two different users try to add the same certificate on the NetScaler,
only one of them will go through. The other one gets "resource already
exists" even though they have different files.
This can be a problem in trust chains where the chain can be shared
between multiple accounts/certificates. So, I am using the fingerprint as
the identifier of a certificate and making sure that we delete it only
when no one references it.
The volume create usage event and resource count weren't getting registered. Check its type rather than whether it is a UserVm, since the code is coming from VirtualMachineManager.
Resource limits shouldn't count resources with display flag = 0. Adding functions to ResourceLimitManager and doing it for volumes at the moment.
The parameter is optional and true by default to preserve the original behavior
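A minimal sketch of the counting rule for volumes, assuming a volume type that exposes a display flag (illustrative names, not the exact CloudStack entities):

    import java.util.List;

    public final class VolumeLimitCounter {
        // Only volumes whose display flag is set count against the owner's resource limit.
        public static long countForLimit(List<Volume> volumes) {
            return volumes.stream().filter(Volume::isDisplay).count();
        }

        // Illustrative stand-in for the real volume entity.
        public interface Volume {
            boolean isDisplay();
        }
    }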
Conflicts:
api/src/org/apache/cloudstack/api/command/user/vpc/CreateVPCCmd.java
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/NetworkOrchestrator.java
In VMware, during VM start the existing disk information is used to configure the VM, so even if a new disk is created from the new template the VM continues to use the old disk.
Once the old root disk is marked for destroy, force expunge it and sync the new disk into the VM folder before the VM starts.
This is the output of the program; the developer documentation may be a better place for it.
-----
When developing against the CloudStack Orchestration Platform, you must following the following rules:
API Rule: Always be prepared to handle RuntimeExceptions.
When writing APIs, you must follow these rules:
API Rule: You may think you're the greatest developer in the world but every change to the API must be reviewed and approved.
API Rule: Every API must have unit tests written against it. And not it's unit tests
API Rule:
-----
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Used CPU is getting bumped up when the overprovisioning factor is > 1. This is because we didn't record the overprovisioning factors of the VMs which got deployed pre-4.2.
The upgrade path fixes that by populating the cpu/mem overprovisioning factors for each of the VMs in the user_vm_details table using the global overprovisioning factor.
Reviewed by : bharat kumar <bharat.kumar@citrix.com>
Signed off by : nitin mehta<nitin.mehta@citrix.com>
If the VM has snapshots, the chain_info of a volume can be longer than 255 characters.
Increasing the column length of chain_info in VolumeVO to match the maximum length of type TEXT (the DB schema type).
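A hedged sketch of the widened mapping; 65535 matches the capacity of MySQL TEXT, though the exact annotation used in VolumeVO may differ:

    import javax.persistence.Column;

    public class VolumeSketch {
        // Widened so long snapshot chains are no longer truncated at 255 characters.
        @Column(name = "chain_info", length = 65535)
        private String chainInfo;
    }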
Changes:
- Consider whether the VM requires local storage, shared storage, or both for its disks.
- Accordingly, consider local pools, shared pools, or both in the cluster.
Conflicts:
server/src/com/cloud/agent/manager/allocator/HostAllocator.java
Implemented commands that are required for the VR to boot up and VM deployment to work.
Modified the Hyper-V agent code to deploy the VR with boot args; boot args are passed to the VR using the KVP Exchange Component.
Fix for the VR to boot up and get configured with boot args; fixed an issue in VolumeOrchestrator.
Implemented SetFirewallRulesCommand in the Hyper-V resource.
Implemented VR network commands to provide the necessary services from the VR.
Fixed the Hyper-V local storage path URL-encoding issue: encoding converts spaces to '+'.
CloudStack sends requests to directly managed HV hosts (direct agents) using the direct agent thread pool. The size of the pool is determined by the global config direct.agent.pool.size, defaulted to 500.
Currently there is no restriction on the number of threads a direct agent can use from this shared thread pool to send requests to the host. This is fine as long as the host responds to requests
in a reasonable amount of time. But if there is a considerable delay in getting a response, the thread remains blocked for that much time. As more commands are sent to the slow host, threads keep getting
blocked. This can eventually lead to a situation where requests to healthy hosts cannot be processed because there are not enough free threads.
The problem being addressed here is to localize the impact of a few bad hosts, so that the entire management server is not affected.
One such way is to throttle based on the number of outstanding requests on a per-host basis. The outstanding requests to a host are capped at a percentage of the direct agent pool size, configurable via
direct.agent.thread.cap. The default value is 0.1, i.e. 10%; a value of 1 means the old behavior where there is no upper cap. This ensures that an impacted host is bound by an upper cap on the number of threads it can use to process requests, rather than consuming the entire pool.
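A hedged sketch of such a per-host throttle, where the cap is a fraction of the shared pool size (class and field names are illustrative):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class DirectAgentThrottle {
        private final int poolSize;     // direct.agent.pool.size, default 500
        private final float threadCap;  // direct.agent.thread.cap, default 0.1
        private final Map<Long, AtomicInteger> outstanding = new ConcurrentHashMap<>();

        public DirectAgentThrottle(int poolSize, float threadCap) {
            this.poolSize = poolSize;
            this.threadCap = threadCap;
        }

        // Returns true if the host is still under its share of the shared thread pool.
        public boolean tryAcquire(long hostId) {
            int cap = threadCap >= 1.0f ? Integer.MAX_VALUE : Math.max(1, (int) (poolSize * threadCap));
            AtomicInteger count = outstanding.computeIfAbsent(hostId, id -> new AtomicInteger());
            if (count.incrementAndGet() > cap) {
                count.decrementAndGet();
                return false; // slow/overloaded host; don't let it drain the whole pool
            }
            return true;
        }

        public void release(long hostId) {
            AtomicInteger count = outstanding.get(hostId);
            if (count != null) {
                count.decrementAndGet();
            }
        }
    }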
Now a VPN connection can be created as "passive", which enables the remote peer
to initiate the connection. So it's possible for a VPC VR to
establish the connection to another VPC VR of CloudStack.
A test case is also included.
The test case creates 2 VPCs and uses VPN to connect them.
1) Added createDetail to the ResourceDetailDao interface to provide a generic way of creating resourceDetail DB objects (a sketch follows this list)
2) Added resource details support for firewall rules
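A rough sketch of what such a generic detail-creation hook could look like; the shape below is an assumption, not the exact CloudStack signature:

    // Illustrative only: a generic way to create a detail row for any resource type.
    public interface ResourceDetailDaoSketch<D> {
        D createDetail(long resourceId, String name, String value);
    }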
1) Added support for Zone resource details
2) Renamed DcDetailsDao to DataCenterDetailsDao to follow the CS naming convention for DataCenter-related classes
* Changed the name of the TaggedResourceType enum to ResourceObjectType, as this enum is used both by ResourceMetaData and ResourceTags code
* Enhanced the enum with extra fields resourceTagsSupport (boolean) and metadataSupport, identifying whether the resource supports tags and/or metadata.
* Cleaned up unused @Inject objects from ResourceMetaDataManager
commit c9ee0d12e191e803fb341f3f96e95ca434a36f6c
Author: Wei Zhou <w.zhou@leaseweb.com>
Date: Wed Oct 23 16:55:10 2013 +0200
CLOUDSTACK-4931, CLOUDSTACK-4937: setDetails to user VMs only
(cherry picked from commit a94acc5a43)
commit fe1586c71377bc6d219db2dcf088c40b65dd1fc4
Author: Anthony Xu <anthony.xu@citrix.com>
Date: Tue Oct 22 11:20:27 2013 -0700
CLOUDSTACK-4649:
vm sync tracks the pv driver version for xenserver
Anthony
commit 56a218f66eda540b4b4b04030ee71fc6863f8532
Author: Anthony Xu <anthony.xu@citrix.com>
Date: Mon Oct 21 16:10:07 2013 -0700
CLOUDSTACK-4649:
xs 6.1/6.2 introduce the new virtual platform, so there are two virtual platforms, windows PV driver version must match virtual platforms,
this patch tracks PV driver versions in vm details and template details.
Anthony
commit 4e85d28c678a6f96b5b70d8d33fc60f9d1ea3df6
Author: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Date: Mon Oct 21 21:17:33 2013 +0200
removed unused static field
- s_httpClientManager was not used
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
commit d4121fa26023db236f7396cea455ef090672ae9a
Author: Chris Suich <chris.suich@netapp.com>
Date: Tue Oct 22 10:45:22 2013 -0400
Updated DataMotionServiceImpl and ApiResponseHelper based on review feedback.
commit aaf026e1e4204d405bcda2ae4f1a01b1d0f7e7cb
Author: Chris Suich <chris.suich@netapp.com>
Date: Thu Oct 17 14:27:12 2013 -0400
Added context to strategy sorting error responses
Added TODOs for DRYing out pickStrategy() overloading
commit a221f4aa3fb2ddc255bc35cf753f98f88f5bf44e
Author: Chris Suich <chris.suich@netapp.com>
Date: Wed Oct 16 09:57:28 2013 -0400
Updated inefficient strategy sorting/selection
Removed unnecessary canRevertSnapshot from PrimaryDataStoreDriver
Other general cleaup and fixes from reviews
commit 7d58949c6a1b7e853e891b59387a9620e8cd7a91
Author: Chris Suich <chris.suich@netapp.com>
Date: Mon Oct 14 14:01:22 2013 -0400
Added volume snapshot revert capability to SnapshotResponse
Updated UI to hide/show snapshot revert action per snapshot
Signed-off-by: Edison Su <sudison@gmail.com>
Two Integer objects were created for each comparison, just to compare the two values.
This is replaced with Integer.compare().
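For example (a generic illustration, not the exact call site):

    public final class PortCompare {
        // Before: new Integer(first).compareTo(new Integer(second)) allocated two objects per call.
        // After: no boxing, same ordering.
        static int compare(int first, int second) {
            return Integer.compare(first, second);
        }
    }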
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
XS 6.1/6.2 introduces a new virtual platform, so there are two virtual platforms, and the Windows PV driver version must match the virtual platform.
This patch tracks PV driver versions in vm details and template details.
Anthony
The cluster- and zone-wide storage pool allocators returned shared pools even for volumes meant to be on a local storage pool.
If the VM uses a local disk, then the cluster and zone storage allocators should not handle it and should return null or an empty list.
Also fixed the deployment planner to avoid a cluster if
a. the avoid set returned by the storage pool allocators is empty, OR
b. all local or shared pools in the cluster are in the avoid state
Conflicts:
engine/storage/src/org/apache/cloudstack/storage/allocator/ClusterScopeStoragePoolAllocator.java
engine/storage/src/org/apache/cloudstack/storage/allocator/ZoneWideStoragePoolAllocator.java
The list clusters/pods/zones APIs were not accounting for reserved capacity in the used capacity percentage.
Fix the listCapacity cmd for reserved capacity as well.
Signed off by : nitin mehta<nitin.mehta@citrix.com>
Introduction of a new Transaction API that is more consistent with the style
of Spring's transaction management. The existing Transaction class was renamed
to TransactionLegacy. All of the non-DAO code in the management server has been
updated to use the new Transaction API.
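A sketch of the Spring-style callback usage the new API enables; TransactionCallbackNoReturn and TransactionStatus follow the common CloudStack pattern but are assumed here, so verify against the actual classes:

    // DB work is wrapped in a callback and executed inside a single transaction,
    // instead of explicit start()/commit() calls on the legacy class.
    Transaction.execute(new TransactionCallbackNoReturn() {
        @Override
        public void doInTransactionWithoutResult(TransactionStatus status) {
            // DAO calls here run atomically; an exception rolls the transaction back
        }
    });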
This was discussed on the mailing list as a useful debugging tool: currently
the log prints the DB id of the agent, which makes admins have to look
it up to know where the Command was run.
When running DatabaseUpgradeChecker as a standalone program, _dao will not
be injected. Still create an instance of VersionDaoImpl in the constructor;
when DatabaseUpgradeChecker is run in the mgmt server it will be
overwritten by the injected value.
These changes are a joint effort between Edison and me to refactor some
of the code around snapshotting VM volumes and creating
templates/volumes from VM volume snapshots. In general, we were working
towards allowing PrimaryDataStoreDrivers to create snapshots on primary
storage and not requiring the snapshots to be transferred to secondary
storage.
High level changes:
-Added uuid to NfsTO, SwiftTO & S3TO to cut down on the requirement of
PrimaryDataStoreTO and ImageStoreTO which don't really serve much of a
purpose
-Initial work towards enabling reverting VM volumes from snapshots
-Added hypervisor commands for introducing and forgetting new hypervisor
objects (snapshots, templates & volumes)
Signed-off-by: Edison Su <sudison@gmail.com>
ACS is now composed of a hierarchy of Spring application contexts.
Each plugin can contribute configuration files to add to an existing
module or create its own module.
Additionally, for the mgmt server, ACS custom AOP is no longer used;
instead we use Spring AOP to manage interceptors.
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context. This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
Various classes are using member injection to inject extensible objects.
Really those objects should come from an AdapterList that is injected in.
This patch switches the code to use setter injection, which will later allow
Spring to inject an AdapterList or something similar to allow
extensibility.
Also fixed existing bugs for the API:
* corrected action event to be VOLUME.UPDATE (was VOLUME.ATTACH)
* all parameters to update should be optional - fixed that. If nothing is specified, the DB object retains its original fields
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh script that handles bridge/VXLAN interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses the same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resources still say 'VLAN' in logs even if VXLAN is used
- In the UI, "Network - GuestNetworks" doesn't display the VNI
-- VLAN ID field displays "N/A"
- Documentation!
Signed-off-by : Toshiaki Hatano <haeena@haeena.net>
CLOUDSTACK-4457:
CLOUDSTACK-4459:
harden KVM getVolume. It's possible that a volume created on one KVM host won't show up on another host; retry refreshing the storage pool a few more times if the volume doesn't show up
Conflicts:
engine/storage/integration-test/test/org/apache/cloudstack/storage/test/FakeDriverTestConfiguration.java
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
It was implemented by extending the NFS provider. Its validation was updated so that you can pass it a URL containing the
details of a CIFS share. The code that mounts NFS shares was extended to allow it to do the same for CIFS shares. Otherwise,
the secondary storage code is left unchanged.
The issue happens when more than one thread processes connect for a host simultaneously. The VM full sync is not designed to work in this scenario, and as a result user VMs may get stopped incorrectly.
The direct agent scan task runs at regular intervals (direct.agent.scan.interval, defaulted to 90 secs) and identifies hosts that need to be processed for connect. In a normal scenario hosts mostly get connected within that interval and there are no issues. But if, for some reason, the connect process takes more time and is not completed by the time the next agent scan runs, then based on the DB state the same hosts may get picked up again, and more than one thread ends up processing connect for the same host.
The fix is to check whether a thread is already processing connect for a host; in that case all subsequent threads for that host simply bail out. There may also be a scenario where one thread has already completed processing connect but another thread was scheduled before that and would repeat the same work; this is also prevented with appropriate checks.
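A minimal sketch of the "only one connect per host at a time" guard (illustrative, not the exact implementation):

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class HostConnectGuard {
        // Hosts for which a connect is currently being processed.
        private final Set<Long> connectingHosts = ConcurrentHashMap.newKeySet();

        // Returns true only for the first caller; later threads for the same host bail out.
        public boolean tryStartConnect(long hostId) {
            return connectingHosts.add(hostId);
        }

        public void finishConnect(long hostId) {
            connectingHosts.remove(hostId);
        }
    }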
Check for all the transition states for Maintenance. Also corrected the isMaintenance function for StoragePoolVo.
Signed off by : nitin mehta<nitin.mehta@citrix.com>
Description:
Set the criterion for overriding/preserving the vmware.create.full.clone
flag so that if prior-version deployments have any data centers,
this flag is set to false; else, it is set to true.
The earlier criterion for setting this flag was based on the CS version numbers,
but that is not good business logic to serve as the basis for setting the flag.
Changes:
- In the upgrade path, for a private zone, an entry needs to be added to the affinity_group_domain_map to provide access to the private zone for the domains it belongs to.
Changes:
- Implicit creation of the 'ExplicitDedication' affinity group during resource dedication
- Only one group per account or per domain will be present
- ListDedicatedResources by affinityGroup
- Deployment should consider only the dedicated resources associated with the group
- Deleting the affinity group should release the dedicated resources
- Releasing the dedicated resources should remove the associated group if there are no more resources.
Conflicts:
plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
plugins/dedicated-resources/test/org/apache/cloudstack/dedicated/manager/DedicatedApiUnitTest.java
server/src/com/cloud/configuration/ConfigurationManagerImpl.java
Changes:
- The 'ExplicitDedication' type of group can be created/deleted by the Root admin only
- Users can no longer create this type of affinity group
- The Root admin can create this type of affinity group at the domain level. Such a domain-level group is available to all accounts in that domain for listing and for use during deployVM.
- The domain-level affinity group should be visible to the users in that domain, domain admins and the Root admin.
Conflicts:
server/src/com/cloud/api/query/QueryManagerImpl.java
server/src/org/apache/cloudstack/affinity/AffinityGroupServiceImpl.java
server/test/org/apache/cloudstack/affinity/AffinityApiUnitTest.java
On a fresh environment, some values in the cloud.configuration table are persisted in com.cloud.server.ConfigurationServerImpl.persistDefaultValues().
A default value needs to be set before com.cloud.upgrade.DatabaseUpgradeChecker runs.
UI support for baremetal PXE server
CloudStack CLOUDSTACK-1364
UI support for baremetal DHCP server
Conflicts:
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BareMetalPingServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalKickStartServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalPxeManagerImpl.java
While upgrading to 4.2, the type of vswitch being used by each cluster is persisted to the cluster_details table. This helps if a user wants to change the type of vswitch used in a zone or the entire cloud later on, while letting existing clusters continue to use the old vswitches. Hence, modifying the type of vswitch at the cloud level (by modifying global configuration parameters) or at the zone level (by modifying the traffic label) will not disturb the operation of existing clusters.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
The built-in template Ready status is No even after the status is download complete. The reason was that template sync updates only the template download state but not the state. Fixing that. Ideally we should change the state through the state machine only.
Signed off by : nitin mehta<nitin.mehta@citrix.com>
only when there is a portable IP range added at the region level.
The region response will now have details on whether the portable IP service is enabled
or not. The portable IP service for a region is turned off by default. When the
admin adds a portable IP range, the portable IP service is enabled for the
region.
Updating the new system template URLs for the existing templates during upgrade to 4.2.
If the new 4.2 system template is registered before the upgrade, then the old templates are marked as removed during the upgrade.
The time increased due to the newly added dedicated resources feature. During regular VM deployment, all dedicated resources are put in the avoid list so that they are not considered for deployment.
The way the list of dedicated resources is computed is not optimal, and performance deteriorates in an environment with a lot of pods, clusters and hosts, as the logic queries the DB for each such resource.
The fix is to optimize the logic so that it does not loop through all resources but gets the list of each resource type in a single query.
Conflicts:
server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
Introduce a global lock on the template and pool id so that concurrent threads won't insert the same row in the DB and hit MySQLIntegrityConstraintViolationException.
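A hedged sketch of serializing the insert with a per-(template, pool) named lock; the GlobalLock usage follows the usual CloudStack pattern but is an assumption here, and templateId/poolId are placeholder variables:

    // Serialize the insert per (template, pool) so concurrent threads cannot race
    // and hit the unique-key violation.
    GlobalLock lock = GlobalLock.getInternLock("template_spool_ref." + templateId + "." + poolId);
    try {
        if (lock.lock(30)) { // wait up to 30 seconds
            try {
                // check for an existing row and insert only if absent
            } finally {
                lock.unlock();
            }
        }
    } finally {
        lock.releaseRef();
    }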
Signed off by : nitin mehta<nitin.mehta@citrix.com>
The snapshot object is being accessed even when it is null. In case the snapshot is not present in the backup store, the code should return after cleaning up the DB entry.
Also noticed a discrepancy in the upgraded DB setup but couldn't fully verify it, so added logging in the upgrade logic for templates/snapshots etc.
Track the datacenter of the previous cluster correctly while going through each cluster in the zone to see whether 2 clusters are from different DCs/vCenters.
Cherry picked from 4.2 commit a3450afff5
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
System template upgrade is not required during 4.0 upgrade since we handle the same during 4.2 upgrade. So removing the system template update during 4.0 upgrade.
KVM.snapshot.enabled was lowercased by f025db95 to keep the configs
uniformly lower-case, but that missed the upgrade script and the
references in SnapshotManagerImpl. This commit fixes the issue in all
locations.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 0e216fa7e41bdfe0cc744006bb896c8b00138ca2)
Private templates now get copied to only one of the image stores, chosen randomly, as was the case earlier. Don't throw an exception for uploading volumes when there are multiple image stores; instead choose one of them randomly.
Persist the download url in the db for volume download.
Signed off by : nitin mehta<nitin.mehta@citrix.com>
Conflicts:
server/src/com/cloud/storage/VolumeManagerImpl.java
Marked the new system template as dynamicallyScalable
- handled the upgrade case
- moved the "dynamicallyScalable" flag from user_vm_details to the vm_instance table to support dynamic scaling of system VMs
Signed off by : Nitin Mehta<nitin.mehta@citrix.com>
Changes:
- During host deletion, the host entry in the database gets removed prior to the disconnect task getting processed.
- This causes the disconnect task to get an NPE while trying to do the host state transition.
Some existing scenarios for root and data volume combinations were not working. These are:
a. Local root + shared data
b. Shared root + local data
Enabled these scenarios as part of this fix.
The migrate request was reaching the ancient data motion strategy but wasn't acted on or forwarded
to the resource. The code probably got removed by a bad merge. Bringing it back.
The issue is that while calculating the used primary storage size, the updateResourceCount
API is also counting the disk size of the virtual router VM created for that account, and
because of this the API returns an incorrect result.
Changes:
- Passing the avoid set generated by the first pass of deployment to the second try.
- The second try is done when the first pass, which uses a reserved plan, fails to deploy on the reserved host; it searches over the entire zone again.
Changes:
- Locking for the group-and-save reservation mechanism done by DPM
- Added an admin operation to clean up VM reservations
- DPM will also clean up VM reservations on startup
Available bytes were getting stored in the used bytes property of local storage pools. As a result, for newly added local pools CloudStack thinks that there is no space available and generates alerts.
Users should be able to delete/archive alerts and events by selecting a time period or by
choosing alerts and events older than a date. Added the ability to choose a time period
too.
Summary of changes in the fix:
- Optimized the host scan logic; instead of iterating over each cluster, the host scan is now done for a batch of clusters
- Made the host scan task interval configurable
only when the first rule is created on the IP and when the last rule is revoked on the
IP.
Current suboptimal logic of IP assoc:
- On associating an IP with a GuestNetwork, an IPAssoc command is sent to
the corresponding network service providers of the network
- On every rule apply on an IP associated with the network, an IP assoc is sent
to the network service providers
- On every rule deletion on an IP associated with a network, an IP assoc
command is sent to the network service providers
With this fix the logic of IP assoc is changed as below, which eliminates
execution of unnecessary and expensive IpAssocCommand resource commands (a sketch follows the list):
- On associating an IP with a GuestNetwork, associate the IP only with the network.
Until any service is associated with the IP, don't send IP assoc.
- On creation of the first rule on the IP, send IPAssoc to the corresponding
network service provider. Since the IP is now used for a service, IPAssoc
needs to be sent to the corresponding service provider.
- On deletion of the last rule on the IP, send IPAssoc to the corresponding
network service provider. When the last rule is deleted, the IP has no
service associated with it, so send IP assoc to the service provider to
remove the IP association.
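A minimal sketch of the new trigger points (names are illustrative; the real logic lives in the rule apply/revoke paths):

    public class IpAssocPolicy {
        // Called after a rule is successfully applied on a public IP.
        public void onRuleApplied(long ipId, int activeRulesAfterApply) {
            if (activeRulesAfterApply == 1) {
                sendIpAssoc(ipId, true);   // first rule: associate the IP with the provider
            }
        }

        // Called after a rule is successfully revoked on a public IP.
        public void onRuleRevoked(long ipId, int activeRulesAfterRevoke) {
            if (activeRulesAfterRevoke == 0) {
                sendIpAssoc(ipId, false);  // last rule gone: remove the association
            }
        }

        private void sendIpAssoc(long ipId, boolean associate) {
            // placeholder for dispatching IpAssocCommand to the network service provider
        }
    }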
Filter the detail map sent over the wire into a Map<String, String> before
it is processed underneath by the storage life cycle.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Description:
When retrieving the primary datastore, handle the case of non-existing datastores/hosts.
Throw an exception, handle it in the datastore mgmt layer, and pass it onward
to the create storage pool API.
involved in the GSLB
Add weights to each site participating in the GSLB. Traffic will be load
balanced across the sites based on the weights associated with each
site. If not specified, the weight of a site defaults to 1.
Breaking down storage components among oss, nonoss and simulator
contexts. The default components are loaded as follows:
OSS - applicationContext + componentContext
NonOSS - applicationContext + nonossComponentContext
Simulator - applicationContext + simulatorComponentContext
Provider beans are selectively overridden for simpler configuration.
Where possible beans are loaded by local reference.
<list merge=true> unfortunately does not work perfectly for merging the provider
beans, causing a bit of bloat. Explore later.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>