Commit Graph

64 Commits

Author SHA1 Message Date
prachi be04ff861c Bug 13078 - Management Server does not respect selected OS Preference for any host within a single pod.
Changes:
- Once the HostAllocators have listed suitable hosts, the planner should not reshuffle the list, since that would lose the prioritization applied by the HostAllocators.
- E.g.: HostAllocators choose first the host that matches the guest OS category. If the planner shuffles the list, that preference is lost (see the sketch below).
2012-01-19 16:59:38 -08:00
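A minimal sketch of the rule in the commit above, using hypothetical names rather than CloudStack's actual classes:

    import java.util.List;

    class OrderPreservingPlanner {
        // Return hosts exactly in the order the HostAllocators produced them
        // (e.g. guest-OS matches first); reshuffling here would lose that
        // prioritization.
        static List<String> pickHosts(List<String> allocatorOrderedHosts) {
            // Pre-fix behavior would shuffle here, e.g. Collections.shuffle(...).
            return allocatorOrderedHosts;
        }
    }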
kishan e2cb4f94d6 bug 12337: Encrypt only the password in the host_detail table. Removed unused and duplicate references to HostDetailDao (see the sketch below)
status 12337: resolved fixed
reviewed-by: Abhi
2011-12-20 19:28:41 +05:30
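A minimal sketch of the encryption rule above, assuming a hypothetical encrypt() helper in place of CloudStack's real utility:

    import java.util.Base64;
    import java.util.HashMap;
    import java.util.Map;

    class HostDetailEncryption {
        // Persist host details unchanged, except the "password" value,
        // which is encrypted; all other details stay in plain text.
        static Map<String, String> prepareForPersist(Map<String, String> details) {
            Map<String, String> out = new HashMap<>(details);
            String password = out.get("password");
            if (password != null) {
                out.put("password", encrypt(password));
            }
            return out;
        }

        // Placeholder for the real encryption utility (Base64 is NOT encryption).
        static String encrypt(String plain) {
            return Base64.getEncoder().encodeToString(plain.getBytes());
        }
    }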
prachi 8134cfb4c8 Bug 11131 - Allocators: when vm fails to deploy in DataCenter, we should never retry in the same DC again as the DC is already in Avoid set
Changes:
- Added a check in the planner to verify that the datacenter is not in the avoid set. If it is found in the avoid set, the planner does not proceed (see the sketch below).
2011-11-23 17:40:20 -08:00
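A minimal sketch of the avoid-set check described above (hypothetical names):

    import java.util.Set;

    class DataCenterAvoidCheck {
        // The planner bails out early if the datacenter already failed:
        // a DC in the avoid set is never retried.
        static boolean canPlan(long dcId, Set<Long> avoidedDcIds) {
            return !avoidedDcIds.contains(dcId);
        }
    }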
prachi 313e6ca284 Bug 8791 user dispersing allocator
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- The planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to one of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, which shuffles the resource lists.
- The admin can now choose whether the deployment heuristic should be applied starting at the cluster or pod level. This is done using the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus by default, as before, the planner starts directly at clusters.

'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters when this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner
- It reorders the capacity-ordered clusters/pods such that pods with more Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.

'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters that have fewer Running VMs for the given account.
- The admin can assign weights to capacity and user dispersion so that both parameters are considered when reordering the pods/clusters. This is done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1. Thus if this planner is chosen, by default, ordering is done only by the number of Running VMs, unless the weight is changed (see the sketch below).
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools by ascending number of Running VMs / Ready volumes respectively for the given account, and thus try to choose the host or pool within a cluster with fewer VMs for the account.
2011-11-17 18:29:39 -08:00
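A minimal sketch of the weighted reordering described above. The exact scoring formula is an assumption; the point is that 'vm.user.dispersion.weight' (default 1) trades off the capacity order against the per-account Running VM count:

    import java.util.Comparator;
    import java.util.List;

    class UserDispersingOrder {
        record Cluster(long id, double freeCapacityRatio, long runningVmsForAccount) {}

        // dispersionWeight = 1.0 (the default) orders purely by fewest
        // Running VMs for the account; 0.0 orders purely by capacity.
        static void order(List<Cluster> clusters, double dispersionWeight) {
            double capacityWeight = 1.0 - dispersionWeight;
            long maxVms = Math.max(1, clusters.stream()
                    .mapToLong(Cluster::runningVmsForAccount).max().orElse(1));
            clusters.sort(Comparator.comparingDouble(c -> {
                double dispersionScore = 1.0 - (double) c.runningVmsForAccount() / maxVms;
                // Higher combined score is better, so negate for ascending sort.
                return -(capacityWeight * c.freeCapacityRatio()
                        + dispersionWeight * dispersionScore);
            }));
        }
    }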
Alex Huang f6fcaa49ec Merge complete except for virtualnetworkappliancemanager 2011-11-10 15:18:16 -08:00
Nitin 2b370ab535 bug 10657: Introducing cluster-level global thresholds for CPU and RAM so that allocation of these resources does not go beyond the thresholds (see the sketch below). This way, if the admin needs to perform maintenance, they do not have to add new machines or keep machines on standby when the entire zone/pod/cluster is at 100% allocated capacity. Also introducing pool-level global thresholds for allocated storage. Other changes include a DB upgrade and the introduction of a transaction. 2011-10-29 16:51:37 +05:30
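A minimal sketch of the threshold check described above, with the threshold expressed as a fraction of total capacity (names are hypothetical):

    class CapacityThresholdCheck {
        // Reject an allocation that would push used capacity past
        // threshold * total; e.g. withinThreshold(850, 100, 1000, 0.85)
        // is false because 950 > 850.
        static boolean withinThreshold(long used, long requested, long total,
                                       double threshold) {
            return used + requested <= (long) (threshold * total);
        }
    }

The headroom kept above the threshold is what lets the admin drain hosts for maintenance without adding standby machines.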
frank eb0fdc2925 allow multiple clusters for baremetal planner
fix build
2011-10-27 17:46:54 -07:00
frank c81477a25c allow multiple clusters for baremetal planner
Conflicts:

	server/src/com/cloud/deploy/BareMetalPlanner.java
2011-10-27 17:39:14 -07:00
frank cef30956e9 Merge branch 'newagentmgr'
Conflicts:
	agent-simulator/src/com/cloud/api/commands/ConfigureSimulator.java
	ovm/src/com/cloud/ovm/hypervisor/OvmDiscoverer.java
	server/src/com/cloud/agent/manager/AgentManagerImpl.java
	server/src/com/cloud/capacity/CapacityManagerImpl.java
	server/src/com/cloud/network/F5BigIpManagerImpl.java
	server/src/com/cloud/network/JuniperSrxManagerImpl.java
	server/src/com/cloud/resource/ResourceManagerImpl.java
	server/src/com/cloud/server/ManagementServerImpl.java
	server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java
	server/src/com/cloud/vm/UserVmManagerImpl.java
	server/src/com/cloud/vm/VirtualMachineManagerImpl.java
	utils/src/com/cloud/utils/db/GenericDao.java
2011-10-27 11:09:56 -07:00
frank 30f95e638a Bug 11522 - New agent manager
1. get rid of host allocation state
2. remove Updating status from agent status
2011-10-24 16:49:32 -07:00
alena 219978a9be Create network using physical network id 2011-10-20 18:25:13 -07:00
frank 89e04458b6 Bug 11522 - New agent manager
move all listXxx interfaces from HostDao to managers (ResourceManager, SecondaryStorageVmManager, etc.) with proper names, using SearchCriteria2
or calling SearchCriteria2 directly on demand
2011-10-04 14:35:26 -07:00
prachi e0a179752d Bug 11617: Ensure the Deployment planner is choosing clusters based on aggregate capacity
Changes:
- We were ordering clusters based on the capacity of the first-fit host found in each cluster. Due to this, there were cases where we deployed VMs to one cluster instead of balancing across clusters.
- Now we order the list of clusters by aggregate capacity and, in this order, choose the ones that have enough capacity for the required VM (see the sketch below).
- This should balance the load between clusters instead of overloading one.

Conflicts:

	server/src/com/cloud/capacity/dao/CapacityDao.java
	server/src/com/cloud/capacity/dao/CapacityDaoImpl.java
2011-10-03 15:37:38 -07:00
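A minimal sketch of the aggregate-capacity ordering described above (hypothetical names):

    import java.util.Comparator;
    import java.util.List;

    class AggregateCapacityOrder {
        record Cluster(long id, long aggregateFreeMb) {}

        // Order clusters by total free capacity across all their hosts
        // (not by the first-fit host's capacity), keeping only clusters
        // that can actually fit the requested VM.
        static List<Cluster> candidates(List<Cluster> clusters, long requiredMb) {
            return clusters.stream()
                    .filter(c -> c.aggregateFreeMb() >= requiredMb)
                    .sorted(Comparator.comparingLong(Cluster::aggregateFreeMb).reversed())
                    .toList();
        }
    }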
prachi 4ad9ac5e71 Bug 11200 - maximum number of guests per host
Changes:

To make sure migration does not attempt to pick a host that is already running more VMs than the max guest VM limit:

- Changed manual migration to call host allocators to return a list of hosts suitable for migration. Host allocators check for the max guest VM limit (see the sketch below).
- Earlier we returned hosts with enough capacity, but now host allocators make other checks along with capacity. So the list of hosts returned contains hosts that have enough capacity AND satisfy all other conditions like host tags, the max guests limit, etc. In other words, allocators don't return hosts that don't satisfy all conditions, even if they have capacity.
- Therefore, we now mark the list of hosts returned for manual migration as 'suitable' hosts instead of 'hasenoughCapacity' in the HostResponse.
- HA migration already calls allocators, so no change is needed there.
2011-09-08 18:08:31 -07:00
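A minimal sketch of the "suitable hosts" filtering described above (hypothetical names; the real allocators also check host tags and more):

    import java.util.List;

    class MaxGuestsCheck {
        record Host(long id, int runningVms, int maxGuestsLimit, boolean hasCapacity) {}

        // A host is suitable for migration only if it has capacity AND
        // is below the hypervisor's max-guests limit.
        static List<Host> suitableForMigration(List<Host> hosts) {
            return hosts.stream()
                    .filter(Host::hasCapacity)
                    .filter(h -> h.runningVms() < h.maxGuestsLimit())
                    .toList();
        }
    }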
Kelven Yang a51383e296 bug 11219: use local storage flag from service offering when it is ROOT disk 2011-08-24 15:17:53 -07:00
Nitin 4b6d072490 bug 10972: Improve logging - log the host id and hostname on which the VM launches.
status 10972: resolved fixed
2011-08-24 18:04:46 +05:30
alena 8a7feb8ec1 Merge branch '2.2.y'
Conflicts:
	agent/src/com/cloud/agent/resource/computing/LibvirtComputingResource.java
	api/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java
	api/src/com/cloud/agent/api/to/FirewallRuleTO.java
	api/src/com/cloud/agent/api/to/IpAddressTO.java
	api/src/com/cloud/agent/api/to/PortForwardingRuleTO.java
	api/src/com/cloud/api/ApiConstants.java
	api/src/com/cloud/api/BaseCmd.java
	api/src/com/cloud/api/ResponseGenerator.java
	api/src/com/cloud/api/commands/CreateFirewallRuleCmd.java
	api/src/com/cloud/api/commands/CreateIpForwardingRuleCmd.java
	api/src/com/cloud/api/commands/CreateLoadBalancerRuleCmd.java
	api/src/com/cloud/api/commands/CreatePortForwardingRuleCmd.java
	api/src/com/cloud/api/commands/DeleteLoadBalancerRuleCmd.java
	api/src/com/cloud/api/commands/ListCapabilitiesCmd.java
	api/src/com/cloud/api/commands/UpdateNetworkCmd.java
	api/src/com/cloud/api/response/CapabilitiesResponse.java
	api/src/com/cloud/network/Network.java
	api/src/com/cloud/network/NetworkService.java
	api/src/com/cloud/network/firewall/FirewallService.java
	api/src/com/cloud/network/lb/LoadBalancingRule.java
	api/src/com/cloud/network/lb/LoadBalancingRulesService.java
	api/src/com/cloud/network/rules/FirewallRule.java
	api/src/com/cloud/network/rules/RulesService.java
	api/src/com/cloud/offering/NetworkOffering.java
	client/tomcatconf/commands.properties.in
	cloud.spec
	core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
	core/src/com/cloud/hypervisor/xen/resource/CitrixHelper.java
	core/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
	core/src/com/cloud/storage/template/DownloadManagerImpl.java
	core/src/com/cloud/vm/DomainRouterVO.java
	debian/cloud-deps.install
	patches/systemvm/debian/config/etc/init.d/cloud-early-config
	patches/systemvm/debian/config/root/ipassoc.sh
	patches/systemvm/debian/config/root/loadbalancer.sh
	scripts/vm/hypervisor/kvm/rundomrpre.sh
	scripts/vm/hypervisor/xenserver/vmops
	server/src/com/cloud/agent/manager/AgentAttache.java
	server/src/com/cloud/agent/manager/AgentManagerImpl.java
	server/src/com/cloud/agent/manager/AgentMonitor.java
	server/src/com/cloud/agent/manager/ClusteredAgentManagerImpl.java
	server/src/com/cloud/alert/ClusterAlertAdapter.java
	server/src/com/cloud/api/ApiResponseHelper.java
	server/src/com/cloud/api/ApiServer.java
	server/src/com/cloud/cluster/ClusterManagerImpl.java
	server/src/com/cloud/configuration/Config.java
	server/src/com/cloud/configuration/ConfigurationManager.java
	server/src/com/cloud/configuration/ConfigurationManagerImpl.java
	server/src/com/cloud/configuration/DefaultComponentLibrary.java
	server/src/com/cloud/deploy/FirstFitPlanner.java
	server/src/com/cloud/ha/HighAvailabilityManagerImpl.java
	server/src/com/cloud/host/dao/HostDaoImpl.java
	server/src/com/cloud/hypervisor/xen/discoverer/XcpServerDiscoverer.java
	server/src/com/cloud/network/LoadBalancerVO.java
	server/src/com/cloud/network/NetworkManager.java
	server/src/com/cloud/network/NetworkManagerImpl.java
	server/src/com/cloud/network/dao/FirewallRulesDao.java
	server/src/com/cloud/network/dao/FirewallRulesDaoImpl.java
	server/src/com/cloud/network/element/DhcpElement.java
	server/src/com/cloud/network/element/VirtualRouterElement.java
	server/src/com/cloud/network/firewall/FirewallManagerImpl.java
	server/src/com/cloud/network/lb/LoadBalancingRulesManagerImpl.java
	server/src/com/cloud/network/router/VirtualNetworkApplianceManager.java
	server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
	server/src/com/cloud/network/rules/FirewallManager.java
	server/src/com/cloud/network/rules/FirewallRuleVO.java
	server/src/com/cloud/network/rules/PortForwardingRuleVO.java
	server/src/com/cloud/network/rules/RulesManagerImpl.java
	server/src/com/cloud/network/rules/StaticNatRuleImpl.java
	server/src/com/cloud/network/security/SecurityGroupListener.java
	server/src/com/cloud/network/security/SecurityGroupManagerImpl.java
	server/src/com/cloud/offerings/NetworkOfferingVO.java
	server/src/com/cloud/server/ConfigurationServerImpl.java
	server/src/com/cloud/server/ManagementServerImpl.java
	server/src/com/cloud/storage/StorageManager.java
	server/src/com/cloud/storage/StorageManagerImpl.java
	server/src/com/cloud/storage/dao/VMTemplateHostDaoImpl.java
	server/src/com/cloud/storage/download/DownloadMonitorImpl.java
	server/src/com/cloud/upgrade/DatabaseUpgradeChecker.java
	server/src/com/cloud/upgrade/dao/Upgrade228to229.java
	server/src/com/cloud/upgrade/dao/Upgrade229to2210.java
	server/src/com/cloud/user/AccountManagerImpl.java
	server/src/com/cloud/vm/UserVmManagerImpl.java
	server/src/com/cloud/vm/VirtualMachineManagerImpl.java
	server/src/com/cloud/vm/dao/DomainRouterDao.java
	server/src/com/cloud/vm/dao/DomainRouterDaoImpl.java
	setup/db/create-index-fk.sql
	setup/db/create-schema.sql
	setup/db/db/schema-222to224.sql
	setup/db/db/schema-227to228.sql
	setup/db/db/schema-228to229.sql
	setup/db/db/schema-229to2210.sql
	tools/testClient/README
	ui/scripts/cloud.core.instance.js
	utils/src/com/cloud/utils/SerialVersionUID.java
	utils/src/com/cloud/utils/db/ConnectionConcierge.java
	utils/src/com/cloud/utils/db/Merovingian2.java
	utils/src/com/cloud/utils/db/Transaction.java
	utils/src/com/cloud/utils/nio/Link.java
	utils/src/com/cloud/utils/nio/NioConnection.java
	utils/src/com/cloud/utils/time/InaccurateClock.java
2011-08-22 20:28:30 -07:00
prachi 8755044ea9 Merge Bug 11186 from 2.2.8mango
Bug 11186- Cannot restart existing VM if the cluster is disabled after the
VM has been created

Changes:
- We should not check for the cluster 'allocation_state' while starting an existing VM, provided it has storage already allocated. But if volumes are deleted and new storage needs to be allocated, then we will not allow the VM start.
- However we should still prohibit adding new VMs in that cluster.
2011-08-20 21:12:15 -07:00
prachi 3d4767c94c Merge Bug 11186 from 2.2.8mango
Bug 11186- Cannot restart existing VM if the cluster is disabled after the
VM has been created

Changes:
- We should not check for the cluster 'allocation_state' while starting an existing VM, provided it has storage already allocated. But if volumes are deleted and new storage needs to be allocated, then we will not allow the VM start.
- However we should still prohibit adding new VMs in that cluster.
2011-08-20 21:02:09 -07:00
alena 812a1f3f7b Fixed the bug in allocator where cluster was added to avoid set as pod 2011-08-15 10:44:07 -07:00
alena 3945eec0df Fixed the bug in allocator where cluster was added to avoid set as pod 2011-08-15 10:43:59 -07:00
Alex Huang 7ac3c818a9 bug 11079: fixed a bug with autoboxing 2011-08-12 14:14:15 -07:00
Alex Huang 429c1a0a18 changed a bunch of map logs to trace 2011-07-22 18:14:02 -07:00
Alex Huang 18fa544da1 changed a bunch of map logs to trace 2011-07-22 18:13:24 -07:00
Alex Huang 1d2a529556 put big log trace in firstfit planner in to trace instead of debug 2011-07-11 14:37:52 -07:00
Alex Huang f5d5ed5dce put big log trace in firstfit planner in to trace instead of debug 2011-07-11 14:37:36 -07:00
Alex Huang ee2670edc7 Some operations on the lock table allowed through jmx 2011-07-06 16:10:18 -07:00
Alex Huang 7e9836dfd0 Some operations on the lock table allowed through jmx 2011-07-06 16:09:05 -07:00
Sheng Yang 62ac899091 bug 9154: Initial check in for enabling redundant virtual router
This patch enables redundant virtual routers.

1. To enable this feature, the DB needs to be updated using the following SQL for now (we
will add a UI way later):

UPDATE network_offerings SET redundant_router=1 WHERE guest_type="Virtual" AND
system_only=0;

2. The system tries to start the two routers on different hosts. But if there is
only one host in the zone, both routers are started on it.

3. The failover part uses keepalived, and the connection tracking part uses
conntrackd. There is one master router and one backup router. The status of a
router (master or backup) can now be queried from the database table domain_router.
The management server updates the status every 30 seconds by default.

4. The routers for the same zone use the same external NIC (same IP and MAC).
The script used for fail-over ensures only one external NIC is present in the
network at any time.

5. Currently the management server does not have the ability to stop one of the
routers if both of them report as master. That feature is on the to-do list.

After the two routers start up, disconnecting either of them should not affect
the guest network, and established connections (http, ssh, etc.) should still
work. The gateway fail-over should take 3~4 seconds.

Currently the patch works with KVM. VMware and XenServer support will follow soon.
2011-06-07 14:47:45 -07:00
Alex Huang d9e0bcfa1e bug 10126: Renamed getPodId() to getPodIdToDeployIn() 2011-06-03 22:17:08 -07:00
Alex Huang 5ce631e9d7 Separated resource management and agent management code. It's not all done but at least we make a first step 2011-05-16 10:55:18 -07:00
Alex Huang fba1c95512 bug 9615: Part of the HA cleanup 2011-05-03 16:34:53 -07:00
Alex Huang 8c8354a00e bug 8745: we decided not to implement revert on the agent because it really requires business logic above. Stop if checkSsh doesn't work 2011-05-02 14:47:49 -07:00
prachi 209be1065b Bug 9585 - Existing Data Disk is being destroyed and recreated on Stop and Start of a User VM.
Changes:
- When the ROOT volume of a VM is found to be READY, changed the planner to reuse the pool for every volume (root or data) that is READY and whose pool is not in maintenance and not in the avoid state (see the sketch below)
- If the ROOT volume is not ready, we don't care about the DATA disk; both get re-allocated.
- When a pool is reused for a ready volume, the planner does not call the storage pool allocators, and such volumes are not assigned a pool in the deployment destination returned by the planner. Accordingly, the StorageManager::prepare method won't recreate these volumes since they are not mentioned in the destination.
2011-04-27 11:36:51 -07:00
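A minimal sketch of the pool-reuse rule described above (hypothetical names):

    class VolumePoolReuse {
        record Pool(boolean inMaintenance, boolean avoided) {}
        record Volume(boolean ready, Pool pool) {}

        // A READY volume keeps its current pool unless the pool is in
        // maintenance or in the avoid set; such volumes are left out of
        // the deploy destination, so StorageManager never recreates them.
        static boolean reuseExistingPool(Volume v) {
            return v.ready()
                    && v.pool() != null
                    && !v.pool().inMaintenance()
                    && !v.pool().avoided();
        }
    }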
prachi b84a7477f0 Bug 9539 - cpu.overprovisioning.factor does not work
Changes:
- Changed host allocators/planner to use cpu.overprovisioning.factor (see the sketch below)
- Removed the following behavior: while adding a new host, we were setting total_cpu in op_host_capacity to actual_cpu * cpu.overprovisioning.factor. Now we set it to actual_cpu.
- The listCapacities response now calculates the total CPU as actual * cpu.overprovisioning.factor. (This change does not add anything new - listCapacities previously pulled total CPU from the op_host_capacity DB, which already had the cpu.overprovisioning.factor applied. Now we apply it over the DB entry.)
- HostResponse has a new field, 'cpuWithOverprovisioning', that returns the CPU after applying the cpu.overprovisioning.factor

- Db Upgrade 222 to 224 now updates the total_cpu in op_host_capacity to be the actual_cpu for each Routing host.
2011-04-22 18:09:31 -07:00
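The capacity math described above, as a minimal sketch: the DB now stores the actual CPU, and the overprovisioning factor is applied wherever capacity is computed or reported:

    class CpuOverprovisioning {
        // e.g. 8000 MHz of actual CPU with a factor of 2.0 is reported
        // as 16000 MHz of allocatable CPU.
        static long totalCpuWithOverprovisioning(long actualCpuMhz, double factor) {
            return (long) (actualCpuMhz * factor);
        }
    }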
Frank 92155522f2 Add license header to files 2011-04-14 11:23:14 -07:00
prachi b1700af146 Bug 9387: Recreate system vms if template id changed....
Changes:
While starting a System VM:
- If the ROOT volume is READY, we check whether the volume's templateId matches the system VM's template.
- If it does not match, we update the volume's templateId and ask the deployment planner to reassign a pool to this volume even though it is READY (see the sketch below).

In general:
- If a root volume is READY, we remove its entry from the deploy destination before calling StorageManager::prepare()
- StorageManager creates a volume if a pool is assigned to it in the deploy destination passed to it.
- If a volume has no pool assigned to it in the deploy destination, it means the volume is ready and already has a pool allocated to it.
2011-04-13 13:47:07 -07:00
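A minimal sketch of the start-time check described above (hypothetical names):

    class SystemVmVolumeCheck {
        record RootVolume(boolean ready, long templateId) {}

        // A READY root volume is still re-planned (pool reassigned, volume
        // recreated) if it was built from an outdated system VM template,
        // which is how the system template gets updated automatically.
        static boolean reassignPool(RootVolume root, long systemVmTemplateId) {
            if (!root.ready()) {
                return true; // not ready: storage must be planned anyway
            }
            return root.templateId() != systemVmTemplateId;
        }
    }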
prachi 47c31a077a Bug 9387 - Recreate system vms if template id changed...
Changes:
- The planner must reassign the storage pool if the template id for system VMs has changed. StorageManager must then recreate the volume if the volume has been reassigned. This is needed for automatic update of the system template.
2011-04-12 18:19:58 -07:00
Frank 105db3b15a Merge branch 'baremetal' to master
modifies:
	api/src/com/cloud/api/ApiConstants.java
	api/src/com/cloud/api/commands/AddHostCmd.java
	api/src/com/cloud/api/commands/CreatePodCmd.java
	api/src/com/cloud/api/commands/DeployVMCmd.java
	api/src/com/cloud/dc/Pod.java
	api/src/com/cloud/network/NetworkService.java
	server/src/com/cloud/agent/manager/AgentManagerImpl.java
	server/src/com/cloud/configuration/ConfigurationManagerImpl.java
	server/src/com/cloud/dc/HostPodVO.java
	server/src/com/cloud/network/NetworkManager.java
	server/src/com/cloud/network/NetworkManagerImpl.java
	server/src/com/cloud/vm/UserVmManagerImpl.java
	setup/db/create-schema.sql
	utils/src/com/cloud/utils/SerialVersionUID.java
2011-04-11 14:21:41 -07:00
Alex Huang b2eda8c71b Changes to the planners 2011-03-28 09:48:33 -07:00
Frank cdaa1edfa5 Bug 8208 - bare metal provisioning
Set dhcp range of linmin DHCPD to empty, so it will not conflict with
our External DHCP
2011-03-24 16:50:23 -07:00
Frank 8aa0ab99da Bug 8208 - bare metal provisioning
Start the VM on the host it last ran on if the VM has a lastHostId
2011-03-24 12:56:56 -07:00
prachi 923f562aa8 Bug 6873: disable/enable mode for clusters (and pods and zones and hosts)
- Added a new flag 'allocation_state' to zone, pod, cluster and host
- The possible values for this flag are 'Enabled' or 'Disabled'
- When a new zone, pod, cluster or host is added, allocation_state is 'Disabled' by default.
- For an existing zone, pod, cluster or host, the state is 'Enabled'.
- All Add/Update/List commands for each of zone, pod, cluster or host can now take a new parameter 'allocationstate'
- If 'allocation_state' is 'Disabled', allocators skip that zone, pod, cluster or host (see the sketch below).
- For a root admin, ListZones lists all zones including the 'Disabled' zones. But for any other user, the 'Disabled' zones are not included in the response.
- For any use case that creates/deploys/adds/registers a resource and takes a zone as a parameter, we now check whether the zone is 'Disabled'. If it is, the operation can be performed only by the root admin. Add volume, snapshot, and templates are examples of this use case.
- To enable the root admin to test a particular pod/cluster/host, the deployVM command takes a 'host_id' parameter that can be passed only by the root admin.
If this parameter is passed by the admin, allocators do not search for hosts and use only that host; storage pools are searched in the cluster of that host.
If the VM cannot be deployed to that host, deployVM fails without retrying
2011-03-23 22:15:35 -07:00
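A minimal sketch of the allocator-side skip described above (hypothetical names):

    enum AllocationState { Enabled, Disabled }

    class AllocationStateFilter {
        // Allocators skip any zone/pod/cluster/host whose allocation_state
        // is Disabled, unless the root admin pinned a specific host via
        // the deployVM 'host_id' parameter for testing.
        static boolean consider(AllocationState state, boolean pinnedByRootAdmin) {
            return pinnedByRootAdmin || state == AllocationState.Enabled;
        }
    }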
Frank 617ef5c178 Bug 8208 - bare metal provisioning
set pod to external dhcp server when adding external dhcp server
2011-03-21 16:39:49 -07:00
Frank d5abb202ec Bug 8208 - bare metal provisioning
don't inherit from FirstFitPlanner; use our baremetal planner instead
2011-03-21 10:48:21 -07:00
prachi 3624fee85d Changed the interface in StoragePoolAllocator to avoid a potential NPE in LocalStoragePoolAllocator. Allocators were taking in an instance of VM enclosed inside VirtualMachineProfile.
However, in the case of createVolume from a snapshot, there is no VM associated. So the VM passed is null, and this can cause an NPE.

Allocators hardly use the VM instance. LocalStoragePoolAllocator was mainly using it to check whether the host has capacity. But it need not do this check, since that is done by the HostAllocators anyway.
So the use of the VM is removed from the StoragePoolAllocators (see the sketch below).
2011-03-09 10:12:04 -08:00
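A minimal sketch of the resulting interface shape (illustrative names, not CloudStack's exact API): the allocator receives only the disk profile and plan, so a null VM during createVolume-from-snapshot can no longer cause an NPE:

    import java.util.List;
    import java.util.Set;

    interface StoragePoolAllocatorSketch {
        record DiskProfile(long sizeBytes, String tags) {}
        record DeploymentPlan(long dcId, Long podId, Long clusterId) {}

        // No VirtualMachineProfile parameter: nothing to dereference, no NPE.
        List<Long> allocateTo(DiskProfile disk, DeploymentPlan plan, Set<Long> avoidPools);
    }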
Frank 6c819c1491 Merge branch 'bareMetal'
Conflicts:
	api/src/com/cloud/api/ApiConstants.java
	api/src/com/cloud/api/commands/DeployVMCmd.java
	api/src/com/cloud/offering/ServiceOffering.java
	api/src/com/cloud/vm/UserVmService.java
	client/tomcatconf/components.xml.in
	server/src/com/cloud/agent/manager/AgentManagerImpl.java
	server/src/com/cloud/configuration/DefaultComponentLibrary.java
	server/src/com/cloud/deploy/FirstFitPlanner.java
	server/src/com/cloud/service/ServiceOfferingVO.java
	server/src/com/cloud/vm/UserVmManagerImpl.java
	server/src/com/cloud/vm/VirtualMachineManagerImpl.java
2011-03-08 14:18:11 -08:00
Frank 7fa053370e Bug 8208 - bare metal provisioning
Add bare metal planner
2011-03-01 17:47:37 -08:00
Edison Su 53eb46dc2a Add local storage support for kvm 2011-03-01 19:51:43 -05:00
prachi c1f0aef550 More changes for Bug 7845 - Productize DeploymentPlanner
- After applying the user-concentrated pod heuristic, the order of clusters within a pod (based on capacity) should remain intact
- The only change should be which pod's clusters are considered first (see the sketch below)
2011-02-28 17:25:52 -08:00
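A minimal sketch of the constraint described above (hypothetical names): pods are reordered by the account's Running VM count, while the capacity-based cluster order within each pod is preserved:

    import java.util.Comparator;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    class UserConcentratedPodOrder {
        record Cluster(long podId, long id) {}

        static List<Cluster> reorder(List<Cluster> capacityOrdered,
                                     Map<Long, Long> runningVmsPerPod) {
            // Group clusters by pod, preserving their capacity order.
            Map<Long, List<Cluster>> byPod = capacityOrdered.stream()
                    .collect(Collectors.groupingBy(Cluster::podId,
                            LinkedHashMap::new, Collectors.toList()));
            // Pods with more Running VMs for the account come first; the
            // cluster order inside each pod is untouched.
            return byPod.entrySet().stream()
                    .sorted(Comparator.comparingLong(
                            (Map.Entry<Long, List<Cluster>> e) ->
                                    runningVmsPerPod.getOrDefault(e.getKey(), 0L))
                            .reversed())
                    .flatMap(e -> e.getValue().stream())
                    .toList();
        }
    }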