Commit Graph

73 Commits

Author SHA1 Message Date
Prachi Damle 057097d0e0 Planner should set the pool information in the destination for volumes that are not yet ready. 2013-01-23 17:31:00 -08:00
Prachi Damle aa7b3e0f6d Renaming VmInstanceVO: dataCenterIdToDeployIn to dataCenterId
Corresponding getter/setter is renamed too.

The reason is that GenericDao does not update a field unless the accessor method name matches the field name; the setter of this VO was one such case.
2013-01-22 12:56:39 -08:00
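A minimal sketch of the naming convention this rename works around, using a hypothetical VO rather than the real VmInstanceVO: GenericDao only picks up updates to a field when the getter/setter names line up with the field name.

    // Hypothetical VO (not the actual VmInstanceVO) illustrating the convention
    // that GenericDao relies on: accessor names must match the field name.
    public class ExampleInstanceVO {
        // Renamed from dataCenterIdToDeployIn to dataCenterId so the accessors
        // below match the field and the DAO persists changes made via the setter.
        private long dataCenterId;

        public long getDataCenterId() {
            return dataCenterId;
        }

        public void setDataCenterId(long dataCenterId) {
            this.dataCenterId = dataCenterId;
        }
    }
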
Prachi Damle 94e8090bf3 Deploy, Start, Stop, Destroy VM orchestration service changes 2013-01-22 12:54:04 -08:00
Kelven Yang 2be270de89 Separate loadable components like Gurus, Elements, Adapters to componentContext.xml 2013-01-16 16:33:59 -08:00
Alex Huang 56e5fbdee2 removed import of componentlocator and inject from all files 2013-01-10 11:44:47 -08:00
Alex Huang 30f2565d98 Merge branch 'api_refactoring' into javelin 2013-01-08 12:36:04 -08:00
Kelven Yang 25d14418b9 Replace Adapters<T> with standard List<T> to work with Spring injection 2013-01-03 13:33:52 -08:00
Rohit Yadav 5edfc2760a refactor: remove redundant imports, fix trailing chars 2012-12-03 13:54:37 -08:00
Kelven Yang cea8f3bf37 Switch the inject annotation to javax and let ComponentLocator recognize both the new and the original inject annotation 2012-11-07 15:03:22 -08:00
Kelven Yang aab02e2743 Add Spring annotation to major components 2012-11-07 14:53:39 -08:00
Nitin Mehta a50cf618ec bug CS-15278: For removing clusters crossing the threshold, find the list of clusters through the DB instead of iterating over clusters one by one in the Java code. 2012-08-13 16:20:57 +05:30
Mice Xia a74687128e Fix bug CS-15679 Max guest limit of hypervisor capabilities does not work properly 2012-08-10 16:50:47 +08:00
Edison Su 50ffa95f63 Fix CS-15609: Volumes can be created as part of VM creation when
unallocated space is insufficient on primary storage

Check the availability of unallocated primary storage space during the
planning stage for the multiple-volume VM creation scenario.
- StorageManagerImpl.java and StorageManager.java: add a new method
storagePoolHasEnoughSpace(List<Volumes>, StoragePool) that checks whether the
storage pool has enough space for all requested volumes
- FirstFitPlanner.findPotentialDeploymentResources: handle the multiple-volume
case, keep track of the volumes allocated to each pool, and call
storagePoolHasEnoughSpace to check space availability
- AbstractStoragePoolAllocator.java: extract the capacity computation logic
into a new method in StorageManagerImpl

RB: https://reviews.apache.org/r/6028/
Send-by: mice_xia@tcloudcomputing.com
2012-07-26 10:10:18 -07:00
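A rough sketch of the space check this commit describes, assuming the pool exposes its total and already-allocated capacity; the real method in StorageManagerImpl works against CloudStack's Volume and StoragePool types and also accounts for over-provisioning, which is omitted here.

    import java.util.List;

    class StoragePoolSpaceCheck {
        // Returns true if the pool's unallocated capacity can accommodate every
        // requested volume of the deployment (sizes in bytes).
        static boolean storagePoolHasEnoughSpace(List<Long> requestedVolumeSizes,
                                                 long poolCapacityBytes,
                                                 long poolAllocatedBytes) {
            long totalRequested = 0;
            for (long size : requestedVolumeSizes) {
                totalRequested += size;
            }
            return poolAllocatedBytes + totalRequested <= poolCapacityBytes;
        }
    }
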
David Nalley e87558256c Patch from Chip Childers
https://reviews.apache.org/r/5704/
License header updates for the server folder
2012-07-02 09:51:21 -04:00
Alena Prokharchyk 98fd5cf959 bug 14622: introduced ha tagging for host
status 14622: resolved fixed

Conflicts:

	server/src/com/cloud/host/dao/HostDao.java
2012-04-09 15:18:01 -07:00
frank 2f634c0913 Switch to Apache license 2012-04-03 04:50:05 -07:00
prachi 996110c928 Bug 14000 - Neither Admin nor regular user can create a VM when the Pod is disabled
Bug 14006 - Admin could not create a VM when the cluster is Disabled

Changes:
- For Root admin, planner will not filter out the disabled pods or clusters from the resource list
2012-03-05 17:04:22 -08:00
prachi e37732c4de Bug 13766 - VMs are still running after disabling the zone
Reviewed-by: Sheng Yang

Changes:
- Do not check if allocation_state is 'Enabled' in planner if the caller is Root Admin.
- This should let Root Admin create a VM in a disabled Zone.
2012-02-22 17:34:00 -08:00
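Taken together with Bug 14000 above, the planner-side rule reduces to roughly the sketch below (hypothetical names, not the actual FirstFitPlanner code): the allocation-state filter is simply skipped when the caller is the Root Admin.

    enum AllocationState { Enabled, Disabled }

    class AllocationStateFilter {
        // Regular users may only deploy to Enabled zones/pods/clusters; the Root
        // Admin bypasses the filter and can also use disabled resources.
        static boolean isUsableBy(AllocationState state, boolean callerIsRootAdmin) {
            return callerIsRootAdmin || state == AllocationState.Enabled;
        }
    }
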
prachi eb43f780f7 Bug 13824 - VM deployment fails due to volume not being recreatable
Reviewed-By: Alex

Changes:
- Reuse the same storage pool where the Volume is ready on each retry of VM deployment, as long as the cluster where the volume resides has capacity
- Once that cluster is out of capacity, we look in other clusters and find a new storage pool.
- At this point, if the volume is recreatable on the new storage pool, deployment will succeed provided everything else goes through
- But if the volume is not recreatable and its cluster is out of capacity, we will still fail to deploy the VM
2012-02-16 17:04:25 -08:00
prachi be04ff861c Bug 13078 - Management Server does not respect selected OS Preference for any host within a single pod.
Changes:
- Once the HostAllocators have listed suitable hosts, the planner should not reshuffle the list, since that would lose the prioritization applied by the HostAllocators.
- E.g.: HostAllocators put the host that matches the guest OS category first. If the planner shuffles the list, that preference is lost.
2012-01-19 16:59:38 -08:00
kishan e2cb4f94d6 bug 12337: Encrypt only password in host_detail table. Removed unused and duplicate references of HostDetailDao
status 12337: resolved fixed
reviewed-by: Abhi
2011-12-20 19:28:41 +05:30
prachi 8134cfb4c8 Bug 11131 - Allocators: when vm fails to deploy in DataCenter, we should never retry in the same DC again as the DC is already in Avoid set
Changes:
- Added a check in the planner to verify that the datacenter is not in the avoid set. If it is found in the avoid set, the planner does not proceed.
2011-11-23 17:40:20 -08:00
prachi 313e6ca284 Bug 8791 user dispersing allocator
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- Planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to either of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, which shuffles the resource lists.
- Now the Admin can choose whether the deployment heuristic should be applied starting at the cluster or pod level. This is controlled by the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, by default, as before, the planner starts at clusters directly.

'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters when this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner
- It reorders the capacity-ordered clusters/pods such that the pods with more Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.

'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters that have fewer Running VMs for the given account
- The Admin can provide weights for capacity and user dispersion so that both parameters are considered when reordering the pods/clusters. This is done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1. Thus, if this planner is chosen, ordering is by default done only by the number of Running VMs, unless the weight is changed.
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of Running VMs / Ready volumes, respectively, for the given account, so they try to choose the host or pool within a cluster with fewer VMs for the account.
2011-11-17 18:29:39 -08:00
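A simplified sketch of the dispersion-weighted reordering described above; the names are hypothetical, and the real UserDispersingPlanner pulls the capacity ordering and per-account Running VM counts from the DAOs. 'dispersionWeight' stands in for 'vm.user.dispersion.weight', so a weight of 1 orders purely by Running VM count, matching the default behaviour noted above.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    class UserDispersionReordering {
        // Lower score is tried first. capacityRank is the cluster's position in the
        // capacity-ordered list; runningVmsForAccount is the account's Running VM
        // count in that cluster.
        static void reorder(List<Long> clusterIds,
                            Map<Long, Integer> capacityRank,
                            Map<Long, Integer> runningVmsForAccount,
                            double dispersionWeight) {
            clusterIds.sort(Comparator.comparingDouble(id ->
                    (1.0 - dispersionWeight) * capacityRank.get(id)
                            + dispersionWeight * runningVmsForAccount.get(id)));
        }
    }
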
Alex Huang f6fcaa49ec Merge complete except for virtualnetworkappliancemanager 2011-11-10 15:18:16 -08:00
Nitin 2b370ab535 bug 10657: Introducing cluster-level global thresholds for CPU and RAM so that allocation of these resources does not go beyond the thresholds. The reason is that, if the admin needs to perform maintenance, they don't have to add new machines or keep some on standby when the entire zone/pod/cluster is at 100% allocated capacity. Also introducing pool-level global thresholds for allocated storage. Other changes include a DB upgrade and the introduction of a transaction. 2011-10-29 16:51:37 +05:30
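The threshold check itself amounts to something like the following sketch (hypothetical names; the actual thresholds are global configuration values and the ratios come from the capacity tables): a cluster whose allocated CPU or RAM ratio crosses its threshold is skipped for new allocations.

    class ClusterThresholdCheck {
        // Ratios and thresholds are fractions in [0, 1], e.g. 0.85 for 85%.
        static boolean crossesAllocationThreshold(double allocatedCpuRatio,
                                                  double allocatedRamRatio,
                                                  double cpuThreshold,
                                                  double ramThreshold) {
            return allocatedCpuRatio > cpuThreshold || allocatedRamRatio > ramThreshold;
        }
    }
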
frank cef30956e9 Merge branch 'newagentmgr'
Conflicts:
	agent-simulator/src/com/cloud/api/commands/ConfigureSimulator.java
	ovm/src/com/cloud/ovm/hypervisor/OvmDiscoverer.java
	server/src/com/cloud/agent/manager/AgentManagerImpl.java
	server/src/com/cloud/capacity/CapacityManagerImpl.java
	server/src/com/cloud/network/F5BigIpManagerImpl.java
	server/src/com/cloud/network/JuniperSrxManagerImpl.java
	server/src/com/cloud/resource/ResourceManagerImpl.java
	server/src/com/cloud/server/ManagementServerImpl.java
	server/src/com/cloud/storage/snapshot/SnapshotManagerImpl.java
	server/src/com/cloud/vm/UserVmManagerImpl.java
	server/src/com/cloud/vm/VirtualMachineManagerImpl.java
	utils/src/com/cloud/utils/db/GenericDao.java
2011-10-27 11:09:56 -07:00
frank 30f95e638a Bug 11522 - New agent manager
1. get rid of host allocation state
2. remove Updating status from agent status
2011-10-24 16:49:32 -07:00
alena 219978a9be Create network using physical network id 2011-10-20 18:25:13 -07:00
prachi e0a179752d Bug 11617: Ensure the Deployment planner is choosing clusters based on aggregate capacity
Changes:
- We were ordering clusters based on the capacity of the first-fit host found in each cluster. Due to this, there were cases where we deployed VMs to one cluster instead of balancing across clusters.
- Now we order the list of clusters by aggregate capacity and, in that order, choose the ones that have enough capacity for the required VM.
- This should balance the load between clusters instead of overloading one.

Conflicts:

	server/src/com/cloud/capacity/dao/CapacityDao.java
	server/src/com/cloud/capacity/dao/CapacityDaoImpl.java
2011-10-03 15:37:38 -07:00
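A condensed sketch of the aggregate-capacity ordering described above (hypothetical types; in the real change the aggregation is computed by CapacityDao in SQL): clusters are filtered to those whose combined free capacity fits the VM and then ordered so the cluster with the most aggregate headroom is tried first.

    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;

    class AggregateCapacityOrdering {
        static class ClusterCapacity {
            final long clusterId;
            final long aggregateFreeCpu;   // sum of free CPU across hosts in the cluster
            final long aggregateFreeRam;   // sum of free RAM across hosts in the cluster

            ClusterCapacity(long clusterId, long aggregateFreeCpu, long aggregateFreeRam) {
                this.clusterId = clusterId;
                this.aggregateFreeCpu = aggregateFreeCpu;
                this.aggregateFreeRam = aggregateFreeRam;
            }
        }

        // Keep only clusters that can fit the VM, ordered by aggregate free CPU descending.
        static List<Long> orderByAggregateCapacity(List<ClusterCapacity> clusters,
                                                   long requiredCpu, long requiredRam) {
            return clusters.stream()
                    .filter(c -> c.aggregateFreeCpu >= requiredCpu && c.aggregateFreeRam >= requiredRam)
                    .sorted(Comparator.comparingLong((ClusterCapacity c) -> c.aggregateFreeCpu).reversed())
                    .map(c -> c.clusterId)
                    .collect(Collectors.toList());
        }
    }
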
prachi 4ad9ac5e71 Bug 11200 - maximum number of guests per host
Changes:

To make sure migration does not attempt to pick a host that is already running more VMs than the maximum guest VM limit:

- Changed manual migration to call the host allocators to return a list of hosts suitable for migration. The host allocators check the max guest VM limit.
- Earlier we returned hosts with enough capacity, but now the host allocators make other checks along with capacity. So the list of hosts returned contains hosts that have enough capacity AND satisfy all other conditions such as host tags, the max guests limit, etc. In other words, the allocators don't return hosts that don't satisfy all conditions even if they have capacity.
- Therefore, we now mark the list of hosts returned for manual migration as 'suitable' hosts instead of 'hasenoughCapacity' in the HostResponse.
- HA migration already calls the allocators, so no change is needed there.
2011-09-08 18:08:31 -07:00
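A minimal sketch of the kind of suitability test a host allocator applies per this change (hypothetical names): besides raw capacity, a host is only 'suitable' for migration if accepting one more guest keeps it within the hypervisor's maximum-guest limit.

    class HostSuitabilityCheck {
        static boolean isSuitableForMigration(long freeCpu, long freeRam,
                                              long requiredCpu, long requiredRam,
                                              int runningGuests, int maxGuestsLimit) {
            boolean hasCapacity = freeCpu >= requiredCpu && freeRam >= requiredRam;
            boolean staysUnderGuestLimit = runningGuests + 1 <= maxGuestsLimit;
            // Host tags and other allocator conditions would also be checked here.
            return hasCapacity && staysUnderGuestLimit;
        }
    }
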
Kelven Yang a51383e296 bug 11219: use local storage flag from service offering when it is ROOT disk 2011-08-24 15:17:53 -07:00
Nitin 4b6d072490 bug 10972: Improve logging - put in the hostid and hostname on which the vm launches.
status 10972: resolved fixed
2011-08-24 18:04:46 +05:30
alena 8a7feb8ec1 Merge branch '2.2.y'
Conflicts:
	agent/src/com/cloud/agent/resource/computing/LibvirtComputingResource.java
	api/src/com/cloud/agent/api/routing/LoadBalancerConfigCommand.java
	api/src/com/cloud/agent/api/to/FirewallRuleTO.java
	api/src/com/cloud/agent/api/to/IpAddressTO.java
	api/src/com/cloud/agent/api/to/PortForwardingRuleTO.java
	api/src/com/cloud/api/ApiConstants.java
	api/src/com/cloud/api/BaseCmd.java
	api/src/com/cloud/api/ResponseGenerator.java
	api/src/com/cloud/api/commands/CreateFirewallRuleCmd.java
	api/src/com/cloud/api/commands/CreateIpForwardingRuleCmd.java
	api/src/com/cloud/api/commands/CreateLoadBalancerRuleCmd.java
	api/src/com/cloud/api/commands/CreatePortForwardingRuleCmd.java
	api/src/com/cloud/api/commands/DeleteLoadBalancerRuleCmd.java
	api/src/com/cloud/api/commands/ListCapabilitiesCmd.java
	api/src/com/cloud/api/commands/UpdateNetworkCmd.java
	api/src/com/cloud/api/response/CapabilitiesResponse.java
	api/src/com/cloud/network/Network.java
	api/src/com/cloud/network/NetworkService.java
	api/src/com/cloud/network/firewall/FirewallService.java
	api/src/com/cloud/network/lb/LoadBalancingRule.java
	api/src/com/cloud/network/lb/LoadBalancingRulesService.java
	api/src/com/cloud/network/rules/FirewallRule.java
	api/src/com/cloud/network/rules/RulesService.java
	api/src/com/cloud/offering/NetworkOffering.java
	client/tomcatconf/commands.properties.in
	cloud.spec
	core/src/com/cloud/agent/resource/virtualnetwork/VirtualRoutingResource.java
	core/src/com/cloud/hypervisor/xen/resource/CitrixHelper.java
	core/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java
	core/src/com/cloud/storage/template/DownloadManagerImpl.java
	core/src/com/cloud/vm/DomainRouterVO.java
	debian/cloud-deps.install
	patches/systemvm/debian/config/etc/init.d/cloud-early-config
	patches/systemvm/debian/config/root/ipassoc.sh
	patches/systemvm/debian/config/root/loadbalancer.sh
	scripts/vm/hypervisor/kvm/rundomrpre.sh
	scripts/vm/hypervisor/xenserver/vmops
	server/src/com/cloud/agent/manager/AgentAttache.java
	server/src/com/cloud/agent/manager/AgentManagerImpl.java
	server/src/com/cloud/agent/manager/AgentMonitor.java
	server/src/com/cloud/agent/manager/ClusteredAgentManagerImpl.java
	server/src/com/cloud/alert/ClusterAlertAdapter.java
	server/src/com/cloud/api/ApiResponseHelper.java
	server/src/com/cloud/api/ApiServer.java
	server/src/com/cloud/cluster/ClusterManagerImpl.java
	server/src/com/cloud/configuration/Config.java
	server/src/com/cloud/configuration/ConfigurationManager.java
	server/src/com/cloud/configuration/ConfigurationManagerImpl.java
	server/src/com/cloud/configuration/DefaultComponentLibrary.java
	server/src/com/cloud/deploy/FirstFitPlanner.java
	server/src/com/cloud/ha/HighAvailabilityManagerImpl.java
	server/src/com/cloud/host/dao/HostDaoImpl.java
	server/src/com/cloud/hypervisor/xen/discoverer/XcpServerDiscoverer.java
	server/src/com/cloud/network/LoadBalancerVO.java
	server/src/com/cloud/network/NetworkManager.java
	server/src/com/cloud/network/NetworkManagerImpl.java
	server/src/com/cloud/network/dao/FirewallRulesDao.java
	server/src/com/cloud/network/dao/FirewallRulesDaoImpl.java
	server/src/com/cloud/network/element/DhcpElement.java
	server/src/com/cloud/network/element/VirtualRouterElement.java
	server/src/com/cloud/network/firewall/FirewallManagerImpl.java
	server/src/com/cloud/network/lb/LoadBalancingRulesManagerImpl.java
	server/src/com/cloud/network/router/VirtualNetworkApplianceManager.java
	server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
	server/src/com/cloud/network/rules/FirewallManager.java
	server/src/com/cloud/network/rules/FirewallRuleVO.java
	server/src/com/cloud/network/rules/PortForwardingRuleVO.java
	server/src/com/cloud/network/rules/RulesManagerImpl.java
	server/src/com/cloud/network/rules/StaticNatRuleImpl.java
	server/src/com/cloud/network/security/SecurityGroupListener.java
	server/src/com/cloud/network/security/SecurityGroupManagerImpl.java
	server/src/com/cloud/offerings/NetworkOfferingVO.java
	server/src/com/cloud/server/ConfigurationServerImpl.java
	server/src/com/cloud/server/ManagementServerImpl.java
	server/src/com/cloud/storage/StorageManager.java
	server/src/com/cloud/storage/StorageManagerImpl.java
	server/src/com/cloud/storage/dao/VMTemplateHostDaoImpl.java
	server/src/com/cloud/storage/download/DownloadMonitorImpl.java
	server/src/com/cloud/upgrade/DatabaseUpgradeChecker.java
	server/src/com/cloud/upgrade/dao/Upgrade228to229.java
	server/src/com/cloud/upgrade/dao/Upgrade229to2210.java
	server/src/com/cloud/user/AccountManagerImpl.java
	server/src/com/cloud/vm/UserVmManagerImpl.java
	server/src/com/cloud/vm/VirtualMachineManagerImpl.java
	server/src/com/cloud/vm/dao/DomainRouterDao.java
	server/src/com/cloud/vm/dao/DomainRouterDaoImpl.java
	setup/db/create-index-fk.sql
	setup/db/create-schema.sql
	setup/db/db/schema-222to224.sql
	setup/db/db/schema-227to228.sql
	setup/db/db/schema-228to229.sql
	setup/db/db/schema-229to2210.sql
	tools/testClient/README
	ui/scripts/cloud.core.instance.js
	utils/src/com/cloud/utils/SerialVersionUID.java
	utils/src/com/cloud/utils/db/ConnectionConcierge.java
	utils/src/com/cloud/utils/db/Merovingian2.java
	utils/src/com/cloud/utils/db/Transaction.java
	utils/src/com/cloud/utils/nio/Link.java
	utils/src/com/cloud/utils/nio/NioConnection.java
	utils/src/com/cloud/utils/time/InaccurateClock.java
2011-08-22 20:28:30 -07:00
prachi 8755044ea9 Merge Bug 11186 from 2.2.8mango
Bug 11186- Cannot restart existing VM if the cluster is disabled after the
VM has been created

Changes:
- We should not check the cluster 'allocation_state' while starting an
existing VM provided it already has storage allocated. But if volumes are
deleted and new storage needs to be allocated, then we will not allow the VM
to start.
- However, we should still prohibit adding new VMs to that cluster.
2011-08-20 21:12:15 -07:00
prachi 3d4767c94c Merge Bug 11186 from 2.2.8mango
Bug 11186- Cannot restart existing VM if the cluster is disabled after the
VM has been created

Changes:
- We should not check the cluster 'allocation_state' while starting an
existing VM provided it already has storage allocated. But if volumes are
deleted and new storage needs to be allocated, then we will not allow the VM
to start.
- However, we should still prohibit adding new VMs to that cluster.
2011-08-20 21:02:09 -07:00
alena 812a1f3f7b Fixed the bug in allocator where cluster was added to avoid set as pod 2011-08-15 10:44:07 -07:00
alena 3945eec0df Fixed the bug in allocator where cluster was added to avoid set as pod 2011-08-15 10:43:59 -07:00
Alex Huang 7ac3c818a9 bug 11079: fixed a bug with autoboxing 2011-08-12 14:14:15 -07:00
Alex Huang 429c1a0a18 changed a bunch of map logs to trace 2011-07-22 18:14:02 -07:00
Alex Huang 18fa544da1 changed a bunch of map logs to trace 2011-07-22 18:13:24 -07:00
Alex Huang 1d2a529556 put big log trace in firstfit planner into trace instead of debug 2011-07-11 14:37:52 -07:00
Alex Huang f5d5ed5dce put big log trace in firstfit planner into trace instead of debug 2011-07-11 14:37:36 -07:00
Alex Huang ee2670edc7 Some operations on the lock table allowed through jmx 2011-07-06 16:10:18 -07:00
Alex Huang 7e9836dfd0 Some operations on the lock table allowed through jmx 2011-07-06 16:09:05 -07:00
Sheng Yang 62ac899091 bug 9154: Initial check in for enabling redundant virtual router
This patch enable redundant virtual routers.

1. To enable this feature, the DB needs to be updated using the following SQL for now (we
will provide a UI for this later):

UPDATE network_offerings SET redundant_router=1 WHERE guest_type="Virtual" AND
system_only=0;

2. The system would try to start up the two routers on different hosts. But if there is
only one host in the zone, the system would start up both routers on it.

3. The failover part uses keepalived, and the connection-tracking part uses
conntrackd. There would be one master router and one backup router. The status
of the router (master or backup) can now be queried from the database table
domain_router. The management server updates the status every 30s by default.

4. The routers for the same zone would use the same external NIC (same IP and MAC).
The script used for failover ensures that only one external NIC is present in the
network at any time.

5. Currently the management server does not have the ability to stop one of the routers if
both of them report as master. That feature is on the todo list.

After the two routers start up, disconnecting either of them should not affect the
guest network, and established connections (HTTP, SSH, etc.) should still
work. The failover of the gateway should take 3~4 seconds.

Currently the patch works with KVM. VMware and XenServer will be handled soon.
2011-06-07 14:47:45 -07:00
Alex Huang d9e0bcfa1e bug 10126: Renamed getPodId() to getPodIdToDeployIn() 2011-06-03 22:17:08 -07:00
Alex Huang 5ce631e9d7 Separated resource management and agent management code. It's not all done but at least we make a first step 2011-05-16 10:55:18 -07:00
Alex Huang fba1c95512 bug 9615: Part of the HA cleanup 2011-05-03 16:34:53 -07:00
Alex Huang 8c8354a00e bug 8745: we decided not to implement revert on the agent because it really requires business logic above. Stop if the checkSsh doesn't work 2011-05-02 14:47:49 -07:00
prachi 209be1065b Bug 9585 - Existing Data Disk is being destroyed and recreated on Stop and Start of a User VM.
Changes:
- When the ROOT volume of a VM is found to be READY, changed the planner to reuse the pool for every volume (root or data) that is READY and whose pool is not in maintenance and not in the avoid state
- If the ROOT volume is not ready, we don't care about the DATA disk; both would get re-allocated.
- When a pool is reused for a ready volume, the planner does not call the storage pool allocators, and such volumes are not assigned a pool in the deployment destination returned by the planner. Accordingly, the StorageManager::prepare method won't recreate these volumes since they are not mentioned in the destination.
2011-04-27 11:36:51 -07:00
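The reuse decision described above boils down to roughly the following sketch (hypothetical names, not the planner's actual code): a READY volume keeps its current pool as long as that pool is neither in maintenance nor in the avoid set, and such volumes are left out of the destination so that prepare does not recreate them.

    class ReadyVolumePoolReuse {
        // True when the planner should keep the volume on its existing pool and
        // skip the storage pool allocators for it.
        static boolean reuseExistingPool(boolean volumeReady,
                                         boolean poolInMaintenance,
                                         boolean poolInAvoidSet) {
            return volumeReady && !poolInMaintenance && !poolInAvoidSet;
        }
    }
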