Added two new resource types, CPU and Memory, to the existing pool of
resource types.
Added methods to set the limits on these resources via the updateResourceLimit
API command and to get counts via updateResourceCount. Also added calls in the
virtual machine life cycle to check these limits and to increment/decrement the new
resource counts.
Resource name :: Resource type number
CPU            :: 8
Memory         :: 9
Also added unit tests covering these changes.
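For illustration, a client request that sets a CPU limit might be built like the minimal sketch below; the endpoint, the domain UUID placeholder, and the omitted API-key signing are assumptions, and resourcetype 8 maps to CPU per the table above.

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class UpdateCpuLimitExample {
        public static void main(String[] args) {
            // Hypothetical endpoint; a real request must also carry apiKey and signature.
            String base = "http://localhost:8080/client/api";
            String query = String.join("&",
                    "command=updateResourceLimit",
                    "resourcetype=8",  // 8 = CPU, 9 = Memory (see table above)
                    "max=40",          // new limit for the domain
                    "domainid=" + URLEncoder.encode("<domain-uuid>", StandardCharsets.UTF_8));
            System.out.println(base + "?" + query);
        }
    }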
The new policy is:
1. Generate a random IP.
2. Find the next available IP, starting from the generated IP.
3. If no available IP is found after a certain number of retries (10000 by default,
controlled by network.ipv6.search.retry.max), give up.
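A minimal sketch of this search loop, assuming a simple allocated-set abstraction (the helper below is illustrative, not the actual implementation; BigInteger stands in for 128-bit IPv6 addresses):

    import java.math.BigInteger;
    import java.util.Random;
    import java.util.Set;

    public class Ipv6SearchSketch {
        static final int MAX_RETRY = 10000; // network.ipv6.search.retry.max default

        /** Returns a free IP in [rangeStart, rangeEnd], or null after MAX_RETRY probes. */
        static BigInteger findAvailableIp(BigInteger rangeStart, BigInteger rangeEnd,
                                          Set<BigInteger> allocated, Random rnd) {
            BigInteger size = rangeEnd.subtract(rangeStart).add(BigInteger.ONE);
            // Step 1: generate a (roughly uniform) random IP inside the range.
            BigInteger ip = rangeStart.add(new BigInteger(size.bitLength(), rnd).mod(size));
            // Step 2: scan forward from the random start, wrapping at the range end.
            for (int i = 0; i < MAX_RETRY; i++) {
                if (!allocated.contains(ip)) {
                    return ip;
                }
                ip = ip.equals(rangeEnd) ? rangeStart : ip.add(BigInteger.ONE);
            }
            return null; // Step 3: give up after MAX_RETRY attempts.
        }
    }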
The basic idea: deploy a fixed-size thread pool for updating RvR (redundant
virtual router) status, using a producer/consumer model. A global configuration,
router.check.poolsize (10 by default), controls the pool size.
A pool size of 100 for 1000 RvRs was tested with the simulator and works well.
We can also raise the global configuration option router.check.interval, e.g. to
60s from the default 30s, to mitigate the issue.
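The shape of the producer/consumer setup, as a minimal sketch (names are illustrative; the executor's internal work queue plays the consumer queue's role):

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class RvrStatusCheckSketch {
        public static void main(String[] args) {
            int poolSize = 10; // router.check.poolsize default
            ExecutorService workers = Executors.newFixedThreadPool(poolSize);
            List<String> routers = List.of("r-1", "r-2", "r-3"); // stand-in for the RvR list
            // Producer: the periodic check task submits one status-update job per router;
            // the pool's internal queue buffers them for the worker threads.
            for (String router : routers) {
                workers.submit(() -> checkRedundantState(router));
            }
            workers.shutdown();
        }

        // Consumer: each worker thread queries one router's MASTER/BACKUP state.
        static void checkRedundantState(String router) {
            System.out.println("checking redundant state of " + router);
        }
    }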
This is an improvement of:
commit 1ca493e4fa
Author: Sheng Yang <sheng.yang@cloud.com>
Date: Wed Feb 29 17:43:50 2012 -0800
bug 14042: Don't set dhcp:router option on DHCP server for non-default
network on CentOS/RHEL
The old solution only works on CentOS/RHEL; this one extends the capability to more
guest OSes and lets the user choose what the policy should be for each guest OS
type.
Support for local data disks. Currently the enable/disable config is at the zone level; in subsequent check-ins it can be made more granular.
Following changes are made:
- Create disk offering API now takes an extra parameter to denote the storage type (local or shared), similar to the storage type in a service offering (see the sketch after this list).
- Create/delete of data volume on local storage
- Attach/detach for local data volumes. Re-attach is allowed as long as the VM host and the data volume's storage pool host are the same.
- Migration of a VM instance is not supported if it uses local root or data volumes.
- Migrating local volumes is not supported.
- Zone level config to enable/disable local storage usage for service and disk offerings.
- Local storage gets discovered when a host is added/reconnected if the zone-level config is enabled. When disabled, existing local storage pools are not removed, but no new local storage is added.
- Deploy VM command validates service and disk offerings based on local storage config.
- Upgrade uses the global config 'use.local.storage' to set the zone level config for local storage.
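For illustration, a createDiskOffering request using the new parameter might look like the sketch below (an unsigned, hypothetical request; real calls need an apiKey and signature):

    public class CreateLocalDiskOfferingExample {
        public static void main(String[] args) {
            // Hypothetical endpoint; real requests need apiKey and signature as well.
            String url = "http://localhost:8080/client/api"
                    + "?command=createDiskOffering"
                    + "&name=local-data"
                    + "&displaytext=local-data-disk"
                    + "&disksize=10"          // size in GB
                    + "&storagetype=local";   // new parameter: 'local' or 'shared'
            System.out.println(url);
        }
    }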
(cherry picked from commit 62710aed37606168012a0ed255a876c8e7954010)
Description:
Code changes to manage Cisco Nexus 1000v in CloudStack.
VmwareResource has been modified to leverage Nexus vSwitch.
Providing the following global configuration parameters:
vmware.use.nexus.vswitch -
This would decide whether Nexus vSwitch in the VMware
cluster environment would be used/managed by CloudStack
for its network infrastructure needs.
vmware.guest.network.vswitch.type -
This setting would enable CloudStack to use Nexus vSwitch
in the VMware cluster environment for guest traffic.
vmware.private.network.vswitch.type -
This setting would enable CloudStack to use Nexus vSwitch
in the VMware cluster environment for private traffic.
vmware.public.network.vswitch.type -
This setting would enable CloudStack to use Nexus vSwitch
in the VMware cluster environment for public traffic.
Functional Specification -
http://wiki.cloudstack.org/display/RelOps/Cisco+Nexus+1000v+Support+in+CloudStack+-+Functional+Specification
Documentation / README for usage instructions -
http://wiki.cloudstack.org/display/RelOps/Configuration+instructions+for+CloudStack+Deployment+with+Nexus+vSwitch
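For orientation, the four settings might be configured together as in the sketch below (the values, in particular 'nexusdvs', are assumptions here; consult the linked README for the accepted values):

    import java.util.Map;

    public class NexusVswitchSettingsSketch {
        public static void main(String[] args) {
            // Hypothetical values; see the README above for the accepted ones.
            Map<String, String> globalSettings = Map.of(
                    "vmware.use.nexus.vswitch", "true",
                    "vmware.guest.network.vswitch.type", "nexusdvs",
                    "vmware.private.network.vswitch.type", "nexusdvs",
                    "vmware.public.network.vswitch.type", "nexusdvs");
            globalSettings.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }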
Conflicts:
core/src/com/cloud/hypervisor/vmware/manager/VmwareManager.java
core/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
server/src/com/cloud/hypervisor/vmware/VmwareServerDiscoverer.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HostMO.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HypervisorHostHelper.java
vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContext.java
This fix enables support for multiple NetScaler devices providing EIP service in the same zone.
- Introduced global setting "eip.use.multiple.netscalers" to turn on multiple NetScaler support
- Enhanced the configureNetscalerLoadBalancer API to take the PBR setup between the pod's subnet
and the NetScaler device
- Added logic to pick a NetScaler (based on the guest IP and corresponding pod) while configuring
the INAT rule; see the sketch after this list
2) Added a new API, changeServiceForSystemVm, to support service offering upgrades for system VMs
3) Removed global config parameters that are no longer in use: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
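For the NetScaler-selection logic above, a minimal sketch of the idea (the data structures and IPv4 packing are illustrative; the real logic goes through the pod and network lookups):

    import java.util.Map;

    public class NetscalerPickSketch {
        /** Picks the device serving the pod whose guest subnet contains the guest IP. */
        static String pickNetscaler(long guestIp, Map<String, long[]> podSubnets,
                                    Map<String, String> podToDevice) {
            for (Map.Entry<String, long[]> e : podSubnets.entrySet()) {
                long network = e.getValue()[0], netmask = e.getValue()[1];
                if ((guestIp & netmask) == network) {
                    // PBR is set up between this pod's subnet and this device.
                    return podToDevice.get(e.getKey());
                }
            }
            return null; // no pod subnet contains the guest IP
        }
    }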
Changes:
To migrate systems using 'use.user.concentrated.pod.allocation' set to true and 'vm.allocation.algorithm' set to true, we make
the following changes:
- There will be 5 values for 'vm.allocation.algorithm': 'random', 'firstfit', 'userdispersing', 'userconcentratedpod_random', 'userconcentratedpod_firstfit'
- 'userconcentratedpod_random' means we apply user concentration to pods and clusters; for hosts and pools we use random ordering.
- 'userconcentratedpod_firstfit' means we apply user concentration to pods and clusters; for hosts and pools we use firstfit ordering.
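One way to read the composite values is as a pair of heuristics, as in the sketch below (this enum is illustrative, not the actual code):

    public enum VmAllocationAlgorithm {
        RANDOM("random", "random"),
        FIRSTFIT("firstfit", "firstfit"),
        USERDISPERSING("userdispersing", "userdispersing"),
        USERCONCENTRATEDPOD_RANDOM("userconcentratedpod", "random"),
        USERCONCENTRATEDPOD_FIRSTFIT("userconcentratedpod", "firstfit");

        final String podAndClusterHeuristic; // ordering applied to pods and clusters
        final String hostAndPoolHeuristic;   // ordering applied to hosts and pools

        VmAllocationAlgorithm(String podAndCluster, String hostAndPool) {
            this.podAndClusterHeuristic = podAndCluster;
            this.hostAndPoolHeuristic = hostAndPool;
        }
    }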
Changes:
- We do not need these global settings anymore; they will be hidden starting with 3.0
- The default traffic label will be picked from the global setting, which is null by default. When the traffic label is null, the resource uses the tag on the default gateway
- Changes to invoke the discoverer to reload the resource object on host connection
- Since a zone can have many physical networks, there can be multiple guest and public networks. Only the zone-wide storage and management traffic labels will be stored in host_details henceforth.
- If traffic labels are updated, the discoverer should update host_details
Change the default mount path on the OS where the mgmt server is running to /var/lib/cloud/management/mnt.
As /var/lib/cloud/management is the home directory of user 'cloud', the mgmt server has full permission
to manipulate it.
status 12956: resolved fixed
The admin can either configure system.vm.default.hypervisor, which is a global configuration for all zones, or call updateZone with defaultSystemVMHypervisorType
status 12139: resolved fixed
As per the new design, the following would be done.
(a) any ISO-derived disk can be extracted
(b) there will be a global config to disable extraction of ISO based volumes.
That way people concerned about (a) can just use (b) to fix it.
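In guard form, the design reads roughly as the sketch below (the flag and its backing config key name are assumptions, not the actual code):

    public class ExtractIsoVolumeGuardSketch {
        // Hypothetical flag backed by a global config (the key name is an assumption).
        static boolean isoVolumeExtractionDisabled = false;

        static void checkExtractable(boolean isIsoDerived) {
            // (a) ISO-derived disks are extractable by default ...
            // (b) ... unless the admin disabled it through the global config.
            if (isIsoDerived && isoVolumeExtractionDisabled) {
                throw new IllegalStateException("extraction of ISO-based volumes is disabled");
            }
        }
    }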
Reviewed by : Kishan.
status 11811: resolved fixed
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- Planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to either of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, shuffling the resource lists.
- Now the admin can choose whether the deployment heuristic should be applied starting at the cluster or pod level. This can be done by using the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, by default, as earlier, the planner starts directly at clusters.
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters when this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner
- It reorders the capacity-ordered clusters/pods such that pods having more Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters that have fewer Running VMs for the given account
- The admin can provide weights to capacity and user dispersion so that both parameters get considered in reordering the pods/clusters. This can be done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1. Thus, if this planner is chosen, ordering is by default done only by the number of Running VMs, unless the weight is changed.
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of Running VMs / Ready Volumes, respectively, for the given account, thus trying to choose the host or pool within a cluster with fewer VMs for the account.
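The weighted reordering can be pictured as blending two scores, as in the minimal sketch below (the Cluster type, field names, and the blend formula are illustrative assumptions, not the actual planner code):

    import java.util.Comparator;
    import java.util.List;

    public class UserDispersingOrderSketch {
        record Cluster(String id, double capacityScore, long runningVmsForAccount) {}

        /** Orders clusters by a blend of free capacity and the account's VM count. */
        static void order(List<Cluster> clusters, double weight /* vm.user.dispersion.weight */) {
            Comparator<Cluster> byBlend = Comparator.comparingDouble(c ->
                    weight * c.runningVmsForAccount()      // fewer Running VMs first
                    - (1 - weight) * c.capacityScore());   // more free capacity first
            clusters.sort(byBlend);
        }
    }

With the default weight of 1, the blend degenerates to ordering purely by the account's Running VM count, matching the behavior described above.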
status 9831: resolved fixed
Added a periodic task that recalculates the counts. By default the interval is set to 0, which is interpreted as "do not run the task".
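A minimal sketch of the "zero disables the task" convention (names are illustrative):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ResourceCountRecalcSketch {
        static void schedule(long intervalSeconds) {
            if (intervalSeconds <= 0) {
                return; // interval 0 (the default) means: do not run the task
            }
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(
                    () -> System.out.println("recalculating resource counts"),
                    intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
        }
    }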