Support for local data disks. Currently the enable/disable config is at the zone level; it can be made more granular in subsequent check-ins.
The following changes are made:
- The create disk offering API now takes an extra parameter to denote the storage type (local or shared), similar to the storage type in the service offering.
- Create/delete of data volumes on local storage
- Attach/detach for local data volumes. Re-attach is allowed as long as the VM's host and the data volume's storage pool host are the same.
- Migration of a VM instance is not supported if it uses local root or data volumes.
- Volume migration is not supported for local volumes.
- Zone-level config to enable/disable local storage usage for service and disk offerings.
- Local storage gets discovered when a host is added or reconnected, if the zone-level config is enabled. When disabled, existing local storage pools are not removed, but no new local storage is added.
- The deploy VM command validates the service and disk offerings against the local storage config (see the sketch below).
- Upgrade uses the global config 'use.local.storage' to set the zone-level config for local storage.
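As an illustration of that deploy-time check, a minimal sketch (class and field names are hypothetical, not the actual CloudStack code):

    public class LocalStorageValidator {

        static class Zone {
            boolean localStorageEnabled;
            Zone(boolean enabled) { this.localStorageEnabled = enabled; }
        }

        static class Offering {
            boolean usesLocalStorage;   // maps to the new storage type parameter
            Offering(boolean local) { this.usesLocalStorage = local; }
        }

        /** Reject deployments whose service or disk offering needs local
         *  storage while the zone-level config has it disabled. */
        static void validate(Zone zone, Offering serviceOffering, Offering diskOffering) {
            if (!zone.localStorageEnabled
                    && (serviceOffering.usesLocalStorage
                        || (diskOffering != null && diskOffering.usesLocalStorage))) {
                throw new IllegalArgumentException(
                        "Local storage is disabled in this zone; use shared-storage offerings instead.");
            }
        }

        public static void main(String[] args) {
            validate(new Zone(true), new Offering(true), new Offering(false));   // passes
            try {
                validate(new Zone(false), new Offering(true), null);             // rejected
            } catch (IllegalArgumentException expected) {
                System.out.println(expected.getMessage());
            }
        }
    }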
(cherry picked from commit 62710aed37606168012a0ed255a876c8e7954010)
Description:
Code changes to manage Cisco Nexus 1000v in CloudStack.
VmwareResource has been modified to leverage the Nexus vSwitch.
The following global configuration parameters are provided:
vmware.use.nexus.vswitch -
This decides whether the Nexus vSwitch in the VMware cluster
environment is used/managed by CloudStack for its network
infrastructure needs.
vmware.guest.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch in the
VMware cluster environment for guest traffic.
vmware.private.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch in the
VMware cluster environment for private traffic.
vmware.public.network.vswitch.type -
This setting enables CloudStack to use the Nexus vSwitch in the
VMware cluster environment for public traffic.
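A minimal sketch of how these flags could be consulted; the helper, the fallback values and the default switch type here are assumptions, not the actual VmwareResource code:

    import java.util.Map;

    public class NexusVswitchConfig {

        /** Decide which vSwitch type to use for a given traffic type. */
        static String vswitchTypeFor(Map<String, String> globalConfig, String trafficType) {
            boolean useNexus = Boolean.parseBoolean(
                    globalConfig.getOrDefault("vmware.use.nexus.vswitch", "false"));
            if (!useNexus) {
                return "standard";   // assumed fallback: the VMware standard vSwitch
            }
            // e.g. vmware.guest.network.vswitch.type, vmware.public.network.vswitch.type, ...
            return globalConfig.getOrDefault(
                    "vmware." + trafficType + ".network.vswitch.type", "nexus");
        }

        public static void main(String[] args) {
            Map<String, String> cfg = Map.of(
                    "vmware.use.nexus.vswitch", "true",
                    "vmware.guest.network.vswitch.type", "nexus");
            System.out.println(vswitchTypeFor(cfg, "guest"));     // nexus
            System.out.println(vswitchTypeFor(cfg, "private"));   // nexus (assumed default)
        }
    }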
Functional Specification -
http://wiki.cloudstack.org/display/RelOps/Cisco+Nexus+1000v+Support+in+CloudStack+-+Functional+Specification
Documentation / README for usage instructions -
http://wiki.cloudstack.org/display/RelOps/Configuration+instructions+for+CloudStack+Deployment+with+Nexus+vSwitch
Conflicts:
core/src/com/cloud/hypervisor/vmware/manager/VmwareManager.java
core/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
server/src/com/cloud/hypervisor/vmware/VmwareServerDiscoverer.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HostMO.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/HypervisorHostHelper.java
vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContext.java
This fix enables support for multiple NetScaler devices providing the EIP service in the same zone.
- Introduced the global setting "eip.use.multiple.netscalers" to turn on multiple NetScaler support
- Enhanced the configureNetscalerLoadBalancer API to take the PBR setup between the pod's subnet
and the NetScaler device
- Added logic to pick a NetScaler (based on the guest IP and its corresponding pod) while configuring the INAT rule
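A rough sketch of that selection logic; the types and the pod-to-device wiring are illustrative, not the actual NetScaler element code:

    import java.util.HashMap;
    import java.util.Map;

    public class EipNetscalerPicker {

        // podId -> NetScaler device configured (via configureNetscalerLoadBalancer)
        // with PBR set up towards that pod's subnet
        private final Map<Long, String> podToNetscaler = new HashMap<>();

        void register(long podId, String netscalerDevice) {
            podToNetscaler.put(podId, netscalerDevice);
        }

        /** Pick the device serving the pod that the guest IP belongs to. */
        String pickForGuestIp(long podIdOfGuestIp) {
            String device = podToNetscaler.get(podIdOfGuestIp);
            if (device == null) {
                throw new IllegalStateException("No NetScaler configured for pod " + podIdOfGuestIp);
            }
            return device;   // the INAT rule for the EIP is then programmed on this device
        }

        public static void main(String[] args) {
            EipNetscalerPicker picker = new EipNetscalerPicker();
            picker.register(1L, "netscaler-A");
            picker.register(2L, "netscaler-B");
            System.out.println(picker.pickForGuestIp(2L));   // netscaler-B
        }
    }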
2) Added a new API - changeServiceForSystemVm - to support service offering upgrades for system VMs
3) Removed global config parameters that are no longer in use: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
Changes:
To migrate systems that had 'use.user.concentrated.pod.allocation' set to true together with a
'vm.allocation.algorithm' setting, the following changes are added:
- 'vm.allocation.algorithm' now takes 5 values: 'random', 'firstfit', 'userdispersing', 'userconcentratedpod_random', 'userconcentratedpod_firstfit'
- 'userconcentratedpod_random' means we apply user concentration to pods and clusters. To hosts and pools we use random ordering.
- 'userconcentratedpod_firstfit' means we apply user concentration to pods and clusters. To hosts and pools we use firstfit ordering.
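An illustrative mapping of the five values to the heuristic applied at each level (a sketch, not the planner code itself):

    public class AllocationAlgorithm {

        enum Heuristic { RANDOM, FIRST_FIT, USER_DISPERSING, USER_CONCENTRATED }

        /** Heuristic applied while ordering pods and clusters. */
        static Heuristic podClusterHeuristic(String value) {
            switch (value) {
                case "userdispersing":               return Heuristic.USER_DISPERSING;
                case "userconcentratedpod_random":
                case "userconcentratedpod_firstfit": return Heuristic.USER_CONCENTRATED;
                case "firstfit":                     return Heuristic.FIRST_FIT;
                default:                             return Heuristic.RANDOM;
            }
        }

        /** Heuristic applied while ordering hosts and storage pools. */
        static Heuristic hostPoolHeuristic(String value) {
            switch (value) {
                case "userdispersing":               return Heuristic.USER_DISPERSING;
                case "userconcentratedpod_firstfit":
                case "firstfit":                     return Heuristic.FIRST_FIT;
                default:                             return Heuristic.RANDOM;   // includes userconcentratedpod_random
            }
        }

        public static void main(String[] args) {
            System.out.println(podClusterHeuristic("userconcentratedpod_random"));   // USER_CONCENTRATED
            System.out.println(hostPoolHeuristic("userconcentratedpod_random"));     // RANDOM
        }
    }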
Changes:
- These global settings are no longer needed; they are hidden starting with 3.0
- The default traffic label is picked from the global setting, which is null by default. When the traffic label is null, the resource uses the tag on the default gateway
- Changes to invoke the discoverer to reload the resource object on host connection
- Since a zone can have many physical networks, there can be multiple guest and public networks. Only the zone-wide storage and management traffic labels will be stored in host_details henceforth.
- If traffic labels are updated, the discoverer should update host_details
Change the default mount path on the OS where the management server is running to /var/lib/cloud/management/mnt.
Since /var/lib/cloud/management is the home directory of user 'cloud', the management server has full
permission to manipulate it.
status 12956: resolved fixed
Admin can either configure system.vm.default.hypervisor, which is a global configuration for all zones, or call updateZone with defaultSystemVMHypervisorType
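A minimal sketch of the resolution order, assuming the zone-level value, when present, overrides the global config (the helper is hypothetical):

    public class SystemVmHypervisorChooser {

        static String choose(String zoneDefaultHypervisor, String globalDefaultHypervisor) {
            if (zoneDefaultHypervisor != null && !zoneDefaultHypervisor.isEmpty()) {
                return zoneDefaultHypervisor;   // set per zone via updateZone defaultSystemVMHypervisorType
            }
            return globalDefaultHypervisor;     // global config system.vm.default.hypervisor
        }

        public static void main(String[] args) {
            System.out.println(choose("XenServer", "KVM"));   // XenServer (zone override wins)
            System.out.println(choose(null, "KVM"));          // KVM (falls back to the global config)
        }
    }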
status 12139: resolved fixed
As per the new design, the following will be done:
(a) any ISO-derived disk can be extracted
(b) there will be a global config to disable extraction of ISO-based volumes.
That way, people concerned about (a) can simply use (b) to address it.
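A minimal sketch of the gate in (b); the config key is not named in this message, so the boolean flag here is purely illustrative:

    public class IsoVolumeExtractionGate {

        static void checkExtractAllowed(boolean volumeIsIsoDerived, boolean isoExtractionDisabled) {
            if (volumeIsIsoDerived && isoExtractionDisabled) {
                throw new IllegalStateException("Extraction of ISO-based volumes is disabled by global config.");
            }
        }

        public static void main(String[] args) {
            checkExtractAllowed(true, false);        // (a): ISO-derived disks can be extracted
            try {
                checkExtractAllowed(true, true);     // (b): admin disabled it via the global config
            } catch (IllegalStateException expected) {
                System.out.println(expected.getMessage());
            }
        }
    }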
Reviewed by: Kishan.
status 11811: resolved fixed
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- The planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to one of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before, shuffling the resource lists.
- Admin can now choose whether the deployment heuristic should be applied starting at the cluster or the pod level, using the
global config variable 'apply.allocation.algorithm.to.pods', which is false by default. Thus, by default, the planner starts directly at clusters, as before.
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters when this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner
- It reorders the capacity-ordered clusters/pods such that pods having a higher number of Running VMs for the given account are tried first.
- Note that this user concentration is applied only to pods and clusters, not to hosts or storage pools within a cluster.
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters by the number of 'Running' VMs for the given account, in ascending order. The aim is to first choose those pods/clusters that have fewer Running VMs for the given account.
- Admin can give weights to capacity and user dispersion so that both parameters are considered when reordering the pods/clusters. This is done by setting
the global config parameter 'vm.user.dispersion.weight'. The default value is 1, so if this planner is chosen, ordering is by default done only by the number of Running VMs, unless the weight is changed.
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of Running VMs / Ready volumes, respectively, for the given account, thus trying to choose the host or pool within a cluster with fewer VMs for the account.
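A simplified sketch of the reordering idea; the exact way capacity order and VM count are blended here is an assumption, with vm.user.dispersion.weight controlling how much the dispersion term counts:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class UserDispersingOrdering {

        static class Cluster {
            final String name;
            final int capacityRank;              // 0 = most free capacity
            final long runningVmsForAccount;
            Cluster(String name, int capacityRank, long runningVms) {
                this.name = name;
                this.capacityRank = capacityRank;
                this.runningVmsForAccount = runningVms;
            }
        }

        static List<Cluster> reorder(List<Cluster> clusters, double dispersionWeight) {
            List<Cluster> ordered = new ArrayList<>(clusters);
            // Lower score = tried first: blend capacity rank with the account's Running VM count.
            ordered.sort(Comparator.comparingDouble(c ->
                    (1.0 - dispersionWeight) * c.capacityRank
                            + dispersionWeight * c.runningVmsForAccount));
            return ordered;
        }

        public static void main(String[] args) {
            List<Cluster> clusters = List.of(
                    new Cluster("c1", 0, 5),    // best capacity, but 5 Running VMs of this account
                    new Cluster("c2", 1, 0));   // slightly less capacity, no VMs of this account yet
            // With the default weight of 1, c2 (fewer Running VMs) is tried first.
            reorder(clusters, 1.0).forEach(c -> System.out.println(c.name));
        }
    }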
status 9831: resolved fixed
Added a periodic task that recalculates the counts. By default the interval is set to 0, which is interpreted as not running the task.
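A sketch of that scheduling behaviour; the executor usage is illustrative, not the actual task framework:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class CountRecalculationTask {

        static void scheduleIfEnabled(ScheduledExecutorService executor,
                                      long intervalSeconds, Runnable recalculateCounts) {
            if (intervalSeconds <= 0) {
                return;   // the default of 0 is interpreted as "do not run the task"
            }
            executor.scheduleAtFixedRate(recalculateCounts, intervalSeconds,
                    intervalSeconds, TimeUnit.SECONDS);
        }

        public static void main(String[] args) {
            ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
            scheduleIfEnabled(executor, 0, () -> System.out.println("recalculating counts"));
            executor.shutdown();   // nothing was scheduled with the default interval of 0
        }
    }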
Added two new values, "all" and "default", to the global config "network.loadbalancer.haproxy.stats.visibility". With this change, it can take six possible values:
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network (for xen and kvm).
disabled - stats disabled.
all - stats available on public, guest and link-local. (newly added)
default - stats available on the serving http port; this does not need any specific http port. (newly added)
Except for "default" and "disabled", the remaining four values require the stats port to be configured.
Added the new value "link-local" to the global config network.loadbalancer.haproxy.stats.visibility. With this change it can take "link-local" in addition to the existing 3 values: global, guest-network, disabled.
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network.
disabled - stats disabled.
* when a management server dies and notifies other management servers about this, a running management server has to clean up the host_transfer records belonging to the dead management server
* issue the agent load balancing task only when the agent load (the number of connected agents in the system) exceeds "agent.load.threshold" - 70% by default
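A sketch of that gating check; treating the agent load as the fraction of agents currently connected is an assumption here:

    public class AgentLoadBalancerGate {

        static boolean shouldRebalance(int connectedAgents, int totalAgents, double threshold) {
            if (totalAgents == 0) {
                return false;
            }
            double load = (double) connectedAgents / totalAgents;
            return load > threshold;   // agent.load.threshold, 0.7 by default
        }

        public static void main(String[] args) {
            System.out.println(shouldRebalance(80, 100, 0.7));   // true  -> issue the load balancing task
            System.out.println(shouldRebalance(50, 100, 0.7));   // false -> skip it
        }
    }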
* when a management server dies and notifies other management servers about this, a running management server has to clean up the host_transfer records belonging to the dead management server
* issue the agent load balancing task only when the agent load (the number of connected agents in the system) exceeds "agent.load.threshold" - 70% by default
Conflicts:
server/src/com/cloud/configuration/Config.java
server/src/com/cloud/host/dao/HostDaoImpl.java
setup/db/db/schema-228to229.sql
* when a management server dies and notifies other management servers about this, a running management server has to clean up the host_transfer records belonging to the dead management server
* issue the agent load balancing task only when the agent load (the number of connected agents in the system) exceeds "agent.load.threshold" - 70% by default
Conflicts:
server/src/com/cloud/configuration/Config.java
setup/db/db/schema-228to229.sql
Also:
1. Discard the VPN-related change.
2. Add network.dns.basiczone.updates in Config.java
3. Add findByNetworkOutsideThePod() for DomainRouterVO
Tested with VLAN and basic mode; works.
Disable the redundant virtual router temporarily; it will be enabled after more testing.
Part 1
This backport contained:
commit 52317c718c25111c2535657139b541db0c9d1e1f
bug 9154: Initial check in for enabling redundant virtual router
commit 54199112055d754371bfb141168fb5538bf6d6ea
Add host verification for CheckRouterCommand
commit cef978a228c90056ead9be10cbc4de74c2b8de76
Fix CheckRouterAnswer's isMaster report
commit 4072f0a6991ac3b63601a1764fbe14188965f62f
Some build fixes and code refactoring for redundant router
commit 4d3350b7cd8ee2706a9bace4437fc194e36c8dd5
Redundant Router: Fix OVS
commit 6a228830e7c46d819fa0c3317e159e041337e887
Fix findByNetwork()/findByNetworkAndPod()'s return
commit c627777b3d5bdbcd60db4032cebd349a5b1ecd83
Redundant Router: Fix isVmAlive()
commit e1275d2514adc41f8744f5107d4069c38be195f1
Only issue CheckRouterCommand to redundant routers
And all modifications to the scripts up to
commit 4e3942462ed3fde3a3d7011e95839e2128fba514
logging changes
in the master branch.