The fix is to use the storage overprovisioning factor (global configuration parameter "storage.overprovisioning.factor") when calculating the total provisionable capacity for storage space allocation over VMFS-based storage pools as well.
VMware provides two levels of thin provisioning: storage-level and file-level (VMDK) thin provisioning. In CloudStack, all volumes are provisioned in thin VMDK format, so at the hypervisor level we ALWAYS do thin provisioning. If the storage vendor can provide storage-level thin provisioning in addition to VMDK thin provisioning, that is also allowed, since it is transparent to CloudStack.
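As an illustration only (simplified types, not CloudStack's actual allocator code), applying the overprovisioning factor to VMFS pools the same way it is already applied to NFS pools might look like this:

    import java.math.BigDecimal;

    // Minimal sketch: apply the storage.overprovisioning.factor to VMFS pools as well
    // as NFS pools when computing the total capacity available for allocation.
    public class OverprovisionedCapacitySketch {
        enum PoolType { NFS, VMFS, LVM }  // simplified stand-in for CloudStack's StoragePoolType

        static long allocatableBytes(PoolType type, long capacityBytes, BigDecimal factor) {
            // Thin-provisioned pool types may be overprovisioned; others are used as-is.
            if (type == PoolType.NFS || type == PoolType.VMFS) {
                return factor.multiply(BigDecimal.valueOf(capacityBytes)).longValue();
            }
            return capacityBytes;
        }

        public static void main(String[] args) {
            // With a factor of 2.0, a 1 TiB VMFS datastore is treated as 2 TiB of allocatable space.
            long oneTiB = 1L << 40;
            System.out.println(allocatableBytes(PoolType.VMFS, oneTiB, new BigDecimal("2.0")));
        }
    }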
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Changes:
- Consider whether the VM requires local storage, shared storage, or both for its disks.
- Accordingly, consider only the local pools, the shared pools, or both kinds of pools in the cluster (see the sketch below).
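A minimal sketch of that filtering, using simplified stand-in types rather than CloudStack's real allocator interfaces:

    import java.util.List;
    import java.util.stream.Collectors;

    // Sketch only: keep just the pools whose scope matches what the VM's disks require.
    public class PoolScopeFilterSketch {
        record Pool(String name, boolean local) {}

        static List<Pool> candidatePools(List<Pool> clusterPools, boolean needsLocal, boolean needsShared) {
            return clusterPools.stream()
                    .filter(p -> (needsLocal && p.local()) || (needsShared && !p.local()))
                    .collect(Collectors.toList());
        }
    }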
While deploying a VM, the user passes the "keyboard" parameter to specify the
default language for that VM, but in the console view the default language
selected is en-us irrespective of the default language of the VM.
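A tiny sketch of the intended behaviour; the detail key name and fallback used here are assumptions, not the actual console proxy code:

    import java.util.Map;

    // Sketch: the console view should pick the VM's stored keyboard layout
    // (passed as the "keyboard" parameter at deploy time) instead of always using en-us.
    public class ConsoleKeyboardSketch {
        static String consoleKeyboardLayout(Map<String, String> vmDetails) {
            return vmDetails.getOrDefault("keyboard", "en-us");  // fall back only when the VM has no preference
        }
    }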
xs 6.1/6.2 introduce a new virtual platform, so there are two virtual platforms, and the Windows PV driver version must match the virtual platform;
this patch tracks PV driver versions in VM details and template details.
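As a rough illustration (the detail key and values below are assumptions, not necessarily the ones used by the patch), recording the PV driver version in the VM and template details might look like:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch: persist the PV driver (tools) version per VM and per template so it can be
    // matched against the virtual platform introduced by xs 6.1/6.2 on later starts.
    public class PvDriverVersionSketch {
        static void recordToolsVersion(Map<String, String> details, String version) {
            details.put("hypervisortoolsversion", version);  // key name is an assumption
        }

        public static void main(String[] args) {
            Map<String, String> vmDetails = new HashMap<>();
            recordToolsVersion(vmDetails, "xenserver61");
            System.out.println(vmDetails);
        }
    }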
Anthony
From a functionality point of view, the changes are as follows:
-> The threshold will not be applied to VMs undergoing the HA process (say, during a host-going-down event).
-> The threshold will not be applied to VMs undergoing host maintenance.
-> The threshold will apply to stop-start VMs "after" the skip.counting.hours
(the time for which we reserve capacity for the stopped VM).
The reason is that stop-start is a user-initiated action, and we give a time
window (skip.counting.hours) during which the VM is guaranteed to start in the same
cluster.
After that time the threshold check applies, because we still want to reserve the
capacity for urgent situations like HA or putting a host into maintenance, rather than have it consumed by such user-initiated
actions.
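A minimal sketch of the rules above (the enum and names are illustrative, not the actual planner code):

    // Sketch: skip the cluster capacity threshold check for HA and host-maintenance driven
    // deployments, and apply it to user-initiated stop/start only once the
    // skip.counting.hours reservation has expired.
    public class ThresholdRuleSketch {
        enum DeployReason { HA, HOST_MAINTENANCE, USER_START }

        static boolean applyThreshold(DeployReason reason, boolean reservationExpired) {
            switch (reason) {
                case HA:
                case HOST_MAINTENANCE:
                    return false;              // capacity must be found even above the threshold
                case USER_START:
                    return reservationExpired; // within skip.counting.hours the reserved capacity is used
                default:
                    return true;
            }
        }
    }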
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Added a check to ensure that a VLAN id used for a public IP range is not used
for a portable IP range: when a public range is added, its VLAN is checked against the
portable IP ranges to see whether the same VLAN already exists there.
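A sketch of the shape of that check; the helper and its inputs are assumptions, not the actual DAO calls:

    import java.util.Set;

    // Sketch: while creating a public IP range, look its VLAN up among the portable IP
    // ranges and fail fast if the same VLAN is already in use there.
    public class PublicVlanCheckSketch {
        static void checkVlanNotUsedByPortableRange(String vlanId, Set<String> portableRangeVlans) {
            if (portableRangeVlans.contains(vlanId)) {
                throw new IllegalArgumentException(
                        "VLAN " + vlanId + " is already used by a portable IP range");
            }
        }
    }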
The cluster-wide and zone-wide storage pool allocators returned shared pools even for volumes meant to be on a local storage pool.
If the VM uses a local disk, the cluster and zone storage allocators should not handle it and should return null or an empty list (see the sketch below).
Also fixed the deployment planner to avoid a cluster if
a. the storage pool allocators return an empty list of suitable pools OR
b. all local or shared pools in the cluster are in the avoid set
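An illustrative sketch of the allocator-side rule, with simplified types instead of CloudStack's real ones:

    import java.util.Collections;
    import java.util.List;

    // Sketch: a cluster/zone wide allocator only deals with shared storage, so it returns an
    // empty list for volumes whose disk offering requests local storage and lets the local
    // storage allocator handle them instead.
    public class SharedPoolAllocatorSketch {
        record DiskProfile(boolean useLocalStorage) {}
        record Pool(String name) {}

        static List<Pool> allocate(DiskProfile profile, List<Pool> sharedPools) {
            if (profile.useLocalStorage()) {
                return Collections.emptyList();
            }
            return sharedPools;
        }
    }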
Added checks to restrict the same IP range from being configured as both a public IP
range and a portable IP range, even when the ranges are
defined in different VLANs across public and portable ranges.
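A minimal sketch of the underlying overlap test (helper name is illustrative): two ranges conflict when their start/end IP intervals intersect, regardless of the VLANs they are defined in.

    // Sketch: ranges given as numeric IP values (e.g. IPv4 addresses converted to long).
    public class IpRangeOverlapSketch {
        static boolean rangesOverlap(long startA, long endA, long startB, long endB) {
            return startA <= endB && startB <= endA;
        }
    }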
This patch resets the priority under the following conditions:
1. All redundant routers are stopped, e.g. due to network GC.
2. The user starts a VM in the network.
3. The routers are then brought up with reset priorities (100 & 99).
This resolves the issue of network GC causing the lower limit of the redundant router priority to be reached.
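A rough sketch of the reset (field and constant names are illustrative, not the real virtual router code):

    import java.util.List;

    // Sketch: once every redundant router of the network is stopped (e.g. after network GC),
    // reset the stored priorities so the next start uses the defaults (100 and 99)
    // instead of the ever-decreasing values accumulated so far.
    public class RouterPriorityResetSketch {
        static class Router {
            boolean stopped;
            int priority;
        }

        static void resetPrioritiesIfAllStopped(List<Router> redundantRouters) {
            if (redundantRouters.stream().allMatch(r -> r.stopped)) {
                for (int i = 0; i < redundantRouters.size(); i++) {
                    redundantRouters.get(i).priority = 100 - i;  // 100, 99, ...
                }
            }
        }
    }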
Even though the volume may get migrated from shared to local storage, it is not possible to update the disk offering.
The fix is to disallow migration from shared to local storage.
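A sketch of the guard; the types and exception are simplified stand-ins, not the actual migration service code:

    // Sketch: reject volume migration requests that go from a shared pool to a local pool.
    public class MigrationGuardSketch {
        record Pool(boolean shared) {}

        static void validateMigration(Pool source, Pool destination) {
            if (source.shared() && !destination.shared()) {
                throw new UnsupportedOperationException(
                        "Migration of a volume from shared to local storage is not supported");
            }
        }
    }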
The list clusters/pods/zones APIs do not account for reserved capacity in the used capacity percentage.
Fixed the listCapacity cmd to account for reserved capacity as well.
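A minimal sketch of the corrected percentage calculation:

    // Sketch: the used capacity percentage should count reserved capacity as used,
    // i.e. (used + reserved) / total.
    public class UsedCapacitySketch {
        static float usedPercentage(long used, long reserved, long total) {
            return total == 0 ? 0f : 100f * (used + reserved) / total;
        }
    }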
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
VM deployment is fine; the issue is in attach volume, where not all possible scenarios are handled.
The attach-volume logic needs to be the following (see the sketch after this list):
1. if the data volume scope is zone, then allow the attach (this is already there)
2. if the data volume scope is cluster
a. if the root volume scope is host, allow if both are in the same cluster (already there)
b. if the root volume scope is zone, allow if the VM and the data volume are in the same cluster (fixed as part of this commit)
3. if the data volume scope is host, allow if the VM and the data volume are on the same host (fixed as part of this commit)
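A sketch of the rules above, using simplified stand-in types rather than the actual attach-volume code:

    // Sketch: whether a data volume can be attached to a VM, based on the data volume's
    // storage scope and the VM's placement.
    public class AttachVolumeScopeSketch {
        enum Scope { ZONE, CLUSTER, HOST }

        static boolean canAttach(Scope dataVolScope,
                                 long dataVolClusterId, long dataVolHostId,
                                 long vmClusterId, long vmHostId) {
            switch (dataVolScope) {
                case ZONE:
                    return true;                            // rule 1: zone-wide storage is always reachable
                case CLUSTER:
                    return dataVolClusterId == vmClusterId; // rules 2a/2b: VM and volume in the same cluster
                case HOST:
                    return dataVolHostId == vmHostId;       // rule 3: VM and volume on the same host
                default:
                    return false;
            }
        }
    }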