diff --git a/docs/en-US/changed-API-commands-4.2.xml b/docs/en-US/changed-API-commands-4.2.xml new file mode 100644 index 00000000000..cbaa2e3fa92 --- /dev/null +++ b/docs/en-US/changed-API-commands-4.2.xml @@ -0,0 +1,107 @@ + + +%BOOK_ENTITIES; +]> + +
+ Changed API Commands in 4.2 + + + + + API Command + Description + + + + + updateResourceLimit + + Added the following resource types to the resourcetype + request parameter to set the limits: + + + CPU + + + RAM + + + primary storage + + + secondary storage + + + network rate + + + + + + updateResourceCount + + Added the following resource types to the resourcetype + request parameter: + + + CPU + + + RAM + + + primary storage + + + secondary storage + + + network rate + + + + + + listResourceLimits + + Added the following resource types to the resourcetype + request parameter: + + + CPU + + + RAM + + + primary storage + + + secondary storage + + + network rate + + + + + + + +
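To make the expanded resourcetype parameter concrete, here is a minimal sketch of calling updateResourceLimit through the standard CloudStack query API, which requires an HMAC-SHA1 signature over the sorted, lower-cased parameter string. The endpoint URL, account name, keys, and the numeric resourcetype IDs used below (8 = CPU, 9 = RAM, 10 = primary storage, 11 = secondary storage) are illustrative assumptions; confirm the IDs against the apidocs of your deployment.

```python
import base64
import hashlib
import hmac
import urllib.parse


def sign_request(params, api_key, secret_key):
    """Build a signed CloudStack API query string.

    CloudStack signs a request by sorting all parameters by name,
    URL-encoding the values, lower-casing the resulting query string,
    computing an HMAC-SHA1 over it with the account's secret key, and
    appending the Base64-encoded digest as the signature parameter.
    """
    params = dict(params, apiKey=api_key, response="json")
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&signature=" + urllib.parse.quote(signature, safe="")


# Hypothetical example: cap the "acme" account at 20 CPU cores.
# resourcetype=8 (CPU) is an assumption; check your apidocs.
url = "http://mgmt.example.com:8080/client/api?" + sign_request(
    {"command": "updateResourceLimit", "resourcetype": 8,
     "max": 20, "account": "acme", "domainid": "1"},
    api_key="YOUR-API-KEY", secret_key="YOUR-SECRET-KEY",
)
```

The same signing helper works for updateResourceCount and listResourceLimits; only the command and resourcetype values change.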
diff --git a/docs/en-US/hypervisor-kvm-install-flow.xml b/docs/en-US/hypervisor-kvm-install-flow.xml index 7dfd47d2e52..aa19e47be77 100644 --- a/docs/en-US/hypervisor-kvm-install-flow.xml +++ b/docs/en-US/hypervisor-kvm-install-flow.xml @@ -34,5 +34,5 @@ - + diff --git a/docs/en-US/hypervisor-support-for-primarystorage.xml b/docs/en-US/hypervisor-support-for-primarystorage.xml index 7c2596eac29..fdef1f2b6e0 100644 --- a/docs/en-US/hypervisor-support-for-primarystorage.xml +++ b/docs/en-US/hypervisor-support-for-primarystorage.xml @@ -22,71 +22,83 @@ under the License. -->
- Hypervisor Support for Primary Storage - The following table shows storage options and parameters for different hypervisors. - - - - - - - - - - - VMware vSphere - Citrix XenServer - KVM - - - - - Format for Disks, Templates, and - Snapshots - VMDK - VHD - QCOW2 - - - iSCSI support - VMFS - Clustered LVM - Yes, via Shared Mountpoint - - - Fiber Channel support - VMFS - Yes, via Existing SR - Yes, via Shared Mountpoint - - - NFS support - Y - Y - Y - - - - Local storage support - Y - Y - Y - - - - Storage over-provisioning - NFS and iSCSI - NFS - NFS - - - - - - XenServer uses a clustered LVM system to store VM images on iSCSI and Fiber Channel volumes and does not support over-provisioning in the hypervisor. The storage server itself, however, can support thin-provisioning. As a result the &PRODUCT; can still support storage over-provisioning by running on thin-provisioned storage volumes. - KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to each server in a given cluster. The path must be the same across all Hosts in the cluster, for example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as OCFS2. In this case the &PRODUCT; does not attempt to mount or unmount the storage as is done with NFS. The &PRODUCT; requires that the administrator insure that the storage is available - - With NFS storage, &PRODUCT; manages the overprovisioning. In this case the global configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. This is independent of hypervisor type. - Local storage is an option for primary storage for vSphere, XenServer, and KVM. When the local disk option is enabled, a local disk storage pool is automatically created on each host. To use local storage for the System Virtual Machines (such as the Virtual Router), set system.vm.use.local.storage to true in global configuration. - &PRODUCT; supports multiple primary storage pools in a Cluster. 
For example, you could provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and then add a second iSCSI LUN when the first approaches capacity. -
+ Hypervisor Support for Primary Storage + The following table shows storage options and parameters for different hypervisors. + + + + + + + + + + + VMware vSphere + Citrix XenServer + KVM + + + + + Format for Disks, Templates, and + Snapshots + VMDK + VHD + QCOW2 + + + iSCSI support + VMFS + Clustered LVM + Yes, via Shared Mountpoint + + + Fibre Channel support + VMFS + Yes, via Existing SR + Yes, via Shared Mountpoint + + + NFS support + Y + Y + Y + + + Local storage support + Y + Y + Y + + + Storage over-provisioning + NFS and iSCSI + NFS + NFS + + + + + XenServer uses a clustered LVM system to store VM images on iSCSI and Fibre Channel volumes + and does not support over-provisioning in the hypervisor. The storage server itself, however, + can support thin-provisioning. As a result, &PRODUCT; can still support storage + over-provisioning by running on thin-provisioned storage volumes. + KVM supports "Shared Mountpoint" storage. A shared mountpoint is a file system path local to + each server in a given cluster. The path must be the same across all Hosts in the cluster, for + example /mnt/primary1. This shared mountpoint is assumed to be a clustered filesystem such as + OCFS2. In this case the &PRODUCT; does not attempt to mount or unmount the storage as is done + with NFS. The &PRODUCT; requires that the administrator ensure that the storage is + available. + + With NFS storage, &PRODUCT; manages the overprovisioning. In this case the global + configuration parameter storage.overprovisioning.factor controls the degree of overprovisioning. + This is independent of hypervisor type. + Local storage is an option for primary storage for vSphere, XenServer, and KVM. When the + local disk option is enabled, a local disk storage pool is automatically created on each host. + To use local storage for the System Virtual Machines (such as the Virtual Router), set + system.vm.use.local.storage to true in global configuration. 
+ &PRODUCT; supports multiple primary storage pools in a Cluster. For example, you could + provision 2 NFS servers in primary storage. Or you could provision 1 iSCSI LUN initially and + then add a second iSCSI LUN when the first approaches capacity. + diff --git a/docs/en-US/limit-accounts-domains.xml b/docs/en-US/limit-accounts-domains.xml index 64a886ef796..da45dabc982 100644 --- a/docs/en-US/limit-accounts-domains.xml +++ b/docs/en-US/limit-accounts-domains.xml @@ -176,7 +176,7 @@ If any operation needs to pass through two or more resource limit checks, then the lower of the 2 limits will be enforced. For example, if an account has a VM limit of 10 and a CPU limit of 20 and a user under that account requests 5 VMs of 4 CPUs each, after this the user can - deploy 5 more VMs(because VM limit is 10) but user has exausted his CPU limit and cannot + deploy 5 more VMs(because VM limit is 10) but user has exhausted his CPU limit and cannot deploy any more instances.
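The storage.overprovisioning.factor parameter discussed above acts as a simple multiplier on the physical capacity of an NFS pool when &PRODUCT; decides how much storage may still be allocated. A minimal sketch of the arithmetic, using made-up numbers:

```python
def allocatable_capacity_gb(physical_gb: float, overprovisioning_factor: float) -> float:
    """Capacity the management server will allow to be allocated from a pool.

    With storage.overprovisioning.factor = 2.0, a 1000 GB NFS pool is treated
    as if 2000 GB were allocatable; actual consumption is expected to stay
    lower because volumes on it are thin-provisioned.
    """
    return physical_gb * overprovisioning_factor


# A 1000 GB NFS primary storage pool with the factor set to 2.0:
print(allocatable_capacity_gb(1000, 2.0))  # 2000.0
```

This also illustrates why the table above lists over-provisioning for XenServer and KVM only on NFS: on clustered-LVM iSCSI storage the hypervisor allocates thick volumes, so multiplying the pool size would overcommit space the hypervisor cannot actually thin-provision.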