Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM with iSCSI-based managed storage, if the user shuts a VM down from within the guest OS (as opposed to from CloudStack), we need to pass the KVM agent a list of the applicable iSCSI volumes that need to be disconnected (see the sketch after this list).
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (this cannot currently be done with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
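The guest-shutdown item above boils down to logging the host out of the iSCSI sessions backing the affected volumes. Below is a minimal, hypothetical sketch, assuming the agent shells out to iscsiadm and that the IQN/portal pairs come from the disconnect list sent by the management server; the IscsiTarget holder and disconnectAll helper are illustrative, not CloudStack code.

    import java.io.IOException;
    import java.util.List;

    /**
     * Hypothetical sketch (not the actual agent code): log the host out of the
     * iSCSI sessions backing a set of managed-storage volumes once a
     * guest-initiated shutdown has been detected.
     */
    public class IscsiDisconnectSketch {

        /** Illustrative value holder for one iSCSI target. */
        public static class IscsiTarget {
            final String iqn;     // e.g. "iqn.2010-01.com.example:volume-42"
            final String portal;  // e.g. "10.0.0.5:3260"

            IscsiTarget(String iqn, String portal) {
                this.iqn = iqn;
                this.portal = portal;
            }
        }

        public static void disconnectAll(List<IscsiTarget> targets)
                throws IOException, InterruptedException {
            for (IscsiTarget t : targets) {
                // Equivalent to: iscsiadm -m node -T <iqn> -p <portal> --logout
                Process p = new ProcessBuilder("iscsiadm", "-m", "node",
                        "-T", t.iqn, "-p", t.portal, "--logout")
                        .inheritIO()
                        .start();
                if (p.waitFor() != 0) {
                    System.err.println("iSCSI logout failed for " + t.iqn);
                }
            }
        }
    }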
Extending Config Drive support
* Added support for VMware
* Built configdrive.iso on the SSVM (see the sketch after this list)
* Added support for VPC and Isolated Networks
* Moved the implementation to a new Service Provider
* UI fix: added support for URL-encoded userdata
* Added support for building the systemvm behind a proxy
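As a rough illustration of the configdrive.iso item above, the sketch below stages userdata and packs it with genisoimage. The directory layout, volume label, and helper names are assumptions for the example, not the actual SSVM implementation.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    /** Illustrative sketch: stage userdata and pack it into a config drive ISO. */
    public class ConfigDriveIsoSketch {

        public static Path buildIso(Path stagingDir, String userData, Path isoPath)
                throws IOException, InterruptedException {
            // Lay out the file the guest expects (path is illustrative only).
            Path userDataFile = stagingDir.resolve("cloudstack/userdata/user_data.txt");
            Files.createDirectories(userDataFile.getParent());
            Files.write(userDataFile, userData.getBytes(StandardCharsets.UTF_8));

            // genisoimage -o <iso> -V config-2 -r -J <stagingDir>
            // ("config-2" is the conventional config drive label; assumed here.)
            Process p = new ProcessBuilder("genisoimage", "-o", isoPath.toString(),
                    "-V", "config-2", "-r", "-J", stagingDir.toString())
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                throw new IOException("genisoimage failed for " + isoPath);
            }
            return isoPath;
        }
    }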
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
* CLOUDSTACK-9972: Enhance listVolume API to include physical size and utilization.
Also fixed pool, cluster and pod info
* CLOUDSTACK-9972: Fix volume_view and duplicate API constant
* CLOUDSTACK-9972: Backport: Do not allow VMs to be deployed on hosts that are in a disabled pod
* CLOUDSTACK-9972: Fix missing localization keys
* CLOUDSTACK-9972: Fix SQL path
Issue
====
Starting an instance fails after reverting to a VM snapshot when there are one or more child VM snapshots in the snapshot tree of the VM.
The code that detects the presence of a snapshot checks only for the current snapshot instead of checking for any snapshot in the snapshot tree.
Because not all snapshots are detected, CloudStack reconfigures the VM incorrectly, assuming the VM has no snapshots.
This results in the start failure.
Fix
===
Ensure correct detection of VM snapshots in the VM snapshot tree
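A minimal sketch of what the corrected check amounts to, using a stand-in SnapshotNode type instead of the real hypervisor SDK classes: a VM has snapshots if its snapshot tree contains any node at all, not only a current one.

    import java.util.List;

    /**
     * Minimal sketch of the corrected check. SnapshotNode is a stand-in for the
     * hypervisor SDK's snapshot-tree node type; the fix walks the whole tree
     * rather than looking only at the current snapshot pointer.
     */
    public class SnapshotTreeCheckSketch {

        public static class SnapshotNode {
            final String name;
            final List<SnapshotNode> children;

            SnapshotNode(String name, List<SnapshotNode> children) {
                this.name = name;
                this.children = children;
            }
        }

        /** Count every snapshot in the tree, walking all child branches. */
        public static int countSnapshots(List<SnapshotNode> roots) {
            if (roots == null) {
                return 0;
            }
            int total = 0;
            for (SnapshotNode node : roots) {
                total += 1 + countSnapshots(node.children);
            }
            return total;
        }

        /** A VM "has snapshots" if its tree contains any node, current or not. */
        public static boolean hasAnySnapshot(List<SnapshotNode> roots) {
            return countSnapshots(roots) > 0;
        }
    }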
This closes #1828
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
(cherry picked from commit 673bb25b5936d1c54e9210781280e9ddc507c830)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Improve disk chain usage while attaching, migrating disks
- Gets the root disk controller based on the diskDeviceBusName from the volume's chain info
- Refactor and move VirtualMachineDiskInfo to cloud-utils
- Allows mixing of scsi controller types
- Fixes an NPE when the map is passed as null, for example in the detach volume command
- Use an osdefault translator that allows use of recently added OS types whose enums are not available in the SDK
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
commit #7
So far only one controller each for SCSI and IDE is supported in CloudStack; this is an existing limitation.
Added support for a second IDE controller and for attaching IDE virtual disks to a VM. Also added a check
on the VM's power state, since an IDE virtual disk cannot be attached while the VM is running. If the user
detaches a virtual disk on a lower unit number of the controller, a subsequent attach operation should find
the free unit number on the controller and attach the virtual disk there (see the sketch below).
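A rough sketch of those two checks; the constants and helper are illustrative, not the actual attach code.

    import java.util.Set;

    /** Rough sketch of the IDE attach rules described above. */
    public class IdeAttachSketch {

        // A VMware IDE controller exposes two unit numbers (0 and 1).
        static final int IDE_UNITS_PER_CONTROLLER = 2;

        public static int pickFreeIdeUnit(boolean vmRunning, Set<Integer> unitsInUse) {
            if (vmRunning) {
                // IDE virtual disks cannot be hot-attached.
                throw new IllegalStateException("IDE disks cannot be attached while the VM is running");
            }
            // Reuse the lowest free slot, e.g. one freed by an earlier detach.
            for (int unit = 0; unit < IDE_UNITS_PER_CONTROLLER; unit++) {
                if (!unitsInUse.contains(unit)) {
                    return unit;
                }
            }
            throw new IllegalStateException("No free unit number on this IDE controller");
        }
    }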
commit #6
Let the controllers of existing VMs continue without flipping: the current busInfo retrieved from the
chain_info field of the volume record in the database is preferred over controller settings from any
configuration setting.
commit #5
Editing the global configuration parameter vmware.root.disk.controller to the osdefault value results in
loss of the previous root disk controller type, so the root disk's controller type for legacy VMs is unknown
after that modification. If the VM is stopped and started, this information can be retrieved from the bus
info of the existing volume, but if the user resets the VM and then tries to start it, the existing bus info
is lost and the existing disk info cannot be relied on. Use the lsilogic (generic SCSI) controller for the
ROOT disk of legacy VMs after a reset.
commit #4
Avoid adding additional (>1) SCSI controllers to system VMs. While attaching a volume to a legacy VM,
don't use the osdefault option, which applies only to VMs created with that option enabled; use the
legacy data disk controller type (lsilogic) instead.
commit #3
If the root disk's controller type is SCSI and the data disk controller type resolves to any of the SCSI
sub-types, the data disk controller type falls back to the root disk's controller itself. This ensures data
volumes are accessible in all cases, because the root volume's controller is reliable and indicates the VM
has a supported controller. It also avoids mixing SCSI controller sub-types in a user instance.
The generic disk controller type scsi is also translated to lsilogic.
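A simplified sketch of that fallback, assuming the usual VMware controller names; the resolution rules here approximate the behaviour described above rather than reproduce the exact implementation.

    import java.util.Locale;
    import java.util.Set;

    /** Simplified sketch of the controller fallback described above. */
    public class DiskControllerFallbackSketch {

        private static final Set<String> SCSI_SUBTYPES =
                Set.of("scsi", "lsilogic", "lsilogicsas", "buslogic", "pvscsi");

        public static String resolveDataDiskController(String rootController, String dataController) {
            String root = rootController.toLowerCase(Locale.ROOT);
            String data = dataController.toLowerCase(Locale.ROOT);

            // The generic "scsi" type is treated as lsilogic.
            if ("scsi".equals(root)) {
                root = "lsilogic";
            }

            // If both are SCSI, reuse the root disk's controller so sub-types never mix.
            if (SCSI_SUBTYPES.contains(root) && SCSI_SUBTYPES.contains(data)) {
                return root;
            }
            return "scsi".equals(data) ? "lsilogic" : data;
        }
    }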
commit #2
Support auto detection of recommended virtual disk controller type for specific guest OS.
commit #1
Support granular controller types. Add support for controller types in template registration as well.
Fix whitespace.
Removed stale HEAD merge lines
Removed tail of merge lines
Fixed VmwareResource, removing storage commands that moved to VmwareStorageProcessor.
Removed stale controller code that is now present in the processor
Fixed checkstyle errors.
Fixed injection.
Tested with Linux and Windows templates. Unable to run ISO-based tests due to a few bugs in the register ISO area.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
(cherry picked from commit a4cc987a6f)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
1. If a VM by the same name exists on a different cluster in the VMware DC, unregister the existing VM and continue with the VM start.
2. If VM start succeeds, delete VM files associated with the unregistered VM.
3. If VM start fails, re-register the unregistered VM.
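A hypothetical sketch of that start flow; VmOps is a placeholder for the real hypervisor-facing calls, which are not shown here.

    /** Hypothetical sketch of starting a VM when a same-named VM exists in another cluster. */
    public class StartWithNameCollisionSketch {

        public interface VmOps {
            boolean existsInOtherCluster(String vmName);
            void unregister(String vmName);
            void reRegister(String vmName);
            void deleteFilesOf(String vmName);
            void start(String vmName) throws Exception;
        }

        public static void startVm(VmOps ops, String vmName) throws Exception {
            boolean unregisteredStaleVm = false;
            if (ops.existsInOtherCluster(vmName)) {
                ops.unregister(vmName);          // 1. clear the name collision
                unregisteredStaleVm = true;
            }
            try {
                ops.start(vmName);
                if (unregisteredStaleVm) {
                    ops.deleteFilesOf(vmName);   // 2. start succeeded: drop the stale files
                }
            } catch (Exception e) {
                if (unregisteredStaleVm) {
                    ops.reRegister(vmName);      // 3. start failed: restore the old VM
                }
                throw e;
            }
        }
    }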
For the AttachVolume/DetachVolume API commands, improve the user error message in case of a RuntimeException by throwing the exception instead of 'Unexpected Exception'.
While attaching a new disk to an instance, the unit number on the controller key should be the lowest unit number
that is not in use, and in case the controller type is SCSI, it shouldn't be the reserved SCSI unit number.
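As a sketch, selecting the unit number could look like the following, assuming a 16-unit SCSI controller whose unit 7 is reserved for the controller itself; the helper is illustrative.

    import java.util.Set;

    /** Sketch: pick the lowest unused unit on a SCSI controller, skipping the reserved slot. */
    public class ScsiUnitNumberSketch {

        private static final int RESERVED_SCSI_UNIT = 7;   // reserved for the controller itself
        private static final int MAX_SCSI_UNITS = 16;      // units 0-15 on a VMware SCSI controller

        public static int lowestFreeScsiUnit(Set<Integer> unitsInUse) {
            for (int unit = 0; unit < MAX_SCSI_UNITS; unit++) {
                if (unit == RESERVED_SCSI_UNIT || unitsInUse.contains(unit)) {
                    continue;
                }
                return unit;
            }
            throw new IllegalStateException("No free unit number left on this SCSI controller");
        }
    }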
A change in the vCenter 5.5 API from prior versions forced a code change in CloudStack. The updated value of the property "VirtualE1000.deviceInfo.summary" is now accommodated.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
[VMware] Synchronize the tasks of disk preparation and reconfiguration of a VM with the disk.
This is to avoid attaching a disk with a wrong device number on the controller key.
[VMware] While attaching a new disk to an instance, the unit number on the controller key should be the lowest unit number on the key that is not in use,
instead of a number that is one higher than the highest unit number currently in use.
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
If a volume has been renamed upon migration update its volumePath to that of the new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
DetachISO succeeds even though the detach operation is failing, because the CD-ROM is locked by the VM since it is mounted inside the guest.
Detect whether the CD-ROM is locked; if it is, fail the detach operation and warn the user to unmount the ISO/CD-ROM device inside the guest before detaching it.
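A hypothetical sketch of the guard; VmQuestionSource stands in for however the pending vCenter question would actually be surfaced, so the detection here is an assumption.

    /** Hypothetical guard: abort the ISO detach if the guest has locked the CD-ROM. */
    public class CdromDetachGuardSketch {

        public interface VmQuestionSource {
            /** Latest pending question text for the VM, or null if none. */
            String pendingQuestionText();
        }

        public static void checkDetachAllowed(VmQuestionSource vm) {
            String question = vm.pendingQuestionText();
            // vCenter raises a "CD-ROM door is locked" style question when the guest holds the media.
            if (question != null && question.toLowerCase().contains("locked")) {
                throw new IllegalStateException(
                        "The CD-ROM is locked by the guest OS; unmount it inside the VM before detaching the ISO");
            }
        }
    }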
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
During the ISO detach operation, answer the question raised by vCenter by programmatically answering the VM question during the detach process.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
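As an illustration of that auto-answer step, the sketch below mirrors the shape of the vSphere AnswerVM call behind a placeholder interface; the interface and method names are assumptions, not the actual VirtualMachineMO code.

    /** Illustrative sketch of answering the pending vCenter VM question during ISO detach. */
    public class AnswerVmQuestionSketch {

        /** Placeholder for the vSphere-facing calls (mirrors the AnswerVM operation). */
        public interface VmQuestionApi {
            /** Pending question id, or null if the VM is not blocked on a question. */
            String pendingQuestionId();
            /** Answer the pending question with the given choice key. */
            void answer(String questionId, String choiceKey);
        }

        /** Returns true if a pending question was found and answered. */
        public static boolean answerPendingQuestion(VmQuestionApi vm, String choiceKey) {
            String questionId = vm.pendingQuestionId();
            if (questionId == null) {
                return false; // nothing blocking the detach
            }
            vm.answer(questionId, choiceKey);
            return true;
        }
    }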