* Fix resource count discrepancies
* Fix up resource count handling while removing a VM
* Fix discrepancies when starting VMs
* Fixup tests
* Fixups
* Don't take lock when amount is negative
Adds:
- Fix for volume limit checks for disk offerings with multiple tags - When a VM is deployed with multiple disks whose offerings have multiple tags, the resource limit check may falter because it currently checks each disk offering individually. With this change, if offerings d1 and d2 for volumes v1 and v2 both have tag1, the server checks the volume limit for tag1 using the combined size of v1 and v2 (see the sketch after this list).
- Fix for template-tagged hosts in the random host allocator - may affect the combined use of template tags, service offering tags and the random host allocator. The current random host allocator code falters while trying to find a host allocation. This was found and fixed while adding the unit test here: https://github.com/shapeblue/cloudstack-apple/pull/378/files#diff-bbf9baea014e5cc1dfe9e7d13467c9857208cfe65e93883721d88a6f0452f912
- Unit tests for changes in api,server,ui: tagged resource limits #327
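A minimal sketch of the per-tag limit check described in the first fix above; `checkResourceLimitWithTag` and the surrounding wiring are illustrative placeholders for the actual ResourceLimitService additions, the point is that sizes are grouped by tag so each tag is checked once with the combined size:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: group requested volume sizes by disk offering tag and perform
// one limit check per tag with the combined size (d1/d2 with tag1 -> size(v1)+size(v2)).
public class TaggedVolumeLimitCheck {
    static void checkStorageLimits(Account owner, Map<DiskOffering, Long> requestedSizes,
                                   ResourceLimitService limitService) throws ResourceAllocationException {
        Map<String, Long> sizeByTag = new HashMap<>();
        for (Map.Entry<DiskOffering, Long> entry : requestedSizes.entrySet()) {
            String tags = entry.getKey().getTags();
            if (tags == null) {
                continue;
            }
            for (String tag : tags.split(",")) {
                sizeByTag.merge(tag.trim(), entry.getValue(), Long::sum);
            }
        }
        for (Map.Entry<String, Long> entry : sizeByTag.entrySet()) {
            // Hypothetical per-tag check; the real method name in ResourceLimitService may differ.
            limitService.checkResourceLimitWithTag(owner, Resource.ResourceType.primary_storage,
                    entry.getKey(), entry.getValue());
        }
    }
}
```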
* Added the VM UUID to the error response when VM creation fails after the VM entity is persisted
* Fixed styling issue
* Fixed styling issue.
* Fix unit tests
* Fixed merge conflicts.
* Fixed merge conflicts.
---------
Co-authored-by: Annie Li <ji_li@apple.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Introduces the concept of tagged resource limits. Limits can be enforced on accounts and domains for the deployment of entities that use a tagged resource. Tagged resource limits can currently be used for the following resource types:
Host limits:
- user_vm
- cpu
- memory
Storage limits:
- volume
- primary_storage
The following global settings can be used to specify the tags for which limits need to be enforced:
- Host: resource.limit.host.tags
- Storage: resource.limit.storage.tags
Options for specifying tagged resource limits and viewing tagged resource usage are available in the UI.
Enhances the use of templatetag for VM deployment and template creation.
Adds an option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
* Allocate new volume on restore virtual machine operation when resource count increment succeeds
- keep them in a transaction, and fail the operation if the resource count increment fails (see the sketch below)
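A rough sketch of the transactional flow described above, using CloudStack's Transaction helper; the manager calls and the allocateDuplicateVolume signature are illustrative, not the exact code:

```java
// Illustrative only: the limit checks/increments and the new root volume allocation run
// in one DB transaction, so a failed check (limit exceeded) aborts the restore operation
// and nothing is left half-allocated.
VolumeVO newRoot = Transaction.execute(
        new TransactionCallbackWithException<VolumeVO, ResourceAllocationException>() {
    @Override
    public VolumeVO doInTransaction(TransactionStatus status) throws ResourceAllocationException {
        // Throws ResourceAllocationException when the limit would be exceeded, rolling back the block
        resourceLimitMgr.checkResourceLimit(owner, Resource.ResourceType.volume);
        resourceLimitMgr.checkResourceLimit(owner, Resource.ResourceType.primary_storage, root.getSize());
        resourceLimitMgr.incrementResourceCount(owner.getId(), Resource.ResourceType.volume);
        resourceLimitMgr.incrementResourceCount(owner.getId(), Resource.ResourceType.primary_storage,
                root.getSize());
        return volumeMgr.allocateDuplicateVolume(root, newTemplateId);   // illustrative signature
    }
});
```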
* Added some (negative) unit tests for restore vm
Note: these are, however, redundant on both the main branch and the private branch, across two classes.
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes the case of appending userdata when both the template and VM data are either shell scripts or cloud-config (see the sketch below)
Fixes an error when appending gzipped userdata
Fixes the case where manual userdata text from the VM was not being decoded and re-encoded correctly
Fixes the case of appending multipart data when both the template and VM data contain the same format types
Refactor: moved the validateUserData method to the UserDataManager class
Refactor: userdata test now checks the resulting multipart userdata thoroughly
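A minimal sketch of the same-format append case mentioned above (both the template and VM userdata are shell scripts); this is an illustrative helper, not the actual UserDataManager code:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative only: when template and VM userdata are both shell scripts, decode,
// concatenate and re-encode them instead of wrapping them in a multipart MIME message.
public class UserDataAppendExample {
    public static String appendShellScripts(String templateB64, String vmB64) {
        String template = new String(Base64.getDecoder().decode(templateB64), StandardCharsets.UTF_8);
        String vm = new String(Base64.getDecoder().decode(vmB64), StandardCharsets.UTF_8);
        // Drop the second shebang so the result stays a single valid shell script
        String vmBody = vm.startsWith("#!") ? vm.substring(vm.indexOf('\n') + 1) : vm;
        String merged = template + "\n" + vmBody;
        return Base64.getEncoder().encodeToString(merged.getBytes(StandardCharsets.UTF_8));
    }
}
```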
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit 729e6d1446)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Design document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+Minimal+changes+to+allow+new+dynamic+hypervisor+type%3A+Custom+Hypervisor
This PR introduces the minimal changes to add a new hypervisor type (internally named Custom in the codebase, with a configurable display name), allowing an external hypervisor plugin to be written for CloudStack as a Custom hypervisor.
The custom hypervisor name is set by the setting 'hypervisor.custom.display.name'. The new hypervisor type does not affect the behaviour of any CloudStack operation; it simply introduces a new hypervisor type into the system.
CloudStack does not have any means to dynamically add new hypervisor types. The hypervisor types are internally preset by an enum defined within the CloudStack codebase, and unless a new version adds support for a new hypervisor it is not possible to add a host of a hypervisor that is not part of the enum. It is, however, possible to implement minimal changes in CloudStack to support a new hypervisor plugin that may be developed privately.
This PR is initial work on allowing new dynamic hypervisor types (it adds a new element to the HypervisorType enum, but allows a variable display name for the hypervisor).
Replaces the fixed HypervisorType enum with an extensible registry mechanism, registered from the hypervisor plugin (see the sketch at the end of this section).
- The new hypervisor type is internally named 'Custom' to the CloudStack services (management server and agent services, database records).
- A new global setting ‘hypervisor.custom.display.name’ allows administrators to set the display name of the hypervisor type. The display name will be shown in the CloudStack UI and API.
- In case the ‘hypervisor.list’ setting contains the display name of the new hypervisor type, the setting value is automatically updated after the ‘hypervisor.custom.display.name’ setting is updated.
- The new Custom hypervisor type supports:
- Direct downloads (the ability to download templates into primary storage from the hypervisor hosts without using secondary storage)
- Local storage (use hypervisor hosts local storage as primary storage)
- Template format: RAW format (the templates to be registered on the new hypervisor type must be in RAW format)
- The UI is also extended to display the new hypervisor type and the supported features listed above.
- The above are the minimal changes for CloudStack to support the new hypervisor type, which can be tested by integrating the plugin codebase with this feature.
This PR allows cloud administrators to test custom hypervisor plugin implementations and easily integrate them into CloudStack as a new hypervisor type ("Custom"), reducing the implementation effort to only the hypervisor-specific storage/networking support and the hypervisor resource that communicates with the management server.
- The CloudStack admin should be able to create a zone for the new custom hypervisor and add clusters and hosts to the zone with normal operations
- CloudStack users should be able to execute normal VM/volume/network/storage operations on VMs and volumes running on the custom hypervisor hosts
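A rough sketch of the registry/display-name idea referenced above; this is a simplified, illustrative model, not the actual HypervisorType changes:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: a simplified registry-style hypervisor type lookup where plugins can
// register types by name, and the admin-configured display name
// (hypervisor.custom.display.name) resolves to the internal 'Custom' type.
public class HypervisorTypeRegistry {
    private static final Map<String, String> REGISTRY = new LinkedHashMap<>();

    static {
        // Built-in types register themselves; a hypervisor plugin registers its own type.
        for (String builtIn : new String[] {"KVM", "XenServer", "VMware", "Simulator", "Custom"}) {
            REGISTRY.put(builtIn.toLowerCase(), builtIn);
        }
    }

    public static void register(String name) {
        REGISTRY.put(name.toLowerCase(), name);
    }

    /** Resolve an API/DB name to the internal type, mapping the display name to 'Custom'. */
    public static String getType(String name, String customDisplayName) {
        if (name == null || name.trim().isEmpty()) {
            return "None";
        }
        if (name.equalsIgnoreCase(customDisplayName)) {
            return "Custom";   // internal name stays 'Custom' in DB records and services
        }
        return REGISTRY.getOrDefault(name.toLowerCase(), "None");
    }
}
```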
(cherry picked from commit 8b5ba13b81)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Don't allow inadvertent deletion of hidden details via API
* Update VM details unit test ensuring system/hidden details not removed
* Update test/integration/component/test_update_vm.py
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
When VM deployment fails during the allocation process it may leave behind resources that were already allocated before the failure. They get removed from the database, as the whole code block is wrapped inside a transaction (twice), but the server does not inform the network or storage plugins to clean up the allocated resources.
This PR removes the transactions during VM allocation, so the allocated VM and its resource records remain persisted in the DB even on failure. When a failure is encountered, the VM is moved to the Error state. This allows the VM and its resources to be properly deallocated when it is expunged, either by a server task such as ExpungeTask or during a manual expunge.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Live storage migration of a volume within the same ScaleIO storage cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* Working blockcopy API in libvirt (see the virsh sketch after this list)
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations; secrets get cleared on package upgrades
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
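For reference, the underlying libvirt block-copy flow that the live volume migration relies on is equivalent to the following virsh sequence; this is an illustration only (domain name, disk target and destination path are placeholders), not the agent's actual libvirt-binding code:

```java
import java.io.IOException;

// Illustration only: live block copy of disk "vda" of a running VM to a new block
// device, then pivot the domain onto the copy once it is in sync.
public class BlockCopyIllustration {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Start the copy and mirror writes; --wait blocks until the copy is ready to pivot
        run("virsh", "blockcopy", "i-2-10-VM", "vda",
                "/dev/disk/by-id/new-scaleio-volume", "--blockdev", "--wait", "--verbose");
        // Check the block job status/progress for the disk
        run("virsh", "blockjob", "i-2-10-VM", "vda", "--info");
        // Switch the domain to the destination volume, completing the migration
        run("virsh", "blockjob", "i-2-10-VM", "vda", "--pivot");
    }

    private static void run(String... cmd) throws IOException, InterruptedException {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }
}
```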
Fixes #6697
Allows the server to generate started and completed events for the VM.CREATE event type when a VM is deployed with startvm=false.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* 4.17:
KVM: revert libvirtd config and retry if fail to add a host (#7090)
UI: display cpu cores and speed instead of cputotal by default (#7106)
storage: validate disk size range of custom disk offering when resize volume (#7073)
Fixes #6701
When volume migration is initiated by the system, the account check is not needed.
Introduces a new global setting - allow.diskoffering.change.during.scale.vm. This determines whether to allow or disallow a disk offering change for the root volume while scaling a stopped or running VM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: Rohit Yadav <rohityadav89@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR blocks a change of service offering if the offerings' root volume encryption values don't match. We don't support dynamically adding or removing encryption on a VM.
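A sketch of the kind of guard described above; the accessor names (e.g. getEncryptRoot) are illustrative, and the real check compares the disk offerings backing the current and requested service offerings:

```java
// Illustrative only: block the offering change when root volume encryption differs,
// since encryption cannot be added to or removed from an existing VM dynamically.
ServiceOfferingVO current = serviceOfferingDao.findById(vm.getServiceOfferingId());
ServiceOfferingVO requested = serviceOfferingDao.findById(newServiceOfferingId);
if (current.getEncryptRoot() != requested.getEncryptRoot()) {   // illustrative accessor
    throw new InvalidParameterValueException("Cannot change service offering: root volume "
            + "encryption settings of the current and new offerings do not match");
}
```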
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.
In some cases cloud customers may still prefer to maintain their own guest-level volume encryption if they don't trust the cloud provider. For private cloud cases, however, this greatly simplifies the guest OS experience of running volume encryption for guests: the user does not have to manage keys, deal with key servers, or make guest booting dependent on network connectivity to them (e.g. Tang), which matters especially where users occasionally attach/detach data disks and move them between VMs.
The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.
This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.
NOTE: While not required, operations can be significantly sped up by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.
### Management Server
##### API
* createDiskOffering now has an 'encrypt' Boolean (see the sketch after this list)
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM. This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off; rather, the offering should be referenced. This follows the same pattern as other disk-offering-based settings such as the IOPS of the volume.
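For illustration, the API-layer change for disk offerings roughly amounts to a new command parameter along the following lines (this fragment is paraphrased, not copied from the actual CreateDiskOfferingCmd):

```java
// Illustrative fragment of a CreateDiskOfferingCmd-style API command class.
@Parameter(name = "encrypt", type = CommandType.BOOLEAN, required = false,
        description = "Volumes created from this offering will be encrypted on primary storage")
private Boolean encrypt;

public boolean getEncrypt() {
    return encrypt != null && encrypt;
}
```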
##### Volume functions
A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.
Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.
Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume
Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).
##### Primary Storage Support
For storage developers, adding encryption support involves:
1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption with storage that supports it.
2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume, or (as is the case with the KVM storage types) as complex as managing volume details directly on the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested (see the sketch below).
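A rough sketch of how a primary storage driver might branch on encryption; the accessor names, such as getPassphrase, are illustrative of the data made available to the driver rather than an exact CloudStack signature:

```java
// Illustrative only: inside a PrimaryDataStoreDriver's volume-create path, detect a
// volume that requests encryption and handle it accordingly.
if (dataObject instanceof VolumeInfo) {
    VolumeInfo volume = (VolumeInfo) dataObject;
    byte[] passphrase = volume.getPassphrase();   // populated when the offering requests encryption
    if (passphrase != null && passphrase.length > 0) {
        // e.g. pass an encryption flag to the storage array API, or send the creation
        // command to a host that advertises encryption support so qemu-img/cryptsetup
        // can set up a LUKS volume with this passphrase.
    }
}
// ... continue with the normal volume creation path ...
```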
##### Scheduling
For the KVM implementations specified above, we depend on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for functioning `cryptsetup` and support in `qemu-img`. It is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.
The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption. This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.
VM scheduling has also been modified. When a VM start is requested, if any attached volume requires encryption, hosts that don't support encryption are filtered out (see the sketch below).
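The scheduling change can be pictured roughly as the following filter; the accessor names here are illustrative placeholders for the host capability reported via the StartupRoutingCommand:

```java
// Illustrative only: when any volume of the VM requires encryption, keep only hosts
// that advertised volume encryption support at agent startup.
boolean vmNeedsEncryption = vmVolumes.stream().anyMatch(vol -> vol.getPassphraseId() != null);
if (vmNeedsEncryption) {
    candidateHosts.removeIf(host -> !host.isVolumeEncryptionSupported());   // illustrative accessor
}
```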
##### DB Changes
A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database. The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.
#### KVM Agent
For the KVM storage pool types supported, the encryption has been implemented in QEMU itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process and is decrypted before the block device is visible to the guest. This may not be necessary in order to implement encryption for /your/ storage pool type; maybe you have a kernel driver that decrypts before the block device is exposed on the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and it provides the lowest surface area for decrypted guest data.
For qcow2-based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block-based storage (currently just ScaleIO), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for the template copy (both are illustrated below).
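For reference, the two agent-side formatting paths described above boil down to invocations along these lines; the paths, sizes and keyfile location are placeholders (the keyfile holds the volume passphrase), and this is an illustration rather than the agent's actual wrapper code:

```java
import java.util.Arrays;
import java.util.List;

// Illustration only: how the underlying tools are driven for the two storage shapes.
public class LuksFormatIllustration {
    // qcow2-backed volume: qemu-img creates the file with built-in LUKS encryption
    static final List<String> QCOW2_LUKS = Arrays.asList(
            "qemu-img", "create", "-f", "qcow2",
            "--object", "secret,id=sec0,file=/run/cloudstack/keyfile",
            "-o", "encrypt.format=luks,encrypt.key-secret=sec0",
            "/var/lib/libvirt/images/ROOT-1.qcow2", "20G");

    // raw block data disk (e.g. ScaleIO): cryptsetup formats the device as LUKS
    static final List<String> BLOCK_LUKS = Arrays.asList(
            "cryptsetup", "luksFormat", "--batch-mode", "--type", "luks1",
            "--key-file", "/run/cloudstack/keyfile", "/dev/disk/by-id/scaleio-vol");

    public static void main(String[] args) throws Exception {
        new ProcessBuilder(QCOW2_LUKS).inheritIO().start().waitFor();
        new ProcessBuilder(BLOCK_LUKS).inheritIO().start().waitFor();
    }
}
```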
Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:
1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.
2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the CloudStack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable`, and when it is closed it is deleted. This allows us to try-with-resources any volume operations and have the KeyFile removed regardless of the outcome (see the sketch below).
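A simplified sketch of the `KeyFile` idea; the real class in the KVM agent code differs in details, but the lifecycle is the same:

```java
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Simplified sketch: the passphrase is written to a file on the agent's tmpfs, handed
// to qemu-img via --object secret, and deleted when the try-with-resources block exits.
class KeyFile implements Closeable {
    private final Path path;

    KeyFile(Path dir, byte[] passphrase) throws IOException {
        this.path = Files.createTempFile(dir, "keyfile", null);
        Files.write(path, passphrase);
    }

    Path getPath() {
        return path;
    }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path);   // passphrase removed from disk regardless of the outcome
    }
}

// Usage:
// try (KeyFile key = new KeyFile(Paths.get("/run/cloudstack"), passphrase)) {
//     runQemuImg(key.getPath());   // illustrative helper invoking qemu-img with the key secret
// }   // keyfile deleted here, even if qemu-img failed
```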
In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.
It should be noted that there are also a few different enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the `cryptsetup` utility wrapper has `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.
Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice libvirt and QEMU only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume; since they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default; old volumes will still reference LUKS1 and have the headers on disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.
Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption uses the XTS-AES-256 cipher, which is the leading industry standard and a widely used cipher today (e.g. BitLocker and FileVault).
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
This PR addresses parallel resource allocation as a generalization of the problem and solution described in #6644. Instead of a global lock on the resources, a reservation record is created and included in the resource count check in ResourceLimitService/ResourceLimitManagerImpl. As a convenience, a CheckedReservation class is provided. It implements AutoCloseable and can be used as a guard in a try-with-resources fashion; the close method of the CheckedReservation deletes the reservation record (see the sketch below).
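A condensed sketch of the CheckedReservation guard; the DAO and service calls are simplified stand-ins for the actual implementation, but the AutoCloseable shape and the try-with-resources usage are as described above:

```java
// Simplified sketch: check the limit (counting existing usage plus outstanding
// reservations), persist a reservation record, and remove it again on close().
class CheckedReservation implements AutoCloseable {
    private final ReservationDao reservationDao;   // simplified stand-in for the CloudStack DAO
    private final long reservationId;

    CheckedReservation(Account account, Resource.ResourceType type, long amount,
                       ReservationDao reservationDao, ResourceLimitService limitService)
            throws ResourceAllocationException {
        this.reservationDao = reservationDao;
        limitService.checkResourceLimit(account, type, amount);   // throws if the limit would be exceeded
        this.reservationId = reservationDao.persist(
                new ReservationVO(account.getAccountId(), account.getDomainId(), type, amount)).getId();
    }

    @Override
    public void close() {
        reservationDao.remove(reservationId);   // reservation record deleted, success or failure
    }
}

// Usage as a guard around the allocation:
// try (CheckedReservation reservation =
//         new CheckedReservation(account, Resource.ResourceType.volume, size, reservationDao, limitService)) {
//     allocateVolume(...);   // counted against the limit while the reservation exists
// }
```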
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
Adds an option to provide custom DNS servers for isolated networks, shared networks and VPC tiers.
New API parameters have been added to the createNetwork API along with the corresponding response parameters.
Doc PR: apache/cloudstack-documentation#276
Fixes #6679
Fixes behaviour when a VM is scaled to a new compute offering that has the same disk offering associated as the earlier compute offering.