Commit Graph

1265 Commits

Author SHA1 Message Date
kishankavala 80bbb29abf
CleanUp Async Jobs after mgmt server maintenance (#8394)
This PR fixes resources getting stuck in transitional states during async job cleanup.

Problem:
During maintenance of the management server, other servers in the cluster or the same server after a restart initiate async job cleanup. However, this process leaves resources in a transitional state. The only recovery option currently available is to make direct database changes.

Solution:
This PR introduces a resolution by moving Volume, Virtual Machine, and Network resources out of their transitional states. This adjustment enables failed operations to be retried without the need for manual database modifications.
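
As a rough illustration of the idea (hypothetical names and states, not the actual CloudStack state machine), the cleanup pass can map each transitional state back to a stable one so the failed operation becomes retryable:

```
import java.util.Map;

public class TransitionalStateCleanup {
    enum VolumeState { CREATING, ATTACHING, MIGRATING, READY, ALLOCATED }

    // Hypothetical mapping: each transitional state falls back to a stable one.
    private static final Map<VolumeState, VolumeState> FALLBACK = Map.of(
            VolumeState.CREATING, VolumeState.ALLOCATED,
            VolumeState.ATTACHING, VolumeState.READY,
            VolumeState.MIGRATING, VolumeState.READY);

    static VolumeState recover(VolumeState current) {
        // Stable states are left untouched.
        return FALLBACK.getOrDefault(current, current);
    }

    public static void main(String[] args) {
        System.out.println(recover(VolumeState.MIGRATING)); // READY
    }
}
```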
2024-01-19 13:26:25 +05:30
Vishesh c3b77cb7b8
Fix host stuck in connecting state (#8502)
Multiple PRs show test failures in test_vm_life_cycle.py because no host is available for migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)

While debugging, I noticed that the hosts get stuck in the Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).

On the agent side, it gets stuck waiting for a response from the Script execution.

To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in the Connecting state, and restarting the agent fails due to the named lock. Locks in the DB can be checked using the query below.

SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID() \G;

This PR adds a wait for the ReadyCommand and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock in the database is released.
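
A minimal sketch of the timeout pattern in plain Java (the actual change is to CloudStack's Script utility; this only illustrates bounding the wait so the thread, and the DB named lock it holds, is eventually released):

```
import java.util.concurrent.TimeUnit;

public class BoundedScriptRun {
    // Run a command but give up after a timeout instead of blocking forever.
    static int runWithTimeout(long timeoutSeconds, String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        if (!p.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
            p.destroyForcibly(); // kill the stuck script
            return -1;           // signal timeout to the caller
        }
        return p.exitValue();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWithTimeout(5, "sleep", "60")); // prints -1 after ~5s
    }
}
```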
2024-01-15 13:56:34 +05:30
Suresh Kumar Anaparti e87ce0c723
Fix reorder/list pools when cluster details are not set, while deploying vm / attaching volume (#8373)
This PR fixes reorder/list pools when cluster details are not set, while deploying vm / attaching volume.

Problem:
Attaching a volume to a VM fails on infra with zone-wide pools and vm.allocation.algorithm=userdispersing, as the cluster details are not set (passed as null) while reordering / listing pools by volumes.

Solution:
Ignore cluster details when not set, while reordering / listing pools by volumes.
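
A minimal sketch of the guard (hypothetical types; the real logic lives in the storage pool allocator): when no cluster id is set, the cluster filter is simply skipped instead of failing on null:

```
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PoolReorder {
    record Pool(String name, Long clusterId, long volumeCount) {}

    // Order pools by ascending volume count (dispersion); a null clusterId
    // (zone-wide scope) means "do not filter by cluster" rather than an error.
    static List<Pool> reorder(List<Pool> pools, Long clusterId) {
        List<Pool> filtered = new ArrayList<>();
        for (Pool p : pools) {
            if (clusterId == null || clusterId.equals(p.clusterId())) {
                filtered.add(p);
            }
        }
        filtered.sort(Comparator.comparingLong(Pool::volumeCount));
        return filtered;
    }

    public static void main(String[] args) {
        List<Pool> pools = List.of(new Pool("zone-wide", null, 3),
                new Pool("cluster-1", 1L, 7));
        System.out.println(reorder(pools, null)); // both pools, fewest volumes first
    }
}
```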
2024-01-10 18:13:32 +05:30
Abhishek Kumar 82f7abddb3 Merge remote-tracking branch 'apache/4.18' 2023-12-13 11:24:15 +05:30
Bryan Lima 3bb318bab9
kvm: Add support for cgroupv2 (#8252)
1. Problem description

In Apache CloudStack (ACS), when a VM is deployed on a host with the KVM hypervisor, an XML file is created on the assigned host, which has a property shares that defines the weight of the VM to access the host CPU. The value of this property has no unit, and it is a relative measure to calculate how much CPU a given VM will have on the host. However, this value has a limit, which depends on the version of cgroups utilized by the host's kernel. The problem lies in the range of valid shares values, which varies between both versions: [2, 262144] for cgroups version 1; and [1, 10000] for cgroups version 2. Currently, ACS calculates the value of shares using Equation 1, presented below, where CPU is the number of cores and speed is the CPU frequency; both specified in the VM's compute offering. Therefore, if a compute offering has, for example, 6 cores at 2 GHz, the shares value will be 12000 and an exception will be thrown by libvirt if the host utilizes cgroups v2. The second version is becoming the default one in current Linux distributions; thus, it is necessary to address this limitation.

    Equation 1
    shares = CPU * speed

Fixes: #6744
2. Proposed changes

To address the problem described, we propose to apply a scale conversion considering the max shares of the host. Using the same formula currently utilized by ACS, it is possible to calculate the maximum shares of a VM for a given host. In other words, using the number of cores and the nominal speed of the host's CPU as the upper limit of shares allowed to a VM. Then, this value will be scaled to the allowed interval of [1, 10000] of cgroup v2 by using a linear scale conversion.

The VM shares would be calculated with Equation 2, presented below, where VM requested shares is the requested shares value calculated using Equation 1, cgroup upper limit is fixed at 10000 (the cgroups v2 upper limit), and host max shares is the maximum shares value of the host, also calculated using Equation 1. Using Equation 2, the only case where a VM exceeds the cgroups v2 limit is when the user requests more resources than the host has, which is not possible with the current implementation of ACS.

    Equation 2
    shares = (VM requested shares * cgroup upper limit) / host max shares
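
A small sketch of both equations in Java, using the numbers from the description above (the 6-core 2 GHz offering is from the text; the 32-core 2.5 GHz host is an assumed example):

```
public class CgroupShares {
    static final int CGROUP_V2_UPPER_LIMIT = 10_000;

    // Equation 1: shares = CPU * speed (cores * MHz).
    static long requestedShares(int cpus, int speedMhz) {
        return (long) cpus * speedMhz;
    }

    // Equation 2: linear scale of the request into cgroup v2's [1, 10000].
    static long scaledShares(long vmRequestedShares, long hostMaxShares) {
        return vmRequestedShares * CGROUP_V2_UPPER_LIMIT / hostMaxShares;
    }

    public static void main(String[] args) {
        long vm = requestedShares(6, 2000);         // 12000, over the v2 limit
        long host = requestedShares(32, 2500);      // host capacity: 80000
        System.out.println(scaledShares(vm, host)); // 1500, within [1, 10000]
    }
}
```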

To implement the proposal, the following APIs will be updated: deployVirtualMachine, migrateVirtualMachine and scaleVirtualMachine. When a VM is being deployed, a new verification will be added to find a suitable host: the max shares of each host will be calculated, and the VM's calculated shares will be checked to verify that they do not surpass the host's value. Likewise, the migration of VMs will have a similar new verification. Lastly, the scaling of VMs will also have the same verification for the VM's host.

To determine the max shares of a given host, we will use the same equation currently used in ACS for calculating the shares of VMs, presented in Section 1. When Equation 1 is used to determine the maximum shares of a host, CPU is the number of cores of the host, and speed is the nominal CPU speed, i.e., considering the CPU's base frequency.

It is important to note that these changes are only for hosts with the KVM hypervisor using cgroup v2 for now.
2023-12-13 10:51:24 +05:30
Rene Glover 1031c31e6a
FiberChannel Multipath for KVM + Pure Flash Array and HPE-Primera Support (#7889)
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over Fibre Channel connections; it requires Multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector used to communicate with the primary storage provider, using a ProviderAdapter interface, allowing the code interacting with the primary storage provider APIs to be simpler and have no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera line of storage solutions and the Pure Flash Array line of storage solutions.
2023-12-09 11:31:33 +05:30
Abhishek Kumar c599011ef5 Merge remote-tracking branch 'apache/4.18' 2023-12-08 18:06:15 +05:30
Harikrishna 7eb36367c9
Add lock mechanism considering template id, pool id, host id in PowerFlex Storage (#8233)
Observed a failure to start a new virtual machine with PowerFlex storage. Traced it to concurrent VM starts using the same template and the same host to copy; the second mapping attempt failed.

While creating the volume clone from the seeded template in primary storage, adding a lock whose key string contains the IDs of the template, storage pool and destination host avoids concurrent mapping attempts with the same host.
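
The keying idea can be sketched as below (in-JVM locks for illustration only; the real fix presumably uses CloudStack's DB-backed named locks so it holds across management servers):

```
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class TemplateCopyLock {
    private static final ConcurrentHashMap<String, ReentrantLock> LOCKS =
            new ConcurrentHashMap<>();

    // One lock per (template, pool, host) triple: concurrent VM starts that
    // would map the same template to the same host serialize here.
    static void withLock(long templateId, long poolId, long hostId, Runnable work) {
        String key = templateId + "_" + poolId + "_" + hostId;
        ReentrantLock lock = LOCKS.computeIfAbsent(key, k -> new ReentrantLock());
        lock.lock();
        try {
            work.run();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        withLock(42, 7, 3, () -> System.out.println("cloning template 42 on host 3"));
    }
}
```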
2023-12-08 13:21:16 +05:30
Bryan Lima b0910fc61d
Add dynamic secondary storage selection (#7659) 2023-12-04 09:52:32 +01:00
kishankavala 5651eab49c
ObjectStore Framework with MinIO and Simulator plugins (#7752)
This PR adds Object Storage feature to CloudStack.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+CloudStack+Object+Store
2023-12-01 17:51:00 +05:30
João Jandre 26b01f6f3b
Flexible tags for hosts and storage pools (#7489)
Co-authored-by: João Jandre <joao@scclouds.com.br>
2023-11-30 09:36:47 +01:00
Daan Hoogland 05b9b6e2e7 Merge branch '4.18' into main 2023-11-13 11:36:51 +01:00
Abhishek Kumar d0f3233fda
edge-zone,kvm,iso,cks: allow k8s deployment with direct-download iso (#8142)
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2023-11-10 13:56:05 +01:00
John Bampton f090c77f41
misc: fix spelling (#7549)
Co-authored-by: Stephan Krug <stekrug@icloud.com>
2023-11-02 09:23:53 +01:00
Vishesh 5362bad442
Storage Management (#7949) 2023-11-01 10:46:22 +01:00
Daan Hoogland 587d1d7dba Merge remote-tracking branch 'apache/4.18' into main 2023-10-26 09:37:38 +02:00
slavkap 6ae3b73ca2
Create snapshot from VM snapshot without memory for NFS/Local storage (#8117) 2023-10-26 08:46:14 +02:00
Abhishek Kumar 543c54c718
api,server,ui: snapshot copy, multi-zone replica (#7873)
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.

Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot will return zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.

In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.

As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.

`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.

Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.

`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.

----
New API added
`copySnapshot`

Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form

Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
2023-10-23 09:01:58 +02:00
sato03 e437d1016f
Snapshot removal and storage cleanup logs (#8031) 2023-10-16 16:20:09 +02:00
Rohit Yadav 8a34afa8ab Merge remote-tracking branch 'origin/4.18' 2023-10-11 21:00:06 +05:30
Rohit Yadav 8350ce5aa4
storage: allow VM snapshots without memory for KVM when global setting allows (#8062)
This removes the conditional logic that a comment noted should be removed after PR #5297 is merged, which is applicable for ACS 4.18+. Only when the global setting is enabled and memory isn't selected can a VM snapshot be allowed for VMs on KVM that have qemu-guest-agent running.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2023-10-11 20:56:45 +05:30
SadiJr 1bda2343f3
Improve logs when searching one storage pool to allocate a new volume (#7212)
Co-authored-by: SadiJr <sadi@scclouds.com.br>
2023-09-28 13:42:42 +02:00
Vishesh c69e3c5f42
Remove powermock from engine/storage/configdrive (#7988) 2023-09-22 14:07:29 +02:00
Vishesh 84277e783b
remove powermock from engine (#7975) 2023-09-20 10:11:28 +02:00
Wei Zhou 246bb24b0f Updating pom.xml version numbers for release 4.18.2.0-SNAPSHOT
Signed-off-by: Wei Zhou <weizhou@apache.org>
2023-09-12 17:26:53 +02:00
Wei Zhou 4bdff06acd Updating pom.xml version numbers for release 4.18.1.0
Signed-off-by: Wei Zhou <weizhou@apache.org>
2023-09-07 08:50:50 +02:00
Nicolas Vazquez 8b5ba13b81
plugins: Add Custom hypervisor minimal changes (#7692)
### Description

Design document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+Minimal+changes+to+allow+new+dynamic+hypervisor+type%3A+Custom+Hypervisor

This PR introduces the minimal changes to add a new hypervisor type (internally named Custom in the codebase, with a configurable display name), allowing an external hypervisor plugin to be written as a Custom Hypervisor for CloudStack.

The custom hypervisor name is set by the setting 'hypervisor.custom.display.name'. The new hypervisor type does not affect the behaviour of any CloudStack operation; it simply introduces a new hypervisor type into the system.

CloudStack does not have any means to dynamically add new hypervisor types. The hypervisor types are internally preset by an enum defined within the CloudStack codebase, and unless a new version supports a new hypervisor, it is not possible to add a host of a hypervisor that is not part of the enum. It is possible to implement minimal changes in CloudStack to support a new hypervisor plugin that may be developed privately.

This PR is initial work on allowing new dynamic hypervisor types (it adds a new element to the HypervisorType enum, but allows a variable display name for the hypervisor).

##### Proposed Future work:
Replace the HypervisorType from a fixed enum to an extensible registry mechanism, registered from the hypervisor plugin

#### Feature Specifications
- The new hypervisor type is internally named 'Custom' to the CloudStack services (management server and agent services, database records).
- A new global setting ‘hypervisor.custom.display.name’ allows administrators to set the display name of the hypervisor type. The display name will be shown in the CloudStack UI and API.
   - In case the ‘hypervisor.list’ setting contains the display name of the new hypervisor type, the setting value is automatically updated after the ‘hypervisor.custom.display.name’ setting is updated.
- The new Custom hypervisor type supports:
   - Direct downloads (the ability to download templates into primary storage from the hypervisor hosts without using secondary storage)
   - Local storage (use hypervisor hosts local storage as primary storage)
   - Template format: RAW format (the templates to be registered on the new hypervisor type must be in RAW format)
- The UI is also extended to display the new hypervisor type and the supported features listed above.
- The above are the minimal changes for CloudStack to support the new hypervisor type, which can be tested by integrating the plugin codebase with this feature.


#### Use cases
This PR allows cloud administrators to test custom hypervisor plugin implementations in CloudStack and easily integrate them into CloudStack as a new hypervisor type ("Custom"), reducing the implementation to only the hypervisor-specific storage/networking and the hypervisor resource to communicate with the management server.

- CloudStack admin should be able to create a zone for the new custom hypervisor and add clusters, hosts into the zone with normal operations
- CloudStack users should be able to execute normal VMs/volumes/network/storage operations on VMs/volumes running on the custom hypervisor hosts
2023-08-16 20:53:24 +05:30
John Bampton 6f4503488b
pre-commit: apply `end-of-file-fixer` to all files (#7551) 2023-08-02 13:47:21 +02:00
Vishesh 594c70dde0
Sync precommit config from main (#7732)
Co-authored-by: John Bampton <jbampton@users.noreply.github.com>
Co-authored-by: dahn <daan@onecht.net>
2023-07-07 11:18:16 +02:00
Wei Zhou 09a4a252d7 Merge remote-tracking branch 'apache/4.18' into HEAD 2023-06-21 15:08:56 +02:00
Harikrishna 40cc10a73d
Allow volume migrations in ScaleIO within and across ScaleIO storage clusters (#7408)
* Live storage migration of volume in scaleIO within same storage scaleio cluster

* Added migrate command

* Recent changes of migration across clusters

* Fixed uuid

* recent changes

* Pivot changes

* working blockcopy api in libvirt

* Checking block copy status

* Formatting code

* Fixed failures

* code refactoring and some changes

* Removed unused methods

* removed unused imports

* Unit tests to check if volume belongs to same or different storage scaleio cluster

* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver

* Fixed offline volume migration case and allowed encrypted volume migration

* Added more integration tests

* Support for migration of encrypted volumes across different scaleio clusters

* Fix UI notifications for migrate volume

* Data volume offline migration: save encryption details to destination volume entry

* Offline storage migration for scaleio encrypted volumes

* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API

* Removed unused unittests

* Removed duplicate keys in migrate volume vue file

* Fix Unit tests

* Add volume secrets if they do not exist during volume migrations; secrets are getting cleared on package upgrades.

* Fix secret UUID for encrypted volume migration

* Added a null check for secret before removing

* Added more unit tests

* Fixed passphrase check

* Add image options to the encrypted volume conversion
2023-06-21 11:57:05 +05:30
Vishesh 48af4625a2
Fix end of file precommit for TemplateServiceImplTest.java (#7561) 2023-05-25 13:11:09 +02:00
nvazquez 0024cb0372
Merge branch '4.18' 2023-05-24 11:01:10 -03:00
John Bampton 11d45654a6
misc: fix spelling (#7206)
This PR fixes spellings
2023-05-23 11:06:16 +05:30
Abhishek Kumar 32caf9057e
engine-storage: fix errored template becomes active (#7485)
* engine-storage: fix errored template becomes active

Fixes #7342

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* test

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

---------

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2023-05-12 15:23:16 +02:00
Rohit Yadav a2561df25b Merge remote-tracking branch 'origin/4.18' 2023-05-08 12:57:38 +05:30
Marcus Sorensen ec0f8bddf6
Support local storage live migration for direct download templates (#7453)
Co-authored-by: Marcus Sorensen <mls@apple.com>
2023-05-04 17:37:58 -03:00
Rohit Yadav 8a42ab9ce4 Merge remote-tracking branch 'origin/4.18' 2023-04-14 21:49:12 +05:30
Nicolas Vazquez 2dc016adde
Fix for direct download templates with multiple bypassed references (#7400)
This PR fixes an issue observed with multiple zones and direct download templates on KVM, in which a template gets multiple records in the template_store_ref table. When this happens, the template cannot be used as direct download. In the case of a system VM template using direct download, system VM deployments fail.
2023-04-13 12:48:29 +05:30
Harikrishna b774ee5d11
vmware: Datastore cluster synchronization should check if the child datastores are in UP state or not (#7385)
This fix ensures that when a datastore cluster in VMware is added as a primary storage pool in CloudStack, all the child datastores (which already exist in CS) are in the Up state.

For example:

1. Datastore Cluster DS has two child datastores A and B in vCenter. (B is already added as a storage pool in CloudStack)
2. Now try to add datastore cluster DS into CloudStack as a primary storage pool
3. CloudStack tries to add child datastores A and B in CloudStack; since B is already there in CloudStack, it will reuse the existing storage pool entry and keep it under the parent storage pool DS.

During Step 3 we now check whether B is in the Up state or not.
2023-04-11 22:23:12 +05:30
John Bampton c2e17310d6
Add three more `pre-commit` checks (#7083)
Co-authored-by: dahn <daan@onecht.net>
2023-03-27 13:28:55 +02:00
Daan Hoogland fb4f6a334d Updating pom.xml version numbers for release 4.19.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:46:01 +01:00
Daan Hoogland 05cda2729f Updating pom.xml version numbers for release 4.18.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:38:14 +01:00
Daan Hoogland 0574087284 Updating pom.xml version numbers for release 4.18.0.0
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-11 09:35:41 +01:00
Suresh Kumar Anaparti d8c7e34b38
Improve global settings UI to be more intuitive/logical (#5797)
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: dahn <daan@onecht.net>
2023-01-31 11:23:43 +01:00
João Jandre 61a722548f
Create API to reassign volume (#6938) 2023-01-27 11:10:56 +01:00
John Bampton d74f64a2e1
Use lowercase HTTP header field names so we are compatible with HTTP/2 (#7006) 2023-01-23 11:17:54 +01:00
John Bampton e65c22d883
Fix spelling (#6860) 2022-11-13 10:56:15 +01:00
João Jandre 14937e1adb
Fixed NPE on volume creation from snapshot (#6839)
Co-authored-by: João Jandre <joao@scclouds.com.br>
2022-10-26 08:44:01 +02:00
dahn 4a06363749
Ova download fix (#6758) 2022-10-21 14:31:19 +02:00
Daniel Augusto Veronezi Salvador 2ca164ac96
Quota custom tariffs (#5909)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
Co-authored-by: dahn <daan.hoogland@gmail.com>
2022-10-17 10:03:50 +02:00
Daniel Augusto Veronezi Salvador 7936eb04e9
server: Fix delete parent snapshot (#6630)
ACS + XenServer works with differential snapshots. ACS takes a full volume snapshot and the next ones are referenced as children of the previous snapshot until the chain reaches the limit defined in the global setting snapshot.delta.max; then, a new full snapshot is taken. PR #5297 introduced disk-only snapshots for KVM volumes. Among the changes, the delete process was also refactored. Before the changes, when one was removing a snapshot with children, ACS was marking it as Destroyed and keeping the Image entry in the table cloud.snapshot_store_ref as Ready. When ACS rotated the snapshots (the max delta was reached) and all the children were already marked as removed, ACS would start removing the whole hierarchy, completing the differential snapshot cycle. After the changes, snapshots with children stopped being marked as removed and the differential snapshot cycle was no longer completed.

This PR intends to honor again the differential snapshot cycle for XenServer, making the snapshots to be marked as removed when deleted while having children and following the differential snapshot cycle.

Also, when one takes a volume snapshot and ACS backs it up to the secondary storage, ACS inserts 2 entries in the table cloud.snapshot_store_ref (Primary and Image). When one deletes a volume snapshot, ACS first tries to remove the snapshot from the secondary storage and mark the Image entry as removed; then, it tries to remove the snapshot from the primary storage and mark the Primary entry as removed. If ACS cannot remove the snapshot from the primary storage, it will keep the snapshot as BackedUp, even though it no longer exists in the secondary storage and there is no SNAPSHOT.DELETE entry in cloud.usage_event. In the end, after the garbage collector flow, the snapshot will be marked as BackedUp, with a value in the removed field, and still be rated. This PR also addresses the correction for this situation.

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-10-13 12:31:11 +05:30
Daniel Augusto Veronezi Salvador f7b29856d1
Refactor SnapshotDataStoreDaoImpl (#6751)
* Refactor SnapshotDataStoreDaoImpl and add unit tests

* Create constants for duplicated literals

* Refactor search builders

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
Co-authored-by: dahn <daan.hoogland@gmail.com>
2022-10-11 13:53:02 +02:00
Codegass 69e158d77d
Refactor TestHttp.testHttpclient to avoid the Exception Suppression (#6733)
* Refactor TestHttp.testHttpclient to avoid the Exception Suppression

* Remove the unnecessary import
2022-10-09 15:22:32 +05:30
Harikrishna 713a236843
UserData as first class resource (#6202)
This PR introduces a new feature to make userdata a first-class resource, much like existing SSH keys.

Detailed feature specification document:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Userdata+as+a+first+class+resource
2022-10-05 17:34:59 +05:30
Marcus Sorensen 697e12f8f7
kvm: volume encryption feature (#6522)
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience in terms of running volume encryption for guests without the user having to manage keys, deal with key servers and guest booting being dependent on network connectivity to them (i.e. Tang), etc, especially in cases where users are attaching/detaching data disks and moving them between VMs occasionally.

The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.

This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be significantly sped up by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.

### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM.  This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.

##### Volume functions

A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.

Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.

Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during storage allocation to match volumes that request encryption to storage that supports it.

2. Implementing the encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that request encryption, as sketched below. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume, or (as is the case with the KVM storage types) as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases if encryption is requested.
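
A minimal sketch of the driver-side decision (hypothetical stand-in types, not the real `PrimaryDataStoreDriver` interface): the driver keys off whether a passphrase was handed down with the volume:

```
public class EncryptionAwareDriverSketch {
    // Hypothetical stand-ins for the driver-facing types.
    record VolumeInfo(String name, byte[] passphrase) {}

    interface PrimaryStoreDriver {
        void createVolume(VolumeInfo vol);
    }

    static class ExampleDriver implements PrimaryStoreDriver {
        @Override
        public void createVolume(VolumeInfo vol) {
            boolean encrypt = vol.passphrase() != null && vol.passphrase().length > 0;
            // For a simple backend this may be a single API flag; for the KVM
            // pools it means driving qemu-img/cryptsetup on a capable host.
            System.out.println("create " + vol.name() + (encrypt ? " (LUKS)" : ""));
        }
    }

    public static void main(String[] args) {
        new ExampleDriver().createVolume(new VolumeInfo("vol-1", "s3cret".getBytes()));
    }
}
```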

##### Scheduling

For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.
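
Roughly, the probe idea (a generic Java sketch; the real agent-side check also verifies LUKS support in `qemu-img`):

```
import java.util.concurrent.TimeUnit;

public class HostEncryptionProbe {
    // A host advertises encryption support only if a working cryptsetup
    // binary is found on it.
    static boolean hasCryptsetup() {
        try {
            Process p = new ProcessBuilder("cryptsetup", "--version").start();
            return p.waitFor(5, TimeUnit.SECONDS) && p.exitValue() == 0;
        } catch (Exception e) {
            return false; // binary missing or not executable
        }
    }

    public static void main(String[] args) {
        System.out.println("encryption.supported=" + hasCryptsetup());
    }
}
```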

The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption.  This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.

VM scheduling has also been modified. When a VM start is requested, if any volume that requires encryption is attached, it will filter out hosts that don't support encryption.

##### DB Changes

A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database.  The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.

#### KVM Agent

For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and decrypted before the block device is visible to the guest.  This may not be necessary in order to implement encryption for /your/ storage pool type, maybe you have a kernel driver that decrypts before the block device on the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and provides the lowest surface area for decrypted guest data.

For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.

Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:

1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.

2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the CloudStack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable`, and when it is closed, it is deleted. This allows us to try-with-resources any volume operations and get the KeyFile removed regardless, as sketched below.
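
A minimal sketch of the `KeyFile` pattern as described (assumed shape, not the agent's actual class): a Closeable temp file that deletes itself, so try-with-resources guarantees the passphrase never outlives the operation:

```
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class KeyFileSketch implements Closeable {
    private final Path path;

    // Write the passphrase to a temp file (ideally on tmpfs) that qemu-img
    // can read via a secret --object; the file is removed on close().
    KeyFileSketch(byte[] passphrase) throws IOException {
        path = Files.createTempFile("keyfile", null);
        Files.write(path, passphrase);
    }

    Path path() { return path; }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path); // try-with-resources lands here regardless
    }

    public static void main(String[] args) throws IOException {
        try (KeyFileSketch kf = new KeyFileSketch("s3cret".getBytes())) {
            System.out.println("key at " + kf.path());
        } // file deleted here
    }
}
```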

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`.  These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.

It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the supported encryption format for the `cryptsetup` utility is represented by `LuksType.LUKS`, while `QemuImg` has a `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice libvirt and Qemu only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume; as they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and then available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default; old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption uses the XTS-AES 256 cipher, which is the leading industry standard and a widely used cipher today (e.g. BitLocker and FileVault).

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
2022-09-27 10:20:59 +05:30
João Jandre efbf74ee06
Added new logs to volume creation (#6689)
Co-authored-by: João Paraquetti <joao@scclouds.com.br>
2022-09-26 19:11:14 -03:00
Abhishek Kumar e720b72e15 Merge remote-tracking branch 'apache/4.17' into main 2022-08-31 17:38:30 +05:30
Abhishek Kumar a21efe75df
vmware: fix vm snapshot with datastore cluster, drs (#6643)
Fixes #6595
Sync volume datastore, path and chain info while calculating snapshot chain size after the snapshot operation is complete from vCenter.
2022-08-31 16:00:14 +05:30
Suresh Kumar Anaparti 75da982d73
Updated resource counter to include correct size after volume creation/resize and other improvements (#6587)
* Updated resource counter to include correct size after volume creation/resize and other improvements
- Recalculate resource counters for root domain in the periodic task
- Update correct size in the primary_storage resource counter after volume creation/resize
- Some code improvements

* review and sonarcloud issues

Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
2022-08-16 10:41:42 +02:00
Paula Oliveira 9717ed9af2
Improve log messages on VolumeOrchestrator class (#6408)
Co-authored-by: Paula Zomignani Oliveira <paula@scclouds.com.br>
2022-08-12 09:17:06 +02:00
John Bampton f9347ecf2c
Fix spelling (#6597) 2022-08-03 15:43:47 +05:30
Rohit Yadav 661956cc60 Merge remote-tracking branch 'origin/4.17' 2022-07-20 11:52:26 +05:30
Harikrishna 2c05b63495
kvm: Fix for Revert volume snapshot (#6527)
This PR fixes issue #6209, where the snapshot revert operation fails after certain volume operations like migrate VM with volume / migrate volume / reinstall VM.

The root cause is that after these volume operations, the primary storage entry for that volume gets deleted. We have fixed it here to get the primary datastore entry with respect to the volume and continue the operation.
2022-07-20 11:34:02 +05:30
dahn 731a83babf
add global setting to allow parallel execution on vmware (#6413)
* add global setting to allow parallel execution on vmware

* cleanup setting distribution for vmware.create.full.clone

* query setting in vmware guru

* don't touch other hypervisors' commands

* guru hierarchy cleanup
2022-07-15 10:01:35 +02:00
nvazquez 84eed6db72
Merge branch '4.17' 2022-06-10 08:28:41 -03:00
dahn 90a0ee0b6c
fix pseudo random behaviour in pool selection (#6307)
* refactor and log trace

* tracelogs

* shuffle pools with real randomiser

* single retrieval of async job context

* some review comments addressed

* Apply suggestions from code review

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>

* log formatting

* integration test for distribution of volumes over storages

* move test to smoke tests

* imports

* sonarcloud issue # AYCOmVntKzsfKlhz0HDh

* spellos

* review comments

* review comments

* sonarcloud issues

* unittest

* import

* Update AbstractStoragePoolAllocatorTest.java

Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
2022-06-10 08:06:23 -03:00
nvazquez 0bcc609f05
Updating pom.xml version numbers for release 4.18.0.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:25:35 -03:00
nvazquez 038a669d6b
Updating pom.xml version numbers for release 4.17.1.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:19:44 -03:00
nvazquez c56220fcf2
Updating pom.xml version numbers for release 4.17.0.0
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-05-31 14:33:47 -03:00
Pearl Dsilva 48f7f10089
xen: Fix volume snapshot deletion when it has child snapshots (#6296) 2022-04-22 14:36:08 -03:00
DK101010 ccac1a383f
Feat/add vdisk UUID to list volume (#5848)
* get vdisk uuid from vcenter and store it into database

* add vdisk uuid as external_uuid to listVolume response

* add sql upgrade file

* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>

* update sql add column external_uuid

* Update server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java

Co-authored-by: Wei Zhou <weizhou@apache.org>

* adapt param description for externalUuid

* add 'idempotent column add' to create external_uuid col

* rename method to getExternalDiskUUID

* remove line disk_offering.system_use

Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
2022-04-19 23:34:09 -03:00
slavkap 4004dfcfd8
StorPool storage plugin (#6007)
* StorPool storage plugin

Adds volume storage plugin for StorPool SDS

* Added support for alternative endpoint

Added option to switch to alternative endpoint for SP primary storage

* renamed all classes from Storpool to StorPool

* Address review

* removed unnecessary else

* Removed check about the storage provider

We don't need this check; we can tell if the snapshot is on StorPool by its name from the path

* Check that the current plugin supports all functionality before upgrading CS

* Smoke tests for StorPool plug-in

* Fixed conflicts

* Fixed conflicts and added missed Apache license header

* Removed whitespaces in smoke tests

* Added StorPool plugin jar for Debian

the StorPool jar will be included into cloudstack-agent package for
Debian/Ubuntu
2022-04-14 11:12:01 -03:00
Daniel Augusto Veronezi Salvador 39fad2d9d7
KVM disk-only based snapshot of volumes instead of taking VM's full snapshot and extracting disks (#5297)
* Refactor create volume snapshot with running VM

* Refactor create volume snapshot with stopped VM

* Refactor create volume from snapshot

* Refactor create template from snapshot

* Refactor volume migration (migrateVolume/ migrateVirtualMachineWithVolume)

* Refactor snapshot deletion

* Refactor snapshot reversion

* Adjusts and fix cherry-pick conflicts

* Remove diffuse tests

* Add validation to add the flag '--delete' on the command 'virsh blockcommand' only if the libvirt version is equal to or higher than 6.0.0

* Expunge temporary snapshot only if template creation is from snapshot

* Extract strings to constant

* Remove unused imports

* Fix error on revert backed up snapshot

* Turn method's return to void as it is not used

* Rename method in SnapshotHelper

* Fix folder creation when using SharedMountPoint pool

* Remove static import

* Remove unnused method

* Cover take snapshot in centos 7

* Handle right snapshot flag according to qemu version

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-04-12 08:14:27 -03:00
Pearl Dsilva 431c352a6d
Synchronization of network devices on newly added hosts for Persistent Networks (#5977)
* Persistent Network feature & Marvin component tests

* Cleaned up comments and imports

* fixed small error

* add support to set up persistent networks' resources when a disabled host is enabled

* small fix

* use wildcard instead of hard-coding the bridge name

* allow clean up of resources when removing a host in maintenance mode

* skip test for simulator hypervisor

Co-authored-by: shatoboar <sang-woo.bae@campus.tu-berlin.de>
2022-04-11 23:12:05 -03:00
slavkap 2b075ed39e
Storage-based Snapshots for KVM VMs (#3724)
* VM snapshots of running KVM instances using storage provider plugins for disk snapshots

Added a new virtual machine snapshot strategy which uses storage provider plugins to take/revert/delete snapshots. You can take a VM snapshot without VM memory on a KVM instance, using storage providers' implementations for disk snapshots. Revert and delete functionality is also added, along with Thaw/Freeze commands for KVM instances. The snapshots will be consistent, because we freeze the VM during the snapshotting. Backup to secondary storage is executed after the thaw of the VM, and only if it is enabled in global settings.

* Removed duplicated functionality

Set a few methods in DefaultVMSnapshotStrategy to protected to reuse them without duplicating the code. Removed code that is actually not needed

* Added requirements in global setting kvm.vmstoragesnapshot.enabled

Added more information to the kvm.vmstoragesnapshot.enabled global setting, noting that when the option is enabled it needs installation of:
- qemu version 1.6+
- qemu-guest-agent installed on the guest virtual machine

* Added Apache license header

* Removed commented code

* If "kvm.vmstoragesnapshot.enabled" is null should be considered as false

* removed unused imports, replaced default template

Removed unused imports which were causing failures and replaced the template with CentOS 8

* "kvm.vmstoragesnapshot.enabled" set to dynamic

* Getting status of freeze/thaw commands not the return code

Will check the status of whether the freeze/thaw of the guest VM succeeded, rather than looking at the return code. Code refactoring

* removed "CreatingKVM" VMsnapshot state and events related to it

* renamed AllocatedKVM to AllocatedVM

the states should not be associated with a hypervisor type

* logging the result of the "drive-backup" command

* Check which VM snapshot strategy could handle the vm snapshots

Gets the best match of VM snapshot strategy which could handle the VM snapshots on KVM. Other storage plugins could integrate with this functionality to support group snapshots

* Added poolId in canHandle for KVM hypervisors

Added poolId to the canHandle method, used to check if all volumes are on the same PowerFlex storage pool

* skip smoke tests if the hypervisor's OS type is CentOS

This PR works with functionality included in qemu-kvm-ev which
does not come by default on CentOS. The smoke tests will be skipped if
the hypervisor OS is CentOS

* Added missed import in smoke test

* Suggested change to use ` org.apache.commons.lang.StringUtils.isNotBlank`

* Fix getting device on Ubuntu

On Ubuntu the device isn't provided and we have to get it from the node-name parameter. For the drive-backup command (on Ubuntu), a job-id is also needed, which is the value of node-name (this extra param works on Ubuntu and CentOS as well).

* Removed new snapshot states and functionality for NFS

* throw CloudRuntimeException

provide a proper error message when deleting a VM snapshot fails

* exclude GROUP snapshots when listing snapshots

* Skip tests if there is pool with NFS/Local

* address comments
2022-04-07 21:42:12 -03:00
nvazquez c3854ba781
Merge branch '4.16' 2022-03-20 23:14:57 -03:00
Pearl Dsilva f8b648b938
Fix migration of VM with volume on Ubuntu (#6116)
* Fix migration of VM with volume on Ubuntu

* address comment
2022-03-20 23:14:24 -03:00
nvazquez e3132af64e
Merge branch '4.16' 2022-03-10 08:49:43 -03:00
Wei Zhou 3a456f1b31
server: mark volume snapshots as Destroyed if they do not exist on primary and secondary storage when deleting a volume (#6057)
* server: mark volume snapshots as Destroyed in some cases when deleting a volume in QCOW2 format

When deleting a volume in QCOW2 format, if a volume snapshot does not exist on primary or secondary storage, mark the snapshot as Destroyed.

* Update #6057: remove check on volume format
2022-03-10 08:49:03 -03:00
John Bampton 6401c850b7
Fix spelling (#6064)
* Fix spelling

- `interupted` to `interrupted`
- `paramter` to `parameter`

* Fix more typos
2022-03-08 13:02:35 -03:00
Suresh Kumar Anaparti bc70535ee5
Updating pom.xml version numbers for release 4.16.2.0-SNAPSHOT
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-03-03 18:15:33 +05:30
Suresh Kumar Anaparti cad9332082
Updating pom.xml version numbers for release 4.16.1.0
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-02-25 19:01:16 +05:30
Nicolas Vazquez 7f0a322b7d
[Vmware] Prevent NPE on template registration if guest OS is removed (#5980) 2022-02-11 07:36:59 -03:00
Suresh Kumar Anaparti bf70566c2c
Merge branch '4.16' into main 2022-02-02 17:30:21 +05:30
Nicolas Vazquez 3e92a63155
[XenServer/XCP-ng] Pass the image store NFS version on storage commands (#5886)
* Add NFS version to mount command

* Remove extra line

* Extend NFS version to mount secondary storage

* Unused import

* Refactor NFS version to be granular

* Make use of the ConfigKey on the NFS version setting value
2022-01-31 12:21:13 +05:30
Harikrishna f15cab16da
server: Decouple service (compute) offering and disk offering (#5008)
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Also, creating a compute offering takes a few disk-related parameters which in any case go only to the corresponding disk offering. I think this design was initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done in different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless: it should consider new storage tags and a new size, and place the volume in the appropriate state as defined in the disk offering.

more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring

* Schema changes and disk offering column change from "type" to "compute_only"

* Few more changes

* Decoupled service offering and disk offering

* Remove diskofferingid from vminstance VO

* Decouple service offering and disk offering states

* diskoffering getsize() is only for strict disk offerings

* Fix deployVM flow

* Added new API params to compute offering creation

* Add diskofferingstrictness to serviceoffering vo under quota

* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case

Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume

* Fix User vm response to show proper service offering and disk offerings

* Added disk size strictness in disk offering response

* Added disk offering strictness to the service offering response

* Remove comments

* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form

* Added diskoffering details to the service offering response

* Added UI changes in deployvm wizard to accept override disk offering id

* Fix delete compute offering

* Fix VM deployment from custom service offering

* Move uselocalstorage column access from service offering to disk offering

* UI: Separated compute and disk releated parameters in add compute offering wizard, also added association to disk offering

* Fixed diskoffering automatic selection on add compute offering wizard

* UI: move compute only toggle button outside the box in add compute offering wizard

* Added volumeId parameter to listDiskOfferings API; the disksizestrictness flag of the current disk offering is honored while listing disk offerings

* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration

* Added disk offering change checks during resize volume operation

* Added new API changeofferingforVolume API and corresponding changes

* Add UI form for changeOfferingForVolume API

* Fix UI conflicts

* Fix service offering usage as disk offering

* Fix unit test failures

* fix user_vm_view

* Addressed review comments

* Fixed service_offering_view

* Fix service offering edit flow

* Fix service offering constructor to address custom offering

* Fix domain_router_view to get proper service offering id

* Removed unused import

* Addressed review comments and fixed update service offering flow with storage tags

* Added marvin test cases for checking disk offering strictness

* review comments addressed

* Remove system_use column from disk offering join

* update volume_view to update system_use column from service offering and not disk offering

* Fix changeOfferingForVolume API for custom disk offering

* Fix global setting implementation

* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view

* Changes for override root disk offering in deployvm wizard in case of custom offering

* Fix a unit test case

* Fixed recent unit test cases with new serviceofferingvo constructor

* Fix unit test in VolumeApiServiceImpl

* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow

* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering

* Fix smoke test failures

* Added tool tip for migrate volume UI form

* Address review comments and fix UI form of deploy VM in case of ISO.

* Fixed resize volume UI form for data disk

* UI changes to disable override root disk size when override root disk offering is enabled

* UI fix in deploy vm wizard

* Fix listdiskoffering after rebasing with main

* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list

* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form

* Fix false response on updateDiskOffering API

* Added search field for changeofferingforvolume UI form

* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Removed DB changes from 4.16 upgrade file

* Resolving merge conflicts with main 4.17

* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.

* UI: Added automigrate checkbox in scale VM form

* Added since attributes to new API params

* Added shrinkOK parameter to changeofferingforvolume API

* Added shrinkOk param to UI in changeOfferingforVolume form

* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form

* Removed old foreign key constraint on IDs of service offering and disk offering

* Allow resize and automigrate of root volume if required in all cases of service offering change

* Allow only resize to higher disk size from UI

* Fixing vue syntax error

* Make UI changes to provide root disk size box when the linked disk offering is of custom

* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms

* Fix resize volume operation to update the VM settings

* Fix migratevolume form to pick selected storage pool id in list diskofferings API
2022-01-27 15:08:42 +05:30
Daniel Augusto Veronezi Salvador d26ce157db
Fix camel case (#5898)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-01-26 19:20:18 -03:00
Suresh Kumar Anaparti 982eef202f
Merge branch '4.16' into main 2022-01-26 12:21:24 +05:30
Nicolas Vazquez 84f5768e64
[VMware][Deploy-as-is] OVF properties not importing when template is uploaded from local (#5861)
* Fix ova upload missing details

* Refactor and cleanup

* Unused import
2022-01-26 11:28:52 +05:30
Suresh Kumar Anaparti 5c02f6d507
Merge branch '4.16' into main 2022-01-06 17:47:37 +05:30
dahn 2774bc156f
use physical size instead of virtual size for migration. (#5750)
* Use Physical size to evaluate if migration is possible

* Improve logging and consider files skipped as failure in complete migration

* skipped can't be negative

* remove useless method

* group multidisk templates for secstor migration

* use enum

* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/DataMigrationUtility.java

Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl d'Silva <pearl.dsilva@shapeblue.com>
2022-01-06 17:18:50 +05:30
Suresh Kumar Anaparti 99313f8eae
Merge branch '4.16' into main 2021-12-20 14:01:41 +05:30
Daniel Augusto Veronezi Salvador 79d924f3ee
Insert correct template size when live migrating VM with volumes (#5758)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-12-16 20:21:38 +05:30
Daniel Augusto Veronezi Salvador b4aabadc4d
Replace string libraries with org.apache.commons.lang3.StringUtils (#5386)
* Replace google lib for lang3 and adjust methods calls

* Replace string libs by lang3

* Prohibit others string libs

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-11-18 13:41:48 +05:30
nicolas 3f79436840
Updating pom.xml version numbers for release 4.17.0.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:55:52 -03:00
nicolas 93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas 44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
Nicolas Vazquez a5372a98dc
Fix storage cleanup corner case preventing VM deletion (#5575)
* Fix storage cleanup corner case

* Improve deletion

* Refactor
2021-10-16 00:09:54 -03:00
Gabriel Beims Bräscher 404e264caf
CloudStack fails to migrate VM with volume when there are data disks attached (#5410)
* Check if should map volume in createStoragePoolMappingsForVolumes

* Invert conditional at internalCanHandle
2021-10-08 11:50:37 +05:30
Harikrishna cd4e7e031a
Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster (#5539)
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Change in constructors

* Naming changes

* Remove commented code

* Refactor code for more readability

* Addressed review comments on code refactor
2021-10-04 20:58:25 -03:00
Pearl Dsilva 93150f465b
api: Fix list templates when no secondary stores present (#5468) 2021-09-20 14:07:47 -03:00
Nicolas Vazquez 3ca3843b02
[Vmware] Fix for ovf templates with prefix (#5448)
* [Vmware] Fix for ovf templates with prefix

* Support multiple hardware versions
2021-09-16 16:16:41 -03:00
Peinthor Rene 66c39c1589
storage: Linstor volume plugin (#4994)
This adds a volume (primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but currently don't work for RAW volume types in CloudStack.

* plugin-storage-volume-linstor: notify libvirt guests about the resize
2021-09-16 10:50:58 +05:30
Daniel Augusto Veronezi Salvador 8ffba83214
Keep volume policies after migrating it to another primary storage (#5067)
* Add commons-lang3 to Utils

* Create an util to provide methods that ReflectionToStringBuilder does not have yet

* Create method to retrieve map of tags from resource

* Enable tests on volume components and remove useless tests

* Refactor VolumeObject and add unit tests

* Extract createPolicy in several methods

* Create method to copy policies between volumes and add unit tests

* Copy policies to the new volume before removing the old volume on volume migration (see the sketch after this entry)

* Extract "destroySourceVolumeAfterMigration" to a method and test it

* Remove javadoc @param with no sensible information

* Rename method name to a generic name

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-09-08 09:13:41 -03:00
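
A minimal sketch of the "copy policies to the new volume" step from the entry above, using hypothetical in-memory types in place of CloudStack's snapshot policy DAO:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PolicyCopySketch {

    static class SnapshotPolicy {
        final long volumeId;
        final String schedule; // e.g. a cron-like expression

        SnapshotPolicy(long volumeId, String schedule) {
            this.volumeId = volumeId;
            this.schedule = schedule;
        }
    }

    // In-memory stand-in for the policy DAO.
    static final Map<Long, List<SnapshotPolicy>> policiesByVolume = new HashMap<>();

    // Re-home every policy of the source volume onto the destination volume
    // before the source volume is removed.
    static void copyPoliciesBetweenVolumes(long srcVolumeId, long destVolumeId) {
        List<SnapshotPolicy> src = policiesByVolume.getOrDefault(srcVolumeId, new ArrayList<>());
        List<SnapshotPolicy> dest = policiesByVolume.computeIfAbsent(destVolumeId, k -> new ArrayList<>());
        for (SnapshotPolicy p : src) {
            dest.add(new SnapshotPolicy(destVolumeId, p.schedule));
        }
    }

    public static void main(String[] args) {
        policiesByVolume.put(1L, new ArrayList<>(List.of(new SnapshotPolicy(1L, "0 2 * * *"))));
        copyPoliciesBetweenVolumes(1L, 2L);
        System.out.println(policiesByVolume.get(2L).size()); // prints: 1
    }
}
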
Nicolas Vazquez 413d10dd81
server: Extend the Annotations framework (#5103)
* Extend addAnnotation and listAnnotations APIs

* Allow users to add, list and remove comments

* Add adminsonly UI and allow admins or owners to remove comments

* New annotations tab

* In progress: new comments section

* Address review comments

* Fix

* Fix annotationfilter and comments section

* Add keyword and delete action

* Fix and rename annotations tab

* Update annotation visibility API and update comments table accordingly

* Allow users seeing all the comments for their owned resources

* Extend comments for volumes and snapshots

* Extend comments to multiple entities

* Add uuid to ssh keypairs

* SSH keypair UI refactor

* Extend comments to the infrastructure entities

* Add missing entities

* Fix upgrade version for ssh keypairs

* Fix typo on DB upgrade schema

* Fix annotations table columns when there is no data

* Extend the list view of items to show if they have comments

* Remove extra test

* Add annotation permissions

* Address review comments

* Extend marvin tests for annotations

* updating ui stuff

* addition to toggle visibility

* Fix pagination on comments section

* Extend to kubernetes clusters

* Fixes after last review

* Change default value for adminsonly column

* Remove the required field for the annotationfilter parameter

* Small fixes on visibility and other fixes

* Cleanup to reduce files changed

* Rollback extra line

* Address review comments

* Fix cleanup error on smoke test

* Fix sending incorrect parameter to checkPermissions method

* Add check domain access for the calling account for domain networks

* Fix only display annotations icon if there are comments the user can see

* Simply change the Save button label to Submit

* Change order of the Tools menu to prevent users getting a 404 error on clicking the text instead of expanding

* Remove comments when removing entities

* Address review comments on marvin tests

* Allow users to list annotations for an entity ID

* Allow users to see all comments for allowed entities

* Fix search filters

* Remove username from search filter

* Add pagination to the annotations tab

* Display username for user comments

* Fix add permissions for domain and resource admins

* Fix for domain admins

* Trivial but important UI fix

* Replace pagination for annotations tab

* Add confirmation for delete comment

* Lint warnings

* Fix reduced list as domain admin

* Fix display remove comment button for non admins

* Improve display remove action button

* Remove unused parameter on groupShow

* Include a clock icon to the all comments filter except for root admin

* Move cleanup SQL to the correct file after rebasing main

Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
2021-09-08 10:14:06 +05:30
Abhishek Kumar 56f4da6dce Merge remote-tracking branch 'apache/4.15' into main 2021-09-02 16:13:33 +05:30
Pearl Dsilva 557dc5e1a0
api: List details of template download state for stores corresponding to a zone (#5379)
* api: List details of template download state for stores corresponding to a zone

* fix test
2021-09-02 10:58:58 +05:30
Rohit Yadav a1a3aff2b5 Merge remote-tracking branch 'origin/4.15' into main 2021-08-31 14:29:30 +05:30
slavkap 961e85eb60
Fix of creating volumes from snapshots without backup to secondary storage (#5349)
* Fix of creating volumes from snapshots without backup

When a few snapshots are created only on primary storage and you try to create
a volume or a template from a snapshot, only the first operation is
successful. This is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it didn't cover the functionality to create a volume from a snapshot
which is kept only on Ceph

* Address review
2021-08-31 12:46:57 +05:30
Daniel Augusto Veronezi Salvador 9c51009134
Remove storage scope validation on KVM live migration (#5321)
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-08-20 14:54:14 -03:00
Daniel Augusto Veronezi Salvador 65a48dcb74
Add SharedMountPoint to KVMs supported storage pool types (#4780)
* Add SharedMountPoint to KVMs supported storage pool types

* Fix live migration to iSCSI and improve logs

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-08-16 12:32:19 -03:00
Pearl Dsilva ea7d3b34d1
Cleanup volume information from db when deleted (#4551)
* Cleanup volume information from db when deleted

* reuse search builder

* revert change

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-08-09 14:21:07 +05:30
Daniel Augusto Veronezi Salvador 1389862c22
engine/storage: Fix regression on create volume from snapshot (#5282)
* Fix regression on create volume from snapshot

* Log hidden exception

* Revert "Log hidden exception"

This reverts commit 70e655687f.

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-08-09 13:37:10 +05:30
Spaceman1984 96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dc.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
Abhishek Kumar 50a16979c5
refactor: migrate vm with storage (#5030)
* refactor: migrate with storage host capability check

Refactors Boolean HypervisorCapabilitiesDao::isStorageMotionSupported to boolean HypervisorCapabilitiesDao::isStorageMotionSupported for simplifying callers.
Refactors log messages.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* simplify

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* changes

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* review comments addressed

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* var rename

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-07-15 12:57:13 +05:30
Rohit Yadav d916e416ec Updating pom.xml version numbers for release 4.15.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-07-02 22:59:07 +05:30
Rohit Yadav 379454caae Updating pom.xml version numbers for release 4.15.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-06-28 15:27:27 +05:30
Rohit Yadav f58b72f6f7 Merge remote-tracking branch 'origin/4.15' 2021-06-27 18:25:46 +05:30
slavkap d82909318f
server: Fix of delete of Ceph's snapshots from secondary storage (#5130)
This PR fixes the deletion of Ceph snapshots from secondary storage; the deletion will be handled by DefaultSnapshotStrategy::deleteSnapshot (#4797)
2021-06-25 12:04:36 +05:30
davidjumani 29109b4332
db: Cleanup obsolete tables (#5002)
* db: Cleanup unused tables

* Removing volume_host_ref references

* Removing template_host_ref references

* fix space issue

* Fix fk constraint

* Removing certificate table

* Revert "Removing certificate table"

This reverts commit fa24e6483f.

* Addressing comments
2021-06-24 16:50:31 -03:00
Suresh Kumar Anaparti 958182481e cloudstack: make code more inclusive
Inclusivity changes for CloudStack

- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.

This PR updates the default git branch to 'main', as part of #4887.

Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-06-08 15:47:20 +05:30
Rohit Yadav cb167072a1 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-05-07 16:37:42 +05:30
Harikrishna 32e3bbdcc5
VMware Datastore Cluster primary storage pool synchronisation (#4871)
Support for a datastore cluster as primary storage already exists. However, changes to the datastore cluster at vCenter, such as the addition or removal of a datastore, are not synchronised with CloudStack directly; this previously required removing the primary storage from CloudStack and adding it again.

Here, synchronisation of the datastore cluster is fixed without the need to remove and re-add it.
1. A new API, syncStoragePool, is introduced which takes the datastore cluster storage pool UUID as its parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server creates a new child storage pool in the database under the datastore cluster. If the new child storage pool has already been added as an individual storage pool, the existing storage pool entry is converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastores (see the sketch after this entry).
2021-05-07 16:30:54 +05:30
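
A sketch of the three synchronisation cases from the entry above, with hypothetical names (the real implementation operates on storage pool rows in the management server database):

import java.util.Set;

public class DatastoreClusterSyncSketch {

    static void syncStoragePool(Set<String> childrenAtVcenter, Set<String> childrenInDb,
                                Set<String> individualPoolsInDb) {
        for (String ds : childrenAtVcenter) {
            if (individualPoolsInDb.contains(ds)) {
                // case 2b: already known as an individual pool; convert the entry
                System.out.println("convert individual pool to child pool: " + ds);
            } else if (!childrenInDb.contains(ds)) {
                // case 2a: brand-new child datastore; create a child storage pool
                System.out.println("create new child storage pool: " + ds);
            }
        }
        for (String ds : childrenInDb) {
            if (!childrenAtVcenter.contains(ds)) {
                // case 3: removed on vCenter; detach and keep as an individual pool
                System.out.println("detach from cluster, keep as individual pool: " + ds);
            }
        }
    }

    public static void main(String[] args) {
        syncStoragePool(Set.of("ds1", "ds2", "ds3"), Set.of("ds1", "ds4"), Set.of("ds2"));
    }
}
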
Rohit Yadav 4742ac15f7 Merge remote-tracking branch 'origin/4.15' 2021-04-29 21:50:40 +05:30
dahn be255e4203
server: protect against stray snapshot-details without snapshot (#4924)
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed this would be a bit complicated.

Co-authored-by: Daan Hoogland <dahn@onecht.net>
2021-04-29 20:40:29 +05:30
Abhishek Kumar 42c83b08f5 Merge remote-tracking branch 'apache/4.15' 2021-04-26 14:33:58 +05:30
Nicolas Vazquez f728287aa2
server: Fix template garbage collection cleanup (#4944) 2021-04-24 18:57:47 +05:30
slavkap b4ee4acaf3
server: Fix volume state on migrate with migrateVirtualMachineWithVolume API call (#4934)
When invoking the migrateVirtualMachineWithVolume API call and a strategy isn't found, the volumes are left in the Migrating state.

This PR puts the volumes back into the Ready state (see the sketch after this entry).
2021-04-22 14:30:18 +05:30
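
A minimal sketch of the fix, with hypothetical types: if no migration strategy is found, roll the volumes back to Ready before failing instead of leaving them in Migrating.

import java.util.List;

public class MigrateRollbackSketch {

    enum VolumeState { READY, MIGRATING }

    static class Volume {
        VolumeState state = VolumeState.READY;
    }

    static void migrateWithVolumes(List<Volume> volumes, boolean strategyFound) {
        volumes.forEach(v -> v.state = VolumeState.MIGRATING);
        if (!strategyFound) {
            // the fix: undo the state transition before reporting the failure
            volumes.forEach(v -> v.state = VolumeState.READY);
            throw new IllegalStateException("No strategy found to migrate volumes");
        }
        // ... perform the actual migration ...
    }
}
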
Abhishek Kumar cce736709e Merge remote-tracking branch 'apache/4.15' 2021-04-12 11:43:57 +05:30
Pearl Dsilva a64ad9d9b7
server: Prevent vm snapshots being indefinitely stuck in Expunging state on deletion failure (#4898)
Fixes #4201

This PR addresses the issue of a VM snapshot being indefinitely stuck in the Expunging state in case deletion fails.

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-04-12 08:09:37 +05:30
Rohit Yadav 7270ca7e25 Merge remote-tracking branch 'origin/4.14' into 4.15 2021-04-06 12:51:26 +05:30
Gabriel Beims Bräscher cb91a769d3
Fix npe when migrating vm with volume (#4698) (#4775)
Cherry-pick commit 59fba4916b and fix conflict.

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
2021-04-06 11:54:29 +05:30
Rohit Yadav c1a02e1697 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-03-29 16:34:22 +05:30
Wei Zhou b8884efa7f
server: create DB entry for storage pool capacity when create storage pool (#4805)
* server: create DB entry for storage pool capacity when create storage pool

* Revert "server: create DB entry for storage pool capacity when create storage pool"

This reverts commit e790167bfe.

* server: create DB entry for storage pool capacity when create zone-wide storage pools
2021-03-29 16:21:24 +05:30
Daniel Augusto Veronezi Salvador 59fba4916b
Fix npe when migrating vm with volume (#4698)
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-03-08 17:56:56 +01:00
sureshanaparti 45e77dd6f0
server: Clean up the duplicate volume when the destination managed volume creation failed on migrate volume operation (#4730)
Problem: duplicated volumes are left in the Allocated state after a failed migration.

Fix: clean up the duplicate volume when creation of the destination managed volume fails during a migrate volume operation
2021-03-03 13:30:08 +05:30
Rohit Yadav fa067e02a7 Updating pom.xml version numbers for release 4.14.2.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-03-02 12:32:27 +05:30
Rohit Yadav 77290df0d5 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-26 12:09:11 +05:30
Abhishek Kumar 88337bdea4
server: fix finding pools for volume migration (#4693)
While finding pools for volume migration, list the following compatible storages:
- all zone-wide storages of the same hypervisor.
- when the volume is attached to a VM, all storages from the same cluster as that of the VM.
- for a detached volume, all storages that belong to clusters of the same hypervisor (see the sketch after this entry).

Fixes #4692
Fixes #4400
2021-02-25 22:13:50 +05:30
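
The listing rules above as a small Java sketch with hypothetical types (the real logic lives in the management server's storage pool lookup):

import java.util.List;
import java.util.stream.Collectors;

public class PoolsForMigrationSketch {

    static class Pool {
        final boolean zoneWide;
        final String hypervisor;
        final long clusterId; // ignored for zone-wide pools

        Pool(boolean zoneWide, String hypervisor, long clusterId) {
            this.zoneWide = zoneWide;
            this.hypervisor = hypervisor;
            this.clusterId = clusterId;
        }
    }

    // vmClusterId is null when the volume is detached from any VM.
    static List<Pool> poolsForMigration(List<Pool> all, String volumeHypervisor, Long vmClusterId) {
        return all.stream()
                .filter(p -> p.hypervisor.equals(volumeHypervisor))  // same hypervisor always required
                .filter(p -> p.zoneWide                              // zone-wide pools always qualify
                        || vmClusterId == null                       // detached: any cluster
                        || p.clusterId == vmClusterId)               // attached: the VM's cluster only
                .collect(Collectors.toList());
    }
}
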
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore

- Check the router filesystem is writable or not, before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
	=> The router health checks keeps the config info at "/var/cache/cloud" and updates the monitor results at "/root" for health checks, both are different partitions. So, test at both the locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires (see the sketch after this entry).

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
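
A minimal sketch of the per-pool gateway client cache with token renewal described above, using hypothetical names; the validity windows (8 hours from creation, 10 minutes of inactivity) are taken from the entry:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GatewayClientPoolSketch {

    static class GatewayClient {
        final Instant issuedAt = Instant.now();
        Instant lastUsed = Instant.now();

        boolean sessionValid() {
            Instant now = Instant.now();
            return Duration.between(issuedAt, now).toHours() < 8
                    && Duration.between(lastUsed, now).toMinutes() < 10;
        }
    }

    private final Map<Long, GatewayClient> clients = new ConcurrentHashMap<>();

    // One client per storage pool; replace it once its session token is no longer valid.
    GatewayClient getClient(long storagePoolId) {
        return clients.compute(storagePoolId, (id, existing) -> {
            GatewayClient c = (existing != null && existing.sessionValid()) ? existing : new GatewayClient();
            c.lastUsed = Instant.now();
            return c;
        });
    }
}
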
Rohit Yadav 66f0beda5f Updating pom.xml version numbers for release 4.14.1.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-02-08 16:24:09 +05:30
Rohit Yadav b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland 280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland 81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland 01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Harikrishna b1ddd7c2e6
vmware: Fix for mapping guest OS type read from OVF to existing guest OS in C… (#4553)
* Fix for mapping guest OS type read from OVF to existing guest OS in CloudStack database  while registering VMware template

* Added unit tests to String Utils methods and updated the code

* Updated the java doc section

* Updated OS description logic to use an equals-ignore-case match with the guest OS display name
2020-12-23 19:37:21 +05:30
Nicolas Vazquez 4617be4583
vmware: Fix template upload from local (#4555)
Update the guest OS from the OVF file after upload is completed
This PR fixes the template upload from local on VMware

Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-23 15:13:39 +05:30
Alexandru Bagu fdb2ee3165
storage: Fix hypervisor type cast to string (#4516)
This PR addresses an error that appears when you try to add a new host. I don't even understand why there was a cast to String in the first place. I will assume some classes send HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed to use the same type everywhere? With this fix, adding a new XenServer host works fine.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-14 11:56:44 +05:30
Pearl Dsilva e4a504b084
Make global setting non-dynamic (#4505)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-12-01 14:00:35 +05:30
Spaceman1984 dfa09fc856
server: Setting snapshot removed on timeout (#4425)
* Setting snapshot state to error on timeout

* Setting removed field so snapshot record is ignored by garbage collection

* Removed explicitly setting error status, renamed method from markFailed to markRemoved

* Renamed method, moved code a few lines down

* Moved remove logic

* Removed unused service

* Moved removed logic - last time, promise
2020-11-21 02:20:16 +05:30
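
A minimal sketch of the approach, with hypothetical types: setting the removed timestamp takes the snapshot record out of the garbage collector's working set, so it is never retried.

import java.util.Date;

public class SnapshotTimeoutSketch {

    static class SnapshotRecord {
        Date removed; // null means the record is live and visible to garbage collection
    }

    static void markRemoved(SnapshotRecord snapshot) {
        snapshot.removed = new Date();
    }

    static boolean eligibleForGc(SnapshotRecord snapshot) {
        return snapshot.removed == null; // removed records are skipped
    }
}
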
Rakesh 735b6de296
Cleanup download urls when SSVM destroyed (#4078)
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
2020-11-18 14:01:31 +01:00
Spaceman1984 acee15a530
Moved dedicated hosts to the end of the resultset when selecting an e… (#4428) 2020-11-18 12:07:14 +00:00
Pearl Dsilva 1dbb76f64b
Fix: Data migration (#4475)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-11-18 09:45:53 +01:00
nvazquez ee5b8763a6 Fix removal of a VM and its volumes for deploy-as-is if they have previously failed - restore CPU flags in nested virtualization test 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 5fdabc1cb0 Added storage policy details to disk while creating disk and restricted migration of volumes to storage pools which are not storage policy compliant 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 46b5322d9b Adding vSphere storage policy to disk on start command and attach volume command 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 07abcf5705 During the migrate volume command, when an operation-timed-out exception or any other exception occurs, it is not handled properly and the volume_store_ref entry is not cleaned.
Fixed it to clean the volume_store_ref entry upon any exception
2020-10-19 15:05:57 +05:30
nvazquez 94bebe8792 Revert back deploy as is column on templates but keep it as default for new templates 2020-10-19 15:05:57 +05:30
nvazquez 9b51a706db Set deploy-as-is to default on VMware 2020-10-19 15:05:57 +05:30
nvazquez b0d3168e0b Fail template registration when guest OS not found 2020-10-19 15:05:57 +05:30
nvazquez 32d85b0fa2 Display storage on logging when not deploy-as-is and guest OS small refactor 2020-10-19 15:05:57 +05:30
nvazquez 41354227e2 Handle guest OS read from deploy-as-is OVF descriptor 2020-10-19 15:05:57 +05:30
nvazquez edfbed34ad Use network adapter from OVF on deploy-as-is 2020-10-19 15:05:57 +05:30
Harikrishna Patnala 33ae2afc89 Removed few duplicate imports during rebase with master 2020-10-19 15:05:57 +05:30
Harikrishna Patnala 44dc0c6072 Fixed RAT failure on new class DeployAsIsHelper.java
Also removed some unused imports during rebase
2020-10-19 15:05:57 +05:30
nvazquez f73830acbb Refactor deploy as is constants 2020-10-19 15:05:57 +05:30
nvazquez bb4ce2118d Add new template and vm deploy as is details table and refactor 2020-10-19 15:05:57 +05:30
nvazquez d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala 19745ea049 Fix serialization of the enable primary datastore maintenance command 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 201ebe8868 Fix simulator failures 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 61dd85876b Fix migrate VM and volume APIs in case of datastore cluster 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 873f9dd9ac Datastore Cluster operations: putting into maintenance mode, updating the storage pool with tags, cancelling maintenance mode and deleting the storage pool 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 75fb1d91ee Fix adding Datastore clusters and listing 2020-10-19 14:57:15 +05:30
Harikrishna Patnala b4a23ea5f6 Allocation logic to skip the datastore cluster and consider only the storage pools inside the datastore cluster 2020-10-19 14:57:15 +05:30
Harikrishna Patnala 41b3fc19d6 Add datastore cluster and its child entities (the datastores in the cluster) into CloudStack
Setting scope is still pending.
2020-10-19 14:57:15 +05:30
Harikrishna Patnala 48786b2d31 DataStore Clusters addition as a storage pool 2020-10-19 14:57:15 +05:30
Harikrishna Patnala 6df819028e UI changes and accept any type of datastore as presetup in vmware 2020-10-19 14:57:15 +05:30
Harikrishna Patnala fb0a96e7fb Check if the datastore is compliant with the storage policy provided in the disk offering.
Added corresponding manager objects from the PBM SDK to do the job.
Made DAO layer changes to read the storage policy in the disk offering
2020-10-19 14:57:15 +05:30
Pearl Dsilva 0d487fc8c9
support for data migration of incremental snaps on xen (#4395)
* support for handling incremental snaps (on DB entries) on xen

* Addressed comments

* Update NfsSecondaryStorageResource.java

adjusted space in comment/ log

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-10-18 02:15:10 +05:30
Pearl Dsilva b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
- balanced migration of data objects from a source Image store to destination Image store(s)
- complete migration of data
- setting an image store to read-only
- viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
2020-09-17 10:12:10 +05:30
Spaceman1984 d57aa83517
server: Added nfs minor version support (#4180)
This PR adds minor version support when mounting NFS on the SSVM, as requested in #2861.

The global setting "secstorage.nfs.version" has been changed to use the String data type, which allows any minor version to be specified (see the sketch after this entry).
2020-08-19 14:53:38 +05:30
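
A sketch of what the String-typed setting allows, with hypothetical helper names: the configured value passes straight through to the NFS mount options, so minor versions such as "4.1" work unchanged.

public class NfsVersionSketch {

    // nfsVersion comes from the "secstorage.nfs.version" setting,
    // e.g. "3", "4" or "4.1"; null/empty means use the client default.
    static String mountOptions(String nfsVersion) {
        return (nfsVersion == null || nfsVersion.isEmpty()) ? "" : "vers=" + nfsVersion;
    }

    public static void main(String[] args) {
        System.out.println(mountOptions("4.1")); // prints: vers=4.1
    }
}
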
nvazquez 7e3b61b723 Merge branch '4.14' 2020-07-18 14:17:43 -03:00
nvazquez 5c6e79b1eb Merge branch '4.13' into 4.14 2020-07-18 14:15:46 -03:00
Nicolas Vazquez f843c537f0
Fix snapshots garbage collection (#4188)
* Cleanup orphan entries from snapshot store ref for primary storage

* Add debug message
2020-07-18 14:12:53 -03:00
Nicolas Vazquez 8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
2020-06-26 08:31:43 -03:00
Rohit Yadav de3ccd2c29 Merge remote-tracking branch 'origin/4.14' 2020-06-15 09:56:55 +05:30
Rohit Yadav e94a54f3b4 Merge remote-tracking branch 'origin/4.13' into 4.14 2020-06-15 09:56:06 +05:30
Spaceman1984 6a683dcf77
storage: Fixed null pointer (#4130)
Fixes #4090

When trying to migrate a VM across 2 clusters, if a snapshot has been deleted and garbage collection has run to update the removed field, it is not possible to migrate the instance due to a null pointer.
2020-06-15 09:54:22 +05:30
andrijapanicsb 5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb 05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb 6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
andrijapanicsb 398e685e01 Updating pom.xml version numbers for release 4.13.2.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-29 12:29:12 +01:00
Daan Hoogland 689e529d7b Merge release branch 4.13 to master
* 4.13:
  Fixed guest vlan range going missing when using zone wizard (#4042)
  Volume migration (#4043)
2020-04-23 20:19:30 +02:00
andrijapanicsb b2ffa3efa5 Updating pom.xml version numbers for release 4.13.1.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-04-23 19:17:09 +01:00
dahn c1570b9c91
Volume migration (#4043)
* Update AncientDataMotionStrategy.java

Fix: when secondary storage usage is > 90%, volume migration across primary storage will cause the migration to fail and lose the volume

* Update AncientDataMotionStrategy.java

A volume is migrated across primary storage. If no secondary storage is available (or used capacity is > 90%), the migration is canceled.
Before this modification, if secondary storage could not be found, copyVolumeBetweenPools returned null.

copyAsync considers answer = null to be a sign of successful task execution, so it deletes the volume on the old primary storage. This is the root cause of the data loss, because the volume was never actually migrated (see the sketch after this entry).

* code in comment removed

Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
2020-04-23 19:56:27 +02:00
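
A minimal sketch of the root cause and the guard, with hypothetical types: a null answer from the copy step must be treated as a failure, otherwise the source volume is deleted even though no data was ever copied.

public class CopyAsyncGuardSketch {

    static class Answer {
        final boolean success;
        Answer(boolean success) { this.success = success; }
    }

    static void handleCopyResult(Answer answer, Runnable deleteSourceVolume) {
        // Before the fix, answer == null fell through to the success path.
        if (answer == null || !answer.success) {
            System.out.println("copy failed or never ran; keeping the source volume");
            return;
        }
        deleteSourceVolume.run(); // only safe once the copy really succeeded
    }
}
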
Daan Hoogland b984184b7a Merge release branch 4.13 to master
* 4.13:
  Snapshot deletion issues (#3969)
  server: Cannot list affinity group if there are hosts dedicated… (#4025)
  server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)
2020-04-11 16:45:00 +02:00
dahn f18fe5e1da
Snapshot deletion issues (#3969)
* Fixes snapshot deletion

* Remove legacy '@Component', it is not necessary in this bean/class.

* Fix log message missing %d and remove snapshot on DB

* Remove "dummy" boolean return statement

* Manage snapshot deletion for KVM + NFS (primary storage)

* checkstyle trailing spaces

* rename options strings to *_OPTION

* Fix typo on deleteSnapshotOnSecondaryStorage and enhance log message

* Move the snapshotDao.remove(snapshotId); (#4006)

* Fix deleteSnapshot workflow to handle both snapshots created on primary storage and snapshots backed up to secondary storage

* Fix extra space

* refactor out separate handling methods for secondary and primary (reducing returns)

* return false on unexpected error or log when expected

* != instead of ==

* secondary instead of backup storage

* init to null

* Handle snapshot deletion on primary storage. When primary store ref not found for snapshot do not fail the operation.

* Fix debug levels on log messages

Co-authored-by: GabrielBrascher <gabriel@apache.org>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
2020-04-11 16:40:27 +02:00
Wei Zhou 6bf92fb136
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002) 2020-04-06 22:01:40 +02:00