Commit Graph

457 Commits

Author SHA1 Message Date
Daan Hoogland 373f017002 Merge branch '4.19' 2024-06-18 19:58:43 +02:00
Harikrishna bb0c1f93af
Add volume encryption checks during the disk offering change (#9209) 2024-06-17 10:36:47 +02:00
Daan Hoogland e61f3bae4d Merge branch '4.19' 2024-04-29 11:37:40 +02:00
Vishesh 80a8b80a9d
Update volume's passphrase to null if diskOffering doesn't support encryption (#8904) 2024-04-29 12:18:09 +05:30
Abhishek Kumar 02305fbc5f Merge remote-tracking branch 'apache/4.19' 2024-04-04 17:36:05 +05:30
Wei Zhou 939d0b9011 engine-storage: control download redirection
Add a global setting to control whether redirection is allowed while
downloading templates and volumes

core: some changes on SimpleHttpMultiFileDownloader
similar to HttpTemplateDownloader

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit b1642bc3bf)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2024-04-04 11:19:20 +05:30
Abhishek Kumar f36273888b build: fix logger post forward-merge
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-03-01 18:14:54 +05:30
Abhishek Kumar b29ec2bf12 Merge remote-tracking branch 'apache/4.19' 2024-03-01 17:40:58 +05:30
Harikrishna c462be1412
New API "checkVolume" to check and repair any leaks or issues reported by qemu-img check (#8577)
* Introduced a new API checkVolumeAndRepair that allows users or admins to check volumes and repair any leaks observed (a minimal sketch of the underlying check follows this list).
Currently this is supported only for KVM

* some fixes

* Added unit tests

* addressed review comments

* add repair volume while granting access

* Changed repair parameter to accept both leaks/all

* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations

* Added volume check and repair changes only during VM start and volume attach operations

* Refactored the names to look similar across the code

* Some code fixes

* remove unused code

* Renamed repair values

* Fixed unit tests

* changed version

* Address review comments

* Code refactored

* used volume name in logs

* Changed the API to Async and the setting scope to storage pool

* Fixed exit value handling with check volume command

* Fixed storage scope to the setting

* Fix volume format issues

* Refactored the log messages

* Fix formatting
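
At the hypervisor level the check described above boils down to `qemu-img check`, whose `-r leaks|all` option matches the repair parameter values listed. A minimal, hypothetical sketch of such an invocation (class and method names are invented for illustration; CloudStack's actual agent-side wrapper differs):

```
import java.io.IOException;

public class QemuImgCheckSketch {
    // repairMode may be null (check only), "leaks", or "all",
    // mirroring the repair parameter described above.
    public static int checkVolume(String volumePath, String repairMode)
            throws IOException, InterruptedException {
        ProcessBuilder pb = (repairMode == null)
                ? new ProcessBuilder("qemu-img", "check", volumePath)
                : new ProcessBuilder("qemu-img", "check", "-r", repairMode, volumePath);
        pb.inheritIO();
        Process process = pb.start();
        // qemu-img check exit codes: 0 = clean, 1 = check failed,
        // 2 = corruption found, 3 = leaked clusters but no corruption.
        // Callers must not treat every non-zero value as fatal, which is
        // what the "Fixed exit value handling" bullet above refers to.
        return process.waitFor();
    }
}
```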
2024-02-29 14:41:49 +05:30
Daan Hoogland 3baa45bc2a forward Merge branch '4.19' into main 2024-02-26 16:00:53 +01:00
Daan Hoogland f4987bf8ee Merge release branch 4.18 to 4.19
* 4.18:
  Storage plugin support to check if volume on datastore requires access for migration (#8655)
  CKS: fix /opt/bin/deploy-cloudstack-secret in CKS control nodes (#8697)
2024-02-26 15:53:11 +01:00
Suresh Kumar Anaparti f731fe882c
Storage plugin support to check if volume on datastore requires access for migration (#8655)
* Check if volume on datastore requires access for migration, and grant/revoke volume access if required

* Updated default implementation for requiresAccessForMigration method in PrimaryDataStoreDriver
2024-02-26 20:16:31 +05:30
Abhishek Kumar 592038a304
api,server,ui: granular resource limit management (#8362)
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management

Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities with a tagged resource. Currently, tagged resource limits can be used for the following resource types:

Host limits
- user_vm
- cpu
- memory

Storage limits
- volume
- primary_storage

The following global settings can be used to specify the tags for which limits need to be enforced:

Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`

Options for specifying tagged resource limits and viewing tagged resource usage have been made available in the UI.

Enhances the use of templatetag for VM deployment and template creation

Adds option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.

Adds option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
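
For example, to enforce tagged limits for hosts tagged "gpu" and storage tagged "ssd", the two settings above could be set as follows (tag values are purely illustrative):

```
resource.limit.host.tags=gpu
resource.limit.storage.tags=ssd
```

With these set, per-account or per-domain limits can then be defined against those tags for the resource types listed above (e.g. cpu or primary_storage).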
2024-02-19 14:17:34 +05:30
João Jandre 49cecaed06
Normalize loggers and upgrade log4j 1.2 to log4j 2.19 (#7131)
* Normalize logs

All classes that could inherit their loggers from their parents had their own loggers deleted;
Most loggers didn't have to be static, so most were normalized so that they aren't;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
New class DbUpgradeAbstractImpl created so that all upgraders extend it and inherit its logger (see the sketch below)
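
Sketched with the standard Log4j 2 API (class names here are illustrative, not taken from the codebase), the resulting convention looks like:

```
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class ParentComponent {
    // Non-static, protected, named 'logger': subclasses inherit it and,
    // because getLogger(getClass()) resolves at runtime, they log under
    // their own class name without declaring a logger of their own.
    protected Logger logger = LogManager.getLogger(getClass());
}

class ChildComponent extends ParentComponent {
    void doWork() {
        logger.info("logged under ChildComponent");
    }
}
```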

* Upgrade log4j

* fix errors caused by the merge

* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files

* fix sonarcloud bug

* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit

* Readd snmpTrapAppender, remove TestAppender

* regenerate changes (x2)

* refactor last custom appender

* fix systemvm configuration xml

* regenerate changes (x7)

* Fix utils pom

* fix some tests

* regenerate changes

* Fix jar being printed on exception

* fix logging in system VMs, fix commands not having log4j2 classpath.

* regenerate changes

* Fix some unwanted renames

* fix end of file

* regenerate changes (x2)

* fix merge error

* regenerate changes

* fix tests

* regenerate changes (x7)

* readd reload4j to tungsten as juniper depends on it

* regenerate changes (x5)

* re-add reload4j dependency to network-contrail, as juniper depends on it

* regenerate changes (x3)

* fix typo

* regenerate changes (x2)

* Fix end of files

* regenerate changes

* add log4j2 to cloud-utils-SHADED.jar

* regenerate changes (x8)

* regenerate changes (x12)

* fix some tests

* regenerate changes (x2)

* fix test

* regenerate changes (x2)
2024-02-08 09:55:41 -03:00
Suresh Kumar Anaparti 8ea9fc911d
StoragePoolType as class (#8544)
* StoragePoolType as a class

* Fix agent side StoragePoolType enum to class

* Handle StoragePoolType for StoragePoolJoinVO

* Since StoragePoolType is a class, it cannot be converted by the @Enumerated annotation.
Implemented a converter class and logic to utilize the @Convert annotation (a sketch follows at the end of this entry).

* Fix UserVMJoinVO for StoragePoolType

* fixed missing imports

* Fixed equals for the enum.

* removed unneeded try/catch for prepareAttribute

* Added license to the file.

* Implemented "supportsPhysicalDiskCopy" for storage adaptor.

Co-authored-by: mprokopchuk <mprokopchuk@apple.com>

* Add javadoc to StoragePoolType class

* Add unit test for StoragePoolType comparisons

* StoragePoolType "==" and ".equals()" fix.

* Fix StoragePoolType for FiberChannelAdapter

* Fix for abstract storage adaptor set up issue

* review comments

* Pass StoragePoolType object for poolType dao attribute

---------

Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@apple.com>
Co-authored-by: mprokopchuk <mprokopchuk@gmail.com>
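
A minimal sketch of the converter approach described above, assuming a name-based lookup on the class (the real StoragePoolType in CloudStack may expose different accessors):

```
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

// Persists the class-based StoragePoolType by its name, since the
// @Enumerated annotation no longer applies to a non-enum type.
@Converter
public class StoragePoolTypeConverter implements AttributeConverter<StoragePoolType, String> {
    @Override
    public String convertToDatabaseColumn(StoragePoolType attribute) {
        return attribute == null ? null : attribute.name();
    }

    @Override
    public StoragePoolType convertToEntityAttribute(String dbData) {
        return dbData == null ? null : StoragePoolType.valueOf(dbData);
    }
}
```

An entity field is then annotated with `@Convert(converter = StoragePoolTypeConverter.class)` in place of `@Enumerated`.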
2024-02-05 13:27:15 +05:30
Abhishek Kumar 7dffbc6e47 Updating pom.xml version numbers for release 4.20.0.0-SNAPSHOT
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-02-02 18:16:37 +05:30
Abhishek Kumar a7b97ff3b0 Updating pom.xml version numbers for release 4.19.1.0-SNAPSHOT
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-02-02 18:06:04 +05:30
Abhishek Kumar 2746225b99 Updating pom.xml version numbers for release 4.19.0.0
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-01-29 10:21:52 +05:30
Vishesh fedcf66de0
Externalise a few timeouts & fix timeout for hostSupportsUefi in libvirt ready command wrapper (#8547)
This PR fixes a bug introduced in #8502. The timeout for script execution was set to 60 ms instead of 60 s, which resulted in hosts not getting UEFI enabled. This is a blocker for the 4.19 release.

We do this by introducing a new agent parameter `agent.script.timeout` (default - 60 seconds) to use as a timeout for the script checking host's UEFI status.

We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default - 60 seconds).

For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusing the user, since the required timeout can vary depending on the provider in use and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting divided by `5`, making the default value 6 minutes (1800/5 = 360 s) for ModifyStoragePoolCommand.

Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
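
For reference, the agent-side knob might look like this in agent.properties (the value shown is the default described above; `ready.command.wait` is a global setting configured on the management server, not here):

```
# Timeout, in seconds, for scripts executed by the agent,
# e.g. the UEFI capability probe run for the ReadyCommand
agent.script.timeout=60
```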
2024-01-27 23:36:13 +05:30
kishankavala 80bbb29abf
CleanUp Async Jobs after mgmt server maintenance (#8394)
This PR fixes resources stuck in a transitional state during async job cleanup.

Problem:
During maintenance of the management server, other servers in the cluster or the same server after a restart initiate async job cleanup. However, this process leaves resources in a transitional state. The only recovery option currently available is to make direct database changes.

Solution:
This PR introduces a resolution by transitioning Volume, Virtual Machine, and Network resources out of their transitional states. This adjustment enables failed operations to be reattempted without the need for manual database modifications.
2024-01-19 13:26:25 +05:30
Vishesh c3b77cb7b8
Fix host stuck in connecting state (#8502)
There are a lot of test failures caused by test_vm_life_cycle.py in multiple PRs because no host is available for migration of VMs.
#8438 (comment)
#8433 (comment)
#7344 (comment)

While debugging I noticed that the hosts get stuck in Connecting state because the MS is waiting for a response to the ReadyCommand from the agent. Since we take a lock on connection and disconnection, restarting the agent doesn't work. To fix this, we have to restart the MS or wait for ~1 hour (the default timeout).

On the agent side, it gets stuck waiting for a response from the Script execution.

To reproduce, run smoke/test_vm_life_cycle.py (the TestSecuredVmMigration test class, to be specific). Once the tests are complete, you will notice that some hosts are stuck in Connecting state, and restarting the agent fails due to the named lock. Locks in the DB can be checked using the query below.

SELECT *
FROM performance_schema.metadata_locks
INNER JOIN performance_schema.threads ON THREAD_ID = OWNER_THREAD_ID
WHERE PROCESSLIST_ID <> CONNECTION_ID() \G;

This PR adds a wait for the ready command and a timeout to the Script execution to ensure that the thread doesn't get stuck and the named lock in the database is released.
2024-01-15 13:56:34 +05:30
Rene Glover 1031c31e6a
FiberChannel Multipath for KVM + Pure Flash Array and HPE-Primera Support (#7889)
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over fiber channel connections; it requires Multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector that communicates with the primary storage provider, using a ProviderAdapter interface, allowing the code interacting with the primary storage provider APIs to be simpler and to have no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera line of storage solutions and the Pure Flash Array line of storage solutions.
2023-12-09 11:31:33 +05:30
Abhishek Kumar c599011ef5 Merge remote-tracking branch 'apache/4.18' 2023-12-08 18:06:15 +05:30
Harikrishna 7eb36367c9
Add lock mechanism considering template id, pool id, host id in PowerFlex Storage (#8233)
Observed a failure to start a new virtual machine with PowerFlex storage. It was traced to concurrent VM starts using the same template and the same host to copy; the second mapping attempt failed.

While creating the volume clone from the seeded template in primary storage, adding a lock on a string containing the IDs of the template, storage pool, and destination host avoids concurrent mapping attempts with the same host.
2023-12-08 13:21:16 +05:30
Abhishek Kumar 543c54c718
api,server,ui: snapshot copy, multi-zone replica (#7873)
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.

Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The copySnapshot response returns zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.

In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot is taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.

As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.

`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.

Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.

`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
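
Hedged CloudMonkey-style examples of the new calls; `zoneids` is confirmed above for createSnapshot, while the copySnapshot parameter names are assumed to mirror copyTemplate and should be checked against the API reference:

```
copy snapshot id=<snapshot-uuid> destzoneids=<zone-1-uuid>,<zone-2-uuid>
create snapshot volumeid=<volume-uuid> zoneids=<zone-1-uuid>,<zone-2-uuid>
```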

----
New API added
`copySnapshot`

Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form

Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
2023-10-23 09:01:58 +02:00
Wei Zhou 246bb24b0f Updating pom.xml version numbers for release 4.18.2.0-SNAPSHOT
Signed-off-by: Wei Zhou <weizhou@apache.org>
2023-09-12 17:26:53 +02:00
Wei Zhou 4bdff06acd Updating pom.xml version numbers for release 4.18.1.0
Signed-off-by: Wei Zhou <weizhou@apache.org>
2023-09-07 08:50:50 +02:00
John Bampton 6f4503488b
pre-commit: apply `end-of-file-fixer` to all files (#7551) 2023-08-02 13:47:21 +02:00
Wei Zhou 09a4a252d7 Merge remote-tracking branch 'apache/4.18' into HEAD 2023-06-21 15:08:56 +02:00
Harikrishna 40cc10a73d
Allow volume migrations in ScaleIO within and across ScaleIO storage clusters (#7408)
* Live storage migration of volume in ScaleIO within the same ScaleIO storage cluster

* Added migrate command

* Recent changes of migration across clusters

* Fixed uuid

* recent changes

* Pivot changes

* working blockcopy api in libvirt

* Checking block copy status

* Formatting code

* Fixed failures

* code refactoring and some changes

* Removed unused methods

* removed unused imports

* Unit tests to check if volume belongs to same or different storage scaleio cluster

* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver

* Fixed offline volume migration case and allowed encrypted volume migration

* Added more integration tests

* Support for migration of encrypted volumes across different scaleio clusters

* Fix UI notifications for migrate volume

* Data volume offline migration: save encryption details to destination volume entry

* Offline storage migration for scaleio encrypted volumes

* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API

* Removed unused unittests

* Removed duplicate keys in migrate volume vue file

* Fix Unit tests

* Add volume secrets if they do not exist during volume migrations; secrets are getting cleared on package upgrades.

* Fix secret UUID for encrypted volume migration

* Added a null check for secret before removing

* Added more unit tests

* Fixed passphrase check

* Add image options to the encrypted volume conversion
2023-06-21 11:57:05 +05:30
Rohit Yadav 8a42ab9ce4 Merge remote-tracking branch 'origin/4.18' 2023-04-14 21:49:12 +05:30
Harikrishna b774ee5d11
vmware: Datastore cluster synchronization should check if the child datastores are in UP state or not (#7385)
This fix ensures that when a datastore cluster in VMware is added as a primary storage pool in CloudStack, all the child datastores (which already exist in CS) are in the Up state.

For example:

1. Datastore Cluster DS has two child datastores A and B in vCenter. (B is already added as a storage pool in CloudStack)
2. Now try to add datastore cluster DS into CloudStack as a primary storage pool
3. CloudStack tries to add child datastores A and B in CloudStack; since B is already there in CloudStack, it will reuse the existing storage pool entry and keep it under the parent storage pool DS.

During step 3 we now check whether B is in the Up state.
2023-04-11 22:23:12 +05:30
Daan Hoogland fb4f6a334d Updating pom.xml version numbers for release 4.19.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:46:01 +01:00
Daan Hoogland 05cda2729f Updating pom.xml version numbers for release 4.18.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:38:14 +01:00
Daan Hoogland 0574087284 Updating pom.xml version numbers for release 4.18.0.0
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-11 09:35:41 +01:00
João Jandre 61a722548f
Create API to reassign volume (#6938) 2023-01-27 11:10:56 +01:00
João Jandre 14937e1adb
Fixed NPE on volume creation from snapshot (#6839)
Co-authored-by: João Jandre <joao@scclouds.com.br>
2022-10-26 08:44:01 +02:00
Marcus Sorensen 697e12f8f7
kvm: volume encryption feature (#6522)
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.

In some cases cloud customers may still prefer to maintain their own guest-level volume encryption, if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience in terms of running volume encryption for guests without the user having to manage keys, deal with key servers and guest booting being dependent on network connectivity to them (i.e. Tang), etc, especially in cases where users are attaching/detaching data disks and moving them between VMs occasionally.

The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.

This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.

NOTE: While not required, operations can be significantly sped up by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.

### Management Server

##### API

* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM.  This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.
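
Illustrative CloudMonkey invocations using the new flags (other offering parameters elided; values are examples only):

```
create diskoffering name=secure-20g displaytext=secure-20g disksize=20 encrypt=true
list diskofferings encrypt=true
```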

##### Volume functions

A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.

Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.

Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume

Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).

##### Primary Storage Support

For storage developers, adding encryption support involves:

1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during allocation of storage to match storage types that support encryption to storage that supports it.

2. Implementing encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that are requesting encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume. Or (as is the case with the KVM storage types), as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases, if encryption is requested.

##### Scheduling

For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.

The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption.  This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.

VM scheduling has also been modified. When a VM start is requested, if any volume that requires encryption is attached, it will filter out hosts that don't support encryption.

##### DB Changes

A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database.  The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.

#### KVM Agent

For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process, and decrypted before the block device is visible to the guest.  This may not be necessary in order to implement encryption for /your/ storage pool type, maybe you have a kernel driver that decrypts before the block device on the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and provides the lowest surface area for decrypted guest data.

For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.
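
For the qcow2 path, the kind of `qemu-img` invocation involved looks roughly like the following; the secret id, key file, size, and paths are placeholders, and the exact arguments CloudStack builds via `QemuImg` differ:

```
qemu-img create -f qcow2 \
  --object secret,id=sec0,file=/path/to/passphrase.key \
  -o encrypt.format=luks,encrypt.key-secret=sec0 \
  /path/to/volume.qcow2 20G
```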

Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:

1. In cases where the volume experiences some libvirt interaction it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.

2. In cases where `qemu-img` needs to use this passphrase for volume operations, it is written to a `KeyFile` on the cloudstack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable` and when it is closed, it is deleted. This allows us to try-with-resources any volume operations and get the KeyFile removed regardless.

In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`.  These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.

It should be noted that there are also a few different Enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the supported encryption format strings for the `cryptsetup` utility include `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.

Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice Libvirt and Qemu only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume: as they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and then available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default, and old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.

Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption use the XTS-AES 256 cipher, which is the leading industry standard and widely used cipher today (e.g. BitLocker and FileVault).

Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
2022-09-27 10:20:59 +05:30
Abhishek Kumar e720b72e15 Merge remote-tracking branch 'apache/4.17' into main 2022-08-31 17:38:30 +05:30
Abhishek Kumar a21efe75df
vmware: fix vm snapshot with datastore cluster, drs (#6643)
Fixes #6595
Sync the volume datastore, path, and chain info while calculating the snapshot chain size after the snapshot operation completes in vCenter.
2022-08-31 16:00:14 +05:30
Suresh Kumar Anaparti 75da982d73
Updated resource counter to include correct size after volume creation/resize and other improvements (#6587)
* Updated resource counter to include correct size after volume creation/resize and other improvements
- Recalculate resource counters for root domain in the periodic task
- Update correct size in the primary_storage resource counter after volume creation/resize
- Some code improvements

* review and sonarcloud issues

Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
2022-08-16 10:41:42 +02:00
Paula Oliveira 9717ed9af2
Improve log messages on VolumeOrchestrator class (#6408)
Co-authored-by: Paula Zomignani Oliveira <paula@scclouds.com.br>
2022-08-12 09:17:06 +02:00
John Bampton f9347ecf2c
Fix spelling (#6597) 2022-08-03 15:43:47 +05:30
nvazquez 0bcc609f05
Updating pom.xml version numbers for release 4.18.0.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:25:35 -03:00
nvazquez 038a669d6b
Updating pom.xml version numbers for release 4.17.1.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:19:44 -03:00
nvazquez c56220fcf2
Updating pom.xml version numbers for release 4.17.0.0
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-05-31 14:33:47 -03:00
DK101010 ccac1a383f
Feat/add vdisk UUID to list volume (#5848)
* get vdisk uuid from vcenter and store it into database

* add vdisk uuid as external_uuid to listVolume response

* add sql upgrade file

* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java

Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>

* update sql add column external_uuid

* Update server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java

Co-authored-by: Wei Zhou <weizhou@apache.org>

* adapt param description for externalUuid

* add 'idempotent column add' to create external_uuid col

* rename method to getExternalDiskUUID

* remove line disk_offering.system_use

Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
2022-04-19 23:34:09 -03:00
Pearl Dsilva 431c352a6d
Synchronization of network devices on newly added hosts for Persistent Networks (#5977)
* Persistent Network feature & Marvin component tests

* Cleaned up comments and imports

* fixed small error

* add support to set up persistent networks' resources when a disabled host is enabled

* small fix

* use wildcard instead of hard-coding the bridge name

* allow clean up of resources when removing a host in maintenance mode

* skip test for simulator hypervisor

Co-authored-by: shatoboar <sang-woo.bae@campus.tu-berlin.de>
2022-04-11 23:12:05 -03:00
nvazquez e3132af64e
Merge branch '4.16' 2022-03-10 08:49:43 -03:00
Wei Zhou 3a456f1b31
server: mark volume snapshots as Destroyed if they do not exist on primary and secondary storage when deleting a volume (#6057)
* server: mark volume snapshots as Destroyed in some cases when delete a volume in QCOW2 format

When deleting a volume in QCOW2 format, if a volume snapshot does not exist on primary or secondary storage, mark the snapshot as Destroyed.

* Update #6057: remove check on volume format
2022-03-10 08:49:03 -03:00
Suresh Kumar Anaparti bc70535ee5
Updating pom.xml version numbers for release 4.16.2.0-SNAPSHOT
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-03-03 18:15:33 +05:30
Suresh Kumar Anaparti cad9332082
Updating pom.xml version numbers for release 4.16.1.0
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2022-02-25 19:01:16 +05:30
Suresh Kumar Anaparti 5c02f6d507
Merge branch '4.16' into main 2022-01-06 17:47:37 +05:30
dahn 2774bc156f
use physical size instead of virtual size for migration. (#5750)
* Use Physical size to evaluate if migration is possible

* Improve logging and consider files skipped as failure in complete migration

* skipped can't be negative

* remove useless method

* group multidisk templates for secstor migration

* use enum

* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/DataMigrationUtility.java

Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl d'Silva <pearl.dsilva@shapeblue.com>
2022-01-06 17:18:50 +05:30
Daniel Augusto Veronezi Salvador b4aabadc4d
Replace string libraries with org.apache.commons.lang3.StringUtils (#5386)
* Replace google lib for lang3 and adjust methods calls

* Replace string libs by lang3

* Prohibit others string libs

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2021-11-18 13:41:48 +05:30
nicolas 3f79436840
Updating pom.xml version numbers for release 4.17.0.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:55:52 -03:00
nicolas 93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas 44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
Nicolas Vazquez a5372a98dc
Fix storage cleanup corner case preventing VM deletion (#5575)
* Fix storage cleanup corner case

* Improve deletion

* Refactor
2021-10-16 00:09:54 -03:00
Daniel Augusto Veronezi Salvador 8ffba83214
Keep volume policies after migrating it to another primary storage (#5067)
* Add commons-lang3 to Utils

* Create a util to provide methods that ReflectionToStringBuilder does not have yet

* Create method to retrieve map of tags from resource

* Enable tests on volume components and remove useless tests

* Refactor VolumeObject and add unit tests

* Extract createPolicy in several methods

* Create method to copy policies between volumes and add unit tests

* Copy policies to new volume before removing old volume on volume migration

* Extract "destroySourceVolumeAfterMigration" to a method and test it

* Remove javadoc @param with no sensible information

* Rename method name to a generic name

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-09-08 09:13:41 -03:00
Nicolas Vazquez 413d10dd81
server: Extend the Annotations framework (#5103)
* Extend addAnnotation and listAnnotations APIs

* Allow users to add, list and remove comments

* Add adminsonly UI and allow admins or owners to remove comments

* New annotations tab

* In progress: new comments section

* Address review comments

* Fix

* Fix annotationfilter and comments section

* Add keyword and delete action

* Fix and rename annotations tab

* Update annotation visibility API and update comments table accordingly

* Allow users seeing all the comments for their owned resources

* Extend comments for volumes and snapshots

* Extend comments to multiple entities

* Add uuid to ssh keypairs

* SSH keypair UI refactor

* Extend comments to the infrastructure entities

* Add missing entities

* Fix upgrade version for ssh keypairs

* Fix typo on DB upgrade schema

* Fix annotations table columns when there is no data

* Extend the list view of items to show whether they have comments

* Remove extra test

* Add annotation permissions

* Address review comments

* Extend marvin tests for annotations

* updating ui stuff

* addition to toggle visibility

* Fix pagination on comments section

* Extend to kubernetes clusters

* Fixes after last review

* Change default value for adminsonly column

* Remove the required field for the annotationfilter parameter

* Small fixes on visibility and other fixes

* Cleanup to reduce files changed

* Rollback extra line

* Address review comments

* Fix cleanup error on smoke test

* Fix sending incorrect parameter to checkPermissions method

* Add check domain access for the calling account for domain networks

* Fix only display annotations icon if there are comments the user can see

* Simply change the Save button label to Submit

* Change order of the Tools menu to prevent users getting a 404 error on clicking the text instead of expanding

* Remove comments when removing entities

* Address review comments on marvin tests

* Allow users to list annotations for an entity ID

* Allow users to see all comments for allowed entities

* Fix search filters

* Remove username from search filter

* Add pagination to the annotations tab

* Display username for user comments

* Fix add permissions for domain and resource admins

* Fix for domain admins

* Trivial but important UI fix

* Replace pagination for annotations tab

* Add confirmation for delete comment

* Lint warnings

* Fix reduced list as domain admin

* Fix display remove comment button for non admins

* Improve display remove action button

* Remove unused parameter on groupShow

* Include a clock icon to the all comments filter except for root admin

* Move cleanup SQL to the correct file after rebasing main

Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
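
The comment workflow above, as hedged CloudMonkey-style calls (the exact parameter set and accepted entitytype values should be verified against the API reference):

```
add annotation entityid=<vm-uuid> entitytype=VM annotation="needs patching" adminsonly=false
list annotations entityid=<vm-uuid> annotationfilter=all
remove annotation id=<annotation-uuid>
```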
2021-09-08 10:14:06 +05:30
Spaceman1984 96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dc.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
Rohit Yadav cb167072a1 Merge remote-tracking branch 'origin/4.15'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-05-07 16:37:42 +05:30
Harikrishna 32e3bbdcc5
VMware Datastore Cluster primary storage pool synchronisation (#4871)
Datastore cluster support as a primary storage is already there, but any change to the datastore cluster at vCenter, like the addition or removal of a datastore, is not synchronised with CloudStack directly. It needs removal of the primary storage from CloudStack and adding it again to CloudStack.

Here synchronisation of datastore cluster is fixed without need to remove or add the datastore cluster.
1. A new API is introduced, syncStoragePool, which takes the datastore cluster storage pool UUID as the parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation if a new child datastore is found in datastore cluster, then management server will create a new child storage pool in database under the datastore cluster. If the new child storage pool is already added as an individual storage pool then the existing storage pool entry will be converted to child storage pool (instead of creating a new storage pool entry)
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, then the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastore.
2021-05-07 16:30:54 +05:30
sureshanaparti 45e77dd6f0
server: Clean up the duplicate volume when the destination managed volume creation failed on migrate volume operation (#4730)
Duplicated volumes after failed migration in Allocated state

Fix: Clean up the duplicate volume when the destination managed volume creation failed on migrate volume operation
2021-03-03 13:30:08 +05:30
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when the VM is synced to Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore

- Check the router filesystem is writable or not, before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs(Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking snapshots for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Rohit Yadav b482da8c91 Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-11 13:58:30 +05:30
Daan Hoogland 280c13a4bb Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-05 15:51:02 +00:00
Daan Hoogland 81e9e6809b Updating pom.xml version numbers for release 4.15.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:34:46 +00:00
Daan Hoogland e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland 01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Pearl Dsilva 1dbb76f64b
Fix: Data migration (#4475)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-11-18 09:45:53 +01:00
nvazquez ee5b8763a6 Fix removing VM and its volumes for deploy-as-is if they have previously failed - restore cpu flags in nested virtualization test 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 5fdabc1cb0 Added storage policy details to disk while creating the disk and restricted migration of volumes to storage pools which are not storage policy compliant 2020-10-19 15:05:58 +05:30
Harikrishna Patnala 46b5322d9b Adding vSphere storage policy to disk on start command and attach volume command 2020-10-19 15:05:58 +05:30
nvazquez 94bebe8792 Revert back deploy as is column on templates but keep it as default for new templates 2020-10-19 15:05:57 +05:30
nvazquez 9b51a706db Set deploy-as-is to default on VMware 2020-10-19 15:05:57 +05:30
Harikrishna Patnala 33ae2afc89 Removed few duplicate imports during rebase with master 2020-10-19 15:05:57 +05:30
nvazquez d864e9dc39 [VMware] Full OVF properties support 2020-10-19 15:05:56 +05:30
Harikrishna Patnala 201ebe8868 Simulator failures fixing 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 61dd85876b Fix migrate vm and volume APIs in case of datastore cluster 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 873f9dd9ac Datastore cluster operations: putting into maintenance mode, updating storage pool with tags, cancelling maintenance mode, and deleting storage pool 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 75fb1d91ee Fix adding Datastore clusters and listing 2020-10-19 14:57:15 +05:30
Harikrishna Patnala 41b3fc19d6 Add Datastore cluster and the child entities which are datastores in the cluster into CloudStack
Setting scope is still pending.
2020-10-19 14:57:15 +05:30
Harikrishna Patnala 48786b2d31 DataStore Clusters addition as a storage pool 2020-10-19 14:57:15 +05:30
Harikrishna Patnala 6df819028e UI changes and accept any type of datastore as presetup in vmware 2020-10-19 14:57:15 +05:30
Pearl Dsilva b464fe41c6
server: Secondary Storage Usage Improvements (#4053)
This feature enables the following:
Balanced migration of data objects from source Image store to destination Image store(s)
Complete migration of data
setting an image store to read-only
viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
2020-09-17 10:12:10 +05:30
Nicolas Vazquez 8c1d749360
[VMware] Enable unmanaging guest VMs (#4103)
* Enable unmanaging guest VMs

* Minor fixes

* Fix stop usage event only if VM is not stopped when unmanaging

* Rename unmanaged VMs manager

* Generate netofferingremove usage event if VM is not stopped

* Generate usage event VM snapshot primary off when unmanaging
2020-06-26 08:31:43 -03:00
andrijapanicsb 5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb 05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb 6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add free disk space on the host prior template download

* Add unit tests for the free space check

* Fix free space check - retrieve avaiable size in bytes

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
2020-03-06 19:56:54 +01:00
Wei Zhou fd5bea838b
New feature: Add support to destroy/recover volumes (#3688)
* server: fix resource count of primary storage if some volumes are Expunged but not removed

Steps to reproduce the issue
(1) create a vm and stop it. check resource count of primary storage
(2) download volume. resource count of primary storage is not changed.
(3) expunge the vm, the volume will be Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreased.
(4) update resource count of the account (or domain), the resource count of primary storage is reset to the value in step (2).

* New feature: Add support to destroy/recover volumes

* Add integration test for volume destroy/recover

* marvin: check resource count of more types

* messages translate to JP

* Update messages for CN

* translate message for NL

* fix two issues per Daan's comments

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 11:25:10 +01:00
Paul Angus 50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

From source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage migration capability for KVM, if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
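
Concretely, that capability flip is a single-row update; the table is assumed here to be cloud.hypervisor_capabilities, and you may want to restrict it further by hypervisor_version:

```
UPDATE cloud.hypervisor_capabilities
SET storage_motion_supported = 1
WHERE hypervisor_type = 'KVM';
```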
Fix for CentOS 7:

Create repo file on /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Rohit Yadav 0700d91a68 Merge branch '4.12'
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
  merging

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 15:15:17 +05:30
Rohit Yadav 00ff536f81 Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 14:26:11 +05:30
Anurag Awasthi f9b61bc737 orchestration: Allow VM that has never started to have volumes attached (#3276)
With this patch b766bf7
we started tracking disks in the Attaching state so that other attach requests can fail gracefully. However, this missed the case where disks were in the Allocated state but attach was requested.

For the use case where users want to attach a disk that is in the Allocated state but not Ready, we need to have the Allocated-Attaching transition as well. We must take care of returning to the original state - Allocated or Ready - when the attach request has completed.

For the use case of unstarted VMs the disk must proceed as follows: Allocated -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For the use case of started VMs it's simpler, and the disk proceeds as follows: Ready -> Attaching -> Ready.

Test this by creating a VM with "startvm=false", creating a disk, and trying to attach it in the Allocated state. It would give an exception on the latest 4.11 but is fixed by this patch.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-10 23:40:38 +05:30
GabrielBrascher 8d3feb100a Updating pom.xml version numbers for release 4.13.0.0-SNAPSHOT
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-20 18:47:35 -03:00
GabrielBrascher a137398bf1 Updating pom.xml version numbers for release 4.12.0.0
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-14 10:11:46 -03:00