* snapshot: don't schedule next snapshot job for a removed volume
When the management server starts, it starts the snapshot scheduler. If a
volume snapshot policy exists for a volume that no longer exists, this can
cause an SQL constraint violation that prevents the management server from
starting its various components, resulting in an HTTP 503 error.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* remove schedule on missing volume
---------
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* storage,plugins: delegate allow zone-wide volume migration check and access grant to storage drivers
The following checks have been delegated to storage drivers:
- For volumes on zone-wide storage, whether they need storage migration when a VM is migrated
- Whether the volume requires grant access
Also applies fixes in resolving PrimaryDataStore
* add tests
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unused import
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update engine/orchestration/src/test/java/org/apache/cloudstack/engine/orchestration/VolumeOrchestratorTest.java
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
PR #8549 replaced RSA with ed25519. Unfortunately, ed25519 is unsupported in FIPS mode
```
$ ssh-keygen -t ed25519 -m PEM -N '' -f key1
ED25519 keys are not allowed in FIPS mode
$ ssh-keygen -t ecdsa -m PEM -N '' -f key1
Generating public/private ecdsa key pair.
Your identification has been saved in key1
Your public key has been saved in key1.pub
The key fingerprint is:
.........
```
* Introduced a new API, checkVolumeAndRepair, that allows users or admins to check a volume and repair it if any leaks are observed (see the sketch after this list).
Currently this is supported only for KVM
* some fixes
* Added unit tests
* addressed review comments
* add repair volume while granting access
* Changed repair parameter to accept both leaks/all
* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations
* Added volume check and repair changes only during VM start and volume attach operations
* Refactored the names to look similar across the code
* Some code fixes
* remove unused code
* Renamed repair values
* Fixed unit tests
* changed version
* Address review comments
* Code refactored
* used volume name in logs
* Changed the API to Async and the setting scope to storage pool
* Fixed exit value handling with check volume command
* Fixed storage scope to the setting
* Fix volume format issues
* Refactored the log messages
* Fix formatting
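For illustration, a minimal sketch of how the check/repair step could be driven for a KVM qcow2 volume, assuming `qemu-img check` is the underlying tool (its `-r leaks`/`-r all` modes match the API's repair values) and using its documented exit values; the class name and paths are hypothetical:
```java
import java.io.IOException;

// Hypothetical sketch: run "qemu-img check", optionally repairing
// "leaks" or "all" (the two values the repair parameter accepts).
public class VolumeCheckSketch {
    public static int checkAndRepair(String volumePath, String repair)
            throws IOException, InterruptedException {
        ProcessBuilder pb = (repair == null)
                ? new ProcessBuilder("qemu-img", "check", volumePath)
                : new ProcessBuilder("qemu-img", "check", "-r", repair, volumePath);
        Process p = pb.inheritIO().start();
        // qemu-img check exit values: 0 = no errors, 1 = check not completed,
        // 2 = corruption found, 3 = leaked clusters found but not repaired.
        return p.waitFor();
    }

    public static void main(String[] args) throws Exception {
        int rc = checkAndRepair("/var/lib/libvirt/images/vol-1234.qcow2", "leaks");
        System.out.println("qemu-img check exit value: " + rc);
    }
}
```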
* Use free/total instead of free metric to calculate imbalance
* Filter out hosts for condensed while checking imbalance
* Make DRS more configurable
* code refactor
* Add unit tests
* fixup
* Fix validation for drs.imbalance.condensed.skip.threshold
* Add logging and other minor changes for drs
* Add some logging for drs
* Change format for drs imbalance to string
* Show drs imbalance as percentage
* Fixup label for memorytotal in en.json
* VR: fix issue between VPC VMs and other Public IPs in the same subnet as additional Public IPs
* Update PR8599: move to VpcVirtualNetworkApplianceManagerImpl
* Allocate new volume on restore virtual machine operation when resource count increment succeeds
- keep them in a transaction, and fail the operation if the resource count increment fails
* Added some (negative) unit tests for restore vm
This PR fixes a regression caused by #8465 on advanced zones, where import fails with:
2024-01-10 12:13:33,234 DEBUG [o.a.c.e.o.NetworkOrchestrator] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Allocating nic for vm 142272e8-9e2e-407b-9d7e-e9a03b81653c in network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10} during import
2024-01-10 12:13:33,239 ERROR [o.a.c.v.UnmanagedVMsManagerImpl] (API-Job-Executor-3:ctx-991bbe9f job-128 ctx-f49517d4) (logid:d7b8e716) Failed to import NICs while importing vm: i-2-31-VM
com.cloud.exception.InsufficientVirtualNetworkCapacityException: Unable to acquire Guest IP address for network Network {"id": 204, "name": "Isolated", "uuid": "9679fac5-e3ac-4694-a57b-beb635340f39", "networkofferingid": 10}Scope=interface com.cloud.dc.DataCenter; id=1
at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.importNic(NetworkOrchestrator.java:4582)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importNic(UnmanagedVMsManagerImpl.java:859)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importVirtualMachineInternal(UnmanagedVMsManagerImpl.java:1198)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstanceFromHypervisor(UnmanagedVMsManagerImpl.java:1511)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.baseImportInstance(UnmanagedVMsManagerImpl.java:1342)
at org.apache.cloudstack.vm.UnmanagedVMsManagerImpl.importUnmanagedInstance(UnmanagedVMsManagerImpl.java:1282)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Also addresses the VNC password field being set instead of a fixed string
This PR changes the password.policy.regex default value to empty. With an empty value, the configuration is skipped during the password policy check; the regex is checked only when the configuration is set to something other than a blank string.
This way, when creating a user in org.apache.cloudstack.ldap.LdapAuthenticator#authenticate(), we won't get an error by default, as an empty value for the password is passed.
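A minimal sketch of the skip-if-blank behaviour described above (class and method names are illustrative, not the actual CloudStack code):
```java
import java.util.regex.Pattern;

// Illustrative only: the regex check is skipped when the configured
// password.policy.regex value is blank.
public class PasswordPolicySketch {
    public static boolean passesPolicy(String password, String policyRegex) {
        if (policyRegex == null || policyRegex.trim().isEmpty()) {
            return true; // empty configuration: policy check is skipped
        }
        return Pattern.matches(policyRegex, password);
    }

    public static void main(String[] args) {
        System.out.println(passesPolicy("", ""));         // true: blank policy is skipped
        System.out.println(passesPolicy("abc", ".{8,}")); // false: regex is enforced
    }
}
```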
This simply improves the log statement that prints debug information at the
beginning of a stats collector run for hosts or VMs.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR updates the conserve mode of the default VPC tier offering to conserve_mode=1
so we can create both port forwarding and load balancing rules on a public IP in VPC tiers.
This fixes #8313
When a public IP gets removed from quarantine, the removal reason gets saved to the database; however, it may also be useful for operators to know who removed the public IP from quarantine. For that reason, this PR extends the public IP quarantine feature so that the account that deliberately removed an IP from quarantine also gets saved to the database.
ResourceType.volume stores the count of volumes, not their size, so the increment/decrement should be just 1 when assigning a volume to a different account.
Sometimes users need to move resources between domains. For example, in a big company, a department may be moved from one part of the company to another, changing the company's department hierarchy; the easiest way of reflecting this change in the company's cloud environment would be to move subdomains between domains, but currently ACS offers no option to do that.
This PR adds the moveDomain API, which will move domains between subdomains. Furthermore, if the domain that is being moved has any subdomains, those will also be moved, maintaining the current subdomain tree.
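For illustration, a call might look as follows; the parameter names are assumptions based on the description (UUIDs are placeholders):
```
moveDomain&domainid=<uuid-of-domain-to-move>&parentdomainid=<uuid-of-new-parent-domain>
```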
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over fiber channel connections. It requires Multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector used to communicate with the primary storage provider, using a ProviderAdapter interface, allowing the code interacting with the primary storage provider APIs to be simpler and have no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera line of storage solutions and the Pure Flash Array line of storage solutions.
Updates AutoScaleManager/AutoScaleManagerImpl so that getNextVmHostName and checkAutoScaleVmGroupName can be overridden in derivative implementations to allow for custom naming conditions and restrictions. If possible, would like to include this in 4.19 since it is a trivial change.
This can be used to create an extension of AutoScaleManagerImpl.java, overriding these 2 methods
This PR adds the capability in CloudStack to convert VMware Instances disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. vSphere/VMware setup might be managed by CloudStack or be a standalone setup.
CloudStack will let the administrator select a VM from an existing VMware vCenter in the CloudStack environment or external vCenter requesting vCenter IP, Datacenter name and credentials.
The migrated VM will be imported as a KVM instance
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set by the setting convert.instance.process.timeout
Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
CloudStack will delegate the migration action to a KVM host, and the host will attempt to migrate the VM by invoking virt-v2v. In case the guest OS is not supported, CloudStack will handle the operation as a failure
The migration process using virt-v2v may not be a fast process
CloudStack will not perform any check of guest OS compatibility with the virt-v2v library, as indicated on: https://access.redhat.com/articles/1351473.
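For illustration, the kind of virt-v2v invocation a KVM host would run could be assembled as below; the exact flags CloudStack passes are not specified here, so treat this as a generic virt-v2v example (host names, VM name and paths are placeholders), not the precise implementation:
```java
import java.util.Arrays;
import java.util.List;

// Generic virt-v2v example: convert a VM from vCenter into a local qcow2 image.
public class VirtV2vSketch {
    public static void main(String[] args) throws Exception {
        List<String> cmd = Arrays.asList(
                "virt-v2v",
                "-ic", "vpx://administrator%40vsphere.local@vcenter.example/DC1/Cluster1/esxi1.example?no_verify=1",
                "source-vm-name",
                "-o", "local",
                "-os", "/var/lib/libvirt/images",
                "-of", "qcow2");
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        System.exit(p.waitFor()); // non-zero exit is handled as a failed conversion
    }
}
```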
* 4.18:
server: Initial new vpnuser state (#8268)
UI: Removed redundant IP Address Column when create Port forwarding rules (#8275)
UI: Removed ICMP input fields for protocol number from ACL List rules modal (#8253)
server: check if there are active nics before network GC (#8204)
When creating a private gateway, if an ACL verification error occurs, the changes made up to that point are not rolled back, resulting in inconsistent records in the database.
This PR intends to fix this bug and, if an error occurs during the creation of the private gateway, the changes will be rolled back.
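The pattern of the fix, sketched generically with plain JDBC (the commented step methods stand in for the actual gateway-creation and ACL-validation logic):
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Generic illustration: perform all creation steps in one transaction so a
// later ACL validation failure rolls back every record persisted so far.
public class RollbackSketch {
    public static void createPrivateGateway(String jdbcUrl) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false);
            try {
                // insertGatewayRecords(conn); // hypothetical step 1: persist records
                // validateAcl(conn);          // hypothetical step 2: may throw
                conn.commit();
            } catch (Exception e) {
                conn.rollback();               // undo the partial changes
                throw new SQLException("private gateway creation failed", e);
            }
        }
    }
}
```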
* clear pool id if volume allocation fails, rather than leaving the volume in Allocated state with a pool id assigned
* clear_pool_id_if_volume_allocation_fails
---------
Co-authored-by: Annie Li <ji_li@apple.com>
This PR adds the argument 'ipaddress' to the disassociateIpAddress API. An IP address can be disassociated by directly giving the address instead of the ID.
Fixes: #8125
Extends the current functionality of KVM Host HA to the StorPool storage plugin, with an option for easy integration by the rest of the storage plugins to support Host HA.
This extension works like the current NFS storage implementation. It can be used simultaneously with NFS and StorPool storage, or with StorPool primary storage only.
If it is used with different primary storages, like NFS and StorPool, and one of the health checks fails for a storage, there is an option to report the failure to the management server with the global config kvm.ha.fence.on.storage.heartbeat.failure. By default this option is disabled; when enabled, the Host HA service will continue with the checks on the host and eventually fence it.
The CloudStack coding conventions specify org.apache.commons.lang3.StringUtils as the default alternative for string operations, and com.cloud.utils.StringUtils for operations not covered by the former.
Since org.apache.commons.lang3.StringUtils is seen as the default alternative and is used more often in NetworkModelImpl.java, this PR inverts how these two classes are referenced in this file in order to standardize it.
OAuth2, the industry-standard authorization or authentication framework, simplifies the process of
granting access to resources. CloudStack supports OAuth2 authentication wherein users can log in to
CloudStack without using a username and password. Support for Google and Github providers has been added.
Other OAuth2 providers can be easily integrated with CloudStack using its plugin framework.
The login page will show provider options when the OAuth2 is enabled and corresponding providers are configured.
"OAuth configuration" sub-section is present under "Configuration" where admins can register the corresponding
OAuth providers.
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable, allowing ACS administrators to have customized control over scheduling.
Implementation
There are three top level components:
Scheduler
A timer task which:
Generates DRS plans for clusters
Processes DRS plans
Removes old DRS plan records
DRS Execution
We go through each VM in the cluster and use the specified algorithm to check if DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. On the basis of cost, benefit & improvement, the best migration is selected for the current iteration and the VM is migrated. The maximum number of iterations (live migrations) possible on the cluster is defined by drs.iterations, which is specified as a percentage (a value between 0 and 1) of the total number of workloads.
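A condensed sketch of that selection loop, with hypothetical types standing in for the real VM/host objects and with improvement assumed to be benefit minus cost:
```java
import java.util.List;

// Hypothetical sketch of one DRS execution: each iteration picks the single
// best migration; iterations are capped by drs.iterations * workload count.
class DrsPassSketch {
    interface AlgorithmSketch {
        double cost(String vm, String host);
        double benefit(String vm, String host);
    }

    static void runDrs(List<String> vms, List<String> hosts,
                       double iterationsFraction, AlgorithmSketch algo) {
        int maxMigrations = (int) Math.ceil(iterationsFraction * vms.size());
        for (int i = 0; i < maxMigrations; i++) {
            String bestVm = null, bestHost = null;
            double bestImprovement = 0;
            for (String vm : vms) {
                for (String host : hosts) {
                    double improvement = algo.benefit(vm, host) - algo.cost(vm, host);
                    if (improvement > bestImprovement) {
                        bestImprovement = improvement;
                        bestVm = vm;
                        bestHost = host;
                    }
                }
            }
            if (bestVm == null) break; // no beneficial migration left
            migrate(bestVm, bestHost); // live-migrate the chosen VM
        }
    }

    static void migrate(String vm, String host) { /* placeholder */ }
}
```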
Algorithm
Every algorithm implements two methods (a sketch follows the algorithm descriptions below):
needsDrs - to check if DRS is required for the cluster
getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host.
Algorithms
Condensed - Packs all the VMs on minimum number of hosts in the cluster.
Balanced - Distributes the VMs evenly across hosts in the cluster.
Algorithms use drs.level to decide the amount of imbalance to allow in the cluster.
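To make the two-method contract concrete, a hedged sketch; the imbalance measure shown (standard deviation over mean of per-host usage) is an assumption for illustration and may differ from the actual formula:
```java
import java.util.List;

// Hypothetical rendering of the algorithm contract described above.
interface ClusterDrsAlgorithmSketch {
    boolean needsDrs(List<Double> hostUsedFractions, double imbalanceSetting);
    double[] getMetrics(String vm, String destHost); // {cost, benefit, improvement}
}

class ImbalanceSketch {
    // Assumed measure: coefficient of variation of per-host used fraction
    // (free/total-derived, per the bullet above).
    static double imbalance(List<Double> used) {
        double mean = used.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = used.stream()
                .mapToDouble(d -> (d - mean) * (d - mean)).average().orElse(0);
        return mean == 0 ? 0 : Math.sqrt(variance) / mean;
    }
}
```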
APIs Added
listClusterDrsPlan
id - ID of the DRS plan to list
clusterid - to list plans for a cluster id
generateClusterDrsPlan
id - cluster id
iterations - The maximum number of iterations in a DRS job, defined as a percentage (a value between 0 and 1) of the total number of workloads. Defaults to the value of the cluster's drs.iterations setting.
executeClusterDrsPlan
id - ID of the cluster for which DRS plan is to be executed.
migrateto - This parameter specifies the mapping between a vm and a host to migrate that VM. Format of this parameter: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid>.
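For example, a plan that migrates two VMs would be executed with (UUIDs are placeholders):
```
executeClusterDrsPlan&id=<cluster-uuid>&migrateto[0].vm=<vm1-uuid>&migrateto[0].host=<host1-uuid>&migrateto[1].vm=<vm2-uuid>&migrateto[1].host=<host2-uuid>
```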
Config Keys Added
| Name | Key | Scope | Default Value | Description |
|------|-----|-------|---------------|-------------|
| ClusterDrsPlanExpireInterval | drs.plan.expire.interval | Global | 30 days | The interval in days after which old DRS records will be cleaned up. |
| ClusterDrsEnabled | drs.automatic.enable | Cluster | false | Enable/disable automatic DRS on a cluster. |
| ClusterDrsInterval | drs.automatic.interval | Cluster | 60 minutes | The interval in minutes after which a periodic background thread will schedule DRS for a cluster. |
| ClusterDrsIterations | drs.max.migrations | Cluster | 50 | Maximum number of live migrations in a DRS execution. |
| ClusterDrsAlgorithm | drs.algorithm | Cluster | condensed | DRS algorithm to execute on the cluster. This PR implements two algorithms - balanced & condensed. |
| ClusterDrsLevel | drs.imbalance | Cluster | 0.5 | Percentage (as a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance is allowed and 0.0 means imbalance is allowed. |
| ClusterDrsMetric | drs.imbalance.metric | Cluster | memory | The cluster imbalance metric to use when checking the drs.imbalance.threshold. Possible values are memory and cpu. |
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot will return zone and download details for the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
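Illustrative calls using the parameters named above (IDs are placeholders; the copySnapshot destination parameter name is assumed by analogy with copyTemplate):
```
createSnapshot&volumeid=<volume-uuid>&zoneids=<zone2-uuid>,<zone3-uuid>
copySnapshot&id=<snapshot-uuid>&destzoneids=<zone2-uuid>
deleteSnapshot&id=<snapshot-uuid>&zoneid=<zone2-uuid>
```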
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: GaOrtiga <49285692+GaOrtiga@users.noreply.github.com>
Co-authored-by: Gabriel Pordeus Santos <gabrielpordeus@gmail.com>
Co-authored-by: Gabriel <gabriel.fernandes@scclouds.com.br>
This PR fixes a few issues:
- check the IP range of the new network instead of the network CIDR, so that the two networks can use the same CIDR without IP conflicts.
- Private gateways: return the VLAN number only for root admins
- when updating an isolated network, check the new guest VM CIDR and the IPs of networks/VPC gateways associated with it
Don't use LibvirtServerDiscoverer's processHostAdded() in CustomServerDiscoverer
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Allow configkey to set 'cloud-name' cloud-init metadata
* Update engine/api/src/main/java/com/cloud/vm/VirtualMachineManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/network/NetworkModelImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/network/router/CommandSetupHelper.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Revert "Update server/src/main/java/com/cloud/network/router/CommandSetupHelper.java"
This reverts commit 8abc3e38c4.
* Revert "Update server/src/main/java/com/cloud/network/NetworkModelImpl.java"
This reverts commit 7f239be919.
* Rework/Fix review code suggestions
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* UI: Fix user role login due to missing API access on custom hypervisor name
* Refactor to include the custom HW display name as part of the response of listCapabilities API
* Add since parameter
If the original value is `false`, the search build is configured without the condition. If the value is now changed to `true`, it will not take effect due to the missing condition.
* Don't allow inadvertent deletion of hidden details via API
* Update VM details unit test ensuring system/hidden details not removed
* Update test/integration/component/test_update_vm.py
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
### Description
Design document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+Minimal+changes+to+allow+new+dynamic+hypervisor+type%3A+Custom+Hypervisor
This PR introduces the minimal changes to add a new hypervisor type (internally named Custom in the codebase, with a configurable display name), allowing an external hypervisor plugin to be written as a Custom Hypervisor for CloudStack
The custom hypervisor name is set by the setting 'hypervisor.custom.display.name'. The new hypervisor type does not affect the behaviour of any CloudStack operation; it simply introduces a new hypervisor type into the system.
CloudStack does not have any means to dynamically add new hypervisor types. The hypervisor types are internally preset by an enum defined within the CloudStack codebase, and unless a new version supports a new hypervisor, it is not possible to add a host of a hypervisor that is not part of the enum. It is possible to implement minimal changes in CloudStack to support a new hypervisor plugin that may be developed privately
This PR is an initial work on allowing new dynamic hypervisor types (adds a new element to the HypervisorType enum, but allows variable display name for the hypervisor)
##### Proposed Future work:
Replace the HypervisorType from a fixed enum to an extensible registry mechanism, registered from the hypervisor plugin
#### Feature Specifications
- The new hypervisor type is internally named 'Custom' to the CloudStack services (management server and agent services, database records).
- A new global setting ‘hypervisor.custom.display.name’ allows administrators to set the display name of the hypervisor type. The display name will be shown in the CloudStack UI and API.
- In case the ‘hypervisor.list’ setting contains the display name of the new hypervisor type, the setting value is automatically updated after the ‘hypervisor.custom.display.name’ setting is updated.
- The new Custom hypervisor type supports:
- Direct downloads (the ability to download templates into primary storage from the hypervisor hosts without using secondary storage)
- Local storage (use hypervisor hosts local storage as primary storage)
- Template format: RAW format (the templates to be registered on the new hypervisor type must be in RAW format)
- The UI is also extended to display the new hypervisor type and the supported features listed above.
- The above are the minimal changes for CloudStack to support the new hypervisor type, which can be tested by integrating the plugin codebase with this feature.
#### Use cases
This PR allows cloud administrators to test custom hypervisor plugin implementations in CloudStack and easily integrate them into CloudStack as a new hypervisor type ("Custom"), reducing the implementation to only the hypervisor-specific storage/networking support and the hypervisor resource that communicates with the management server.
- CloudStack admin should be able to create a zone for the new custom hypervisor and add clusters, hosts into the zone with normal operations
- CloudStack users should be able to execute normal VMs/volumes/network/storage operations on VMs/volumes running on the custom hypervisor hosts
* Replace Hashtable with LinkedHashMap in createIsoResponse
This change replaces the use of Hashtable with LinkedHashMap in the `createIsoResponse` method of `ViewResponseHelper`.
The reason for this modification is to maintain the insertion order of entries, which isn't the case with Hashtable.
This could lead to more predictable results and behaviors in calling methods.
* Replace Hashtable with LinkedHashMap in view response creation methods
Changed Hashtable to LinkedHashMap in various response creation methods within ViewResponseHelper class.
This modification ensures an ordered iteration which is beneficial for scenarios where the insertion order of responses needs to be maintained consistently.
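The behavioural difference in one small example:
```java
import java.util.Hashtable;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, String> table = new Hashtable<>();
        Map<String, String> linked = new LinkedHashMap<>();
        for (String k : new String[]{"c", "a", "b"}) {
            table.put(k, k);
            linked.put(k, k);
        }
        System.out.println(table.keySet());  // hash-dependent order, e.g. [b, a, c]
        System.out.println(linked.keySet()); // insertion order preserved: [c, a, b]
    }
}
```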
---------
Co-authored-by: Sina Kashipazha <soreana@users.noreply.github.com>
* 4.18:
server: remove registered userdata when cleanup an account (#7777)
server: Use max secondary storage defined on the account during upload (#7441)
test: upgrade kubernetes versions to 1.25.0/1.26.0 (#7685)
kvm: Added VNI Devices as normal bridge slave devs (#7836)
noVNC: fix JP keyboard on vmware7+ which uses websocket URL (#7694)
* 4.18:
UI: Filter templates by zone and hypervisor type when reinstall a VM (#7739)
KVM: fix SSVM starting when overprovisioning memory (#7663)
pom.xml: add property project.systemvm.template.location (#7706)
cloudutils: fix adding rocky9 host failure due to missing /etc/sysconfig/libvirtd (#7779)
server: get id from persisted object ReservationVO (#7785)
search in (too) large result sets (#7766)
ui: fix 404 error when list volumes of system vms (#7772)
packaging: install tzdata-java on centos7/centos8 (#7768)
This PR addresses a rare case of potential overlap between resource reservation and resource count.
For different resource types there can be some delay between incrementing the resource count and clearing the earlier reservation. This may result in failures when there are parallel deployments happening.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes the case of appending userdata when both template and VM data are either shellscript or cloudconfig
Fixes an error when appending gzip userdata
Fixes the case when manual userdata text from the VM is not getting decoded/encoded correctly.
Fixes the case of appending multipart data when both template and VM data contain the same format types.
Refactor - moved validateUserData method to UserDataManager class
Refactor userdata test to check resultant multipart userdata thoroughly
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* 4.18:
Storage and volumes statistics tasks for StorPool primary storage (#7404)
proper storage construction (#6797)
guarantee MAC uniqueness (#7634)
server: allow migration of all VMs with local storage on KVM (#7656)
Add L2 networks to Zones with SG (#7719)
When deploying a VM fails during the allocation process, it may leave behind resources that were already allocated before the failure. They would get removed from the database, as the whole code block is wrapped inside a transaction twice, but the server would not inform the network or storage plugins to clean up the allocated resources.
This PR removes transactions during VM allocation, so the allocated VM and its resource records are persisted in the DB even during failures. When a failure is encountered, the VM is moved to the Error state. This helps the VM and its resources to be properly deallocated when it is expunged, either by a server task such as ExpungeTask or during a manual expunge.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR throws an exception with the correct error code and logs the error message when a resource allocation failure is encountered during the resize volume operation.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Allow retrieving only the count of resources on APIs listPublicIpAddresses, listNetworks, listVirtualMachines and listVolumes
* Use parameter to retrieve only the count of resources in the dashboard
* Create abstract class
* Guest OS mapping improvements
- Checks the OS mapping name in hypervisor (VMware, XenServer)
- Displays guest OS mappings in UI
* Added API getHypervisorGuestOsNames to list the guest OS names in the hypervisor, and code improvements
* Some static analysis fixes
* Removed commented code in listview
* Guest OS list
* UI changes for adding guest os and mappings
* Added guest os mappings in guest os form
* Added new filter to guest os mapping
* Name and description changes
* VMware Host and cluster MO unit tests
* CheckGuestOsMapping command and answer unit tests
* GetHypervisorGuestOsNames command and answer unit tests
* VmwareResource unit tests
* GuestOsMapper unit tests
* icon changes
* Addressed review comments
* Renaming fixes
* Removed comments
* marvin tests for guest os operations
* Added marvin tests for OS mappings
* Document links and UI improvements
* Added deduplication for the list guest OS API
* Fixed linter failure
* Few bug fixes and UI changes
* Few improvements
* Addressed code smells
* Fixed UI issues after rebase
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Supported Virtual machine operations:
- live migration of VM to another host
- virtual machine snapshots (group snapshot without memory)
- revert VM snapshot
- delete VM snapshot
Supported Volume operations:
- attach/detach volume
- live migrate volume between two StorPool primary storages
- volume snapshot
- delete snapshot
- revert snapshot
This PR adds a check on the host's status. Without this, if the agent is not in Up or Connecting state, expunging a VM fails.
Steps to reproduce:
- Enable vm.configdrive.force.host.cache.use in Global Configuration.
- Create a L2 network with config drive
- Deploy a vm with the L2 network created in previous step
- Stop the vm and destroy vm (not expunge it)
- Stop the cloudstack-agent on the VM's host
- Expunge the vm
Fixes: #7428
* Live storage migration of a volume in ScaleIO within the same ScaleIO storage cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations; secrets are getting cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion