Add a global setting to control whether redirection is allowed while
downloading templates and volumes
core: changes to SimpleHttpMultiFileDownloader similar to those in HttpTemplateDownloader
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
(cherry picked from commit b1642bc3bf)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Move allow.additional.vm.configuration.list.kvm from Global to Account setting
- Disallow VM details starting with "extraconfig" when deploying VMs
- Skip changes to VM details starting with "extraconfig" when updating VM settings
- Allow only extraconfig for DPDK in service offering details
- Check whether extraconfig values in VM details are supported when starting VMs
- Check whether extraconfig values in service offering details are supported when starting VMs
- Disallow add/edit/update of VM settings for extraconfig in the UI
(cherry picked from commit e6e4fe16fb)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 7aea9db1c8)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* NSX integration - skeletal code
* Fix module not loading on startup
* add upgrade path and daos
* add NSX controller command
* add support for adding and listing nsx provider to a zone
* add license
* add default VPC offering and update upgrade path
* add global setting to enable nsx plugin
* add delete nsx controller operation
* add nsxresource
* add NSX resource, API client, create tier-1 gw
* update db
* update response and add license
* Add support to create and delete nsx tier-1 gateway
* add license
* cleanup and add skeletal code for network creation
* add create/delete segment and UI integration
* add license
* address code smells - part 1
* fix test / build failure
* add UI changes, update nsx_provider table transport zones, and use NSX broadcast domain when adding NICs to routers
* ui: fix password field, and backend changes
* add route advertisement
* update offering
* update offering
* add sleep before deletion of vpc / tier g/w for ports to be removed
* move creation of segments to design phase
* change provider to VPC router for DHCP & DNS services in an NSX offering
* Add public nic for NSX
* reserve first IP (after g/w) of subnet for router nic - NSX
* revert reserving 1st IP in vpc segments
* [NSX] Create a DHCP relay and add it to a VPC tier segment (#107)
* Create DHCP relay command and execute request
* In progress integrate with networking
* Create DHCP relay config on the network VR allocation
* Revert domain router dao changes
* Create DHCP relay on VR nic plug to NSX network
* Link DHCP relay config to segment after creation
* [NSX] Cleanup DHCP Relay config on segment deletion (#108)
* Cleanup DHCP Relay config on segment deletion
* update segment & relay name generators and call delete dhcprelay after deletion of segment
* address comment
* [NSX] Fix DHCP relay config deletion was missing zone name (#8068)
* [NSX] Refactor API wrapper operations (#8059)
* [NSX] Refactor API wrapper operations
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* Nsx unit tests (#8090)
* Add tests
* add test for NsxGuestNetworkGuru
* add unit tests for NsxResource
* add unit tests for NsxElement
* cleanup
* [NSX] Refactor API wrapper operations
* update tests
* update tests - add nsxProviderServiceImpl test
* add unit test - NsxServiceImpl
* add license
* Big refactor
* Address review comment
* change network cidr to cidr to prevent NPE
* add domain and zone names to the various networks - vpc & tier
* fix tests
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* modify NSX resource naming convention (#8095)
* modify NSX resource naming convention
* remove unused imports
* add a setup phase between design and implementation of a network for intermediary steps
* add method to all classes
* NSX: Refactor Network & VPC offering (#8110)
* [NSX] Refactor API wrapper operations
* Network offering changes for NSX
* fix services and provider combination
* address comments: rename param
* update nsx_mode parameter
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test
* [NSX] Allow NSX isolated networks (#8132)
* Add network offerings for NSX on isolated networks
* Fix offerings creation
* In progress NSX isolated network
* Fixes
* Fix NIC allocation to router
* NSX: Add Step for Adding Public traffic network for NSX During zone creation (#8126)
* NSX: Add Step for Adding Public traffic network for NSX
* address comments and cleanup
* address comment
* remove indent
* NSX: Create and Delete static NAT & Port forward rules (#8131)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* fix lint check
* fix smoke tests
* fix smoke tests
* Nsx add lb rule (#8161)
* NSX: Create and delete NSX Static Nat rules
* fix issues with static nat
* add static nat
* Support to add and delete Port forward rules
* add license
* fix adding multiple pf rules
* cleanup
* NSX: Add support to create and delete Load balancer rules
* fix deletion of lb rules
* add header file and update protocol detail
* build failure fix
* [NSX] Add SNAT support (#8100)
* In progress add source NAT
* Fix after merge
* Fix tests
* Fix NPE on isolated network deletion
* Reserve source NAT IP when it's not passed for NSX VPC
* Create source NAT rule on VR NIC allocation
* Fix update VPC and remove VPC to update and remove SNAT rule
* Fix packaging
* Address review comment
* Fix build
* fix build - unused import
* Add defensive checks
* Add missing design to NSX public guru
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix VR public NIC allocation (#8166)
* NSX: fix LB member addition and deletion and add defensive checks (#8167)
* Fix public NIC NPE on broadcast URI
* NSX: Router Public nic to get IP from systemVM Ip range (#8172)
* NSX: Router Public nic to get IP from systemVM Ip range
* Fix VR IP address and setSourceNatIp command
* NSX: hide systemVM reserved IP range SourceNAT
* fix test
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix test failure
* test failure fix
* [NSX] Fix update source NAT IP (#8176)
* [NSX] Fix update source NAT IP
* Fix startup
* Fix API result
* NSX - add LB route Advertisement (#8192)
* [NSX] Add ACL types support (#8224)
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Fix DROP action rules and transform tests
* Add new ACL rules
* Fixes
* associate security policies with groups and not to DFW and add deletion of rules
* Fix name convention
---------
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* NSX: Fix creation of VPCs (#8320)
* Fix ACL rules creation (#8323)
* [NSX] Fix database views (#8325)
* NSX: Add CKS Support & Firewall rules for Isolated Networks (#8189)
* NSX: Add ALL LB IP to the list of route advertisements in tier1
* NSX: Support Source NAT on NSX Isolated networks
* NSX: Cks Support
* NSX: Create segment group on segment creation
* Add unit tests
* Remove group for segment before removing segment
* Create Distributed Firewall rules
* Remove distributed firewall policy on segment deletion
* Fix policy rule ID and add more unit tests
* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs
* Add Firewall rules
* build failure - fix unit test
* fix npes
* Add support to delete firewall rules
* update nsx cks offering
* add license
* update order of ports in PF & FW rules
* fix filter for getting transport zones
* CKS support changed - MTU updated, etc
* add LB for CKS on VPC
* address comments
* adapt upstream cks logic for vpc
* revert mtu hack
* update UI changes as per upstream fix
* change display text for CKS n/w offerings for isolated and VPC tiers
* add extra line for linter
* address comment
* revert list change
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* fix ui build failure
* [NSX] Address SonarCloud Bugs (#8341)
* [NSX] Address SonarCloud Bugs
* Fix NSX API connection issues
* NSX: Add unit tests to increase coverage (#8355)
* NSX: Add unit tests
* cleanup unused imports
* add more unit tests
* add tests for publicnsxnetworkguru
* add license
* fix build failures
* address sonar comment
* fix security hotspots
* NSX: Add more unit tests (#8381)
* NSX : Unit tests
* remove unused imports
* remove unused import causing build failure
* fix build failures due to unused imports
* fix build failure
* fix test assertion
* remove unused imports
* remove unused import
* Nsx UI zone bug (#8398)
* NSX: Attempt to fix NSX Zone creation bug for public networks
* fix zone wizard public traffic issue
* add proper filtering of offerings based on VPC nsx mode
* clean up console logs
* NSX: Fix code smells and reported bugs (#8409)
* NSX: Fix code smells and reported bugs
* fix override issue
* remove unused imports
* fix test
* refactor code to reduce complexity
* add license
* cleanup
* fix build failure
* fix build failure
* address comments
* test - add config to ignore certain files from test coverage
* test exclusion of classes from test cov
* revert pom changes
* [NSX] Add more unit tests (#8431)
* [NSX] Add more unit tests
* More tests
* Fix build errors
* NSX: Prevent creation of L2 and Shared networks for NSX (#8463)
* NSX: Prevent creation of L2 and Shared networks for NSX
* add checks to backend to prevent creation of l2 and shared networks in nsx zones and filter only nsx offerings when creating isolated networks
* cleanup
* NSX: Fix code smells (#8436)
* NSX: Fix code smells
* Add changes to service creation logic
* CKS: Add action during firewall rule creation (#8498)
* NSX,UI: Deduplicate network list when creating kubernetes clusters (#8513)
* NSX: Make LB service selectable in network offering (#8512)
* NSX: Make LB service selectable in network offering
* fix label
* address comments
* address comments
* NSX: Add appropriate error message when icmp type is set to -1 for NSX (#8504)
* NSX: Add appropriate error message when icmp type is set to -1 for NSX
* address comments
* update text
* fix test
* fix test - build failure
* fix test - build failure
* NSX: Cleanup NSX resources during k8s cluster cleanup (#8528)
* fix test failure
* NSX: Improve segment deletion process (#8538)
* NSX: Add passive monitor for NSX LB to test whether a server is available (#8533)
* NSX: Add passive monitor for NSX LB to test whether a server is available
* Add active monitors too
* fix build failure
* NSX: Add check for ICMP code / type for NSX zones (#8542)
* NSX: Fix Routed Mode for Isolated and VPC networks (#8534)
* NSX: Fix Routed Mode for Isolated and VPC networks
* NSX: Fix Routed mode - add checks for ports added for FW rules
* clean up code
* fix build failure
* NSX: Add retry logic with sleep to delete segments (#8554)
* NSX: Add retry logic with sleep to delete segments
* add logs
* NSX: Fix custom ACL check (#2)
* NSX: Fix custom ACL check
* NSX: Fix custom ACL check
* Nsx vpc routed mode (#5)
* NSX: Fix VPC routed mode
* NSX: VPC routed mode
* remove unnecessary changes
* Nsx: Support internal LB (#4)
* NSX: Support internal LB service in NSX
* add lb removal logic
* Fix UI issue hiding internal LB tab
* Refactor method name
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* NSX: Improve NSX resource cleanup process (#3)
* Fix unit test
* NSX: Add SourceNAT service to the default Routed offering for VPC (#13)
* Fix VPC restart with cleanup (#12)
* NSX: Fix ACL rule removal on replacement and fix rule order (#11)
* NSX: fix smoke test failure for ACLs (#9)
* Fix unit tests
* Fix NSX plugin pom XML
* NSX: Add support to re-order ACL rules (NSX FW rules) (#14)
* [WIP] NSX: Add support to re-order ACL rules (NSX FW rules)
* fix reordering of acl rules on all networks that it is associated to
* clean up and attempt test fix
* Fix tests
* Remove unused import
* tweak reorder logic
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Fix zone creation issue for internal load balancer
* Fix
* Fix unit test
* fix logger
* fix logger
* fix logger
* NSX: Fix VPC form to ignore source NAT IP when creating VPCs and fix label
* Move SQL changes to the newest schema file
* NSX: Last Fixes
* Fix build
---------
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Introduced a new API checkVolumeAndRepair that allows users or admins to check volumes and repair them if any leaks are observed.
Currently this is supported only for KVM
* some fixes
* Added unit tests
* addressed review comments
* add repair volume while granting access
* Changed repair parameter to accept both leaks/all
* Introduced new global setting volume.check.and.repair.before.use to do volume check and repair before VM start or volume attach operations
* Added volume check and repair changes only during VM start and volume attach operations
* Refactored the names to look similar across the code
* Some code fixes
* remove unused code
* Renamed repair values
* Fixed unit tests
* changed version
* Address review comments
* Code refactored
* used volume name in logs
* Changed the API to Async and the setting scope to storage pool
* Fixed exit value handling with check volume command
* Fixed storage scope to the setting
* Fix volume format issues
* Refactored the log messages
* Fix formatting
Feature spec: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Granular+Resource+Limit+Management
Introduces the concept of tagged resource limits for granular resource limit management. Limits can be enforced on accounts and domains for the deployment of entities for a tagged resource. Currently, tagged resource limits can be used for the following resource types:
Host limits
- user_vm
- cpu
- memory
Storage limits
- volume
- primary_storage
The following global settings can be used to specify tags for which limits need to be enforced:
Host: `resource.limit.host.tags`
Storage: `resource.limit.storage.tags`
Options for specifying tagged resource limits and viewing tagged resource usage are made available in the UI.
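For illustration, a minimal sketch of how these two settings could be declared as dynamic ConfigKeys; only the setting names come from this changelog, while the category, defaults, and descriptions below are assumptions, not the actual declarations:
```java
import org.apache.cloudstack.framework.config.ConfigKey;

public interface ResourceLimitTagConfig {
    // Assumed declarations for the settings named above.
    ConfigKey<String> ResourceLimitHostTags = new ConfigKey<>("Advanced", String.class,
            "resource.limit.host.tags", "",
            "Comma-separated list of host tags for which tagged resource limits are enforced",
            true);
    ConfigKey<String> ResourceLimitStorageTags = new ConfigKey<>("Advanced", String.class,
            "resource.limit.storage.tags", "",
            "Comma-separated list of storage tags for which tagged resource limits are enforced",
            true);
}
```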
Enhances the use of templatetag for VM deployment and template creation
Adds option to list service/compute offerings that can be used with a given template. A new parameter named templateid has been added.
Adds option to list disk offerings with a suitability flag for a virtual machine. A new parameter named virtualmachineid has been added to the listDiskOfferings API which, when passed, returns the suitableforvirtualmachine param in the response.
* Normalize logs
All classes that could inherit their loggers from their parents had their own loggers deleted;
Most loggers did not need to be static, so they were normalized to instance fields;
All loggers are protected now;
Static loggers are now named 'LOGGER';
Non-static loggers are now named 'logger';
New class DbUpgradeAbstractImpl created so that all Upgraders extend it and inherit its logger
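A minimal sketch of the resulting logger convention (the class names here are illustrative, not from the codebase):
```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// The parent declares the single protected, non-static logger; getClass() makes
// log lines report the concrete subclass, so children do not redeclare a logger.
abstract class ExampleManagerBase {
    protected Logger logger = LogManager.getLogger(getClass());
}

class ExampleManagerImpl extends ExampleManagerBase {
    void doWork() {
        logger.debug("Logging through the inherited logger");
    }
}
```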
* Upgrade log4j
* fix errors caused by the merge
* Refactor cglibThrowableRenderer functionality to log4j2 and upgrade the last configuration files
* fix sonarcloud bug
* Fix errors caused by merge, remove some unused loggers, and rename a variable that was mistakenly renamed on the normalization commit
* Readd snmpTrapAppender, remove TestAppender
* Regenerate changes
* regenerate changes
* refactor last custom appender
* fix systemvm configuration xml
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Fix utils pom
* fix some tests
* regenerate changes
* Fix jar being printed on exception
* fix logging in system VMs, fix commands not having log4j2 classpath.
* regenerate changes
* Fix some unwanted renamings
* fix end of file
* regenerate changes
* regenerate changes
* fix merge error
* regenerate changes
* fix tests
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* readd reload4j to tungsten as juniper depends on it
* Regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* re-add reload4j dependency to network-contrail, as juniper depends on it
* regenerate changes
* regenerate changes
* regenerate changes
* fix typo
* regenerate changes
* regenerate changes
* Fix end of files
* regenerate changes
* add log4j2 to cloud-utils-SHADED.jar
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* Regenerate changes
* regenerate changes
* Regenerate changes
* Regenerate changes
* fix some tests
* Regenerate changes
* Regenerate changes
* fix test
* Regenerate changes
* Regenerate changes
This PR fixes a bug introduced in #8502. The timeout for script execution was set to 60 ms instead of 60 s, which resulted in the host not getting UEFI enabled. This is a blocker for the 4.19 release.
We do this by introducing a new agent parameter `agent.script.timeout` (default - 60 seconds) to use as a timeout for the script checking host's UEFI status.
We also externalize the timeout for the ReadyCommand by introducing a new global setting `ready.command.wait` (default - 60 seconds).
For ModifyStoragePoolCommand, we don't externalize the timeout, to avoid confusing the user, since the required timeout can vary depending on the provider in use and we are only setting the wait for the default host listener for now. Instead, we reuse the global `wait` setting by dividing it by `5`, making the default value 6 minutes (1800/5 = 360 s) for ModifyStoragePoolCommand.
Note: the actual time the MS waits is twice the wait set for a Command. Check the reference code below.
19250403e6/engine/orchestration/src/main/java/com/cloud/agent/manager/AgentAttache.java (L406-L442)
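As a rough sketch, assuming a ConfigKey-style declaration for the new global setting (the surrounding interface and helper method here are hypothetical), the setting and the divide-by-5 reuse of the global `wait` could look like:
```java
import org.apache.cloudstack.framework.config.ConfigKey;

public interface ReadyCommandConfigSketch {
    // Assumed declaration for the new global setting described above.
    ConfigKey<Integer> ReadyCommandWait = new ConfigKey<>("Advanced", Integer.class,
            "ready.command.wait", "60",
            "Timeout in seconds to wait for a host's ReadyCommand to complete", true);

    // ModifyStoragePoolCommand reuses the global `wait` (1800 s by default)
    // divided by 5, i.e. 360 s; this helper name is hypothetical.
    static int modifyStoragePoolWait(int globalWaitSeconds) {
        return globalWaitSeconds / 5;
    }
}
```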
This PR adds the capability in CloudStack to convert VMware Instances disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. vSphere/VMware setup might be managed by CloudStack or be a standalone setup.
CloudStack will let the administrator select a VM from an existing VMware vCenter in the CloudStack environment or an external vCenter, requesting the vCenter IP, datacenter name and credentials.
The migrated VM will be imported as a KVM instance
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set by the setting convert.instance.process.timeout
Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
CloudStack will delegate the migration action to a KVM host and the host will attempt to migrate the VM by invoking virt-v2v. In case the guest OS is not supported, CloudStack will handle the error and treat the operation as a failure
The migration process using virt-v2v may not be a fast process
CloudStack will not perform any check about the guest OS compatibility for the virt-v2v library as indicated on: https://access.redhat.com/articles/1351473.
Extending the current functionality of KVM Host HA for the StorPool storage plugin and the option for easy integration for the rest of the storage plugins to support Host HA
This extension works like the current NFS storage implementation. It can be used simultaneously with NFS and StorPool storage, or only with StorPool primary storage.
If it is used with different primary storages like NFS and StorPool, and one of the health checks fails for a storage, there is an option to report the failure to the management server with the global config kvm.ha.fence.on.storage.heartbeat.failure. By default this option is disabled; when enabled, the Host HA service will continue with the checks on the host and eventually fence it
This PR adds new functionality to copy snapshots across zones and take snapshots for multiple zones.
Copy functionality is similar to template copy. The source zone acts as the web server from which the destination zone(s) can download the snapshot files. For this purpose, a new API - `copySnapshot` - has been added. The response for copySnapshot will return zone and download details from the first destination zone of the request. This behaviour is similar to the `copyTemplate` API.
In a similar manner, multiple zones can be selected while taking snapshots or creating snapshot policies. For this, the snapshot will be taken in the base zone (in which the volume is present) and then copied to the additional zones. A new parameter - `zoneids` - has been added to the `createSnapshot` and `createSnapshotPolicy` APIs.
As snapshots can be present on multiple zones (secondary stores), a new parameter `zoneid` has been added to delete the snapshot copy on a specific zone.
`listSnapshots` API has been updated to allow listing snapshot entries for different zones/datastores. New parameters - `showUnique`, `locationType` have been added.
Events generated during snapshot operations will now be linked to the snapshot itself rather than the volume of the snapshot.
`listSnapshotPolicies` and `createSnapshotPolicy` APIs will return zone details of the zones in which backup will be scheduled for the policy.
----
New API added
`copySnapshot`
Request and response params updated for APIs
```
- listSnapshots
- deleteSnapshot
- createTemplate
- listZones
- listSnapshotPolicies
- createSnapshotPolicy
```
UI updated for
- Snapshot detail view
- Create snapshot form
- Create snapshot policy form
- Create volume (from snapshot) form
- Create template (from snapshot) form
Doc PR: https://github.com/apache/cloudstack-documentation/pull/344
PR: https://github.com/apache/cloudstack/pull/7873
Fixes case of appending userdata when both template and vm data are either shellscript or cloudconfig
Fixes error when appending gzip userdata
Fixes case where manual userdata text from the VM is not getting decoded/encoded correctly.
Fixes case of appending multipart data when both template and vm data contain same format types.
Refactor - moved validateUserData method to UserDataManager class
Refactor userdata test to check resultant multipart userdata thoroughly
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
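As an illustration of the same-format append case above, a rough sketch (the helper name is hypothetical, and the shebang handling is an assumption, not the exact merge logic):
```java
class UserDataAppendSketch {
    // When template and VM userdata are both shell scripts, drop the duplicate
    // shebang line from the appended part and concatenate the bodies.
    static String appendShellScripts(String templateData, String vmData) {
        String vmBody = vmData.startsWith("#!")
                ? vmData.substring(vmData.indexOf('\n') + 1)
                : vmData;
        return templateData + "\n" + vmBody;
    }
}
```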
This fix ensures that when a datastore cluster in VMware is added as a primary storage pool in CloudStack, all the child datastores (which already exist in CS) are in the Up state.
For example:
1. Datastore Cluster DS has two child datastores A and B in vCenter. (B is already added as a storage pool in CloudStack)
2. Now try to add datastore cluster DS into CloudStack as a primary storage pool
3. CloudStack tries to add child datastores A and B in CloudStack; since B is already there in CloudStack, it will reuse the existing storage pool entry and keep it under the parent storage pool DS.
During step 3 we now check whether B is in the Up state.
* Auto Enable Disable KVM hosts
* Improve health check result
* Fix corner cases
* Script path refactor
* Fix sonar cloud reports
* Fix last code smells
* Add marvin tests
* Fix new line on agent.properties to prevent host add failures
* Send alert on auto-enable-disable and add annotations when the setting is enabled
* Address reviews
* Add a reason for enabling or disabling a host when the automatic feature is enabled
* Fix comment on the marvin test description
* Fix for disabling the feature if the admin has manually updated the host resource state before any health check result
The alert.email.addresses description is ambiguous and can cause doubts for operators. This description has been altered to avoid confusion. In addition, typos in alert.smtp.useStartTLS and project.smtp.useStartTLS have been fixed.
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
This PR creates a new API createConsoleAccess to create a VM console URL, allowing connections from other UI implementations. To avoid replay attacks, console access is enhanced to use a one-time token per session
New configuration added:
consoleproxy.extra.security.validation.enabled: Enable/disable extra security validation for console proxy using a token
Documentation PR: apache/cloudstack-documentation#284
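A minimal sketch of the one-time-token idea (the store, class, and method names below are hypothetical, not the actual implementation):
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ConsoleTokenValidatorSketch {
    // Token issued per console session; hypothetical in-memory store.
    private final Map<String, String> tokenStore = new ConcurrentHashMap<>();

    void issue(String sessionId, String token) {
        tokenStore.put(sessionId, token);
    }

    // One-time semantics: the token is removed on first use, so a replayed
    // request presenting the same token fails validation.
    boolean validateAndConsume(String sessionId, String token) {
        String expected = tokenStore.remove(sessionId);
        return expected != null && MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                token.getBytes(StandardCharsets.UTF_8));
    }
}
```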
Adds option to provide custom DNS servers for isolated network, shared network and VPC tier.
New API parameters added in createNetwork API along with the corresponding response parameters.
Doc PR: apache/cloudstack-documentation#276
* add global setting to allow parallel execution on vmware
* cleanup setting distribution for vmware.create.full.clone
* query setting in vmware guru
* don't touch other hypervisors' commands
* guru hierarchy cleanup
This PR enhances the existing PowerFlex/ScaleIO storage plugin to support a separate (storage) network for the Hosts(KVM)/Storage connection, mainly the SDC (ScaleIO Data Client) connection.
* refactor and log trace
* tracelogs
* shuffle pools with real randomiser
* single retrieval of async job context
* some review comments addressed
* Apply suggestions from code review
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* log formatting
* integration test for distribution of volumes over storages
* move test to smoke tests
* imports
* sonarcloud issue # AYCOmVntKzsfKlhz0HDh
* spellos
* review comments
* review comments
* sonarcloud issues
* unittest
* import
* Update AbstractStoragePoolAllocatorTest.java
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Persistent Network feature & Marvin component tests
* Cleaned up comments and imports
* fixed small error
* add support to set up persistent networks' resources when a disabled host is enabled
* small fix
* use wildcard instead of hard-coding the bridge name
* allow clean up of resources when removing a host in maintenance mode
* skip test for simulator hypervisor
Co-authored-by: shatoboar <sang-woo.bae@campus.tu-berlin.de>
* Mount disabled storage pool on host reboot
Add a global setting so that disabled pools will be mounted
again on host reboot
* fix build error
* Update description
* add cluster-wide support
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Creating a compute offering also takes a few disk-related parameters, which in any case go only to the corresponding disk offering. I think this design was initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless: it should consider the new storage tags and new size, and place the volume in the appropriate state as defined in the disk offering.
more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
* Schema changes and disk offering column change from "type" to "compute_only"
* Few more changes
* Decoupled service offering and disk offering
* Remove diskofferingid from vminstance VO
* Decouple service offering and disk offering states
* diskoffering getsize() is only for strict disk offerings
* Fix deployVM flow
* Added new API params to compute offering creation
* Add diskofferingstrictness to serviceoffering vo under quota
* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case
Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume
* Fix User vm response to show proper service offering and disk offerings
* Added disk size strictness in disk offering response
* Added disk offering strictness to the service offering response
* Remove comments
* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form
* Added diskoffering details to the service offering response
* Added UI changes in deployvm wizard to accept override disk offering id
* Fix delete compute offering
* Fix VM deployment from custom service offering
* Move uselocalstorage column access from service offering to disk offering
* UI: Separated compute and disk related parameters in add compute offering wizard, also added association to disk offering
* Fixed diskoffering automatic selection on add compute offering wizard
* UI: move compute only toggle button outside the box in add compute offering wizard
* Added volumeId parameter to listDiskOfferings API and the disksizestrictness flag of the current disk offering is honored while list disk offerings
* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration
* Added disk offering change checks during resize volume operation
* Added new API changeofferingforVolume API and corresponding changes
* Add UI form for changeOfferingForVolume API
* Fix UI conflicts
* Fix service offering usage as disk offering
* Fix unit test failures
* fix user_vm_view
* Addressed review comments
* Fixed service_offering_view
* Fix service offering edit flow
* Fix service offering constructor to address custom offering
* Fix domain_router_view to get proper service offering id
* Removed unused import
* Addressed review comments and fixed update service offering flow with storage tags
* Added marvin test cases for checking disk offering strictness
* review comments addressed
* Remove system_use column from disk offering join
* update volume_view to update system_use column from service offering and not disk offering
* Fix changeOfferingForVolume API for custom disk offering
* Fix global setting implementation
* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view
* Changes for override root disk offering in deployvm wizard in case of custom offering
* Fix a unit test case
* Fixed recent unit test cases with new serviceofferingvo constructor
* Fix unit test in VolumeApiServiceImpl
* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow
* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering
* Fix smoke test failures
* Added tool tip for migrate volume UI form
* Address review comments and fix UI form of deploy VM in case of ISO.
* Fixed resize volume UI form for data disk
* UI changes to disable override root disk size when override root disk offering is enabled
* UI fix in deploy vm wizard
* Fix listdiskoffering after rebasing with main
* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list
* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form
* Fix false response on updateDiskOffering API
* Added search field for changeofferingforvolume UI form
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Removed DB changes from 4.16 upgrade file
* Resolving merge conflicts with main 4.17
* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.
* UI: Added automigrate checkbox in scale VM form
* Added since attributes to new API params
* Added shrinkOK parameter to changeofferingforvolume API
* Added shrinkOk param to UI in changeOfferingforVolume form
* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form
* Removed old foreign key constraint on IDs of service offering and disk offering
* Allow resize and automigrate of root volume if required in all cases of service offering change
* Allow only resize to higher disk size from UI
* Fixing vue syntax error
* Make UI changes to provide root disk size box when the linked disk offering is of custom
* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms
* Fix resize volume operation to update the VM settings
* Fix migratevolume form to pick selected storage pool id in list diskofferings API
* Improve logs
* Remove unnecessary comments
* Use diamond inference
* Fix some logs
* Remove unnecessary unboxing
* Create method to handle job result
* Remove unused vars and fix some logics
* Extract code to method and a few adjustments
* Use CollectionUtils
* Extract pending work job validation to method
* Create new constructors
* Extract work job and info creation to a method
* Extract submit async job to a method
* Extract find vm by id to a method
* Change log level from trace to debug
* Remove unused methods and add logs
* Undo code removal
* Remove asserts and fix conditionals
* Address @GabrielBrascher reviews
* Remove double quotes from keys in manual json
* Undo code removal
* Add object to log
* Remove statement from try/catch
* Implement toString with ReflectionToStringBuilderUtils
* Fix errors related to merge main
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Check the pool used space from the bytes used in the storage pool stats collector, for non-default primary storage pools that cannot provide stats.
Also, update the used bytes from the pool stats answer for non-default primary storage pools if the pool can provide stats.
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* space fix
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Extend addAnnotation and listAnnotations APIs
* Allow users to add, list and remove comments
* Add adminsonly UI and allow admins or owners to remove comments
* New annotations tab
* In progress: new comments section
* Address review comments
* Fix
* Fix annotationfilter and comments section
* Add keyword and delete action
* Fix and rename annotations tab
* Update annotation visibility API and update comments table accordingly
* Allow users seeing all the comments for their owned resources
* Extend comments for volumes and snapshots
* Extend comments to multiple entities
* Add uuid to ssh keypairs
* SSH keypair UI refactor
* Extend comments to the infrastructure entities
* Add missing entities
* Fix upgrade version for ssh keypairs
* Fix typo on DB upgrade schema
* Fix annotations table columns when there is no data
* Extend the list view of items to show if they have comments
* Remove extra test
* Add annotation permissions
* Address review comments
* Extend marvin tests for annotations
* updating ui stuff
* addition to toggle visibility
* Fix pagination on comments section
* Extend to kubernetes clusters
* Fixes after last review
* Change default value for adminsonly column
* Remove the required field for the annotationfilter parameter
* Small fixes on visibility and other fixes
* Cleanup to reduce files changed
* Rollback extra line
* Address review comments
* Fix cleanup error on smoke test
* Fix sending incorrect parameter to checkPermissions method
* Add check domain access for the calling account for domain networks
* Fix only display annotations icon if there are comments the user can see
* Simply change the Save button label to Submit
* Change order of the Tools menu to prevent users getting a 404 error on clicking the text instead of expanding
* Remove comments when removing entities
* Address review comments on marvin tests
* Allow users to list annotations for an entity ID
* Allow users to see all comments for allowed entities
* Fix search filters
* Remove username from search filter
* Add pagination to the annotations tab
* Display username for user comments
* Fix add permissions for domain and resource admins
* Fix for domain admins
* Trivial but important UI fix
* Replace pagination for annotations tab
* Add confirmation for delete comment
* Lint warnings
* Fix reduced list as domain admin
* Fix display remove comment button for non admins
* Improve display remove action button
* Remove unused parameter on groupShow
* Include a clock icon to the all comments filter except for root admin
* Move cleanup SQL to the correct file after rebasing main
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* Global setting to select preferred storage pool
Currently all volumes are allocated on storage pools
based on the capacity or the algorithm selected. Sometimes
we need to deploy all volumes of a particular account in a
specific storage pool, and in that case it's not possible.
With this change, we can specify the uuid of the preferred
storage pool, so that all volumes of the account will be
deployed in this pool
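A minimal sketch of the intended allocation behavior (the sorting helper below is an assumption based on the description; the actual allocator logic may differ):
```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class PreferredPoolSorterSketch {
    // Order candidate pools so the account's preferred pool (by uuid) comes
    // first; List.sort is stable, so the allocator's order is kept otherwise.
    static List<String> orderByPreference(List<String> candidatePoolUuids, String preferredUuid) {
        List<String> ordered = new ArrayList<>(candidatePoolUuids);
        ordered.sort(Comparator.comparing((String uuid) -> !uuid.equals(preferredUuid)));
        return ordered;
    }
}
```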
* code feedback
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs - vmware-tools.iso, xs-tools.iso - are marked as PERHOST in the DB. They are active but not downloaded to the secondary storages and hence have no template-zone entry.
Skips the template-zone check for such templates.
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Added disk provisioning type support for VMWare
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Externalize secondary storage capacity threshold
* Use default value as threshold when config value is lower than 0.0
* Move config to CapacityManager
* Validate config in CapacityManagerImpl
* Use config in StorageOrchestrator
* Change config description
* Remove unused import
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
- Added connection manager to the gateway client.
- Renew the client session on '401 Unauthorized' response (sketched after this list).
- Refactored the gateway client calls for GET and POST methods.
- Consume the http entity content after login/(re)authentication and close the content stream if it exists.
- Updated storage pool client connection timeout configuration 'storage.pool.client.timeout' to non-dynamic.
- Added storage pool client max connections configuration 'storage.pool.client.max.connections' (default: 100) to specify the maximum connections for the ScaleIO storage pool client.
- Updated unit tests.
Also blocked the attach volume operation for uploaded volumes on the ScaleIO/PowerFlex storage pool
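A minimal sketch of the renew-on-401 pattern mentioned above, using Apache HttpClient; the class shape and the re-login helper are assumptions:
```java
import java.io.IOException;

import org.apache.http.HttpResponse;
import org.apache.http.HttpStatus;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpUriRequest;
import org.apache.http.util.EntityUtils;

class GatewayClientSketch {
    private final HttpClient client;

    GatewayClientSketch(HttpClient client) {
        this.client = client;
    }

    HttpResponse executeWithRenew(HttpUriRequest request) throws IOException {
        HttpResponse response = client.execute(request);
        if (response.getStatusLine().getStatusCode() == HttpStatus.SC_UNAUTHORIZED) {
            // Session expired: consume the stale entity, re-authenticate, retry once.
            EntityUtils.consumeQuietly(response.getEntity());
            renewClientSession();
            response = client.execute(request);
        }
        return response;
    }

    // Hypothetical re-login helper: re-authenticates and refreshes the session token.
    private void renewClientSession() {
    }
}
```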
* forceha: fix vm is not started if it is powered off from inside
steps to reproduce the issue
(1) make sure force.ha is true in global setting. if not, change it to true, and restart mgt server
(2) create a service offering , ha is not enabled
(3) create a vm
(4) log into the vm, and power off via cli.
expected result: vm is started again by cloudstack
actual result: vm is not started.
* forceha: fix vms are still running if host is force-removed
when a host is force-removed, vms are stopped in cloudstack but not stopped on the host
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
"success": true
}
```
after some minutes, vms are still running on the host
```
root@mgt01:~# ssh node63 virsh list
Id Name State
---------------------------
1 i-2-19-VM running
2 i-2-11-VM running
```
error messages are
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
Support for a datastore cluster as primary storage is already there, but changes to the datastore cluster at vCenter, like addition/removal of datastores, are not synchronised with CloudStack directly; that required removing the primary storage from CloudStack and adding it again.
Here, synchronisation of the datastore cluster is fixed without the need to remove and re-add the datastore cluster.
1. A new API is introduced syncStoragePool which takes datastore cluster storage pool UUID as the parameter. This API checks if there any changes in the datastore cluster and updates management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, the management server will create a new child storage pool in the database under the datastore cluster. If the new child storage pool is already added as an individual storage pool, the existing storage pool entry will be converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastore.
Fixes#4517
Adds capacity checks for RandomAllocator (host allocator)
Factors out host cpu capability and capacity check wrt serviceoffering code into CapacityManager.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
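A minimal sketch of the allocator loop after this refactor; the combined check interface below stands in for the factored-out CapacityManager code and is an assumption based on the description:
```java
import java.util.ArrayList;
import java.util.List;

class RandomAllocatorSketch {
    interface CapacityCheck {
        boolean hasCpuCapabilityAndCapacity(long hostId);
    }

    // After shuffling candidates, keep only hosts that pass the factored-out
    // CPU capability and capacity check against the service offering.
    static List<Long> filterSuitable(List<Long> shuffledHostIds, CapacityCheck capacityManager) {
        List<Long> suitable = new ArrayList<>();
        for (long hostId : shuffledHostIds) {
            if (capacityManager.hasCpuCapabilityAndCapacity(hostId)) {
                suitable.add(hostId);
            }
        }
        return suitable;
    }
}
```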
Public IP addresses dedicated to one domain should not be accessible
by other domains. Also, the root admin should be able to display all
public IP addresses in the system.
Currently the following issues exist:
1. A public IP address assigned to one domain can be accessed by
other sibling domains
2. If use.system.public.ip is false then child domains should not
see public IPs of the ROOT domain
Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 59,
"publicipaddress": [
```
After fix
```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 10,
```
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore
- Check the router filesystem is writable or not, before performing health checks
=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not
- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)
* Addressed some issues for few operations on PowerFlex storage pool.
- Updated migration volume operation to sync the status and wait for migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs(Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
and Minor improvements in PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
Steps to reproduce the issue:
(1) Create 10000 service offerings (by the db changes below or cloudmonkey).
```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;
DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
DECLARE count INT DEFAULT 10000;
SET @offeringid = (select max(id)+1 from disk_offering);
WHILE count > 0 DO
INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
SET @offeringid = @offeringid + 1;
SET count = count - 1;
END WHILE;
END $$
DELIMITER ;
CALL cloud.insert_service_offering();
mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```
(2) Check the total time of periodical capacity check in cloudstack.
Without this patch, it spends 2.5 seconds (2 hosts)
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```
With this patch, it spends 1.3 seconds (2 hosts)
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```
If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
This feature enables the following:
- Balanced migration of data objects from source Image store to destination Image store(s)
- Complete migration of data
- Setting an image store to read-only
- Viewing download progress of templates across all data stores
Related Primate PR: apache/cloudstack-primate#326
This PR adds minor version support when mounting nfs on the SSVM as requested in #2861
The global setting "secstorage.nfs.version" has been changed to use the String data type which allows any minor version to be specified.
While migrating a VM, in the popup, hosts dedicated to other accounts/domains are also shown as 'Suitable' for migration, which is obviously wrong.
The same issue happens with the API findHostsForMigration
Currently in cloudstack, when we click on "Acquire New IP", it will
randomly acquire an IP from the pool. With this enhancement, it is
possible to select the IP from the drop-down IP list of that network.
Same thing applies for a VPC as well.
* Service layer changes for new way of tracking maintenance progress
* Fixes after offline code review
* Fix marvin tests
* Change state name and add documentation
* Fix test
* Fix and add more unit tests for different cases
* Fix and enhance Marvin Tests
* Fixes for corner cases
* More fixes and logging
* UI fixes
* Some minor changes and reducing VMs on host for more contained tests
* Fixed ssh client auth problem causing test failure
* Code review changes + fixes + some more logging
* Fix flaky tests by adding delays between host states
* Added fetching only enabled hosts for tests
* Make port blocking KVM specific and refactor to handle failure
* Make failing migrations due to tagged host instead of port blocking
* Added additional check for migrating VMs
* Refactor to use single place for methods checking maintenance states
* Add missing HA config keys
* Change time value to seconds
* Change Integer to Long
* Using ConfigKey defaultValue
* Do some code refactoring
* Simplify code
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548
Live storage migration on KVM under these conditions:
- From source and destination hosts within the same cluster
- From NFS primary storage to NFS cluster-wide primary storage
- Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage capability for KVM, if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.
Additional notes:
To use this feature set the storage_motion_supported=1 in the hypervisor_capability table for KVM. This is done by default as the feature may not work in some environments, read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
Fix for CentOS 7:
Create a repo file in /etc/yum.repos.d/:
```
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
```
Then install the packages:
```
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
```
Reboot host
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Allows creating storage offerings associated with particular domain(s) and zone(s). In the create disk/storage offering form UI, a multi-select control has been added to select desired zone(s), and the domain select element has been made multi-select.
createDiskOffering API has been modified to allow passing list of domain and zone IDs with keys domainids and zoneids respectively. These lists are stored in DB in cloud.disk_offering_details table with 'domainids' and 'zoneids' key as string of comma separated list of IDs. Response for create, update and list disk offering APIs will return domainids, domainnames, zoneids and zonenames in details object of offering.
listDiskOfferings API has been modified to allow passing zoneid to return only offerings which are associated with the zone.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Keep connection alive when on maintenance
* Refactor cancel maintenance and unit tests
* Add marvin tests
* Refactor
* Changing the way we get ssh credentials
* Add check on SSH restart and improve marvin tests
* Migrate template to target host if needed.
Fix KVM VM local storage live migration by migrating its template to the
target host if needed.
* Address reviewer and add method that updates the DB template reference
* Remove deprecated Config.PrimaryStorageDownloadWait
* Code formating of @Inject to follow checkstyle
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations
* Fix indentation of marvin test file and reformat against PEP8
* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.
* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one
* Fix unhandled NPE during VM migration
* Revert back to distinct event descriptions for VM to host or storage pool migration
* Reformat test_primary_storage file against PEP-8 and Remove unused imports
* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils