This PR fixes issue #6209, where the snapshot revert operation fails after certain volume operations such as migrate VM with volume, migrate volume, or reinstall VM.
The root cause is that, after these volume operations, the primary storage entry referenced for the volume is deleted or stale. The fix resolves the primary datastore entry via the volume itself and continues the operation; a hedged sketch of that lookup follows.
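A minimal sketch of that lookup, assuming the CloudStack DAO types below; the helper name (resolvePrimaryStore) is hypothetical and the actual fix lives in the snapshot revert path:

    import com.cloud.storage.VolumeVO;
    import com.cloud.storage.dao.VolumeDao;
    import com.cloud.utils.exception.CloudRuntimeException;
    import org.apache.cloudstack.storage.datastore.db.PrimaryDataStoreDao;
    import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;

    // Hypothetical helper: resolve the primary datastore through the volume's current
    // pool id instead of a stale store reference, so revert still works after
    // migrate VM with volume / migrate volume / reinstall VM.
    StoragePoolVO resolvePrimaryStore(long volumeId, VolumeDao volumeDao, PrimaryDataStoreDao poolDao) {
        VolumeVO volume = volumeDao.findById(volumeId);
        if (volume == null || volume.getPoolId() == null) {
            throw new CloudRuntimeException("No primary storage entry found for volume " + volumeId);
        }
        return poolDao.findById(volume.getPoolId());
    }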
* add global setting to allow parallel execution on vmware
* cleanup setting distribution for vmware.create.full.clone
* query setting in vmware guru
* don't touch other hypervisors' commands
* guru hierarchy cleanup
- Refactor IPv6 related tests
- Adds smoke test for IPv4 network to IPv6 upgrade
- Adds smoke test for IPv6 VPC
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
While deleting a traffic type, ACS validates whether any VM is related to it. However, if several physical networks contain that traffic type, ACS does not filter by physical network when doing the validation. For instance, with two (2) physical networks containing the traffic type Guest, the first with related VMs and the second without, trying to remove the traffic type from the second one makes ACS return the message "The Traffic Type is not deletable because there are existing networks with this traffic type:Guest".
The API deleteTrafficType was designed to filter by the physical network the traffic type belongs to; however, due to a typo this filtering was not being applied correctly. This PR fixes the typo to honor the intended API behavior.
In an advanced zone I created 4 physical networks, one for each traffic type (Public, Guest, Management, Storage), and instantiated some VMs so they would get guest IPs. In the Public physical network I added a Guest traffic type. I then tried to remove that new Guest traffic type, which had no VMs related to it; before the changes I got the message "The Traffic Type is not deletable because there are existing networks with this traffic type:Guest", and after the changes I could successfully remove it via the API deleteTrafficType. I also tried to remove the Guest traffic type which did have VMs related to it and, as expected, received the "The Traffic Type is not deletable..." message.
I also created a unit test to validate the data retrieval.
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
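A hedged sketch of the intended validation; the real change is a typo fix in the existing query, and the DAO method name used here (listByPhysicalNetworkTrafficType) is an assumption:

    import java.util.List;
    import com.cloud.network.Networks.TrafficType;
    import com.cloud.network.dao.NetworkDao;
    import com.cloud.network.dao.NetworkVO;

    // Only networks on the *same* physical network should block the deletion; without
    // the physical-network filter, VMs on another physical network's Guest traffic
    // type make the traffic type undeletable everywhere.
    boolean isTrafficTypeDeletable(long physicalNetworkId, TrafficType trafficType, NetworkDao networkDao) {
        List<NetworkVO> networks = networkDao.listByPhysicalNetworkTrafficType(physicalNetworkId, trafficType); // assumed helper
        return networks.isEmpty();
    }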
This PR enhances the existing PowerFlex/ScaleIO storage plugin to support a separate (storage) network for the host (KVM) to storage connection, mainly the SDC (ScaleIO Data Client) connection.
* refactor and log trace
* tracelogs
* shuffle pools with real randomiser
* single retrieval of async job context
* some review comments addressed
* Apply suggestions from code review
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* log formatting
* integration test for distribution of volumes over storages
* move test to smoke tests
* imports
* sonarcloud issue # AYCOmVntKzsfKlhz0HDh
* spelling fixes
* review comments
* review comments
* sonarcloud issues
* unittest
* import
* Update AbstractStoragePoolAllocatorTest.java
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Prevent NPE on reboot stopped VM
* Use VM UUID instead of VM ID
* Apply suggestion
* Refactor and fix start VM output
* Use format instead of concatenation
* ms stats thread added
* initial data collection for management server
* empty list management server metrics command
* bean copy into MS metrics object
* ms status VO
* further API and DB plumbing
* minimal metrics response in API
* remove commented, refactor data collection plumbing
* javadocs
* suppress stacktrace on expected error
* update status experiment
* ms status publish framework added
* review comment addressed
* static data to DB and API, /proc/ reading
* addressing review comments
* ui for ms details
* small ui adjustment
* beanCopy
* agentcount response and system parameter
* labels
* package-lock
* add version strings to regular list API
* add shutdown time to DB
* add last start and last stop to regular list response
* distro info in regular response/session count added
* metrics as details
* add heap used and remove details map
* thread statuses
* move db upgrade to 4.17
* sysmem
* procmem
* ui demo comments applied
* javadoc
* get conf and log file locations
* loginfo
* cpuLoadStats
* no.remote
* extra spaces removed
* clusterlistener
* add unit to kb value
* revert accidental rename
* silly fqcn removed
* get mem info from bean is possible
* refactor long sequence for readability
* registerListener
* listUsageMetrics and isDbLocal
* rats
* local usage and db or not
* minimal listDbMetrics
* db vars and stats
* cleanup and #queries queried
* db stats calculation
* rat
* remove list response wrapper from single details-lists responses
* rudimentary metrics view
* metrics table cleanup
* table makeup, collection dates
* move component to appropriate location
* capitalisation removed
* rebase error resolved
* rename deamon to daemon
* small style comments applied
* another merge issue
* naming comments and boot time
* stop/start prefixed with server
* layout-fix
* listMSMetrics test and test refactor
* usage metrics test
* db metrics test
* extra validations
* Update ui/public/locales/en.json
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* descriptions of load averages and replicas
* collection time on top
* cpu load on metrics overview
* DbStatsCollection
* some parameter description texts
* labels adjusted
* new output 'kernelversion' and log info cleanup
* labels
* Update api/src/main/java/com/cloud/server/ManagementServerHostStats.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/response/DbMetricsResponse.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/dao/ManagementServerHostDao.java
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
* Update api/src/main/java/org/apache/cloudstack/api/response/ManagementServerResponse.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update api/src/main/java/org/apache/cloudstack/api/response/ManagementServerResponse.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update engine/schema/src/main/java/com/cloud/host/dao/HostDao.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/dao/ManagementServerHostDao.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
* some (more) refactoring suggestions applied
* human readable memory sizes
* rat
* actual collection time instead of query time, improved descriptions
* merge errors fixed
* optional metric values
* javadoc and logging
* names of jmx vars have changed
* vue3-compatibility
* new output parameter type
* lower retention default
* vue3 fixes
* polish comments
* polish comments 2, the reckoning
* note on usage servers
* merge conflict errors
* polish
* conditional assertion to deal with simulator restart
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support for live patching systemVMs and deprecating systemVM.iso. Includes:
- fix systemVM template version
- Include agent.zip, cloud-scripts.tgz to the commons package
- Support for live-patching systemVMs - CPVM, SSVM, Routers
- Fix Unit test
- Remove systemvm.iso dependency
* The following commit:
- refactors logic added to support SystemVM deployment on KVM
- Adds support to copy specific files (required for patching) to the hosts on Xenserver
- Modifies vmops method - createFileInDomr to take cleanup param
- Adds configurable sleep param to CitrixResourceBase::connect() used to verify if telnet to a specific port is possible (if sleep is 0, then default to _sleep = 10000ms)
- Adds Command/Answer for patch systemVMs on XenServer/Xcp
* - Support to patch SystemVMs - VMWare
- Remove attaching systemvm.iso to systemVMs
- Modify / Refactor VMware start command to copy patch related files to the systemvms
- cleanup
* Commit comprises of:
- remove docker from systemvm template - use containerd as container runtime
- update create-k8s-binaries script to use ctr for all docker operations
- Update userdata sent to the k8s nodes
- update cksnode script, run during patching of the cks/k8s nodes
* Add ssh to k8s nodes details in the Access tab on the UI
* test
* Refactor ca/cert patching logic
* Commit comprises of the following changes:
- Use restart network/VPC API to patch routers
- use livePatch API to support patching of only CPVM/SSVM
- add timeout to the keystore setup/import script
* remove all references of systemvm.iso
* Fix keystore-cert-import invocation + refactor cert timeout in CP/SS VMs
* fix script timeout
* Refactor cert patching for systemVMs + update keystore-cert-import script + patch-sysvms script + remove patchSysvmCommand from networkelementcommand
* remove commented code + change core user to cloud for cks nodes
* Update ownership of ssh directory
* NEED TO DISCUSS - add on the fly template conversion as an ExecStartPre action (systemd)
* Add UI changes + move changes from patch file to runcmd
* test: validate performance for template modification during seeding
* create vms folder in cloudstack-commons directory - debian rules
* remove logic for on the fly template convert + update k8s test
* fix syntax issue - causing issue with shared network tests
* Code cleanup
* refactor patching logic - certs
* move logic of fixing rootdiskcontroller from upgrade to kubernetes service
* add livepatch option to restart network & vpc
* smooth upgrade of cks clusters
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* address comments
* add temp links of download.cloudstack.org
* address part of the comments
* address comments
* update containerd config - as version has upgraded to 1.5 from 1.4.12 in 4.17.0
* address comments - simplify
* fix vue3 related icon changes
* allow network commands when router template version is lower but is patched
* add internal LB to the list of routers to be patched on network restart with live patch
* add unit tests for API param validations and new helper utilities - file scp & checksum validations
* perform patching only for non-user i.e., system VMs
* add test to validate params
* remove unused import
* add column to domain_router to display software version and support networkrestart with livePatch from router view
* Make the 'requires upgrade' column consider the package (cloud-scripts) checksum to identify whether it is true/false
* use router software version instead of checksum
* show N/A if no software version reported i.e., in upgraded envs
* fix deb failure
* update pom to official links of systemVM template
* fix mismatch between db uuids and custom attributes uuids
during datastore cluster creation, cloudstack could not
recognize the existing primary storage and created a new one because
the uuid formats were not equal
* remove method call setUuid
* add upgrade step to fix faulty pool uuids
* adapt method to transform uuid each time
* extract error msg
* rm unused import
* add exception to log error as parameter
* adapt sql to fetch wrong uuids
* rm spaces
* move upgrade code to Upgrade41610to41700
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
* get vdisk uuid from vcenter and store it into database
* add vdisk uuid as external_uuid to listVolume response
* add sql upgrade file
* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* update sql add column external_uuid
* Update server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java
Co-authored-by: Wei Zhou <weizhou@apache.org>
* adapt param description for externalUuid
* add 'idempotent column add' to create external_uuid col
* rename method to getExternalDiskUUID
* remove line disk_offering.system_use
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
* Enhancement: create Shared networks and VPC private gateways by users
* UI bug fix: pass correct domainid in CreateSharedNetworkForm
* Update #5730: fix test failure with test_guest_vlan_range.py
* Update #5730: fix test failure with test_persistent_network.py
* Update #5730: Add since to new API commands and API parameters
* Update #5730: Get first physical network for VPC private gateway if other ways do not work
* Update #5730: code optimization (return !offering.isSpecifyVlan())
* Update #5730: fix hard-coded network offering id in test_pvlan.py
* Update #5730: skip access check on the network owner if the owner is ROOT/system
* Update #5730: overlap check on cidr/startip/endip
* Update #5730: add methods to get accountid/domainid of shared networks
* Update #5730: improve integration tests
* Update #5730: update as per GutoVeronezi's comments
* Network Sharing: give network access permission to other accounts within a domain
* network: update ip in lb/pf/dnat tables when update vm nic ip
* Update #5757: create 3 separated methods for DNAT/LB/PF update
* travis: install python3-setuptools
* Network Sharing: update integration test
* Update #5769: Remove NetworkPermission.Ops
* Update #5769: Update as per Daan's comments
* Update #5769: Update as per Suresh's comments
* Update #5769: fix UI bug that accounts/projects are not listed
* Update #5769: fix domain admin can deploy vm on L2 network of other users
* Update #5769: Remove method listPermittedNetworkIdsByDomains in NetworkPermissionDao
* Update #5769: Skip network operation permissions check for root admin
* UI: fix create Isolated/L2 network form
* Update #5730: fix create Shared network form
* Update #5769: fix domain admin can deploy vm on L2 network of other users
* test: fix test_storage_policy.py
* Update #5769: fix remove_nic in test_network_permissions.py
* Update #5769: extract some codes to a method
* Update #5769: fix add/remove nic by domain admin
* Update #5769: allow domain admin to enable/disable static nat and create port forwarding rules
* Update #5769: update integration test
* Update #5769: fix unit test AssignLoadBalancerTest.java
* Update #5769: allow normal users to share network permission to other users on UI
* Update #5769: fix small UI bug with label
* Update #5769: Support L2 network as associated network
* test: sleep 30s after restarting mgt server in test_kubernetes_supported_versions.py to fix test failures with test_secondary_storage.py
* Update #5784: revert part of changes in #2420
* Update #5757: invert if condition to reduce code indentation
* Update #5769: fix regular user cannot create L2 network
* Update #5769: Add associated network id and name in private gateway response
* Update #5769: list networks by networkfilter=Account on UI
* Update #5769: fix ui issue when list private gateways or create shared network if no isolated networks
* Update #5769: fix vue ui warnings
* Update #5679: add BaseResponseWithAssociatedNetwork and extract method setResponseAssociatedNetworkInformation
* Update #5679: extract some methods in VpcManagerImpl.java
* Update #5679: Update smoke tests as per Daan's comments
* Update #5769: fix vpc with private gateways cannot be removed when removing an account
* Update #5769: fix unit test failures after merging latest main
* Update #5769: fix schema-41610to41700.sql
* Update #5769: fix Request failed due to empty network offering list on UI
* Update #5769: Throw exception when account is not found by name
* Update #5769: display a warning message if network offering list is empty
* Update #5769: fix an UI bug caused by previous commit b286cb7677
* Update #5769: fix UI bugs due to vue3 merge
* Update #5769: fix issue due to account type refactoring
* Update #5769: fix ui bugs due to vue3
* Update #5769: fix issue due to vue3 upgrade
* Update #5769: fix issue due to vue3 upgrade part 2
* Update #5769: fix issue due to vue3 upgrade part 3
* Update #5769: highlight default scope when create shared network on UI
* Update #5769: fix domain list is not loaded on UI
* Update #5769: fix restart/delete shared network by normal users
* Update #5769: fix restart domain-scope shared network by domain admin
* Update #5769: fix 3 UI bugs (1) double networks in list; (2) icon of first items in list; (3) account/project autoselect
* Update #5769: fix 2 ui bugs; (1) selected project is not changed when change domain; (2) no network should be selected by default
* Update #5769: fix update shared networks by domain admin/regular user
* Update #5769: fix Flicking warning message about the empty network offerings
* Update #5769: display associated network name in shared network info card
* Update #5769: fix create private gateway form
* Update #5769: fix network lists in project view
* Update #5769: fix duplicated networks in network dropdown
* Update #5769: fix failed to create shared network if associated L2 network is Setup
* Update #5769: check AccessType.OperateEntry on network in its implementation
* Revert "Update #5769: check AccessType.OperateEntry on network in its implementation"
This reverts commit c42c489e5b.
* Update #5769: fix keyword search in list guest vlans
* StorPool storage plugin
Adds volume storage plugin for StorPool SDS
* Added support for alternative endpoint
Added option to switch to alternative endpoint for SP primary storage
* renamed all classes from Storpool to StorPool
* Address review
* removed unnecessary else
* Removed check about the storage provider
We don't need this check; we'll determine whether the snapshot is on StorPool by
its name from the path
* Check that current plugin supports all functionality before upgrade CS
* Smoke tests for StorPool plug-in
* Fixed conflicts
* Fixed conflicts and added missed Apache license header
* Removed whitespaces in smoke tests
* Added StorPool plugin jar for Debian
the StorPool jar will be included into cloudstack-agent package for
Debian/Ubuntu
* Refactor create volume snapshot with running VM
* Refactor create volume snapshot with stopped VM
* Refactor create volume from snapshot
* Refactor create template from snapshot
* Refactor volume migration (migrateVolume/ migrateVirtualMachineWithVolume)
* Refactor snapshot deletion
* Refactor snapshot revert
* Adjusts and fix cherry-pick conflicts
* Remove diffuse tests
* Add validation to add the flag '--delete' to the command 'virsh blockcommit' only if the libvirt version is equal to or higher than 6.0.0 (see the version-gate sketch after this list)
* Expunge temporary snapshot only if template creation is from snapshot
* Extract strings to constant
* Remove unused imports
* Fix error on revert backed up snapshot
* Turn method's return to void as it is not used
* Rename method in SnapshotHelper
* Fix folder creation when using SharedMountPoint pool
* Remove static import
* Remove unused method
* Cover take snapshot in centos 7
* Handle right snapshot flag according to qemu version
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
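A minimal sketch of the libvirt version gate described in the '--delete' commit above (libvirt encodes its version as major*1,000,000 + minor*1,000 + release, so 6.0.0 is 6000000); the command-string builder here is illustrative, not the plugin's actual code path:

    static final long LIBVIRT_BLOCKCOMMIT_DELETE_MIN_VERSION = 6000000L; // libvirt 6.0.0

    String buildBlockCommitCommand(String domainName, String diskPath, long libvirtVersion) {
        StringBuilder cmd = new StringBuilder("virsh blockcommit ")
                .append(domainName).append(' ').append(diskPath)
                .append(" --active --pivot --wait");
        if (libvirtVersion >= LIBVIRT_BLOCKCOMMIT_DELETE_MIN_VERSION) {
            cmd.append(" --delete"); // only libvirt >= 6.0.0 deletes the committed snapshot file
        }
        return cmd.toString();
    }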
* Persistent Network feature & Marvin component tests
* Cleaned up comments and imports
* fixed small error
* add support to set up persistent networks' resources when a disabled host is enabled
* small fix
* use wildcard instead of hard-coding the bridge name
* allow clean up of resources when removing a host in maintenance mode
* skip test for simulator hypervisor
Co-authored-by: shatoboar <sang-woo.bae@campus.tu-berlin.de>
* Add persistence of VM stats
* Fix API 'since' attribute
* Add license
* Address GutoVeronezi's reviews
* Fix the order of VM stats in the API response
* Fix msid in VM stats data
* Fix disk stats and add minor improvements
* Add log message
* Build string using ReflectionToStringBuilderUtils
* Rerun checks
Co-authored-by: joseflauzino <jose@scclouds.com.br>
* VM snapshots of running KVM instances using storage provider plugins for disk snapshots
Added a new virtual machine snapshot strategy which uses storage provider plugins to take/revert/delete snapshots.
You can take a VM snapshot without VM memory on a KVM instance, using storage provider implementations for disk snapshots.
Revert and delete are also added as functionality, along with Freeze/Thaw commands for the KVM instance.
The snapshots will be consistent, because we freeze the VM during the snapshotting. Backup to secondary storage is executed after
the VM is thawed, and only if it is enabled in global settings. A hedged sketch of this ordering follows.
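A hedged sketch of that freeze -> plugin snapshot -> thaw ordering. AgentManager and FreezeThawVMCommand do exist in CloudStack, but the constructor used here, the agentManager/storageSnapshotService fields, and this method shape are assumptions (fragment; imports omitted):

    // Freeze the guest (via qemu-guest-agent), let the storage provider plugin snapshot
    // each disk, then always thaw; backup to secondary storage, if enabled in global
    // settings, happens only after the thaw.
    void takeConsistentDiskSnapshots(VirtualMachine vm, List<VolumeInfo> volumes) throws Exception {
        Answer freeze = agentManager.send(vm.getHostId(), new FreezeThawVMCommand(vm.getInstanceName(), true)); // assumed signature
        if (freeze == null || !freeze.getResult()) {
            throw new CloudRuntimeException("Could not freeze guest filesystems of " + vm.getInstanceName());
        }
        try {
            for (VolumeInfo volume : volumes) {
                storageSnapshotService.takeSnapshot(volume); // delegated to the primary storage provider plugin
            }
        } finally {
            agentManager.send(vm.getHostId(), new FreezeThawVMCommand(vm.getInstanceName(), false)); // assumed signature
        }
    }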
* Removed duplicated functionality
Set a few methods in DefaultVMSnapshotStrategy to protected to reuse them
without duplicating the code. Remove code that is actually not needed
* Added requirements in global setting kvm.vmstoragesnapshot.enabled
Added more information in kvm.vmstoragesnapshot.enabled global setting,
that it needs installation of:
- qemu version 1.6+
- qemu-guest-agent installed on guest virtual machine
when the option is enabled
* Added Apache license header
* Removed commented code
* If "kvm.vmstoragesnapshot.enabled" is null should be considered as false
* removed unused imports, replaced default template
Removed unused imports which causing failures and replaced template to
CentOS8
* "kvm.vmstoragesnapshot.enabled" set to dynamic
* Get the status of freeze/thaw commands, not the return code
Check the status of whether the freeze/thaw of the guest VM succeeded, rather than
looking at the return code. Code refactoring
* removed "CreatingKVM" VMsnapshot state and events related to it
* renamed AllocatedKVM to AllocatedVM
the states should not be associated to a hypervisor type
* log the result of the "drive-backup" command
* Check which VM snapshot strategy could handle the vm snapshots
gets the best match of VM snapshot strategy which could handle the vm
snapshots on KVM.
Other storage plugins could integrate with this functionality to support group snapshots
* Added poolId in canHandle for KVM hypervisors
Added poolId into canHandle method used to check if all volumes are on
the same PowerFlex's storage pool
* skip smoke tests if the hypervisor's OS type is CentOS
This PR works with functionality included in qemu-kvm-ev which
does not come by default on CentOS. The smoke tests will be skipped if
the hypervisor OS is CentOS
* Added missed import in smoke test
* Suggested change to use ` org.apache.commons.lang.StringUtils.isNotBlank`
* Fix getting device on Ubuntu
On Ubuntu the device isn't provided and we have to get it from the
node-name parameter. For the drive-backup command (on Ubuntu) a job-id is also needed, which
is the value of node-name (this extra param works on Ubuntu and on CentOS as well).
* Removed new snapshot states and functionality for NFS
* throw CloudRuntimeException
provide a proper error message when deleting a VM snapshot fails
* exclude GROUP snapshots when listing snapshots
* Skip tests if there is pool with NFS/Local
* address comments
* Mount disabled storage pool on host reboot
Add a global setting so that disabled pools will be mounted
again on host reboot
* fix build error
* Update description
* add cluster-wide support
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
* CKS: Support deployment of CKS clusters on Advanced zones with security groups
* use available constant
* address comments -
- Ingress sg rule for port 22 & 6443
- Use constant to define securityGroup Name
- rename variable name from type -> vmType
* unique name for security group + foreign key
* use constants
Sometimes when a host is put into maintenance, the connection gets
disconnected and as a result VMs are stopped. So check for the extra state
before considering the host as down and stopping the VMs.
* Reserve and release a public IP
* Update #6046: show orange color for Reserved public ip
* Update #6046 reserve IP: fix ui conflicts
* Update #6046: fix resource count
* Update #6046: associate Reserved public IP to network
* Update #6046: fix unit tests
* Update #6046: fix ui bugs
* Update #6046: make api/ui available for domain admin and users
* Create profiles to download systemvm-templates
* Rename profiles
* Add support to pass necessary flags to the packaging jobs
* Escape flags
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* server: mark volume snapshots as Destroyed in some cases when deleting a volume in QCOW2 format
when deleting a volume in QCOW2 format, if a volume snapshot does not exist on primary or secondary storage, mark the snapshot as Destroyed.
* Update #6057: remove check on volume format
This PR fixes: #6060
Bash version 3 does not support associative arrays. Hence, during the packaging phase the metadata.ini file created (on macOS) isn't in the proper format, as the script used to generate it, i.e. templateConfig.sh, made use of associative arrays, which are supported from bash v4 onward. This eventually leads to a failure to deploy the DB on macOS.
This PR modifies the script to work on systems using bash v3.
* keypairs added in api-constants
* names parameter added
* findbynames method added in dao
* change in impl to find and reset multiple keys
* findbynames method implemented
* log the publickeys, check the ssh keys given exists or not
* new ArrayList<>
* SQL IN toArray
* keypair
* null pointer exception solved with + concatenation
* null pointer exception solved with + concatenation
* error resolved
* keypair name to names in uservmresponse
* keypair name is set in the uservmresponse, from the details
* null checks are removed, keypairnames are stored in a string, sent to the resetvmsshinternal, and added in details
* commit first eval
* deploy vm takes multiple ssh-keys
* Deploy VM UI changed to accept multiple ssh keys
* Reset SSH UI API changed
* ResetSSH.vue
* ssh keys joined, ssh added in infocard
* changes made
* schema error resolved
* potential null pointer exception removed
* Update UserVmManagerImpl.java
unnecessary check removed.
* Update DeployVMCmd.java
* Update DeployVMCmd.java
* Update ResetVMSSHKeyCmd.java
* Update UserVmJoinDaoImpl.java
* .
* arraylist
* Update DeployVMCmd.java
* Update UserVmManagerImpl.java
* Update ResetVMSSHKeyCmd.java
* Update db
* Fix list vm by keypair
* ui fixes
* Fix typos
* ui fixes
* Cleanup
* Adding deprecated and since in api params
* Adding upgrade for existing vms with ssh keys
* Handle no key for cks
* Show existing keypairs in reset ssh key form
* get keys from the right account
Co-authored-by: bicrxm <bickrombishsass@gmail.com>
* This PR/commit comprises of the following:
- Support to fallback on the older systemVM template in case of no change in template across ACS versions
- Update core user to cloud in CKS
- Display details of accessing CKS nodes in the UI - K8s Access tab
- Update systemvm template from debian 11 to debian 11.2
- Update letsencrypt cert
- Remove docker dependency as from ACS 4.16 onward k8s has deprecated support for docker - use containerd as container runtime
* support for private registry - containerd
* Enable updating template type (only) for system owned templates via UI
* edit indents
* Address comments and move cmd from patch file to cloud-init runcmd
* temporary change
* update k8s test to use k8s version 1.21.5 (instead of 1.21.3 - due to https://github.com/kubernetes/kubernetes/pull/104530)
* support for private registry - containerd
* Enable updating template type (only) for system owned templates via UI
* smooth upgrade of cks clusters
* update pom file with temp download.cloudstack.org testing links
* fix pom
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* update to official links
* Update 'endpointe.url' global settings to 'endpoint.url'
* Add PR number on 'schema-41610to41700.sql'
* Use ApiServiceConfiguration.ApiServletPath.key() instead of "hardcoded" string
* vm-import: fix unmanaged instance listing
When the host and last host ID are not set for a VM, it may appear in the list of unmanaged instances.
This change fixes the behaviour by filtering the unmanaged instances list for the host using the following three criteria (see the SearchBuilder sketch after this list):
- the host is set as host_id for the VM
- the host is set as last_host_id for the VM
- the pod of the host is set as pod_id for the VM and both host_id and last_host_id are NULL
* use SearchBuilder to fix query condition
* add parentheses
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
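A hedged sketch of the SearchBuilder condition for those three criteria. This is a fragment assuming an injected vmInstanceDao and the Host being filtered; the entity getters are from VMInstanceVO, but the exact grouping in the merged code may differ:

    import com.cloud.utils.db.SearchBuilder;
    import com.cloud.utils.db.SearchCriteria;
    import com.cloud.vm.VMInstanceVO;

    SearchBuilder<VMInstanceVO> sb = vmInstanceDao.createSearchBuilder();
    sb.and().op("hostId", sb.entity().getHostId(), SearchCriteria.Op.EQ);           // criterion 1: host_id = host
    sb.or("lastHostId", sb.entity().getLastHostId(), SearchCriteria.Op.EQ);         // criterion 2: last_host_id = host
    sb.or().op("podId", sb.entity().getPodIdToDeployIn(), SearchCriteria.Op.EQ);    // criterion 3: same pod...
    sb.and("hostIdNull", sb.entity().getHostId(), SearchCriteria.Op.NULL);          // ...with host_id NULL
    sb.and("lastHostIdNull", sb.entity().getLastHostId(), SearchCriteria.Op.NULL);  // ...and last_host_id NULL
    sb.cp();
    sb.cp();

    SearchCriteria<VMInstanceVO> sc = sb.create();
    sc.setParameters("hostId", host.getId());
    sc.setParameters("lastHostId", host.getId());
    sc.setParameters("podId", host.getPodId());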
* api,server: add params for updatehypervisorcapabilities API
Allows updating the following capabilities for a hypervisor and version:
- Max DATA volumes limit
- Storage motion supported
- Max hosts per cluster
- VM snapshot enabled
* added test
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update test/integration/smoke/test_hypervisor_capabilities.py
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Add NFS version to mount command
* Remove extra line
* Extend NFS version to mount secondary storage
* Unused import
* Refactor NFS version to be granular
* Make use of the ConfigKey on the NFS version setting value
* In progress primary keys
* Refactor in progress to idempotent way
* Finish SQL changes
* Add java code to match new columns
* Fix imports
* Fix tests
* Remove comments
* Fix index name on vmsnapshot
* Fix parse from correct column on usage storage
* Fix parser columns
* Fix NPE
* Fix NPE for the rest of the occurrences
* Further fix for similar issue
Currently, our compute offerings and disk offerings are tightly coupled in many respects. For example, when a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Creating a compute offering also takes a few disk-related parameters, which in any case go only to the corresponding disk offering. I think this design was initially made to address the compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing the disk offering should be seamless: it should consider the new storage tags and new size, and place the volume in the appropriate state as defined in the disk offering.
More details are mentioned here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
* Schema changes and disk offering column change from "type" to "compute_only"
* Few more changes
* Decoupled service offering and disk offering
* Remove diskofferingid from vminstance VO
* Decouple service offering and disk offering states
* diskoffering getsize() is only for strict disk offerings
* Fix deployVM flow
* Added new API params to compute offering creation
* Add diskofferingstrictness to serviceoffering vo under quota
* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case
Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume
* Fix User vm response to show proper service offering and disk offerings
* Added disk size strictness in disk offering response
* Added disk offering strictness to the service offering response
* Remove comments
* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form
* Added diskoffering details to the service offering response
* Added UI changes in deployvm wizard to accept override disk offering id
* Fix delete compute offering
* Fix VM deployment from custom service offering
* Move uselocalstorage column access from service offering to disk offering
* UI: Separated compute and disk related parameters in add compute offering wizard, also added association to disk offering
* Fixed diskoffering automatic selection on add compute offering wizard
* UI: move compute only toggle button outside the box in add compute offering wizard
* Added volumeId parameter to listDiskOfferings API and the disksizestrictness flag of the current disk offering is honored while list disk offerings
* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration
* Added disk offering change checks during resize volume operation
* Added new API changeofferingforVolume API and corresponding changes
* Add UI form for changeOfferingForVolume API
* Fix UI conflicts
* Fix service offering usage as disk offering
* Fix unit test failures
* fix user_vm_view
* Addressed review comments
* Fixed service_offering_view
* Fix service offering edit flow
* Fix service offering constructor to address custom offering
* Fix domain_router_view to get proper service offering id
* Removed unused import
* Addressed review comments and fixed update service offering flow with storage tags
* Added marvin test cases for checking disk offering strictness
* review comments addressed
* Remove system_use column from disk offering join
* update volume_view to update system_use column from service offering and not disk offering
* Fix changeOfferingForVolume API for custom disk offering
* Fix global setting implementation
* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view
* Changes for override root disk offering in deployvm wizard in case of custom offering
* Fix a unit test case
* Fixed recent unit test cases with new serviceofferingvo constructor
* Fix unit test in VolumeApiServiceImpl
* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow
* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering
* Fix smoke test failures
* Added tool tip for migrate volume UI form
* Address review comments and fix UI form of deploy VM in case of ISO.
* Fixed resize volume UI form for data disk
* UI changes to disable override root disk size when override root disk offering is enabled
* UI fix in deploy vm wizard
* Fix listdiskoffering after rebasing with main
* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list
* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form
* Fix false response on updateDiskOffering API
* Added search field for changeofferingforvolume UI form
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Removed DB changes from 4.16 upgrade file
* Resolving merge conflicts with main 4.17
* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.
* UI: Added automigrate checkbox in scale VM form
* Added since attributes to new API params
* Added shrinkOK parameter to changeofferingforvolume API
* Added shrinkOk param to UI in changeOfferingforVolume form
* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form
* Removed old foreign key constraint on IDs of service offering and disk offering
* Allow resize and automigrate of root volume if required in all cases of service offering change
* Allow only resize to higher disk size from UI
* Fixing vue syntax error
* Make UI changes to provide root disk size box when the linked disk offering is of custom
* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms
* Fix resize volume operation to update the VM settings
* Fix migratevolume form to pick selected storage pool id in list diskofferings API
* Do not fail if there are existing role permissions for annotations
* Refactor
* Improve refactor
* Do not update if there are existing role permissions for annotations
* Fix exception on upgrade
* Remove extra space from suggestion
* Apply suggestions from code review
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Use Physical size to evaluate if migration is possible
* Improve logging and consider files skipped as failure in complete migration
* skipped can't be negative
* remove useless method
* group multidisk templates for secstor migration
* use enum
* Update engine/orchestration/src/main/java/org/apache/cloudstack/engine/orchestration/DataMigrationUtility.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl d'Silva <pearl.dsilva@shapeblue.com>
* Enable resetting config values to default value
Provide a reset button for zone, cluster, domain, account,
primary and secondary storage so that config values
can be reset to the default value
* fix ui issue
* Update test/integration/smoke/test_reset_configuration_settings.py
* Update test/integration/smoke/test_reset_configuration_settings.py
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Fix metrics stats for VMs that are not running
* Improves the way to get vmIdsToRemoveStats
* Improves test
Co-authored-by: José Flauzino <jose@scclouds.com.br>
* Improve logs
* Remove unnecessary comments
* Use diamond inference
* Fix some logs
* Remove unnecessary unboxing
* Create method to handle job result
* Remove unused vars and fix some logic
* Extract code to method and few adjusts
* Use CollectionUtils
* Extract pending work job validation to method
* Create new constructors
* Extract work job and info creation to a method
* Extract submit async job to a method
* Extract find vm by id to a method
* Change log level from trace to debug
* Remove unused methods and add logs
* Undo code removal
* Remove asserts and fix conditionals
* Address @GabrielBrascher reviews
* Remove double quotes from keys in manual json
* Undo code removal
* Add object to log
* Remove statement from try/catch
* Implement toString with ReflectionToStringBuilderUtils
* Fix errors related to merge main
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* api,server,engine/schema: admin listvm api clusterid
Add clusterid parameter in listVirtualMachines API for admin
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* import order
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* set clusterid only for ListVMsCmdByAdmin
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* upgrade/systemvm: add template_zone_ref entries
Fixes #5641
When registering a system VM template during an upgrade, entries in cloud.template_zone_ref must be created for the new template.
For a cross-zone template, an entry for each zone must be added (a hedged JDBC sketch follows below).
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix for template-zone entry create
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
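A hedged JDBC sketch of adding the per-zone rows for a cross-zone template; the columns match cloud.template_zone_ref as described above, but the surrounding upgrade helper is an assumption:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    void addTemplateZoneRefs(Connection conn, long templateId, List<Long> zoneIds) throws SQLException {
        String sql = "INSERT INTO `cloud`.`template_zone_ref` (zone_id, template_id, created, last_updated) "
                   + "VALUES (?, ?, now(), now())";
        try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
            for (Long zoneId : zoneIds) { // cross-zone template: one entry per zone
                pstmt.setLong(1, zoneId);
                pstmt.setLong(2, templateId);
                pstmt.addBatch();
            }
            pstmt.executeBatch();
        }
    }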
* Check the pool used space from the bytes used in the storage pool stats collector, for non-default primary storage pools that cannot provide stats.
Also, update the used bytes from the pool stats answer for non-default primary storage pools if the pool can provide stats (see the sketch below).
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* space fix
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
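A hedged sketch of the conditional described above; GetStorageStatsAnswer and StoragePoolVO are CloudStack types, but the helper shape and the poolCanProvideStats flag are assumptions:

    import com.cloud.agent.api.GetStorageStatsAnswer;
    import org.apache.cloudstack.storage.datastore.db.StoragePoolVO;

    long resolveUsedBytes(StoragePoolVO pool, boolean poolCanProvideStats, GetStorageStatsAnswer statsAnswer) {
        if (poolCanProvideStats && statsAnswer != null) {
            // pool can report stats: take the used bytes from the stats answer
            return statsAnswer.getByteUsed();
        }
        // pool cannot report stats: fall back to the bytes already recorded as used
        return pool.getUsedBytes();
    }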
* VPC: support LB in multiple vpc tiers if LB provider is VpcVirtualRouter
* server: fix unit test CreateNetworkOfferingTest failures
[ERROR] Tests run: 10, Failures: 0, Errors: 10, Skipped: 0, Time elapsed: 13.902 s <<< FAILURE! - in org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest
[ERROR] createIsolatedNtwkOffWithVlan(org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest) Time elapsed: 0.662 s <<< ERROR!
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'loadBalancerDaoImpl': Invocation of init method failed; nested exception is java.lang.NullPointerException
at org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest.setUp(CreateNetworkOfferingTest.java:110)
Caused by: java.lang.NullPointerException
at org.apache.cloudstack.networkoffering.CreateNetworkOfferingTest.setUp(CreateNetworkOfferingTest.java:110)
* update #5580: use java.util.Optional
* update #5580: create method listByNetworkIdOrVpcIdAndScheme
This adds unique constraints much like other tables, instead of using a
query that may be incompatible with older MySQL 5.x servers.
Fixes #5564
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Change in constructors
* Naming changes
* Remove commented code
* Refactor code for more readability
* Addressed review comments on code refactor
* vmware, network: add maclearning option
Adds an option for specifying the MAC Learning property for a network offering (useful for VMware Distributed Virtual Portgroups). Added the global config network.mac.learning for the default value (an illustrative declaration follows below).
MAC Learning is supported for DV portgroups on VMware Distributed vSwitches v6.6.0+ and vSphere 6.7+
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix warning msg
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
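An illustrative ConfigKey declaration for that global default; the constructor is a real ConfigKey overload, but the category, scope, and wording below are assumptions:

    import org.apache.cloudstack.framework.config.ConfigKey;

    public static final ConfigKey<Boolean> MacLearning = new ConfigKey<>(
            "Advanced", Boolean.class, "network.mac.learning", "false",
            "Default value of the MAC learning property for network offerings; applies to "
                    + "VMware Distributed Virtual Portgroups (DVS 6.6.0+, vSphere 6.7+).",
            true, ConfigKey.Scope.Global);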
* trace nics additions
* work queue patch for network to add
* add secondary key to job
* logging improvements and naming of field(s)
* several naming corrections
* extra check if net already exists for vm
* placeholder job with secondary object
* constraint on entering the same job multiple times
* error handling/warning message
* review comments applied
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Wei Zhou <wei.zhou@shapeblue.com>
Enhanced update network form in the UI.
On network offering change for an isolated network,
- VMware portgroup should be updated accordingly.
- VMs on the network should be placed on the correct VMware portgroup based on the network rate, https://docs.cloudstack.apache.org/en/latest/adminguide/service_offerings.html#network-throttling.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Enable account settings to be visible under domain settings
Account settings cannot currently be configured under domain-level settings.
By default, if an account setting is not configured, its value is taken from the global setting.
Add a global setting "enable.account.settings.for.domain" so that, when it is enabled, all account-level settings are also visible under domain-level settings.
If an account-level setting is configured, that value is used; otherwise the domain-scope value is taken. If the domain-scope value is not configured, the value falls back to the global setting.
Add another global setting "enable.domain.settings.for.child.domain" so that, when it is true, if a value for a domain setting is not configured, its parent domain's value is considered until the ROOT domain is reached. If no value is configured up to the ROOT domain, the global setting value is taken.
Also display all the settings configured at the domain level in the listDomains API response (a hedged sketch of the resolution order follows below).
* rename variables
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
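A hedged sketch of the resolution order described above (account, then domain, then parent domains, then global). This is a fragment: the detail-DAO helpers and the two ConfigKeys named after the new settings are assumptions, and the injected daos are not shown:

    String resolveSettingValue(String name, AccountVO account, DomainVO domain) {
        String value = accountDetailsDao.getValue(name, account.getId()); // hypothetical helper
        if (value != null) {
            return value;
        }
        if (EnableAccountSettingsForDomain.value()) { // enable.account.settings.for.domain
            DomainVO current = domain;
            while (current != null) {
                value = domainDetailsDao.getValue(name, current.getId()); // hypothetical helper
                if (value != null) {
                    return value;
                }
                // walk up to the parent only when enable.domain.settings.for.child.domain is true
                current = (EnableDomainSettingsForChildDomain.value() && current.getParent() != null)
                        ? domainDao.findById(current.getParent()) : null;
            }
        }
        return configDao.getValue(name); // fall back to the global setting
    }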
* resource limit: Fix resource limit check on VM start
* add check to validate if cpu/memory are within limits for custom offering + exception handling
* unit tests
Co-authored-by: utchoang <hoangnm@unitech.vn>
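A hedged sketch of the extra check for custom (dynamic) offerings mentioned above; ResourceLimitService.checkResourceLimit and Resource.ResourceType exist in CloudStack, but the call site, the injected resourceLimitService field, and this method shape are assumptions:

    import com.cloud.configuration.Resource;
    import com.cloud.exception.ResourceAllocationException;
    import com.cloud.service.ServiceOfferingVO;
    import com.cloud.user.Account;

    void checkCustomOfferingLimits(Account owner, ServiceOfferingVO offering, int cpu, int ramMb)
            throws ResourceAllocationException {
        if (offering.isDynamic()) { // custom offering: cpu/memory come from the VM details, not the offering row
            resourceLimitService.checkResourceLimit(owner, Resource.ResourceType.cpu, cpu);
            resourceLimitService.checkResourceLimit(owner, Resource.ResourceType.memory, ramMb);
        }
    }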
This adds a volume (primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but currently do not work for RAW volume types in CloudStack.
* plugin-storage-volume-linstor: notify libvirt guests about the resize
* Resource Icon support - backend
* Add API support for resourceicon
* update reponse params + ui support
* Add exclusive list api for icons and UI changes
* refactor upload view
* UI changes to support resource icon wherever necessary
* convert api to POST + refactor icon view
* Add response name to list API + cosmetic changes in UI
* Added support for the following:
resource icon support for vpcs, networks, domains, and projects
add icons to list view if resources support icons to be added
support for showing project icons in the project switching drop-down menu
* List resourceicon cmds to be allowed for user role too
Users to inherit account icon if present (in listUsers response)
Move common code to plugin.js
Add icon to project list view - while switching between projects - Dashboard page
Show icons against zones - Capacity Dashboard view
Show user / account icon at the login button if present
* cosmetic changes
* optimize ui code
* fix reload issue for domain view
* add access check for delete operation
* ui-related changes to show iso icons
* iso image in uservm response
* add icons to custom form's list resources
* some more custom forms aligned to show icon for resources
* cosmetic changes + add listing of icons to listdomainchildren cmd
* Add backend/server-side validation for base64 string passed for image
* change preview border
* preselect zone if there's only one
* add default icon
* show icon for network list in deploy vm view
* add custom icons if any to the import-export VM view
* preselect zone persistence on clearing cache
* prevent root vol from inheriting template/iso icon
* show template icon in the info card details
* fix icon not being shown on hard-refresh / initial traversal
* fix success message
If a vm has multiple nics belonging to different shared networks then
wrong statistics will be collected, since the network id is not considered
part of the primary key. Make the change so that the primary key contains the network
id, so that traffic belonging to the corresponding network is shown.
If the network id is not added to the primary key then all the traffic of all
shared networks will show up in one nic.
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
* Add commons-lang3 to Utils
* Create an util to provide methods that ReflectionToStringBuilder does not have yet
* Create method to retrieve map of tags from resource
* Enable tests on volume components and remove useless tests
* Refactor VolumeObject and add unit tests
* Extract createPolicy in several methods
* Create method to copy policies between volumes and add unit tests
* Copy policies to new volume before removing old volume on volume migration
* Extract "destroySourceVolumeAfterMigration" to a method and test it
* Remove javadoc @param with no sensible information
* Rename method name to a generic name
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Extend addAnnotation and listAnnotations APIs
* Allow users to add, list and remove comments
* Add adminsonly UI and allow admins or owners to remove comments
* New annotations tab
* In progress: new comments section
* Address review comments
* Fix
* Fix annotationfilter and comments section
* Add keyword and delete action
* Fix and rename annotations tab
* Update annotation visibility API and update comments table accordingly
* Allow users seeing all the comments for their owned resources
* Extend comments for volumes and snapshots
* Extend comments to multiple entities
* Add uuid to ssh keypairs
* SSH keypair UI refactor
* Extend comments to the infrastructure entities
* Add missing entities
* Fix upgrade version for ssh keypairs
* Fix typo on DB upgrade schema
* Fix annotations table columns when there is no data
* Extend the list view of items showing they if they have comments
* Remove extra test
* Add annotation permissions
* Address review comments
* Extend marvin tests for annotations
* updating ui stuff
* addition to toggle visibility
* Fix pagination on comments section
* Extend to kubernetes clusters
* Fixes after last review
* Change default value for adminsonly column
* Remove the required field for the annotationfilter parameter
* Small fixes on visibility and other fixes
* Cleanup to reduce files changed
* Rollback extra line
* Address review comments
* Fix cleanup error on smoke test
* Fix sending incorrect parameter to checkPermissions method
* Add check domain access for the calling account for domain networks
* Fix only display annotations icon if there are comments the user can see
* Simply change the Save button label to Submit
* Change order of the Tools menu to prevent users getting a 404 error on clicking the text instead of expanding
* Remove comments when removing entities
* Address review comments on marvin tests
* Allow users to list annotations for an entity ID
* Allow users to see all comments for allowed entities
* Fix search filters
* Remove username from search filter
* Add pagination to the annotations tab
* Display username for user comments
* Fix add permissions for domain and resource admins
* Fix for domain admins
* Trivial but important UI fix
* Replace pagination for annotations tab
* Add confirmation for delete comment
* Lint warnings
* Fix reduced list as domain admin
* Fix display remove comment button for non admins
* Improve display remove action button
* Remove unused parameter on groupShow
* Include a clock icon to the all comments filter except for root admin
* Move cleanup SQL to the correct file after rebasing main
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* server: Optional destination host when migrate a vm
* #4378: migrate systemvms/routers with optional host
* Migrate vms across clusters
After enabling maintenance mode on host, if no suitable hosts
are found in the same cluster then search for hosts in
different clusters having the same hypervisor type
set global setting migrate.vm.across.clusters to true
* search all clusters in zone when migrate vm across clusters if applicable
* Honor migrate.vm.across.clusters when migrate vm without destination
* Check MIGRATE_VM_ACROSS_CLUSTERS in zone setting
* #4534 Fix VMs being migrated to the same cluster in CloudStack caused by dedicated resources.
* #4534 extract some code into methods
* fix #4534: an error in 'git merge'
* fix #4534: remove useless methods in FirstFitPlanner.java
* fix #4534: vms are stopped in host maintenance
* fix #4534: across-cluster migration of vms with cluster-scoped pools is supported by vmware vmotion
* fix #4534: migrating systemvms across clusters is only possible within the same pod, to avoid potential network errors
* fix #4534: code optimization
Co-authored-by: Rakesh Venkatesh <r.venkatesh@global.leaseweb.com>
Co-authored-by: Sina Kashipazha <s.kashipazha@global.leaseweb.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Sina Kashipazha <soreana@users.noreply.github.com>
This PR allows migration of public templates that are created from snapshots / volumes. Data migration across secondary stores initially excluded all public templates on the assumption that public templates are automatically synced when a new image store is added; however, this assumption isn't true for templates marked as "public" when created from snapshots / volumes. Such templates can be identified by their url being null.
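A rough sketch of the selection rule described above, using hypothetical types rather than the actual CloudStack template entities: a template qualifies for cross-store migration when it is non-public, or public with a null url (i.e. created from a snapshot or volume rather than registered).
```
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical, simplified stand-in for a template row; the real code uses CloudStack's template entities.
record TemplateRecord(long id, String name, boolean isPublic, String url) {}

public class TemplateMigrationFilter {

    // A public template with a null url was created from a snapshot/volume,
    // so it is not auto-synced to new image stores and should be migrated too.
    static boolean eligibleForMigration(TemplateRecord t) {
        return !t.isPublic() || t.url() == null;
    }

    static List<TemplateRecord> selectForMigration(List<TemplateRecord> all) {
        return all.stream().filter(TemplateMigrationFilter::eligibleForMigration).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<TemplateRecord> templates = List.of(
                new TemplateRecord(1, "registered-public", true, "http://example.org/t.qcow2"),
                new TemplateRecord(2, "from-snapshot-public", true, null),
                new TemplateRecord(3, "private", false, "http://example.org/p.qcow2"));
        // Only templates 2 and 3 are selected; template 1 will be auto-synced anyway.
        System.out.println(selectForMigration(templates));
    }
}
```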
* Filter disk / service offerings by domain at DB level
* Search for tags in the db
* Update search to include host tags
* Differentiate between tags
* Refactor
* Fix of creating volumes from snapshots without backup
When a few snapshots are created only on primary storage and we try to create
a volume or a template from a snapshot, only the first operation is
successful. That is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it did not cover the functionality of creating a volume from a snapshot
that is kept only on Ceph.
* Address review
* CLOUDSTACK-9175: [VMware DRS] Adding new host to DRS cluster does not participate in load balancing.
Summary: When a new host is added to a cluster, CloudStack doesn't create all the port groups (created by CloudStack earlier on other hosts) present in the cluster. Since the new host doesn't have all the necessary CloudStack networking port groups, it is not eligible to participate in DRS load balancing or HA.
Solution: When adding a host to the cluster in CloudStack, use the VMware API to find the list of unique port groups on a previously added host (an older host in the cluster), if one exists, and then create them on the new host (a sketch follows below).
* Added few checks for cluster details
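A minimal sketch of the copy-port-groups idea, written against a made-up HostNetworkView interface instead of the real vim25/HostMO calls: gather the port group names present on an existing host in the cluster and create the ones missing on the newly added host.
```
import java.util.HashSet;
import java.util.Set;

// Hypothetical abstraction over a host's network configuration; the real fix talks to the VMware API.
interface HostNetworkView {
    Set<String> listPortGroupNames();
    void createPortGroup(String name);
}

public class PortGroupSynchronizer {

    // Create on the new host every port group that an older host in the cluster already has.
    static void copyMissingPortGroups(HostNetworkView existingHost, HostNetworkView newHost) {
        Set<String> missing = new HashSet<>(existingHost.listPortGroupNames());
        missing.removeAll(newHost.listPortGroupNames());
        for (String portGroup : missing) {
            newHost.createPortGroup(portGroup);
        }
    }
}
```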
* Create utility to centralize byte conversions (a sketch follows this commit list)
* Add/change toString definitions
* Create Libvirt handler to ScaleVmCommand
* Enable dynamic scaling of VMs with KVM
* Move config from interface to class and rename it
As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits
* Configure VM max memory and cpu cores
The values are according to service offering or global configs
* Extract dpdk configuration to a method and test it
* Extract OS desc config to a method and test it
* Extract guest resource def to a method and test it
Improve libvirt def
* Refactor LibvirtVMDef.GuestResourceDef
* Refactor ScaleVmCommand
* Improve VMInstanceVO toString()
* Refactor upgradeRunningVirtualMachine method
* Turn int variables into long on utility
* Verify if VM is scalable on KVMGuru
* Rename some KVMGuruTest's methods
* Change vm's xml to work with max memory
* Verify if service offering is dynamic before scale
* Create methods to retrieve data from domain
* Create def to hotplug memory
* Adjust the way command was scaling the VM
* Fix database persistence before executing command
* Send more info to host to improve log
* Fix var name
* Fix missing "}"
* Undo unnecessary changes
* Address review
* Fix scale validation
* Add VM prepared for dynamic scaling validation
* Refactor LibvirtScaleVmCommandWrapper and improve unit tests
* Remove duplicated method
* Add RuntimeException check
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Update ByteScaleUtilsTest.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
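A sketch of the centralized byte conversion the commit list above refers to; the method names are illustrative and not necessarily those of the real ByteScaleUtils, but it shows why long arithmetic matters once max memory values are converted.
```
public class ByteScaleSketch {

    static final long KiB = 1024L;
    static final long MiB = KiB * 1024L;

    // long arithmetic avoids the int overflow that motivated "Turn int variables into long on utility".
    static long mebibytesToBytes(long mib) {
        return mib * MiB;
    }

    static long bytesToKibibytes(long bytes) {
        return bytes / KiB;
    }

    public static void main(String[] args) {
        long maxMemoryMiB = 65536;                                            // 64 GiB max memory
        System.out.println(mebibytesToBytes(maxMemoryMiB));                   // 68719476736
        System.out.println(bytesToKibibytes(mebibytesToBytes(maxMemoryMiB))); // 67108864
    }
}
```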
* Add SharedMountPoint to KVMs supported storage pool types
* Fix live migration to iSCSI and improve logs
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* [#4398] adapt code to handle multi tag string with commas
* [#4398] remove trailing spaces
* [#4398] add multi host tag support for ingest process
* [#4398] add test for multi tag support in offerings
* [#4398] update multitag support for DeploymentPlanningManagerImpl
encapsulate the multi-tag check from the Ingest Feature and DeploymentPlanningManager into
HostDaoImpl to prevent code duplication (see the sketch after this list)
* [#4398] move logic to HostVO and add tests
* rename test method
* [#4398] Change string method to apaches StringUtils
* [#4398] modify test for multi tag support
* adapt sql for double tags
Co-authored-by: Dirk Klahre <Dirk.Klahre@Itelligence.de>
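An illustrative sketch of the comma-separated host tag matching described above, using org.apache.commons.lang3.StringUtils; the real logic lives in HostVO/HostDaoImpl, so the names here are assumptions.
```
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

import org.apache.commons.lang3.StringUtils;

public class HostTagMatcher {

    // Splits a comma-separated tag string ("ssd, nvme ,fast") into trimmed, non-blank tags.
    static Set<String> parseTags(String tags) {
        if (StringUtils.isBlank(tags)) {
            return Set.of();
        }
        return Arrays.stream(tags.split(","))
                .map(String::trim)
                .filter(StringUtils::isNotBlank)
                .collect(Collectors.toSet());
    }

    // A host is suitable when it carries every tag requested by the offering.
    static boolean hostSupportsOffering(String hostTags, String offeringTags) {
        return parseTags(hostTags).containsAll(parseTags(offeringTags));
    }

    public static void main(String[] args) {
        System.out.println(hostSupportsOffering("ssd,nvme,fast", "ssd, fast"));  // true
        System.out.println(hostSupportsOffering("ssd", "ssd,fast"));             // false
    }
}
```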
Fixes #4897
Some details tables were allowing null values for detail value which can cause NPE in some cases.
mysql> SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE FROM information_schema.columns WHERE table_schema='cloud' AND table_name LIKE '%_details' AND column_name='value' AND IS_NULLABLE='YES';
+-------------------------------+-------------+---------------+
| TABLE_NAME | COLUMN_NAME | COLUMN_TYPE |
+-------------------------------+-------------+---------------+
| account_details | value | varchar(255) |
| cluster_details | value | varchar(255) |
| data_center_details | value | varchar(1024) |
| domain_details | value | varchar(255) |
| image_store_details | value | varchar(255) |
| storage_pool_details | value | varchar(255) |
| template_deploy_as_is_details | value | text |
| user_vm_deploy_as_is_details | value | text |
| user_vm_details | value | varchar(5120) |
+-------------------------------+-------------+---------------+
9 rows in set (0.00 sec)
Brings consistency to the value column of the *_details tables by preventing null values.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Add new entries in guest_os
* Create a procedure to insert guest_os and guest_os_hypervisor data
* Remove ';' as the last char of the procedure
* Set the right category_id on guest_os
Ubuntu 20.04 LTS - Ubuntu - Linux
Ubuntu 21.04 - Ubuntu - Linux
pfSense 2.4 - FreeBSD - Unix
OpenBSD 6.7 - Unix
OpenBSD 6.8 - Unix
AlmaLinux 8.3 - CentOS
* Fix SQL line's last character
* Add from with dummy table
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Global setting to select preferred storage pool
Currently all the volumes are allocated on storage pools
based on the capacity or the algorithm selected. Sometimes
we need to deploy all volumes of a particular account in a
specific storage pool, and in that case it's not possible.
With this change, we can specify the uuid of the preferred
storage pool, so that all volumes of the account will be
deployed in this pool (ordering sketched below).
* code feedback
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
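A simplified sketch of how a preferred pool could be pushed to the front of the allocator's candidate list; the record and method names are hypothetical, not the actual allocator code.
```
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical, simplified pool descriptor; the real allocators work on StoragePool entities.
record PoolCandidate(String uuid, String name) {}

public class PreferredPoolOrdering {

    // Move the account's preferred pool (identified by UUID from the new setting) to the front
    // of the candidate list so the allocator tries it first; other pools keep their order.
    static List<PoolCandidate> preferPool(List<PoolCandidate> candidates, String preferredPoolUuid) {
        if (preferredPoolUuid == null || preferredPoolUuid.isBlank()) {
            return candidates;
        }
        return candidates.stream()
                .sorted(Comparator.comparing(p -> !p.uuid().equalsIgnoreCase(preferredPoolUuid)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<PoolCandidate> pools = List.of(
                new PoolCandidate("pool-a-uuid", "pool-a"),
                new PoolCandidate("pool-b-uuid", "pool-b"));
        System.out.println(preferPool(pools, "pool-b-uuid"));  // pool-b first, then pool-a
    }
}
```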
Currently we can send a default maximum of 4K/32K for GET/POST requests of the
user data field. Most new browsers, and also nginx, support up to 1MB of
POST data.
Added a new global setting `vm.userdata.max.length` with a default value of
32KB, which can be increased up to 1MB.
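A simplified sketch of the length check that the new setting implies; names and the exact validation path are assumptions, but it shows the default, the configurable limit and the 1MB ceiling working together.
```
import java.util.Base64;

public class UserDataLimitCheck {

    // Hard ceiling mentioned in the description; the configurable setting may not exceed it.
    static final int ABSOLUTE_MAX_BYTES = 1024 * 1024;       // 1 MB
    static final int DEFAULT_MAX_BYTES = 32 * 1024;          // 32 KB default of vm.userdata.max.length

    static void validateUserData(String base64UserData, int configuredMaxBytes) {
        int effectiveMax = Math.min(configuredMaxBytes <= 0 ? DEFAULT_MAX_BYTES : configuredMaxBytes,
                ABSOLUTE_MAX_BYTES);
        byte[] decoded = Base64.getDecoder().decode(base64UserData);
        if (decoded.length > effectiveMax) {
            throw new IllegalArgumentException("User data is too long: " + decoded.length
                    + " bytes, limit is " + effectiveMax + " bytes");
        }
    }

    public static void main(String[] args) {
        String userData = Base64.getEncoder().encodeToString("#cloud-config\npackages: [htop]\n".getBytes());
        validateUserData(userData, 32 * 1024);   // passes
    }
}
```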
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs - vmware-tools.iso, xs-tools.iso - are marked as PERHOST in the DB. They are active but not downloaded to the secondary storages and hence have no template-zone entry.
Skips the template-zone check for such templates.
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Added disk provisioning type support for VMWare
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Externalize secondary storage capacity threshold
* Use default value as threshold when config value is lower than 0.0
* Move config to CapacityManager
* Validate config in CapacityManagerImpl
* Use config in StorageOrchestrator
* Change config description
* Remove unused import
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Enhance log messages with hostName
* Use host.toString() in most host logs.
* Remove redundant "Host" in logs and enhance logs
* duplicated "for"
* Adopt String.format, and enhance code
* Address reviews enhancing log messages
Update server/src/main/java/com/cloud/resource/ResourceManagerImpl.java
-- server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
-- server/src/main/java/com/cloud/resource/RollingMaintenanceManagerImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Fix String.format issue and change log message from debug to warn
* Fix checkstyle issue
* Fix string.format log
* Address review: enhance logs
* Enhance log of hosts in maintenance avoid list
* Remove "VM" on logs as vm.toString() already appends VM-<details>
* Add more details of the VM when postStateTransitionEvent
* Address reviewer and enhance VMInstanceVO.toString()
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
- Added connection manager to the gateway client (setup sketched after this list).
- Renew the client session on '401 Unauthorized' response.
- Refactored the gateway client calls, for GET and POST methods.
- Consume the http entity content after login/(re)authentication and close the content stream if it exists.
- Updated storage pool client connection timeout configuration 'storage.pool.client.timeout' to non-dynamic.
- Added storage pool client max connections configuration 'storage.pool.client.max.connections' (default: 100) to specify the maximum connections for the ScaleIO storage pool client.
- Updated unit tests.
Also blocked the attach volume operation for uploaded volumes on ScaleIO/PowerFlex storage pools.
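Roughly how a pooled client with a connection cap and timeouts can be built with Apache HttpClient 4.x; the values mirror the settings named above, but this is not the plugin's actual gateway client code.
```
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class GatewayClientFactory {

    // timeoutSeconds ~ 'storage.pool.client.timeout', maxConnections ~ 'storage.pool.client.max.connections'
    static CloseableHttpClient buildPooledClient(int timeoutSeconds, int maxConnections) {
        PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
        connectionManager.setMaxTotal(maxConnections);
        connectionManager.setDefaultMaxPerRoute(maxConnections);

        RequestConfig requestConfig = RequestConfig.custom()
                .setConnectTimeout(timeoutSeconds * 1000)
                .setSocketTimeout(timeoutSeconds * 1000)
                .build();

        return HttpClients.custom()
                .setConnectionManager(connectionManager)
                .setDefaultRequestConfig(requestConfig)
                .build();
    }
}
```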
This PR fixes the problem of not updating the chain info, or setting the chain info to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name to be a UUID (this is true only for NFS storages added by CloudStack), but a datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID.
The fix is made in the flow of the following API operations:
migrateVirtualMachine
migrateVirtualMachineWithVolume
migrateVolume
This PR introduces new granularity levels to configure VM dynamic scalability. Previously a VM was configured to be dynamically scalable based on the template and a global setting. Now we bring this option to the service offering and VM level as well.
A VM can dynamically scale only when all flags are ON at the VM, template, service offering and global setting level. If any of the flags is set to false then the VM cannot be scaled. This result is persisted in the DB for each VM and will be honoured for that VM till it is updated.
We are introducing the 'dynamicscalingallowed' parameter with permitted values of true or false for the deployVM API and createServiceOffering API.
Following are the API parameter changes:
createServiceOffering API:
dynamicscalingenabled: an optional parameter of type Boolean with default value “true”.
deployVirtualMachine API:
dynamicscalingenabled: an optional parameter of type Boolean with default value “true”.
Following are the UI changes:
Service offering creation has ON/OFF switch for dynamic scaling enabled with default value true
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and despite the VM having no backups present, backup usage is generated. This PR prevents usage record generation when there are no backups present for a VM with a backup offering associated to it. This is done by ensuring that a usage event for backups is generated only when the backup size > 0.
This PR fixes #5047, which can be reproduced on Zones with _(I) Advanced Networks, (II) Security Groups enabled for the Zone, (III) a network offering without Security Groups_; for instance, `DefaultSharedNetworkOffering`, which does not list Security Group as a supported service.
The issue is due to the following code inside the method `VirtualMachineManagerImpl.orchestrateReboot`:
[VirtualMachineManagerImpl.java#L3340](280c13a4bb/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java (L3340)).
```
final Answer rebootAnswer = cmds.getAnswer(RebootAnswer.class);
if (rebootAnswer != null && rebootAnswer.getResult()) {
    if (dc.isSecurityGroupEnabled() && vm.getType() == VirtualMachine.Type.User) {
        List<Long> affectedVms = new ArrayList<Long>();
        affectedVms.add(vm.getId());
        _securityGroupManager.scheduleRulesetUpdateToHosts(affectedVms, true, null);
    }
    return;
}
```
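For context only, one plausible shape of a tighter guard; this is a speculative sketch under assumed types, not necessarily the merged fix: schedule ruleset updates only when the VM's networks actually provide the SecurityGroup service, instead of relying on the zone flag alone.
```
import java.util.List;

// Hypothetical, simplified view of the objects involved; the real code uses CloudStack managers and VOs.
interface GuestNetworkView {
    boolean supportsSecurityGroupService();
}

public class RebootRulesetGuard {

    // Speculative guard: only schedule a security group ruleset update when at least one of the
    // VM's networks offers the SecurityGroup service, even if the zone is security-group enabled.
    static boolean shouldScheduleRulesetUpdate(boolean zoneSecurityGroupEnabled, boolean isUserVm,
            List<GuestNetworkView> vmNetworks) {
        return zoneSecurityGroupEnabled && isUserVm
                && vmNetworks.stream().anyMatch(GuestNetworkView::supportsSecurityGroupService);
    }
}
```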
Fixes: #4972
This PR sets system VMs' agent state to Disconnected when they are stopped. Currently, when a system VM (Console Proxy VM / Secondary Storage VM) is stopped, the agent state still appears to be 'Up'.
* server: destroy ssvm, cpvm on last host maintenance
When a single or the last UP host enters maintenance, just stopping the SSVM and CPVM will leave behind VMs on the hypervisor side. As these system VMs will be recreated, they can be destroyed.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance (sketched after this list).
The flag is set to true only for the delete commands for the SSVM and CPVM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
The VM should be stopped with cleanup before calling expunge, else the server may throw an error when the host is in PrepareForMaintenance state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
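A trimmed-down illustration of the bypass flag idea mentioned above; the interface and enum here are hypothetical stand-ins for the real agent command and host state types.
```
// Hypothetical, trimmed-down illustration of the bypass flag; the real flag lives on the
// agent command objects and the check happens in the agent/command dispatch path.
public class MaintenanceDispatchSketch {

    interface AgentCommand {
        boolean isBypassHostMaintenance();
    }

    enum HostResourceState { Enabled, Maintenance, PrepareForMaintenance }

    // A command is dispatched to the host agent when the host is not in maintenance,
    // or when the command (e.g. destroy SSVM/CPVM) explicitly bypasses maintenance.
    static boolean canDispatch(HostResourceState hostState, AgentCommand cmd) {
        boolean inMaintenance = hostState == HostResourceState.Maintenance
                || hostState == HostResourceState.PrepareForMaintenance;
        return !inMaintenance || cmd.isBypassHostMaintenance();
    }
}
```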
* forceha: fix vm is not started if it is poweroff from inside
steps to reproduce the issue
(1) make sure force.ha is true in global setting. if not, change it to true, and restart mgt server
(2) create a service offering , ha is not enabled
(3) create a vm
(4) log into the vm, and power off via cli.
expected result: vm is started again by cloudstack
actual result: vm is not started.
* forceha: fix vms are still running if host is force-removed
When a host is force-removed, the VMs are stopped in CloudStack but not stopped on the host.
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
  "success": true
}
```
after some minutes, vms are still running on the host
```
root@mgt01:~# ssh node63 virsh list
Id Name State
---------------------------
1 i-2-19-VM running
2 i-2-11-VM running
```
error message are
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
Datastore cluster as a primary storage is already supported. But any changes to the datastore cluster at vCenter, like addition/removal of a datastore, are not synchronised with CloudStack directly; it required removing the primary storage from CloudStack and adding it again.
Here, synchronisation of the datastore cluster is fixed without the need to remove or re-add the datastore cluster.
1. A new API syncStoragePool is introduced, which takes the datastore cluster storage pool UUID as the parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, then the management server will create a new child storage pool in the database under the datastore cluster. If the new child storage pool was already added as an individual storage pool, then the existing storage pool entry will be converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, then the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastore.
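A hypothetical sketch of the reconciliation described in points 2 and 3 above, expressed as set differences; the real syncStoragePool implementation works on vCenter datastore objects and storage_pool rows.
```
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper illustrating the reconciliation; names are not from the actual implementation.
public class DatastoreClusterSyncSketch {

    interface SyncActions {
        void addChildStoragePool(String datastoreName);        // new child datastore found at vCenter
        void convertToChildStoragePool(String datastoreName);  // datastore already present as standalone pool
        void promoteToStandalonePool(String datastoreName);    // child removed from the cluster at vCenter
    }

    static void reconcile(Set<String> vcenterChildren, Set<String> cloudstackChildren,
            Set<String> standalonePools, SyncActions actions) {
        for (String datastore : vcenterChildren) {
            if (cloudstackChildren.contains(datastore)) {
                continue;                                        // already in sync
            }
            if (standalonePools.contains(datastore)) {
                actions.convertToChildStoragePool(datastore);
            } else {
                actions.addChildStoragePool(datastore);
            }
        }
        Set<String> removedAtVcenter = new HashSet<>(cloudstackChildren);
        removedAtVcenter.removeAll(vcenterChildren);
        removedAtVcenter.forEach(actions::promoteToStandalonePool);
    }
}
```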
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives an option of separating the first right subnet from the rest, and kicking out individual statements for each right subnet for better cross-compatibility.
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This PR addresses the issue raised at #4545 (Fail to change Service offering from local <> shared storage).
When upgrading a VM service offering it is validated that the new offering has the same storage scope (local or shared) as the current offering. The validation makes sense as a way of preventing running root disks with an offering that does not match the current storage pool. However, the validation only compares both offerings and does not consider that it is possible to migrate volumes between local <> shared storage pools.
The idea behind this implementation is that CloudStack should check the scope of the current storage pool in which the ROOT volume is allocated; thus, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for such a pool.
This PR also fixes an issue where the API command that lists offerings for a VM should follow the same idea and list based on the storage pool the volume is allocated in, and not on the previous offering.
Fixes: #4545
The default length is 255, which caused a truncation of data if
the JSON object representing the backup volumes is too big.
It caused errors when backups were made on VMs with 3 volumes
or more.
`vm_instance.backup_volumes` has the type TEXT, which has a
maximal length of 65535 characters.
Fixes #4965
This PR makes sure no orphaned snapshot details are considered in the cleanup at startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed this would be a bit complicated.
Co-authored-by: Daan Hoogland <dahn@onecht.net>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
VM can have multiple ROOT volumes and some can be on zone-wide store therefore iterate over all of them till a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
When invoking the migrateVirtualMachineWithVolume API call and a strategy isn't found, the volumes are left in Migrating state.
This PR puts the volumes back to Ready state.
This PR fixes: #4462
Problem Statement:
In the case of VMware, when a VM having multiple data disks is destroyed (without expunge) and we then try to recover the VM, the previous data disks are not attached to the VM as they were before the destroy. Only the root disk is attached to the VM.
Root cause:
All data disks were removed as part of VM destroy. Only the volumes which are selected for deletion (while destroying the VM) are supposed to be detached and destroyed.
Solution:
During VM destroy, detach and destroy only volumes which are selected during VM destroy. Detach the other volumes during expunge of VM.
Fixes #4201
This PR addresses the issue of a VM snapshot being indefinitely stuck in Expunging state in case deletion fails.
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster during a stopped VM migration. Also, find target datastore using the host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Adds new/missing guest os mappings for XCP-ng/Xenserver 8.1
Copy guest OS mappings from XCP-ng/Xenserver 8.1 for XCP-ng/Xenserver 8.2
Adds Ubuntu 20.04 guest os mapping for XCP-ng/Xenserver 8.2
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #4517
Adds capacity checks for RandomAllocator (host allocator)
Factors out the host CPU capability and capacity check (w.r.t. service offering) code into CapacityManager.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR aims at introducing persistence mode in L2 networks and enhancing the behavior in Isolated networks
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This contains 3 main changes
(1) add NETWORK_STATS_ethX for all nics with public ips in VPC VRs (current: NETWORK_STATS_eth1)
(2) DO NOT create records in user_statistics for each VPC tier (only one record per public nic per VPC VR)
(3) send NetworkUsageCommand before unplugging a NIC with public IPs from VPC VR
Public IP addresses dedicated to one domain should not be accessed
by other domains. Also, root admin should be able to display all
public ip addresses in system.
Currently the following issues exist:
1. A public IP address assigned to one domain can be accessed by
other sibling domains.
2. If use.system.public.ip is false then child domains should not
see the public IPs of the ROOT domain.
Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 59,
"publicipaddress": [
```
After fix
```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 10,
```
* server: create DB entry for storage pool capacity when create storage pool
* Revert "server: create DB entry for storage pool capacity when create storage pool"
This reverts commit e790167bfe.
* server: create DB entry for storage pool capacity when create zone-wide storage pools
Update new systemvmtemplate for 4.15.1.0; synced:
http://download.cloudstack.org/systemvm/4.15/
A new template is necessary due to many security fixes over the last year; the 4.15.0 systemvmtemplate was created about a year ago.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Update vm_template table removed field when template is deleted
* Update method name
* address comment
* Extracted code to separate methods
* Address test failure
* refactor test cleanup
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Updated libvirt's native reboot operation for VM on KVM using ACPI event, and Added 'forced' reboot option to stop and start the VM (using rebootVirtualMachine API)
* Added 'forced' reboot option for System VM and Router
- New parameter 'forced' in rebootSystemVm API, to stop and then start System VM
- New parameter 'forced' in rebootRouter API, to force stop and then start Router
* Added force reboot tests for User VM, System VM and Router
Duplicated volumes remained after a failed migration in Allocated state.
Fix: Clean up the duplicate volume when the destination managed volume creation fails during the migrate volume operation.
While finding pools for volume migration, list the following compatible storages (sketched below):
- all zone-wide storages of the same hypervisor.
- when the volume is attached to a VM, then all storages from the same cluster as that of VM.
- for detached volume, all storages that belong to clusters of the same hypervisor.
Fixes #4692, fixes #4400
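A compact sketch encoding the three compatibility rules above; the record types are made up for illustration, while the real logic operates on StoragePool and Volume entities.
```
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical, simplified descriptors; clusterId is null for a detached volume's context.
record CandidatePool(String name, String hypervisor, boolean zoneWide, Long clusterId) {}
record VolumeContext(String hypervisor, Long attachedVmClusterId) {}

public class MigrationPoolFilter {

    // Encodes the three rules above: same-hypervisor zone-wide pools are always listed;
    // for an attached volume only pools in the VM's cluster qualify;
    // for a detached volume any cluster-scoped pool of the same hypervisor qualifies.
    static List<CandidatePool> listCompatiblePools(List<CandidatePool> pools, VolumeContext volume) {
        return pools.stream()
                .filter(p -> p.hypervisor().equalsIgnoreCase(volume.hypervisor()))
                .filter(p -> p.zoneWide()
                        || (volume.attachedVmClusterId() != null
                                ? volume.attachedVmClusterId().equals(p.clusterId())
                                : p.clusterId() != null))
                .collect(Collectors.toList());
    }
}
```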
If the template from which the VR was created got deleted, the state
is set to inactive and removed is set to null.
Since the template is already deleted, the VR can't be created
using this template again.
If someone restarts the network with cleanup, it will try to
deploy the VR from the old, non-existing template again.
So search only for active templates which are not yet deleted.
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore
- Check the router filesystem is writable or not, before performing health checks
=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not
- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)
* Addressed some issues for few operations on PowerFlex storage pool.
- Updated migration volume operation to sync the status and wait for migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs(Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
and Minor improvements in PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (clusters of same Pod) for system VMs, VRs on VMware
- Allows storage migration for stopped system VMs, VRs on VMware within the same Pod if the StoragePool is of cluster scope
Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Steps to reproduce the issue:
(1)Create 10000 service offerings (by db changes below or cloudmonkey).
```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;
DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
  DECLARE count INT DEFAULT 10000;
  SET @offeringid = (select max(id)+1 from disk_offering);
  WHILE count > 0 DO
    INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
    INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500,256);
    SET @offeringid = @offeringid + 1;
    SET count = count - 1;
  END WHILE;
END $$
DELIMITER ;
CALL cloud.insert_service_offering();
mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```
(2) Check the total time of the periodical capacity check in CloudStack.
Without this patch, it spends 2.5 seconds (2 hosts)
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```
With this patch, it spends 1.3 seconds (2 hosts)
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```
If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
* 4.15:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
This PR fixes an issue when moving a vm from one account to another account.
Steps to reproduce the issue
(1) create a vm with multiple shared networks (in advanced zone, or advanced zone with security groups)
(2) create another account (in same domain who can also access the shared networks)
(3) move vm to new account, with a list of networkid
expected result: the vm has nics on the networks in the same order as specified in the API request, and nics have the same ips as before
actual result: the network order is not the same as specified, and ips are changed.
* server: fix cannot create vm if another vm with same name has been added and removed on the network
steps to reproduce the issue
(1) create vm-1 on network-1
(2) add vm-1 to network-2
(3) remove vm-1 from network-2
(4) create another vm with same name vm-1 on network-2
expected result: operation succeed
actual result: operation failed.
* #4600: add back a removed line
Fix db upgrade path conflict, add 4.15.1.0->4.16.0.0 for master, bump
systemvmtemplate version to 4.16.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Fix for mapping guest OS type read from OVF to existing guest OS in CloudStack database while registering VMware template
* Added unit tests to String Utils methods and updated the code
* Updated the java doc section
* Updated the OS description logic to keep an equals-ignore-case match with the guest OS display name
Update the guest OS from the OVF file after upload is completed
This PR fixes the template upload from local on VMware
Co-authored-by: dahn <daan.hoogland@gmail.com>
This PR addresses an error that appears when you try to add a new host. I don't even understand why there was a cast to String in the first place. I will assume some classes send HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed to use the same type everywhere? With this fix adding a new xenserver host works fine.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Setting snapshot state to error on timeout
* Setting removed field so snapshot record is ignored by garbage collection
* Removed explicitly setting error status, renamed method from markFailed to markRemoved
* Renamed method, moved code a few lines down
* Moved remove logic
* Removed unused service
* Moved removed logic - last time, promise
For Basic networks, isolation methods are not provided, and an exception is
thrown when trying to encode the VLAN id. That's why we have to check,
before encoding, that the list of isolation methods is not empty.
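A minimal sketch of that guard, assuming a plain list of isolation method names and CloudStack's vlan:// broadcast URI scheme; it only illustrates the check, not the actual encoding code path.
```
import java.net.URI;
import java.util.List;

public class VlanEncodeGuard {

    // Illustrative only: encode the broadcast URI for a VLAN id, but only when the physical
    // network actually declares isolation methods (Basic networks declare none).
    static URI encodeVlanIfIsolated(List<String> isolationMethods, String vlanId) {
        if (isolationMethods == null || isolationMethods.isEmpty()) {
            return null;    // Basic network: nothing to encode, avoids the exception described above
        }
        return URI.create("vlan://" + vlanId);
    }

    public static void main(String[] args) {
        System.out.println(encodeVlanIfIsolated(List.of("VLAN"), "100"));  // vlan://100
        System.out.println(encodeVlanIfIsolated(List.of(), "100"));        // null
    }
}
```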