* Allow configkey to set 'cloud-name' cloud-init metadata
* Update engine/api/src/main/java/com/cloud/vm/VirtualMachineManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/network/NetworkModelImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/network/router/CommandSetupHelper.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Revert "Update server/src/main/java/com/cloud/network/router/CommandSetupHelper.java"
This reverts commit 8abc3e38c4.
* Revert "Update server/src/main/java/com/cloud/network/NetworkModelImpl.java"
This reverts commit 7f239be919.
* Rework/Fix review code suggestions
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
The upgrade from 4.18.1.0 to 4.18.2.0-SNAPSHOT failed with the error:
```
2023-09-12 16:12:19,003 INFO [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) DB version = 4.18.1.0 Code Version = 4.18.2.0
2023-09-12 16:12:19,004 INFO [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) Database upgrade must be performed from 4.18.1.0 to 4.18.2.0
2023-09-12 16:12:19,036 DEBUG [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) Running upgrade Upgrade41800to41810 to upgrade from 4.18.0.0-4.18.1.0 to 4.18.1.0
...
2023-09-12 16:12:19,041 DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:) -- Schema upgrade from 4.18.0.0 to 4.18.1.0
...
2023-09-12 16:12:21,602 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) (logid:) Statement: CREATE INDEX i_cluster_details__name on cluster_details (name)
2023-09-12 16:12:21,663 DEBUG [c.c.u.d.DatabaseAccessObject] (main:null) (logid:) Created index i_cluster_details__name
2023-09-12 16:12:21,673 DEBUG [c.c.u.d.T.Transaction] (main:null) (logid:) Rolling back the transaction: Time = 2632 Name = Upgrade; called by -TransactionLegacy.rollback:888-TransactionLegacy.removeUpTo:831-TransactionLegacy.close:655-TransactionContextInterceptor.invoke:36-ReflectiveMethodInvocation.proceed:175-ExposeInvocationInterceptor.invoke:97-ReflectiveMethodInvocation.proceed:186-JdkDynamicAopProxy.invoke:215-$Proxy30.persist:-1-DatabaseUpgradeChecker.upgrade:319-DatabaseUpgradeChecker.check:403-CloudStackExtendedLifeCycle.checkIntegrity:64
```
It succeeded with this change.
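The log suggests the 4.18.0.0 -> 4.18.1.0 schema script was re-applied during the upgrade and rolled back right after the CREATE INDEX, so statements like that one must tolerate re-runs. A minimal sketch of such a guard against MySQL's information_schema (the helper names are illustrative, not the actual DatabaseAccessObject API):
```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class IndexGuard {
    // True if the index already exists, so a re-run of the upgrade can skip CREATE INDEX.
    static boolean indexExists(Connection conn, String table, String index) throws SQLException {
        String sql = "SELECT 1 FROM information_schema.statistics "
                + "WHERE table_schema = DATABASE() AND table_name = ? AND index_name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, table);
            ps.setString(2, index);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    static void createIndexIfMissing(Connection conn, String table, String index, String column) throws SQLException {
        if (indexExists(conn, table, index)) {
            return; // idempotent: re-running the upgrade becomes a no-op
        }
        // Identifiers cannot be bound as parameters, hence the format here.
        try (PreparedStatement ps = conn.prepareStatement(
                String.format("CREATE INDEX %s ON %s (%s)", index, table, column))) {
            ps.executeUpdate();
        }
    }
}
```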
This fixes the following cases in which the SolidFire storage integration
caused issues when using SolidFire data disks with VMware:
1. Take a volume snapshot of a SolidFire data disk
2. Delete an active instance with a SolidFire data disk attached
3. Attach an existing, previously used SolidFire data disk to a running/stopped VM
4. Stop and start an instance with SolidFire data disks attached
5. Expand a disk by resizing the SolidFire data disk with a given size
6. Expand a disk by changing the disk offering of the SolidFire data disk
Additional changes:
- Use VMFS6 as the managed datastore type if the host supports it
- Refactor detection and splitting of the managed storage datastore name in the storage processor
- Restrict storage rescanning for managed datastore when resizing
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* VMware: add support for 8.0b (8.0.0.2)
* VMware 8: add new guest os mappings in VirtualMachineGuestOsIdentifier
The full list can be found at https://developer.vmware.com/apis/1355/vsphere
* VMware: get guest os mappings of parent version
* VMware8: remove guest os mappings for 8.0.0.2
* VMware8: fix code smells
* vmware: remove annotations in VmwareVmImplementerTest which caused 0.0% code coverage
* VMware8: add a unit test case
* VMware: add support for 8.0c (8.0.0.3)
* VMware8: move to CloudStackVersion.getVMwareParentVersion
* VMware: add support for 8.0u1 (8.0.1.0)
* Copy engine/schema/src/main/java/com/cloud/upgrade/GuestOsMapper.java from PR 6979
* Copy engine/schema/src/main/java/com/cloud/storage/dao/GuestOSHypervisorDao.java from PR 6979
* VMware: ignore the last number in VMware versions
* VMware: copy guest os mapping from 8.0 to 8.0.1
* VMware: add unit tests in VmwareVmImplementerTest.java
* Copy engine/schema/src/test/java/com/cloud/upgrade/GuestOsMapperTest.java from PR 6979
* VMware8: retry VM power-on if it fails due to the exception "File system specific implementation of Ioctl[file] failed"
This fixes a weird issue on VMware 8. When powering on a VM, it sometimes fails with the error:
2023-04-27 07:04:43,207 ERROR [c.c.h.v.r.VmwareResource] (DirectAgent-442:ctx-cdd42b03 10.0.32.133, job-105/job-106, cmd: StartCommand) (logid:8a24a607) StartCommand failed due to [Exception: java.lang.RuntimeException
Message: File system specific implementation of Ioctl[file] failed
].
java.lang.RuntimeException: File system specific implementation of Ioctl[file] failed
at com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:426)
at com.cloud.hypervisor.vmware.mo.VirtualMachineMO.powerOn(VirtualMachineMO.java:288)
In vmware.log on the ESXi host, it shows:
2023-04-27T09:20:41.713Z In(05)+ vmx - Power on failure messages: File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of LookupAndOpen[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - File system specific implementation of Ioctl[file] failed
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to lock the file
2023-04-27T09:20:41.713Z In(05)+ vmx - Cannot open the disk '/vmfs/volumes/7b29c876-ac102328/i-2-167-VM/ROOT-167.vmdk' or one of the snapshot disks it depends on.
2023-04-27T09:20:41.713Z In(05)+ vmx - Module 'Disk' power on failed.
2023-04-27T09:20:41.713Z In(05)+ vmx - Failed to start the virtual machine.
There is a KB article for it, but I still do not know why it happens or how to fix it properly:
https://kb.vmware.com/s/article/1004232
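The retry approach can be sketched as a small wrapper; the retry count, wait interval, and message matching are illustrative assumptions, not the exact VmwareResource change:
```java
import java.util.concurrent.Callable;

public class PowerOnRetry {
    // Hypothetical retry wrapper (assumes retries >= 1); only the transient
    // "Ioctl[file] failed" error observed on VMware 8 is retried.
    public static <T> T retryOnIoctlFailure(Callable<T> powerOn, int retries, long waitMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= retries; attempt++) {
            try {
                return powerOn.call();
            } catch (Exception e) {
                if (e.getMessage() == null || !e.getMessage().contains("Ioctl[file] failed")) {
                    throw e; // not the transient error: fail fast
                }
                last = e;
                Thread.sleep(waitMs);
            }
        }
        throw last; // every attempt hit the transient error
    }
}
```
Usage would look like `retryOnIoctlFailure(vmMo::powerOn, 3, 1000L)` around the `VirtualMachineMO.powerOn()` call from the stack trace above.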
* VMware: extract to method powerOnVM
* vmware: fix mistake in logs
* vmware8: use curl instead of wget to fix test failures
Traceback (most recent call last):
File "/root/test_internal_lb.py", line 555, in test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80
self.execute_internallb_roundrobin_tests(vpc_offering)
File "/root/test_internal_lb.py", line 641, in execute_internallb_roundrobin_tests
client_vm, applb.sourceipaddress, max_http_requests)
File "/root/test_internal_lb.py", line 497, in run_ssh_test_accross_hosts
(e, clienthost.public_ip))
AssertionError: list index out of range: SSH failed for VM with IP Address: 10.0.52.187
and
sshClient: DEBUG: {Cmd: /usr/bin/wget -T3 -qO- --user=admin --password=password http://10.1.2.253:8081/admin?stats via Host: 10.0.52.188} {returns: ["/usr/bin/wget: '/usr/lib/libpcre.so.1' is not an ELF file", "/usr/bin/wget: can't load library 'libpcre.so.1'"]}
* VMware: correct guest OS names in hypervisor mappings for VMware 8.0
EL9 and its variants were introduced by https://github.com/apache/cloudstack/pull/7059.
They are supported with guest OS identifiers since VMware 8.0; see
https://vdc-repo.vmware.com/vmwb-repository/dcr-public/c476b64b-c93c-4b21-9d76-be14da0148f9/04ca12ad-59b9-4e1c-8232-fd3d4276e52c/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestOsDescriptor.GuestOsIdentifier.html
* VMware: add Ubuntu 20.04 and 22.04 support for vmware 7.0+
* PR7380: only add guest os mappings for Ubuntu 20.04
* PR7380: Correct RHEL9 guest os names and others for VMware 8.0
* PR7380: correct guest os names on 8.0.0.1 as well
* PR7380: remove Windows 12 and Windows Server 2025 which are not released yet
### Description
Design document: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+Minimal+changes+to+allow+new+dynamic+hypervisor+type%3A+Custom+Hypervisor
This PR introduces the minimal changes needed to add a new hypervisor type (internally named Custom in the codebase, with a configurable display name), allowing an external hypervisor plugin to be written as a Custom hypervisor for CloudStack.
The custom hypervisor name is set by the setting 'hypervisor.custom.display.name'. The new hypervisor type does not affect the behaviour of any CloudStack operation; it simply introduces a new hypervisor type into the system.
CloudStack has no means of dynamically adding new hypervisor types: the hypervisor types are preset by an enum defined within the CloudStack codebase, and unless a new version supports a new hypervisor, it is not possible to add a host of a hypervisor that is not part of the enum. It is, however, possible to implement minimal changes in CloudStack to support a new hypervisor plugin that may be developed privately.
This PR is initial work on allowing new dynamic hypervisor types (it adds a new element to the HypervisorType enum but allows a variable display name for the hypervisor).
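To illustrate the enum approach (a sketch only; the real type is `com.cloud.hypervisor.Hypervisor.HypervisorType`, and the lookup goes through the global setting rather than a supplier argument):
```java
import java.util.function.Supplier;

// Illustrative sketch: the internal name stays 'Custom'; only the display name,
// resolved from 'hypervisor.custom.display.name', varies.
enum HypervisorType {
    KVM, XenServer, VMware, Custom;

    String getDisplayName(Supplier<String> customDisplayNameSetting) {
        return this == Custom ? customDisplayNameSetting.get() : name();
    }
}
```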
##### Proposed Future work:
Replace the HypervisorType from a fixed enum to an extensible registry mechanism, registered from the hypervisor plugin
#### Feature Specifications
- The new hypervisor type is internally named 'Custom' across the CloudStack services (management server and agent services, database records).
- A new global setting, 'hypervisor.custom.display.name', allows administrators to set the display name of the hypervisor type. The display name is shown in the CloudStack UI and API.
- If the 'hypervisor.list' setting contains the display name of the new hypervisor type, its value is automatically updated after the 'hypervisor.custom.display.name' setting is updated.
- The new Custom hypervisor type supports:
- Direct downloads (the ability to download templates into primary storage from the hypervisor hosts without using secondary storage)
- Local storage (use hypervisor hosts local storage as primary storage)
- Template format: RAW format (the templates to be registered on the new hypervisor type must be in RAW format)
- The UI is also extended to display the new hypervisor type and the supported features listed above.
- The above are the minimal changes for CloudStack to support the new hypervisor type, which can be tested by integrating the plugin codebase with this feature.
#### Use cases
This PR allows cloud administrators to test custom hypervisor plugin implementations in CloudStack and easily integrate them as a new hypervisor type ("Custom"), reducing the implementation effort to the hypervisor-specific storage/networking support and the hypervisor resource that communicates with the management server.
- CloudStack admins should be able to create a zone for the new custom hypervisor and add clusters and hosts to the zone with normal operations
- CloudStack users should be able to execute normal VM/volume/network/storage operations on VMs and volumes running on the custom hypervisor hosts
* 4.18:
server: remove registered userdata when cleanup an account (#7777)
server: Use max secondary storage defined on the account during upload (#7441)
test: upgrade kubernetes versions to 1.25.0/1.26.0 (#7685)
kvm: Added VNI Devices as normal bridge slave devs (#7836)
noVNC: fix JP keyboard on vmware7+ which uses websocket URL (#7694)
The CPU cap limitation was enabled as part of
https://github.com/apache/cloudstack/pull/6420, which changes behaviour
for existing environments. The CPU cap limitation on KVM causes
system VMs to not start, or to be really slow, in nested and virtualised
environments.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.18:
UI: allow new keys for VM details (#7793)
Refactoring StorPool's smoke tests (#7392)
UI: decode userdata in EditVM dialog (#7796)
packaging: unalias cp before package upgrade (#7722)
make NoopDbUpgrade do a systemvm template check (#7564)
UI unit test: fix expected values (#7792)
* 4.18:
UI: Filter templates by zone and hypervisor type when reinstall a VM (#7739)
KVM: fix SSVM starting when overprovisioning memory (#7663)
pom.xml: add property project.systemvm.template.location (#7706)
cloudutils: fix adding rocky9 host failure due to missing /etc/sysconfig/libvirtd (#7779)
server: get id from persisted object ReservationVO (#7785)
search in (too) large result sets (#7766)
ui: fix 404 error when list volumes of system vms (#7772)
packaging: install tzdata-java on centos7/centos8 (#7768)
Fixes the case of appending userdata when both template and VM data are either shell script or cloud-config.
Fixes an error when appending gzip userdata.
Fixes the case where manual userdata text from the VM is not decoded/encoded correctly.
Fixes the case of appending multipart data when both template and VM data contain the same format types.
Refactor - moved validateUserData method to UserDataManager class
Refactor userdata test to check resultant multipart userdata thoroughly
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* 4.18:
Storage and volumes statistics tasks for StorPool primary storage (#7404)
proper storage construction (#6797)
guarantee MAC uniqueness (#7634)
server: allow migration of all VMs with local storage on KVM (#7656)
Add L2 networks to Zones with SG (#7719)
When a VM deployment fails during the allocation process, it may leave behind resources that were already allocated before the failure. These get removed from the database, as the whole code block is wrapped inside a transaction (twice), but the server does not inform the network or storage plugins to clean up the allocated resources.
This PR removes the transactions during VM allocation, so the allocated VM and its resource records are persisted in the DB even on failure. When a failure is encountered, the VM is moved to the Error state. This allows the VM and its resources to be properly deallocated when it is expunged, either by a server task such as ExpungeTask or during a manual expunge.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
There are tools like cluster-api which create and manage Kubernetes clusters on CloudStack. This PR adds the option to add unmanaged Kubernetes clusters which are not managed by the CKS plugin. This helps provide a consolidated view of unmanaged clusters on CloudStack. The changes ensure that operations for managed clusters are not executed on unmanaged clusters.
Two new APIs have also been added:
1. addVirtualMachinesToKubernetesCluster - to add VMs to unmanaged clusters.
2. removeVirtualMachinesFromKubernetesCluster - to remove VMs from unmanaged clusters.
Two APIs have been updated:
1. createKubernetesCluster - made KUBERNETES_VERSION_ID, SERVICE_OFFERING_ID and SIZE not required for unmanaged clusters. Adds an additional parameter, managed, which is true by default.
2. listKubernetesClusters - adds a managed parameter to filter on the managed field.
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Fixes the case of an account/domain having a negative storage count when there is a volume size difference while deploying volumes on certain storages. In the case of PowerFlex, the volume size always results in a multiple of 8 GB: if a user deploys a 12 GB volume, it ends up 16 GB in size, but CloudStack previously did not update the resource count after deployment.
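For illustration, the rounding that produces the discrepancy (the names here are ours, not the plugin's):
```java
public class PowerFlexSizing {
    // PowerFlex provisions volumes in multiples of 8 GB, so requests are rounded up.
    static long roundUpToGranularity(long requestedGb, long granularityGb) {
        return ((requestedGb + granularityGb - 1) / granularityGb) * granularityGb;
    }

    public static void main(String[] args) {
        // A 12 GB request is provisioned as 16 GB; the resource counter must use 16.
        System.out.println(roundUpToGranularity(12, 8)); // prints 16
    }
}
```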
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Guest OS mapping improvements
- Checks the OS mapping name in hypervisor (VMware, XenServer)
- Displays guest OS mappings in UI
* Added API getHypervisorGuestOsNames to list the guest OS names in the hypervisor, and code improvements
* Some static analysis fixes
* Removed commented code in listview
* Guest OS list
* UI changes for adding guest os and mappings
* Added guest os mappings in guest os form
* Added new filter to guest os mapping
* Name and description changes
* VMware host and cluster MO unit tests
* CheckGuestOsMapping command and answer unit tests
* GetHypervisorGuestOsNames command and answer unit tests
* VmwareResource unit tests
* GuestOsMapper unit tests
* icon changes
* Addressed review comments
* Renaming fixes
* Removed comments
* marvin tests for guest os operations
* Added marvin tests for OS mappings
* Document links and UI improvements
* Added deduplication for the list guest OS API
* Fixed linter failure
* Few bug fixes and UI changes
* Few improvements
* Addressed code smells
* Fixed UI issues after rebase
---------
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Supported Virtual machine operations:
- live migration of VM to another host
- virtual machine snapshots (group snapshot without memory)
- revert VM snapshot
- delete VM snapshot
Supported Volume operations:
- attach/detach volume
- live migrate volume between two StorPool primary storages
- volume snapshot
- delete snapshot
- revert snapshot
* Live storage migration of volume in scaleIO within same storage scaleio cluster
* Added migrate command
* Recent changes of migration across clusters
* Fixed uuid
* recent changes
* Pivot changes
* working blockcopy api in libvirt
* Checking block copy status
* Formatting code
* Fixed failures
* code refactoring and some changes
* Removed unused methods
* removed unused imports
* Unit tests to check if volume belongs to same or different storage scaleio cluster
* Unit tests for volume livemigration in ScaleIOPrimaryDataStoreDriver
* Fixed offline volume migration case and allowed encrypted volume migration
* Added more integration tests
* Support for migration of encrypted volumes across different scaleio clusters
* Fix UI notifications for migrate volume
* Data volume offline migration: save encryption details to destination volume entry
* Offline storage migration for scaleio encrypted volumes
* Allow multiple Volumes to be migrated with migrateVirtualMachineWithVolume API
* Removed unused unittests
* Removed duplicate keys in migrate volume vue file
* Fix Unit tests
* Add volume secrets if they do not exist during volume migrations; secrets get cleared on package upgrades.
* Fix secret UUID for encrypted volume migration
* Added a null check for secret before removing
* Added more unit tests
* Fixed passphrase check
* Add image options to the encrypted volume conversion
In troubleshooting ops issues we see logs like:
Maximum domain resource limits of Type 'user_vm' for Domain Id = 763 is exceeded: Domain Resource Limit = (1 bytes) 1, Current Domain Resource Amount = (0 bytes) 0, Requested Resource Amount = (1 bytes) 1."
However, one value used in the limit check calculation (currentResourceReservation) is missing from the log, which leads to confusion. Above, we appear to be using "0" and requesting 1 with a limit of 1, yet the request was rejected. Without logging all the values used in the calculation, we cannot tell why it failed.
Additionally, if we had that log, it would be clearer that a second bug is occurring. When we query for domain-level resource reservations in "getDomainReservation", the actual SearchBuilder used is listAccountAndTypeSearch, not listDomainAndTypeSearch. As a result, when we call getDomainReservation, the query returns any outstanding domain reservation for any account, as domain ID is not a valid filter for the account search.
This PR:
Increases the detail of the resource limit check logs to include reservation information in checkDomainResourceLimit() and checkAccountResourceLimit()
Fixes getDomainReservation() to use listDomainAndTypeSearch instead of listAccountAndTypeSearch (sketched below)
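A sketch of the corrected check with all four terms logged (field and method names mirror the description above, not the exact CloudStack code):
```java
// Illustrative only: shows why the reservation term must appear in the log line.
class DomainResourceLimitCheck {
    private final long limit;
    private final long currentAmount;
    private final long currentReservation; // the previously-unlogged term

    DomainResourceLimitCheck(long limit, long currentAmount, long currentReservation) {
        this.limit = limit;
        this.currentAmount = currentAmount;
        this.currentReservation = currentReservation;
    }

    void check(long requested) {
        if (currentAmount + currentReservation + requested > limit) {
            // Logging all four terms explains rejections like "0 used, 1 requested, limit 1":
            // a stale or mis-scoped reservation is consuming the headroom.
            throw new IllegalStateException(String.format(
                    "Domain resource limit exceeded: limit=%d, current=%d, reserved=%d, requested=%d",
                    limit, currentAmount, currentReservation, requested));
        }
    }
}
```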
Co-authored-by: Oscar Sandoval <osandovalocana@apple.com>
Fixes #7389
Fixes listing of service offerings for VM scale when the current offering has `disk_offering_strictness=true`
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes an issue observed with multiple zones and direct download templates on KVM, in which a template gets multiple records in the template_store_ref table. When this happens, the template cannot be used as a direct download. In the case of a system VM template using direct download, system VM deployments fail.
This PR fixes #7362 and also updates other search criteria to use the name as an exact match where a keyword is also present.
Made UI changes for roles search to make use of keyword instead of name.
This fix ensures that when a VMware datastore cluster is added as a primary storage pool in CloudStack, all of its child datastores (which already exist in CloudStack) are in the Up state.
For example:
1. Datastore Cluster DS has two child datastores A and B in vCenter. (B is already added as a storage pool in CloudStack)
2. Now try to add datastore cluster DS into CloudStack as a primary storage pool
3. CloudStack tries to add child datastores A and B; since B is already in CloudStack, it reuses the existing storage pool entry and keeps it under the parent storage pool DS.
During step 3 we now check whether B is in the Up state.
* Auto Enable Disable KVM hosts
* Improve health check result
* Fix corner cases
* Script path refactor
* Fix sonar cloud reports
* Fix last code smells
* Add marvin tests
* Fix new line on agent.properties to prevent host add failures
* Send alert on auto-enable-disable and add annotations when the setting is enabled
* Address reviews
* Add a reason for enabling or disabling a host when the automatic feature is enabled
* Fix comment on the marvin test description
* Fix for disabling the feature if the admin has manually updated the host resource state before any health check result
* server: fix error on deleted template vm start
When a VM is deployed with the start flag set to false and the template is deleted before the VM starts, an NPE is observed when it is started for the first time.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix range of configuration `task.cleanup.retry.interval`
* delete unused configuration
* fix on sql
* add name of the PR to the sql
Co-authored-by: Gabriel Ortiga Fernandes <gabriel.fernandes@scclouds.com.br>
Due to a merge conflict and schema changes in the 4.17 branch, the previous
4.17.1 -> 4.18.0 DB upgrade path class was renamed to 4.17.2 -> 4.18.0.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This implements a blank/noop upgrade path from 4.17.1.0 to 4.17.2.0
which implements DbUpgradeSystemVmTemplate to kick the systemvm template
upgrade.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
PR #5909 was created before the announcement of release 4.17.1.0, and its database changes were addressed in the 4.17.0.0 -> 4.18.0.0 migration path. However, #5909 was merged after the 4.17.1.0 release, with the original migration path.
This PR intends to fix the migration path of PR #5909.
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
There is no DB upgrade path between 4.17.1.0 and 4.17.2.0; this adds the
same upgrade path as 4.17.1.0 when the source version is 4.17.2.0.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes an incorrect call that used the service offering's ID while trying to retrieve the linked disk offering.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
ACS + XenServer works with differential snapshots. ACS takes a full volume snapshot, and the next ones are referenced as children of the previous snapshot until the chain reaches the limit defined in the global setting snapshot.delta.max; then a new full snapshot is taken. PR #5297 introduced disk-only snapshots for KVM volumes. Among the changes, the delete process was also refactored. Before the changes, when one removed a snapshot with children, ACS marked it as Destroyed and kept the Image entry in the table cloud.snapshot_store_ref as Ready. When ACS rotated the snapshots (the max delta was reached) and all the children were already marked as removed, ACS would remove the whole hierarchy, completing the differential snapshot cycle. After the changes, snapshots with children stopped being marked as removed, and the differential snapshot cycle was no longer completed.
This PR intends to honor the differential snapshot cycle for XenServer again, marking snapshots as removed when they are deleted while having children, following the differential snapshot cycle.
Also, when one takes a volume snapshot and ACS backs it up to secondary storage, ACS inserts two entries in the table cloud.snapshot_store_ref (Primary and Image). When one deletes a volume snapshot, ACS first tries to remove the snapshot from secondary storage and mark the Image entry as removed; then it tries to remove the snapshot from primary storage and mark the Primary entry as removed. If ACS cannot remove the snapshot from primary storage, it keeps the snapshot as BackedUp, even though it no longer exists in secondary storage and no SNAPSHOT.DELETE entry was written to cloud.usage_event. In the end, after the garbage collector runs, the snapshot is marked as BackedUp with a value in the removed field while still being rated. This PR also addresses the correction of this situation.
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
The description of the configuration secstorage.encrypt.copy fails to mention that it is also used to make sure the certificate assigned to the zone is used when creating links for external access (download/upload of disks, templates and ISOs). This PR improves the description.
Co-authored-by: Gabriel Ortiga Fernandes <gabriel.fernandes@scclouds.com.br>
The alert.email.addresses description is ambiguous and can cause confusion for operators. The description has been altered to avoid this. In addition, typos in alert.smtp.useStartTLS and project.smtp.useStartTLS have been fixed.
Co-authored-by: Stephan Krug <stephan.krug@scclouds.com.br>
* Export count of total/up/down hosts by tags
* Export count of vms by state and host tag.
* Add host tags to host cpu/cores/memory usage in Prometheus exporter
* CloudStack Prometheus exporter: add allocated capacity grouped by host tag.
* Show count of active domains on Grafana.
* Show count of active accounts and VMs by size on Grafana.
* Use a prepared statement to query the database for the number of VMs that use a specific tag.
* Extract repeated code into new methods.
This PR introduces a feature designed to allow CloudStack to manage a generic volume encryption setting. The encryption is handled transparently to the guest OS, and is intended to handle VM guest data encryption at rest and possibly over the wire, though the actual encryption implementation is up to the primary storage driver.
In some cases cloud customers may still prefer to maintain their own guest-level volume encryption if they don't trust the cloud provider. However, for private cloud cases this greatly simplifies the guest OS experience of running volume encryption for guests, without the user having to manage keys, deal with key servers, or have guest booting depend on network connectivity to them (e.g. Tang), especially in cases where users attach/detach data disks and move them between VMs occasionally.
The feature can be thought of as having two parts - the API/control plane (which includes scheduling aspects), and the storage driver implementation.
This initial PR adds the encryption setting to disk offerings and service offerings (for root volume), and implements encryption support for KVM SharedMountPoint, NFS, Local, and ScaleIO storage pools.
NOTE: While not required, operations can be significantly sped up by ensuring that hosts have the `rng-tools` package and service installed and running on the management server and hypervisors. For EL hosts the service is `rngd` and for Debian it is `rng-tools`. In particular, the use of SecureRandom for generating volume passphrases can be slow if there isn't a good source of entropy. This could affect testing and build environments, and otherwise would only affect users who actually use the encryption feature. If you find tests or volume creates blocking on encryption, check this first.
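As an aside, the entropy dependency comes from strong SecureRandom sources; a minimal sketch of the kind of passphrase generation involved (illustrative, not the exact generator):
```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

public class PassphraseGenerator {
    // getInstanceStrong() typically maps to a blocking entropy source on Linux,
    // which is why rngd/rng-tools noticeably speeds this up.
    static String generate() throws NoSuchAlgorithmException {
        byte[] bytes = new byte[32];
        SecureRandom.getInstanceStrong().nextBytes(bytes);
        return Base64.getEncoder().encodeToString(bytes);
    }
}
```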
### Management Server
##### API
* createDiskOffering now has an 'encrypt' Boolean
* createServiceOffering now has an 'encryptroot' Boolean. The 'root' suffix is added here in case there is ever any other need to encrypt something related to the guest configuration, like the RAM of a VM. This has been refactored to deal with the new separation of service offering from disk offering internally.
* listDiskOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listServiceOfferings shows encryption support on each offering, and has an encrypt boolean to choose to list only offerings that do or do not support encryption
* listHosts now shows encryption support of each hypervisor host via `encryptionsupported`
* Volumes themselves don't show encryption on/off, rather the offering should be referenced. This follows the same pattern as other disk offering based settings such as the IOPS of the volume.
##### Volume functions
A decent effort has been made to ensure that the most common volume functions have either been cleanly supported or blocked. However, for the first release it is advised to mark this feature as *experimental*, as the code base is complex and there are certainly edge cases to be found.
Many of these features could eventually be supported over time, such as creating templates from encrypted volumes, but the effort and size of the change is already overwhelming.
Supported functions:
* Data Volume create
* VM root volume create
* VM root volume reinstall
* Offline volume snapshot/restore
* Migration of VM with storage (e.g. local storage VM migration)
* Resize volume
* Detach/attach volume
Blocked functions:
* Online volume snapshot
* VM snapshot w/memory
* Scheduled snapshots (would fail when VM is running)
* Disk offering migration to offerings that don't have matching encryption
* Creating template from encrypted volume
* Creating volume from encrypted volume
* Volume extraction (would we decrypt it first, or expose the key? Probably the former).
##### Primary Storage Support
For storage developers, adding encryption support involves two steps (sketched after this list):
1. Updating the `StoragePoolType` for your primary storage to advertise encryption support. This is used during allocation of storage to match storage types that support encryption to storage that supports it.
2. Implementing encryption feature when your `PrimaryDataStoreDriver` is called to perform volume lifecycle functions on volumes that are requesting encryption. You are free to do what your storage supports - this could be as simple as calling a storage API with the right flag when creating a volume. Or (as is the case with the KVM storage types), as complex as managing volume details directly at the hypervisor host. The data objects passed to the storage driver will contain volume passphrases, if encryption is requested.
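A rough sketch of those two integration points (type and method names are illustrative, not the actual interfaces):
```java
// 1. The pool type advertises encryption support so the allocator can match
//    volumes that request encryption to storage that can provide it.
enum StoragePoolType {
    NetworkFilesystem(true), SharedMountPoint(true), Iscsi(false);

    private final boolean supportsEncryption;
    StoragePoolType(boolean supportsEncryption) { this.supportsEncryption = supportsEncryption; }
    boolean supportsEncryption() { return supportsEncryption; }
}

// 2. The driver acts on the passphrase carried by the data object, if any.
class ExamplePrimaryDataStoreDriver {
    void createVolume(byte[] passphrase, String format) {
        if (passphrase != null && passphrase.length > 0) {
            // Encryption requested: e.g. format the device as LUKS, or set a
            // flag on the backend storage API call -- whatever the storage supports.
        }
        // ... regular volume creation path ...
    }
}
```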
##### Scheduling
For the KVM implementations specified above, we are dependent on the KVM hosts having support for volume encryption tools. As such, the host's `StartupRoutingCommand` has been modified to advertise whether the host supports encryption. This is done via a probe during agent startup that looks for a functioning `cryptsetup` and support in `qemu-img`. This is also visible via the listHosts API and the host details in the UI. This was patterned after other features that require hypervisor support, such as UEFI.
The `EndPointSelector` interface and `DefaultEndpointSelector` have had new methods added, which allow the caller to ask for endpoints that support encryption. This can be used by storage drivers to find the proper hosts to send storage commands that involve encryption. Not all volume activities will require a host to support encryption (for example a snapshot backup is a simple file copy), and this is the reason why the interface has been modified to allow for the storage driver to decide, rather than just passing the data objects to the EndpointSelector and letting the implementation decide.
VM scheduling has also been modified. When a VM start is requested, if any attached volume requires encryption, hosts that don't support encryption are filtered out, as sketched below.
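The host filtering can be pictured as a simple predicate (illustrative only):
```java
import java.util.List;
import java.util.stream.Collectors;

class EncryptionHostFilter {
    record Host(String name, boolean encryptionSupported) {}

    // If any attached volume requires encryption, only hosts advertising
    // encryption support (via StartupRoutingCommand) remain candidates.
    static List<Host> eligibleHosts(List<Host> candidates, boolean vmNeedsEncryption) {
        if (!vmNeedsEncryption) {
            return candidates;
        }
        return candidates.stream().filter(Host::encryptionSupported).collect(Collectors.toList());
    }
}
```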
##### DB Changes
A volume whose disk offering enables encryption will get a passphrase generated for it before its first use. This is stored in the new 'passphrase' table, and is encrypted using the CloudStack installation's standard configured DB encryption. A field has been added to the volumes table, referencing this passphrase, and a foreign key added to ensure passphrases that are referenced can't be removed from the database. The volumes table now also contains an encryption format field, which is set by the implementer of the encryption and used as it sees fit.
#### KVM Agent
For the KVM storage pool types supported, the encryption has been implemented at Qemu itself, using the built-in LUKS storage support. This means that the storage remains encrypted all the way to the VM process and is decrypted before the block device is visible to the guest. This may not be necessary in order to implement encryption for /your/ storage pool type; maybe you have a kernel driver that decrypts before the block device on the system, or something like that. However, it seemed like the simplest, common place to terminate the encryption, and it provides the lowest surface area for decrypted guest data.
For qcow2 based storage, `qemu-img` is used to set up a qcow2 file with LUKS encryption. For block based (currently just ScaleIO storage), the `cryptsetup` utility is used to format the block device as LUKS for data disks, but `qemu-img` and its LUKS support is used for template copy.
Any volume that requires encryption will contain a passphrase ID as a byte array when handed down to the KVM agent. Care has been taken to ensure this doesn't get logged, and it is cleared after use in an attempt to avoid exposing it before garbage collection occurs. On the agent side, this passphrase is used in two ways:
1. In cases where the volume experiences some libvirt interaction, it is loaded into libvirt as an ephemeral, private secret and then referenced by secret UUID in any libvirt XML. This applies to things like VM startup, migration preparation, etc.
2. In cases where `qemu-img` needs the passphrase for volume operations, it is written to a `KeyFile` on the CloudStack agent's configured tmpfs and passed along. The `KeyFile` is a `Closeable`; when it is closed, it is deleted. This allows us to try-with-resources any volume operation and get the `KeyFile` removed regardless (a minimal sketch follows).
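A minimal sketch of that pattern, assuming tmpfs-backed temp files (names are illustrative):
```java
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Write the passphrase to a file on tmpfs, hand the path to qemu-img, and
// guarantee deletion via try-with-resources.
class KeyFile implements Closeable {
    private final Path path;

    KeyFile(Path tmpfsDir, byte[] passphrase) throws IOException {
        this.path = Files.createTempFile(tmpfsDir, "keyfile", null);
        Files.write(path, passphrase);
    }

    Path getPath() { return path; }

    @Override
    public void close() throws IOException {
        Files.deleteIfExists(path); // removed even if the qemu-img call throws
    }
}
```
A caller would do `try (KeyFile key = new KeyFile(tmpfsDir, passphrase)) { /* run qemu-img with key.getPath() */ }` so the file disappears regardless of the outcome.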
In order to support the advanced syntax required to handle encryption and passphrases with `qemu-img`, the `QemuImg` utility has been modified to support the new `--object` and `--image-opts` flags. These are modeled as `QemuObject` and `QemuImageOptions`. These `qemu-img` flags have been designed to supersede some of the existing, older flags being used today (such as choosing file formats and paths), and an effort could be made to switch over to these wholesale. However, for now we have instead opted to keep existing functions and do some wrapping to ensure backward compatibility, so callers of `QemuImg` can choose to use either way.
It should be noted that there are also a few different enums that represent the encryption format for various purposes. While these are analogous in principle, they represent different things and should not be confused. For example, the supported encryption formats for the `cryptsetup` utility include `LuksType.LUKS`, while `QemuImg` has `QemuImg.PhysicalDiskFormat.LUKS`.
Some additional effort could potentially be made to support advanced encryption configurations, such as choosing between LUKS1 and LUKS2 or changing cipher details. These may require changes all the way up through the control plane. However, in practice Libvirt and Qemu only support LUKS1 today. Additionally, the cipher details aren't required in order to use an encrypted volume; as they're stored in the LUKS header on the volume, there is no need to store them elsewhere. As such, we need only set the one encryption format upon volume creation, which is persisted in the volumes table and available later as needed. In the future, when LUKS2 is standard and fully supported, we could move to it as the default; old volumes will still reference LUKS1 and have the headers on-disk to ensure they remain usable. We could also possibly support an automatic upgrade of the headers down the road, or a volume migration mechanism.
Every version of cryptsetup and qemu-img tested on variants of EL7 and Ubuntu that support encryption use the XTS-AES 256 cipher, which is the leading industry standard and widely used cipher today (e.g. BitLocker and FileVault).
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
This PR allows the cloud admin to set either a global or domain-specific value, "metadata.allow.expose.domain"; when set, this allows the VM to see the name and ID of the immediate domain that contains the VM in instance metadata. This can be useful for a variety of things, such as bootstrapping VM configuration and access according to domain.
This PR also deletes the CloudZonesNetworkElement because it isn't referred to anywhere, and there was initially some confusion as to whether this code needed to be updated when extending metadata. If it needs to be kept we can remove that delete from the PR.
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
This PR addresses parallel resource allocation as a generalization of the problem and solution described in #6644. Instead of a global lock on the resources, a reservation record is created, which is added to the resource check count in the ResourceLimitService/ResourceLimitManagerImpl. As a convenience, a CheckedReservation is provided. This is an implementation of AutoCloseable and can be used as a guard in try-with-resources fashion. The close method of the CheckedReservation will delete the reservation record (sketched below).
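A sketch of the pattern (the store interface and names are illustrative, mirroring the description rather than the actual class):
```java
// The reservation row is persisted up front, counted by the limit check,
// and deleted on close -- whether or not the allocation succeeded.
class CheckedReservation implements AutoCloseable {
    interface ReservationStore {
        long persist(long accountId, String resourceType, long amount);
        void delete(long reservationId);
    }

    private final ReservationStore store;
    private final long reservationId;

    CheckedReservation(ReservationStore store, long accountId, String resourceType, long amount) {
        this.store = store;
        // Persisting first means concurrent allocations see each other's demand.
        this.reservationId = store.persist(accountId, resourceType, amount);
    }

    @Override
    public void close() {
        store.delete(reservationId);
    }
}
```
Usage follows the try-with-resources shape: `try (CheckedReservation r = new CheckedReservation(store, accountId, "user_vm", 1)) { /* allocate */ }`.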
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
This PR creates a new API, createConsoleAccess, to create a VM console URL, allowing connections from other UI implementations. To avoid replay attacks, console access is enhanced to use a one-time token per session.
New configuration added:
consoleproxy.extra.security.validation.enabled: Enable/disable extra security validation for console proxy using a token
Documentation PR: apache/cloudstack-documentation#284
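The one-time-token idea can be illustrated as follows (a sketch under assumed semantics, not the console proxy code):
```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class OneTimeConsoleTokens {
    private final Map<String, String> tokenToSession = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Mint a fresh token bound to the session; it would be embedded in the console URL.
    String issue(String sessionId) {
        byte[] raw = new byte[32];
        random.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        tokenToSession.put(token, sessionId);
        return token;
    }

    // remove() makes validation atomic and single-use, so a captured URL cannot be replayed.
    boolean validateAndConsume(String token, String sessionId) {
        return sessionId.equals(tokenToSession.remove(token));
    }
}
```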
This PR tries to fix a problem with a privately backported feature. The columns added for the feature are not added idempotently, so people cannot backport them. I propose that all DB alterations from here on be done with the IDEM_POTENT_...() set of stored procedures that we have, to prevent this kind of issue for users.
Adds option to provide custom DNS servers for isolated network, shared network and VPC tier.
New API parameters added in createNetwork API along with the corresponding response parameters.
Doc PR: apache/cloudstack-documentation#276
* Updated resource counter to include correct size after volume creation/resize and other improvements
- Recalculate resource counters for root domain in the periodic task
- Update correct size in the primary_storage resource counter after volume creation/resize
- Some code improvements
* review and sonarcloud issues
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: Daan Hoogland <daan@onecht.net>
This PR increases the size of the value column in the account_details table from 255 characters to 4096, matching the value allowed in the API command for updating the configuration of accounts.
When the value length is bigger than 255, the following log is presented right after the updateConfiguration API call:
2022-03-09 17:50:24,627 ERROR [c.c.a.ApiServer] (qtp30578394-234766:ctx-cad18b45 ctx-32e954dd) (logid:0948e203) unhandled exception executing api command: [Ljava.lang.String;@117c6ba7
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: com.mysql.cj.jdbc.ClientPreparedStatement: INSERT INTO account_details (account_details.account_id, account_details.name, account_details.value) VALUES (123, _binary'api.allowed.source.cidr.list', _binary'<huge binary>')
at com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1450)
at jdk.internal.reflect.GeneratedMethodAccessor168.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
....
....
....
Caused by: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'value' at row 1
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:104)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeInternal(ClientPreparedStatement.java:953)
at com.mysql.cj.jdbc.ClientPreparedStatement.executeUpdateInternal(ClientPreparedStatement.java:1092)
... 83 more
Co-authored-by: Bart Meyers <bart.meyers@cldin.eu>
Fixes #6455
The default storage adaptor - LibvirtStorageAdaptor - is used by different storage types and doesn't use the @StorageAdaptorInfo annotation. In this case, a storage plugin that wants to adopt one of the predefined storage pool types overrides the default behaviour. Fixing the issue in general (for new storage plugins, or current ones that want to reuse the existing storage pool types) would affect all volume/snapshot/VM cases and would require extensive testing for each storage plugin, for which we don't have the resources. That's why this patch fixes the old behaviour for SharedMountPoint by adding a new storage pool type for the StorPool plugin.
Release 4.16.0.0 introduced a feature for migrating system VM volumes (#4385). However, it was enabled only for VMware.
This PR intends to enable the feature for KVM too.
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
This PR fixes issue #6209, where the snapshot revert operation fails after certain volume operations like migrate VM with volume, migrate volume, or reinstall VM.
The root cause is that, after these volume operations, the primary storage entry for the volume gets deleted. The fix retrieves the primary datastore entry for the volume and continues the operation.
* add global setting to allow parallel execution on vmware
* cleanup setting distribution for vmware.create.full.clone
* query setting in vmware guru
* don't touch other hypervisors' commands
* guru hierarchy cleanup
- Refactor IPv6 related tests
- Adds smoke test for IPv4 network to IPv6 upgrade
- Adds smoke test for IPv6 VPC
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
While deleting a traffic type, ACS validates whether there is any VM related to it. However, if we have several physical networks containing a traffic type, ACS does not filter by physical network when doing the validation. For instance, if we have two (2) physical networks containing the traffic type Guest, the first with VMs related to it and the second without, and we try to remove the second traffic type, ACS gives us the message "The Traffic Type is not deletable because there are existing networks with this traffic type:Guest".
The API deleteTrafficType was designed to filter by the physical network where the traffic type is; however, due to a typo this filtering was not being applied correctly. This PR fixes the typo to honor the API behavior.
In an advanced zone I created 4 physical networks, one for each traffic type (Public, Guest, Management, Storage). I instantiated some VMs so they got guest IPs. In the Public physical network I added a Guest traffic type. I tried to remove the new Guest traffic type from the Public physical network, which did not have any VMs related to it, and, before the changes, I was getting the message "The Traffic Type is not deletable because there are existing networks with this traffic type:Guest". After the changes, I could successfully remove the new Guest traffic type via the API deleteTrafficType. I also tried to remove the Guest traffic type which had VMs related to it; as expected, I received the "The Traffic Type is not deletable..." message.
I also created a unit test to validate the data retrieval.
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
This PR enhances the existing PowerFlex/ScaleIO storage plugin to support a separate (storage) network for the host (KVM)/storage connection, mainly the SDC (ScaleIO Data Client) connection.
* refactor and log trace
* tracelogs
* shuffle pools with real randomiser
* single retrieval of async job context
* some review comments addressed
* Apply suggestions from code review
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* log formatting
* integration test for distribution of volumes over storages
* move test to smoke tests
* imports
* sonarcloud issue # AYCOmVntKzsfKlhz0HDh
* spellos
* review comments
* review comments
* sonarcloud issues
* unittest
* import
* Update AbstractStoragePoolAllocatorTest.java
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Prevent NPE on reboot stopped VM
* Use VM UUID instead of VM ID
* Apply suggestion
* Refactor and fix start VM output
* Use format instead of concatenation
* ms stats thread added
* initial data collection for management server
* empty list management server metrics command
* bean copy into MS metrics object
* ms status VO
* further API and DB plumbing
* minimal metrics response in API
* remove commented, refactor data collection plumbing
* javadocs
* suppress stacktrace on expected error
* update status experiment
* ms status publish framework added
* review comment addressed
* static data to DB and API, /proc/ reading
* addressing review comments
* ui for ms details
* small ui adjustment
* beanCopy
* agentcount response and system parameter
* labels
* package-lock
* add version strings to regular list API
* add shutdown time to DB
* add last start and last stop to regular list response
* distro info in regular response/session count added
* metrics as details
* add heap used and remove details map
* thread statuses
* move db upgrade to 4.17
* sysmem
* procmem
* ui demo comments applied
* javadoc
* get conf and log file locations
* loginfo
* cpuLoadStats
* no.remote
* extra spaces removed
* clusterlistener
* add unit to kb value
* revert accidental rename
* silly fqcn removed
* get mem info from bean where possible
* refactor long sequence for readability
* registerListener
* listUsageMetrics and isDbLocal
* rats
* local usage and db or not
* minimal listDbMetrics
* db vars and stats
* cleanup and #queries queried
* db stats calculation
* rat
* remove list response wrapper from single details-lists responses
* rudimentary metrics view
* metrics table cleanup
* table makeup, collection dates
* move component to appropriate location
* capitalisation removed
* rebase error resolved
* rename deamon to daemon
* small style comments applied
* another merge issue
* naming comments and boot time
* stop/start prefixed with server
* layout-fix
* listMSMetrics test and test refactor
* usage metrics test
* db metrics test
* extra validations
* Update ui/public/locales/en.json
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* descriptions of load averages and replicas
* collection time on top
* cpu load on metrics overview
* DbStatsCollection
* some parameter description texts
* labels adjusted
* new output 'kernelversion' and log info cleanup
* labels
* Update api/src/main/java/com/cloud/server/ManagementServerHostStats.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/response/DbMetricsResponse.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/dao/ManagementServerHostDao.java
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
* Update api/src/main/java/org/apache/cloudstack/api/response/ManagementServerResponse.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update api/src/main/java/org/apache/cloudstack/api/response/ManagementServerResponse.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update engine/schema/src/main/java/com/cloud/host/dao/HostDao.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/ClusterManager.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update framework/cluster/src/main/java/com/cloud/cluster/dao/ManagementServerHostDao.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update server/src/main/java/com/cloud/server/StatsCollector.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
* Update plugins/metrics/src/main/java/org/apache/cloudstack/metrics/MetricsServiceImpl.java
* some (more) refactoring suggestions applied
* human readable memory sizes
* rat
* actual collection time instead of query time, improved descriptions
* merge errors fixed
* optional metric values
* javadoc and logging
* names of jmx vars have changed
* vue3-compatibility
* new output parameter type
* lower retention default
* vue3 fixes
* polish comments
* polish comments 2, the reckoning
* note on usage servers
* merge conflict errors
* polish
* conditional assertion to deal with simulator restart
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: Rodrigo D. Lopez <19981369+RodrigoDLopez@users.noreply.github.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Support for live patching systemVMs and deprecating systemVM.iso. Includes:
- fix systemVM template version
- Include agent.zip, cloud-scripts.tgz to the commons package
- Support for live-patching systemVMs - CPVM, SSVM, Routers
- Fix Unit test
- Remove systemvm.iso dependency
* The following commit:
- refactors logic added to support SystemVM deployment on KVM
- Adds support to copy specific files (required for patching) to the hosts on XenServer
- Modifies vmops method createFileInDomr to take a cleanup param
- Adds a configurable sleep param to CitrixResourceBase::connect(), used to verify if telnet to a specific port is possible (if sleep is 0, then default to _sleep = 10000ms)
- Adds Command/Answer for patch systemVMs on XenServer/Xcp
* - Support to patch SystemVMs - VMware
- Remove attaching systemvm.iso to systemVMs
- Modify / Refactor VMware start command to copy patch related files to the systemvms
- cleanup
* Commit comprises:
- remove docker from systemvm template - use containerd as container runtime
- update create-k8s-binaries script to use ctr for all docker operations
- Update userdata sent to the k8s nodes
- update cksnode script, run during patching of the cks/k8s nodes
* Add ssh to k8s nodes details in the Access tab on the UI
* test
* Refactor ca/cert patching logic
* Commit comprises the following changes:
- Use restart network/VPC API to patch routers
- use the livePatch API to support patching of only CPVM/SSVM
- add timeout to the keystore setup/import script
* remove all references of systemvm.iso
* Fix keystore-cert-import invocation + refactor cert timeout in CP/SS VMs
* fix script timeout
* Refactor cert patching for systemVMs + update keystore-cert-import script + patch-sysvms script + remove patchSysvmCommand from networkelementcommand
* remove commented code + change core user to cloud for cks nodes
* Update ownership of ssh directory
* NEED TO DISCUSS - add on the fly template conversion as an ExecStartPre action (systemd)
* Add UI changes + move changes from patch file to runcmd
* test: validate performance for template modification during seeding
* create vms folder in cloudstack-commons directory - debian rules
* remove logic for on the fly template convert + update k8s test
* fix syntax issue - causing issue with shared network tests
* Code cleanup
* refactor patching logic - certs
* move logic of fixing rootdiskcontroller from upgrade to kubernetes service
* add livepatch option to restart network & vpc
* smooth upgrade of cks clusters
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* address comments
* add temp links of download.cloudstack.org
* address part of the comments
* address comments
* update containerd config - as version has upgraded to 1.5 from 1.4.12 in 4.17.0
* address comments - simplify
* fix vue3 related icon changes
* allow network commands when router template version is lower but is patched
* add internal LB to the list of routers to be patched on network restart with live patch
* add unit tests for API param validations and new helper utilities - file scp & checksum validations
* perform patching only for non-user i.e., system VMs
* add test to validate params
* remove unused import
* add column to domain_router to display software version and support network restart with livePatch from router view
* 'Requires upgrade' column now considers the package (cloud-scripts) checksum to determine true/false
* use router software version instead of checksum
* show N/A if no software version reported i.e., in upgraded envs
* fix deb failure
* update pom to official links of systemVM template