* Rough start swapping DB Encryption, add CLI PoC
* Enhance EncryptionCLI to have command line parsing
* Refactor new encryption behind AeadBase64Encryptor for every use
* Add comment about encryption passwords
* EncryptionSecretKeyChanger - use reflection to find all encrypted tables
Over the years this list hasn't been kept up to date. Use reflection to find
the tables with encrypted fields instead; this also ensures that any plugins on
the classpath that add tables will get their encrypted fields updated as well
(see the reflection sketch after the table list below).
Table vpn_users has encrypted columns [password]
Table sslcerts has encrypted columns [password, key]
Table user_view has encrypted columns [secret_key]
Table account_details has encrypted columns [value]
Table domain_details has encrypted columns [value]
Table s2s_customer_gateway has encrypted columns [ipsec_psk]
Table ucs_manager has encrypted columns [password]
Table vm_instance has encrypted columns [vnc_password]
Table passphrase has encrypted columns [passphrase]
Table keystore has encrypted columns [key]
Table external_stratosphere_ssp_credentials has encrypted columns [password]
Table storage_pool has encrypted columns [user_info]
Table remote_access_vpn has encrypted columns [ipsec_psk]
Table user has encrypted columns [secret_key]
Table oobm has encrypted columns [password]
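A minimal sketch of such a reflection scan, assuming JPA-style @Table/@Column annotations plus an @Encrypt marker on the VO fields (all three are declared locally below so the example compiles on its own; CloudStack's DAO/DB layer has its own equivalents):
```
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EncryptedColumnScanner {

    // Local stand-ins for JPA-style @Table/@Column and an @Encrypt marker annotation,
    // declared here only so this sketch is self-contained.
    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
    @interface Table { String name(); }

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
    @interface Column { String name(); }

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
    @interface Encrypt { }

    // Example entity: the real scan would walk every VO class found on the classpath,
    // so plugin-provided tables are picked up automatically.
    @Table(name = "vpn_users")
    static class VpnUserVO {
        @Column(name = "email") String email;
        @Encrypt @Column(name = "password") String password;
    }

    /** Returns table name -> list of encrypted column names for the given entity classes. */
    static Map<String, List<String>> findEncryptedColumns(List<Class<?>> entities) {
        Map<String, List<String>> result = new LinkedHashMap<>();
        for (Class<?> clazz : entities) {
            Table table = clazz.getAnnotation(Table.class);
            if (table == null) {
                continue;
            }
            List<String> columns = new ArrayList<>();
            for (Field field : clazz.getDeclaredFields()) {
                if (field.isAnnotationPresent(Encrypt.class) && field.isAnnotationPresent(Column.class)) {
                    columns.add(field.getAnnotation(Column.class).name());
                }
            }
            if (!columns.isEmpty()) {
                result.put(table.name(), columns);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        findEncryptedColumns(List.of(VpnUserVO.class))
                .forEach((t, cols) -> System.out.println("Table " + t + " has encrypted columns " + cols));
    }
}
```
Running this over the VO classes on the classpath would produce exactly the kind of per-table listing shown above.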
* Apple FR68: add new class CloudStackEncryptor
* Apple FR68: add interface com.cloud.utils.crypt.Encryptor
* Apple FR68: update com.cloud.utils.EncryptionUtil
* Apple FR68: add cloudstack-utils.jar to cloudstack-common package
* Apple FR68: use cloudstack-utils.jar in scripts
* Apple FR68: revert replace.properties to original version
* Apple FR68: update EncryptionSecretKeyChanger
* Apple FR68: Add EncryptorVersion to CloudStackEncryptor
* Apple FR68: Update com.cloud.utils.crypt.EncryptionCLI
* Apple FR68: Remove check on EncryptionSecretKeyChecker.useEncryption in CloudStackEncryptor
* Apple FR68: update EncryptionSecretKeyChanger part2
* Apple FR68: update EncryptionSecretKeyChanger part3 (force update)
* Apple FR68: move cloud-migrate-databases.in to deprecated and recreate it with java command
* Apple FR68: update EncryptionSecretKeyChanger part4 (add skip-database-migration)
* Apple FR68: set encryptor in first encryption in CloudStackEncryptor
* Apple FR68: save db.cloud.encryptor.version in db.properties
* Apple FR68: update EncryptionSecretKeyChanger part4 (clear db.cloud.encryptor.version)
* Apple FR68: load and save db.cloud.encryptor.version in db.properties
* Apple FR68: Add caller class name in debug messages
* Apple FR68: consider non-existent tables and columns
* Apple FR68: skip tables if no data exists
* Apple FR68: remove GeneralSecurityException from code
* Apple FR68: hide value with Asterisks in CloudStackEncryptor
* Apple FR68: log an error message when fail to load 'init'
* Apple FR68: remove setup/bindir/cloud-migrate-databases.deprecated.in which I think is not needed
* Apple FR68: add new encryptor version to EncryptionSecretKeyChanger
* Apple FR68: use System.exit(1) in EncryptionSecretKeyChanger
* Apple FR68: check arguments in cloudstack-migrate-databases
* Apple FR68: remove all org.jasypt.* in code
* Apple FR68: initialize database encryptors by getting 'init'
* Apple FR68: migrate server.properties
* Apple FR68: load new management key from environment variable CLOUD_SECRET_KEY_NEW
* Apple FR68: fix unable to load 'init' in fresh installation
* Apple FR68: fix 'Rolling back the transaction' in txn.close
* Apple FR68: improve logging in cloudstack-migrate-databases
* Apple FR68: hide value with Asterisks in other encryptors
* Apple FR68: System.exit(1) if fail to migrate server.properties
* Apple FR68: migrate values from cluster_details,user_vm_details,etc
* Apple FR68: refactor EncryptionSecretKeyChanger
* Apple FR68: update user_vm_deploy_as_is_details values
* Apple FR68: update image_store.url (if protocol is cifs) and storage_pool.path (if pool_type is SMB)
* Apple FR68: minor improvement EncryptionSecretKeyChanger
* Apple FR68: add unit test EncryptionSecretKeyChangerTest
* Apple FR68: support encryption type 'env' in cloudstack-setup-databases to read the env variable "CLOUD_SECRET_KEY" before the passed value
* Apple FR68: rename Encryptor to Base64Encryptor
* Apple FR68: Backport community PR 6542
* Apple FR68: code optimization
* Apple FR68: use Options and StringUtils
* Apple FR68: add license headers
* Apple FR68: refactor CloudStackEncryptor as per Daan's review
* Apple FR68: refactor DatabaseUpgradeChecker as per Daan's review
* Apple FR68: show error message in usage.log if fail to get encrypted configurations
* Apple FR68: load new MS key from env before migration
* Apple FR68: return 1 if fail to parse arguments of EncryptionCLI
* Apple FR68: fix code smells
* Apple FR68: fix code smells (part2)
* Apple FR68: revert FOOTER of cloudstack-migrate-databases to use \n
* Apple FR68: update help message of cloudstack-setup-databases
* Apple FR68: fix code smells (part3)
* Apple FR68: make changes as per suggestions
* Apple FR68: migrate database if new encryptor version is set to a different value
Testing results (assuming db.cloud.encryptor.version=V1):
(1) migrate only db.properties (same db key, same db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -v V1
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is not migrated
(2) migrate only db.properties (same db key, new db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -v V2 --skip-database-migration
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V2
cloudstack database is not migrated (mostly on secondary management servers)
(3) migrate only db.properties (same db key, db encryptor version is not set)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is not migrated
(4) migrate only db.properties (different db key, same db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -e newdbkey -v V1 --skip-database-migration
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is not migrated (mostly on secondary management servers)
(5) migrate only db.properties (different db key, new db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -e newdbkey -v V2 --skip-database-migration
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V2
cloudstack database is not migrated (mostly on secondary management servers)
(6) migrate only db.properties (different db key, db encryptor version is not set)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -e newdbkey --skip-database-migration
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is not migrated (mostly on secondary management servers)
(7) migrate db.properties and database (same db key, same db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -v V1 --force-database-migration
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is migrated using encryptor V1
(8) migrate db.properties and database (same db key, new db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -v V2
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V2
cloudstack database is migrated using encryptor V2
(9) migrate db.properties and database (same db key, db encryptor version is not set)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey --force-database-migration
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is migrated using encryptor V1
(10) migrate db.properties and database (different db key, same db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -e newdbkey -v V1
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is migrated using encryptor V1
(11) migrate db.properties and database (different db key, new db encryptor version)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -e newdbkey -v V2
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V2
cloudstack database is migrated using encryptor V2
(12) migrate db.properties and database (different db key, db encryptor version is not set)
Command: /usr/bin/cloudstack-migrate-databases -m mgmtkey -d dbkey -n newmgmtkey -e newdbkey
Changes: db.cloud.encrypt.secret is encrypted by V2 (always)
db.cloud.encryptor.version=V1
cloudstack database is migrated using encryptor V1
* smoke test: fix test_primary_storage.py
* smoke test: Do NOT run tests in test_primary_storage.py in parallel
This also fixes an issue in detachvolume
'Failed to detach volume Test Volume-yyyyyy from VM VM-zzzzzz; com.cloud.exception.InternalErrorException: Could not detach volume. Probably the VM is in boot state at the moment'
* Update PR7003: rename method
---------
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Console access enhancements
* Remove extra logging
* Fix security hotspot
* Fix sonar cloud code smells
* Refactor API response
* Minor fix
* Refactor and increase timeout on ssh to cpvm
* Add marvin tests and extend permissions
* Fix account type
* Add unit tests
* Check vncport file exists on CPVM before attempting to add rules
* Change how vncport is read on cpvm
* Extra validation refactor
* Fix wrong token API param on UI
* Refactor vnc port selection to 8080 or 8443
* Do not display the input token modal and improve error message on console
* Improve error message and prevent opening blank popup when errors
* Fix logging exception due to algorithm
This PR introduces a volume encryption option for service offerings and disk offerings. Fixes #136
There is a hypervisor component and a storage pool component. Hypervisors are responsible for running/using the encrypted volumes; storage pools are responsible for creating, copying, resizing, etc. Hypervisors report encryption support in their details; storage pools are marked for encryption support by pool type.
The initial offering for experimental release of this feature will have support for encryption on Local, NFS, SharedMountPoint, and ScaleIO storage types.
When volumes that use an encrypted offering are allocated to a pool, the pool type must be capable of supporting encryption, and this is enforced.
When VMs are started and they have an encrypted volume, the hypervisor must be capable of supporting encryption. Also, if volumes are attached to running VMs, the attach will only work if the hypervisor supports encryption.
This change includes a few other minor changes - for example, the ability to force the KVM hypervisor's private IP. This was necessary in my testing of ScaleIO, where the KVM hypervisors had multiple IPs and the ScaleIO storage only functions if the IP the hypervisor uses as a ScaleIO client matches what CloudStack sees as the hypervisor IP.
For experimental release of this feature, some volume workflows like extract volume and migrate volume aren't supported for encrypted volumes. In the future we could support these, as well as migrating from unencrypted to encrypted offerings, and vice versa.
It may also be possible to configure encryption specifics in the future, perhaps at the pool level or the offering level. Currently, there is only one workable encryption offering for KVM that is supported by Libvirt and Qemu for raw and qcow2 disk files, LUKS version 1. This PR ensures we at least store this encryption format associated with each volume, with the expectation that later we may have LUKS v2 volumes or something else. Thus we will have the information necessary to use each volume with Libvirt if/when other formats are introduced.
I think the most disruptive change here is probably a refactoring of the QemuImg utility to support newer flags like --object. I've tested the change against the basic Qemu 1.5.3 that comes with EL7 and I believe it is good, but it will be nice to see the results of some functional tests. Most of the other changes are limited to changing behavior only if volume encryption is requested.
Working on documentation for the CloudStack docs. One thing to note is that hypervisors running the stock EL7 version of Qemu will not support encryption. This is detected and reported properly via the CloudStack API/UI. I intend to include a support matrix in the CloudStack docs.
I may add a few more unit tests. I'd also like some guidance on having functional tests. I'm not sure if there's a separate framework, or if Marvin is still used, or what the current thing is.
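As an illustration of the QemuImg refactoring mentioned above, here is a minimal sketch of how the underlying qemu-img create invocation for a LUKS-encrypted qcow2 can be assembled, passing the key through a --object secret backed by a key file instead of putting it on the command line. The class, method names, paths and size are illustrative assumptions, not the CloudStack QemuImg API itself:
```
import java.util.ArrayList;
import java.util.List;

public class QemuImgCreateExample {

    /**
     * Builds a qemu-img invocation for a LUKS-encrypted qcow2 volume. The passphrase
     * is exposed to qemu-img through a --object secret backed by a key file, never on
     * the command line itself. Paths and the size are illustrative placeholders.
     */
    static List<String> buildCreateCommand(String volumePath, String keyFilePath, long sizeInBytes) {
        List<String> cmd = new ArrayList<>();
        cmd.add("qemu-img");
        cmd.add("create");
        cmd.add("-f");
        cmd.add("qcow2");
        cmd.add("--object");
        cmd.add("secret,id=sec0,file=" + keyFilePath);          // key material read from a file
        cmd.add("-o");
        cmd.add("encrypt.format=luks,encrypt.key-secret=sec0"); // LUKS header keyed by the secret object
        cmd.add(volumePath);
        cmd.add(String.valueOf(sizeInBytes));
        return cmd;
    }

    public static void main(String[] args) throws Exception {
        List<String> cmd = buildCreateCommand("/var/lib/libvirt/images/encrypted.qcow2",
                "/run/cloudstack/vol.key", 10L * 1024 * 1024 * 1024);
        System.out.println(String.join(" ", cmd));
        // new ProcessBuilder(cmd).inheritIO().start().waitFor(); // requires a qemu-img that understands --object
    }
}
```
This form needs a qemu-img new enough to understand --object and the encrypt.* qcow2 options, which is why the stock EL7 Qemu mentioned above reports no encryption support.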
* Add Qemu object flag to QemuImg create
* Add apache license header to new files
* Add Qemu object flag to QemuImg convert
* Set host details if hypervisor supports LUKS
* Add disk encrypt flag to APIs, diskoffering
* Schema upgrade 4.16.0.0 to 4.16.1.0 to support vol encryption
* Add Libvirt secret on disk attach, and refer to it in disk XML
* Add implementation of luks volume encryption to QCOW2 and RAW disk prep
* Start VMs that have encrypted volumes
* Add encrypt option to service offering and root volume provisioning
* Refactor volume passphrase into its own table and object
* CryptSetup, use key files to pass keys instead of command line
* Update storage types and allocators to select encryption support
* Allow agent.properties to define the hypervisor's private IP
* Implement createPhysicalDisk for ScaleIOStorageAdaptor
* UI: Add encrypt options to offerings
* UI module security updates
* Revert "UI module security updates" - belongs in base
This reverts commit a7cb7cf7f57aad38f0b5e5d67389c187b88ffd94.
* Add --target-is-zero support for QemuImg
* Allow qemu image options to be passed, API support convert encrypted
* Switch hypervisor encryption support detection to use KeyFiles
* Fixes for ScaleIO root disk encryption
* Resize root disk if it won't fit encryption header
* Use cryptsetup to prep raw root disks, when supported
* Create qcow2 formatting if necessary during initial template copy to ScaleIO
* Allow setting no cache for qemu-img during disk convert
* Use 1M sparse on qemu-img convert for zero target disks
* UI: Add volume encryption support to hypervisor details
* QemuImg use --image-opts and --object depending on version
* Only send storage commands that require encryption to hosts that support encryption
* Move host encryption detail to a static constant
* Update host selection to account for volume encryption support
Only attach volumes if encryption requirements are met
* Ensure resizeVolume won't allow changing encryption
* Catch edge cases for clearing passphrase when volume is removed
* Disable volume migration and extraction for encrypted volumes
* Register volume secret on destination host during live migration
* Fix configdrive path editing during live migration
* Ensure configdrive path is edited properly during live migration
* Pass along and store volume encryption format during creation
* Fixes for rebase
* Fix tests after rebase
* Add unit tests for DeploymentPlanningManagerImpl to support encryption
* Deployment planner tests for encryption support on last host
* Add deployment tests for encryption when calling planner
* Added Libvirt DiskDef test for encryption details
* Add test for KeyFile utility
* Add CryptSetup tests
* Add QemuImageOptionsTest
* add smoke tests for API level changes on create/list offerings
* Fix schema upgrade, do disk_offering_view first
* Fix UI to show hypervisor encryption support
* Load details into hostVO before trying to query them for encryption
* Remove whitespace in CreateNetworkOfferingTest
* Move QemuImageOptions to use constants for flag keys
* Set physical disk encrypt format during createDiskFromTemplate in KVM Agent
* Whitespace in AbstractStoragePoolAllocator
* Fix whitespace in VolumeDaoImpl
* Support old Qemu in convert
* Log how long it takes to generate a passphrase during volume creation
* Move passphrase generation to async portion of createVolume
* Revert "Allow agent.properties to define the hypervisor's private IP"
This reverts commit 6ea9377505f0e5ff9839156771a241aaa1925e70.
* Updated ScaleIO/PowerFlex storage plugin to support separate (storage) network for Host(KVM) SDC connection. (#144)
* Added smoke tests for volume encryption (in KVM). (#149)
* Updated ScaleIO pool unit tests.
* Some improvements/fixes for code smells (in ScaleIO storage plugin).
* Updated review changes for ScaleIO improvements.
* Updated host response parameter 'encryptionsupported' in the UI.
* Move passphrase generation for the volume to async portion, while deploying VM (#158)
* Move passphrase generation for the volume to async portion, while deploying VM.
* Updated logs, to include volume details.
* Fix schema upgrade, create passphrase table first
* Fixed the DB upgrade issue (as noticed in the logs below.)
DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:) CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id')
ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:) Error executing: CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id')
ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:) java.sql.SQLException: Failed to open the referenced table 'passphrase'
ERROR [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) Unable to execute upgrade script
* Fixes for snapshots with encrypted qcow2
Fixes #159 #160 #163
* Support create/delete encrypted snapshots of encrypted qcow2 volumes
* Select endpoints that support encryption when snapshotting encrypted volumes
* Update revert snapshot to be compatible with encrypted snapshots
* Disallow volume and template create from encrypted vols/snapshots
* Disallow VM memory snapshots on encrypted vols. Fixes #157
* Fix for TemplateManagerImpl unit test failure
* Support offline resize of encrypted volumes. Fixes #168
* Fix for resize volume unit tests
* Updated libvirt resize volume unit tests
* Support volume encryption on kvm only, and passphrase generation refactor (#169)
* Fail deploy VM when ROOT/DATA volume's offering has encryption enabled, on non-KVM hypervisors
* Fail attach volume when volume's offering has encryption enabled, on non-KVM hypervisors
* Refactor passphrase generation for volume
* Apply encryption to dest volume for live local storage migration
fixes #161
* Apply encryption to data volumes during live storage migration
Fixes #161
* Use the same encryption passphrase id for migrating volumes
* Pass secret consumer during storage migration prepare
Fix for #161
* Fixes create / delete volume snapshot issue, for stopped VMs
* Block volume snapshot if encrypted and VM is running
Fixes #159
* Block snap schedules on encrypted volumes
Fix for #159
* Support cryptsetup where luks type defaults to 2
Fixes #170
* Modify domain XML secret UUID when storage migrating VM
Fix for #172
* Remove any libvirt secrets on VM stop and post migration
Fix for #172
* Update disk profile with encryption requirement from the disk offering (#176)
Update disk profile with encryption requirement from the disk offering
and some code improvements
* Updated review changes / javadoc in ScaleIOUtil
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Add resource ID and resource type to event.
In UI, adds Events tab in resource view for the supporting resources.
The following SQL changes are needed to support events with resource details in the DB:
```
-- Alter event table to add resource_id and resource_type
ALTER TABLE `cloud`.`event`
ADD COLUMN `resource_id` bigint unsigned COMMENT 'ID of the resource associated with the event' AFTER `domain_id`,
ADD COLUMN `resource_type` varchar(32) COMMENT 'type of the resource associated with the event' AFTER `resource_id`;
DROP VIEW IF EXISTS `cloud`.`event_view`;
CREATE VIEW `cloud`.`event_view` AS
SELECT
event.id,
event.uuid,
event.type,
event.state,
event.description,
event.resource_id,
event.resource_type,
event.created,
event.level,
event.parameters,
event.start_id,
eve.uuid start_uuid,
event.user_id,
event.archived,
event.display,
user.username user_name,
account.id account_id,
account.uuid account_uuid,
account.account_name account_name,
account.type account_type,
domain.id domain_id,
domain.uuid domain_uuid,
domain.name domain_name,
domain.path domain_path,
projects.id project_id,
projects.uuid project_uuid,
projects.name project_name
FROM
`cloud`.`event`
INNER JOIN
`cloud`.`account` ON event.account_id = account.id
INNER JOIN
`cloud`.`domain` ON event.domain_id = domain.id
INNER JOIN
`cloud`.`user` ON event.user_id = user.id
LEFT JOIN
`cloud`.`projects` ON projects.project_account_id = event.account_id
LEFT JOIN
`cloud`.`event` eve ON event.start_id = eve.id;
```
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* This PR/commit comprises the following:
- Support to fallback on the older systemVM template in case of no change in template across ACS versions
- Update core user to cloud in CKS
- Display details of accessing CKS nodes in the UI - K8s Access tab
- Update systemvm template from debian 11 to debian 11.2
- Update letsencrypt cert
- Remove docker dependency as from ACS 4.16 onward k8s has deprecated support for docker - use containerd as container runtime
* support for private registry - containerd
* Enable updating template type (only) for system owned templates via UI
* edit indents
* Address comments and move cmd from patch file to cloud-init runcmd
* temporary change
* update k8s test to use k8s version 1.21.5 (instead of 1.21.3 - due to https://github.com/kubernetes/kubernetes/pull/104530)
* support for private registry - containerd
* Enable updating template type (only) for system owned templates via UI
* smooth upgrade of cks clusters
* update pom file with temp download.cloudstack.org testing links
* fix pom
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* update to official links
* maven: migrate short-term to reload4j v1.2.18
This migrates to the log4j 1.x fork, reload4j 1.2.18.0, which is a drop-in
replacement and addresses some immediate CVEs and issues.
* log4j migration to reload4j in pom xmls
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Exclude log4j from transitive dependencies (#73)
Co-authored-by: Marcus Sorensen <shadowsor@gmail.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
* Allow templates & ISOs from IPv6 host address.
* fix checkstyle issue
* Allow only direct download templates from IPv6 address
Co-authored-by: gabriel <gabriel@apache.org>
* Add commons-lang3 to Utils
* Create a util to provide methods that ReflectionToStringBuilder does not have yet
* Create method to retrieve map of tags from resource
* Enable tests on volume components and remove useless tests
* Refactor VolumeObject and add unit tests
* Extract createPolicy in several methods
* Create method to copy policies between volumes and add unit tests
* Copy policies to new volume before removing old volume on volume migration
* Extract "destroySourceVolumeAfterMigration" to a method and test it
* Remove javadoc @param with no sensible information
* Rename method name to a generic name
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Create utility to centralize byte conversions
* Add/change toString definitions
* Create Libvirt handler to ScaleVmCommand
* Enable dynamic scaling of VMs with KVM
* Move config from interface to class and rename it
As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits
* Configure VM max memory and cpu cores
The values are set according to the service offering or global configs (a sketch of the resulting domain XML fragment follows this commit list)
* Extract dpdk configuration to a method and test it
* Extract OS desc config to a method and test it
* Extract guest resource def to a method and test it
Improve libvirt def
* Refactor LibvirtVMDef.GuestResourceDef
* Refactor ScaleVmCommand
* Improve VMInstanceVO toString()
* Refactor upgradeRunningVirtualMachine method
* Turn int variables into long on utility
* Verify if VM is scalable on KVMGuru
* Rename some KVMGuruTest's methods
* Change vm's xml to work with max memory
* Verify if service offering is dynamic before scale
* Create methods to retrieve data from domain
* Create def to hotplug memory
* Adjust the way command was scaling the VM
* Fix database persistence before executing command
* Send more info to host to improve log
* Fix var name
* Fix missing "}"
* Undo unnecessary changes
* Address review
* Fix scale validation
* Add VM prepared for dynamic scaling validation
* Refactor LibvirtScaleVmCommandWrapper and improve unit tests
* Remove duplicated method
* Add RuntimeException check
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Update ByteScaleUtilsTest.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
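A minimal sketch of the memory/vcpu fragment of the libvirt domain XML that dynamic scaling relies on (values, the slot count, and the string-building style are illustrative assumptions; memory hotplug in libvirt additionally requires a guest NUMA topology):
```
public class GuestResourceXmlExample {

    /**
     * Emits the memory/vcpu portion of a libvirt domain XML for a dynamically scalable guest.
     * The maximums come from the offering's scaling limits (or global configs), while
     * currentMemory and the vcpu "current" attribute reflect what the VM runs with right now.
     */
    static String toGuestResourceXml(long maxMemoryKiB, long currentMemoryKiB, int maxVcpus, int currentVcpus) {
        StringBuilder xml = new StringBuilder();
        xml.append("<maxMemory slots='16' unit='KiB'>").append(maxMemoryKiB).append("</maxMemory>\n");
        xml.append("<memory unit='KiB'>").append(maxMemoryKiB).append("</memory>\n");
        xml.append("<currentMemory unit='KiB'>").append(currentMemoryKiB).append("</currentMemory>\n");
        xml.append("<vcpu current='").append(currentVcpus).append("'>").append(maxVcpus).append("</vcpu>\n");
        return xml.toString();
    }

    public static void main(String[] args) {
        // e.g. offering allows scaling up to 32 GiB / 8 vCPUs, VM currently runs with 8 GiB / 2 vCPUs
        System.out.print(toGuestResourceXml(32L * 1024 * 1024, 8L * 1024 * 1024, 8, 2));
    }
}
```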
* Add mail dependencies
* Create util to send SMTP mail
* Add unit tests to SMTP mail sender
* Use SMTP mail util on quota alert
* Use SMTP mail util on alert
* Use SMTP mail util on project
* Use SMTP mail util on usage alert
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* using forked version of trilead-ssh2 (from org.jenkins-ci)
- upgrade to support newer algorithms
* Update latest jar release
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit@apache.org>
* Break loop if no exception on http request
* Add new tests ensuring the correct execution flow of the RedfishClient retry
* Log retry as "retry attempt %d/%d"
* Fix string.format parameters order at RedfishClient.retryHttpRequest
* 4.15:
ui: Consider overprovisioning factor when displaying allocated progress bar (#4850)
ui: Fix the styles action button (#4856)
ui: Fill out the search filter form field after performing a filter (#4855)
ui: fix add cluster form for vmware (#4841)
ui: Fix add primary store during Zone Deployment for PreSetup protocol (#4845)
tests: Extend wait time after interrupt (#4815)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore
- Check the router filesystem is writable or not, before performing health checks
=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not
- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)
* Addressed some issues for few operations on PowerFlex storage pool.
- Updated migration volume operation to sync the status and wait for migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
and Minor improvements in PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
It is not common, but HTTP requests can fail due to connection issues. In order to mitigate such situations and also improve logging, this PR enhances the Redfish request handling by adding an execution flow for retrying HTTP requests; the retry happens only if the global setting redfish.retries is set to 1 or more retries; the default is 2 (two). One can disable the retries by setting redfish.retries to 0 (zero).
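A minimal sketch of that retry flow, using java.net.http as a stand-in for the actual RedfishClient HTTP layer (the class, method name, URL, and log wording are illustrative; only the redfish.retries semantics come from the description above):
```
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RetryExample {

    /**
     * Sends the request up to (1 + retries) times. The loop breaks as soon as a request
     * completes without throwing; if every attempt fails, the last exception is rethrown.
     */
    static HttpResponse<String> sendWithRetries(HttpClient client, HttpRequest request, int retries)
            throws Exception {
        Exception lastError = null;
        for (int attempt = 0; attempt <= retries; attempt++) {
            try {
                return client.send(request, HttpResponse.BodyHandlers.ofString());
            } catch (Exception e) {
                lastError = e;
                if (attempt < retries) {
                    System.out.printf("Redfish request failed, retry attempt %d/%d%n", attempt + 1, retries);
                }
            }
        }
        throw lastError;
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder BMC endpoint; any reachable Redfish URL would do.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://bmc.example/redfish/v1/Systems"))
                .GET().build();
        HttpResponse<String> response = sendWithRetries(client, request, 2);
        System.out.println(response.statusCode());
    }
}
```
With retries set to 0 the loop runs exactly once, matching the "disable retries" behaviour described above.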
This is an extension of #3732 for KVM.
This is restricted to OVS > 2.9.2.
Since Xen uses OVS 2.6, pvlan is unsupported there.
This also fixes an issue where VMs on the same pvlan were unable to communicate if they are on the same host.
* DB : Add support for MySQL 8
- Splits the commands to create a user and grant access on the database; the old
combined statement is no longer supported by MySQL 8.x
- `NO_AUTO_CREATE_USER` is no longer supported by MySQL 8.x, so remove
it from the db.properties connection parameters
For a mysql-server 8.x setup, the following changes were added/tested in
/etc/mysql/mysql.conf.d/mysqld.cnf to make it work with CloudStack; restart
the mysql-server process after applying them:
server_id = 1
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,NO_ENGINE_SUBSTITUTION"
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=1000
log-bin=mysql-bin
binlog-format = 'ROW'
default-authentication-plugin=mysql_native_password
Notice the last line above: it restores the old password-based
authentication plugin used by MySQL 5.x.
Developers can set an empty password as follows:
> sudo mysql -u root
ALTER USER 'root'@'localhost' IDENTIFIED BY '';
In the libvirt repository, there are two related commits:
2019-08-23 13:13 Daniel P. Berrangé ● rpm: don't enable socket activation in upgrade if --listen present
2019-08-22 14:52 Daniel P. Berrangé ● remote: forbid the --listen arg when systemd socket activation
In libvirt.spec.in
/bin/systemctl mask libvirtd.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-ro.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-admin.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-tls.socket >/dev/null 2>&1 || :
/bin/systemctl mask libvirtd-tcp.socket >/dev/null 2>&1 || :
Co-authored-by: Wei Zhou <w.zhou@global.leaseweb.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR adds output of human-readable byte sizes to the management server logs, agent logs, and usage records. A non-dynamic global setting (display.human.readable.sizes) is added to switch this feature on and off. This setting is sent to the agent on connection and is only read from the database when the management server starts up. The setting is kept in memory as a static field on the NumbersUtil class and is available throughout the codebase.
Instead of seeing things like:
2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"106496","bytesReceived":"0","result":"true","details":"","wait":"0",}}] }
The KB, MB, and GB values will be printed out:
2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"(104.00 KB) 106496","bytesReceived":"(0 bytes) 0","result":"true","details":"","wait":"0",}}] }
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Human+Readable+Byte+sizes
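A minimal sketch of the conversion behind the log line above (the real helper lives on NumbersUtil per the description; the class name, toggle handling, and exact formatting here are illustrative assumptions):
```
public class HumanReadableSizes {

    // Mirrors the display.human.readable.sizes toggle kept as a static field in the real code.
    static volatile boolean enableHumanReadableSizes = true;

    /** Formats a byte count as "(104.00 KB) 106496"-style output when the toggle is on. */
    static String toHumanReadableSize(long bytes) {
        if (!enableHumanReadableSizes) {
            return String.valueOf(bytes);
        }
        final long kb = 1024, mb = kb * 1024, gb = mb * 1024;
        String readable;
        if (bytes >= gb) {
            readable = String.format("%.2f GB", bytes / (double) gb);
        } else if (bytes >= mb) {
            readable = String.format("%.2f MB", bytes / (double) mb);
        } else if (bytes >= kb) {
            readable = String.format("%.2f KB", bytes / (double) kb);
        } else {
            readable = bytes + " bytes";
        }
        return String.format("(%s) %d", readable, bytes);
    }

    public static void main(String[] args) {
        System.out.println(toHumanReadableSize(106496)); // prints "(104.00 KB) 106496"
        System.out.println(toHumanReadableSize(0));      // prints "(0 bytes) 0"
    }
}
```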
This PR adds support for the OOBM Redfish protocol, implementing a Java client to send HTTP requests to Redfish supported systems.
Implementation overview:
- Redfish Java client: a Java Client for Redfish that makes Redfish actions available to the HA workflow via an OOB driver.
- OOB Redfish driver: a new Out-of-band driver was created for Redfish, allowing to integrate the Redfish Client with the CloudStack Out-of-band management implementation.
Fixes: #3624
Currently CloudStack uses multiple logging frameworks, such as log4j and java.util.logging, and logging wrappers, such as slf4j and Apache Commons Logging.
The changes here make the logging uniform, using only the log4j framework.
Removed java.util.logging, slf4j and Apache Commons Logging.
* db.properties: Enforce UTC timezone by default
This gives users the ability to change the timezone
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix server time to UTC
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Update the db.usage.url.params=serverTimezone=UTC per Liridon's testing
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>