* Console access enhancements
* Remove extra logging
* Fix security hotspot
* Fix sonar cloud code smells
* Refactor API response
* Minor fix
* Refactor and increase timeout on ssh to cpvm
* Add marvin tests and extend permissions
* Fix account type
* Add unit tests
* Check vncport file exists on CPVM before attempting to add rules
* Change how vncport is read on cpvm
* Extra validation refactor
* Fix wrong token API param on UI
* Refactor vnc port selection to 8080 or 8443
* Do not display the input token modal and improve error message on console
* Improve error message and prevent opening a blank popup on errors
* Fix logging exception due to algorithm
* Use cryptsetup w/o zeroing for encrypted scaleio - faster
Signed-off-by: Marcus Sorensen <mls@apple.com>
* Pass storage scope during KVM volume migration to avoid remotely mounting local storage
Signed-off-by: Marcus Sorensen <mls@apple.com>
* Add method to choose template pool based on scope
Signed-off-by: Marcus Sorensen <mls@apple.com>
* Clean up null check when creating migration options
Signed-off-by: Marcus Sorensen <mls@apple.com>
* ScaleIO enhancements - thin/thick encrypted, online resize
Signed-off-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
This PR introduces a volume encryption option for service offerings and disk offerings. Fixes #136
There is a hypervisor component and a storage pool component. Hypervisors are responsible for being capable of running/using the encrypted volumes; storage pools are responsible for being able to create, copy, resize, etc. Hypervisors report encryption support in their details; storage pools are marked for encryption support by pool type.
The initial offering for experimental release of this feature will have support for encryption on Local, NFS, SharedMountPoint, and ScaleIO storage types.
When volumes with an encrypted offering are allocated to a pool, the pool type must be capable of supporting encryption, and this is enforced.
When VMs are started and they have an encrypted volume, the hypervisor must be capable of supporting encryption. Also, if volumes are attached to running VMs, the attach will only work if the hypervisor supports encryption.
This change includes a few other minor changes - for example, the ability to force the KVM hypervisor private IP. This was necessary in my testing of ScaleIO, where the KVM hypervisors had multiple IPs and the ScaleIO storage only functions if the hypervisor's ScaleIO client IP matches what CloudStack sees as the hypervisor IP.
For experimental release of this feature, some volume workflows like extract volume and migrate volume aren't supported for encrypted volumes. In the future we could support these, as well as migrating from unencrypted to encrypted offerings, and vice versa.
It may also be possible to configure encryption specifics in the future, perhaps at the pool level or the offering level. Currently there is only one workable encryption format for KVM that is supported by Libvirt and Qemu for raw and qcow2 disk files: LUKS version 1. This PR ensures we at least store this encryption format associated with each volume, with the expectation that later we may have LUKS v2 volumes or something else. Thus we will have the information necessary to use each volume with Libvirt if/when other formats are introduced.
I think the most disruptive change here is probably a refactoring of the QemuImg utility to support newer flags like --object. I've tested the change against the basic Qemu 1.5.3 that comes with EL7 and I believe it is good, but it will be nice to see the results of some functional tests. Most of the other changes are limited to changing behavior only if volume encryption is requested.
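For context, here is a minimal sketch of the kind of qemu-img invocation the refactored QemuImg needs to be able to build, using --object to register a secret that the LUKS options reference. Paths, size, and key file are hypothetical:
```
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class QemuImgLuksExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical paths; the key file holds the raw passphrase.
        List<String> cmd = Arrays.asList(
                "qemu-img", "create", "-f", "qcow2",
                // --object registers a named secret that the encryption options reference
                "--object", "secret,id=sec0,file=/tmp/vol.key",
                "-o", "encrypt.format=luks,encrypt.key-secret=sec0",
                "/var/lib/libvirt/images/vol.qcow2", "8G");
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        System.exit(p.waitFor());
    }
}
```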
I am working on documentation for the CloudStack docs. One thing to note is that hypervisors that run the stock EL7 version of Qemu will not support encryption. This is tested to be detected and reported properly via the CloudStack API/UI. I intend to have a support matrix in the CloudStack docs.
I may add a few more unit tests. I'd also like some guidance on having functional tests. I'm not sure if there's a separate framework, or if Marvin is still used, or what the current thing is.
* Add Qemu object flag to QemuImg create
* Add apache license header to new files
* Add Qemu object flag to QemuImg convert
* Set host details if hypervisor supports LUKS
* Add disk encrypt flag to APIs, diskoffering
* Schema upgrade 4.16.0.0 to 4.16.1.0 to support vol encryption
* Add Libvirt secret on disk attach, and refer to it in disk XML
* Add implementation of luks volume encryption to QCOW2 and RAW disk prep
* Start VMs that have encrypted volumes
* Add encrypt option to service offering and root volume provisioning
* Refactor volume passphrase into its own table and object
* CryptSetup, use key files to pass keys instead of command line
* Update storage types and allocators to select encryption support
* Allow agent.properties to define the hypervisor's private IP
* Implement createPhysicalDisk for ScaleIOStorageAdaptor
* UI: Add encrypt options to offerings
* UI module security updates
* Revert "UI module security updates" - belongs in base
This reverts commit a7cb7cf7f57aad38f0b5e5d67389c187b88ffd94.
* Add --target-is-zero support for QemuImg
* Allow qemu image options to be passed, API support convert encrypted
* Switch hypervisor encryption support detection to use KeyFiles
* Fixes for ScaleIO root disk encryption
* Resize root disk if it won't fit encryption header
* Use cryptsetup to prep raw root disks, when supported
* Create qcow2 formatting if necessary during initial template copy to ScaleIO
* Allow setting no cache for qemu-img during disk convert
* Use 1M sparse on qemu-img convert for zero target disks
* UI: Add volume encryption support to hypervisor details
* QemuImg use --image-opts and --object depending on version
* Only send storage commands that require encryption to hosts that support encryption
* Move host encryption detail to a static constant
* Update host selection to account for volume encryption support
Only attach volumes if encryption requirements are met
* Ensure resizeVolume won't allow changing encryption
* Catch edge cases for clearing passphrase when volume is removed
* Disable volume migration and extraction for encrypted volumes
* Register volume secret on destination host during live migration
* Fix configdrive path editing during live migration
* Ensure configdrive path is edited properly during live migration
* Pass along and store volume encryption format during creation
* Fixes for rebase
* Fix tests after rebase
* Add unit tests for DeploymentPlanningManagerImpl to support encryption
* Deployment planner tests for encryption support on last host
* Add deployment tests for encryption when calling planner
* Added Libvirt DiskDef test for encryption details
* Add test for KeyFile utility
* Add CryptSetup tests
* Add QemuImageOptionsTest
* add smoke tests for API level changes on create/list offerings
* Fix schema upgrade, do disk_offering_view first
* Fix UI to show hypervisor encryption support
* Load details into hostVO before trying to query them for encryption
* Remove whitespace in CreateNetworkOfferingTest
* Move QemuImageOptions to use constants for flag keys
* Set physical disk encrypt format during createDiskFromTemplate in KVM Agent
* Whitespace in AbstractStoragePoolAllocator
* Fix whitespace in VolumeDaoImpl
* Support old Qemu in convert
* Log how long it takes to generate a passphrase during volume creation
* Move passphrase generation to async portion of createVolume
* Revert "Allow agent.properties to define the hypervisor's private IP"
This reverts commit 6ea9377505f0e5ff9839156771a241aaa1925e70.
* Updated ScaleIO/PowerFlex storage plugin to support a separate (storage) network for Host (KVM) SDC connection. (#144)
* Added smoke tests for volume encryption (in KVM). (#149)
* Updated ScaleIO pool unit tests.
* Some improvements/fixes for code smells (in ScaleIO storage plugin).
* Updated review changes for ScaleIO improvements.
* Updated host response parameter 'encryptionsupported' in the UI.
* Move passphrase generation for the volume to async portion, while deploying VM (#158)
* Move passphrase generation for the volume to async portion, while deploying VM.
* Updated logs, to include volume details.
* Fix schema upgrade, create passphrase table first
* Fixed the DB upgrade issue (as noticed in the logs below).
```
DEBUG [c.c.u.d.ScriptRunner] (main:null) (logid:) CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id')
ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:) Error executing: CALL `cloud`.`IDEMPOTENT_ADD_FOREIGN_KEY`('cloud.volumes', 'passphrase', 'id')
ERROR [c.c.u.d.ScriptRunner] (main:null) (logid:) java.sql.SQLException: Failed to open the referenced table 'passphrase'
ERROR [c.c.u.DatabaseUpgradeChecker] (main:null) (logid:) Unable to execute upgrade script
```
* Fixes for snapshots with encrypted qcow2
Fixes #159 #160 #163
* Support create/delete encrypted snapshots of encrypted qcow2 volumes
* Select endpoints that support encryption when snapshotting encrypted volumes
* Update revert snapshot to be compatible with encrypted snapshots
* Disallow volume and template create from encrypted vols/snapshots
* Disallow VM memory snapshots on encrypted vols. Fixes #157
* Fix for TemplateManagerImpl unit test failure
* Support offline resize of encrypted volumes. Fixes #168
* Fix for resize volume unit tests
* Updated libvirt resize volume unit tests
* Support volume encryption on kvm only, and passphrase generation refactor (#169)
* Fail deploy VM when ROOT/DATA volume's offering has encryption enabled, on non-KVM hypervisors
* Fail attach volume when volume's offering has encryption enabled, on non-KVM hypervisors
* Refactor passphrase generation for volume
* Apply encryption to dest volume for live local storage migration
Fixes #161
* Apply encryption to data volumes during live storage migration
Fixes #161
* Use the same encryption passphrase id for migrating volumes
* Pass secret consumer during storage migration prepare
Fix for #161
* Fixes create / delete volume snapshot issue, for stopped VMs
* Block volume snapshot if encrypted and VM is running
Fixes #159
* Block snap schedules on encrypted volumes
Fix for #159
* Support cryptsetup where luks type defaults to 2
Fixes #170
* Modify domain XML secret UUID when storage migrating VM
Fix for #172
* Remove any libvirt secrets on VM stop and post migration
Fix for #172
* Update disk profile with encryption requirement from the disk offering (#176)
Update disk profile with encryption requirement from the disk offering
and some code improvements
* Updated review changes / javadoc in ScaleIOUtil
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
* Extract the IO_URING configuration into the agent.properties (#6253)
When using advanced virtualization, the IO driver is not supported. The
admin decides whether to enable or disable this configuration in the
agent.properties file. The default value is true.
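A rough sketch of reading the toggle, assuming the property key is enable.io.uring (the exact key name may differ from the final implementation):
```
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class AgentIoUringToggle {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("/etc/cloudstack/agent/agent.properties")) {
            props.load(in);
        }
        // Key name is an assumption for illustration; the default is true per the change above.
        boolean enabled = Boolean.parseBoolean(props.getProperty("enable.io.uring", "true"));
        System.out.println("io_uring enabled: " + enabled);
    }
}
```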
* kvm: truncate vnc password to 8 chars (#6244)
This PR truncates the VNC password of KVM VMs to 8 chars to support the latest versions of libvirt.
* merge fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* [KVM] Enable IOURING only when it is available on the host (#6399)
* [KVM] Disable IOURING by default on agents
* Refactor
* Remove agent property for iouring
* Restore property
* Refactor suse check and enable on ubuntu by default
* Refactor irrespective of guest OS
* Improvement
* Logs and new path
* Refactor condition to enable iouring
* Improve condition
* Refactor property check
* Improvement
* Doc comment
* Extend comment
* Move method
* Add log
* [KVM] Fix VM migration error due to VNC password on libvirt limiting versions (#6404)
* [KVM] Fix VM migration error due to VNC password on libvirt limiting versions
* Fix passwd value
* Simplify implementation
Co-authored-by: slavkap <51903378+slavkap@users.noreply.github.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
* Use base clock when detecting host CPU speed from file, to match lscpu
Allow for manually setting the CPU speed via agent.properties if all else fails
Signed-off-by: Marcus Sorensen <mls@apple.com>
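One way to read the base clock the way lscpu derives it, from the "model name" line in /proc/cpuinfo; this is an illustrative sketch, not necessarily the agent's exact logic:
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CpuBaseClock {
    // Matches the base clock embedded in the model name, e.g. "... CPU @ 2.60GHz".
    private static final Pattern BASE_CLOCK = Pattern.compile("@\\s*([0-9.]+)GHz");

    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/cpuinfo"))) {
            if (line.startsWith("model name")) {
                Matcher m = BASE_CLOCK.matcher(line);
                if (m.find()) {
                    long mhz = (long) (Double.parseDouble(m.group(1)) * 1000);
                    System.out.println("base clock: " + mhz + " MHz");
                }
                break;
            }
        }
    }
}
```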
* Update agent/conf/agent.properties
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
* kvm: don't force scsi controller for aarch64 VMs
This would allow use of the virtio disk controller with Ceph, etc., or as
defined in the VM's root disk controller setting, rather than always
enforcing SCSI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* remove test that doesn't apply now
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* address review comment
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* KVM: Add VM settings for virtual GPU hardware type and memory
* fix method createVideoDef argument in test package
* add available options for KVM virtual GPU hardware VM setting
* fix videoRam default value
* fix: when _videoRam is 0, use the default provided by libvirt
This adds a volume (primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes. Snapshots should be possible,
but they currently don't work for RAW volume types in CloudStack.
* plugin-storage-volume-linstor: notify libvirt guests about the resize
* Create utility to centralize byte conversions
* Add/change toString definitions
* Create Libvirt handler to ScaleVmCommand
* Enable dynamic scaling of VMs with KVM
* Move config from interface to class and rename it
As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits
* Configure VM max memory and cpu cores
The values are according to service offering or global configs
* Extract dpdk configuration to a method and test it
* Extract OS desc config to a method and test it
* Extract guest resource def to a method and test it
Improve libvirt def
* Refactor LibvirtVMDef.GuestResourceDef
* Refactor ScaleVmCommand
* Improve VMInstanceVO toString()
* Refactor upgradeRunningVirtualMachine method
* Turn int variables into long on utility
* Verify if VM is scalable on KVMGuru
* Rename some KVMGuruTest's methods
* Change vm's xml to work with max memory
* Verify if service offering is dynamic before scale
* Create methods to retrieve data from domain
* Create def to hotplug memory
* Adjust the way command was scaling the VM
* Fix database persistence before executing command
* Send more info to host to improve log
* Fix var name
* Fix missing "}"
* Undo unnecessary changes
* Address review
* Fix scale validation
* Add VM prepared for dynamic scaling validation
* Refactor LibvirtScaleVmCommandWrapper and improve unit tests
* Remove duplicated method
* Add RuntimeException check
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Update ByteScaleUtilsTest.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Refactor method createVMFromSpec
* Add unit tests
* Fix test
* Extract if block to a method that adds extra configs to the VM domain XML
* Split travis tests trying to isolate which test is causing an error
* Override toString() method
* Update documentation
* Fix checkstyle error (line with trailing spaces)
* Change VirtualMachineTO print of object
* Add try except to find message error. Remove after test
* Fix indent
* Trying to understand what is happening in this code
* Remove unnecessary comment
* Revert travis tests
Co-authored-by: SadiJr <17a0db2854@firemailbox.club>
Currently there is no disk IO driver configuration for VMs running on KVM. That's OK for most cases; however, some quite interesting optimizations have recently been added with the io_uring IO driver.
Note that IO URING requires:
Qemu >= 5.0, and
Libvirt >= 6.3.0.
By using io_uring we can see a massive I/O performance improvement within Virtual Machines running from Local and/or NFS storage.
This implementation enhances the KVM disk configuration by adding a workflow for setting the disk IO driver. Additionally, if the Qemu and Libvirt versions match what is required for io_uring, we set it on the VM. If there is no support for the driver, we keep the behavior as it is today, without any IO driver configured.
Fixes: #4883
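A minimal sketch of the version gate described above, assuming versions are encoded numerically the way libvirt reports them (major * 1,000,000 + minor * 1,000 + micro):
```
public class IoUringSupport {
    private static final long QEMU_MIN = 5_000_000;     // Qemu >= 5.0
    private static final long LIBVIRT_MIN = 6_003_000;  // Libvirt >= 6.3.0

    static boolean isIoUringSupported(long qemuVersion, long libvirtVersion) {
        return qemuVersion >= QEMU_MIN && libvirtVersion >= LIBVIRT_MIN;
    }

    public static void main(String[] args) {
        // e.g. Qemu 5.2.0 with Libvirt 6.6.0: the disk <driver> element gains io='io_uring'
        if (isIoUringSupported(5_002_000, 6_006_000)) {
            System.out.println("<driver name='qemu' type='qcow2' cache='none' io='io_uring'/>");
        }
    }
}
```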
This PR aims at introducing persistence mode in L2 networks and enhancing the behavior in Isolated networks
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore
- Check whether the router filesystem is writable before performing health checks (see the sketch after this list)
=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations
=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)
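The actual check ships as filesystem_writable_check.py; the Java sketch below only illustrates the probe idea (create and remove a file on each of the two partitions mentioned above):
```
import java.io.File;
import java.io.IOException;

public class WritableCheck {
    // Returns true if a file can be created and removed in the given directory.
    static boolean isWritable(String dir) {
        try {
            File probe = File.createTempFile("rw_probe", null, new File(dir));
            return probe.delete();
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("/var/cache/cloud writable: " + isWritable("/var/cache/cloud"));
        System.out.println("/root writable: " + isWritable("/root"));
    }
}
```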
* Addressed some issues for few operations on PowerFlex storage pool.
- Updated migration volume operation to sync the status and wait for migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
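A minimal sketch of the one-client-per-pool idea with the two expiry rules above; class and field names are illustrative, not the plugin's actual API:
```
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ScaleIOClientPool {
    static class GatewayClient {
        final Instant issuedAt = Instant.now();
        Instant lastUsed = Instant.now();

        boolean sessionExpired() {
            Instant now = Instant.now();
            // Token lives 8 hours from creation, or 10 minutes after the last activity.
            return Duration.between(issuedAt, now).toHours() >= 8
                    || Duration.between(lastUsed, now).toMinutes() >= 10;
        }
    }

    private final Map<Long, GatewayClient> clients = new ConcurrentHashMap<>();

    GatewayClient getClient(long storagePoolId) {
        // Renew the gateway client only when its session token has expired.
        GatewayClient client = clients.compute(storagePoolId, (id, existing) ->
                (existing == null || existing.sessionExpired()) ? new GatewayClient() : existing);
        client.lastUsed = Instant.now();
        return client;
    }
}
```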
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
and Minor improvements in PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
This fixes an issue introduced in c3554ec31d,
which enabled a block of code that double-escapes the rados host/monitor
port.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This is an extension of #3732 for KVM.
This is restricted to ovs > 2.9.2
Since Xen uses ovs 2.6, pvlan is unsupported.
This also fixes an issue where VMs on the same pvlan were unable to communicate if they were on the same host
Ceph used to use port 6789 (no need to specify it), but with the messenger v2
from Ceph it switched to port 3300, while 6789 still works.
librados/librbd/libvirt will automatically figure out the ports to use if none
is specified.
Therefore there is no need for CloudStack to explicitly define the port in the XML
passed to Libvirt or Qemu.
Leave it blank if no port number has been defined by the user.
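A small sketch of the resulting behavior when building the RBD <host> element; the method and names are illustrative:
```
public class RbdSourceHost {
    // Omit the port attribute entirely when the user did not define one,
    // so librados/librbd picks 3300/6789 on its own.
    static String hostElement(String monitor, String port) {
        String portAttr = (port == null || port.isEmpty()) ? "" : " port='" + port + "'";
        return "<host name='" + monitor + "'" + portAttr + "/>";
    }

    public static void main(String[] args) {
        System.out.println(hostElement("mon1.ceph.local", null));   // <host name='mon1.ceph.local'/>
        System.out.println(hostElement("mon1.ceph.local", "6789")); // explicit legacy port
    }
}
```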
* Remove constraint for NFS storage
* Add new property on agent.properties
* Add free disk space on the host prior template download
* Add unit tests for the free space check
* Fix free space check - retrieve available size in bytes
* Update default location for direct download
* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope
* Verify location for temporary download exists before checking free space
* In progress - refactor and extension
* Refactor and fix
* Last fixes and marvin tests
* Remove unused test file
* Improve logging
* Change default path for direct download
* Fix upload certificate
* Fix ISO failure after retry
* Fix metalink filename mismatch error
* Fix iso direct download
* Fix for direct download ISOs on local storage and shared mount point
* Last fix iso
* Fix VM migration with ISO
* Refactor volume migration to remove secondary storage intermediate
* Fix simulator issue
This adds support for JDK11 in CloudStack 4.14+:
- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
KVM is supported on arm64 Linux (https://www.linux-kvm.org/page/Processor_support#ARM:).
For a small (IoT) platform such as the new Raspberry Pi 4, which uses an armv8 processor
(cortex-a72), it's possible to run a Linux host with `/dev/kvm`
acceleration. This adds support for IoT IaaS in CloudStack.
This PR is from a fun weekend project where:
- I set up a Raspberry Pi 4 - 4GB RAM model with 4 CPU cores @ 1.5Ghz, 128GB SD samsung evo plus card
- Installed Ubuntu 19.10 raspi3 base image: http://cdimage.ubuntu.com/releases/19.10/release/ubuntu-19.10-preinstalled-server-arm64+raspi3.img.xz
- Built a custom Linux 5.3 kernel with KVM enabled, deb here: http://dl.rohityadav.cloud/cloudstack-rpi/kernel-19.10/ and installed the linux-image and linux-module debs
- Then installed/set up CloudStack on it (fixed some issues around jna by manually installing a newer libjna-java to /usr/share/cloudstack-agent/lib)
- Since the host processor is not x86_64, I had to build a new arm64 (or aarch64) systemvmtemplate: http://dl.rohityadav.cloud/cloudstack-rpi/systemvmtemplate/
I could finally get a 4.13 CloudStack + Adv zone/networking to run on it
and deployed a KVM based Ubuntu 19.10 environment and NFS storage.
Deployed a test VM with an isolated network; the VR works as expected. The console
proxy works as well; for this I tested against arm64 openstack Debian 9/10
templates.
I raised the issue of enabling KVM in upstream Ubuntu arm64 build: https://bugs.launchpad.net/ubuntu/+source/linux-raspi2/+bug/1783961
Ubuntu kernel team has come back and future arm64 releases may have
KVM enabled by default.
Limitation: my aarch64 env did not support IDE, therefore the
default bus type for volumes is SCSI. With VIRTIO it fails
sometimes.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548
Live storage migration on KVM under these conditions:
Between source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage capability for KVM, if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.
Additional notes:
To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
Fix for CentOS 7:
Create repo file on /etc/yum.repos.d/:
```
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
```
Then install the EV packages:
```
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
```
Reboot host
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* DPDK vHost User mode selection
* SQL text field and DPDK classes refactor
* Fix NullPointerException after refactor
* Fix unit test
* Refactor details type
When I used SandyBridge as a custom CPU in my testing, the VM failed to start due to the following error:
```
org.libvirt.LibvirtException: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: avx, xsave, aes, tsc-deadline, x2apic, pclmuldq
```
With this patch, it works with the following setting in agent.properties:
```
guest.cpu.mode=custom
guest.cpu.model=SandyBridge
guest.cpu.features=-avx -xsave -aes -tsc-deadline -x2apic -pclmuldq
```
The VM CPU is defined as below:
```
<cpu mode='custom' match='exact'>
<model fallback='allow'>SandyBridge</model>
<feature policy='disable' name='avx'/>
<feature policy='disable' name='xsave'/>
<feature policy='disable' name='aes'/>
<feature policy='disable' name='tsc-deadline'/>
<feature policy='disable' name='x2apic'/>
<feature policy='disable' name='pclmuldq'/>
</cpu>
```
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
merging
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Mock Scanner, instead of scanning the computer running the test.
This allows non-Linux machines to run the tests without scanning for a
non-existing /proc/meminfo.
* test fixes on 'other' platforms libvirt wrapper unit tests (#3)
* Keep iotune section in the VM's XML after live migration
When live migrating a KVM VM among local storages, the VM loses the
<iotune> section in its XML, therefore having no IO limitations.
This commit removes the piece of code that deletes the <iotune> section
in the XML.
* Add test for replaceStorage in LibvirtMigrateCommandWrapper
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* Fix Javadoc for method replaceIpForVNCInDescFile
* feature: add libvirt / qemu io bursting
Adds the ability to set bursting features from libvirt / qemu
This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.
https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/
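For illustration, the kind of <iotune> block this exposes; the element names are libvirt's, the values are made up:
```
public class IoTuneBurst {
    // Emits an <iotune> block with the burst ("*_max") elements libvirt 2.4+ understands.
    static String ioTune(long iops, long iopsBurst, long bytes, long bytesBurst) {
        return "<iotune>\n"
                + "  <total_iops_sec>" + iops + "</total_iops_sec>\n"
                + "  <total_iops_sec_max>" + iopsBurst + "</total_iops_sec_max>\n"
                + "  <total_bytes_sec>" + bytes + "</total_bytes_sec>\n"
                + "  <total_bytes_sec_max>" + bytesBurst + "</total_bytes_sec_max>\n"
                + "</iotune>";
    }

    public static void main(String[] args) {
        System.out.println(ioTune(1000, 2000, 50_000_000L, 100_000_000L));
    }
}
```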
* updates per rafael et al
The KVM Agent had two mechanisms for reporting its capabilities
and memory to the Management Server.
On startup it would ask libvirt the amount of Memory the Host has
and subtract and add the reserved and overcommit memory.
However, when the HostStats were reported to the Management Server,
these two configured values on the Agent were no longer reflected
in the statistics, thus showing all the available memory of the
Agent/Host to the Management Server.
This commit unifies this by using the same logic on Agent Startup
and during statistics reporting.
For example, a 4GB Hypervisor with the setting host.reserved.mem.mb=1024 reported:
memory=3069636608, reservedMemory=1073741824
The GUI (thus API) would then show:
Memory Total 2.86 GB
This way the Agent properly 'lies' to the Management Server about its
capabilities in terms of Memory.
This is very helpful if you want to overprovision or undercommit machines
for various reasons.
Overcommitting can be done when KSM or ZSwap or a fast SWAP device is
installed in the machine.
Underprovisioning is done when the Host might run other tasks than a KVM
hypervisor, for example when it runs in a hyperconverged setup with Ceph.
In addition, many values have internally been changed from a Double to a Long,
and they now store the amount of bytes instead of kilobytes.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
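A back-of-the-envelope check of the numbers quoted above (overcommit omitted for simplicity):
```
public class HostMemoryReport {
    public static void main(String[] args) {
        // The 4GB host from the example: libvirt reports the full memory,
        // and the Agent subtracts the reserved amount before reporting.
        long hostMemoryBytes = 3069636608L + 1073741824L;
        long reservedBytes = 1024L * 1024 * 1024; // host.reserved.mem.mb=1024
        long reported = hostMemoryBytes - reservedBytes;
        // Prints "Memory Total: 2.86 GB", matching the GUI value above.
        System.out.printf("Memory Total: %.2f GB%n", reported / 1024.0 / 1024 / 1024);
    }
}
```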
The additional queues can enhance the performance of the VirtIO SCSI disk,
and it is recommended to set this to the number of vCPUs an Instance is assigned.
The optional queues attribute specifies the number of queues for the
controller. For best performance, it's recommended to specify a value matching
the number of vCPUs. Since 1.0.5 (QEMU and KVM only)
Source: https://libvirt.org/formatdomain.html#elementsVirtio
Signed-off-by: Wido den Hollander <wido@widodh.nl>
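A sketch of the resulting controller element, with queues set to the Instance's vCPU count:
```
public class VirtioScsiQueues {
    // Builds the virtio-scsi controller element with the queues attribute
    // matching the number of vCPUs, per the libvirt recommendation quoted above.
    static String controller(int vcpus) {
        return "<controller type='scsi' index='0' model='virtio-scsi'>\n"
                + "  <driver queues='" + vcpus + "'/>\n"
                + "</controller>";
    }

    public static void main(String[] args) {
        System.out.println(controller(4));
    }
}
```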
* Allow KVM VM live migration with ROOT volume on file
* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests
* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
Added dummy and lo devices to be treated as normal bridge slave devices.
Fixes #2998
Added two more device names (lo* and dummy*). Implemented tests. Code was refactored.
Improved path concatenation code from "+" to Paths.get.
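An illustrative before/after of that path change (the concrete paths in the agent may differ):
```
import java.nio.file.Path;
import java.nio.file.Paths;

public class BridgePaths {
    public static void main(String[] args) {
        String device = "dummy0";
        // Before: error-prone string concatenation
        String concatenated = "/sys/class/net/" + device + "/" + "bridge";
        // After: Paths.get handles the separators
        Path joined = Paths.get("/sys/class/net", device, "bridge");
        System.out.println(concatenated);
        System.out.println(joined);
    }
}
```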
Windows has support for several paravirt features that it will use when running on Hyper-V, Microsoft's hypervisor. These features are called enlightenments. Many of the features are similar to paravirt functionality that exists with Linux on KVM (virtio, kvmclock, PV EOI, etc.)
Nowadays QEMU/KVM can also enable support for several Hyper-V enlightenments. When enabled, Windows VMs running on KVM will use many of the same paravirt optimizations they would use when running on Hyper-V.
A number of years ago, a PR was introduced that added a good portion of the code to enable this feature set, but it was never completed. This PR enables the existing features. The previous patch set detailed in #1013 also included the tests.
By selecting Windows PV, the enlightenment additions will be applied to the libvirt configuration. This is supported on Windows Server 2008 and beyond, so all currently supported versions of Windows Server.
In our testing, we've seen benchmark improvements of around 20-25% running on Centos 7 hosts and it is also supported on Centos/RHEL 6.5 and later. Testing on Ubuntu would be appreciated.
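For reference, the kind of <features> block libvirt accepts for these enlightenments; the exact set CloudStack enables may differ:
```
public class HypervEnlightenments {
    public static void main(String[] args) {
        System.out.println(
                "<features>\n"
              + "  <hyperv>\n"
              + "    <relaxed state='on'/>\n"
              + "    <vapic state='on'/>\n"
              + "    <spinlocks state='on' retries='4096'/>\n"
              + "  </hyperv>\n"
              + "</features>");
    }
}
```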
When an Instance is started (or attempted to be started) on a KVM Host, the Agent
should not worry about the allocated memory on this host.
To make a proper judgement we need to take more into account:
- Memory Overcommit ratio
- Host reserved memory
- Host overcommit memory
The Management Server has all the information, and the DeploymentPlanner
has to make the decision whether an Instance should and can be started on a
Host, not the host itself.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Remove the maven standard module (which only a few were using) and get rid of the maven customization for the project structure.
- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org leftovers
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>