Commit Graph

47 Commits

Author SHA1 Message Date
Bryan Lima afdc73f911
Update VM priority (cpu_shares) when live scaling it (#6031)
* Update VM priority (cpu_shares) when live scaling it (see the sketch at the end of this entry)

* Apply suggestions from code review

Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>

* Addressing javadoc review

* Addressing review typo

Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
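
For orientation, a minimal sketch of the idea, assuming the common CloudStack convention of deriving cpu_shares from the offering's cores × speed; the libvirt-java scheduler-parameter call is an illustration, not the code from this PR:

```java
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;
import org.libvirt.SchedUlongParameter;

// Illustrative helper, not the PR's code: recompute cgroup cpu_shares after
// a live scale, deriving the shares value from cores * speed as the commit
// title suggests. The scheduler-parameter usage is an assumption.
public final class LiveScaleCpuShares {
    private LiveScaleCpuShares() { }

    public static void apply(Connect conn, String vmName, int cpuCores, int cpuSpeedMhz)
            throws LibvirtException {
        SchedUlongParameter shares = new SchedUlongParameter();
        shares.field = "cpu_shares";
        shares.value = (long) cpuCores * cpuSpeedMhz;
        Domain domain = conn.domainLookupByName(vmName);
        domain.setSchedulerParameters(new SchedUlongParameter[] { shares });
    }
}
```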
2022-03-14 11:34:10 -03:00
Rohit Yadav 2f6fc56e74
maven: Fix build on osx (#6056)
kvm: don't read /proc/meminfo on non-Linux environments as part of the constructor
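
Roughly the shape of such a guard (a sketch, not the actual patch):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: parse MemTotal only on Linux so constructing the resource is safe
// on macOS (e.g. during a maven build); names here are illustrative.
static long readMemTotalKb() throws IOException {
    if (!System.getProperty("os.name").toLowerCase().contains("linux")) {
        return 0L; // non-Linux build/test environment: skip /proc/meminfo
    }
    for (String line : Files.readAllLines(Path.of("/proc/meminfo"))) {
        if (line.startsWith("MemTotal:")) {
            return Long.parseLong(line.replaceAll("\\D+", "")); // keep digits only
        }
    }
    return 0L;
}
```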

Co-authored-by: Nicolas Vazquez <nicovazquez90@gmail.com>
2022-03-07 14:55:42 +05:30
Harikrishna f15cab16da
server: Decouple service (compute) offering and disk offering (#5008)
Currently, our compute offerings and disk offerings are tightly coupled in many aspects. For example, when a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference, and creating a compute offering takes a few disk-related parameters which in any case only go to the corresponding disk offering. This design was presumably made to address the compute offering for the root volume created from a template. Changing the offering of a volume is also tightly coupled with storage tags and has to be done through different APIs, either migrateVolume or resizeVolume. Changing a disk offering should be seamless: it should consider the new storage tags and new size, and place the volume in the appropriate state as defined in the disk offering.

More details are mentioned here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
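
For orientation before the change list below, a hedged sketch of the kind of guard the new diskSizeStrictness flag implies on resize (illustrative names, not the actual CloudStack code):

```java
// Illustrative guard, not the actual CloudStack code: a disk offering with
// diskSizeStrictness pins the volume's size, so resize (and, analogously,
// a disk offering change) must be rejected.
static void checkDiskSizeStrictness(boolean diskSizeStrictness, long currentSizeBytes, long newSizeBytes) {
    if (diskSizeStrictness && newSizeBytes != currentSizeBytes) {
        throw new IllegalArgumentException(
                "The disk offering enforces strict disk size; resize or disk offering change is not allowed");
    }
}
```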

* Schema changes and disk offering column change from "type" to "compute_only"

* Few more changes

* Decoupled service offering and disk offering

* Remove diskofferingid from vminstance VO

* Decouple service offering and disk offering states

* diskoffering getsize() is only for strict disk offerings

* Fix deployVM flow

* Added new API params to compute offering creation

* Add diskofferingstrictness to serviceoffering vo under quota

* Added overrideDiskOfferingId parameter to the deploy VM API, which will override the disk offering for the root disk in both the template and ISO cases

Added diskSizeStrictness parameter to the create Disk offering API, which will decide whether to restrict resize or disk offering change of a volume

* Fix User vm response to show proper service offering and disk offerings

* Added disk size strictness in disk offering response

* Added disk offering strictness to the service offering response

* Remove comments

* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form

* Added diskoffering details to the service offering response

* Added UI changes in deployvm wizard to accept override disk offering id

* Fix delete compute offering

* Fix VM deployment from custom service offering

* Move uselocalstorage column access from service offering to disk offering

* UI: Separated compute and disk related parameters in add compute offering wizard, also added association to disk offering

* Fixed diskoffering automatic selection on add compute offering wizard

* UI: move compute only toggle button outside the box in add compute offering wizard

* Added volumeId parameter to listDiskOfferings API; the disksizestrictness flag of the current disk offering is honored while listing disk offerings

* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration

* Added disk offering change checks during resize volume operation

* Added new changeOfferingForVolume API and corresponding changes

* Add UI form for changeOfferingForVolume API

* Fix UI conflicts

* Fix service offering usage as disk offering

* Fix unit test failures

* fix user_vm_view

* Addressed review comments

* Fixed service_offering_view

* Fix service offering edit flow

* Fix service offering constructor to address custom offering

* Fix domain_router_view to get proper service offering id

* Removed unused import

* Addressed review comments and fixed update service offering flow with storage tags

* Added marvin test cases for checking disk offering strictness

* review comments addressed

* Remove system_use column from disk offering join

* update volume_view to update system_use column from service offering and not disk offering

* Fix changeOfferingForVolume API for custom disk offering

* Fix global setting implementation

* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view

* Changes for override root disk offering in deployvm wizard in case of custom offering

* Fix a unit test case

* Fixed recent unit test cases with new serviceofferingvo constructor

* Fix unit test in VolumeApiServiceImpl

* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow

* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering

* Fix smoke test failures

* Added tool tip for migrate volume UI form

* Address review comments and fix UI form of deploy VM in case of ISO.

* Fixed resize volume UI form for data disk

* UI changes to disable override root disk size when override root disk offering is enabled

* UI fix in deploy vm wizard

* Fix listdiskoffering after rebasing with main

* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list

* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form

* Fix false response on updateDiskOffering API

* Added search field for changeofferingforvolume UI form

* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Removed DB changes from 4.16 upgrade file

* Resolving merge conflicts with main 4.17

* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.

* UI: Added automigrate checkbox in scale VM form

* Added since attributes to new API params

* Added shrinkOK parameter to changeofferingforvolume API

* Added shrinkOk param to UI in changeOfferingforVolume form

* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form

* Removed old foreign key constraint on IDs of service offering and disk offering

* Allow resize and automigrate of root volume if required in all cases of service offering change

* Allow only resize to higher disk size from UI

* Fixing vue syntax error

* Make UI changes to provide a root disk size box when the linked disk offering is custom

* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms

* Fix resize volume operation to update the VM settings

* Fix migratevolume form to pick selected storage pool id in list diskofferings API
2022-01-27 15:08:42 +05:30
Rohit Yadav 6728b696cd
kvm: don't always force scsi controller for aarch64 VMs (#5802)
* kvm: don't force scsi controller for aarch64 VMs

This would allow use of the virtio disk controller with Ceph, etc., or as
defined in the VM's root disk controller setting, rather than always
enforcing SCSI.
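
A sketch of that selection order (illustrative names, and the fallback choice is an assumption):

```java
// Illustrative selection order: honor the VM's root disk controller detail
// when present (e.g. "virtio" for Ceph-backed disks), instead of forcing
// "scsi" on aarch64.
static String chooseDiskBus(String rootDiskControllerDetail) {
    if (rootDiskControllerDetail != null && !rootDiskControllerDetail.isBlank()) {
        return rootDiskControllerDetail;
    }
    return "virtio";
}
```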

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>

* remove test that doesn't apply now

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>

* address review comment

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-12-30 15:19:32 +05:30
div8cn 52a9dbdcd2
kvm available memory calculation optimization (#5540)
* Update LibvirtComputingResource.java

* Update LibvirtComputingResource.java

* Update LibvirtComputingResource.java

* Update MemStat.java

* Update MemStat.java

* Update MemStatTest.java
2021-10-05 12:34:28 -03:00
Leo (Hsueh Yu-Min) 72a1c0e7f1
[KVM] Add VM Settings for virtual GPU hardware type and memory (#5513)
* KVM: Add VM Settings for virtual GPU hardware type and memory

* fix method createVideoDef argument in test package

* add available options for KVM virtual GPU hardware VM setting

* fix videoRam default value

* fix: if _videoRam is 0, use the default provided by libvirt (see the sketch below)
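
A hedged sketch of the libvirt <video> element these settings could render to (the exact rendering is an assumption):

```java
// Illustrative rendering of the two VM settings into a libvirt <video>
// element; when the configured vram is 0 the attribute is omitted so that
// libvirt supplies its own default, matching the last bullet above.
static String videoDefXml(String videoHardware, int videoRamKiB) {
    if (videoRamKiB > 0) {
        return String.format("<video><model type='%s' vram='%d'/></video>", videoHardware, videoRamKiB);
    }
    return String.format("<video><model type='%s'/></video>", videoHardware);
}
```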
2021-10-04 09:55:32 +05:30
Peinthor Rene 66c39c1589
storage: Linstor volume plugin (#4994)
This adds a (primary) volume storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but currently don't work for RAW volume types in CloudStack.

* plugin-storage-volume-linstor: notify libvirt guests about the resize
2021-09-16 10:50:58 +05:30
Daniel Augusto Veronezi Salvador 8a16729fcf
Support vm dynamic scaling with kvm (#4878)
* Create utility to centralize byte conversions (see the sketch at the end of this entry)

* Add/change toString definitions

* Create Libvirt handler to ScaleVmCommand

* Enable dynamically scaling VMs with KVM

* Move config from interface to class and rename it

As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits

* Configure VM max memory and cpu cores

The values are according to service offering or global configs

* Extract dpdk configuration to a method and test it

* Extract OS desc config to a method and test it

* Extract guest resource def to a method and test it

Improve libvirt def

* Refactor LibvirtVMDef.GuestResourceDef

* Refactor ScaleVmCommand

* Improve VMInstanceVO toString()

* Refactor upgradeRunningVirtualMachine method

* Turn int variables into long on utility

* Verify if VM is scalable on KVMGuru

* Rename some KVMGuruTest's methods

* Change vm's xml to work with max memory

* Verify if service offering is dynamic before scale

* Create methods to retrieve data from domain

* Create def to hotplug memory

* Adjust the way command was scaling the VM

* Fix database persistence before executing command

* Send more info to host to improve log

* Fix var name

* Fix missing "}"

* Undo unnecessary changes

* Address review

* Fix scale validation

* Add VM prepared for dynamic scaling validation

* Refactor LibvirtScaleVmCommandWrapper and improve unit tests

* Remove duplicated method

* Add RuntimeException check

* Remove copyright from header

* Remove copyright from header

* Remove copyright from header

* Remove copyright from header

* Remove copyright from header

* Update ByteScaleUtilsTest.java
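
The byte-conversion utility could look roughly like this (the class name comes from this PR; the methods shown are assumptions):

```java
// Hypothetical shape of the centralized byte-conversion utility; long
// arithmetic avoids the int overflows that "Turn int variables into long"
// addresses.
public final class ByteScaleUtils {
    public static final long KIB = 1024L;
    public static final long MIB = KIB * KIB;

    private ByteScaleUtils() { }

    public static long mebibytesToBytes(long mib) {
        return mib * MIB;
    }

    public static long bytesToKibibytes(long bytes) {
        return bytes / KIB;
    }
}
```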

Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-08-21 09:29:02 +02:00
SadiJr eff2da2518
Refactor and improve method com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.createVMFromSpec() (#5149)
* Refactor method createVMFromSpec

* Add unit tests

* Fix test

* Extract if block to a method for adding extra configs to the VM Domain XML

* Split travis tests trying to isolate which test is causing an error

* Override toString() method

* Update documentation

* Fix checkstyle error (line with trailing spaces)

* Change VirtualMachineTO print of object

* Add try/catch to find the error message; remove after testing

* Fix indent

* Trying to understand what is happening in this code

* Refactor method createVMFromSpec

* Add unit tests

* Fix test

* Extract if block to a method for adding extra configs to the VM Domain XML

* Split travis tests trying to isolate which test is causing an error

* Override toString() method

* Update documentation

* Fix checkstyle error (line with trailing spaces)

* Remove unnecessary comment

* Revert travis tests

Co-authored-by: SadiJr <17a0db2854@firemailbox.club>
2021-07-21 15:07:25 -03:00
Gabriel Beims Bräscher 1d831a32a9
kvm: KVM NFS disk IO driver supporting IO_URING (#5012)
Currently there is no disk IO driver configuration for VMs running on KVM. That's OK for most cases; however, some quite interesting optimizations have recently been added with the io_uring IO driver.

Note that io_uring requires:

- QEMU >= 5.0, and
- Libvirt >= 6.3.0.

By using io_uring we can see a massive I/O performance improvement within Virtual Machines running from local and/or NFS storage.

This implementation enhances the KVM disk configuration by adding a workflow for setting the disk IO driver. Additionally, if the QEMU and Libvirt versions match those required for io_uring, we set it on the VM. If there is no support for such a driver, we keep the behavior as it is nowadays, without any IO driver configured.
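
A hedged sketch of that version gate, using libvirt's encoded version numbers (major * 1,000,000 + minor * 1,000 + release):

```java
// Assumed helper for the version gate described above; libvirt encodes
// versions as major * 1_000_000 + minor * 1_000 + release.
static boolean supportsIoUring(long qemuVersion, long libvirtVersion) {
    return qemuVersion >= 5_000_000L      // QEMU >= 5.0
        && libvirtVersion >= 6_003_000L;  // libvirt >= 6.3.0
}
// When true, the disk's <driver> element would carry io='io_uring'.
```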

Fixes: #4883
2021-07-15 13:02:44 +05:30
Pearl Dsilva 0dbeb262e4
server: Support for persistence mode in L2 networks (#4561)
This PR aims to introduce persistence mode in L2 networks and enhance the behavior in Isolated networks.
Doc PR apache/cloudstack-documentation#183

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2021-04-05 14:37:11 +05:30
Rohit Yadav def65ec873 Merge remote-tracking branch 'origin/4.15' 2021-04-04 13:09:41 +05:30
Wei Zhou 09428380f7
kvm: remove unnecessary new String (#4870)
Thanks @rubieHess for pointing it out.
See #4800 (comment)
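
The pattern in question, for reference:

```java
// before: redundant allocation of an immutable value
String copy = new String(original);
// after: just share the reference
String copy2 = original;
```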
2021-04-04 13:08:29 +05:30
Daniel Augusto Veronezi Salvador 4e90a8c454
Qemu 2.10 requires `-U` flag to read volume metadata (#4567)
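
A sketch of how the flag might be appended when probing an in-use image; `qemu-img info -U` (--force-share) exists from QEMU 2.10, while the surrounding names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative command assembly: -U (--force-share) lets `qemu-img info`
// read metadata of an image that is in use. Not CloudStack's actual code.
static List<String> qemuImgInfoCommand(String volumePath, boolean qemuSupportsForceShare) {
    List<String> cmd = new ArrayList<>(List.of("qemu-img", "info", "--output=json"));
    if (qemuSupportsForceShare) {
        cmd.add("-U");
    }
    cmd.add(volumePath);
    return cmd;
}
```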
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
2021-03-18 11:54:01 +01:00
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist and cannot be deleted from the datastore

- Check whether the router filesystem is writable before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for a few operations on the PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
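
A sketch of that renewal rule (the method name is illustrative):

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the renewal rule quoted above: a session token lives at most
// 8 hours, and dies earlier after 10 idle minutes.
static boolean tokenExpired(Instant issuedAt, Instant lastActivity, Instant now) {
    return now.isAfter(issuedAt.plus(Duration.ofHours(8)))
            || now.isAfter(lastActivity.plus(Duration.ofMinutes(10)));
}
```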

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Rohit Yadav c82688a355
kvm: Fix double-escape issue while creating rbd disk options (#4568)
This fixes an issue introduced in c3554ec31d,
which enabled a block of code that double-escapes the rados host/monitor
port.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2021-01-05 15:23:38 +05:30
davidjumani 3872bf1ff9
kvm: Enable PVLAN support on L2 networks (#4040)
This is an extension of #3732 for KVM.
It is restricted to OVS > 2.9.2; since Xen uses OVS 2.6, PVLAN is unsupported there.
This also fixes an issue where VMs on the same PVLAN and the same host were unable to communicate.
2020-08-20 15:46:34 +05:30
Wido den Hollander c3554ec31d
kvm: For Ceph, define the port in the XML only if a port number has been specified (#4231)
Ceph used to use port 6789 (no need to specify it), but with the messenger v2
from Ceph it switched to port 3300 while 6789 still works.

librados/librbd/libvirt will automatically figure out the ports to use if none is
specified.

Therefore there is no need for CloudStack to explicitly define the port in the XML
passed to Libvirt or Qemu.

Leave blank if no port number has been defined by the user.
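
Illustrative rendering of that rule (a sketch, not the plugin's actual code):

```java
// Emit the port attribute only when the user configured one, and let
// librados/libvirt negotiate 3300/6789 on their own otherwise.
static String cephMonitorHostXml(String monitorHost, int port) {
    return port > 0
            ? String.format("<host name='%s' port='%d'/>", monitorHost, port)
            : String.format("<host name='%s'/>", monitorHost);
}
```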
2020-08-05 13:44:40 +05:30
Pearl Dsilva a73712ec4e
server: Enable sending hypervisor host name via metadata - VR and Config Drive (#3976)
Enable sending hypervisor host details via metadata for VR and Config Drive providers

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
2020-07-01 08:44:11 +05:30
Bitworks LLC 750abf3551
FEATURE-3823: kvm agent hooks (#3839) 2020-03-14 09:22:08 +01:00
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add a free disk space check on the host prior to template download (sketched at the end of this entry)

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
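
The free-space check added early in this list could look roughly like this (illustrative names, not the agent's actual code):

```java
import java.io.File;

// Sketch of the pre-download free-space check on the host.
static void checkFreeSpace(String downloadLocation, long templateSizeBytes) {
    File dir = new File(downloadLocation);
    if (dir.getUsableSpace() < templateSizeBytes) {
        throw new IllegalStateException(
                "Not enough free space for template download at " + downloadLocation);
    }
}
```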
2020-03-06 19:56:54 +01:00
Wei Zhou 458d3b5b47
Multiple networks support for vms in advanced zone with securit… (#3639) 2020-02-19 14:02:12 +00:00
Rohit Yadav d90341ebf1
cloudstack: add JDK11 support (#3601)
This adds support for JDK11 in CloudStack 4.14+:

- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2020-02-12 12:58:25 +05:30
Rohit Yadav 524b995083
IoT/ARM64 support: allow cloudstack-agent on Raspberry Pi 4 (armv8) to use kvm acceleration (#3644)
KVM is supported on arm64 Linux (https://www.linux-kvm.org/page/Processor_support#ARM:).
For a small (IoT) platform such as the new Raspberry Pi 4, which uses an armv8 processor
(Cortex-A72), it's possible to run a Linux host with `/dev/kvm`
acceleration. This adds support for IoT IaaS in CloudStack.

This PR is from a fun weekend project where:
- I set up a Raspberry Pi 4 - 4GB RAM model with 4 CPU cores @ 1.5Ghz, 128GB SD samsung evo plus card
- Installed Ubuntu 19.10 raspi3 base image: http://cdimage.ubuntu.com/releases/19.10/release/ubuntu-19.10-preinstalled-server-arm64+raspi3.img.xz
- Built a custom Linux 5.3 kernel with KVM enabled, deb here: http://dl.rohityadav.cloud/cloudstack-rpi/kernel-19.10/ and installed the linux-image and linux-module packages
- Then installed/set up CloudStack on it (fixed some issues around JNA by manually installing a newer libjna-java to /usr/share/cloudstack-agent/lib)
- Since the host processor is not x86_64, I had to build a new arm64 (or aarch64) systemvmtemplate: http://dl.rohityadav.cloud/cloudstack-rpi/systemvmtemplate/

I could finally get a 4.13 CloudStack + Adv zone/networking to run on it
and deployed a KVM-based Ubuntu 19.10 environment and NFS storage.
Deployed a test VM with an isolated network; the VR works as expected. The console
proxy works as well; for this, I tested against arm64 OpenStack Debian 9/10
templates.

I raised the issue of enabling KVM in upstream Ubuntu arm64 build: https://bugs.launchpad.net/ubuntu/+source/linux-raspi2/+bug/1783961
The Ubuntu kernel team has responded, and future arm64 releases may have
KVM enabled by default.

Limitation: my aarch64 env did not support IDE, therefore the default
bus type for volumes is SCSI. With VIRTIO it fails
sometimes.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-11-11 22:01:05 +05:30
Nicolas Vazquez a75444a585
KVM: DPDK live migrations (#3365)
* DPDK live migrations

* Remove DPDK created ports if VM migration fails or prepare migration fails

* Rename DPDK classes lowercase
2019-06-25 12:23:09 -03:00
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

- From source and destination hosts within the same cluster
- From NFS primary storage to NFS cluster-wide primary storage
- Source NFS and destination NFS storage mounted on hosts

In order to enable this functionality, the database should be updated to enable the live storage capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

- To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
- This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
  https://bugs.centos.org/view.php?id=14026
  https://bugzilla.redhat.com/show_bug.cgi?id=1219541
- On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS 7; however, this works with CentOS 6.
Fix for CentOS 7:

Create a repo file in /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Nicolas Vazquez 501aa7cd91
DPDK vHost User mode selection (#3153)
* DPDK vHost User mode selection

* SQL text field and DPDK classes refactor

* Fix NullPointerException after refactor

* Fix unit test

* Refactor details type
2019-05-29 08:36:33 -03:00
ustcweizhou 798b79fa5b kvm: disable cpu features if feature starts with '-' (#3335)
When I used SandyBridge as a custom CPU in my testing, the VM failed to start due to the following error:
```
org.libvirt.LibvirtException: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: avx, xsave, aes, tsc-deadline, x2apic, pclmuldq
```

With this patch, it works with the following setting in agent.properties:
```
  guest.cpu.mode=custom
  guest.cpu.model=SandyBridge
  guest.cpu.features=-avx -xsave -aes -tsc-deadline -x2apic -pclmuldq
```

The VM CPU is defined as below:
```
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>SandyBridge</model>
    <feature policy='disable' name='avx'/>
    <feature policy='disable' name='xsave'/>
    <feature policy='disable' name='aes'/>
    <feature policy='disable' name='tsc-deadline'/>
    <feature policy='disable' name='x2apic'/>
    <feature policy='disable' name='pclmuldq'/>
  </cpu>
```
2019-05-27 18:43:38 +05:30
Rohit Yadav 0700d91a68 Merge branch '4.12'
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
  merging

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 15:15:17 +05:30
Frank Maximus e11f7ee1ba RIP Nuage Cloudstack Plugin (#3146)
may it rest in peaces
2019-05-14 10:58:24 +02:00
Rohit Yadav 00ff536f81 Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-05-14 14:26:11 +05:30
Gabriel Beims Bräscher 8f7b27bbdc Mock Scanner instead of scanning the computer running the test. (#3173)
* Mock Scanner instead of scanning the computer running the test.

This allows non-Linux machines to run the tests without scanning for a
non-existent /proc/meminfo.

* Test fixes for libvirt wrapper unit tests on 'other' platforms (#3)
2019-04-24 13:33:06 +02:00
Gabriel Beims Bräscher 709845f4a3
Keep iotune section in the VM's XML after live migration (#3171)
* Keep iotune section in the VM's XML after live migration

When live migrating a KVM VM among local storages, the VM loses the
<iotune> section of its XML and therefore has no IO limitations.

This commit removes the piece of code that deletes the <iotune> section
in the XML.

* Add test for replaceStorage in LibvirtMigrateCommandWrapper

Signed-off-by: Wido den Hollander <wido@widodh.nl>

* Fix Javadoc for method replaceIpForVNCInDescFile
2019-02-12 22:07:03 -02:00
Nathan Johnson 637cc6ec4e feature: add libvirt / qemu io bursting (#3133)
* feature: add libvirt / qemu io bursting

Adds the ability to set bursting features from libvirt / qemu

This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.

https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/

* updates per rafael et al
2019-02-04 19:47:44 -02:00
Wido den Hollander c496c84c6c kvm: Properly report available memory to Management Server (#2795)
The KVM Agent had two mechanisms for reporting its capabilities
and memory to the Management Server.

On startup it would ask libvirt for the amount of Memory the Host has
and subtract/add the configured reserved and overcommit memory.

When the HostStats were however reported to the Management Server
these two configured values on the Agent were no longer reported
in the statistics thus showing all the available memory in the
Agent/Host to the Management Server.

This commit unifies this by using the same logic on Agent Startup
and during statistics reporting.

  memory=3069636608, reservedMemory=1073741824

This was reported by a 4GB Hypervisor with this setting:

  host.reserved.mem.mb=1024

The GUI (thus API) would then show:

  Memory Total	2.86 GB

This way the Agent properly 'lies' to the Management Server about its
capabilities in terms of Memory.
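
With the numbers above, the unified calculation is simply (a sketch; overcommit omitted for brevity, and the libvirt total is derived from the figures quoted):

```java
// The example above as arithmetic: the libvirt-reported total minus the
// reserved memory is what the Agent reports to the Management Server.
static long reportedMemoryBytes(long libvirtTotalBytes, long reservedBytes) {
    return libvirtTotalBytes - reservedBytes; // 4143378432 - 1073741824 = 3069636608 (~2.86 GB)
}
```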

This is very helpful if you want to overprovision or undercommit machines
for various reasons.

Overcommitting can be done when KSM or ZSwap or a fast SWAP device is
installed in the machine.

Underprovisioning is done when the Host might run other tasks than a KVM
hypervisor, for example when it runs in a hyperconverged setup with Ceph.

In addition, internally many values have been changed from a Double to a Long
and now store the amount in bytes instead of kilobytes.

Signed-off-by: Wido den Hollander <wido@widodh.nl>
2019-01-24 20:18:04 -02:00
Wido den Hollander c565db2cf2 kvm: Set amount of queues for Virtio SCSI driver to vCPU of Instance (#3101)
The additional queues can enhance the performance of the VirtIO SCSI disk,
and it is recommended to set this to the number of vCPUs an Instance is assigned.

  The optional queues attribute specifies the number of queues for the
  controller. For best performance, it's recommended to specify a value matching
  the number of vCPUs. Since 1.0.5 (QEMU and KVM only)

Source: https://libvirt.org/formatdomain.html#elementsVirtio
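
Illustratively, the rendered controller element could be built like this (a sketch, not the actual wrapper code):

```java
// Render the virtio-scsi controller with a queues count equal to the
// Instance's vCPUs, per the libvirt documentation quoted above.
static String virtioScsiControllerXml(int vcpus) {
    return String.format(
            "<controller type='scsi' index='0' model='virtio-scsi'><driver queues='%d'/></controller>",
            vcpus);
}
```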

Signed-off-by: Wido den Hollander <wido@widodh.nl>
2019-01-08 10:39:21 +01:00
Gabriel Beims Bräscher bf209405e7 Allow KVM VM live migration with ROOT volume on file storage type (#2997)
* Allow KVM VM live migration with ROOT volume on file

* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests

* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
2018-12-14 09:01:28 -02:00
Bitworks LLC 9dce8a5dea kvm: Added two more device name patterns to valid bridge slaves (lo* and dummy*) (#3000)
Added dummy and lo devices to be treated as normal bridge slave devices.
Fixes #2998  
Added two more device names (lo* and dummy*). Implemented tests. Code was refactored.
Improved path concatenation code from "+" to Paths.get (see the sketch below).
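
The improvement described, sketched (the exact sysfs path is illustrative):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Build the path with Paths.get rather than string concatenation.
static Path bridgeSysfsPath(String device) {
    return Paths.get("/sys/devices/virtual/net", device, "bridge");
}
```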
2018-12-07 01:59:00 +05:30
Rohit Yadav ac9562a4a1 Merge remote-tracking branch 'origin/4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-11-29 15:06:06 +05:30
Nicolas Vazquez 4de4eabd18
Enable DPDK support on KVM (#2839)
* Enable DPDK support on KVM

* Allow DPDK deployments on user VMs only

* Fix port name ordering
2018-11-07 09:29:01 -03:00
Simon Weller c4b621a418 kvm: HyperV Enlightenment for Improved Windows Server 2008+ Performance (#2870)
Windows has support for several paravirt features that it will use when running on Hyper-V, Microsoft's hypervisor. These features are called enlightenments. Many of the features are similar to paravirt functionality that exists with Linux on KVM (virtio, kvmclock, PV EOI, etc.)

Nowadays QEMU/KVM can also enable support for several Hyper-V enlightenments. When enabled, Windows VMs running on KVM will use many of the same paravirt optimizations they would use when running on Hyper-V.

A number of years ago, a PR was introduced that added a good portion of the code to enable this feature set, but it was never completed. This PR enables the existing features. The previous patch set detailed in #1013 also included the tests.

By selecting Windows PV, the enlightenment additions will be applied to the libvirt configuration. This is supported on Windows Server 2008 and beyond, so all currently supported versions of Windows Server.

In our testing, we've seen benchmark improvements of around 20-25% running on CentOS 7 hosts, and it is also supported on CentOS/RHEL 6.5 and later. Testing on Ubuntu would be appreciated.
2018-10-25 06:54:13 +05:30
Wido den Hollander 65f31f1a9f kvm: Agent should not check if remaining memory on host is sufficient (#2766)
When an Instance is started (or an attempt is made to start it) on a KVM Host, the Agent
should not worry about the allocated memory on this host.

To make a proper judgement we need to take more into account:

- Memory Overcommit ratio
- Host reserved memory
- Host overcommit memory

The Management Server has all the information, and the DeploymentPlanner
has to make the decision whether an Instance should and can be started on a
Host, not the host itself.

Signed-off-by: Wido den Hollander <wido@widodh.nl>
2018-08-08 12:14:26 +05:30
Daan Hoogland 1d05fead49 Merge branch '4.11' 2018-06-21 13:08:55 +02:00
Rohit Yadav 644b0910cd Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-04-20 00:46:43 +05:30
Rohit Yadav 8ef131745a Merge branch '4.11' 2018-03-15 16:46:50 +05:30
Rohit Yadav f96398c127 Merge branch '4.11' 2018-02-14 11:56:00 +01:00
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the maven standard module (which only a few projects were using) and get rid of maven customization for the project structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30