* Support for live patching systemVMs and deprecating systemVM.iso. Includes:
- fix systemVM template version
- Include agent.zip, cloud-scripts.tgz to the commons package
- Support for live-patching systemVMs - CPVM, SSVM, Routers
- Fix Unit test
- Remove systemvm.iso dependency
* The following commit:
- refactors logic added to support SystemVM deployment on KVM
- Adds support to copy specific files (required for patching) to the hosts on Xenserver
- Modifies vmops method - createFileInDomr to take cleanup param
- Adds configurable sleep param to CitrixResourceBase::connect() used to verify if telnet to a specific port is possible (if sleep is 0, then default to _sleep = 10000ms)
- Adds Command/Answer for patch systemVMs on XenServer/Xcp
* Support to patch SystemVMs - VMware
- Remove attaching systemvm.iso to systemVMs
- Modify / Refactor VMware start command to copy patch related files to the systemvms
- cleanup
* Commit comprises:
- remove docker from systemvm template - use containerd as container runtime
- update create-k8s-binaries script to use ctr for all docker operations
- Update userdata sent to the k8s nodes
- update cksnode script, run during patching of the cks/k8s nodes
* Add ssh to k8s nodes details in the Access tab on the UI
* test
* Refactor ca/cert patching logic
* Commit comprises the following changes:
- Use restart network/VPC API to patch routers
- use livePatch API to support patching of only cpvm/ssvm
- add timeout to the keystore setup/import script
* remove all references of systemvm.iso
* Fix keystore-cert-import invocation + refactor cert timeout in CP/SS VMs
* fix script timeout
* Refactor cert patching for systemVMs + update keystore-cert-import script + patch-sysvms script + remove patchSysvmCommand from networkelementcommand
* remove commented code + change core user to cloud for cks nodes
* Update ownership of ssh directory
* NEED TO DISCUSS - add on the fly template conversion as an ExecStartPre action (systemd)
* Add UI changes + move changes from patch file to runcmd
* test: validate performance for template modification during seeding
* create vms folder in cloudstack-commons directory - debian rules
* remove logic for on the fly template convert + update k8s test
* fix syntax issue - causing issue with shared network tests
* Code cleanup
* refactor patching logic - certs
* move logic of fixing rootdiskcontroller from upgrade to kubernetes service
* add livepatch option to restart network & vpc
* smooth upgrade of cks clusters
* add cgroup config for containerd
* add systemd config for kubelet
* add additional info during image registry config
* address comments
* add temp links of download.cloudstack.org
* address part of the comments
* address comments
* update containerd config - as version has upgraded to 1.5 from 1.4.12 in 4.17.0
* address comments - simplify
* fix vue3 related icon changes
* allow network commands when router template version is lower but is patched
* add internal LB to the list of routers to be patched on network restart with live patch
* add unit tests for API param validations and new helper utilities - file scp & checksum validations
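As an illustration, the checksum half of those helper utilities can be sketched as below; the class and method names are illustrative, not the exact ones added in this commit:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative file-checksum helper for validating copied patch files.
public final class ChecksumUtil {

    private ChecksumUtil() { }

    // Computes the hex-encoded digest of a file using the given algorithm, e.g. "SHA-512".
    public static String computeChecksum(Path file, String algorithm)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance(algorithm);
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            for (int read = in.read(buffer); read != -1; read = in.read(buffer)) {
                digest.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // True when the file on disk matches the checksum shipped with the package.
    public static boolean validateChecksum(Path file, String expected, String algorithm)
            throws IOException, NoSuchAlgorithmException {
        return computeChecksum(file, algorithm).equalsIgnoreCase(expected);
    }
}
```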
* perform patching only for non-user i.e., system VMs
* add test to validate params
* remove unused import
* add column to domain_router to display software version and support network restart with livePatch from router view
* 'Requires upgrade' column now considers the package (cloud-scripts) checksum to identify true/false
* use router software version instead of checksum
* show N/A if no software version reported i.e., in upgraded envs
* fix deb failure
* update pom to official links of systemVM template
* get vdisk uuid from vcenter and store it into database
* add vdisk uuid as external_uuid to listVolume response
* add sql upgrade file
* Update vmware-base/src/main/java/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* update sql add column external_uuid
* Update server/src/main/java/com/cloud/storage/VolumeApiServiceImpl.java
Co-authored-by: Wei Zhou <weizhou@apache.org>
* adapt param description for externalUuid
* add 'idempotent column add' to create external_uuid col
* rename method to getExternalDiskUUID
* remove line disk_offering.system_use
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: Wei Zhou <weizhou@apache.org>
When using advanced virtualization the IO Driver is not supported. The
admin will decide if want to enable/disable this configuration from
agent.properties file. The default value is true
* Refactor create volume snapshot with running VM
* Refactor create volume snapshot with stopped VM
* Refactor create volume from snapshot
* Refactor create template from snapshot
* Refactor volume migration (migrateVolume/ migrateVirtualMachineWithVolume)
* Refactor snapshot deletion
* Refactor snapshot reversion
* Adjusts and fix cherry-pick conflicts
* Remove diffuse tests
* Add validation to add the flag '--delete' on the 'virsh blockcommit' command only if the libvirt version is equal to or higher than 6.0.0
* Expunge temporary snapshot only if template creation is from snapshot
* Extract strings to constant
* Remove unused imports
* Fix error on revert backed up snapshot
* Turn method's return to void as it is not used
* Rename method in SnapshotHelper
* Fix folder creation when using SharedMountPoint pool
* Remove static import
* Remove unused method
* Cover take snapshot in centos 7
* Handle right snapshot flag according to qemu version
Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
* Add persistence of VM stats
* Fix API 'since' attribute
* Add license
* Address GutoVeronezi's reviews
* Fix the order of VM stats in the API response
* Fix msid in VM stats data
* Fix disk stats and add minor improvements
* Add log message
* Build string using ReflectionToStringBuilderUtils
* Rerun checks
Co-authored-by: joseflauzino <jose@scclouds.com.br>
* VM snapshots of running KVM instance using storage providers plugins for disk snapshots
Added a new virtual machine snapshot strategy which uses storage provider plugins to take/revert/delete snapshots.
You can take a VM snapshot without VM memory on a KVM instance, using storage provider implementations for disk snapshots.
Revert and delete are also added as functionality, along with Freeze/Thaw commands for KVM instances.
The snapshots will be consistent, because we freeze the VM during the snapshotting. Backup to secondary storage is executed after
the thaw of the VM, and only if it is enabled in global settings.
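The freeze/snapshot/thaw flow can be sketched as follows, assuming libvirt-java's qemuAgentCommand binding and a running qemu-guest-agent in the guest; the domain name and the storage-plugin snapshot call are placeholders:

```java
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

// Illustrative freeze/snapshot/thaw sequence; the actual strategy classes differ.
public class FreezeThawExample {
    public static void main(String[] args) throws LibvirtException {
        Connect conn = new Connect("qemu:///system");
        Domain vm = conn.domainLookupByName("i-2-10-VM");
        // Freeze guest filesystems via the qemu-guest-agent so the
        // storage-provider disk snapshots are consistent.
        vm.qemuAgentCommand("{\"execute\":\"guest-fsfreeze-freeze\"}", 10, 0);
        try {
            // ... ask the storage provider plugin to snapshot each volume here ...
        } finally {
            // Always thaw, even if taking the snapshot failed; backup to
            // secondary storage (if enabled) happens only after the thaw.
            vm.qemuAgentCommand("{\"execute\":\"guest-fsfreeze-thaw\"}", 10, 0);
        }
    }
}
```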
* Removed duplicated functionality
Set a few methods in DefaultVMSnapshotStrategy to protected to reuse them
without duplicating the code. Removed code that is actually not needed
* Added requirements in global setting kvm.vmstoragesnapshot.enabled
Added more information in kvm.vmstoragesnapshot.enabled global setting,
that it needs installation of:
- qemu version 1.6+
- qemu-guest-agent installed on guest virtual machine
when the option is enabled
* Added Apache license header
* Removed commented code
* If "kvm.vmstoragesnapshot.enabled" is null should be considered as false
* removed unused imports, replaced default template
Removed unused imports which were causing failures and replaced the template with
CentOS 8
* "kvm.vmstoragesnapshot.enabled" set to dynamic
* Get the status of freeze/thaw commands, not the return code
Check the status to see whether freeze/thaw of the guest VM succeeded, rather than
looking at the return code. Code refactoring
* removed "CreatingKVM" VMsnapshot state and events related to it
* renamed AllocatedKVM to AllocatedVM
the states should not be associated to a hypervisor type
* log the result of the "drive-backup" command
* Check which VM snapshot strategy could handle the vm snapshots
gets the best match of VM snapshot strategy which could handle the vm
snapshots on KVM.
Other storage plugins could integrate with this functionality to support group snapshots
* Added poolId in canHandle for KVM hypervisors
Added poolId into canHandle method used to check if all volumes are on
the same PowerFlex's storage pool
* skip smoke tests if the hypervisor's OS type is CentOS
This PR works with functionality included in qemu-kvm-ev which
does not come by default on CentOS. The smoke tests will be skipped if
the hypervisor OS is CentOS
* Added missing import in smoke test
* Suggested change to use `org.apache.commons.lang.StringUtils.isNotBlank`
* Fix getting device on Ubuntu
On Ubuntu the device isn't provided and we have to get it from the
node-name parameter. The drive-backup command (on Ubuntu) also needs a job-id, which
is the value of node-name (this extra param works on Ubuntu and CentOS as well).
* Removed new snapshot states and functionality for NFS
* throw CloudRuntimeException
provide a proper error message when deleting a VM snapshot fails
* exclude GROUP snapshots when listing snapshots
* Skip tests if there is pool with NFS/Local
* address comments
This enables jacoco, which didn't run before with the -P quality profile because
the jacoco argLine was not passed to the surefire plugin.
This also adds support for jacoco/quality builds using a GitHub action and
posting of the PR coverage data using a new action step.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Update VM priority (cpu_shares) when live scaling it
* Apply suggestions from code review
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Addressing javadoc review
* Addressing review typo
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* [VMware] Support for removal of NIC on IP disassociation on the VR
* address comments - extract code + add appropriate operation name in logs
* Do not remove/unplug nic on removal of static nat on last IP (Public NIC)
* address comments
* address comment
* prevent role access escalation
* hierarchy issue fixed
* create api list in account manager for checking new account access
* full api list check
* strange role restriction removed for BareMetal
* add role check on update account as well
* more selective use of api checkers
* error msg and var name
Co-authored-by: Daan Hoogland <dahn@onecht.net>
* maven: migrate short-term to reload4j v1.2.18
This migrates to the log4j 1.x fork, reload4j 1.2.18.0, which is a drop-in
replacement and addresses some immediate CVEs and issues.
* log4j migration to reload4j in pom xmls
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Exclude log4j from transitive dependencies (#73)
Co-authored-by: Marcus Sorensen <shadowsor@gmail.com>
Co-authored-by: Marcus Sorensen <mls@apple.com>
* kvm: Use lscpu to get cpu max speed
* Fix str conversion
* Reorder
* Refactor
* Apply suggestions from code review
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Updated the calling method name getCpuSpeedFromCommandLscpu
* Make it more readable
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
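The lscpu-based speed lookup above can be approximated as follows, assuming CloudStack's Script.runSimpleBashScript helper; the exact parsing in getCpuSpeedFromCommandLscpu may differ:

```java
import com.cloud.utils.script.Script;

// Illustrative sketch: read the max CPU clock from lscpu, falling back to the
// current clock when "CPU max MHz" is not reported on the platform.
public class CpuSpeedExample {
    static long getCpuSpeedMhz() {
        String out = Script.runSimpleBashScript(
                "lscpu | grep 'CPU max MHz' | head -1 | cut -d: -f2 | tr -d ' '");
        if (out == null || out.isEmpty()) {
            out = Script.runSimpleBashScript(
                    "lscpu | grep 'CPU MHz' | head -1 | cut -d: -f2 | tr -d ' '");
        }
        return (out == null || out.isEmpty()) ? 0L : (long) Double.parseDouble(out);
    }
}
```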
* Add NFS version to mount command
* Remove extra line
* Extend NFS version to mount secondary storage
* Unused import
* Refactor NFS version to be granular
* Make use of the ConfigKey on the NFS version setting value
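A granular setting like this is usually declared as a ConfigKey; a minimal sketch, where the key name, default value and scope are illustrative assumptions rather than the exact ones in this change:

```java
import org.apache.cloudstack.framework.config.ConfigKey;

// Illustrative scoped setting for the NFS mount version.
public interface NfsVersionSettings {
    ConfigKey<String> NfsVersion = new ConfigKey<>("Advanced", String.class,
            "secstorage.nfs.version", "3",
            "NFS protocol version used when mounting secondary storage (e.g. 3 or 4.1)",
            true, ConfigKey.Scope.StoragePool);
}
```

Callers can then resolve the value per scope, e.g. NfsVersionSettings.NfsVersion.valueIn(storeId), instead of reading one global value.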
Currently, our compute offerings and disk offerings are tightly coupled with respect to many aspects. For example, if a compute offering is created, a corresponding disk offering entry is also created with the same ID as the reference. Also, creating a compute offering takes a few disk-related parameters which go to the corresponding disk offering anyway. I think this design was initially made to address compute offering for the root volume created from a template. Also, changing the offering of a volume is tightly coupled with storage tags and has to be done in different APIs, either migrateVolume or resizeVolume. Changing of disk offering should be seamless, should consider new storage tags and new size, and should place the volume in the appropriate state as defined in the disk offering.
more details are mentioned here https://cwiki.apache.org/confluence/display/CLOUDSTACK/Compute+offering+and+disk+offering+refactoring
* Schema changes and disk offering column change from "type" to "compute_only"
* Few more changes
* Decoupled service offering and disk offering
* Remove diskofferingid from vminstance VO
* Decouple service offering and disk offering states
* diskoffering getsize() is only for strict disk offerings
* Fix deployVM flow
* Added new API params to compute offering creation
* Add diskofferingstrictness to serviceoffering vo under quota
* Added overrideDiskOfferingId parameter in deploy VM API which will override disk offering for the root disk both in template and ISO case
Added diskSizeStrictness parameter in create Disk offering API which will decide whether to restrict resize or disk offering change of a volume
* Fix User vm response to show proper service offering and disk offerings
* Added disk size strictness in disk offering response
* Added disk offering strictness to the service offering response
* Remove comments
* Added UI changes for Disk offering strictness in add compute offering form and Disk size strictness in add disk offering form
* Added diskoffering details to the service offering response
* Added UI changes in deployvm wizard to accept override disk offering id
* Fix delete compute offering
* Fix VM deployment from custom service offering
* Move uselocalstorage column access from service offering to disk offering
* UI: Separated compute and disk related parameters in add compute offering wizard, also added association to disk offering
* Fixed diskoffering automatic selection on add compute offering wizard
* UI: move compute only toggle button outside the box in add compute offering wizard
* Added volumeId parameter to listDiskOfferings API and the disksizestrictness flag of the current disk offering is honored while list disk offerings
* Added configuration parameter to decide whether to check volume tags on the destination storagepool during migration
* Added disk offering change checks during resize volume operation
* Added new API changeofferingforVolume API and corresponding changes
* Add UI form for changeOfferingForVolume API
* Fix UI conflicts
* Fix service offering usage as disk offering
* Fix unit test failures
* fix user_vm_view
* Addressed review comments
* Fixed service_offering_view
* Fix service offering edit flow
* Fix service offering constructor to address custom offering
* Fix domain_router_view to get proper service offering id
* Removed unused import
* Addressed review comments and fixed update service offering flow with storage tags
* Added marvin test cases for checking disk offering strictness
* review comments addressed
* Remove system_use column from disk offering join
* update volume_view to update system_use column from service offering and not disk offering
* Fix changeOfferingForVolume API for custom disk offering
* Fix global setting implementation
* Fix list volumes, after changing system_use column from disk offering to service offering in volume_view
* Changes for override root disk offering in deployvm wizard in case of custom offering
* Fix a unit test case
* Fixed recent unit test cases with new serviceofferingvo constructor
* Fix unit test in VolumeApiServiceImpl
* Added storage id for the list disk offering API and corresponding UI changes in migrateVolume and changeOfferingForVolume flow
* Rename global configuration parameter from storage.pool.tags.disk.offering.strictness to match.storage.pool.tags.with.disk.offering
* Fix smoke test failures
* Added tool tip for migrate volume UI form
* Address review comments and fix UI form of deploy VM in case of ISO.
* Fixed resize volume UI form for data disk
* UI changes to disable override root disk size when override root disk offering is enabled
* UI fix in deploy vm wizard
* Fix listdiskoffering after rebasing with main
* Fixed UI in migrate and changeofferingfor volume to handle empty disk offering list
Removed the volume's current disk offering from listDiskOffering response list
* Added custom Iops to resize volume form and removed the current disk offering during change offering for volume UI form
* Fix false response on updateDiskOffering API
* Added search field for changeofferingforvolume UI form
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Removed DB changes from 4.16 upgrade file
* Resolving merge conflicts with main 4.17
* Added support for auto migration and auto resize of the root volume upon changing the service offering for VM.
* UI: Added automigrate checkbox in scale VM form
* Added since attributes to new API params
* Added shrinkOK parameter to changeofferingforvolume API
* Added shrinkOk param to UI in changeOfferingforVolume form
* Added shrinkOk flag to scaleVM and changeServiceForVirtualMachines and UI form
* Removed old foreign key constraint on IDs of service offering and disk offering
* Allow resize and automigrate of root volume if required in all cases of service offering change
* Allow only resize to higher disk size from UI
* Fixing vue syntax error
* Make UI changes to provide root disk size box when the linked disk offering is of custom
* Converted from check box to toggle in scale VM, changeoffering, resize and migrate volume forms
* Fix resize volume operation to update the VM settings
* Fix migratevolume form to pick selected storage pool id in list diskofferings API
* kvm: don't force scsi controller for aarch64 VMs
This would allow use of virtio disk controller with Ceph, etc or as
defined in the VM's root disk controller setting, rather than always
enforce SCSI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* remove test that doesn't apply now
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* address review comment
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR improves the volume file search on the datastore while computing the VM snapshot chain size in VMware. The file search is now performed in the VM directory on the datastore, instead of all the directories on the datastore (which is leading to incorrect VM snapshot chain size computation when VM id has 6 digits).
With diskful set to true, linstor will fail if it cannot create local
storage for the resource, which in turn would make it impossible to have a
setup with just compute nodes on CloudStack.
* xcp-ng: fix vm boot options
Use boot option values from VM details directly, similar to other hypervisor plugins, instead of relying on boot properties explicitly set for VirtualMachineTO.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster
* Change in constructors
* Naming changes
* Remove commented code
* Refactor code for more readability
* Addressed review comments on code refactor
* KVM: Add VM Settings for virtual GPU hardware type and memory
* fix method createVideoDef argument in test package
* add available options for KVM virtual GPU hardware VM setting
* fix videoRam default value
* fix: when _videoRam is 0, use the default provided by libvirt
* Fix export snapshot and export template to secondary storage in VMware to export only one required disk
* Move clone operation into virtual machine mo
* Code refactored for readability
* Added disk key check even for successful clone operation
* Delete detached disks from cloned VM and added a few logs
This adds a volume(primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes, snapshots should be possible,
but currently don't work for RAW volume types in cloudstack.
* plugin-storage-volume-linstor: notify libvirt guests about the resize
* vmware: delete snapshot disk after backup to secondary storage
WIP - This ensures that worker VM is destroyed along with any of its own
disks that are backed up to secondary storage.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix for volume backup and confusing vm var name
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* change
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* tag as worker vm
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix copy systemvm.iso for same version
For VMware, systemvm.iso is copied from the MS to the secondary store. Currently the server checks whether the desired file is present on the secondary store; if it is not present, the ISO is copied.
This change adds a checksum check for the source and destination ISO, which allows copying a new ISO if there is a mismatch.
Useful when the file is corrupted on the secondary store or a new systemvm.iso is generated for a dev environment.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Fix of creating volumes from snapshots without backup
When a few snapshots are created only on primary storage and we try to create
a volume or a template from a snapshot, only the first operation is
successful. That is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it didn't cover the functionality to create a volume from a snapshot
which is kept only on Ceph
* Address review
Worker VM tags are missing for a few cloned VMs in VMware, and so these are skipped when tracking / cleaning up Worker VMs. Adding proper Worker VM tags to these VMs would make them trackable from CloudStack.
* Added support for removing unused port groups on VMWare
* Fixed error handling around unavailable portgroup name
* Review changes, defaulting global var to false, added warning to description, changed if statement.
* Cleanup unused network port groups on all the hosts.
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Co-authored-by: nicolas <nicovazquez90@gmail.com>
* CLOUDSTACK-9175: [VMware DRS] Adding new host to DRS cluster does not participate in load balancing.
Summary: When a new host is added to a cluster, Cloudstack doesn't create all the port groups (created by cloudstack earlier in other hosts) present in the cluster. Since the new host doesn't have all the necessary networking port groups of cloudstack, it is not eligible to participate in DRS load balancing or HA.
Solution: When adding a host to the cluster in Cloudstack, use VMware API to find the list of unique port groups on a previously added host (older host in the cluster) if exists and then create them on the new host.
* Added few checks for cluster details
* remove hot enable cpu and memory in case of reservation
ram and cpu reservation have no relation to ram and cpu hot add
* add custom ram_reservation and add it to vm details
* system vms don't have this property, for this reason add an additional check
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* replace 0.0 with NumberUtils
* remove default value and remove return MinRam(seems to be not necessary)
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/guru/VmwareVmImplementer.java
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: davidjumani <dj.davidjumani1994@gmail.com>
* Create utility to centralize byte conversions (see the sketch after this series of commits)
* Add/change toString definitions
* Create Libvirt handler to ScaleVmCommand
* Enable dynamic scaling of VMs with KVM
* Move config from interface to class and rename it
As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits
* Configure VM max memory and cpu cores
The values are according to service offering or global configs
* Extract dpdk configuration to a method and test it
* Extract OS desc config to a method and test it
* Extract guest resource def to a method and test it
Improve libvirt def
* Refactor LibvirtVMDef.GuestResourceDef
* Refactor ScaleVmCommand
* Improve VMInstanceVO toString()
* Refactor upgradeRunningVirtualMachine method
* Turn int variables into long on utility
* Verify if VM is scalable on KVMGuru
* Rename some KVMGuruTest's methods
* Change vm's xml to work with max memory
* Verify if service offering is dynamic before scale
* Create methods to retrieve data from domain
* Create def to hotplug memory
* Adjust the way command was scaling the VM
* Fix database persistence before executing command
* Send more info to host to improve log
* Fix var name
* Fix missing "}"
* Undo unnecessary changes
* Address review
* Fix scale validation
* Add VM prepared for dynamic scaling validation
* Refactor LibvirtScaleVmCommandWrapper and improve unit tests
* Remove duplicated method
* Add RuntimeException check
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Update ByteScaleUtilsTest.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
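The byte-conversion utility mentioned at the start of this series might look roughly like the sketch below; the names are illustrative and the real ByteScaleUtils may differ:

```java
// Illustrative byte-scale utility; long arithmetic avoids int overflow for
// large memory sizes, which is why the int variables were turned into long.
public final class ByteScaleUtils {
    public static final long KIB = 1024L;
    public static final long MIB = KIB * 1024L;
    public static final long GIB = MIB * 1024L;

    private ByteScaleUtils() { }

    public static long mebibytesToBytes(long mib) {
        return mib * MIB;
    }

    public static long bytesToKibibytes(long bytes) {
        return bytes / KIB;
    }
}
```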
* Add SharedMountPoint to KVMs supported storage pool types
* Fix live migration to iSCSI and improve logs
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* disable hot add memory and cpu via vm settings
* add alternative implementation for hot add memory and cpu
* add log entry
* Modify and add log entry for hotadd
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
Co-authored-by: DK101010 <dirk.klahre@itelligence.de>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs - vmware-tools.iso, xs-tools.iso - are marked as PERHOST in the DB. They are active but not downloaded to the secondary storages and hence have no template-zone entry.
Skips the template-zone check for such templates.
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Externalize KVM Agent storage's timeout configuration
* Address @nvazquez review
* Add empty line at the end of the agent.properties file
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Refactor method createVMFromSpec
* Add unit tests
* Fix test
* Extract if block to a method that adds extra configs to the VM domain XML
* Split travis tests trying to isolate which test is causing an error
* Override toString() method
* Update documentation
* Fix checkstyle error (line with trailing spaces)
* Change VirtualMachineTO print of object
* Add try except to find message error. Remove after test
* Fix indent
* Trying to understand what is happening in this code
* Remove unnecessary comment
* Revert travis tests
Co-authored-by: SadiJr <17a0db2854@firemailbox.club>
* Externalize KVM Agent storage's timeout configuration
Created a class of constants for the agent properties available to configure in "agent.properties".
Created a class that provides a facility to read the agent's properties file and get its properties.
* Refactored the KVMHAMonitor nested thread and changed some logs
* Added the timeout config in the agent.properties file
* Rename classes
* Rename var and remove comment
* Fix typo with word "heartbeat"
* Extract multiple methods call to variables
* Add unit tests to file handler
* Increase info about the property
* Create inner class Property
* Rename method getProperty to getPropertyValue
* Remove copyright
* Remove copyright
* Extract code to createHeartBeatCommand
* Change method access from protected to private
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
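A rough sketch of the property catalogue and file handler described above, with illustrative names:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Illustrative agent.properties read path; the real code splits this into an
// agent-properties catalogue and a separate file-handler utility.
public class AgentPropertiesFileHandler {

    // A typed property with a default, e.g. the heartbeat/storage timeout.
    public static class Property<T> {
        private final String name;
        private final T defaultValue;

        public Property(String name, T defaultValue) {
            this.name = name;
            this.defaultValue = defaultValue;
        }

        public String getName() { return name; }
        public T getDefaultValue() { return defaultValue; }
    }

    // Reads a property value from agent.properties, falling back to the default.
    public static String getPropertyValue(String file, Property<String> property)
            throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(file)) {
            props.load(in);
        }
        return props.getProperty(property.getName(), property.getDefaultValue());
    }
}
```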
* Added disk provisioning type support for VMWare
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
On newer libvirt/qemu it seems PCI hot-plugging could be an issue as
seen in:
https://www.suse.com/support/kb/doc/?id=000019383
https://bugs.launchpad.net/nova/+bug/1836065
This was found to be true on the ARM64/aarch64 platform (tested on a
RaspberryPi4). The default machine doc advises pre-allocating PCI
controllers on the machine, and a pcie-to-pci-bridge based controller for
legacy PCI models:
https://libvirt.org/pci-hotplug.html#x86_64-q35
This patch introduces the concept as a workaround until a proper fix is
done (ideally in the upstream libvirt/qemu projects). Until then client
code can add 32 PCI controllers and a pcie-to-pci-bridge controller for
aarch64 platforms.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
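A sketch of what the workaround amounts to when generating the domain XML; the controller model names follow the linked libvirt doc, while the class, counts and indexes are illustrative:

```java
// Illustrative generation of pre-allocated PCI controllers for aarch64 guests.
public class PciControllerDef {
    static String buildPciControllers(int rootPorts) {
        StringBuilder xml = new StringBuilder();
        // Pre-allocate pcie-root-port controllers so devices can be hot-plugged
        // without libvirt/qemu having to add PCI slots at runtime.
        for (int i = 1; i <= rootPorts; i++) {
            xml.append(String.format(
                    "<controller type='pci' model='pcie-root-port' index='%d'/>\n", i));
        }
        // Legacy PCI devices hang off a pcie-to-pci-bridge on q35-style machines.
        xml.append(String.format(
                "<controller type='pci' model='pcie-to-pci-bridge' index='%d'/>\n",
                rootPorts + 1));
        return xml.toString();
    }
}
```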
Currently there is no disk IO driver configuration for VMs running on KVM. That's OK for most of the cases; however, some quite interesting optimizations have recently been added with the io_uring IO driver.
Note that IO URING requires:
Qemu >= 5.0, and
Libvirt >= 6.3.0.
By using io_uring we can see a massive I/O performance improvement within Virtual Machines running from Local and/or NFS storage.
This implementation enhances the KVM disk configuration by adding a workflow for setting the disk IO driver. Additionally, if the Qemu and Libvirt versions match the requirements for io_uring, we set it on the VM. If there is no support for such a driver, we keep the behaviour as it is nowadays, without any IO driver configured.
Fixes: #4883
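The version gate reduces to a simple check; a sketch, assuming libvirt-style encoded version numbers (major * 1,000,000 + minor * 1,000 + release):

```java
// Illustrative io_uring support check; the agent already tracks both versions.
public class IoUringSupport {
    static final long MIN_QEMU_VERSION = 5000000;    // qemu 5.0.0
    static final long MIN_LIBVIRT_VERSION = 6003000; // libvirt 6.3.0

    static boolean isIoUringSupported(long qemuVersion, long libvirtVersion) {
        return qemuVersion >= MIN_QEMU_VERSION && libvirtVersion >= MIN_LIBVIRT_VERSION;
    }

    // When supported, the disk driver element gains io='io_uring', e.g.:
    // <driver name='qemu' type='qcow2' cache='none' io='io_uring'/>
}
```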
* vmware: fix migrate vm with volume
A recent forward merge of the 4.15 branch accidentally brought a bug into the VM relocation method for VMware while trying to find the datastore for the migrated volume.
This PR fixes it by using either the available target host or the source host.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Update plugins/hypervisors/vmware/src/main/java/com/cloud/hypervisor/vmware/resource/VmwareResource.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* server: fix failed to apply userdata when enabling static nat
* server: fix cannot expunge vm as applyUserdata fails
* configdrive: fix ISO not recognized when plugging a new nic
* configdrive: detach and attach configdrive ISO as it is changed when plugging a new nic or migrating a vm
* configdrive test: (1) password file does not exist in recreated ISO; (2) vm hostname should be changed after migration
* configdrive: use centos55 template with sshkey and configdrive support
* configdrive: disklabel is 'config-2' for configdrive ISO
* configdrive: use copy for configdrive ISO and move for other template/volume/iso
* configdrive: use public-keys.txt
* configdrive test: fix (1) update_template; (2) ssh into vm by keypair
This PR intends to improve logging on agent start to facilitate troubleshooting.
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
Co-authored-by: sureshanaparti <12028987+sureshanaparti@users.noreply.github.com>
* vxlan: arp does not work between hosts as multicast group is communicated over physical nic instead of linux bridge
when a linux bridge is set up (refer to http://docs.cloudstack.apache.org/projects/archived-cloudstack-getting-started/en/latest/networking/vxlan.html#configure-product-to-use-vxlan-plugin) and used as the kvm traffic label of physical networks, the vms on different hosts cannot reach each other.
(1) does not work:
```
/usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvxlan.sh -v 1001 -p eth1 -b brvx-1001 -o add
```
"bridge fdb" shows
```
00:00:00:00:00:00 dev vxlan1001 dst 239.0.3.233 via eth1 self permanent
```
(2) this works:
```
/usr/share/cloudstack-common/scripts/vm/network/vnet/modifyvxlan.sh -v 1001 -p cloudbr1 -b brvx-1001 -o add
```
"bridge fdb" shows
```
00:00:00:00:00:00 dev vxlan1001 dst 239.0.3.233 via cloudbr1 self permanent
```
* vxlan: fix issue if kvm network label is not set
This PR fixes the problem of not updating the chain info, or setting the chain info to null, after volume migrations.
Problem: while fetching the volume chain info, the management server assumes the datastore name to be a UUID (this is true only for NFS storages added by CloudStack), but the datastore can have any name.
Solution: to fetch the volume chain info, use the datastore name instead of the UUID.
The fix is made in the flow of the following API operations:
migrateVirtualMachine
migrateVirtualMachineWithVolume
migrateVolume
* Fix of some UEFI related issues
1 - fix of attach/detach ISO of VM with UEFI boot type
2 - if OS type of an ISO is categorized as "Other" the bus type of the disk
will be set to "sata"
* Simplify the validation of OS types
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes the issue of a missing fcd folder in local storage in the case of VMware vSphere.
With this fix, a folder named fcd is created whenever local storage is initiated.
Datastore cluster support as primary storage is already there, but any changes to the datastore cluster at vCenter, like addition/removal of a datastore, are not synchronised with CloudStack directly. That required removing the primary storage from CloudStack and adding it again.
Here, synchronisation of the datastore cluster is fixed without the need to remove or re-add the datastore cluster.
1. A new API is introduced, syncStoragePool, which takes the datastore cluster storage pool UUID as the parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, then the management server will create a new child storage pool in the database under the datastore cluster. If the new child storage pool is already added as an individual storage pool, then the existing storage pool entry will be converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to be removed on vCenter, then the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastore.
Fixes: #4808, #4941
This PR adds a force flag to the attachIso / detachIso commands, especially for VMware, where it is noticed that trying to detach an ISO, or to attach an ISO when another is already present, fails to do the necessary operation: from the ACS end we either answer the question returned by ESXi for the CDRom disconnect operation as No (for the detach operation) or do not answer the question at all (for the attach operation).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
VM can have multiple ROOT volumes and some can be on a zone-wide store; therefore iterate over all of them till a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster during a stopped VM migration. Also, find target datastore using the host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Adds new/missing guest os mappings for XCP-ng/Xenserver 8.1
Copy guest OS mappings from XCP-ng/Xenserver 8.1 for XCP-ng/Xenserver 8.2
Adds Ubuntu 20.04 guest os mapping for XCP-ng/Xenserver 8.2
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes a small bug when explicitly setting VM hardware versions lower than version 10.
VMware expects the hardware version in the format vmx-DD, where DD is a two-digit representation of the virtual hardware version. For hardware versions lower than 10, CloudStack was not using two digits for the hardware version number, which ended up in an error while creating worker VMs (vmx-8, for example, instead of vmx-08).
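The fix boils down to zero-padding the version number; a minimal illustration:

```java
public class HardwareVersionFormat {
    public static void main(String[] args) {
        int version = 8;
        // VMware expects a two-digit hardware version: "vmx-08", not "vmx-8".
        System.out.println(String.format("vmx-%02d", version));
    }
}
```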
This PR fixes #4244
deploying of VMs from ISOs and from templates with UEFI boot type
deploying of VMs from ISOs and from templates with UEFI boot type with
volumes in RAW format
This PR aims at introducing persistence mode in L2 networks and enhancing the behavior in Isolated networks
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Fixes #4729
As reported in the issue, ACS 4.7 used a normal UUID in the db for a presetup primary store on XenServer.
Later the value has been changed to the store's path with '/' removed.
The current changes try to retrieve the SR's name-label from the store's path if the UUID doesn't match the path field for a pre-setup store.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes the issue pertaining to volume resize on VMWare for deploy as-is templates. VMware deploy as-is templates are those that are deployed as per the specification in the imported OVF. Hence override root disk size will not be adhered to for such templates. Moreover, when we deploy VMs in stopped state and resize the volume, the root disk doesn't get resized but the volume size is merely updated in the DB.
This PR also includes the following (for deploy as-is templates):
- Disables overriding root disk size during VM deployment on the UI
- Disables selection of compute offerings with root disk size specified, at the time of deployment
- Provided users with the option to deploy the VM in stopped state via the UI (so as to give users an option to resize the volumes before starting the VM)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Updated libvirt's native reboot operation for VM on KVM using ACPI event, and Added 'forced' reboot option to stop and start the VM (using rebootVirtualMachine API)
* Added 'forced' reboot option for System VM and Router
- New parameter 'forced' in rebootSystemVm API, to stop and then start System VM
- New parameter 'forced' in rebootRouter API, to force stop and then start Router
* Added force reboot tests for User VM, System VM and Router
These changes are related to PR #3194, but include suspending/resuming the VM when doing a VM snapshot as well, when deleting a VM snapshot, as it is performing the same operations via Libvirt. Also, there was an issue with the UI/localization changes in the prior PR, as that PR was altering the Volume snapshot behavior, but was altering the VM snapshot wording. Both have been altered in this PR.
Issuing this in response to the work happening in PR #4029.
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore
- Check the router filesystem is writable or not, before performing health checks
=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not
- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)
* Addressed some issues for few operations on PowerFlex storage pool.
- Updated migration volume operation to sync the status and wait for migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
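A sketch of the per-pool client cache with token renewal described above; the real ScaleIO gateway client pool differs in detail:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative one-client-per-storage-pool cache that renews a client once its
// session token can no longer be valid.
public class GatewayClientPool {

    static class Client {
        final Instant createdAt = Instant.now();
        volatile Instant lastUsed = Instant.now();

        boolean isSessionExpired() {
            // Token lives 8 hours from creation, or 10 minutes of inactivity.
            return Duration.between(createdAt, Instant.now()).toHours() >= 8
                    || Duration.between(lastUsed, Instant.now()).toMinutes() >= 10;
        }
    }

    private final Map<Long, Client> clientsByPoolId = new ConcurrentHashMap<>();

    // One gateway client per PowerFlex storage pool, renewed on expiry.
    Client getClient(long storagePoolId) {
        return clientsByPoolId.compute(storagePoolId, (id, existing) ->
                (existing == null || existing.isSessionExpired()) ? new Client() : existing);
    }
}
```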
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs(Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
and Minor improvements in PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter "supportsStorageSnapshot" (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
In previous CloudStack versions, the qcow2 image does not have a backing file format;
however, it is required in newer qemu versions, for example qemu 4.2 on Ubuntu 20.04.
Steps to reproduce the issue:
(1) install cloudstack 4.14 or previous version, and ubuntu 19.04 or 18.04/16.04 LTS.
(2) create vms.
(3) upgrade to 4.15, upgrade os to ubuntu 20.04 , or install a new server with ubuntu 20.04.
(4) migrate vm from old ubuntu version to ubuntu 20.04, failed with exception below
```
2021-02-04 13:43:07,397 DEBUG [resource.wrapper.LibvirtMigrateCommandWrapper] (agentRequest-Handler-1:null) (logid:93da9385) ExecutionException : org.libvirt.LibvirtException: Requested operation is not valid: format of backing image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/66990fcc-fd98-4932-9649-989bf6583d59' of image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/a3dd1f0f-2557-4e07-951c-e4eb7b3f38b2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)
```
(5)stop vm, and start it on ubuntu 20.04 server. failed with exception below
```
2021-02-04 13:46:29,766 WARN [resource.wrapper.LibvirtStartCommandWrapper] (agentRequest-Handler-5:null) (logid:b54745a7) LibvirtException
org.libvirt.LibvirtException: Requested operation is not valid: format of backing image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/66990fcc-fd98-4932-9649-989bf6583d59' of image '/mnt/03b6f487-9eaf-38bf-ad2d-d985423b832f/a3dd1f0f-2557-4e07-951c-e4eb7b3f38b2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)
```
To make testing easier, steps 1 and 2 can be replaced by
```
qemu-img create -f qcow2 -b <backing file> <qcow2 image>
```
so qcow2 image does not have a backing file format.
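For newer qemu the fix is to record the backing file format in the image metadata at create time; a minimal illustration (paths are placeholders):

```java
import java.io.IOException;

// Illustrative qcow2 creation with an explicit backing format: "-F qcow2"
// writes the backing_fmt into the metadata, so libvirt no longer rejects the
// chain on start/migrate.
public class QcowBackingFormatExample {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "qemu-img", "create", "-f", "qcow2",
                "-b", "/mnt/pool/base-template.qcow2", "-F", "qcow2",
                "/mnt/pool/new-volume.qcow2")
                .inheritIO().start();
        System.exit(p.waitFor());
    }
}
```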
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (clusters of same Pod) for system VMs, VRs on VMware
- Allows storage migration for stopped system VMs, VRs on VMware within the same Pod if the StoragePool is of cluster scope type
Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* 4.15:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
The bus type of `data disk` volumes is hardcoded to `virtio`, or to `scsi` when using virtio-scsi (or based on the template type). Therefore, there is no way to specify the bus type for data disk volumes (as we have for root disks).
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
This merges apache/cloudstack-primate under ui and removes the legacy UI
from ui/legacy in master/4.16 as voted on dev ML.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Adding RBD primary storage through the UI fails when there is no host port parameter,
because when we created the pool, we did not add the port target in the xml
This fixes an issue introduced in c3554ec31d,
which enabled a block of code that will double escape the rados host/monitor
port.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes a regression issue in #4497
In cloudstack 4.14 or before, the cpu topology is set only when cpucore per socket is set (to 4 or 6).
in other conditions, there is no cpu topology in vm xml definition.
with #4497, vm will have cpu topology in its xml definition, if cpucore per socket is not set.
<topology sockets='<vm cpu cores>' cores='1' threads='1'/>
Not sure if it causes any issue. I think it would be better not to add this part in vm xml definition if cpucore per socket is not set.
XenServer 7.1 has a file descriptor/tapdisk iso-caching issue where a new systemvm.iso is not recognised and a file IO error is seen inside the VR/ssvm/cpvm. This was only reproducible with XS 7.1 (intermittently); the fix was to check and eject the (old/stale/cached) systemvm.iso, then insert the new systemvm.iso and then eject it.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem:
When the migrateVMwithVolumes API is tried on a VM with two volumes, to migrate it to a different host while migrating only one volume, CloudStack migrates both the volumes but then marks only one of them migrated. This makes the volume inaccessible due to an inconsistency between the volume's path in CloudStack and vSphere
Solution:
Set the target datastore in relocate spec properly for each volume