* Fix shrinking of volumes with QCOW2 format
If a volume is in QCOW2 format, shrinking is handled on
the agent side. In some storage plugins there are cases where the
volume's format is kept in the DB as QCOW2 while the actual format is raw.
Until now this limited those plugins' ability to shrink
the volumes. Such volumes are now handled by the storage plugins.
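A minimal sketch of the decision described above (illustrative Java, not the actual agent or plugin code): the shrink is routed based on the volume's real on-disk format rather than the format recorded in the DB.
```
// Illustrative sketch only: decide whether the agent should handle a shrink
// based on the volume's real on-disk format, not the format recorded in the
// DB, which may still say QCOW2 for volumes that are actually raw.
public class ShrinkDecision {

    enum Format { QCOW2, RAW }

    static boolean agentHandlesShrink(Format formatInDb, Format actualFormat) {
        // the DB-recorded format is deliberately ignored here; only a real
        // QCOW2 file is shrunk on the agent side, raw volumes go to the plugin
        return actualFormat == Format.QCOW2;
    }

    public static void main(String[] args) {
        System.out.println(agentHandlesShrink(Format.QCOW2, Format.RAW));   // false -> plugin shrinks
        System.out.println(agentHandlesShrink(Format.QCOW2, Format.QCOW2)); // true  -> agent shrinks
    }
}
```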
* Addressed @nvazquez suggested change
Log the exception instead of only the exception message
* Added disk provisioning type support for VMWare
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Externalize secondary storage capacity threshold
* Use default value as threshold when config value is lower than 0.0
* Move config to CapacityManager
* Validate config in CapacityManagerImpl
* Use config in StorageOrchestrator
* Change config description
* Remove unused import
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
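A minimal sketch of the validation described above, assuming a hypothetical default value; it only illustrates the "fall back to the default when the configured threshold is below 0.0" rule, not CloudStack's actual ConfigKey machinery.
```
// Minimal sketch: fall back to the default secondary storage capacity
// threshold whenever the configured value is lower than 0.0 or missing.
public class SecondaryStorageThreshold {

    static final double DEFAULT_THRESHOLD = 0.90; // hypothetical default value

    static double effectiveThreshold(Double configured) {
        if (configured == null || configured < 0.0) {
            return DEFAULT_THRESHOLD; // invalid or missing -> use the default
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(effectiveThreshold(-1.0)); // 0.9
        System.out.println(effectiveThreshold(0.95)); // 0.95
    }
}
```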
* server: fix failure to apply userdata when enabling static nat
* server: fix cannot expunge vm as applyUserdata fails
* configdrive: fix ISO not being recognized when plugging a new nic
* configdrive: detach and attach the configdrive ISO as it is changed when plugging a new nic or migrating a vm
* configdrive test: (1) password file does not exist in recreated ISO; (2) vm hostname should change after migration
* configdrive: use centos55 template with sshkey and configdrive support
* configdrive: disklabel is 'config-2' for configdrive ISO
* configdrive: use copy for configdrive ISO and move for other template/volume/iso
* configdrive: use public-keys.txt
* configdrive test: fix (1) update_template ; (2) ssh into vm by keypair
* Enhance log messages with hostName
* Use host.toString() in most host-related logs.
* Remove redundant "Host" in logs and enhance logs
* duplicated "for"
* Adopt String.format, and enhance code
* Address reviews enhancing log messages
Update server/src/main/java/com/cloud/resource/ResourceManagerImpl.java
-- server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
-- server/src/main/java/com/cloud/resource/RollingMaintenanceManagerImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Fix String.format issue and change log message from debug to warn
* Fix checkstyle issue
* Fix string.format log
* Address review: enhance logs
* Enhance log of hosts in maintenance avoid list
* Remove "VM" on logs as vm.toString() already appends VM-<details>
* Add more details of the VM when postStateTransitionEvent
* Address reviewer and enhance VMInstanceVO.toString()
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
On API `resetSSHKeyForVirtualMachine`, ACS also regenerates the VM password when the VM uses a template with `Password Enabled` set to true; there is already an API to reset the VM password, therefore the reset SSH keys API should not also reset the VM password.
Besides being a meaningless step, the VM's password regeneration slows down the main process and may cause confusion in operations, as the VM's password is changed without this being explicitly requested.
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
This PR fixes an issue where a nic has the wrong gateway after updating a vm nic.
Steps to reproduce the issue
(1) create shared network (in advanced zone or advanced zone with sg)
(2) create new shared network (with same startip/endip/netmask, but different gateway).
(3) create a vm in new network
(4) stop vm and update vm nic ip address
Expected result:
The vm has correct gateway and netmask (of second network)
Actual result:
The vm has wrong gateway and netmask (of first network)
- Added connection manager to the gateway client.
- Renew the client session on '401 Unauthorized' response.
- Refactored the gateway client calls, for GET and POST methods.
- Consume the http entity content after login/(re)authentication and close the content stream if it exists.
- Updated storage pool client connection timeout configuration 'storage.pool.client.timeout' to non-dynamic.
- Added storage pool client max connections configuration 'storage.pool.client.max.connections' (default: 100) to specify the maximum connections for the ScaleIO storage pool client.
- Updated unit tests.
Also blocked the attach volume operation for uploaded volumes on ScaleIO/PowerFlex storage pools.
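A simplified sketch of the pattern described above using Apache HttpClient; class and method names are illustrative, not the actual ScaleIO gateway client code. It shows a pooled connection manager sized by a max-connections setting, a single re-authentication on a 401 response, and consuming/closing the entity so the pooled connection is released.
```
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

// Illustrative pooled gateway client with one session renewal on 401.
public class PooledGatewayClient {

    private final CloseableHttpClient client;
    private volatile String sessionToken;

    PooledGatewayClient(int maxConnections) {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(maxConnections);           // e.g. storage.pool.client.max.connections
        cm.setDefaultMaxPerRoute(maxConnections);
        this.client = HttpClients.custom().setConnectionManager(cm).build();
    }

    String get(String url) throws Exception {
        CloseableHttpResponse response = execute(url);
        if (response.getStatusLine().getStatusCode() == 401) {
            EntityUtils.consumeQuietly(response.getEntity()); // release the connection
            response.close();
            renewSession();                                   // re-authenticate once
            response = execute(url);
        }
        try {
            return EntityUtils.toString(response.getEntity());
        } finally {
            response.close();
        }
    }

    private CloseableHttpResponse execute(String url) throws Exception {
        HttpGet request = new HttpGet(url);
        request.setHeader("Authorization", "Basic " + sessionToken);
        return client.execute(request);
    }

    private void renewSession() {
        // log in against the gateway again and store the new token (omitted)
        this.sessionToken = "renewed-token";
    }
}
```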
* Add mail dependencies
* Create util to send SMTP mail (a minimal sketch of such a util follows this entry)
* Add unit tests to SMTP mail sender
* Use SMTP mail util on quota alert
* Use SMTP mail util on alert
* Use SMTP mail util on project
* Use SMTP mail util on usage alert
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
* Remove copyright line in license header
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
Co-authored-by: Gabriel Beims Bräscher <gabrascher@gmail.com>
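As referenced above, a minimal sketch of a reusable SMTP sender built on the JavaMail (javax.mail) dependency mentioned in these commits; class and method names are illustrative, not the actual CloudStack util.
```
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

// Illustrative SMTP sender that the quota, usage, project and alert code
// could all reuse instead of duplicating mail-sending logic.
public class SmtpMailSender {

    public static void send(String host, int port, String from, String to,
                            String subject, String body) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", host);
        props.put("mail.smtp.port", String.valueOf(port));

        Session session = Session.getInstance(props);
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress(from));
        message.addRecipient(Message.RecipientType.TO, new InternetAddress(to));
        message.setSubject(subject);
        message.setText(body);

        Transport.send(message); // single call shared by all alert paths
    }
}
```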
* Cover a case where resizing root disk failed; add isNotPossibleToResize method.
* remove format from resize validation
* Revert if-conditional changes that removed ImageFormat.ISO validation
* Add JUnit tests for VolumeApiServiceImpl.isNotPossibleToResize
* Fix checkstyle of test Class
* Use _templateDao.findByIdIncludingRemoved instead of _templateDao.findById
* Prevent null serviceOfferingView and Mock findByIdIncludingRemoved instead of findById
This PR fixes the problem of the chain info not being updated, or being set to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name is a UUID (this is true only for NFS storages added by CloudStack), but a datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID.
The fix is made in the flow of following API operations
migrateVirtualMachine
migrateVirtualMachineWithVolume
migrateVolume
* Fix of some UEFI-related issues
1 - fix attach/detach ISO for VMs with UEFI boot type
2 - if the OS type of an ISO is categorized as "Other", the bus type of the disk
will be set to "sata"
* Simplify the validation of OS types
This PR introduces new granularity levels for configuring VM dynamic scalability. Previously a VM was configured to be dynamically scalable based only on the template and the global setting; this option can now also be configured at the service offering and VM level.
A VM can dynamically scale only when all flags are ON at the VM level, template, service offering and global setting. If any of the flags is set to false then the VM cannot be scaled. This result is persisted in the DB for each VM and honoured for that VM until it is updated.
The 'dynamicscalingenabled' parameter, with permitted values of true or false, is introduced for the deployVirtualMachine and createServiceOffering APIs.
Following are the API parameter changes:
createServiceOffering API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
deployVirtualMachine API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
Following are the UI changes:
Service offering creation has an ON/OFF switch for dynamic scaling, enabled (true) by default
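A small sketch of the scaling rule described above: dynamic scaling is the logical AND of the VM, template, service offering and global flags.
```
// Sketch of the rule above: a VM is dynamically scalable only when every
// level agrees (VM flag, template, service offering and global setting).
public class DynamicScaling {

    static boolean isDynamicallyScalable(boolean vmFlag, boolean templateFlag,
                                         boolean offeringFlag, boolean globalSetting) {
        // one false anywhere disables dynamic scaling for the VM
        return vmFlag && templateFlag && offeringFlag && globalSetting;
    }

    public static void main(String[] args) {
        System.out.println(isDynamicallyScalable(true, true, true, true));  // true
        System.out.println(isDynamicallyScalable(true, false, true, true)); // false
    }
}
```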
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and despite the VM having no backups present, backup usage is generated. This PR prevents usage record generation when there are no backups present for a VM that has a backup offering associated with it. This is done by ensuring that the usage event for backups is generated only when the backup size is > 0.
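A minimal sketch of that guard (illustrative types, not the actual usage code): a backup usage event is generated only when the backup size is greater than zero.
```
// Sketch of the guard above: only emit a backup usage event when the VM
// actually has backups, i.e. the backup size is greater than zero.
public class BackupUsage {

    static boolean shouldGenerateUsage(Long backupSizeBytes) {
        return backupSizeBytes != null && backupSizeBytes > 0;
    }

    public static void main(String[] args) {
        System.out.println(shouldGenerateUsage(0L));               // false - no usage record
        System.out.println(shouldGenerateUsage(10_737_418_240L));  // true
    }
}
```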
* server: fixes NPE on empty vmware.root.disk.controller config
When the global config vmware.root.disk.controller is set to empty and a template is registered as deploy-as-is, the server throws an NPE while deploying a VM. This change fixes the problem by using the config's default value in this case.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* use StringUtils utility
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
* fix indentation
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
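A small sketch of the fallback described above, using the StringUtils utility mentioned in these commits; the default value shown is only an example.
```
import org.apache.commons.lang3.StringUtils;

// Sketch of the fix above: when the vmware.root.disk.controller setting is
// blank, fall back to a default controller instead of hitting an NPE.
public class RootDiskController {

    static String resolveController(String configured, String defaultController) {
        return StringUtils.isBlank(configured) ? defaultController : configured;
    }

    public static void main(String[] args) {
        System.out.println(resolveController("", "osdefault"));     // osdefault (example default)
        System.out.println(resolveController("scsi", "osdefault")); // scsi
    }
}
```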
Fixes: #4972
This PR sets system VMs' agent state to Disconnected when they are stopped. Currently, when a system VM (Console Proxy VM / Secondary Storage VM) is stopped, the agent state still appears as 'Up'.
* server: destroy ssvm, cpvm on last host maintenance
When the single or last Up host enters maintenance, just stopping the SSVM and CPVM will leave VMs behind on the hypervisor side. As these system VMs will be recreated, they can be destroyed instead.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set to true only for delete commands for the SSVM and CPVM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
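An illustrative sketch of the flag described above (not the actual Command class): the agent accepts a command on a host in maintenance only when the command explicitly bypasses the maintenance check, e.g. the destroy commands for the SSVM and CPVM.
```
// Illustrative dispatch check for a command-level maintenance bypass flag.
public class MaintenanceAwareDispatch {

    static class Cmd {
        private boolean bypassHostMaintenance;

        boolean isBypassHostMaintenance() {
            return bypassHostMaintenance;
        }

        void setBypassHostMaintenance(boolean bypass) {
            this.bypassHostMaintenance = bypass;
        }
    }

    static boolean canExecute(Cmd cmd, boolean hostInMaintenance) {
        // commands run normally; during maintenance only bypassing commands run
        return !hostInMaintenance || cmd.isBypassHostMaintenance();
    }
}
```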
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
The VM should be stopped with cleanup before calling expunge, otherwise the server may throw an error when the host is in the PrepareForMaintenance state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* forceha: fix vm not being started if it is powered off from inside
steps to reproduce the issue
(1) make sure force.ha is true in global setting. if not, change it to true, and restart mgt server
(2) create a service offering , ha is not enabled
(3) create a vm
(4) log into the vm, and power off via cli.
expected result: vm is started again by cloudstack
actual result: vm is not started.
* forceha: fix vms are still running if host is force-removed
a host can be force-removed; however, the vms are then stopped in cloudstack but not stopped on the host
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
"success": true
}
```
after some minutes, vms are still running on the host
```
root@mgt01:~# ssh node63 virsh list
Id Name State
---------------------------
1 i-2-19-VM running
2 i-2-11-VM running
```
The error messages are:
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
Datastore cluster support as primary storage already exists. However, changes to the datastore cluster at vCenter, like addition/removal of a datastore, are not synchronised with CloudStack directly; it requires removing the primary storage from CloudStack and adding it again.
Here, synchronisation of the datastore cluster is fixed without the need to remove and re-add it.
1. A new API, syncStoragePool, is introduced which takes the datastore cluster storage pool UUID as a parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, then the management server will create a new child storage pool in the database under the datastore cluster. If the new child storage pool is already added as an individual storage pool then the existing storage pool entry will be converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to have been removed on vCenter, then the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing a child datastore.
This NPE may happen when a VM is marked removed in the DB but its
nics on a shared network are not. This can usually happen due to a failed
VM expunge or when an admin manually marks a VM as removed in the DB but
does not clean up the nics/network resources.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives an option of separating the first right subnet from the rest, and kicking out individual statements for each right subnet for better cross-compatibility.
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This PR addresses the issue raised at #4545 (Fail to change Service offering from local <> shared storage).
When upgrading a VM service offering it is validated that the new offering has the same storage scope (local or shared) as the current offering. I think the validation makes sense as a way of preventing ROOT disks from running with an offering that does not match the current storage pool. However, the validation only compares both offerings and does not consider that it is possible to migrate volumes between local <> shared storage pools.
The idea behind this implementation is that CloudStack should check the scope of the current storage pool to which the ROOT volume is allocated; this way, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for that pool.
This PR also fixes an issue where the API command that lists offerings for a VM should follow the same idea and list based on the storage pool to which the volume is allocated, not on the previous offering.
Fixes: #4545
This PR makes sure no orphaned snapshot details are considered in the cleanup at startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit complex.
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Fixes: #4808, #4941
This PR adds a force flag to the attachIso / detachIso commands, especially for VMware. When trying to detach an ISO, or attach an ISO while another is already present, the operation fails because, on the ACS end, we either answer the question returned by ESXi for the CD-ROM disconnect operation with No (for the detach operation) or do not answer the question at all (for the attach operation).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Volume can either have an associated disk offering (for DATA disks & ROOT disks for VMs created from ISO) or a compute/service offering (for ROOT disks of VMs created from templates).
This fix simplifies and corrects the check so it returns the appropriate response keys in these cases.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes CLOUDSTACK-10434. Some APIs lack an access check; they are listed in the table below, together with a patch that adds the access check for each API. Anyone could change this table: if you think an API does not need an access check, change its label to "no".
API | Lacks access check?
VolumeApiServiceImpl#updateVolume | yes
VolumeApiServiceImpl#detachVolumeViaDestroyVM | yes
VolumeApiServiceImpl#takeSnapshot | yes
VolumeApiServiceImpl#migrateVolume | yes
AccountManagerImpl#createApiKeyAndSecretKey | yes
LoadBalancingRulesManagerImpl#applyLBStickinessPolicy | yes
LoadBalancingRulesManagerImpl#applyLBHealthCheckPolicy | yes
TemplateManagerImpl#createPrivateTemplate | yes
SnapshotManagerImpl#updateSnapshotPolicy |
Co-authored-by: lujie <lujie@foxmail.com>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
A VM can have multiple ROOT volumes and some can be on a zone-wide store, therefore iterate over all of them until a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes a regression introduced in 71c5dbcf49
which would cause the capacity bytes of certain pools, such as SolidFire,
to be updated even though they shouldn't be updated by StatsCollector.
Fixes#4911
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR fixes: #4462
Problem Statement:
In case of VMware, when a VM with multiple data disks is destroyed (without expunge) and the VM is then recovered, the previous data disks are not attached to the VM as they were before the destroy; only the root disk is attached.
Root cause:
All data disks were removed as part of VM destroy. Only the volumes which are selected to delete (while destroying VM) are supposed to be detached and destroyed.
Solution:
During VM destroy, detach and destroy only volumes which are selected during VM destroy. Detach the other volumes during expunge of VM.
If the VM details contain rootdisksize, the volume entry in the DB should reflect the correct size when a VM reset is performed.
Fixes #3957
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
When calling the listUsageRecords API, records per domain are fetched recursively. This is not the case if you specify a domain id.
This PR adds a new parameter (isRecursive) to enable fetching records recursively when passing the domain id.
Fixes #4517
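A minimal sketch of the recursive behaviour (illustrative, not the actual DAO or API code): with isRecursive, the given domain and all of its descendants are included when fetching usage records.
```
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: when isRecursive is passed with a domain id, collect the domain and
// all of its child domains so usage records are fetched for the whole subtree.
public class RecursiveDomainLookup {

    static List<Long> domainsToQuery(long domainId, boolean isRecursive,
                                     Map<Long, List<Long>> childrenByParent) {
        List<Long> result = new ArrayList<>();
        result.add(domainId);
        if (isRecursive) {
            for (long child : childrenByParent.getOrDefault(domainId, List.of())) {
                result.addAll(domainsToQuery(child, true, childrenByParent));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Long, List<Long>> tree = Map.of(1L, List.of(2L, 3L), 3L, List.of(4L));
        System.out.println(domainsToQuery(1L, false, tree)); // [1]
        System.out.println(domainsToQuery(1L, true, tree));  // [1, 2, 3, 4]
    }
}
```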
Adds capacity checks for RandomAllocator (host allocator)
Factors out host cpu capability and capacity check wrt serviceoffering code into CapacityManager.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes #4244:
- deploying of VMs from ISOs and from templates with UEFI boot type
- deploying of VMs from ISOs and from templates with UEFI boot type with
volumes in RAW format
This PR aims at introducing persistence mode in L2 networks and enhancing the behavior in Isolated networks
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This contains 3 main changes
(1) add NETWORK_STATS_ethX for all nics with public ips in VPC VRs (current: NETWORK_STATS_eth1)
(2) DO NOT create records in user_statistics for each VPC tier (only one record per public nic per VPC VR)
(3) send NetworkUsageCommand before unplugging a NIC with public IPs from VPC VR
Public IP addresses dedicated to one domain should not be accessible
by other domains. Also, the root admin should be able to display all
public IP addresses in the system.
Currently the following issues exist:
1. A public IP address assigned to one domain can be accessed by
other sibling domains
2. If use.system.public.ip is false, child domains should not
see the public IPs of the ROOT domain
Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 59,
"publicipaddress": [
```
After fix
```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 10,
```
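An illustrative sketch of the visibility rule described above (not the actual query builder): dedicated public IPs are visible to their own domain and to root admins, and ROOT-domain system IPs are visible to child domains only when use.system.public.ip allows it.
```
// Sketch of the public IP visibility rule described above.
public class PublicIpVisibility {

    static boolean canSee(long ipDomainId, long callerDomainId, boolean callerIsRootAdmin,
                          boolean ipBelongsToRootDomain, boolean useSystemPublicIp) {
        if (callerIsRootAdmin) {
            return true;                       // root admin sees all public IPs
        }
        if (ipDomainId == callerDomainId) {
            return true;                       // IP dedicated to the caller's own domain
        }
        // ROOT-domain (system) IPs are visible to child domains only when allowed
        return ipBelongsToRootDomain && useSystemPublicIp;
    }
}
```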
Fixes https://github.com/apache/cloudstack/issues/4566
Sets `memoryintfreekbs` to zero if it is greater than `memorykbs`. Caused by KVM returning the RSS memory of the process running the VM rather than the free memory inside the VM.
Co-authored-by: dahn <daan.hoogland@gmail.com>
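A small sketch of that clamp: an impossible free-memory value (larger than the total) is zeroed out.
```
// Sketch of the clamp above: KVM may report the RSS of the VM process instead
// of the guest's free memory, so an impossible value is reset to zero.
public class MemoryStatsSanitizer {

    static long sanitizeFreeMemoryKbs(long memoryIntFreeKbs, long memoryKbs) {
        return memoryIntFreeKbs > memoryKbs ? 0L : memoryIntFreeKbs;
    }

    public static void main(String[] args) {
        System.out.println(sanitizeFreeMemoryKbs(9_000_000L, 4_194_304L)); // 0
        System.out.println(sanitizeFreeMemoryKbs(1_048_576L, 4_194_304L)); // 1048576
    }
}
```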
This PR fixes the issue pertaining to volume resize on VMware for deploy-as-is templates. VMware deploy-as-is templates are those that are deployed as per the specification in the imported OVF; hence an overridden root disk size will not be honoured for such templates. Moreover, when we deploy VMs in stopped state and resize the volume, the root disk doesn't get resized but the volume size is merely updated in the DB.
This PR also includes the following (for deploy as-is templates):
- Disables overriding root disk size during VM deployment on the UI
- Disables selection of compute offerings with root disk size specified, at the time of deployment
- Provides users with the option to deploy a VM in stopped state via the UI (so as to give users an option to resize the volumes before starting the VM)
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* Update vm_template table removed field when template is deleted
* Update method name
* address comment
* Extracted code to separate methods
* Address test failure
* refactor test cleanup
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
* vpc: dnsmasq is not started if use.external.dns is true
* Revert "vpc: dnsmasq is not started if use.external.dns is true"
This reverts commit ee58fe0787.
* #4806 vpc: fix zone dns1/dns2 are missing in vpc VR when restart vpc or vpc VR
* Fix NPE when cloudstack agent fails to connect to mgt server
If the `ramOvercommitRatio` field is missing in the user_vm_details table,
the agent throws an NPE after restarting.
This is because user_vm_details has a 'cpuOvercommitRatio' entry for all
vms, but for some vms the 'ramOvercommitRatio' field is missing from the table.
* code feedback
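A minimal sketch of the defensive handling for the NPE fix above, assuming 1.0 as the fallback ratio: parse the detail when present, otherwise use a default instead of dereferencing null.
```
// Sketch: fall back to a default overcommit ratio when the detail is absent.
public class OvercommitDetails {

    static double ratioOrDefault(String detailValue) {
        return detailValue == null ? 1.0 : Double.parseDouble(detailValue); // 1.0 is an assumed default
    }

    public static void main(String[] args) {
        System.out.println(ratioOrDefault(null));  // 1.0
        System.out.println(ratioOrDefault("2.0")); // 2.0
    }
}
```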
* server: delete template on storage over capacity threshold
While deleting a template for a specific zone, the check should be done only for writable secondary storages and not for storages within the available capacity threshold.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix for ISOs and refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove writable store check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix exception message
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This fixes the ostype ID returned in listUsageRecords API response to
be uuid instead of internal DB ID and also returns the os category ID
(uuid) and name.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* novnc: Add client IP check for novnc console in cloudstack 4.16
* novnc ip check : Fix restart CPVM or mgt server does not update novnc param
* novnc ip check: move to method
* Updated libvirt's native reboot operation for VMs on KVM using an ACPI event, and added a 'forced' reboot option to stop and start the VM (using the rebootVirtualMachine API)
* Added 'forced' reboot option for System VM and Router
- New parameter 'forced' in rebootSystemVm API, to stop and then start System VM
- New parameter 'forced' in rebootRouter API, to force stop and then start Router
* Added force reboot tests for User VM, System VM and Router
* server: fix failed to remove template/iso if upload from local fails
When uploading a template/iso/volume from local fails, the install_path will not be a full file path, so removing it will fail.
```
mysql> select install_path from template_store_ref;
+--------------------------------------------------------------------+
| install_path |
+--------------------------------------------------------------------+
| template/tmpl/1/3/805f4763-248e-40ec-b79a-b868cc480d0a.qcow2 |
| template/tmpl/1/4/c7e32c9e-5e72-3726-85cf-aa5ccd84118d.qcow2 |
| template/tmpl/2/201/bc4f4f08-138a-31b8-af1a-d4450eff7982.qcow2 |
| template/tmpl/2/202 |
| template/tmpl/2/203/203-2-d47f8cde-a2a8-31e7-a826-2628ad98a6c8.iso |
| template/tmpl/2/204 |
| template/tmpl/5/205 |
| template/tmpl/2/206 |
| template/tmpl/2/207 |
| template/tmpl/2/208 |
| template/tmpl/2/209 |
| template/tmpl/2/210 |
+--------------------------------------------------------------------+
12 rows in set (0.00 sec)
mysql> select install_path from volume_store_ref;
+---------------------------------------------------------+
| install_path |
+---------------------------------------------------------+
| volumes/2/22 |
| volumes/2/19/f93face9-6521-4184-b89a-cb07f86bbae8.qcow2 |
| volumes/2/23 |
| volumes/2/24 |
+---------------------------------------------------------+
4 rows in set (0.00 sec)
```
* server: disallow removing template/iso in NotUpload and UploadInProgress state
While finding pools for volume migration, list the following compatible storages:
- all zone-wide storages of the same hypervisor.
- when the volume is attached to a VM, all storages from the same cluster as that of the VM.
- for a detached volume, all storages that belong to clusters of the same hypervisor.
Fixes #4692, Fixes #4400
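An illustrative sketch of those listing rules with simplified types (not the actual allocator code).
```
import java.util.ArrayList;
import java.util.List;

// Sketch: zone-wide pools of the same hypervisor always match; cluster-wide
// pools match the VM's cluster for attached volumes, or any cluster of the
// same hypervisor for detached volumes (vmClusterId == null).
public class MigrationPoolFilter {

    static class Pool {
        String hypervisor;
        boolean zoneWide;
        Long clusterId;

        Pool(String hypervisor, boolean zoneWide, Long clusterId) {
            this.hypervisor = hypervisor;
            this.zoneWide = zoneWide;
            this.clusterId = clusterId;
        }
    }

    static List<Pool> compatiblePools(List<Pool> all, String volumeHypervisor, Long vmClusterId) {
        List<Pool> result = new ArrayList<>();
        for (Pool pool : all) {
            if (!pool.hypervisor.equals(volumeHypervisor)) {
                continue;                                   // different hypervisor, skip
            }
            if (pool.zoneWide) {
                result.add(pool);                           // zone-wide pool of same hypervisor
            } else if (vmClusterId == null || vmClusterId.equals(pool.clusterId)) {
                result.add(pool);                           // attached: same cluster; detached: any cluster
            }
        }
        return result;
    }
}
```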
This PR aims at restoring the previous level of leniency in listing templates on stores that have been marked as deleted / removed by updating the DB.
While CloudStack doesn't allow deleting stores that have resources on them, users may mimic deletion of a store by merely updating the DB. In such a case, listing of templates is hampered by the NPE that is caused (as seen in #4606).
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore
- Check the router filesystem is writable or not, before performing health checks
=> Introduce a new test "filesystem.writable.test" to check the filesystem is writable or not
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root", which are different partitions. So, test at both locations.
=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check the filesystem is writable or not
- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)
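A small sketch of the host-cache decision implied by the two zone-level settings above (illustrative, not the actual manager code).
```
// Sketch: config drives go to the host cache when forced via
// vm.configdrive.force.host.cache.use, or when the primary pool does not
// support config drives and vm.configdrive.use.host.cache.on.unsupported.pool allows it.
public class ConfigDriveLocation {

    static boolean useHostCache(boolean forceHostCache, boolean poolSupportsConfigDrive,
                                boolean useHostCacheOnUnsupportedPool) {
        if (forceHostCache) {
            return true;
        }
        return !poolSupportsConfigDrive && useHostCacheOnUnsupportedPool;
    }

    public static void main(String[] args) {
        System.out.println(useHostCache(false, true, true));  // false - use the primary pool
        System.out.println(useHostCache(false, false, true)); // true  - fall back to host cache
    }
}
```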
* Addressed some issues for few operations on PowerFlex storage pool.
- Updated migration volume operation to sync the status and wait for migration to complete.
- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs(Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
and Minor improvements in PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot for volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
* Prevent KVM from performing volume migrations of running instances
KVM has a limitation on modifying instance definitions while they are in the running state. Therefore, it is not possible to easily change a volume's backend location.
There is a problem in the `migrateVolume` API. This API command ignores that limitation and causes an inconsistency in the database. ACS processes the migrate command, copies the volume to the destination storage, modifies the database and finishes the process successfully. However, the running backend is still using the "old" volume file.
This PR prevents KVM from performing volume migrations while KVM instances are in the running state and informs the user of an alternative API command that enables such an operation on running instances.
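A minimal sketch of the guard described above (illustrative, not the exact VolumeApiServiceImpl change); the alternative API named in the message is an assumption based on the description.
```
// Sketch: reject migrateVolume for volumes attached to a running KVM instance
// and point the user at an API that supports live migration with volumes.
public class LiveMigrationGuard {

    static void validateMigrateVolume(String hypervisorType, String vmState) {
        if ("KVM".equals(hypervisorType) && "Running".equals(vmState)) {
            throw new IllegalArgumentException(
                "KVM does not support migrating volumes of running instances; "
                + "use the migrateVirtualMachineWithVolume API instead"); // assumed alternative
        }
    }
}
```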
* Update VolumeApiServiceImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
Co-authored-by: Rohit Yadav <rohit@apache.org>
* Show network name in exception message
* Update server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
When a domain is deleted, all the settings configured under
the domain scope still exist in the domain_details table.
All the entries for the domain should be deleted as well.
- Fixes inter-cluster migration of VMs
- Allows migration of stopped VM with disks attached to different and suitable pools
- Improves inter-cluster detached volume migration
- Allows inter-cluster migration (clusters of same Pod) for system VMs, VRs on VMware
- Allows storage migration for stopped system VMs, VRs on VMware within the same Pod if the StoragePool is of cluster scope type
Linked Primate PR: https://github.com/apache/cloudstack-primate/pull/789 [Changes merged in this PR after new UI merge]
Documentation PR: https://github.com/apache/cloudstack-documentation/pull/170
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Steps to reproduce the issue:
(1) Create 10000 service offerings (via the db changes below or cloudmonkey).
```
DROP PROCEDURE IF EXISTS cloud.insert_service_offering;
DELIMITER $$
CREATE PROCEDURE cloud.insert_service_offering()
BEGIN
    DECLARE count INT DEFAULT 10000;
    SET @offeringid = (select max(id)+1 from disk_offering);
    WHILE count > 0 DO
        INSERT INTO disk_offering (id,name,uuid,display_text,disk_size,type,created) values (@offeringid,'test-offering-wei',uuid(), 'test-offering-wei',0,'Service',now());
        INSERT INTO service_offering (id,cpu,speed,ram_size) values (@offeringid, 1, 500, 256);
        SET @offeringid = @offeringid + 1;
        SET count = count - 1;
    END WHILE;
END $$
DELIMITER ;

CALL cloud.insert_service_offering();

mysql> CALL cloud.insert_service_offering();
Query OK, 0 rows affected (2 min 30.85 sec)
```
(2) Check the total time of the periodic capacity check in cloudstack.
Without this patch, it takes about 2.5 seconds (2 hosts):
```
2021-01-15 16:10:12,793 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Running Capacity Checker ...
2021-01-15 16:10:15,287 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-5d5f3b3b) (logid:f5eb68ba) Done running Capacity Checker ...
```
With this patch, it takes about 1.3 seconds (2 hosts):
```
2021-01-15 16:12:43,604 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Running Capacity Checker ...
2021-01-15 16:12:44,927 DEBUG [c.c.a.AlertManagerImpl] (CapacityChecker:ctx-a2a7f3f1) (logid:f7e0a4c5) Done running Capacity Checker ...
```
If there are 100 hosts, the total time will be reduced from 100+ seconds to around 10 seconds.
* 4.14:
server: select root disk based on user input during vm import (#4591)
kvm: Use Q35 chipset for UEFI x86_64 (#4576)
server: fix wrong error message when create isolated network without SourceNat (#4624)
server: add possibility to scale vm to current customer offerings (#4622)
server: keep networks order and ips while move a vm with multiple networks (#4602)
server: throw exception when update vm nic on L2 network (#4625)
doc: fix typo in install notes (#4633)
We can use cloudmonkey to scale a vm with a dynamic offering to the same offering but with a different cpunumber or memory.
Enable this in the UI to improve the user experience.
This PR fixes an issue when moving a vm from one account to another.
Steps to reproduce the issue
(1) create a vm with multiple shared networks (in advanced zone, or advanced zone with security groups)
(2) create another account (in same domain who can also access the shared networks)
(3) move vm to new account, with a list of networkid
expected result: the vm has nics on the networks in the same order as specified in the API request, and the nics have the same ips as before
actual result: the network order is not the same as specified and the ips are changed.
* server: fix cannot create vm if another vm with same name has been added and removed on the network
steps to reproduce the issue
(1) create vm-1 on network-1
(2) add vm-1 to network-2
(3) remove vm-1 from network-2
(4) create another vm with same name vm-1 on network-2
expected result: operation succeed
actual result: operation failed.
* #4600: add back a removed line
Update the guest OS from the OVF file after upload is completed
This PR fixes the template upload from local on VMware
Co-authored-by: dahn <daan.hoogland@gmail.com>
Co-authored-by: dahn <daan.hoogland@gmail.com>
This PR addresses an error that appears when you try to add a new host. I don't even understand why there was a cast to String in the first place. I will assume some classes send HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed to use the same type everywhere? With this fix adding a new xenserver host works fine.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Add vpcid in usage network response
Currently the vpcid is not displayed in the listUsageNetworks response.
Add the vpcid so that we can see to which vpc the network belongs
* use new function to get removed
* Display VPC name to which the network belongs to
If an isolated network is created in a VPC then display
its name along with the vpc id, which is used by the UI
* Change description
* vpc: fix ips on wrong interfaces after rebooting vpc vrs
* #4467: Rename to updateNicWithDeviceId
* CLOUDSTACK-8923 vr: Force a restart of keepalived if conntrackd is not running or configuration has changed