* Filter disk / service offerings by domain at DB level
* Search for tags in the db
* Update search to include host tags
* Differentiate between tags
* Refactor
* Fix creating volumes from snapshots without backup
When several snapshots are created only on primary storage and we try to create
a volume or a template from a snapshot, only the first operation is
successful. This is because the snapshot is backed up to secondary storage with a
wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it did not cover the functionality of creating a volume from a snapshot
that is kept only on Ceph (see the sketch below).
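A minimal sketch of the intended lookup order when sourcing a snapshot; the names below are illustrative stand-ins, not the exact CloudStack storage-engine API:
```
// Illustrative only: SnapshotSource and the lookup methods are hypothetical
// stand-ins for the factory/DAO calls involved in this fix.
SnapshotSource selectSnapshotSource(long snapshotId) {
    SnapshotSource onSecondary = findSnapshotOnSecondaryStorage(snapshotId);
    if (onSecondary != null) {
        return onSecondary; // snapshot was backed up; use the secondary-storage copy
    }
    // Snapshot exists only on primary storage (e.g. Ceph); source it from there
    // rather than triggering a backup with a broken SQL lookup.
    return findSnapshotOnPrimaryStorage(snapshotId);
}
```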
* Address review
* CLOUDSTACK-9175: [VMware DRS] Adding new host to DRS cluster does not participate in load balancing.
Summary: When a new host is added to a cluster, CloudStack doesn't create all the port groups (created by CloudStack earlier on other hosts) present in the cluster. Since the new host doesn't have all the necessary CloudStack networking port groups, it is not eligible to participate in DRS load balancing or HA.
Solution: When adding a host to the cluster in CloudStack, use the VMware API to find the list of unique port groups on a previously added host (an older host in the cluster), if one exists, and then create them on the new host, as sketched below.
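A rough sketch of that solution, with hypothetical wrapper types standing in for the vSphere SDK objects CloudStack actually uses:
```
// Illustrative sketch; HostMO and PortGroupSpec are hypothetical stand-ins.
void copyPortGroupsToNewHost(HostMO olderHost, HostMO newHost) throws Exception {
    Set<String> presentOnNewHost = new HashSet<>(newHost.listPortGroupNames());
    for (PortGroupSpec spec : olderHost.listPortGroups()) {
        // Create only the port groups the new host is missing, so it becomes
        // eligible for DRS load balancing and HA.
        if (!presentOnNewHost.contains(spec.getName())) {
            newHost.addPortGroup(spec);
        }
    }
}
```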
* Added few checks for cluster details
* Create utility to centralize byte conversions
* Add/change toString definitions
* Create Libvirt handler to ScaleVmCommand
* Enable dynamic scaling of VMs with KVM
* Move config from interface to class and rename it
As every variable declared in an interface is already final,
this move is needed to mock tests in the next commits
* Configure VM max memory and cpu cores
The values are set according to the service offering or global configs, as sketched below
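A simplified sketch of that calculation; the ratio source is an assumption for illustration (the real values come from the service offering or the global configs named in this change):
```
// Illustrative: for a dynamically scalable offering, max memory/cores are derived
// from a configured ratio; otherwise they equal the offering's fixed values.
class MaxResourceSketch {
    static long maxMemoryMiB(long offeringMemoryMiB, boolean dynamicScalable, int memoryRatio) {
        return dynamicScalable ? offeringMemoryMiB * memoryRatio : offeringMemoryMiB;
    }

    static int maxCpuCores(int offeringCores, boolean dynamicScalable, int coresRatio) {
        return dynamicScalable ? offeringCores * coresRatio : offeringCores;
    }
}
```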
* Extract dpdk configuration to a method and test it
* Extract OS desc config to a method and test it
* Extract guest resource def to a method and test it
Improve libvirt def
* Refactor LibvirtVMDef.GuestResourceDef
* Refactor ScaleVmCommand
* Improve VMInstanceVO toString()
* Refactor upgradeRunningVirtualMachine method
* Turn int variables into long on utility
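The reason for the int-to-long change, sketched in the spirit of the ByteScaleUtils utility introduced here: byte counts for common memory sizes overflow int.
```
// Keeping all byte math in long avoids int overflow: 4096 MiB is 4_294_967_296
// bytes, which is larger than Integer.MAX_VALUE (2_147_483_647).
public final class ByteScaleSketch {
    public static final long KiB = 1024L;
    public static final long MiB = KiB * 1024L;

    public static long mebibytesToBytes(long mebibytes) {
        return mebibytes * MiB;
    }

    public static long bytesToKibibytes(long bytes) {
        return bytes / KiB;
    }
}
```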
* Verify if VM is scalable on KVMGuru
* Rename some KVMGuruTest's methods
* Change vm's xml to work with max memory
* Verify if service offering is dynamic before scale
* Create methods to retrieve data from domain
* Create def to hotplug memory
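A sketch of the kind of definition such a hotplug method emits; the XML shape is the standard libvirt DIMM memory device, while the Java wrapper below is illustrative. Memory hotplug additionally requires the domain XML to declare a max memory with free slots, which is what the max-memory changes above enable.
```
// Illustrative builder for a libvirt DIMM memory device used for memory hotplug.
String memoryDimmDef(long sizeKiB, int numaNode) {
    return "<memory model='dimm'>\n"
         + "  <target>\n"
         + "    <size unit='KiB'>" + sizeKiB + "</size>\n"
         + "    <node>" + numaNode + "</node>\n"
         + "  </target>\n"
         + "</memory>";
}
```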
* Adjust the way command was scaling the VM
* Fix database persistence before executing command
* Send more info to host to improve log
* Fix var name
* Fix missing "}"
* Undo unnecessary changes
* Address review
* Fix scale validation
* Add VM prepared for dynamic scaling validation
* Refactor LibvirtScaleVmCommandWrapper and improve unit tests
* Remove duplicated method
* Add RuntimeException check
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Remove copyright from header
* Update ByteScaleUtilsTest.java
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Add SharedMountPoint to KVMs supported storage pool types
* Fix live migration to iSCSI and improve logs
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* [#4398] adapt code to handle multi tag string with commas
* [#4398] remove trailing spaces
* [#4398] add multi host tag support for ingest process
* [#4398] add test for multi tag support in offerings
* [#4398] update multitag support for DeploymentPlanningManagerImpl
encapsulate the multi-tag check from the Ingest feature and DeploymentPlanningManager into
HostDaoImpl to prevent code duplication (see the sketch after this block)
* [#4398] move logic to HostVO and add tests
* rename test method
* [#4398] Change string method to Apache's StringUtils
* [#4398] modify test for multi tag support
* adapt sql for double tags
Co-authored-by: Dirk Klahre <Dirk.Klahre@Itelligence.de>
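A self-contained sketch of the multi-tag check these commits centralize (method names are illustrative, not the exact HostVO/HostDaoImpl API):
```
import org.apache.commons.lang3.StringUtils;
import java.util.Arrays;
import java.util.List;

public class MultiTagCheckSketch {
    // An offering host tag like "tag1,tag2" matches a host only if the host
    // carries every tag in the comma-separated list.
    static boolean hostSupportsOfferingTags(List<String> hostTags, String offeringTags) {
        if (StringUtils.isBlank(offeringTags)) {
            return true; // no tag requirement
        }
        return Arrays.stream(offeringTags.split(","))
                .map(String::trim)
                .allMatch(hostTags::contains);
    }
}
```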
Fixes #4897
Some *_details tables were allowing null values for the detail value, which can cause an NPE in some cases.
mysql> SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE FROM information_schema.columns WHERE table_schema='cloud' AND table_name LIKE'%_details' AND column_name='value' AND IS_NULLABLE='YES';
+-------------------------------+-------------+---------------+
| TABLE_NAME | COLUMN_NAME | COLUMN_TYPE |
+-------------------------------+-------------+---------------+
| account_details | value | varchar(255) |
| cluster_details | value | varchar(255) |
| data_center_details | value | varchar(1024) |
| domain_details | value | varchar(255) |
| image_store_details | value | varchar(255) |
| storage_pool_details | value | varchar(255) |
| template_deploy_as_is_details | value | text |
| user_vm_deploy_as_is_details | value | text |
| user_vm_details | value | varchar(5120) |
+-------------------------------+-------------+---------------+
9 rows in set (0.00 sec)
Brings consistency to the value column of *_details tables by preventing null values.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Add new records in guest_os
* Create a procedure to insert guest_os and guest_os_hypervisor data
* Remove ';' as the last char of the procedure
* Set the right category_id on guest_os
Ubuntu 20.04 LTS - Ubuntu - Linux
Ubuntu 21.04 - Ubuntu - Linux
pfSense 2.4 - FreeBSD - Unix
OpenBSD 6.7 - Unix
OpenBSD 6.8 - Unix
AlmaLinux 8.3 - CentOS
* Fix SQL line's last character
* Add FROM with dummy table
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
* Global setting to select preferred storage pool
Currently all the volumes are allocated on storage pools
based on the capacity or the algorithm selected. Sometimes
we need to deploy all volumes of a particular account in a
specific storage pool, and in that case it's not possible.
With this change, we can specify the UUID of the preferred
storage pool so that all volumes of the account will be
deployed in that pool
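A sketch of how an allocator could honour such a setting; the sort key and lookup are illustrative assumptions, not the actual allocator code:
```
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

class PreferredPoolSorterSketch {
    static <P> void preferPool(List<P> pools, String preferredUuid, Function<P, String> uuidOf) {
        if (preferredUuid == null) {
            return; // no preference configured; keep the allocator's ordering
        }
        // Stable sort: the preferred pool moves to the front,
        // all other candidates keep the allocator's original order.
        pools.sort(Comparator.comparingInt((P p) -> preferredUuid.equals(uuidOf.apply(p)) ? 0 : 1));
    }
}
```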
* code feedback
Co-authored-by: Rakesh Venkatesh <rakeshv@apache.org>
Currently the user data field is limited to 4K/32K for GET/POST requests.
Most new browsers, and also nginx, support up to 1MB of
POST data.
Added a new global setting `vm.userdata.max.length` with a default value of
32KB, which can be increased up to 1MB.
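A sketch of the validation this setting implies (constant names and the wiring are assumptions for illustration):
```
public class UserDataLimitSketch {
    static final int MAX_HTTP_GET_LENGTH = 4 * 1024;     // GET keeps the smaller limit
    static final int HARD_UPPER_BOUND = 1024 * 1024;     // 1MB cap from the setting's description

    static void validateUserData(String base64UserData, boolean isPost, int configuredMaxLength) {
        int max = Math.min(isPost ? configuredMaxLength : MAX_HTTP_GET_LENGTH, HARD_UPPER_BOUND);
        if (base64UserData != null && base64UserData.length() > max) {
            throw new IllegalArgumentException("User data exceeds the limit of " + max + " bytes");
        }
    }
}
```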
* server: skip zone check for PERHOST iso during attachIso
Hypervisor tools ISOs - vmware-tools.iso, xs-tools.iso - are marked as PERHOST in the DB. They are active but not downloaded to the secondary storages, and hence have no template-zone entry.
Skip the template-zone check for such templates.
Fixes #5265
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
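A sketch of the skipped check (the type string and helper are illustrative stand-ins):
```
// PERHOST hypervisor-tools ISOs live on the hosts themselves and are never
// downloaded to secondary storage, so no template-zone entry exists by design.
void checkIsoAvailableInZone(long isoId, String templateType, long zoneId) {
    if ("PERHOST".equals(templateType)) {
        return; // skip the template-zone check for hypervisor tools ISOs
    }
    if (!templateZoneEntryExists(isoId, zoneId)) { // hypothetical lookup
        throw new IllegalStateException("ISO " + isoId + " is not available in zone " + zoneId);
    }
}
```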
* inverted check
* use constants in TemplateManager
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Added disk provisioning type support for VMWare
* Review changes
* Fixed unit test
* Review changes
* Added missing licenses
* Review changes
* Update StoragePoolInfo.java
Removed white space
* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id
* Delete __init__.py
* Merge fix
* Fixed failing test
* Added comment about parameters
* Added error log when update fails
* Added exception when using API
* Ordering storage pool selection to prefer thick disk capable pools if available
* Removed unused parameter
* Reordering changes
* Returning storage pool details after update
* Removed multiple pool update, updated marvin test, removed duplicate enum
* Removed comment
* Removed unused import
* Removed for loop
* Added missing return statements for failed checks
* Class name change
* Null pointer
* Added more info when a deployment fails
* Null pointer
* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Small bug fix on API response and added missing bracket
* Removed datastore cluster code
* Removed unused imports, added missing signature
* Removed duplicate config key
* Revert "Added more info when a deployment fails"
This reverts commit 2486db78dc.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* Externalize secondary storage capacity threshold
* Use default value as threshold when config value is lower than 0.0
* Move config to CapacityManager
* Validate config in CapacityManagerImpl
* Use config in StorageOrchestrator
* Change config description
* Remove unused import
Co-authored-by: Daniel Augusto Veronezi Salvador <daniel@scclouds.com.br>
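A sketch of the validation described above; the default value shown is an assumption for illustration:
```
class CapacityThresholdSketch {
    static final double ASSUMED_DEFAULT_THRESHOLD = 0.90; // illustrative, not the shipped default

    // Values below 0.0 are invalid and fall back to the default instead of
    // disabling the secondary storage capacity check.
    static double effectiveThreshold(Double configured) {
        if (configured == null || configured < 0.0) {
            return ASSUMED_DEFAULT_THRESHOLD;
        }
        return configured;
    }
}
```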
* Enhance log messages with hostName
* Use host.toString() on most of host logs.
* Remove redundant "Host" in logs and enhance logs
* duplicated "for"
* Adopt String.format, and enhance code
* Address reviews enhancing log messages
Update server/src/main/java/com/cloud/resource/ResourceManagerImpl.java
-- server/src/main/java/com/cloud/vm/UserVmManagerImpl.java
-- server/src/main/java/com/cloud/resource/RollingMaintenanceManagerImpl.java
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
* Fix String.format issue and change log message from debug to warn
* Fix checkstyle issue
* Fix string.format log
* Address review: enhance logs
* Enhance log of hosts in maintenance avoid list
* Remove "VM" on logs as vm.toString() already appends VM-<details>
* Add more details of the VM when postStateTransitionEvent
* Address reviewer and enhance VMInstanceVO.toString()
Co-authored-by: Daniel Augusto Veronezi Salvador <38945620+GutoVeronezi@users.noreply.github.com>
- Added connection manager to the gateway client.
- Renew the client session on '401 Unauthorized' response.
- Refactored the gateway client calls, for GET and POST methods.
- Consume the http entity content after login/(re)authentication and close the content stream if it exists.
- Updated storage pool client connection timeout configuration 'storage.pool.client.timeout' to non-dynamic.
- Added storage pool client max connections configuration 'storage.pool.client.max.connections' (default: 100) to specify the maximum connections for the ScaleIO storage pool client.
- Updated unit tests.
Also blocked the attach volume operation for uploaded volumes on ScaleIO/PowerFlex storage pools.
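A sketch of the two ideas in this change, not the actual gateway client code: a pooled connection manager sized by `storage.pool.client.max.connections`, and a single re-authentication retry when the gateway answers 401 Unauthorized.
```
import org.apache.http.HttpStatus;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class GatewayClientSketch {
    private final CloseableHttpClient httpClient;

    public GatewayClientSketch(int maxConnections) {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(maxConnections);            // default 100 per the new setting
        cm.setDefaultMaxPerRoute(maxConnections);
        this.httpClient = HttpClients.custom().setConnectionManager(cm).build();
    }

    CloseableHttpResponse getWithReauth(String url) throws Exception {
        CloseableHttpResponse response = httpClient.execute(new HttpGet(url));
        if (response.getStatusLine().getStatusCode() == HttpStatus.SC_UNAUTHORIZED) {
            EntityUtils.consumeQuietly(response.getEntity()); // release the pooled connection
            response.close();
            renewSession();                                   // re-login, then retry once
            response = httpClient.execute(new HttpGet(url));
        }
        return response;
    }

    private void renewSession() { /* POST to the login endpoint and cache the token */ }
}
```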
This PR fixes the problem of the chain info not being updated, or being set to null, after volume migrations.
Problem: While fetching the volume chain info, the management server assumes the datastore name to be a UUID (this is true only for NFS storages added by CloudStack), but a datastore can have any name.
Solution: To fetch the volume chain info, use the datastore name instead of the UUID.
The fix is made in the flow of the following API operations:
- migrateVirtualMachine
- migrateVirtualMachineWithVolume
- migrateVolume
This PR introduces new granularity levels to configure VM dynamic scalability. Previously a VM was configured to be dynamically scalable based on the template and the global setting. Now we bring this option to the service offering and VM level as well.
A VM can dynamically scale only when all flags are ON at the VM, template, service offering and global setting levels. If any of the flags is set to false then the VM cannot be scaled. This result is persisted in the DB for each VM and honoured for that VM until it is updated.
We are introducing the 'dynamicscalingenabled' parameter with permitted values of true or false for the deployVirtualMachine and createServiceOffering APIs.
Following are the API parameter changes:
createServiceOffering API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
deployVirtualMachine API:
dynamicscalingenabled: an optional parameter of type Boolean with default value "true".
Following are the UI changes:
Service offering creation has an ON/OFF switch for dynamic scaling, enabled with default value true
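The effective check reduces to a conjunction of all four flags; a trivial sketch:
```
// A VM can dynamically scale only when every level agrees; the computed result
// is what gets persisted per VM.
static boolean dynamicScalingAllowed(boolean globalSetting, boolean templateFlag,
                                     boolean offeringFlag, boolean vmFlag) {
    return globalSetting && templateFlag && offeringFlag && vmFlag;
}
```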
Inclusivity changes for CloudStack
- Change default git branch name from 'master' to 'main' (post renaming/changing default git branch to 'main' in git repo)
- Rename some offensive words/terms as appropriate for inclusiveness.
This PR updates the default git branch to 'main', as part of #4887.
Signed-off-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes: #4990
When a VM associated with a backup offering is destroyed/expunged, the backup offering isn't unassigned, and despite the VM having no backups present, backup usage is generated. This PR prevents usage record generation when there are no backups present for a VM with a backup offering associated to it. This is done by ensuring that the usage event for backups is generated only when the backup size > 0
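A minimal sketch of that guard (names hypothetical):
```
// Generate a backup usage record only when the VM actually has backed-up data.
static void maybeEmitBackupUsage(Long backupSizeBytes, Runnable emitUsageEvent) {
    if (backupSizeBytes != null && backupSizeBytes > 0) {
        emitUsageEvent.run();
    }
}
```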
This PR fixes #5047, which can be reproduced on Zones with _(I) Advanced Networks, (II) Security Groups enabled for the Zone, (III) network offering without Security Groups_; for instance, `DefaultSharedNetworkOffering`, which does not list Security Group as a supported service.
The issue is due to the following code inside the method `VirtualMachineManagerImpl.orchestrateReboot`:
[VirtualMachineManagerImpl.java#L3340](280c13a4bb/engine/orchestration/src/main/java/com/cloud/vm/VirtualMachineManagerImpl.java (L3340)).
```
final Answer rebootAnswer = cmds.getAnswer(RebootAnswer.class);
if (rebootAnswer != null && rebootAnswer.getResult()) {
    if (dc.isSecurityGroupEnabled() && vm.getType() == VirtualMachine.Type.User) {
        List<Long> affectedVms = new ArrayList<Long>();
        affectedVms.add(vm.getId());
        _securityGroupManager.scheduleRulesetUpdateToHosts(affectedVms, true, null);
    }
    return;
}
```
Fixes: #4972
This PR sets system VMs' agent state to Disconnected when they are stopped. Currently, when a system VM (Console Proxy VM / Secondary Storage VM) is stopped, the agent state still appears to be 'Up'
* server: destroy ssvm, cpvm on last host maintenance
When a single or last UP host enters maintenance, just stopping the SSVM and CPVM will leave behind VMs on the hypervisor side. As these system VMs will be recreated, they can be destroyed.
Fixes #3719
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix methods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* immediately destroy systemvms
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix destroy
Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set to true only for delete commands for the SSVM and CPVM.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* unit test fix
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix missing return statement
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix
The VM should be stopped with cleanup before calling expunge, else the server may throw an error with the host in the PrepareForMaintenance state.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* rename
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* forceha: fix vm not started if it is powered off from inside
Steps to reproduce the issue:
(1) Make sure force.ha is true in the global settings; if not, change it to true and restart the management server.
(2) Create a service offering with HA not enabled.
(3) Create a VM.
(4) Log into the VM and power it off via the CLI.
Expected result: the VM is started again by CloudStack.
Actual result: the VM is not started.
* forceha: fix vms still running if host is force-removed
A host can be force-removed; the VMs are then stopped in CloudStack, but not stopped on the host:
```
(localcloud) 🐱 > delete host id="a5625393-444d-4d0a-b31d-62baf88a8be1" forced=true
{
"success": true
}
```
After some minutes, VMs are still running on the host:
```
root@mgt01:~# ssh node63 virsh list
Id Name State
---------------------------
1 i-2-19-VM running
2 i-2-11-VM running
```
The error messages are:
```
Cannot transmit host 2 to Enabled state
com.cloud.utils.fsm.NoTransitionException: No next resource state found for current state = Enabled event = DeleteHost
at com.cloud.resource.ResourceManagerImpl.resourceStateTransitTo(ResourceManagerImpl.java:1216)
at com.cloud.resource.ResourceManagerImpl$1.doInTransactionWithoutResult(ResourceManagerImpl.java:907)
```
* forceha: Make ForceHA dynamic
Support for a datastore cluster as primary storage is already there, but any changes to the datastore cluster at vCenter, like the addition/removal of a datastore, are not synchronised with CloudStack directly. That required removing the primary storage from CloudStack and adding it again.
Here, synchronisation of the datastore cluster is fixed without the need to remove or re-add it.
1. A new API is introduced, syncStoragePool, which takes the datastore cluster storage pool UUID as the parameter. This API checks if there are any changes in the datastore cluster and updates the management server accordingly.
2. During synchronisation, if a new child datastore is found in the datastore cluster, then the management server will create a new child storage pool in the database under the datastore cluster. If the new child storage pool is already added as an individual storage pool, then the existing storage pool entry will be converted to a child storage pool (instead of creating a new storage pool entry).
3. During synchronisation, if an existing child datastore in CloudStack is found to be removed on vCenter, then the management server removes that child datastore from the datastore cluster and makes it an individual storage pool.
The above behaviour is on par with the vCenter behaviour when adding and removing child datastore.
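A sketch of the reconciliation syncStoragePool performs, per the three points above (helper names are illustrative):
```
// Compare vCenter's view of the datastore cluster with CloudStack's records.
void syncDatastoreCluster(Set<String> vcenterChildren, Set<String> cloudstackChildren) {
    for (String child : vcenterChildren) {
        if (!cloudstackChildren.contains(child)) {
            addOrConvertChildStoragePool(child);   // point 2: new child datastore found
        }
    }
    for (String child : cloudstackChildren) {
        if (!vcenterChildren.contains(child)) {
            detachToIndividualStoragePool(child);  // point 3: child removed on vCenter
        }
    }
}
```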
IKE version allows selecting ike (autoselect), ikev1, or ikev2.
Split connections gives an option of separating the first right subnet from the rest, generating individual statements for each right subnet for better cross-compatibility.
Backported from PR: #4137
update per PR suggestion
Fixes #3138
Co-authored-by: Greg Goodrich <ggoodrich@ippathways.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This PR addresses the issue raised in #4545 (fail to change service offering from local <> shared storage).
When upgrading a VM service offering, it is validated that the new offering has the same storage scope (local or shared) as the current offering. I think that the validation makes sense as a way of preventing running root disks with an offering that does not match the current storage pool. However, the validation only compares both offerings and does not consider that it is possible to migrate volumes between local <> shared storage pools.
The idea behind this implementation is that CloudStack should check the scope of the current storage pool in which the ROOT volume is allocated; thus, it is possible to migrate the volume between storage pools and list/upgrade according to the offerings that are supported for such a pool.
This PR also fixes an issue where the API command that lists offerings for a VM should follow the same idea and list based on the storage pool in which the volume is allocated, and not on the previous offering.
Fixes: #4545
The default length is 255, which caused truncation of data if
the JSON object representing the backup volumes was too big.
It caused errors when backups were made on VMs with 3 volumes
or more.
`vm_instance.backup_volumes` now has the type TEXT, which has a
maximum length of 65535 characters.
Fixes #4965
This PR makes sure no orphaned snapshot details are considered in the cleanup-at-startup job.
A real solution would be to implement some kind of cascading delete, but as the parent record is "only" marked as removed, this would be a bit complex.
Co-authored-by: Daan Hoogland <dahn@onecht.net>
* prevent other vm disks getting deleted
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: fix inter-cluster stopped vm migration
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix detached volume inter-cluster migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* cleanup unused method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* review changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changes
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* vmware: allow attached volume migration using VmwareStorageMotionStrategy
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* find vm clusterid with multiple ROOT volumes
VM can have multiple ROOT volumes and some can be on zone-wide store therefore iterate over all of them till a cluster ID is found.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix successive storage migration
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* fix intercluster check
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* refactor vm cluster, host method
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* remove inter-pod check
Added by mistake, VMware won't have pods
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* address review comment
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
When invoking the migrateVirtualMachineWithVolume API call and a strategy isn't found, the volumes are left in the Migrating state.
This PR puts the volumes back into the Ready state.
This PR fixes: #4462
Problem Statement:
In the case of VMware, when a VM having multiple data disks is destroyed (without expunge) and we try to recover the VM, the previous data disks are not attached to the VM as they were before the destroy; only the root disk is attached to the VM.
Root cause:
All data disks were removed as part of the VM destroy. Only the volumes which are selected for deletion (while destroying the VM) are supposed to be detached and destroyed.
Solution:
During VM destroy, detach and destroy only volumes which are selected during VM destroy. Detach the other volumes during expunge of VM.
Fixes #4201
This PR addresses the issue of a VM snapshot being indefinitely stuck in the Expunging state in case deletion fails.
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Fixes #4838
For inter-cluster migration without shared storage, VMware needs a host to be specified. Fix is to specify an appropriate host in the target cluster during a stopped VM migration. Also, find target datastore using the host in the target cluster.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Adds new/missing guest os mappings for XCP-ng/Xenserver 8.1
Copy guest OS mappings from XCP-ng/Xenserver 8.1 for XCP-ng/Xenserver 8.2
Adds Ubuntu 20.04 guest os mapping for XCP-ng/Xenserver 8.2
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes#4517
Adds capacity checks for RandomAllocator (host allocator)
Factors out the host CPU capability and capacity check (with respect to the service offering) code into CapacityManager.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
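A sketch of the added filtering; the predicate stands in for the CapacityManager check, and the names are illustrative:
```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Predicate;

class RandomAllocatorSketch {
    // Keep only hosts that pass the CPU capability and capacity check,
    // then pick randomly among them.
    static <H> List<H> eligibleHosts(List<H> candidates, Predicate<H> hasCpuCapabilityAndCapacity) {
        List<H> eligible = new ArrayList<>();
        for (H host : candidates) {
            if (hasCpuCapabilityAndCapacity.test(host)) {
                eligible.add(host);
            }
        }
        Collections.shuffle(eligible);
        return eligible;
    }
}
```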
This PR aims at introducing persistence mode in L2 networks and enhancing the behavior in Isolated networks
Doc PR apache/cloudstack-documentation#183
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This contains 3 main changes
(1) add NETWORK_STATS_ethX for all nics with public ips in VPC VRs (current: NETWORK_STATS_eth1)
(2) DO NOT create records in user_statistics for each VPC tier (only one record per public nic per VPC VR)
(3) send NetworkUsageCommand before unplugging a NIC with public IPs from VPC VR
Public IP addresses dedicated to one domain should not be accessible
by other domains. Also, the root admin should be able to display all
public IP addresses in the system.
Currently the following issues exist:
1. A public IP address assigned to one domain can be accessed by
other sibling domains.
2. If use.system.public.ip is false, then child domains should not
see the public IPs of the ROOT domain.
Before fix
```
(test1) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 59,
"publicipaddress": [
```
After fix
```
(test) mgt01 > list publicipaddresses listall=true fordisplay=true allocatedonly=false forvirtualnetwork=true filter=ipaddress,
{
"count": 10,
```
* server: create DB entry for storage pool capacity when create storage pool
* Revert "server: create DB entry for storage pool capacity when create storage pool"
This reverts commit e790167bfe.
* server: create DB entry for storage pool capacity when create zone-wide storage pools