Fixes #9331
Only those VMs that have a NIC entry not marked as removed should be
considered network VMs.
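A plain-Java sketch of this rule, with hypothetical record types (the actual fix lives in the NIC DAO query, not in code like this):
```
import java.util.List;

// Hypothetical, simplified model of a NIC row; "removed" mirrors the
// soft-delete column in the nics table.
record Nic(long vmId, long networkId, boolean removed) {}

class NetworkVmCheck {
    // A VM counts as a network VM only if at least one of its NIC entries
    // on the network is not marked removed.
    static boolean isNetworkVm(long vmId, long networkId, List<Nic> nics) {
        return nics.stream().anyMatch(n ->
                n.vmId() == vmId && n.networkId() == networkId && !n.removed());
    }
}
```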
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* New feature: Change storage pool scope
* Added checks for Ceph/RBD
* Update op_host_capacity table on primary storage scope change
* Storage pool scope change integration test
* Pull 8875: Addressed review comments
* Pull 8875: remove storage checks, AbstractPrimayStorageLifeCycleImpl class
* Pull 8875: Fixed integration test failure
* Pull 8875: Review comments
* Pull 8875: review comments + broke changeStoragePoolScope into smaller functions
* Added UT for changeStoragePoolScope
* Rename AbstractPrimaryDataStoreLifeCycleImpl to BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Dao review comments
* Pull 8875: Rename changeStoragePoolScope.vue to ChangeStoragePoolScope.vue
* Pull 8875: Created a new smokes test file + A single warning msg in ui
* Pull 8875: Added cleanup in test_primary_storage_scope.py
* Pull 8875: Typo in en.json
* Pull 8875: cleanup array in test_primary_storage_scope.py
* Pull 8875: Remove extra whitespace at EOF of StorageManagerImplTest
* Pull 8875: Added UT for PrimaryDataStoreHelper and BasePrimaryDataStoreLifeCycleImpl
* Pull 8875: Added license header
* Pull 8875: Fixed SQL query for VM states
* Pull 8875: Changed icon plus info on disabled mode in apidoc
* Pull 8875: Change scope should not work for local storage
* Pull 8875: Change scope completion event
* Pull 8875: Added api findAffectedVmsForStorageScopeChange
* Pull 8875: Added UT for findAffectedVmsForStorageScopeChange and removed listByPoolIdVMStatesNotInCluster
* Pull 8875: Review comments + Vm name in response
* Pull 8875: listByVmsNotInClusterUsingPool was returning duplicate VM entries because of multiple volumes in the VM satisfying the criteria
* Pull 8875: fixed listAffectedVmsForStorageScopeChange UT
* listAffectedVmsForStorageScopeChange should work if the pool is not disabled
* Fix listAffectedVmsForStorageScopeChangeTest UT
* Pull 8875: add volume.removed not null check in VmsNotInClusterUsingPool query
* Pull 8875: minor refactoring in changeStoragePoolScopeToCluster
* Update server/src/main/java/com/cloud/storage/StorageManagerImpl.java
* fix eof
* changeStoragePoolScopeToZone should connect the pool to all Up hosts (see the sketch after this list)
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
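A plain-Java sketch of the zone-scope behaviour noted in the item above (hypothetical types; the real logic lives in StorageManagerImpl): when a pool's scope changes to zone, it should be connected to every Up host, and hosts in other states are skipped rather than failing the operation.
```
import java.util.List;

enum HostStatus { Up, Down, Disconnected }

record Host(long id, HostStatus status) {}

interface PoolHostConnector {
    void connect(long poolId, long hostId);
}

class ScopeChanger {
    // Connect the pool to all Up hosts in the zone; a host that is not Up
    // is skipped instead of blocking the whole scope change.
    void changeStoragePoolScopeToZone(long poolId, List<Host> zoneHosts,
                                      PoolHostConnector connector) {
        for (Host host : zoneHosts) {
            if (host.status() == HostStatus.Up) {
                connector.connect(poolId, host.id());
            }
        }
    }
}
```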
* Create/Export OVA file of the VM on the external vCenter host, to a temporary conversion location (NFS)
* Fixed OVA issue on untar/extract of OVF from OVA file
The "tar -xf" command on the OVA fails with "ovf: Not found in archive" while extracting the OVF file.
* Updated VMware to KVM instance migration using OVA
* Refactoring and cleanup
* test fixes
* Consider zone wide pools in the destination cluster for instance conversion
* Remove local storage pool support as temporary conversion location
- OVA export is not possible as the pool is not accessible outside the host; NFS pools are supported.
* cleanup unused code
* some improvements, and refactoring
* import nic unit tests
* vmware guru unit tests
* Separate clone VM and create template file for VMware migration
- Exporting the OVA (of the cloned VM) to the conversion location takes time.
- Do any validations on the cloned VM before creating the template (and fail early).
- Updated unit tests.
* Check conversion support on the host before cloning the VM / creating the template on VMware (and fail early)
* minor code improvements
* Auto select the host with instance conversion capability
* Skip instance conversion supported response param for non-KVM hosts
* Show supported conversion hosts in the UI
* Skip persistence map update if network doesn't exist
* Added support to export OVA from the KVM host, through ovftool (when installed on the KVM host)
* Updated importvm api param 'usemsforovaexport' to 'forcemstodownloadvmfiles', to be generic
* Updated hardcoded UI messages with message labels
* Updated UI to support importvm api param - forcemstodownloadvmfiles
* Improved instance conversion support checks on Ubuntu hosts, and for Windows guest VMs
* Use OVF template (VM disks and spec files) for instance conversion from VMware, instead of OVA file
- this further improves migration performance, as it reduces the time for OVA preparation / archiving of the VM files into a single file
* OVF export tool parallel threads code improvements
* Updated 'convert.vmware.instance.to.kvm.timeout' config default value to 3 hrs
* Config values check & code improvements
* Updated import log, with time taken and vm details
* Support for parallel downloads of VMware VM disk files while exporting OVF from the MS, and other changes below (see the sketch after this list).
- Skip clone for powered off VMs
- Fixes to support standalone host (with its default datacenter)
- Some code improvements
* rebase fixes
* rebase fixes
* minor improvement
* code improvements - threads configuration, and api parameter changes to import vm files
* typo fix in error msg
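A hedged sketch of the parallel-download idea from the list above (hypothetical names; the real code lives in the management server's OVF export path): each VMware disk file is downloaded on its own worker thread, bounded by a configurable thread count.
```
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class OvfDiskDownloader {
    // Download each VMware disk file on its own worker thread and wait for
    // all downloads to finish before the OVF spec is handed over.
    void downloadAll(List<String> diskFileUrls, int threads, long timeoutHours)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String url : diskFileUrls) {
            pool.submit(() -> download(url));
        }
        pool.shutdown();
        // Bound the wait so a stuck download eventually fails the export.
        if (!pool.awaitTermination(timeoutHours, TimeUnit.HOURS)) {
            pool.shutdownNow();
        }
    }

    private void download(String url) {
        // Placeholder: stream the disk file from vCenter to the conversion location.
    }
}
```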
* Veeam: find storage pool by path for PreSetup and VMFS
* Veeam: support VMware distributed virtual switch
* Veeam: sync volumes on Solidfire after backup restoration
A user faced an issue where the backup was restored but the DATA disk was gone (the ROOT disk was OK):
```
2024-05-03 12:00:32,868 ERROR [o.a.c.b.BackupManagerImpl] (API-Job-Executor-13:ctx-aa8a1d85 job-149661 ctx-73328567) (logid:6510cf06) Failed to import VM [vmInternalName: i-169-9679-VM] from backup restoration [{"backupType":"Full","externalId":"821ca400-a5da-4282-bf3f-7c7e38a6cdb4","id":257,"uuid":"69399101-5cbd-461c-8a48-f0c70eac0b24","vmId":9679}] with hypervisor [type: VMware] due to: [Couldn't find storage pool -iqn.2010-01.com.solidfire:3p53.data-9679.221-0].
```
On managed storage, the datastore name of DATA disk is determined by the iscsi_name of the volume.
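A minimal, hypothetical sketch of that lookup in plain Java: resolve the restored DATA disk's pool by matching the volume's iSCSI name instead of a path-style pool name.
```
import java.util.List;
import java.util.Optional;

record VolumeRecord(long id, String iscsiName) {}
record PoolRecord(long id, String name) {}

class ManagedPoolResolver {
    // On managed storage the datastore name carries the volume's iSCSI name
    // (see the error above), so match on that rather than on the pool path.
    Optional<PoolRecord> resolve(VolumeRecord volume, List<PoolRecord> pools) {
        return pools.stream()
                .filter(p -> p.name().contains(volume.iscsiName()))
                .findFirst();
    }
}
```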
* Veeam: set correct path for DATA disks on solidfire
* Temporarily backup StorPool volume before expunge
Sometimes users delete volumes by mistake. This enhancement provides a
way to back up the volume before it is deleted. The user will be able
to see the snapshot in the CloudStack UI/CLI and can only create a
volume from it.
A task will check (by default every 5 minutes) whether the snapshots
have been deleted from StorPool.
Global settings to enable the delayed delete option:
`storpool.delete.after.interval` - The interval (in seconds) after which the StorPool snapshot will be deleted
`storpool.list.snapshots.delete.after.interval` - The interval (in seconds) at which to fetch the StorPool snapshots with the deleteAfter flag
Minor fix when deleting snapshots
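A hedged sketch of that background task in plain Java (hypothetical names; the real plugin talks to the StorPool API): on each tick, list the snapshots flagged for delayed deletion and expunge the ones whose grace period has passed.
```
import java.time.Instant;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

record DelayedSnapshot(String name, Instant deleteAfter) {}

interface SnapshotStore {
    List<DelayedSnapshot> listWithDeleteAfter(); // snapshots kept for delayed delete
    void delete(String name);
}

class DelayedSnapshotReaper {
    // intervalSeconds plays the role of storpool.list.snapshots.delete.after.interval.
    void start(SnapshotStore store, long intervalSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            for (DelayedSnapshot snap : store.listWithDeleteAfter()) {
                if (Instant.now().isAfter(snap.deleteAfter())) {
                    store.delete(snap.name()); // grace period over, expunge for real
                }
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}
```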
* Added Apache license
* addressed comments
* list by isEncrypted
* use filter on VO and cleanup
* add encryption type to volume response
* Update api/src/main/java/org/apache/cloudstack/api/command/user/volume/ListVolumesCmd.java
Co-authored-by: Suresh Kumar Anaparti <sureshkumar.anaparti@gmail.com>
Co-authored-by: Bryan Lima <bryan.lima@hotmail.com>
Co-authored-by: SadiJr <sadi@scclouds.com.br>
Co-authored-by: Bryan Lima <42067040+BryanMLima@users.noreply.github.com>
Co-authored-by: Henrique Sato <henriquesato2003@gmail.com>
This fixes https://github.com/apache/cloudstack/issues/8595
```
2024-02-01 16:23:52,473 INFO [c.c.n.s.SecurityGroupManagerImpl] (AgentManager-Handler-16:null) (logid:) Network Group full sync for agent 1 found 3 vms out of sync
2024-02-01 16:23:52,473 DEBUG [c.c.n.s.SecurityGroupManagerImpl] (AgentManager-Handler-16:null) (logid:) Security Group Mgr v2: scheduling ruleset updates for 3 vms (unique=3), current queue size=0
2024-02-01 16:23:52,473 DEBUG [c.c.n.s.SecurityGroupManagerImpl] (AgentManager-Handler-16:null) (logid:) Security Group Mgr v2: done scheduling ruleset updates for 3 vms: num new jobs=3 num rows insert or updated=0 time taken=0
2024-02-01 16:23:52,478 ERROR [c.c.n.s.SecurityGroupManagerImpl] (SecGrp-Worker-20:ctx-0aa3885d) (logid:472b30d2) Problem during SG work com.cloud.network.security.LocalSecurityGroupWorkQueue$LocalSecurityGroupWork@5
com.cloud.utils.exception.CloudRuntimeException: DB Exception on: com.mysql.cj.jdbc.ClientPreparedStatement: SELECT SQL_CACHE security_group_vm_map.id, security_group_vm_map.security_group_id, security_group_vm_map.instance_id, nics.ip4_address, vm_instance.state, security_group.name FROM security_group_vm_map INNER JOIN nics ON security_group_vm_map.instance_id=nics.instance_id INNER JOIN vm_instance ON security_group_vm_map.instance_id=vm_instance.id INNER JOIN security_group ON security_group_vm_map.security_group_id=security_group.id WHERE security_group_vm_map.security_group_id = 3 AND vm_instance.state='Running'
at com.cloud.utils.db.GenericDaoBase.searchIncludingRemoved(GenericDaoBase.java:424)
at com.cloud.utils.db.GenericDaoBase.listIncludingRemovedBy(GenericDaoBase.java:938)
at com.cloud.utils.db.GenericDaoBase.listBy(GenericDaoBase.java:928)
at com.cloud.network.security.dao.SecurityGroupVMMapDaoImpl.listBySecurityGroup(SecurityGroupVMMapDaoImpl.java:134)
at jdk.internal.reflect.GeneratedMethodAccessor555.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:215)
at com.sun.proxy.$Proxy245.listBySecurityGroup(Unknown Source)
at com.cloud.network.security.SecurityGroupManagerImpl2.generateRulesForVM(SecurityGroupManagerImpl2.java:246)
at com.cloud.network.security.SecurityGroupManagerImpl2.sendRulesetUpdates(SecurityGroupManagerImpl2.java:177)
at com.cloud.network.security.SecurityGroupManagerImpl2.work(SecurityGroupManagerImpl2.java:157)
at com.cloud.network.security.SecurityGroupManagerImpl2$WorkerThread$1.run(SecurityGroupManagerImpl2.java:75)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at com.cloud.network.security.SecurityGroupManagerImpl2$WorkerThread.run(SecurityGroupManagerImpl2.java:72)
Caused by: java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '.id, security_group_vm_map.security_group_id, security_group_vm_map.instance_id,' at line 1
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
... 28 more
```
This PR fixes several issues found in the testing of Veeam 11 and Veeam 12:
- Import Veeam.Backup.PowerShell and silently ignore the warning messages
- Fix issue when assigning a VM to backup offerings, caused by the separator (\r\n)
- Fix authorization failure in Veeam 12a, because API version v1_4 is no longer supported in Veeam 12a
- Fix exception if the backup name has a space
- Fix backup metrics in Veeam 12, because the PowerShell command does not return the needed values
- Fix incorrect datetime value, because the PowerShell command returns a datetime that is not supported in Java
- Fix issue during backup restoration if the VM has both ROOT and DATA disks.
This PR also has the following updates:
- Add integration test test/integration/smoke/test_backup_recovery_veeam.py
- Make some UI changes
- Add zone setting backup.plugin.veeam.version. If it is not set, CloudStack will get the Veeam version via PowerShell commands.
- Add zone settings backup.plugin.veeam.task.poll.interval and backup.plugin.veeam.task.poll.max.retry (see the sketch after this list)
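A minimal sketch of the polling behaviour those two settings control (hypothetical names, plain Java):
```
import java.util.function.Supplier;

class VeeamTaskPoller {
    // Poll the Veeam task at a fixed interval (backup.plugin.veeam.task.poll.interval)
    // and give up after a bounded number of attempts (backup.plugin.veeam.task.poll.max.retry).
    boolean waitForTask(Supplier<Boolean> isTaskFinished,
                        long pollIntervalMillis, int maxRetries) throws InterruptedException {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            if (isTaskFinished.get()) {
                return true;    // task finished within the retry budget
            }
            Thread.sleep(pollIntervalMillis);
        }
        return false;           // caller treats this as a timeout
    }
}
```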
This PR fixes reordering/listing of pools when cluster details are not set, while deploying a VM / attaching a volume.
Problem:
Attaching a volume to a VM fails on infra with zone-wide pools & vm.allocation.algorithm=userdispersing, as the cluster details are not set (passed as null) while reordering / listing pools by volumes.
Solution:
Ignore cluster details when they are not set, while reordering / listing pools by volumes.
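A minimal sketch of the fix in plain Java (hypothetical types; the real change is in the pool allocation/listing path): when the cluster id is unknown, consider all pools instead of filtering with a null cluster id.
```
import java.util.List;
import java.util.stream.Collectors;

record Pool(long id, Long clusterId) {}

class PoolReorderer {
    // Zone-wide allocation may leave clusterId null; in that case skip the
    // cluster filter rather than matching nothing.
    List<Pool> filterByCluster(List<Pool> pools, Long clusterId) {
        if (clusterId == null) {
            return pools;
        }
        return pools.stream()
                .filter(p -> clusterId.equals(p.clusterId()))
                .collect(Collectors.toList());
    }
}
```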
Fixes #8412
Add support for 8.0.0.2 explicitly to prevent falling back to the parent version
Adds a log entry when hypervisor capabilities fall back to the parent version
---------
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
GuestOS mappings are retrieved from the parent hypervisor version when a minor/patch hypervisor version doesn't exist.
Fixes #8412
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
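An illustrative sketch of the fallback (hypothetical names; the parent of "8.0.0.2" being "8.0" is an assumption for the example): look up capabilities for the exact version first, fall back to the parent version if absent, and log the fallback.
```
import java.util.Map;
import java.util.Optional;

class HypervisorCapabilityLookup {
    // Prefer the exact hypervisor version; otherwise fall back to the parent
    // version (e.g. "8.0.0.2" -> "8.0") and log that the fallback happened.
    Optional<String> find(Map<String, String> capabilitiesByVersion,
                          String version, String parentVersion) {
        String exact = capabilitiesByVersion.get(version);
        if (exact != null) {
            return Optional.of(exact);
        }
        System.out.printf("No capabilities for hypervisor version %s, falling back to %s%n",
                version, parentVersion);
        return Optional.ofNullable(capabilitiesByVersion.get(parentVersion));
    }
}
```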
When a public IP gets removed from quarantine, the removal reason gets saved to the database; however, it may also be useful for operators to know who removed the public IP from quarantine. For that reason, this PR extends the public IP quarantine feature so that the account that deliberately removed an IP from quarantine also gets saved to the database.
This PR adds missing indexes on `alerts` & `events` tables.
For the alerts table, some of the queries are part of a couple of APIs and some operations; I have added indexes for those. Ref:
8f39087377/engine/schema/src/main/java/com/cloud/alert/dao/AlertDaoImpl.java (L40-L45)
For the events table, we query by `resource_id` & `resource_type` in the UI for a resource's events. Those indexes were missing, so I have added them.
Sometimes users need to move resources between domains. For example, in a big company, a department may be moved from one part of the company to another, changing the company's department hierarchy; the easiest way to reflect this change in the company's cloud environment would be to move subdomains between domains, but currently ACS offers no option to do that.
This PR adds the moveDomain API, which moves a domain under a new parent domain. Furthermore, if the domain being moved has any subdomains, those will also be moved, maintaining the current subdomain tree.
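Illustrative only, assuming CloudStack's path-per-domain model (a `path` column like `/ROOT/a/dept/`): moving a domain re-parents it and rewrites the path prefix of its entire subtree, so the hierarchy below it is preserved.
```
import java.util.List;

record Domain(long id, String path) {}

class DomainMover {
    // Rewriting the prefix /ROOT/a/ to /ROOT/b/ turns /ROOT/a/dept/team/
    // into /ROOT/b/dept/team/, keeping the subtree intact.
    List<Domain> move(List<Domain> subtree, String oldPrefix, String newPrefix) {
        return subtree.stream()
                .map(d -> new Domain(d.id(),
                        newPrefix + d.path().substring(oldPrefix.length())))
                .toList();
    }
}
```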
This PR adds the capability in CloudStack to convert VMware instance disk(s) to KVM using virt-v2v and import them as CloudStack instances. It enables CloudStack operators to import VMware instances from vSphere into a KVM cluster managed by CloudStack. The vSphere/VMware setup may be managed by CloudStack or be a standalone setup.
CloudStack will let the administrator select a VM from an existing VMware vCenter in the CloudStack environment, or from an external vCenter by requesting the vCenter IP, datacenter name and credentials.
The migrated VM will be imported as a KVM instance.
The migration is done through virt-v2v: https://access.redhat.com/articles/1351473, https://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration.html
The migration process timeout can be set by the setting convert.instance.process.timeout
Before attempting the virt-v2v migration, CloudStack will create a clone of the source VM on VMware. The clone VM will be removed after the registration process finishes.
CloudStack will delegate the migration action to a KVM host, and the host will attempt to migrate the VM by invoking virt-v2v. In case the guest OS is not supported, CloudStack will handle the operation as a failure.
The migration process using virt-v2v may not be a fast process.
CloudStack will not perform any checks of guest OS compatibility for the virt-v2v library, as indicated on: https://access.redhat.com/articles/1351473.
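For orientation, a hedged sketch of how a host-side agent could invoke virt-v2v for such a conversion (the flags follow the virt-v2v documentation; the exact command CloudStack builds may differ):
```
import java.io.IOException;
import java.util.concurrent.TimeUnit;

class VirtV2vRunner {
    // Run virt-v2v against a vCenter source and write the converted disks
    // locally, bounded by the configured conversion timeout.
    boolean convert(String vcenterUri, String vmName, String outputDir,
                    long timeoutSeconds) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "virt-v2v",
                "-ic", vcenterUri,   // e.g. vpx://user@vcenter/DC/cluster/host?no_verify=1
                vmName,
                "-o", "local",
                "-os", outputDir)
                .inheritIO()
                .start();
        if (!p.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
            p.destroyForcibly();     // treat a hung conversion as a failure
            return false;
        }
        return p.exitValue() == 0;
    }
}
```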
* 4.18:
server: Initial new vpnuser state (#8268)
UI: Removed redundant IP Address Column when create Port forwarding rules (#8275)
UI: Removed ICMP input fields for protocol number from ACL List rules modal (#8253)
server: check if there are active nics before network GC (#8204)