* Update AncientDataMotionStrategy.java
fix: when secondary storage usage is > 90%, volume migration across primary storage causes the migration to fail and the volume to be lost
* Update AncientDataMotionStrategy.java
A volume is migrated across primary storage. If no secondary storage is available (or its used capacity is > 90%), the migration is cancelled.
Before this change, if no secondary storage could be found, copyVolumeBetweenPools returned null.
copyAsync treats answer == null as a sign of successful task execution, so it deletes the volume on the old primary storage. This is the root cause of the data loss, because the volume was never migrated at all.
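A standalone sketch of the pitfall and the guard described above (simplified, hypothetical types; not the actual AncientDataMotionStrategy/Answer classes):
```
// Illustrative only: shows why a null answer must not be treated as success.
public class CopyAsyncNullAnswerSketch {

    static class Answer {
        private final boolean success;
        Answer(boolean success) { this.success = success; }
        boolean getResult() { return success; }
    }

    // Before the fix: returns null when no usable secondary storage is found.
    static Answer copyVolumeBetweenPools(boolean secondaryStorageAvailable) {
        return secondaryStorageAvailable ? new Answer(true) : null;
    }

    static void copyAsync(boolean secondaryStorageAvailable) {
        Answer answer = copyVolumeBetweenPools(secondaryStorageAvailable);
        // Treating "answer == null" as success would delete the source volume
        // even though no migration happened -> data loss. Guard explicitly:
        if (answer != null && answer.getResult()) {
            System.out.println("migration succeeded; source volume may be deleted");
        } else {
            System.out.println("migration failed or was cancelled; keep the source volume");
        }
    }

    public static void main(String[] args) {
        copyAsync(false); // keeps the source volume instead of deleting it
    }
}
```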
* code in comment removed
Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
* 4.13:
Snapshot deletion issues (#3969)
server: Cannot list affinity group if there are hosts dedicated… (#4025)
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)
* Fixes snapshot deletion
* Remove legacy '@Component', it is not necessary in this bean/class.
* Fix log message missing %d and remove snapshot on DB
* Remove "dummy" boolean return statement
* Manage snapshot deletion for KVM + NFS (primary storage)
* checkstyle trailing spaces
* rename options strings to *_OPTION
* Fix typo on deleteSnapshotOnSecondaryStorage and enhance log message
* Move the snapshotDao.remove(snapshotId); (#4006)
* Fix deletesnapshot workflow to handle both snapshots created in primary storage and snapshots backed up to secondary storage
* Fix extra space
* refactor out separate handling methods for secondary and primary (reducing returns)
* return false on unexpected error or log when expected
* != instead of ==
* secondary instead of backup storage
* init to null
* Handle snapshot deletion on primary storage. When primary store ref not found for snapshot do not fail the operation.
* Fix debug levels on log messages
Co-authored-by: GabrielBrascher <gabriel@apache.org>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Remove constraint for NFS storage
* Add new property on agent.properties
* Add free disk space check on the host prior to template download
* Add unit tests for the free space check
* Fix free space check - retrieve available size in bytes
* Update default location for direct download
* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope
* Verify location for temporary download exists before checking free space
* In progress - refactor and extension
* Refactor and fix
* Last fixes and marvin tests
* Remove unused test file
* Improve logging
* Change default path for direct download
* Fix upload certificate
* Fix ISO failure after retry
* Fix metalink filename mismatch error
* Fix iso direct download
* Fix for direct download ISOs on local storage and shared mount point
* Last fix iso
* Fix VM migration with ISO
* Refactor volume migration to remove secondary storage intermediate
* Fix simulator issue
As previously described by PR #3929:
If vm has attached ISO, the migration fails with error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
This adds support for JDK11 in CloudStack 4.14+:
- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* server: fix resource count of primary storage if some volumes are Expunged but not removed
Steps to reproduce the issue
(1) create a vm and stop it. check resource count of primary storage
(2) download volume. resource count of primary storage is not changed.
(3) expunge the vm, the volume will be in the Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreased.
(4) update resource count of the account (or domain), the resource count of primary storage is reset to the value in step (2).
* New feature: Add support to destroy/recover volumes
* Add integration test for volume destroy/recover
* marvin: check resource count of more types
* messages translate to JP
* Update messages for CN
* translate message for NL
* fix two issues per Daan's comments
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Fixes #3191
When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file could be deleted after template installation if it is not an actual (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file will be copied across the image stores. Matching the checksum of the copied template file against the stored value from the vm_template table will result in a mismatch.
The change sets an empty checksum value for the copied template when passing it to the download service, which allows skipping the incorrect checksum check for the copy during install.
However, this results in a change in the checksum value for the concerned template entry in the vm_template table after template install.
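A minimal sketch of the skip behaviour described above (illustrative method only, not the actual download service code): an empty checksum on a copied template means there is nothing to verify, so no false mismatch is raised against the md5sum stored for the original download.
```
public class CopyTemplateChecksumSketch {

    static boolean checksumAcceptable(String expectedChecksum, String actualChecksum) {
        if (expectedChecksum == null || expectedChecksum.isEmpty()) {
            // Copied template: checksum was intentionally left empty, skip verification;
            // the value is recomputed and stored after the copy is installed.
            return true;
        }
        return expectedChecksum.equalsIgnoreCase(actualChecksum);
    }
}
```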
Co-authored-by: dahn <daan.hoogland@gmail.com>
In case an older SSVM is removed without changing its state from Up
to Destroyed/Removed etc, the SSVM may be randomly selected for image
store related operations. This fix ensures that endpoints for an image
store are found only from a set of SSVM hosts that are not removed.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Diagnostics are hard when a snapshot fails due to a null pointer, because no stack trace or location of the error is logged, e.g.:
2019-10-21 12:55:00,056 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-131:ctx-80420156 job-10033827/job-10033828 ctx-4864e2f5) (logid:21454564) copy snasphot failed: java.lang.NullPointerException
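A minimal sketch of the logging change implied above (java.util.logging is used here only to keep the example dependency-free; CloudStack itself uses log4j): pass the throwable to the logger so the stack trace and failure location are recorded instead of only the exception's toString().
```
import java.util.logging.Level;
import java.util.logging.Logger;

public class SnapshotCopyLoggingSketch {
    private static final Logger LOG = Logger.getLogger(SnapshotCopyLoggingSketch.class.getName());

    void copySnapshot() {
        try {
            throw new NullPointerException(); // stand-in for the failing copy step
        } catch (Exception e) {
            // Passing 'e' as the last argument logs the full stack trace.
            LOG.log(Level.WARNING, "copy snapshot failed", e);
        }
    }
}
```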
Problem: In VMware, appliances with options that must be answered before deployment are configurable through the vSphere vCenter user interface, but this is not possible from the CloudStack user interface.
Root cause: CloudStack does not handle vApp configuration options during deployments if the appliance contains configurable options. These configurations are mandatory for VM deployment from the appliance on VMware vSphere vCenter. As shown in the image below, VMware detects there are mandatory configurations that the administrator must set before deploying the VM from the appliance (in red on the image below):
Solution:
On template registration, after it is downloaded to secondary storage, the OVF file is examined and OVF properties are extracted from the file when available.
OVF properties extracted from templates after being downloaded to secondary storage are stored on the new table 'template_ovf_properties'.
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, then the optional section is not displayed on the wizard.
If the selected template contains OVF properties, then the optional new section is displayed. Each OVF property is displayed and the user must complete every property before proceeding to the next section.
If any configuration property is empty, then a dialog is displayed indicating that there are empty properties which must be set before proceeding
The specific OVF properties set on deployment are stored on the 'user_vm_details' table with the prefix: 'ovfproperties-'.
The VM is configured with the vApp configuration section containing the values that the user provided on the wizard.
Fix regression bug that affects KVM local storage migration. Some of the desired execution flows for KVM local storage migration had been altered to allow only managed storage to execute. Fixed by allowing both managed and non-managed storage to execute.
Fixes #3521
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods,
DataStore getRandomImageStore(List<DataStore> imageStores);
To get an image store for reading purposes. The threshold capacity check is not applied here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
To get an image store for writing purposes. The threshold capacity check is applied here and the store with the most free space is returned. If no store filled below the threshold is found, NULL is returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
To get a list of image stores for writing purposes which fulfil the threshold capacity check.
Correspondingly DataStoreManager methods have been refactored to return similar values for a given zone.
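A standalone sketch of the write-side selection policy described above (simplified types; the 90% threshold and field names are assumptions for illustration, not the actual ImageStoreProviderManager code):
```
import java.util.Comparator;
import java.util.List;

public class ImageStoreSelectionSketch {

    static class DataStore {
        final String name;
        final long freeBytes;
        final double usedFraction; // 0.0 - 1.0
        DataStore(String name, long freeBytes, double usedFraction) {
            this.name = name;
            this.freeBytes = freeBytes;
            this.usedFraction = usedFraction;
        }
    }

    static final double CAPACITY_THRESHOLD = 0.90; // assumed threshold for this sketch

    // Returns the store with the most free space among stores below the threshold,
    // or null when every store is filled beyond the threshold.
    static DataStore getImageStoreWithFreeCapacity(List<DataStore> stores) {
        return stores.stream()
                .filter(s -> s.usedFraction < CAPACITY_THRESHOLD)
                .max(Comparator.comparingLong((DataStore s) -> s.freeBytes))
                .orElse(null);
    }
}
```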
Fixes #3287 - A NULL value will be returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning a random secondary storage for writing, the storage with the most free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storages will be mounted during preparation.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
Fixes checkstyle issue caused by previous commit 6a511fce40
from PR #3466 where a minor review fix did not address this. Merging this
one to unblock a few other PRs after running a local build test.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Add CephSnapshotStrategy to handle RBD revert (rollback) snapshot. In order to support RBD revert (rbd_rollback), this PR adds a CephSnapshotStrategy class to handle Ceph/RBD snapshot actions.
- Fixes tests path from old layout to standard maven in src/test/java/
- Removed duplicate SnapshotManagerImpl at old path `server/src/com...`
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548
Live storage migration on KVM under these conditions:
From source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage migration capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.
Additional notes:
To use this feature set the storage_motion_supported=1 in the hypervisor_capability table for KVM. This is done by default as the feature may not work in some environments, read below.
This feature of online storage+VM migration for KVM will only work with CentOS6 and possibly Ubuntu as KVM hosts, but not with CentOS7 due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS7; however, this works with CentOS6.
Fix for CentOS 7:
Create repo file on /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: When a multi-disk OVA template is uploaded, only the root disk is recognized and VMs deployed using such template only get the root disk provisioned.
Root Cause: The template processor for multi-disk OVA was not used in the template upload processor.
Solution: Added support for local multi-disk OVA template upload. After a multi-disk OVA template is
uploaded, the mechanism that worked on multi-disk OVA templates registered using URL is now also used to discover and create data-disk templates in the cloud.vm_template table and on the secondary storage.
To enable SSL on SSVMs:
• Upload the certificates like you usually do via the API or UI->Infrastructure tab
• Set the global settings secstorage.encrypt.copy, secstorage.ssl.cert.domain to appropriate values
along with the CPVM ones
• Restart management server (no need to destroy/restart SSVM (or the ssvm agent))
Test cases:
- Upload template and check it creates multi-disk folders on secondary
storage and entries in cloud.vm_template table
- Upload template and kill/shutdown management server. Then restart MS
to check if template sync works
- Copy template across zone of an uploaded template
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This commit simplifies the generateDestPath method and fixes an issue where an extra file, named 'null', was created on the target storage pool during VM local storage volume migration. Without this fix, the VM is migrated and there is no data loss; however, 193 KB is allocated for the unused file named 'null' and the file stays on the target storage.
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
merging
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* skip getting used bytes for volumes that are not in Ready state
* updated log message
* filter snapshots by state backedup
* removed * import
* filter templates by state 'DOWNLOADED'
* refactored getUsedBytes to use O(1) queries (see the sketch after this list)
* querying for ready volumes instead of filtering in memory
* make listByStoreIdInReadyState more generic ex listByStoreIdAndState
* updated snapshot search criteria for listByStoreIdAndState
* updated template search criteria for listByPoolIdAndState
* fixed typo in search criteria for listByTemplateAndState
* fixed typo in search criteria for templates in listByPoolIdAndState
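A hedged sketch of the O(1)-query idea behind the list above (plain JDBC with illustrative table/column names, not the actual DAO code): let the database sum the sizes of objects in the relevant state instead of loading every row and filtering in memory.
```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UsedBytesSketch {

    // Single aggregate query per object type instead of iterating over all rows.
    static long usedBytesForReadyVolumes(Connection conn, long storeId) throws SQLException {
        String sql = "SELECT COALESCE(SUM(size), 0) FROM volumes WHERE pool_id = ? AND state = 'Ready'";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, storeId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}
```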
With this patch b766bf7
we started tracking disks in the Attaching state so that other attach requests can fail gracefully. However, this missed the case where a disk was in the Allocated state and an attach was requested.
For the use case where users want to attach a disk that is in the Allocated state but not Ready, we need the Allocated-Attaching transition as well. We must take care of returning to the original state - Allocated or Ready - when the attach request has completed.
For the use case of unstarted VMs the disk must proceed as follows - "Allocated" -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For the use case of started VMs it's more trivial and the disk proceeds as follows - Ready -> Attaching -> Ready.
Test this by creating a VM with "startvm=false", creating a disk and trying to attach it in the Allocated state. It gives an exception on the latest 4.11 but is fixed by this patch.
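A simplified, standalone sketch of the transition handling described above (hypothetical enum and method names): remember the state the disk was in before Attaching and return to it when the attach request completes.
```
public class VolumeAttachStateSketch {

    enum State { ALLOCATED, READY, ATTACHING }

    // Returns the state the disk ends up in once the attach request has completed.
    static State attach(State current) {
        if (current != State.ALLOCATED && current != State.READY) {
            throw new IllegalStateException("cannot attach a volume in state " + current);
        }
        State original = current;                 // remember where we came from
        State during = State.ATTACHING;           // Allocated -> Attaching, or Ready -> Attaching
        System.out.println("transition " + original + " -> " + during + " -> " + original);
        // ... the actual attach request would be issued here ...
        return original;                          // Attaching -> Allocated, or Attaching -> Ready
    }

    public static void main(String[] args) {
        attach(State.ALLOCATED); // unstarted VM created with startvm=false
        attach(State.READY);     // started VM
    }
}
```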
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Users are billed for destroyed VMs with VM snapshots because usage records do not reflect that the VM and its VM snapshots were removed.
Root Cause: The destroyVirtualMachine and expungeVirtualMachine APIs were removing VM snapshots but not generating VMSNAPSHOT.DELETE usage event due to which the VM snapshots were not marked as removed in the usage_vmsnapshot table.
Solution: The issue was fixed by emitting the proper usage event for all the VM snapshots of a VM that is destroyed.
* Migrate template to target host if needed.
Fix KVM VM local storage live migration by migrating its template to the
target host if needed.
* Address reviewer and add method that updates the DB template reference
* Remove deprecated Config.PrimaryStorageDownloadWait
* Code formatting of @Inject to follow checkstyle
* feature: add libvirt / qemu io bursting
Adds the ability to set bursting features from libvirt / qemu
This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.
https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/
* updates per rafael et al
* The snapshot.backup.rightafter configuration variable was removed by:
SHA: 6bb0ca2f85
This adds it back, though named snapshot.backup.to.secondary now instead.
This global parameter, once set, will allow you to prevent automatic backups of
snapshots to secondary storage, unless they're actually needed.
Fixes #3096
* updates per review
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations
* Fix indentation of marvin test file and reformat against PEP8
* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.
* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one
* Fix unhandled NPE during VM migration
* Revert back to distinct event descriptions for VM to host or storage pool migration
* Reformat test_primary_storage file against PEP-8 and Remove unused imports
* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
* Allow KVM VM live migration with ROOT volume on file
* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests
* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
These boolean-return methods are named "getXXX".
Other boolean-return methods are named "isXXX".
Considering these methods return boolean values, it is clearer and more consistent to rename them to "isXXX".
(rebase #2602 and #2816)
* rename package from "src" to "org.apache.cloudstack.storage.vmsnapshot"
* Remove Empty test class "SnapshotDataFactoryTest"
* Remove redundant imports (importing a class from its own package)
* Fix the problem at #1740 when it loads all snapshots in the primary storage
While checking a problem with @mike-tutkowski at PR #1740, we noticed that the method `org.apache.cloudstack.storage.snapshot.SnapshotDataFactoryImpl.getSnapshots(long, DataStoreRole)` was not loading the snapshots in `snapshot_store_ref` table according to their storage role. Instead, it would return a list of all snapshots (even if they are not in the storage role sent as a parameter) saying that they are in the storage that was sent as parameter.
* Add unit test
* 4.11:
Changed the implementation of isVolumeOnManagedStorage(VolumeInfo) to check if the data store in question is for primary storage (and added a unit test from Daan Hoogland)
vmware: reboot VR after mac updates (#2794)
* Cleaup and code-formatting POM files
* Remove obsolete mycila license-maven-plugin
* Remove obsolete console-proxy/plugin project
* Move console-proxy-rdbconsole under console-proxy parent
* Use correct parent path for rdpconsole
* Order items alphabetically in setnextversion.sh
* Unify License header in POMs
* Alphabetic order of modules definition
* Extract all defined versions into parent pom
* Remove obsolete files: version-info.in, configure-info.in
* Remove redundant defaultGoal
* Remove useless checkstyle plugin from checkstyle project
* Order items alphabetically in pom.xml
* Add additional SPACEs to fix debian build
* Don't execute checkstyle on parent projects
* Use UTF-8 encoding in building checkstyle project
* Extract plugin versions into properties
* Execute PMD plugin on all the projects with -Penablefindbugs
* Upgrade maven plugins to latest version
* Make sure to always look for apache parent pom from repository
* Fix incorrect version grep in debian packaging
* Fix rebase conflicts
* Fix rebase conflicts
* Remove PMD for now to be fixed on another PR
Fixes the version in pom etc. to be consistent with versioning pattern as X.Y.Z.0-SNAPSHOT after a minor release.
Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
This ensures that fewer mount points are created on hosts for either
primary storage pools or secondary storage pools.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Create unit test cases for 'ConfigDriveBuilder' class
* add method 'getProgramToGenerateIso' as suggested by rohit and Daan
* fix encoding for base64 to StandardCharsets.US_ASCII
* fix MockServerTest.testIsMockServerCanUpgradeConnectionToSsl()
This is another method that has been causing Jenkins to fail for almost a month
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive iso files on primary storage, the default will be false causing them to be hosted on secondary storage. The current support is limited from hypervisor resource side and in current implementation limited to `KVM` only. The next big change is that config drive is created at a temporary location by management server and shipped to either KVM or SSVM agent via cmd-answer pattern, the data of which is not logged in logs. This saves us from adding genisoimage dependency on cloudstack-agent pkg.
The APIs to reset the ssh public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating an existing ISO. If there are objections I'll re-introduce the strategy of detaching and attaching a new config ISO as a way of updating. In the refactored implementation, the folder name is changed to lower-cased configdrive. During VM start, migration or shutdown/removal, if primary storage is enabled for use, the KVM agent handles cleanup tasks; otherwise the SSVM agent handles them.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Primary Storage count for an account does not decrease when a Data Disk is deleted
When a data disk is created and not attached to a running VM, "deleteVolume" does not decrement the count of used primary storage in the VM's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method. A minimal accounting sketch follows the reproduction steps below.
Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with listAccounts API - It is the same as before deleting the data disk (it should have gone back to the value from step 2!)
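A minimal accounting sketch of the expected behaviour from the steps above (hypothetical names, not the actual resource-count code): the primarystoragetotal counter must be released for every deleted volume, including data disks that were never attached to a running VM.
```
import java.util.HashMap;
import java.util.Map;

public class PrimaryStorageCountSketch {

    private final Map<Long, Long> primaryStorageTotalByAccount = new HashMap<>();

    void onVolumeCreated(long accountId, long sizeBytes) {
        primaryStorageTotalByAccount.merge(accountId, sizeBytes, Long::sum);
    }

    void onVolumeDeleted(long accountId, long sizeBytes) {
        // The fix: decrement for every deleted volume, attached or not,
        // so step 6 reports the same value as step 2.
        primaryStorageTotalByAccount.merge(accountId, -sizeBytes, Long::sum);
    }

    long primaryStorageTotal(long accountId) {
        return primaryStorageTotalByAccount.getOrDefault(accountId, 0L);
    }
}
```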
* formatting and cleanups
* fix imports that were wrongly changed during rebase
* CLOUDSTACK-8855 Improve Error Message for Host Alert State
* [CLOUDSTACK-9846] create column to save the content of alert messages
Remove declaration of throws CloudRuntimeException
I also removed some unused variables and comments left behind
This closes #837
* Isolate a problematic test "smoke/test_certauthority_root"
* Refactored nuage tests
Added simulator support for ConfigDrive
Allow all nuage tests to run against simulator
Refactored nuage tests to remove code duplication
* Move test data from test_data.py to nuage_test_data.py
Nuage test data is now contained in nuage_test_data.py instead of
test_data.py
Removed all nuage test data from nuage_test_data.py
* CLOUD-1252 fixed cleanup of vpc tier network
* Import libVSD into the codebase
* CLOUDSTACK-1253: Volumes are not expunged in simulator
* Fixed some merge issues in test_nuage_vsp_mngd_subnets test
* Implement GetVolumeStatsCommand in Simulator
* Add vspk as marvin nuagevsp dependency, after removing libVSD dependency
* correct libVSD files for license purposes
pep8 pyflakes compliant
* [CLOUDSTACK-10240] ACS cannot migrate a volume from local to shared storage.
CloudStack is logically restricting the migration of volumes from local to shared storage and vice versa. This restriction is a logical one and can be removed for XenServer deployments. Therefore, we will enable migration of volumes between local and shared storage in XenServer independently of their service offering. This will work as an override mechanism to the disk offering used by volumes. If administrators want to migrate local volumes to a shared storage, they should be able to do so (the hypervisor already allows that). The same applies the other way around.
* Cleanups implemented while working on [CLOUDSTACK-10240]
* Fix test case test_03_migrate_options_storage_tags
The changes applied were:
- When loading hypervisors capabilities we must use "default" instead of nulls
- "Enable" storage migration for simulator hypervisor
- Remove restriction on "ClusterScopeStoragePoolAllocator" to find shared pools
This fixes a move-refactoring error introduced in #2283
For instance, the class DatadiskTO is supposed to be in com.cloud.agent.api.to package. However, the folder structure it was placed in is com.cloud.agent.api.api.to.
Skip tests for cloud-plugin-hypervisor-ovm3:
For some unknown reason, there are quite a lot of broken test cases for cloud-plugin-hypervisor-ovm3. They might have appeared after some dependency upgrade and were overlooked by the person updating them. I checked them to see if they could be fixed, but these tests are not developed in a clear and clean manner. On top of that, we do not see (at least I don't) people using the OVM3 hypervisor with ACS. Therefore, I decided to skip them.
Indentation corrected to use spaces instead of tabs in XML files
Remove maven standard module (which only a few were using) and get rid of maven customization for the projects structure.
- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
CloudStack volumes and templates are one single virtual disk in the case of XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes: attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks
This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk and volumes should be created based on the other VMDK disks in the OVA file and attached to the instance.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows using templates and ISOs avoiding secondary storage as an intermediate cache on KVM. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.
Template and ISO registration:
- When hypervisor is KVM, a checkbox is displayed with 'Direct Download' label.
- API methods registerTemplate and registerISO are both extended with this new parameter directdownload.
- On template or ISO registration, no download job is sent to SSVM agent, CloudStack would only persist an entry on template_store_ref indicating that template or ISO has been marked as 'Direct Download' (bypassing Secondary Storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check if the template/ISO location can be reached. Metalinks are also supported by this feature. In case of a metalink, it is fetched and the URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH
- After template or ISO is registered, it would be displayed in the UI
Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack would delegate template downloading to destination storage pool via destination host by a new pluggable download manager.
Download manager would handle template downloading depending on URL protocol. In case of HTTP, request headers can be set by the user via vm_template_details. Those details should be persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE
In case of HTTPS, a new API method uploadTemplateDirectDownloadCertificate is added to allow the user to import a client certificate into all KVM hosts' keystores before deployment.
After template or ISO is downloaded to primary storage, usual entry would be persisted on template_spool_ref indicating the mapping between template/ISO and storage pool.
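An illustrative sketch of splitting an HTTP_HEADER template detail in the HEADERNAME:HEADERVALUE form described above (standalone helper, not the actual direct-download manager code):
```
import java.util.AbstractMap;
import java.util.Map;

public class DirectDownloadHeaderSketch {

    // Splits "HEADERNAME:HEADERVALUE" into a name/value pair for the download request.
    static Map.Entry<String, String> parseHeaderDetail(String value) {
        int idx = value.indexOf(':');
        if (idx < 0) {
            throw new IllegalArgumentException("expected HEADERNAME:HEADERVALUE, got " + value);
        }
        return new AbstractMap.SimpleEntry<>(value.substring(0, idx).trim(), value.substring(idx + 1).trim());
    }

    public static void main(String[] args) {
        Map.Entry<String, String> header = parseHeaderDetail("Authorization:Bearer abc123");
        System.out.println(header.getKey() + " -> " + header.getValue());
    }
}
```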
Snapshot on primary storage not cleaned up after Storage migration. This happens in the following scenario:
Steps To Reproduce
Create an instance on the local storage on any host
Create a scheduled snapshot of the volume:
Wait until ACS has created the snapshot. ACS creates a snapshot on local storage and transfers this snapshot to secondary storage, but the latest snapshot on local storage stays there. This is as expected.
Migrate the instance to another XenServer host with the ACS UI and Storage Live Migration
The snapshot on the old host's local storage will not be cleaned up and stays on local storage, so local storage fills up with unneeded snapshots.
This fixes the following:
- Unchecked thread growth in RemoteHostEndPoint
- Potential NPE while finding EP for a storage/scope
Unbounded thread growth can be reproduced with following findings:
- Every unreachable template would produce 6 new threads (in a single
ScheduledExecutorService instance) spaced by 10 seconds
- Every reachable template url without the template would produce 1 new
thread (and one ScheduledExecutorService instance), it errors out quickly without
causing more thread growth.
- Every valid url will produce up to 10 threads as the same ep (endpoint
instance) will be reused to query upload/download (async callback)
progresses.
Every RemoteHostEndPoint instance creates its own
ScheduledExecutorService instance which is why in the jstack dump, we
see several threads that share the prefix RemoteHostEndPoint-{1..10}
(given poolsize is defined as 10, it uses suffixes 1-10).
This fixes the discovered thread leakage with following notes:
- Instead of ScheduledExecutorService instance, a cached pool could be
used instead and was implemented, and with `static` scope to be reused
among other future RemoteHostEndPoint instances.
- It was not clear why we would want to wait when we've Answers returned
from the remote EP, and therefore a scheduled/delayed Runnable was
not required at all for processing answers. ScheduledExecutorService
was therefore not really required, moved to ExecutorService instead.
- Another benefit of using a cached pool is that it will shutdown
threads if they are not used in 60 seconds, and they get re-used for
future runnable submissions.
- Caveat: the executor service is still unbounded, however, the use-case
that this method is used for short jobs to check upload/download
progresses fits the case here.
- Refactored CmdRunner to not use/reference objects from parent class.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Migrate to embedded Jetty server.
- Improve ServerDaemon implementation.
- Introduce a new server.properties file for easier configuration.
- Have a single /etc/default/cloudstack-management to configure env.
- Reduce shaded jar file, removing unnecessary dependencies.
- Upgrade to Spring 5.x, upgrade several jar dependencies.
- Does not shade and include mysql-connector, used from classpath instead.
- Upgrade and use bouncycastle as a separate un-shaded jar dependency.
- Remove tomcat related configuration and files.
- Have both embedded UI assets in uber jar and separate webapp directory.
- Refactor systemd and init scripts, cleanup packaging.
- Made cloudstack-setup-databases faster, using `urandom`.
- Remove unmaintained distro packagings.
- Moves creation and usage of server keystore in CA manager, this
deprecates the need to create/store cloud.jks in conf folder and
the db.cloud.keyStorePassphrase in db.properties file. This also
remove the need of the --keystore-passphrase in the
cloudstack-setup-encryption script.
- GZip contents dynamically in embedded Jetty
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This implements a CloudStack Prometheus exporter as a plugin that serves
metrics on an HTTP port.
New global settings:
1. prometheus.exporter.enable - (default: false), Enable the prometheus
exporter plugin, management server restart needed.
2. prometheus.exporter.port - (default: 9595), The prometheus exporter
server port.
3. prometheus.exporter.allowed.ips - (default: 127.0.0.1), List of comma
separated prometheus server ips (with no spaces) that should be allowed to
access the URLs.
The following list of metrics is provided per pop (zone) by the exporter:
• Per host:
o CPU cores: used, total
o CPU usage: used, total (in MHz)
o Memory usage: used, total (in MiBs)
o Total VMs running on the host
• CPU cores: allocated (per zone)
• CPU usage: allocated (per zone, in MHz)
• Memory usage: allocated (per zone, in MiBs)
• Hosts: online, offline, total
• VMs: in all states -- starting, running, stopping, stopped, destroyed,
expunging, migrating, error, unknown
• Volumes: ready, destroyed, total
• Primary Storage Pool: (Disk size) used, allocated, unallocated, total (in GiBs)
• Secondary Storage Pool: (Disk size) used, allocated, unallocated, total (in GiBs)
• Private IPs: allocated, total
• Public IPs: allocated, total
• Shared Network IPs: allocated, total
• VLANs: allocated, total
Additional metrics for the environment:
• Summed domain (level=1) limit for CPU cores
• Summed domain (level=1) limit for memory/ram
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Snapshots are not deleted, resulting in unexpected storage consumption in the case of VMware.
Steps to reproduce this issue :
In VMware setup, create a snapshot of volume say Snap1.
After successful creation of snapshot Snap1, create a new snapshot of the same volume, say Snap2.
While Snap2 is in the BackingUp state, delete Snap1.
Snap1 will disappear from the Web UI, but when we check secondary storage, files associated with Snap1 still persist even after the cleanup job is performed.
In snapshot_store_ref table in DB, Snap1 will be in ready state instead of Destroyed.
Also, in snapshots table, status of Snap1 will be Destroyed but removed column will be null and will never change to the date of snapshot removal.
Fix for this issue :
In VMware, a snapshot chain is not maintained; instead a full snapshot is taken every time.
So it makes sense not to assign a parent snapshot id to the snapshot. This way, every snapshot will be individual and can be deleted successfully whenever required.
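A simplified sketch of the rule described above (hypothetical types, not the actual snapshot strategy code): on VMware every snapshot is a full snapshot, so no parent snapshot id is recorded and each snapshot can be deleted independently.
```
public class SnapshotParentSketch {

    static class SnapshotRecord {
        Long parentSnapshotId; // null means "no chain"
    }

    static SnapshotRecord newSnapshotRecord(String hypervisor, Long previousSnapshotId) {
        SnapshotRecord record = new SnapshotRecord();
        if ("VMware".equalsIgnoreCase(hypervisor)) {
            record.parentSnapshotId = null;               // full snapshot, no chain to preserve
        } else {
            record.parentSnapshotId = previousSnapshotId; // incremental chain kept
        }
        return record;
    }
}
```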
This fixes the agreed upon url on download.cloudstack.org in various
sql files and misc scripts.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle
only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
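A standalone sketch of the split of responsibilities described above (illustrative types only): the storage cleanup thread skips ROOT volumes, which are removed as part of VM destroy, so the two threads no longer race on the same volume.
```
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StorageCleanupSketch {

    static class Volume {
        final String name;
        final String type; // "ROOT" or "DATADISK"
        Volume(String name, String type) { this.name = name; this.type = type; }
    }

    // Only non-ROOT volumes are handled by the cleanup thread.
    static List<Volume> volumesForCleanup(List<Volume> destroyedVolumes) {
        return destroyedVolumes.stream()
                .filter(v -> !"ROOT".equals(v.type))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Volume> destroyed = Arrays.asList(new Volume("ROOT-12", "ROOT"), new Volume("DATA-7", "DATADISK"));
        volumesForCleanup(destroyed).forEach(v -> System.out.println("cleanup: " + v.name));
    }
}
```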
* pr/1825:
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This closes #1644
* 4.9:
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable
Unhides snapshot.backup.rightafter from global configuration
If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage.
This is the same as PR #1644 applied to 4.9, as per @jburwell
* pr/1697:
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM
* pr/977:
Fixes for testing VM Snapshots on KVM. Related to PR 977
CLOUDSTACK-8746: vm snapshot implementation for KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
This PR is to address a few issues in #1600 (which was recently merged to master for 4.10).
In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one such scenario, the source snapshot in question is coming from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).
This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.
I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.
While fixing that, I noticed a couple more problems:
1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
I have corrected those issues, as well.
I then came across one more problem:
When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).
I went ahead and changed that, as well.
JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619
* pr/1749:
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
(1) add support to create/delete/revert vm snapshots on running vms with QCOW2 format
(2) add new API to create volume snapshot from vm snapshot
(3) delete metadata of vm snapshots before stopping/migrating and recover vm snapshots after starting/migrating
(4) enable deleting of a VM snapshot on a stopped VM or when the VM snapshot is not listed in the qcow2 image.
(5) enable smoke tests for VM snapshots on KVM
store ref table so builtin template is never downloaded completely
In the handleSysTemplateDownload method, the template is created only if no entry exists;
handleTemplateSync will take care of the other scenario
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK8
- Bump servlet dependency version
- Migrate spring xmls to version 4, fixes schema locations that are 3.0
dependent in various xmls.
- Fix failing tests due to spring upgrade
(Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
* Fix test DeploymentPlanningManagerImplTest
* Fix GloboDNS test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle
only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
CLOUDSTACK-9627 Fix template sync for region store.
When using a region store like Swift or S3 as secondary storage,
the `zoneId` can be null. This causes an exception when we try
to convert it to a `long`. This fix guards against that.
Before this fix, if you restart the management server, all the templates
would change to "NOT READY" because the code which syncs the NFS cache
and the object store crashes due to the above mentioned issue.
This PR fixes that.
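A minimal sketch of the guard described above (illustrative names and sentinel value): unboxing a null `zoneId` into a primitive long is what threw during template sync, so check for null first.
```
public class ZoneIdGuardSketch {

    static long toZoneId(Long zoneId) {
        // Region-wide stores such as Swift/S3 have no zone, so zoneId can be null.
        return zoneId == null ? -1L : zoneId; // sentinel instead of an NPE on unboxing
    }

    public static void main(String[] args) {
        System.out.println(toZoneId(null)); // -1 rather than a crash
        System.out.println(toZoneId(42L));  // 42
    }
}
```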
* pr/1772:
CLOUDSTACK-9627:Fix template sync for region store
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR adds the ability to pass a new parameter, locationType,
to the “createSnapshot” API command. Depending on the locationType,
we decide where the snapshot should go in case of managed storage.
There are two possible values for the locationType param
1) `Standard`: The standard operation for managed storage is to
keep the snapshot on the device. For non-managed storage, this will
be to upload it to secondary storage. This option will be the
default.
2) `Archive`: Applicable only to managed storage. This will
keep the snapshot on the secondary storage. For non-managed
storage, this will result in an error.
The reason for implementing this feature is to avoid a single
point of failure for primary storage. Right now in case of managed
storage, if the primary storage goes down, there is no easy way
to recover data as all snapshots are also stored on the primary.
This feature allows us to mitigate that risk.
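A minimal sketch of the locationType semantics listed above (hypothetical enum and method, not the actual API handler):
```
public class SnapshotLocationTypeSketch {

    enum LocationType { STANDARD, ARCHIVE }

    static String placeSnapshot(LocationType type, boolean managedStorage) {
        if (type == LocationType.ARCHIVE) {
            if (!managedStorage) {
                throw new IllegalArgumentException("Archive is only applicable to managed storage");
            }
            return "secondary storage";
        }
        // Standard (the default): managed storage keeps the snapshot on the device,
        // non-managed storage backs it up to secondary storage.
        return managedStorage ? "primary (managed) device" : "secondary storage";
    }
}
```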
Unhides snapshot.backup.rightafter from global configuration
If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage
CLOUDSTACK-9238: Fix URL length to 2048 for all url fields in VO
I will update the PR to add max field length in the API commands too
* pr/1567:
API: update url field max length
not needed on host table
Fix URL length to 2048 for all url fields in VO
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9368: Fix for Support configurable NFS version for Secondary Storage mounts
## Description
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9368
This pull request addresses a problem introduced in #1361 in which the NFS version couldn't be changed after host resources were configured on startup (for hosts using `VmwareResource`), and as host parameters didn't include the `nfs.version` key, it was set to `null`.
## Proposed solution
In this proposed solution `nfsVersion` would be passed in `NfsTO` through `CopyCommand` to `VmwareResource`, which will check whether the NFS version is already configured. If not, it will use the one sent in the command and set it on its storage processor and storage handler. After that setup, it will proceed to execute the command.
* pr/1518:
CLOUDSTACK-9368: Fix for Support configurable NFS version for Secondary Storage mounts
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Taking fast and efficient volume snapshots with XenServer (and your storage provider)
A XenServer storage repository (SR) and virtual disk image (VDI) each have UUIDs that are immutable.
This poses a problem for SAN snapshots, if you intend on mounting the underlying snapshot SR alongside the source SR (duplicate UUIDs).
VMware has a solution for this called re-signaturing (so, in other words, the snapshot UUIDs can be changed).
This PR only deals with the CloudStack side of things, but it works in concert with a new XenServer storage manager created by CloudOps (this storage manager enables re-signaturing of XenServer SR and VDI UUIDs).
I have written Marvin integration tests to go along with this, but cannot yet check those into the CloudStack repo as they rely on SolidFire hardware.
If anyone would like to see these integration tests, please let me know.
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9281
Here's a video I made that shows this feature in action:
https://www.youtube.com/watch?v=YQ3pBeL-WaA&list=PLqOXKM0Bt13DFnQnwUx8ZtJzoyDV0Uuye&index=13
* pr/1403:
Faster logic to see if a cluster supports resigning
Support for backend snapshots with XenServer
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.7:
Fix Sync of template.properties in Swift
Configure rVPC for router.redundant.vrrp.interval advert_int setting
Have rVPCs use the router.redundant.vrrp.interval setting
Resolve conflict as forceencap is already in master
Split the cidr lists so we won't hit the iptables-restore limits
Check the existence of 'forceencap' parameter before use
Do not load previous firewall rules as we replace everything anyway
Wait for dnsmasq to finish restart
Remove duplicate spaces, and thus duplicate rules.
Restore iptables at once using iptables-restore instead of calling iptables numerous times
Add iptables conversion script.
CLOUDSTACK-9265 cleanup around httpclient versions
Some cleanup done:
- replaced HttpStatus from org.apache.commons.httpclient with that from org.apache.http
- removed unthrown HttpException
- left auto reformat in place
* pr/1385:
CLOUDSTACK-9265 cleanup around httpclient versions
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CID-1338387: Deletion of method endPointSelector.selectHypervisorHost
Following the discussions and analysis presented on PR #1056 created by @DaanHoogland,
this PR is intended to push those changes that were discussed there regarding the deletion of the endPointSelector.selectHypervisorHost method.
* pr/1124:
Deletion of method endPointSelector.selectHypervisorHost
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Snapshot removal implemented on RBD.
1. On the management side: when a new snapshot is created, we check whether our primary storage is RBD;
if so, we do not remove the record from cloud.snapshot_store_ref that links to the Ceph
image via the 'install_path' field.
2. On the management side: when removing a snapshot, also send the 'DeleteCommand' to the agent.
3. On the agent side: implemented the method 'public Answer deleteSnapshot(final DeleteCommand cmd)'
Fixed return type Void to void in DataMotionStrategy.
The main changes are:
- Changing methods Void to void.
- Removal of the method Void copyAsync(DataObject srcData, DataObject
destData, AsyncCompletionCallback<CopyCommandResult> callback) that was
never used.
We noticed that methods from that class are using the return type Void
with capital V. This way that method has to return a null value at the
end.
Removed trim lines from XenServerStorageMotionStrategy.
The trim lines were removed from XenServerStorageMotionStrategy.
* pr/969:
Changed return of methods from DataMotionStrategy, Void to void
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The main changes are:
• Changing methods “Void” to “void”.
• Removal of the method “Void copyAsync(DataObject srcData, DataObject
destData, AsyncCompletionCallback callback)” that was never used. We
noticed that methods from that class are using the return type Void with
capital V. This way that method has to return a null value at the end.
The S3 implementation is far from finished; this commit focuses on the basics.
- Upgrade AWS SDK to latest version.
- Rewrite S3 Template downloader.
- Rewrite S3Utils utility class.
- Improve addImageStoreS3 API command.
- Split various classes for convenience.
- Various minor improvements and code optimisations.
A side effect of the new AWS SDK is that it, by default, uses the V4 signature. Therefore I added an option to specify the Signer, so it stays compatible with previous versions.
CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it throws an NPE.
* pr/976:
CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
Signed-off-by: Remi Bergsma <github@remi.nl>
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it throws an NPE.
Guys, can you review it? Things that need to be discussed:
(1) this supports KVM/QCOW2 only. Does anyone want to implement it for other hypervisors/formats?
(2) The original data volume (on primary storage) will be removed.
(3) The script uses the default timeout in libvirtComputingResource. Do we need to add one in global configuration (like copy.volume.wait or backup.snapshot.wait, create.volume.from.snapshot.wait)
(4) In scripts/storage/qcow2/managesnapshot.sh, I use "qemu-img convert -f qcow2 -O qcow2" to copy the snapshot from secondary to primary (hence there is no base image file), instead of "cp -f", because convert is faster than cp in my testing.
* pr/732:
CLOUDSTACK-5863: revert volume snapshot for KVM/QCOW2
Signed-off-by: Wei Zhou <w.zhou@tech.leaseweb.com>
This reverts commit cd7218e241, reversing
changes made to f5a7395cc2.
Reason for Revert:
noredist build failed with the below error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project cloud-plugin-hypervisor-vmware: Compilation failure
[ERROR] /home/jenkins/acs/workspace/build-master-noredist/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[484,12] error: non-static variable logger cannot be referenced from a static context
[ERROR] -> [Help 1]
even the normal build is broken as reported by @koushik-das on dev list
http://markmail.org/message/nngimssuzkj5gpbz
This patch makes it possible to expose VM UUID to subsystems, this can be
useful for implementing VM Snapshots for KVM in future.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This reverts commit 180afe52e5.
This broke the build on master:
master build broken with the below error http://jenkins.buildacloud.org/job/build-master-slowbuild/2101/consoleText
```
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/jenkins/acs/workspace/build-master-slowbuild/plugins/hypervisors/xenserver/test/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixRequestWrapperTest.java:[1581,51] error: constructor CreateVMSnapshotCommand in class CreateVMSnapshotCommand cannot be applied to given types;
[ERROR] required: String,String,VMSnapshotTO,List<VolumeObjectTO>,String
found: String,VMSnapshotTO,List<VolumeObjectTO>,String
reason: actual and formal argument lists differ in length
/home/jenkins/acs/workspace/build-master-slowbuild/plugins/hypervisors/xenserver/test/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixRequestWrapperTest.java:[1623,53] error: no suitable constructor found for RevertToVMSnapshotCommand(String,VMSnapshotTO,List<VolumeObjectTO>,String)
[INFO] 2 errors
[INFO] -------------------------------------------------------------
```
This was PR #717 towards 4.5
This patch makes it possible to expose VM UUID to subsystems, this can be
useful for implementing VM Snapshots for KVM in future.
This was PR #717 towards 4.5
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 0062ff2672)
Signed-off-by: Remi Bergsma <github@remi.nl>
* pr/547:
CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage. Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.
Signed-off-by: Mike Tutkowski <mike.tutkowski@solidfire.com>
As part of volume sync, which runs during SSVM start-up, the volume_store_ref entry was getting deleted. Volume GC relies on this entry to move the volume to the Destroyed state.
Since the entry was getting deleted, GC thread never moved the volume from UploadError/UploadAbandoned to Destroyed. Fix is to not remove the volume_store_ref entry as part
of volume sync and let GC thread handle the clean up.
This closes #611
The volume is already deleted (maybe by the cleanup thread) and hence
the NPE. Added a not-null check for the VolumeVO and return false
from the state transition
This closes #321
... of new volumes. The following changes are implemented:
1. Disable or enable a pool with the updateStoragePool api. A new 'enabled' parameter is added for the same.
2. When a pool is disabled, the state of the pool is updated to 'Disabled' in the db. On enabling it is updated back to 'Up'. An alert is raised when a pool is disabled or enabled.
3. Updated other storage providers to also honour the disabled state.
4. A disabled pool is skipped by allocators for provisioning of new volumes.
5. Since the allocators skip a disabled pool for provisioning of volumes, the volumes are also not listed as a destination for volume migration.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disabling+Storage+Pool+for+Provisioning
This closes #257
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Removes awsapi db properties usage across codebase
- Removes references from spring xmls, test cases and TransactionLegacy
- Adds sql command to drop database cloudbridge in schema-451to460-cleanup.sql
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Correctly update the status of an uploaded volume upon failure to attach it to a VM.
(cherry picked from commit 10a106f5d8)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
inconsistent state (NotUploaded or UploadInProgress) and doesn't transition to any terminal
state (Uploaded, Uploaded). As a result these volumes/templates cannot be removed. Fix is to handle such
volume/template entries correctly.
Set destination volume path as NULL while duplicating volume during migration.
If migration fails, destination volume will be marked as removed. And if migration succeeds, volume path will be rightly updated.
(cherry picked from commit d30d5644bb)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The copy command reply can have null size returned, so check and set values
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 53ca0b1861)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Set destination volume path as NULL while duplicating volume during migration.
If migration fails, destination volume will be marked as removed. And if migration succeeds, volume path will be rightly updated.
Update volume chain_info to NULL during cold migration.
Otherwise during VM start, CCP will configure and try to power-on the VM with wrong disk information.
(cherry picked from commit 7b32b8a268)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>