mirror of https://github.com/apache/cloudstack.git
* Updated libvirt's native reboot operation for VMs on KVM to use an ACPI event, and added a 'forced' reboot option that stops and then starts the VM
- Added a 'forced' reboot option for User VMs (new 'forced' parameter in the rebootVirtualMachine API, to stop and then start the User VM)
- Added a 'forced' reboot option for System VMs (new 'forced' parameter in the rebootSystemVm API, to stop and then start the System VM)
- Added a 'forced' reboot option for Routers (new 'forced' parameter in the rebootRouter API, to force-stop and then start the Router)
- Added force reboot tests for User VM, System VM and Router
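As a hedged illustration (the credentials, VM id, and host below are placeholders, not values from this change set), a rebootVirtualMachine call with the new 'forced' parameter is composed like any other signed CloudStack API request:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_request(params, secret_key):
    """Standard CloudStack API signing: sort the parameters, build the
    query string, lowercase it, HMAC-SHA1 with the secret key, base64."""
    query = "&".join(
        f"{k}={quote(str(v), safe='*')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode("utf-8"), query.lower().encode("utf-8"), hashlib.sha1
    ).digest()
    return base64.b64encode(digest).decode("utf-8")

# Placeholder credentials and VM id -- substitute your own values.
params = {
    "command": "rebootVirtualMachine",
    "id": "vm-uuid-placeholder",
    "forced": "true",  # new parameter: stop and then start, instead of an ACPI reboot
    "response": "json",
    "apiKey": "your-api-key",
}
signature = sign_request(params, "your-secret-key")
url = ("http://mgmt-server:8080/client/api?"
       + urlencode(params) + "&signature=" + quote(signature))
```

The same pattern applies to rebootSystemVm and rebootRouter; only the `command` value changes.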
* Updated the PowerFlex/ScaleIO volume operations support in CloudStack. Added support for the following:
- PowerFlex volume migration (with snapshots) within the same PowerFlex storage clusters, using native V-Tree migration.
- PowerFlex volume migration (without snapshots) across different PowerFlex storage clusters.
=> The findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s) for volume(s) on a PowerFlex storage pool.
=> Volume(s) with snapshots are not allowed to migrate to a different PowerFlex instance.
=> Volume(s) of a running VM are not allowed to migrate to other PowerFlex storage pools.
=> Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
- Template creation (on secondary storage) from a PowerFlex/ScaleIO volume or snapshot.
- Added the PowerFlex/ScaleIO volume/snapshot name to the paths of the respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
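The migration rules above can be sketched as a plain validation function (the function name and flags are illustrative, not CloudStack's actual code):

```python
def check_powerflex_migration(src_is_powerflex, dst_is_powerflex,
                              same_instance, has_snapshots, vm_running):
    """Return (allowed, reason) for a proposed volume migration,
    encoding the PowerFlex rules described above. Illustrative only."""
    if src_is_powerflex != dst_is_powerflex:
        # PowerFlex <-> non-PowerFlex migration is not supported
        return False, "cross-provider migration not supported"
    if vm_running:
        # Volumes attached to a running VM cannot be migrated
        return False, "volume belongs to a running VM"
    if has_snapshots and not same_instance:
        # Snapshots restrict migration to the same PowerFlex instance
        # (native V-Tree migration within the cluster)
        return False, "volumes with snapshots must stay on the same instance"
    return True, "ok"
```

For example, a stopped VM's volume with snapshots may move within the same instance, but not across instances.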
Other Changes:
- Fixed the duplicate zone-wide pools listed while finding storage pools for migration
- Added a new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.
* Provision to add PowerFlex/ScaleIO storage pool as Primary Storage from UI
* Fixed the PowerFlex/ScaleIO volume name inconsistency in the volume path after migration, caused by a rename failure
Files in this directory:
- README.md
- test_scaleio_volumes.py
README.md
PowerFlex/ScaleIO storage plugin
================================

This directory contains the basic VM and Volume life cycle tests for a PowerFlex/ScaleIO storage pool (on the KVM hypervisor).
Running tests
=============

To run the basic volume tests, first update the following test data for your CloudStack environment:
TestData.zoneId: <id of zone>
TestData.clusterId: <id of cluster>
TestData.domainId: <id of domain>
TestData.url: <management server IP>
TestData.primaryStorage "url": <PowerFlex/ScaleIO storage pool url (see the format below) to use as primary storage>
To enable and run the volume migration tests, also update the following test data:
TestData.migrationTests: True
TestData.primaryStorageSameInstance "url": <PowerFlex/ScaleIO storage pool url (see the format below) of the pool on same storage cluster as TestData.primaryStorage>
TestData.primaryStorageDistinctInstance "url": <PowerFlex/ScaleIO storage pool url (see the format below) of the pool not on the same storage cluster as TestData.primaryStorage>
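Put together, the fields above might look like the following Marvin-style test data dictionary (all ids, IPs, and URLs are placeholders to be replaced with values from your environment):

```python
# Placeholder values -- replace with the ids and URLs of your setup.
test_data = {
    "zoneId": "your-zone-id",
    "clusterId": "your-cluster-id",
    "domainId": "your-domain-id",
    "url": "10.10.2.120",  # management server IP
    "primaryStorage": {
        "url": "powerflex://admin:P%40ssword123@10.10.2.130/cspool",
    },
    # Volume migration tests (optional):
    "migrationTests": True,
    "primaryStorageSameInstance": {
        # pool on the same storage cluster as primaryStorage
        "url": "powerflex://admin:P%40ssword123@10.10.2.130/cspool2",
    },
    "primaryStorageDistinctInstance": {
        # pool on a different storage cluster
        "url": "powerflex://admin:P%40ssword123@10.10.2.131/cspool",
    },
}
```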
PowerFlex/ScaleIO storage pool url format:
powerflex://<api_user>:<api_password>@<gateway>/<storagepool>
where,
- <api_user> : username for API access
- <api_password> : URL-encoded password for API access
- <gateway> : ScaleIO gateway host
- <storagepool> : storage pool name (case sensitive)
For example: "powerflex://admin:P%40ssword123@10.10.2.130/cspool"
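Because the password is URL-encoded, the pool URL decomposes cleanly with standard URL parsing; a quick sketch using the example URL above:

```python
from urllib.parse import unquote, urlparse

url = "powerflex://admin:P%40ssword123@10.10.2.130/cspool"
parsed = urlparse(url)

api_user = parsed.username               # 'admin'
api_password = unquote(parsed.password)  # 'P@ssword123', decoded from P%40ssword123
gateway = parsed.hostname                # '10.10.2.130'
storage_pool = parsed.path.lstrip("/")   # 'cspool' (case sensitive)
```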
Then run the tests using the nosetests runner:
nosetests --with-marvin --marvin-config=<marvin-cfg-file> <cloudstack-dir>/test/integration/plugins/scaleio/test_scaleio_volumes.py --zone=<zone> --hypervisor=kvm
You can also run these tests from an IDE such as PyDev or PyCharm.