Commit Graph

97 Commits

Author SHA1 Message Date
Harikrishna 2c05b63495
kvm: Fix for Revert volume snapshot (#6527)
This PR fixes issue #6209, where the snapshot revert operation fails after certain volume operations such as migrate VM with volume, migrate volume, or reinstall VM.

The root cause is that, after these volume operations, the primary storage entry for the volume gets deleted. The fix is to look up the primary datastore entry with respect to the volume and continue the operation, as sketched below.
2022-07-20 11:34:02 +05:30
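
A minimal sketch of the idea behind the fix, with hedged, partly hypothetical names (the revertSnapshot signature in particular is illustrative): re-resolve the primary datastore from the volume's current pool id at revert time, instead of reusing a reference that migration or reinstall may have invalidated.

    // Re-resolve the datastore from the volume at revert time (sketch).
    void revertVolumeSnapshot(SnapshotInfo snapshot) {
        VolumeVO volume = volumeDao.findById(snapshot.getVolumeId());
        DataStore store = dataStoreManager.getDataStore(volume.getPoolId(), DataStoreRole.Primary);
        if (store == null) {
            throw new CloudRuntimeException("No primary datastore found for volume " + volume.getUuid());
        }
        snapshotService.revertSnapshot(snapshot, store); // hypothetical signature
    }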
slavkap 4004dfcfd8
StorPool storage plugin (#6007)
* StorPool storage plugin

Adds volume storage plugin for StorPool SDS

* Added support for alternative endpoint

Added option to switch to alternative endpoint for SP primary storage

* renamed all classes from Storpool to StorPool

* Address review

* removed unnecessary else

* Removed check about the storage provider

We don't need this check; we can tell whether the snapshot is on StorPool by
its name from the path

* Check that the current plugin supports all functionality before upgrading CloudStack

* Smoke tests for StorPool plug-in

* Fixed conflicts

* Fixed conflicts and added missed Apache license header

* Removed whitespaces in smoke tests

* Added StorPool plugin jar for Debian

the StorPool jar will be included in the cloudstack-agent package for
Debian/Ubuntu
2022-04-14 11:12:01 -03:00
Daniel Augusto Veronezi Salvador 39fad2d9d7
KVM disk-only based snapshot of volumes instead of taking VM's full snapshot and extracting disks (#5297)
* Refactor create volume snapshot with running VM

* Refactor create volume snapshot with stopped VM

* Refactor create volume from snapshot

* Refactor create template from snapshot

* Refactor volume migration (migrateVolume/ migrateVirtualMachineWithVolume)

* Refactor snapshot deletion

* Refactor snapshot reversion

* Adjusts and fix cherry-pick conflicts

* Remove diffuse tests

* Add validation to add the '--delete' flag to the 'virsh blockcommit' command only if the libvirt version is 6.0.0 or higher

* Expunge temporary snapshot only if template creation is from snapshot

* Extract strings to constant

* Remove unused imports

* Fix error on revert backed up snapshot

* Turn method's return to void as it is not used

* Rename method in SnapshotHelper

* Fix folder creation when using SharedMountPoint pool

* Remove static import

* Remove unused method

* Cover taking snapshots on CentOS 7

* Handle the right snapshot flag according to the QEMU version

Co-authored-by: GutoVeronezi <daniel@scclouds.com.br>
2022-04-12 08:14:27 -03:00
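
The libvirt version gate mentioned above can be sketched as follows; this is illustrative (the method name and the virsh-string approach are assumptions, not the PR's actual code). Libvirt only honours '--delete' on 'blockcommit' from 6.0.0 onwards, and encodes its version as major*1000000 + minor*1000 + release.

    // Append --delete only when libvirt is 6.0.0 or newer (sketch).
    private static final long LIBVIRT_VERSION_6_0_0 = 6_000_000L;

    static String buildBlockCommitCommand(String vmName, String disk, long libvirtVersion) {
        StringBuilder cmd = new StringBuilder("blockcommit ")
                .append(vmName).append(' ').append(disk)
                .append(" --active --pivot");
        if (libvirtVersion >= LIBVIRT_VERSION_6_0_0) {
            cmd.append(" --delete"); // also remove the committed snapshot file
        }
        return cmd.toString();
    }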
Harikrishna cd4e7e031a
Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster (#5539)
* Fix resize volume and migrate volume to update volume path if DRS is applied on volume in datastore cluster

* Change in constructors

* Naming changes

* Remove commented code

* Refactor code for more readability

* Addressed review comments on code refactor
2021-10-04 20:58:25 -03:00
Abhishek Kumar 56f4da6dce Merge remote-tracking branch 'apache/4.15' into main 2021-09-02 16:13:33 +05:30
Wei Zhou 4e53997ca2
server: do not remove volume from DB if it fails to expunge from primary storage or secondary storage (#5373)
* server: do not remove volume from DB if it fails to expunge from primary storage or secondary storage

* server/VolumeApiServiceImpl.java: move to method

* update #5373
2021-08-31 13:48:58 -03:00
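
The ordering this fix enforces can be sketched as follows; the expungeFromStorage helper is hypothetical, named here only to show the control flow: the database row is removed only after the storage-side expunge succeeds, so a failed expunge leaves the record available for retry.

    // Expunge from storage first; remove the DB record only on success (sketch).
    boolean expungeVolume(VolumeVO volume) {
        boolean expunged = storageService.expungeFromStorage(volume); // hypothetical helper
        if (!expunged) {
            logger.warn("Failed to expunge volume " + volume.getUuid() + "; keeping DB entry for retry");
            return false;
        }
        volumeDao.remove(volume.getId());
        return true;
    }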
Rohit Yadav a1a3aff2b5 Merge remote-tracking branch 'origin/4.15' into main 2021-08-31 14:29:30 +05:30
slavkap 961e85eb60
Fix of creating volumes from snapshots without backup to secondary storage (#5349)
* Fix of creating volumes from snapshots without backup

When a few snapshots are created only on primary storage and we try to create
a volume or a template from a snapshot, only the first operation is
successful. This is because the snapshot is backed up to secondary storage
with a wrong SQL query. The problem appears on Ceph/NFS but may affect other
storage plugins.
Bypassing secondary storage is implemented only for Ceph primary storage,
and it didn't cover the functionality to create a volume from a snapshot
which is kept only on Ceph

* Address review
2021-08-31 12:46:57 +05:30
Spaceman1984 96c9c5a5e2
Added disk provisioning type support for VMWare (#4640)
* Added disk provisioning type support for VMWare

* Review changes

* Fixed unit test

* Review changes

* Added missing licenses

* Review changes

* Update StoragePoolInfo.java

Removed white space

* Review change - Getting disk provisioning strictness setting using the zone id and not the pool id

* Delete __init__.py

* Merge fix

* Fixed failing test

* Added comment about parameters

* Added error log when update fails

* Added exception when using API

* Ordering storage pool selection to prefer thick disk capable pools if available

* Removed unused parameter

* Reordering changes

* Returning storage pool details after update

* Removed multiple pool update, updated marvin test, removed duplicate enum

* Removed comment

* Removed unused import

* Removed for loop

* Added missing return statements for failed checks

* Class name change

* Null pointer

* Added more info when a deployment fails

* Null pointer

* Update api/src/main/java/org/apache/cloudstack/api/BaseListCmd.java

Co-authored-by: dahn <daan.hoogland@gmail.com>

* Small bug fix on API response and added missing bracket

* Removed datastore cluster code

* Removed unused imports, added missing signature

* Removed duplicate config key

* Revert "Added more info when a deployment fails"

This reverts commit 2486db78dc.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2021-07-16 22:37:42 -03:00
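
The storage-pool ordering mentioned above ("prefer thick disk capable pools if available") amounts to a stable sort over the candidate list; a minimal sketch, assuming a supportsThickProvisioning() accessor that the real code derives from pool capabilities:

    import java.util.Comparator;
    import java.util.List;

    // Try thick-provisioning-capable pools first, keeping thin-only pools as fallback;
    // List.sort is stable, so pools of equal capability keep their relative order.
    static List<StoragePool> reorderPools(List<StoragePool> pools) {
        pools.sort(Comparator.comparingInt(p -> p.supportsThickProvisioning() ? 0 : 1)); // assumed accessor
        return pools;
    }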
Abhishek Kumar 426f14b6ed Merge remote-tracking branch 'apache/4.15'
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-05-18 15:19:20 +05:30
Abhishek Kumar dc91a1fd4d
server: destroy ssvm, cpvm on last host maintenance (#4644)
* server: destroy ssvm, cpvm on last host maintenance

When a single or last Up host enters maintenance, just stopping the SSVM and CPVM will leave VMs behind on the hypervisor side. As these system VMs will be recreated, they can be destroyed.
Fixes #3719

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix methods

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* immediately destroy systemvms

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix destroy

Added a bypassHostMaintenance flag in the Command.java class to allow a command to be handled by the host agent even when the host is in maintenance.
The flag is set true only for the delete commands for the SSVM and CPVM.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* unit test fix

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix missing return statement

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* fix

The VM should be stopped with cleanup before calling expunge, else the server may throw an error with the host in the PrepareForMaintenance state.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* rename

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>

* refactor

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2021-05-14 23:16:15 +05:30
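
A hedged sketch of the flag described above (the shape of the real Command class differs): commands carry a bypass flag, false by default, that the agent layer consults before rejecting work on a host in maintenance; only the system-VM delete commands set it.

    // Illustrative only; the real flag lives on CloudStack's Command base class.
    public abstract class Command {
        private boolean bypassHostMaintenance = false; // default: honour maintenance

        public boolean isBypassHostMaintenance() {
            return bypassHostMaintenance;
        }

        public void setBypassHostMaintenance(boolean bypass) {
            this.bypassHostMaintenance = bypass;
        }
    }

    // When destroying SSVM/CPVM on the last host entering maintenance:
    //     cmd.setBypassHostMaintenance(true);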
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when the VM is synced to the Stopped state on PowerReportMissing (after the grace period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore

- Check whether the router filesystem is writable or not, before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable or not
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added new script: "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable or not

- Fixed an NPE issue where the template is null for DATA disks. Copy the template to the target storage for the ROOT disk (with template id); skip DATA disk(s)

* Addressed some issues for few operations on PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per Powerflex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added a new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
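
The gateway-client pooling described above (a single client per PowerFlex/ScaleIO pool, renewed when the session token expires) can be sketched as a keyed cache; isSessionExpired(), connect(), and getPoolUrl() are assumed names standing in for the plugin's actual client API:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // One gateway client per storage pool; recreate it when the session token
    // has expired (8h lifetime, 10min idle timeout per the PowerFlex docs).
    private final Map<Long, ScaleIOGatewayClient> clients = new ConcurrentHashMap<>();

    ScaleIOGatewayClient getClient(long poolId) {
        return clients.compute(poolId, (id, client) -> {
            if (client == null || client.isSessionExpired()) {         // assumed accessor
                client = ScaleIOGatewayClient.connect(getPoolUrl(id)); // assumed factory
            }
            return client;
        });
    }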
Alexandru Bagu fdb2ee3165
storage: Fix hypervisor type cast to string (#4516)
This PR addresses an error that appears when you try to add a new host. I don't even understand why there was a cast to String in the first place. I will assume some classes send a HypervisorType and some send a string (empty or otherwise). Shouldn't this be addressed to use the same type everywhere? With this fix, adding a new XenServer host works fine.

Co-authored-by: dahn <daan.hoogland@gmail.com>
2020-12-14 11:56:44 +05:30
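
A hedged reconstruction of the gist (not the PR's literal diff): compare HypervisorType values directly rather than casting the incoming object to String, which throws a ClassCastException when a caller passes the enum.

    // Before (fails when callers pass the enum):
    //     String hypervisor = (String) params.get("hypervisorType");
    // After (sketch): normalise both cases to the enum.
    Object raw = params.get("hypervisorType");
    HypervisorType type = (raw instanceof HypervisorType)
            ? (HypervisorType) raw
            : HypervisorType.getType(String.valueOf(raw));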
Harikrishna Patnala 40934ba9ff Fix Travis failures by removing the VMware dependency from storage.
Added a new command class to verify the vCenter details provided while adding primary storage
2020-10-19 14:57:16 +05:30
Harikrishna Patnala d4d372a9a4 Fix addition of datastores with invalid vCenter server details 2020-10-19 14:57:16 +05:30
Harikrishna Patnala 48786b2d31 DataStore Clusters addition as a storage pool 2020-10-19 14:57:15 +05:30
Harikrishna Patnala 6df819028e UI changes and accept any type of datastore as presetup in vmware 2020-10-19 14:57:15 +05:30
Spaceman1984 b586eb22f1
Human readable sizes in logs (#4207)
This PR adds outputting human readable byte sizes in the management server logs, agent logs, and usage records. A non-dynamic global variable is added (display.human.readable.sizes) to control switching this feature on and off. This setting is sent to the agent on connection and is only read from the database when the management server is started up. The setting is kept in memory by the use of a static field on the NumbersUtil class and is available throughout the codebase.

Instead of seeing things like:
2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"106496","bytesReceived":"0","result":"true","details":"","wait":"0",}}] }

The KB, MB, and GB values will be printed out:

2020-07-23 15:31:58,593 DEBUG [c.c.a.t.Request] (AgentManager-Handler-12:null) (logid:) Seq 8-1863645820801253428: Processing: { Ans: , MgmtId: 52238089807, via: 8, Ver: v1, Flags: 10, [{"com.cloud.agent.api.NetworkUsageAnswer":{"routerName":"r-224-VM","bytesSent":"(104.00 KB) 106496","bytesReceived":"(0 bytes) 0","result":"true","details":"","wait":"0",}}] }

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Human+Readable+Byte+sizes
2020-08-13 15:55:16 +05:30
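
A self-contained sketch of the formatting shown in the example log lines (the real helper is a static method on NumbersUtil, gated by the display.human.readable.sizes setting; this version only reproduces the visible output shape):

    // Produces "(104.00 KB) 106496"-style strings.
    static String toHumanReadable(long bytes) {
        if (bytes < 1024) {
            return "(" + bytes + " bytes) " + bytes;
        }
        final String[] units = {"KB", "MB", "GB", "TB"};
        double value = bytes;
        int i = -1;
        while (value >= 1024 && i < units.length - 1) {
            value /= 1024;
            i++;
        }
        return String.format("(%.2f %s) %d", value, units[i], bytes);
    }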
Wido den Hollander c3554ec31d
kvm: For Ceph, define the port in the XML only if a port number has been specified (#4231)
Ceph used to use port 6789 (no need to specify it), but with the messenger v2
from Ceph it switched to port 3300 while 6789 still works.

librados/librbd/libvirt will automatically figure out the ports to use if none is
specified.

Therefore there is no need for CloudStack to explicitly define the port in the XML
passed to Libvirt or Qemu.

Leave it blank if no port number has been defined by the user.
2020-08-05 13:44:40 +05:30
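
The resulting rule is a one-line conditional when generating the monitor host element; a minimal sketch (helper name assumed):

    // Emit the port attribute only when the user configured one; with no port,
    // librados probes the v1 (6789) and v2 (3300) ports itself.
    static String monitorHostXml(String host, Integer port) {
        if (port != null && port > 0) {
            return String.format("<host name='%s' port='%d'/>", host, port);
        }
        return String.format("<host name='%s'/>", host);
    }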
Gabriel Beims Bräscher 93aad24bbb storage: Handle RBD snapshot deletion (#3615)
When deleting volume snapshots, only the records in the database were deleted; the snapshots themselves were not removed from primary storage.

Fixes: #3586
2019-12-08 14:48:51 +05:30
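
A sketch of what handling the deletion entails with the rados-java bindings CloudStack uses on KVM; treat this as approximate (credentials and error handling elided), not the commit's literal code:

    // Remove the snapshot on the Ceph cluster, not just the CloudStack DB row.
    Rados rados = new Rados(authUserName);
    rados.confSet("mon_host", monitorHost);
    rados.confSet("key", authSecret);
    rados.connect();
    IoCTX io = rados.ioCtxCreate(cephPoolName);
    Rbd rbd = new Rbd(io);
    RbdImage image = rbd.open(volumeName);
    image.snapRemove(snapshotName); // delete the snapshot on primary storage
    rbd.close(image);
    rados.ioCtxDestroy(io);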
Gabriel Beims Bräscher 7c5eca9481
Copy template to target KVM host if needed when migrating local <> local storage (#3154)
* Migrate template to target host if needed.

Fix KVM VM local storage live migration by migrating its template to the
target host if needed.

* Address reviewer and add method that updates the DB template reference

* Remove deprecated Config.PrimaryStorageDownloadWait

* Code formatting of @Inject to follow checkstyle
2019-02-05 00:18:29 -02:00
Gabriel Beims Bräscher bf209405e7 Allow KVM VM live migration with ROOT volume on file storage type (#2997)
* Allow KVM VM live migration with ROOT volume on file

* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests

* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
2018-12-14 09:01:28 -02:00
Mike Tutkowski 3db33b7385 Support online migration of a virtual disk on XenServer from non-managed storage to managed storage 2018-08-12 00:23:36 -06:00
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the maven standard module (which only a few were using) and get rid of maven customization for the project structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30
SudharmaJain c670691bfb CLOUDSTACK-8865: Adding SR doesn't create Storage_pool_host_ref entry for disabled host (#876)
This causes VM deployment failures on a host that was disabled while the storage repository was being added.
In the attachCluster function of the PrimaryDataStoreLifeCycle, we were only selecting hosts that are Up and in the Enabled state. If we select all Up hosts here, it will populate the DB properly and fix this issue. Also added a unit test for the attachCluster function.
2017-09-21 10:49:11 +05:30
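
The essence of the fix, with approximate method names (listAllUpAndEnabledHosts is the existing ResourceManager call; the all-Up variant and the persist line are assumptions): populate storage_pool_host_ref for every Up host in the cluster, not only the Enabled ones.

    // Before: only Up + Enabled hosts received a storage_pool_host_ref entry.
    //     List<HostVO> hosts = resourceMgr.listAllUpAndEnabledHosts(Host.Type.Routing, clusterId, podId, dcId);
    // After (sketch): include Up-but-disabled hosts as well.
    List<HostVO> hosts = resourceMgr.listAllUpHosts(Host.Type.Routing, clusterId, podId, dcId); // assumed variant
    for (HostVO host : hosts) {
        storagePoolHostDao.persist(new StoragePoolHostVO(pool.getId(), host.getId(), localPath)); // sketch
    }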
Mike Tutkowski 2bd035d199 Support for backend snapshots with XenServer 2016-05-13 01:02:04 -06:00
Wei Zhou 92344c006d CLOUDSTACK-5863: revert volume snapshot for KVM/QCOW2 2015-08-24 11:01:50 +02:00
Remi Bergsma 64ff67da55 Merge pull request #654 from DaanHoogland/CLOUDSTACK-8656
CLOUDSTACK-8656: do away with more silently ignored exceptions; a lot of messages added.
Some restructuring for test exception assertions and try-with-resources blocks

* pr/654: (29 commits)
  CLOUDSTACK-8656: more logging instead of sysout
  CLOUDSTACK-8656: use catch block for validation
  CLOUDSTACK-8656: class in json specified not found
  CLOUDSTACK-8656: removed unused classes
  CLOUDSTACK-8656: restructure of tests
  CLOUDSTACK-8656: reorganise synchronized block
  CLOUDSTACK-8656: restructure tests to ensure exception throwing
  CLOUDSTACK-8656: validate the throwing of ServerApiException
  CLOUDSTACK-8656: logging ignored exceptions
  CLOUDSTACK-8656: try-w-r removes need for empty catch block
  CLOUDSTACK-8656: try-w-r instead of clunky close-except
  CLOUDSTACK-8656: deal with empty SQLException catch block by try-w-r
  CLOUDSTACK-8656: unnecessary close construct removed
  CLOUDSTACK-8656: message about timed buffer logging
  CLOUDSTACK-8656: message about invalid number from store
  CLOUDSTACK-8656: move cli test tool to separate file
  CLOUDSTACK-8656: exception is the rule for some tests
  CLOUDSTACK-8656: network related exception logging
  CLOUDSTACK-8656: reporting ignored exceptions in server
  CLOUDSTACK-8656: log in case we are on a platform not supporting UTF8
  ...

Signed-off-by: Remi Bergsma <github@remi.nl>
2015-08-14 21:38:49 +02:00
Daan Hoogland a0ba7d310e CLOUDSTACK-8656: log in case we are on a platform not supporting UTF8 2015-08-04 14:41:33 +02:00
Likitha Shetty 13a98dd196 CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage.
Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.
2015-07-01 10:47:36 +05:30
Devdeep Singh a99c9d0e68 Implementation for the ability to disable a storage pool for provisioning
... of new volumes. The following changes are implemented:
1. Disable or enable a pool with the updateStoragePool api; a new 'enabled' parameter is added for the same.
2. When a pool is disabled, its state is updated to 'Disabled' in the db; on enabling, it is updated back to 'Up'. An alert is raised when a pool is disabled or enabled.
3. Updated other storage providers to also honour the disabled state.
4. A disabled pool is skipped by allocators for provisioning of new volumes (see the sketch below).
5. Since the allocators skip a disabled pool for provisioning of volumes, such pools are also not listed as a destination for volume migration.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disabling+Storage+Pool+for+Provisioning

This closes #257

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-05-19 11:16:49 +01:00
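
Items 4 and 5 above reduce to a guard in the allocators; a minimal sketch using CloudStack's StoragePoolStatus enum (the surrounding candidate loop is assumed):

    // Skip any pool that is not Up, which covers the new Disabled state.
    if (pool.getStatus() != StoragePoolStatus.Up) {
        logger.debug("Skipping pool " + pool.getUuid() + " in state " + pool.getStatus());
        continue;
    }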
Rohit Yadav fabab5460c CloudStackPrimaryDataStoreLifeCycleImpl: decode path as UTF8 or fallback
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 6ddaef7a48)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2015-02-05 15:28:22 +05:30
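
A self-contained sketch of "decode path as UTF8 or fallback":

    import java.io.UnsupportedEncodingException;
    import java.net.URLDecoder;

    // Decode a URL-encoded storage path as UTF-8, falling back to the raw
    // value when decoding is impossible.
    static String decodePath(String path) {
        try {
            return URLDecoder.decode(path, "UTF-8");
        } catch (UnsupportedEncodingException | IllegalArgumentException e) {
            return path; // fallback: keep the path as-is
        }
    }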
Mike Tutkowski 2042660a68 Added a "long getUsedIops(StoragePool)" method to the PrimaryDataStoreDriver interface 2014-11-12 13:38:58 -07:00
Mike Tutkowski 57dacf99a2 Changed "boolean connectVolumeToHost(VolumeInfo, Host, DataStore)" to "boolean grantAccess(DataObject, Host, DataStore)"
Changed "void disconnectVolumeFromHost(VolumeInfo, Host, DataStore)" to "void revokeAccess(DataObject, Host, DataStore)"
2014-10-21 16:01:14 -06:00
Likitha Shetty f1e3e83bbf BUG-ID: CLOUDSTACK-7653. VMs are not getting deleted from the hypervisor after being deleted from the UI when using zone-wide primary storage.
While expunging a volume, CS chooses the endpoint to perform the delete operation by selecting any host that has the storage containing the volume mounted on it.
Instead, if the volume to be deleted is attached to a VM, the endpoint chosen by CCP should be the host that contains the VM.
2014-09-30 15:20:15 +05:30
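
The selection rule this describes can be sketched as follows, with hypothetical helper names: prefer the host running the attached VM, and fall back to any host with the pool mounted only for detached volumes.

    // Endpoint choice for volume deletion (sketch).
    Long hostId = null;
    if (volume.getInstanceId() != null) {
        hostId = vmInstanceDao.findById(volume.getInstanceId()).getHostId(); // host running the VM
    }
    if (hostId == null) {
        hostId = anyHostWithPoolMounted(volume.getPoolId()); // hypothetical fallback helper
    }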
Mike Tutkowski 1d2f3300ad Adding support for SolidFire snapshots 2014-09-03 20:09:00 -06:00
Santhosh Edukulla c792f149fb Build failed with checkstyle error because of unused imports, removed them 2014-08-02 14:18:38 +05:30
Edison Su b5db68e2d1 CLOUDSTACK-7226: in case the LUN number is not provided, the addprimarystorage cmd should report an error instead of an NPE
Hugo Trippaers ec43bfce90 Fix false positive in Coverity, simple rewrite.
Mike Tutkowski c344693e48 Inform the applicable storage plug-in's life cycle that capacity (bytes and/or IOPS) can be updated 2014-06-24 14:39:57 -06:00
Damodar Reddy d3ffb7a565 CLOUDSTACK-6444: Fixing the issue with the iSCSI path format /targetIQN/LUN.
Signed-off-by: Koushik Das <koushik@apache.org>
2014-06-24 10:11:44 +05:30
Edison Su 25a6234a5b fix build 2014-03-28 16:24:45 -07:00
edison 6647802d08 CLOUDSTACK-5573: bump gson version to 1.7.2, fix https://code.google.com/p/google-gson/issues/detail?id=354 2014-03-28 16:22:32 -07:00
Niels de Vos fe83a85436 Add support for Primary Storage on Gluster using the libvirt backend
The support for Gluster as Primary Storage is mostly based on the
implementation for NFS. Like NFS, libvirt can address a Gluster environment
through the 'netfs' pool-type.
2014-02-20 14:52:01 +01:00
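
A hedged sketch of the kind of pool definition this enables, via the libvirt Java bindings (host, volume, and mount-point names are made up); the 'netfs' type with source format 'glusterfs' mirrors the NFS handling described above:

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    // Define and start a Gluster-backed 'netfs' pool (illustrative values).
    String poolXml =
        "<pool type='netfs'>" +
        "  <name>gluster-primary</name>" +
        "  <source>" +
        "    <host name='gluster.example.com'/>" +
        "    <dir path='/primary-vol'/>" +
        "    <format type='glusterfs'/>" +
        "  </source>" +
        "  <target><path>/mnt/gluster-primary</path></target>" +
        "</pool>";
    Connect conn = new Connect("qemu:///system");
    StoragePool pool = conn.storagePoolCreateXML(poolXml, 0); // throws LibvirtException on failure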
wrodrigues 0ff943337c fixing a FindBugs "Scariest" warning for a replaceFirst() method call that does not assign the return value
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
2014-02-14 18:37:44 +01:00
Devdeep Singh a24263fe81 CLOUDSTACK-6030: Encrypt the primary and secondary smb storage password when it is stored in the db. 2014-02-05 15:44:09 +05:30
Mike Tutkowski ae35782ccd Merge from 4.3: CLOUDSTACK-5662: XenServer can't discover iSCSI targets with different credentials 2014-01-09 21:36:34 -07:00
Mike Tutkowski 03118c2969 Merge from 4.3: CLOUDSTACK-4810: Enable hypervisor snapshots for CloudStack-managed storage (for XenServer and VMware) 2014-01-09 14:44:35 -07:00
Sanjay Tripathi 46c6b91832 CLOUDSTACK-4402: [deleteStoragePool] There is no way to delete Primary storage if the last host with which it was associated is already removed. 2013-12-27 17:12:38 +05:30
Edison Su 0824c78761 CLOUDSTACK-4939 - Failed to create snapshot (KVM, Multiple hosts, Sharedstorage)
Conflicts:

	engine/storage/datamotion/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
	engine/storage/src/org/apache/cloudstack/storage/endpoint/DefaultEndPointSelector.java
	plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java

Conflicts:

	engine/storage/volume/src/org/apache/cloudstack/storage/volume/VolumeObject.java
	plugins/storage/volume/default/src/org/apache/cloudstack/storage/datastore/driver/CloudStackPrimaryDataStoreDriverImpl.java
2013-12-19 14:17:30 -08:00