... of new volumes. The following changes are implemented (a sketch of the allocator check follows this commit message):
1. Disable or enable a pool with the updateStoragePool API; a new 'enabled' parameter was added for this.
2. When a pool is disabled, its state is updated to 'Disabled' in the DB; on enabling, it is updated back to 'Up'. An alert is raised when a pool is disabled or enabled.
3. Updated the other storage providers to also honour the disabled state.
4. A disabled pool is skipped by the allocators for provisioning of new volumes.
5. Since the allocators skip a disabled pool when provisioning volumes, the pool is also not listed as a destination for volume migration.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disabling+Storage+Pool+for+Provisioning
This closes #257
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
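A minimal sketch of the allocator check from item 4, using illustrative stand-in types rather than the actual CloudStack classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-ins; the real allocators work on StoragePoolVO and
// StoragePoolStatus, but the filtering idea is the same.
enum PoolState { UP, DISABLED }

class Pool {
    final String name;
    final PoolState state;
    Pool(String name, PoolState state) { this.name = name; this.state = state; }
}

public class DisabledPoolFilterSketch {
    // A pool in the Disabled state is never offered for provisioning of new
    // volumes, which also keeps it out of volume-migration destination lists.
    static List<Pool> candidatePools(List<Pool> pools) {
        List<Pool> candidates = new ArrayList<>();
        for (Pool p : pools) {
            if (p.state == PoolState.UP) {
                candidates.add(p);
            }
        }
        return candidates;
    }
}
```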
inconsistent state (NotUploaded or UploadInProgress) and doesn't transition to any terminal
state (Uploaded, UploadError). As a result these volumes/templates cannot be removed. The fix is to handle such
volume/template entries correctly.
Fixed the following:
- Destroying a volume in the 'UploadAbandoned' state resulted in an NPE
- The existing upload-volume functionality interfered with this; added proper checks to prevent that
When we download a volume, an entry is created in the volume_store_ref table.
The volume entry is marked Ready once the download_url gets generated.
When we then migrate that volume, one more entry is created with the same volume id,
and its state is marked Allocated. Later we list only one data object in the datastore
for the state transition during volume migration. If the listed entry's state is Allocated
the migration passes; otherwise it fails.
The fix below removes the randomness and gives priority to the volume entry made for
migration (download_url/extracturl is null in the migration case); see the sketch after this message. Giving priority to the
download-volume case is not needed, as there is only one entry in that case and hence no randomness.
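A rough sketch of that prioritisation, with the volume_store_ref row modeled as a plain class (field names are assumptions):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Plain stand-in for a volume_store_ref row.
class VolumeStoreRef {
    Long volumeId;
    String extractUrl; // null for the entry created for migration
    String state;      // "Allocated" for the migration entry, "Ready" after download
}

public class MigrationEntryPickSketch {
    // Prefer the migration entry (extractUrl == null); false sorts before true,
    // so null-URL entries come first. In the plain download case there is only
    // one entry, so the same code still works.
    static Optional<VolumeStoreRef> pickForMigration(List<VolumeStoreRef> entriesForVolume) {
        return entriesForVolume.stream()
                .sorted(Comparator.comparing((VolumeStoreRef e) -> e.extractUrl != null))
                .findFirst();
    }
}
```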
While taking a snapshot of a volume, CS chooses the endpoint to perform the backup snapshot operation by selecting any host that has the storage containing the volume mounted on it.
Instead, if the volume is attached to a VM, the endpoint chosen by CS should be the host that runs the VM.
Likewise, while expunging a volume, CS chooses the endpoint to perform the delete operation by selecting any host that has the storage containing the volume mounted on it.
Instead, if the volume to be deleted is attached to a VM, the endpoint chosen by CS should be the host that runs the VM. A sketch of this selection follows.
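A hedged sketch of the endpoint choice; the types are stand-ins for the real endpoint-selection code:

```java
import java.util.List;
import java.util.Random;

class Host {
    final long id;
    Host(long id) { this.id = id; }
}

public class EndpointChoiceSketch {
    // If the volume is attached to a VM, use that VM's host as the endpoint;
    // otherwise fall back to any host that has the storage mounted.
    static Host chooseEndpoint(Host vmHost, List<Host> hostsWithStorageMounted) {
        if (vmHost != null) {
            return vmHost;
        }
        return hostsWithStorageMounted.get(new Random().nextInt(hostsWithStorageMounted.size()));
    }
}
```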
to support IOPS capacity control in a cluster wide storage pool and a
local storage pool
to enable hypervisor type check, storage type check for root volume and
avoid list check
Since the original commit (31de58edab) contained
a bug, it was reverted and this commit is a revised one.
1. Adding the missing template/volume URL expiration functionality
2. Improvement - While deleting the volume during expiration, use rm -rf, as VMware now creates a directory
3. Improvement - Use the standard Answer so that the error gets logged in case deletion of the expired link didn't work (sketch below)
4. Improvement - In case of a domain change, expire the old URLs
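A self-contained sketch of item 3, with Answer reduced to a minimal stand-in for the CloudStack agent Answer class:

```java
// Minimal stand-in: the real Answer carries the command, a result flag and details.
class Answer {
    final boolean result;
    final String details;
    Answer(boolean result, String details) { this.result = result; this.details = details; }
}

public class ExpireUrlAnswerSketch {
    // Returning a standard Answer means a failed deletion of an expired link is
    // reported with details (and logged by the caller) instead of being swallowed.
    static Answer deleteExpiredLink(Runnable deletion) {
        try {
            deletion.run();
            return new Answer(true, "expired URL link removed");
        } catch (RuntimeException e) {
            return new Answer(false, "failed to remove expired URL link: " + e.getMessage());
        }
    }
}
```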
When a deployment plan with hostid included was passed to the local storage pool allocator, it returned all the local
storage pools in the cluster instead of just the local pool on the given host in the plan.
This was happening because the search at host level was done only for the data disk. Fixed this.
Additionally, the query to list the storage pools on a host was failing if the pool did not have
tags. Fixed the query too; see the sketch below.
CLOUDSTACK-6802: Fix for not being able to attach a data disk on local storage. This issue gets fixed
by the above fix too: the query to list pools on a host was failing if there were no
tags on the storage pool.
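A sketch of the corrected host-scoped lookup; the types are stand-ins, and the untagged-pool behaviour mirrors the query fix:

```java
import java.util.List;
import java.util.stream.Collectors;

class LocalPool {
    long hostId;
    List<String> tags; // may be empty: such pools must still be listed
}

public class HostScopedPoolSearchSketch {
    // The hostId from the deployment plan constrains the search for every volume
    // (root and data disk alike), and pools without tags still match when the
    // offering requests no tags.
    static List<LocalPool> poolsOnHost(List<LocalPool> clusterPools, long hostId, List<String> wantedTags) {
        return clusterPools.stream()
                .filter(p -> p.hostId == hostId)
                .filter(p -> wantedTags.isEmpty() || p.tags.containsAll(wantedTags))
                .collect(Collectors.toList());
    }
}
```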
While a template is downloading, template_store_ref has a leftover entry not in the Ready
state; when creating a VM from that template, the code checks neither
the zone id nor the template_store_ref state (see the sketch below).
Conflicts:
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/VolumeOrchestrator.java
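A sketch of the missing validation, with the class standing in for a template_store_ref row:

```java
class TemplateStoreRef {
    long zoneId;
    String state; // e.g. "Allocated", "Downloading", "Ready"
}

public class TemplateReadyCheckSketch {
    // Before creating a VM from a template, the leftover entry must be in the
    // requested zone and in the Ready state.
    static boolean usableForDeploy(TemplateStoreRef ref, long requestedZoneId) {
        return ref.zoneId == requestedZoneId && "Ready".equals(ref.state);
    }
}
```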
A volume placed on a shared storage pool (SMB) and attached to a running VM can be live migrated to another shared storage
pool. Also a VM and its volumes can be live migrated to another host and storage pool respectively.
2) Corrected some logging in MidoNetPublicNetworkGuru - removed the .toString() call on objects in the log body, as toString() is called on the object by default when using log4j (example below)
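An example of that logging change, assuming a log4j 1.x style logger as used elsewhere in the codebase:

```java
import org.apache.log4j.Logger;

public class LoggingStyleSketch {
    private static final Logger s_logger = Logger.getLogger(LoggingStyleSketch.class);

    static void logAllocation(Object network) {
        // Before: s_logger.debug("Allocated network " + network.toString());
        // After: the explicit toString() is redundant (concatenation already
        // invokes it) and would NPE if network were null.
        s_logger.debug("Allocated network " + network);
    }
}
```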
encoded. This caused the createStoragePool or addImageStore command to fail if special
characters were present. Updated the code to pass user, password and domain as part
of the details while adding primary or secondary storage. Also made changes on the server side to
handle it.
Create two storage pools, one with storage tag X, one with storage tag Y.
Create a service offering with storage tag X.
Create a disk offering with storage tag Y.
Attempt to deploy a virtual machine with a data disk using the given offerings: it fails.
The deployment planner keeps a global 'avoid' object. It loops through each volume to
be created, asking the storage allocators for matching pools and passing this avoid object.
The first disk matches a pool or pools, ALL other pools are added to the avoid object, then the
deployment planner attaches the matching pools to a list for that disk.
The second disk matches a pool, all other pools are added to the avoid object, then the deployment
planner says "wait, the matching pool is in avoid, can't use it". Oops. In fact, at this
point ALL pools are in avoid (unless there are other pools that have both tags).
Need to remove the matching pool from the avoid set during each select phase; see the sketch below.
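A minimal sketch of that fix, using pool ids in place of the real StoragePool objects:

```java
import java.util.List;
import java.util.Set;

public class AvoidSetFixSketch {
    // The allocator put every non-matching pool into the shared avoid set; once
    // a disk has its matches, pull them back out so the next disk's selection
    // isn't poisoned by pools that matched a previous disk.
    static void keepMatchesUsable(List<Long> matchedPoolIds, Set<Long> avoid) {
        for (Long poolId : matchedPoolIds) {
            avoid.remove(poolId);
        }
    }
}
```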
1) Added createDetail to the ResourceDetailDao interface to provide a generic way of creating resourceDetail DB objects
2) Added resource details support for firewall rules
The cluster- and zone-wide storage pool allocators returned shared pools even for volumes meant to be on a local storage pool.
If the VM uses a local disk then the cluster and zone storage allocators should not handle it and should return null or an empty list (sketch after the conflict notes below).
Also fixed the deployment planner to avoid a cluster if
a. the avoid set returned by the storage pool allocators is empty, OR
b. all local or shared pools in the cluster are in the avoid state
Conflicts:
engine/storage/src/org/apache/cloudstack/storage/allocator/ClusterScopeStoragePoolAllocator.java
engine/storage/src/org/apache/cloudstack/storage/allocator/ZoneWideStoragePoolAllocator.java
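A sketch of the allocator guard described above (DiskProfile is a stand-in):

```java
import java.util.List;

class DiskProfile {
    boolean useLocalStorage;
}

public class SharedScopeGuardSketch {
    // Cluster- and zone-wide allocators only deal in shared pools; a request
    // for local storage is returned to the chain unanswered.
    static List<String> allocate(DiskProfile profile, List<String> sharedPoolCandidates) {
        if (profile.useLocalStorage) {
            return null; // the local storage pool allocator handles this case
        }
        return sharedPoolCandidates;
    }
}
```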
Introduction of a new Transaction API that is more consistent with the style
of Spring's transaction management. The existing Transaction class was renamed
to TransactionLegacy. All of the non-DAO code in the management server has been
updated to use the new Transaction API; a sketch of the callback style follows.
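A self-contained sketch of the callback style the new API follows, modeled on Spring's TransactionTemplate (the real CloudStack signatures may differ in detail):

```java
interface TransactionCallback<T> {
    T doInTransaction();
}

class Transaction {
    // begin/commit/rollback are elided; the point is that callers hand the
    // transactional work to execute() instead of driving the lifecycle themselves.
    static <T> T execute(TransactionCallback<T> callback) {
        try {
            T result = callback.doInTransaction();
            // commit()
            return result;
        } catch (RuntimeException e) {
            // rollback()
            throw e;
        }
    }
}

public class TransactionStyleSketch {
    public static void main(String[] args) {
        String id = Transaction.execute(() -> "persisted-row-id");
        System.out.println(id);
    }
}
```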
These changes are a joint effort between Edison and me to refactor some
of the code around snapshotting VM volumes and creating
templates/volumes from VM volume snapshots. In general, we were working
towards allowing PrimaryDataStoreDrivers to create snapshots on primary
storage without requiring the snapshots to be transferred to secondary
storage.
High level changes:
-Added uuid to NfsTO, SwiftTO & S3TO to cut down on the requirement of
PrimaryDataStoreTO and ImageStoreTO which don't really serve much of a
purpose
-Initial work towards enabling the reverting of VM volumes from snapshots
-Added hypervisor commands for introducing and forgetting new hypervisor
objects (snapshots, templates & volumes)
Signed-off-by: Edison Su <sudison@gmail.com>
ACS is now composed of a hierarchy of Spring application contexts.
Each plugin can contribute configuration files to add to an existing
module or create its own module.
Additionally, for the mgmt server, ACS custom AOP is no longer used;
instead we use Spring AOP to manage interceptors.
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context (sketch below). This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
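A self-contained sketch of the listener idea, with names taken from the prose (onEntry/onLeave) rather than from the actual ACS interfaces:

```java
import java.util.ArrayList;
import java.util.List;

interface ContextListener {
    void onEntry(); // e.g. verify no stray DB transaction, set up the CallContext
    void onLeave(); // e.g. tear down the CallContext, assert transactions closed
}

class ManagedContextSketch {
    private final List<ContextListener> listeners = new ArrayList<>();

    void register(ContextListener listener) {
        listeners.add(listener);
    }

    // Wraps an entry point of the system so all listeners fire around the work.
    void runManaged(Runnable work) {
        listeners.forEach(ContextListener::onEntry);
        try {
            work.run();
        } finally {
            listeners.forEach(ContextListener::onLeave);
        }
    }
}
```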
It was implemented by extending the NFS provider. Its validation was updated so that you can pass it a URL containing the
details of a CIFS share. The code that mounts NFS shares was extended to allow it to do the same for CIFS shares. Otherwise,
the secondary storage code is left unchanged.
Private templates now get copied to only one of the image stores, chosen randomly, as was the case earlier. Don't throw an exception when uploading volumes while there are multiple image stores; instead choose one of them randomly.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Available bytes were getting stored in the used-bytes property of local storage pools. As a result, for newly added local pools CloudStack thought there was no space available and generated alerts.
Filter the detail map sent over the wire into a Map<String, String> before
it is processed underneath by the storage lifecycle (sketch below).
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
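A sketch of that filtering step (the lifecycle layer expects Map<String, String>):

```java
import java.util.HashMap;
import java.util.Map;

public class DetailMapFilterSketch {
    // Keep only String-valued entries from the wire-level map before handing it
    // to the storage lifecycle.
    static Map<String, String> filterDetails(Map<String, Object> raw) {
        Map<String, String> details = new HashMap<>();
        for (Map.Entry<String, Object> entry : raw.entrySet()) {
            if (entry.getValue() instanceof String) {
                details.put(entry.getKey(), (String) entry.getValue());
            }
        }
        return details;
    }
}
```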
Description:
When retrieving a primary datastore, handle the case of non-existing datastores/hosts.
Throw an exception, handle it in the datastore mgmt layer, and pass it onward
to the create storage pool API.
Breaking down storage components among oss, nonoss and simulator
contexts. The default components are loaded by:
OSS - applicationContext + componentContext
NonOSS - applicationContext + nonossComponentContext
Simulator - applicationContext + simulatorComponentContext
Provider beans are selectively overridden for simpler configuration.
Where possible, beans are loaded by local reference.
Unfortunately, <list merge="true"> does not work perfectly for merging
the provider beans, causing a bit of bloat. Explore later.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
CLOUDSTACK-3042 - handle Scaling up of vm memory/CPU based on the presence of XS tools in the template
This also takes care of updating the VM after XS tools are installed in the VM and sets memory values accordingly to support dynamic scaling after a stop/start of the VM
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
for a number of commands participating in the VM deployment process, as parallel deployment is supported on the hypervisor side.
The behavior is controlled by global config variables:
"execute.in.sequence.hypervisor.commands" (false by default) sets/resets the synchronization for commands:
=========================
StartCommand
StopCommand
CreateCommand
CopyVolumeCommand
"execute.in.sequence.network.element.commands" (false by default) sets/resets the synchronization for commands:
==========================
DhcpEntryCommand
SavePasswordCommand
UserDataCommand
VmDataCommand
As a part of the fix, increased the global lock timeout to 30 mins in several VR scripts:
===========================
edithosts.sh
savepassword.sh
userdata.sh
to support situations when multiple concurrent calls to the script are being made.
Added hypervisor type to CreateStoragePoolCmd and to the storage pool responses.
DatastoreLifeCycle now considers the hypervisor type while attaching a datastore to a zone.
ZoneWideStoragePoolAllocator filters zone-wide primary storage pools by hypervisor type along with the tags in the disk profile (sketch below).
Hypervisor type is a mandatory parameter if scope is specified as ZONE while creating a primary storage pool.
As of now, KVM and VMware are allowed to use ZoneWideStoragePoolAllocator.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
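A sketch of the zone-wide filtering; the types are stand-ins for the real pool and disk-profile classes:

```java
import java.util.List;
import java.util.stream.Collectors;

class ZoneWidePool {
    String hypervisorType; // e.g. "KVM", "VMware"
    List<String> tags;
}

public class ZoneWideAllocatorSketch {
    // Candidate zone-wide pools must match the disk profile's hypervisor type
    // in addition to its storage tags.
    static List<ZoneWidePool> filter(List<ZoneWidePool> pools, String hypervisorType, List<String> wantedTags) {
        return pools.stream()
                .filter(p -> p.hypervisorType.equalsIgnoreCase(hypervisorType))
                .filter(p -> p.tags.containsAll(wantedTags))
                .collect(Collectors.toList());
    }
}
```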
RBD format 2 supports cloning (aka layering), where one base image can serve
as a parent image for multiple child images.
This enables fast deployment of a large number of virtual machines; it also
saves space on the Ceph cluster and improves performance due to better caching.
qemu-img doesn't support RBD format 2 (yet), so to enable these functions the
RADOS/RBD Java bindings are required.
This patch also enables deployment of system VMs on RBD storage pools. Since we
no longer require a patch disk for passing the boot arguments, we are able to deploy
these VMs on RBD.
- Changes merged from the planner_reserve branch
- Exposing deploymentplanner as an optional parameter while creating a service offering
- Changes to DeploymentPlanningManagerImpl to make sure host reserve/release happens between conflicting planner usages