Commit Graph

239 Commits

Author SHA1 Message Date
prachi fc784f1530 Bug 9585 - Existing Data Disk is being destroyed and recreated on Stop and Start of a User VM.
Changes:
- When the ROOT volume of a VM is found to be READY, changed the planner to reuse the pool for every volume (root or data) that is READY and whose pool is neither in maintenance nor in the avoid state.
- If the ROOT volume is not ready, we don't care about the DATA disk; both get re-allocated.
- When a pool is reused for a ready volume, the planner does not call the storage pool allocators, and such volumes are not assigned a pool in the deployment destination returned by the planner. Accordingly, StorageManager::prepare won't recreate these volumes, since they are not mentioned in the destination.
2011-04-27 11:56:14 -07:00
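The reuse rule above can be sketched as a simple predicate. This is an illustrative sketch only; `PoolReuseCheck` and `canReusePool` are hypothetical names, not CloudStack's actual API.

```java
// Hypothetical sketch of the pool-reuse check described in the commit above.
public class PoolReuseCheck {
    public enum VolumeState { READY, ALLOCATED }

    public static class Pool {
        final boolean inMaintenance;
        final boolean inAvoidSet;

        public Pool(boolean inMaintenance, boolean inAvoidSet) {
            this.inMaintenance = inMaintenance;
            this.inAvoidSet = inAvoidSet;
        }
    }

    // Reuse the existing pool only when the volume is READY and its pool
    // is neither in maintenance nor in the planner's avoid set.
    public static boolean canReusePool(VolumeState state, Pool pool) {
        return state == VolumeState.READY
                && pool != null
                && !pool.inMaintenance
                && !pool.inAvoidSet;
    }
}
```

A volume that fails this check falls through to the storage pool allocators and gets a fresh pool assignment.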
Alex Huang d5c7258b71 bug 9597: Fixed the recreatable problem. Also added the 2.2.1 upgrade step 2011-04-26 15:43:43 -07:00
Alex Huang f6258dae08 bug 9445: Signal alert for the host if a primary storage pool was unavailable on that host 2011-04-26 15:14:55 -07:00
prachi ee598d44c5 Bug 9548 [Cloud Stack Upgrade - 2.1.8 to 2.2.4] System VM's Volumes Recreation is not happening on an event of New Volume creation Failures
Changes:
- The reason was that the old volume's templateId was being updated before volume creation was attempted. So on the retry, we didn't find a difference between the volume's templateId and the VM's templateId and did not enter the recreation logic.

- Fix is to update the new volume's templateId with the VM's templateId while creating the new volume. The old volume's templateId stays the same and the volume is marked as 'Destroy' when a new volume is created.
2011-04-26 11:55:05 -07:00
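The fix can be sketched as follows; `Volume`, `needsRecreation`, and `recreate` are hypothetical names used only to illustrate the behavior described in the commit.

```java
// Sketch of the templateId-based recreation fix: the old volume's
// templateId is left untouched so a failed creation can be retried.
public class VolumeRecreation {
    public static class Volume {
        long templateId;
        String state = "Ready";

        Volume(long templateId) { this.templateId = templateId; }
    }

    // Recreation is attempted whenever the volume's templateId differs
    // from the VM's current templateId.
    public static boolean needsRecreation(Volume v, long vmTemplateId) {
        return v.templateId != vmTemplateId;
    }

    // Fixed behavior: the replacement volume carries the VM's templateId,
    // while the old volume keeps its original templateId and is marked
    // 'Destroy' only once the new volume exists.
    public static Volume recreate(Volume oldVolume, long vmTemplateId) {
        Volume newVolume = new Volume(vmTemplateId);
        oldVolume.state = "Destroy";
        return newVolume;
    }
}
```

Because `needsRecreation` still sees the unchanged old templateId after a failure, the retry path re-enters the recreation logic as intended.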
alena c8f4dacb0a bug 9550: get storage pool to host mappings before doing processDisconnect, because these references are deleted as part of the processDisconnect call.
status 9550: resolved fixed
2011-04-23 18:34:39 -07:00
anthony eccec4ff11 bug 9541: fix one snapshot DB migrate, one java check and one script typo
status 9541: resolved fixed
2011-04-22 12:50:40 -07:00
Abhinandan Prateek 48eebe8e7a bug 9503: race condition in taking ownership of a Host when a Management server is restarted
status 9503: resolved fixed
2011-04-20 16:37:12 +05:30
anthony c6018bdc60 bug 9455: when host is disconnected, also remove entry in storage_pool_host_ref
status 9455: resolved fixed
2011-04-14 14:34:32 -07:00
anthony 21c936ab15 bug 9411:
1. if adding a storage pool fails, remove the entry from the DB
2. in introduce SR, create a PBD for the master host

status 9411: resolved fixed
2011-04-14 11:16:45 -07:00
root 2ab9238ae7 Bug 9440: we specify the wrong disk size when attaching storage device with custom size offering
the volume size, which is in bytes, was wrongly used to set the size of the disk offering, which is in MB
2011-04-14 21:25:23 +05:30
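The unit mismatch above comes down to a one-line conversion. A minimal sketch, with `bytesToMb` as an illustrative helper name rather than the actual fix:

```java
// The volume size is stored in bytes, but the disk offering size is
// expressed in MB, so the byte value must be converted before it is
// used to size the offering.
public class SizeConversion {
    public static long bytesToMb(long sizeInBytes) {
        return sizeInBytes / (1024L * 1024L);
    }
}
```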
anthony 3f4c5225dd fixed transaction usage 2011-04-13 19:35:13 -07:00
anthony 8d36af0033 fixed NPE 2011-04-13 16:03:41 -07:00
alena 98be7ea0f7 bug 9425: fixed detached volume removal.
status 9425: resolved fixed
2011-04-13 15:52:55 -07:00
prachi 5a73309e75 Bug 9387: Recreate system vms if template id changed....
Changes:
While starting a System VM:
- If the ROOT volume is READY, we check whether the volume's templateId matches the system VM's template.
- If it does not match, we update the volume's templateId and ask the deployment planner to reassign a pool to this volume even though it is READY.

In general:
- If a root volume is READY, we remove its entry from the DeployDestination before calling StorageManager::prepare().
- StorageManager creates a volume if a pool is assigned to it in the DeployDestination passed in.
- If a volume has no pool assigned to it in the DeployDestination, it means the volume is ready and already has a pool allocated to it.
2011-04-13 13:54:08 -07:00
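The deploy-destination contract described above can be sketched with a plain map. This is a simplification with assumed names; CloudStack's real DeployDestination carries much more than a volume-to-pool mapping.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: prepare() only (re)creates volumes that appear in the
// destination's volume-to-pool map; READY volumes are dropped first.
public class DeployDestinationSketch {
    // volumeId -> poolId; only volumes present here get (re)created.
    private final Map<Long, Long> volumeToPool = new HashMap<>();

    public void assignPool(long volumeId, long poolId) {
        volumeToPool.put(volumeId, poolId);
    }

    // A READY root volume is removed from the destination before
    // StorageManager::prepare() runs, so prepare() leaves it alone.
    public void dropReadyVolume(long volumeId) {
        volumeToPool.remove(volumeId);
    }

    // Absence from the map means the volume is ready and already
    // has a pool allocated to it.
    public boolean willRecreate(long volumeId) {
        return volumeToPool.containsKey(volumeId);
    }
}
```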
anthony e001436d94 fixed NPE when delete storagepool 2011-04-13 11:02:12 -07:00
prachi e2451f6b15 More changes for 9387:
Added checks in the StorageManagerImpl::prepare() method to avoid NPEs if the DeployDestination passed in is null.
2011-04-12 18:31:37 -07:00
prachi fc35aed2c9 Bug 9387 - Recreate system vms if template id changed...
Changes:
- Planner must reassign the storage pool if the template id for system vms has changed.  StorageManager must then recreate the volume if the volume has been
reassigned.  This is needed to do automatic update of the system template.
2011-04-12 18:31:27 -07:00
alena 90f79a8211 bug 9398: removed resource_type from volumes table as we no longer use it. Corresponding db upgrade scripts are updated
status 9398: resolved fixed
2011-04-11 15:25:34 -07:00
Murali Reddy 95cd5b6f3b bug 9273: [Stress Test] 'Count' in resource_count table has negative values
changing destroy volume logic to decrement the volume resource count only for user VMs
2011-04-09 02:01:54 +05:30
anthony 5353e4abac hostid and poolid may overlap, fixed deletePoolStats 2011-04-05 16:01:30 -07:00
anthony 436dccb6d7 bug 9189: fixed it in master, modifystoragepool doesn't try to create/import any more, will port it to 2.1.x 2011-03-28 19:11:20 -07:00
Alex Huang b2eda8c71b Changes to the planners 2011-03-28 09:48:33 -07:00
Alex Huang 9d158dc060 Removed the async create status for volume now that our customers don't use it 2011-03-24 20:04:23 -07:00
prachi 923f562aa8 Bug 6873: disable/enable mode for clusters (and pods and zones and hosts)
- Added a new flag 'allocation_state' to zone,pod,cluster and host
- The possible values for this flag are 'Enabled' or 'Disabled'
- When a new zone,pod,cluster or host is added, allocation_state is 'Disabled' by default.
- For existing zone,pod,cluster or host, the state is 'Enabled'.
- All Add/Update/List  commands for each of zone,pod,cluster or host can now take a new parameter 'allocationstate'
- If 'allocation_state' is 'Disabled', allocators skip that zone, pod, cluster, or host.
- For a root admin, ListZones lists all zones including the 'Disabled' zones. But for any other user, the 'Disabled' zones are not included in the response.
- For any use case that creates/deploys/adds/registers a resource and takes a zone as a parameter, we now check whether the zone is 'Disabled'. If so, only a root admin can perform the operation. Adding volumes, snapshots, and templates are examples of this use case.
- To enable the root admin to test a particular pod/cluster/host, deployVM command takes in 'host_id' parameter that can be passed in only by root admin.
If this parameter is passed in by the admin, allocators do not search for hosts and use only that host. Storage pools are searched within the cluster of that host.
If the VM cannot be deployed to that host, allocation and deployVM fail without retrying.
2011-03-23 22:15:35 -07:00
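The allocation and visibility rules above reduce to two small predicates. A sketch under assumed names; the real feature touches zones, pods, clusters, and hosts uniformly.

```java
// Sketch of the allocation_state rules from the commit above.
public class AllocationStateRules {
    public enum AllocationState { ENABLED, DISABLED }

    // Allocators skip any zone/pod/cluster/host whose state is Disabled.
    public static boolean allocatorCanUse(AllocationState state) {
        return state == AllocationState.ENABLED;
    }

    // ListZones: root admins see Disabled zones; other users do not.
    public static boolean visibleInListZones(AllocationState state,
                                             boolean isRootAdmin) {
        return isRootAdmin || state == AllocationState.ENABLED;
    }
}
```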
anthony bc0968d900 check the object before using it 2011-03-23 14:54:31 -07:00
anthony 41e75ab611 bug 9107: don't allow move volume if there are snapshot policy or snapshot on this volume
status 9107: resolved fixed
2011-03-22 14:40:21 -07:00
anthony 7f12876be1 bug 9087: destroy the source volume after updating the volume entry
status 9087: resolved fixed
2011-03-22 11:23:36 -07:00
Alex Huang 109c4eae0e restarting domr is close to working 2011-03-21 17:56:00 -07:00
Kelven Yang 65d4cc98be Allow template re-deployment once the template is deleted from the hypervisor while CloudStack still holds outdated status 2011-03-17 17:59:51 -07:00
alena 63593c5057 bug 8510: increment resource count for volume after it's created 2011-03-15 18:06:00 -07:00
nit f88fb1e505 bug 8887 : Stats Calculation Improvement - Storage stats won't update the DB anymore and will be kept "in memory" just like other stats. The listCapacityCmd that consumes them (secondary storage used and primary storage used) will be built from the in-memory maps rather than the DB, which will no longer hold secondary and primary storage usage. 2011-03-14 18:45:00 -07:00
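The in-memory scheme described in that commit can be sketched with a concurrent map keyed by pool. Names here (`InMemoryStorageStats`, `usedBytesByPool`) are assumptions for illustration, not CloudStack's actual fields.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: storage usage kept in memory per pool instead of being
// written to the DB; capacity queries read this map directly.
public class InMemoryStorageStats {
    private final ConcurrentHashMap<Long, Long> usedBytesByPool =
            new ConcurrentHashMap<>();

    // Called by the stats collector on each polling cycle.
    public void record(long poolId, long usedBytes) {
        usedBytesByPool.put(poolId, usedBytes);
    }

    // Called by listCapacity-style consumers; unknown pools report zero.
    public long usedBytes(long poolId) {
        return usedBytesByPool.getOrDefault(poolId, 0L);
    }
}
```

A concurrent map fits here because the collector thread writes while API threads read.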
prachi 3624fee85d Changed the interface in StoragePoolAllocator to avoid a potential NPE in LocalStoragePoolAllocator. Allocators were taking an instance of VM enclosed inside VirtualMachineProfile.
However, in the case of createVolume from a snapshot, there is no VM associated. So the VM passed in is null, which can cause an NPE.

Allocators hardly use the VM instance. LocalStoragePoolAllocator was mainly using it to check whether the host has capacity. But it need not do this check, since HostAllocators do it anyway.
So the use of VM was removed from the StoragePoolAllocators.
2011-03-09 10:12:04 -08:00
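The interface change amounts to dropping the VM parameter so a null VM can never reach the allocator. A before/after sketch with assumed names and deliberately simplified signatures:

```java
// Illustration of the StoragePoolAllocator signature change; the real
// CloudStack interfaces take richer profile/plan objects.
public class AllocatorSignatureChange {
    // Before: allocators received the VM, which is null when creating
    // a volume from a snapshot, risking an NPE inside the allocator.
    interface OldAllocator {
        Long allocate(long diskOfferingId, Object vmOrNull);
    }

    // After: the VM is gone from the signature; the host-capacity check
    // it enabled is left to the host allocators, which already do it.
    interface NewAllocator {
        Long allocate(long diskOfferingId);
    }

    // Demo implementation of the new, VM-free interface.
    public static final NewAllocator FIRST_POOL = diskOfferingId -> 1L;
}
```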
anthony 5b1a421e62 bug 8712: prepare from snapshot db migration 2011-03-08 17:10:27 -08:00
Alex Huang 263244c938 more logging 2011-03-04 11:37:35 -08:00
anthony cd27202a26 bug 8216: create volume from snapshot can take a disk_offering_id, if disk_offering_id is not specified, use the one from original volume
status 8216: resolved fixed
2011-02-28 16:28:41 -08:00
prachi 889827b63a Bug 7845 - Productize DeploymentPlanner
Bug 7723 - merge or re-write host tagging into master / 2.2
Bug 7627 - Need more logging for Allocators
Bug 8317 - Add better resource allocation failure messages

Changes for Deployment Planner to use host and storagePool allocators to find deployment destination.
Also has the changes for host tag feature.
Improved the logging for allocators.
2011-02-28 13:47:51 -08:00
anthony 8658fbd1d4 fixed build 2011-02-28 10:32:52 -08:00
anthony 1970161844 bug 8714: support parallel recursive snapshots
a snapshot no longer depends on its volume, so a volume can be removed even if there are snapshots on it

status 8714: resolved fixed
2011-02-25 22:17:13 -08:00
abhishek a84d34cc72 bug 8216: we do not need to create an event in createVolFromSnapshot(), as we do it in alloc vol 2011-02-25 12:00:27 -08:00
abhishek bae62f844d bug 8742,8216: reverting to use the original volume's disk offering id while creating a volume from a snapshot. Also changing event generation so that an event is generated at data volume creation (as opposed to attaching to a VM); we will correspondingly generate an event at the data volume's deletion 2011-02-25 12:00:27 -08:00
kishan 8eb665246e bug 7935: Included hypervisor type to vm usage records
status 7935: resolved fixed
2011-02-24 20:08:12 +05:30
Alex Huang c22b37e402 latest work on db migration 2011-02-22 18:23:05 -08:00
anthony 94a9c86f46 try to send the create command first to the host where CPU & memory are allocated 2011-02-22 16:44:58 -08:00
abhishek 1afc62e98f bug 8216: creating a vol from a snapshot will take in a priv disk offering id, which is used only for the tags; size is still taken from the original vol which the snapshot is based off of 2011-02-22 12:06:00 -08:00
nit 2efdc9d62b bug 8471: Check whether secondary storage URL is null when copying volumes across storage pools. 2011-02-22 17:49:56 +05:30
alena 15f59e6f58 bug 8637: throw ResourceAllocationException when resource limit is exceeded.
status 8637: resolved fixed
2011-02-18 12:26:58 -08:00
Kelven Yang 8695e7250c Update template and storage manager to allow hypervisor based command delegation 2011-02-18 11:37:50 -08:00
anthony 569bbfe585 bug 8513: creating volume from snapshot depends on the original volume
status 8513: resolved fixed
2011-02-16 15:47:05 -08:00
kishan 75e596bb80 bug 7952, 8363: Fixed usage events for Vm destroy and recover
status 7952, 8363: resolved fixed
2011-02-08 16:57:46 +05:30
alena a502b497f2 bug 8446: fixed creating volume from diskOffering with custom size
status 8446: resolved fixed
2011-02-07 12:42:46 -08:00