Commit Graph

233 Commits

Author SHA1 Message Date
anthony 60768d0014 bug 9411:
1. if adding a storage pool fails, remove the entry from the DB
2. in introduce SR, create a PBD for the master host

status 9411: resolved fixed
2011-04-14 11:17:24 -07:00
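
A minimal sketch of the rollback pattern this commit describes; the map, class, and method names below are invented stand-ins for the real DAO and API, not taken from the CloudStack source:

    // Illustrative only: a plain map stands in for the storage_pool table.
    import java.util.HashMap;
    import java.util.Map;

    class AddPoolRollbackSketch {
        static final Map<Long, String> poolTable = new HashMap<>();
        static long nextId = 1;

        static long addStoragePool(String name, boolean hostSetupSucceeds) {
            long id = nextId++;
            poolTable.put(id, name);          // the DB entry is written before hosts are wired up
            try {
                if (!hostSetupSucceeds) {     // e.g. the pool cannot be attached on the host
                    throw new RuntimeException("failed to add storage pool on host");
                }
                return id;
            } catch (RuntimeException e) {
                poolTable.remove(id);         // bug 9411 fix: drop the orphaned DB entry on failure
                throw e;
            }
        }
    }
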
Murali Reddy 9dddeaa5a5 Bug 9440: we specify the wrong disk size when attaching a storage device with a custom-size offering
The volume size, which is in bytes, was wrongly used to set the size of the disk offering, which is in MB.
pushing 2.2.4 fix to master
2011-04-14 21:16:08 +05:30
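
A tiny worked example of the unit mismatch: the volume size is stored in bytes and has to be converted to MB before it is used as the disk offering size (the names below are illustrative, not the actual CloudStack fields):

    class DiskOfferingSizeSketch {
        // Convert a byte count to megabytes, the unit the disk offering expects.
        static long bytesToMb(long sizeInBytes) {
            return sizeInBytes / (1024L * 1024L);
        }

        public static void main(String[] args) {
            long volumeSizeBytes = 5L * 1024 * 1024 * 1024;   // a 5 GB volume
            long offeringSizeMb = bytesToMb(volumeSizeBytes); // 5120, not 5368709120
            System.out.println("disk offering size (MB): " + offeringSizeMb);
        }
    }
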
alena dad9dacc92 bug 9425: fixed detached volume removal.
status 9425: resolved fixed
2011-04-13 15:54:10 -07:00
Kelven Yang 1b9cbd9166 bug 9223, 9224: persist runid to form a cluster session; issue the isolation notification for self-fencing based on the cluster session and DB condition 2011-04-13 15:13:54 -07:00
prachi b1700af146 Bug 9387: Recreate system vms if template id changed....
Changes:
While starting a System VM:
- We check, in case the ROOT volume is READY, whether the volume's templateId matches the system VM's template.
- If it does not match, we update the volume's templateId and ask the deployment planner to reassign a pool to this volume even though it is READY.

In general:
- If a root volume is READY, we remove its entry from the DeployDestination before calling StorageManager::prepare()
- StorageManager creates a volume if a pool is assigned to it in the DeployDestination passed in.
- If a volume has no pool assigned to it in the DeployDestination, it means the volume is ready and already has a pool allocated to it.
2011-04-13 13:47:07 -07:00
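
The gist of the check above as a hedged sketch; the Volume class and its fields are placeholders, not the actual VolumeVO:

    class SystemVmTemplateCheckSketch {
        static class Volume {
            boolean ready;     // ROOT volume already created on a pool
            long templateId;   // template the volume was built from
            Long poolId;       // pool assignment handed to the planner
        }

        // If a READY root volume was built from an old system VM template, point it at the
        // new template and clear its pool so the deployment planner reassigns one.
        static void prepareRootVolume(Volume root, long currentSystemVmTemplateId) {
            if (root.ready && root.templateId != currentSystemVmTemplateId) {
                root.templateId = currentSystemVmTemplateId;
                root.poolId = null; // forces recreation even though the volume is READY
            }
        }
    }
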
anthony e0ba2a2fa6 clean up transaction code 2011-04-12 18:56:49 -07:00
prachi 47f43df01b More changes for 9387:
Added checks in the StorageManagerImpl::prepare() method to avoid NPEs if the DeployDestination passed in is null.
2011-04-12 18:19:59 -07:00
prachi 47c31a077a Bug 9387 - Recreate system vms if template id changed...
Changes:
- The planner must reassign the storage pool if the template id for system VMs has changed. The StorageManager must then recreate the volume if the volume has been
reassigned. This is needed to support automatic updates of the system template.
2011-04-12 18:19:58 -07:00
alena 4d8df029d3 bug 8245: mark storage pool status as Removed before performing actual cleanup
status 8245: resolved fixed
2011-04-12 14:44:55 -07:00
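
A small sketch of the ordering the commit describes, with a throwaway enum and map in place of the real pool table: the pool is flagged Removed up front so nothing new is allocated to it while the slower cleanup runs.

    import java.util.Map;

    class PoolRemovalSketch {
        enum PoolStatus { Up, Removed }

        static void deletePool(Map<Long, PoolStatus> poolTable, long poolId) {
            poolTable.put(poolId, PoolStatus.Removed); // 1. mark Removed before any cleanup
            // 2. only then detach the pool from hosts, delete capacity rows, etc.;
            //    anything that checks the status already skips this pool.
        }
    }
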
alena 52bf157387 bug 9398: removed resource_type from the volumes table as we no longer use it. The corresponding DB upgrade scripts are updated
status 9398: resolved fixed

Conflicts:

	server/src/com/cloud/storage/StorageManagerImpl.java
2011-04-11 18:14:35 -07:00
nit debe236a8d bug 8710: CONTD... Introducing a new user role in CloudStack called RESOURCE_DOMAIN_ADMIN. The role has all the domain_admin rights plus the right to list zones, pods, clusters, and so on. More info in the bug 2011-04-11 19:40:37 +05:30
Murali Reddy 290c799b2c Bug 9273 : [Stress Test] 'Count' in resource_count table has negative values
pushing 2.2.4 changes in to master
2011-04-11 15:37:53 +05:30
anthony 2bcd7a13d4 hostid and poolid may overlap, fixed deletePoolStats 2011-04-05 15:43:21 -07:00
anthony f71986125a bug 9210: remove storage pool entry if adding storage pool fails
status 9210: resolved fixed
2011-03-29 17:44:55 -07:00
anthony 436dccb6d7 bug 9189: fixed it in master; modifystoragepool doesn't try to create/import any more; will port it to 2.1.x 2011-03-28 19:11:20 -07:00
Alex Huang b2eda8c71b Changes to the planners 2011-03-28 09:48:33 -07:00
Alex Huang 9d158dc060 Removed the async create status for volume now that our customers don't use it 2011-03-24 20:04:23 -07:00
prachi 923f562aa8 Bug 6873: disable/enable mode for clusters (and pods and zones and hosts)
- Added a new flag 'allocation_state' to zone, pod, cluster, and host
- The possible values for this flag are 'Enabled' or 'Disabled'
- When a new zone, pod, cluster, or host is added, allocation_state is 'Disabled' by default.
- For an existing zone, pod, cluster, or host, the state is 'Enabled'.
- All Add/Update/List commands for each of zone, pod, cluster, and host can now take a new parameter 'allocationstate'
- If 'allocation_state' is 'Disabled', allocators skip that zone, pod, cluster, or host.
- For a root admin, ListZones lists all zones including the 'Disabled' zones. For any other user, the 'Disabled' zones are not included in the response.
- For any use case that creates/deploys/adds/registers a resource and takes a zone as a parameter, we now check whether the zone is 'Disabled'. If so, the operation can be performed only by the root admin. Add volume, snapshot, and templates are examples of this use case.
- To let the root admin test a particular pod/cluster/host, the deployVM command takes a 'host_id' parameter that can be passed in only by the root admin.
If this parameter is passed in by the admin, allocators do not search for hosts and use that host only. Storage pools are searched within the cluster of that host.
If the VM cannot be deployed to that host, deployVM fails without retrying.
2011-03-23 22:15:35 -07:00
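
An illustrative sketch of the allocation_state flag described in the commit above; the enum and allocator check are simplified stand-ins, not the actual CloudStack classes:

    class AllocationStateSketch {
        enum AllocationState { Enabled, Disabled }

        static class Cluster {
            // Newly added zones/pods/clusters/hosts start out Disabled;
            // existing ones are treated as Enabled.
            AllocationState allocationState = AllocationState.Disabled;
        }

        // Allocators skip any scope whose allocation_state is Disabled.
        static boolean allocatorMayUse(Cluster cluster) {
            return cluster.allocationState == AllocationState.Enabled;
        }
    }
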
anthony bc0968d900 check the object before use it 2011-03-23 14:54:31 -07:00
anthony 41e75ab611 bug 9107: don't allow moving a volume if there is a snapshot policy or a snapshot on the volume
status 9107: resolved fixed
2011-03-22 14:40:21 -07:00
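
A minimal sketch of the guard, using plain counts in place of the real snapshot DAOs:

    class MoveVolumeGuardSketch {
        // bug 9107: a volume with snapshots or an active snapshot policy must not be moved.
        static void checkMovable(long snapshotCount, long snapshotPolicyCount) {
            if (snapshotCount > 0 || snapshotPolicyCount > 0) {
                throw new IllegalStateException(
                        "Cannot move volume: snapshots or snapshot policies still reference it");
            }
        }
    }
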
anthony 7f12876be1 bug 9087: destroy the source volume after updating the volume entry
status 9087: resolved fixed
2011-03-22 11:23:36 -07:00
Alex Huang 109c4eae0e restarting domr is close to working 2011-03-21 17:56:00 -07:00
Kelven Yang 65d4cc98be Allow template re-deployment when the template has been deleted from the hypervisor and CloudStack still holds an out-of-date status 2011-03-17 17:59:51 -07:00
alena 63593c5057 bug 8510: increment resource count for volume after it's created 2011-03-15 18:06:00 -07:00
nit f88fb1e505 bug 8887 : Stats Calculation Improvement - storage stats won't update the DB anymore and are kept "in memory" just like other stats. listCapacityCmd, which consumes them (sec. storage used and primary storage used), now builds its response from the in-memory maps rather than the DB, which no longer stores secondary and primary storage usage. 2011-03-14 18:45:00 -07:00
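
A rough sketch of the idea, assuming a simple hostId-to-bytes map; the class and method names are invented for illustration:

    import java.util.concurrent.ConcurrentHashMap;

    class StorageStatsCacheSketch {
        // Refreshed by the stats collector; nothing is written back to the DB any more.
        private final ConcurrentHashMap<Long, Long> storageUsedByHost = new ConcurrentHashMap<>();

        void onStatsReport(long hostId, long bytesUsed) {
            storageUsedByHost.put(hostId, bytesUsed);
        }

        // What a listCapacity-style call would read instead of querying the DB.
        long totalStorageUsed() {
            return storageUsedByHost.values().stream().mapToLong(Long::longValue).sum();
        }
    }
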
prachi 3624fee85d Changed the interface in StoragePoolAllocator to avoid a potential NPE in LocalStoragePoolAllocator. Allocators were taking in an instance of VM enclosed inside VirtualMachineProfile.
However, in the case of createVolume from a snapshot there is no VM associated, so the VM passed in is null and this can cause an NPE.

Allocators hardly use the VM instance. LocalStoragePoolAllocator was mainly using it to check whether the host has capacity, but it need not do this check since that is done by the HostAllocators anyway.
So the use of VM is removed from the StoragePoolAllocators.
2011-03-09 10:12:04 -08:00
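
A simplified before/after of the interface change; the signatures are hypothetical, trimmed down to show why dropping the VM parameter removes the NPE risk:

    class StoragePoolAllocatorSketch {
        static class VmProfile { Object vm; } // may wrap a null VM in the createVolume-from-snapshot path

        interface AllocatorBefore {
            Long allocate(long volumeId, VmProfile profile); // touching profile.vm could throw an NPE
        }

        interface AllocatorAfter {
            Long allocate(long volumeId); // no VM needed; host capacity is the HostAllocators' job
        }
    }
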
anthony 5b1a421e62 bug 8712: prepare from snapshot db migration 2011-03-08 17:10:27 -08:00
Alex Huang 263244c938 more logging 2011-03-04 11:37:35 -08:00
anthony cd27202a26 bug 8216: create volume from snapshot can take a disk_offering_id; if disk_offering_id is not specified, use the one from the original volume
status 8216: resolved fixed
2011-02-28 16:28:41 -08:00
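
The fallback in one line, with hypothetical parameter names:

    class CreateVolumeFromSnapshotSketch {
        // Use the disk_offering_id from the request when present, otherwise fall back to
        // the offering of the volume the snapshot was taken from.
        static long resolveDiskOfferingId(Long requestedOfferingId, long originalVolumeOfferingId) {
            return requestedOfferingId != null ? requestedOfferingId : originalVolumeOfferingId;
        }
    }
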
prachi 889827b63a Bug 7845 - Productize DeploymentPlanner
Bug 7723 - merge or re-write host tagging into master / 2.2
Bug 7627 - Need more logging for Allocators
Bug 8317 - Add better resource allocation failure messages

Changes for the Deployment Planner to use host and storage pool allocators to find the deployment destination.
Also includes the changes for the host tag feature.
Improved the logging for allocators.
2011-02-28 13:47:51 -08:00
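
A high-level sketch, under assumed interface names, of a planner that walks clusters and delegates to host and storage-pool allocators; it is not the real DeploymentPlanner code:

    import java.util.List;
    import java.util.Optional;

    class PlannerSketch {
        record Destination(long clusterId, long hostId, long poolId) {}

        interface HostAllocator { Optional<Long> pickHost(long clusterId, List<String> requiredHostTags); }
        interface PoolAllocator { Optional<Long> pickPool(long clusterId); }

        static Optional<Destination> plan(List<Long> clusters, List<String> hostTags,
                                          HostAllocator hosts, PoolAllocator pools) {
            for (long clusterId : clusters) {
                Optional<Long> host = hosts.pickHost(clusterId, hostTags); // host tags narrow the candidates
                if (host.isEmpty()) continue;                              // log and try the next cluster
                Optional<Long> pool = pools.pickPool(clusterId);
                if (pool.isEmpty()) continue;
                return Optional.of(new Destination(clusterId, host.get(), pool.get()));
            }
            return Optional.empty(); // caller can then report a precise allocation-failure message
        }
    }
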
anthony 8658fbd1d4 fixed build 2011-02-28 10:32:52 -08:00
anthony 1970161844 bug 8714: support parallel recursive snapshots
a snapshot doesn't depend on its volume any more; a volume can be removed even if there are snapshots on it

status 8714: resolved fixed
2011-02-25 22:17:13 -08:00
abhishek a84d34cc72 bug 8216: we do not need to create an event in createVolFromSnapshot(), as we do it in alloc vol 2011-02-25 12:00:27 -08:00
abhishek bae62f844d bug 8742,8216: reverting to use the original volume's disk offering id when creating a volume from a snapshot. Also changing event generation so that an event is generated at data volume creation (as opposed to at attach to a VM); we will correspondingly generate an event at the data volume's deletion 2011-02-25 12:00:27 -08:00
kishan 8eb665246e bug 7935: Included hypervisor type in VM usage records
status 7935: resolved fixed
2011-02-24 20:08:12 +05:30
Alex Huang c22b37e402 latest work on db migration 2011-02-22 18:23:05 -08:00
anthony 94a9c86f46 try to send the create command first to the host where CPU & memory are allocated 2011-02-22 16:44:58 -08:00
abhishek 1afc62e98f bug 8216: creating a volume from a snapshot takes a private disk offering id, which is used only for the tags; the size is still taken from the original volume that the snapshot is based on 2011-02-22 12:06:00 -08:00
nit 2efdc9d62b bug 8471: Check whether secondary storage URL is null when copying volumes across storage pools. 2011-02-22 17:49:56 +05:30
alena 15f59e6f58 bug 8637: throw ResourceAllocationException when resource limit is exceeded.
status 8637: resolved fixed
2011-02-18 12:26:58 -08:00
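
A bare-bones sketch of the limit check; the exception here is a local stand-in with the same name, not the real CloudStack class:

    class ResourceLimitSketch {
        static class ResourceAllocationException extends Exception {
            ResourceAllocationException(String message) { super(message); }
        }

        // Throw when creating one more volume would exceed the account's limit
        // (a negative limit means unlimited).
        static void checkVolumeLimit(long currentCount, long limit) throws ResourceAllocationException {
            if (limit >= 0 && currentCount + 1 > limit) {
                throw new ResourceAllocationException(
                        "volume limit exceeded: " + currentCount + " of " + limit + " already in use");
            }
        }
    }
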
Kelven Yang 8695e7250c Update template and storage manager to allow hypervisor based command delegation 2011-02-18 11:37:50 -08:00
anthony 569bbfe585 bug 8513: creating volume from snapshot depends on the original volume
status 8513: resolved fixed
2011-02-16 15:47:05 -08:00
kishan 75e596bb80 bug 7952, 8363: Fixed usage events for VM destroy and recover
status 7952, 8363: resolved fixed
2011-02-08 16:57:46 +05:30
alena a502b497f2 bug 8446: fixed creating volume from diskOffering with custom size
status 8446: resolved fixed
2011-02-07 12:42:46 -08:00
kishan 56f3343911 Added action events for VM, volume, IP and snapshot actions 2011-02-04 19:59:41 +05:30
Edison Su 3cc5ce8642 add a new configuration parameter, cmd.wait, for heavily time-consuming commands such as backupsnapshotcommand 2011-02-03 18:57:38 -05:00
kishan fcfd4e9e33 bug 8192: use volume size in bytes for usage
status 8192: resolved fixed
2011-02-03 16:38:48 +05:30
anthony b226861783 bug 8194: add new storage pool type PreSetup
1. the user sets up the SR for the XenServer pool
2. in the UI, add a new storage pool as PreSetup with
   server: "IP of the storage"
   path: "name of the SR"
2011-02-02 19:33:08 -08:00
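
An illustrative sketch of what a PreSetup pool record carries: the SR already exists on the XenServer pool, so adding the pool just records where to find it. The enum values besides PreSetup are examples, not a complete list:

    class PreSetupPoolSketch {
        enum StoragePoolType { NetworkFilesystem, IscsiLUN, PreSetup }

        static class PreSetupPool {
            final StoragePoolType type = StoragePoolType.PreSetup;
            final String server;  // "IP of the storage"
            final String path;    // name of the SR that was created ahead of time

            PreSetupPool(String server, String path) {
                this.server = server;
                this.path = path;
            }
        }
    }
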
Edison Su 4ea260cafd bug 8204: the mgt server needs to pass down ISO info before migration if the VM has an ISO attached
status 8204: resolved fixed
2011-02-02 19:13:12 -05:00
Kelven Yang b874bbda91 Give the primary VMFS datastore a meaningful name 2011-02-02 13:37:14 -08:00