System template upgrade is not required during the 4.0 upgrade since it is handled during the 4.2 upgrade, so the system template update is removed from the 4.0 upgrade path.
KVM.snapshot.enabled was lowercased by f025db95 to keep the configs
uniformly lower-case, but that commit missed the upgrade script and the
references in SnapshotManagerImpl. This commit fixes the key in all
locations.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Private templates now get copied to only one of the image stores, chosen randomly, as was the case earlier. Don't throw an exception for uploading volumes when there are multiple image stores; instead, choose one of them randomly.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
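A minimal sketch of the random selection described above, assuming a simple list of candidate image stores; the ImageStore class and picker here are illustrative stand-ins, not the actual CloudStack types.

import java.util.List;
import java.util.Random;

// Illustrative stand-in for an image store entry; not the real CloudStack VO class.
class ImageStore {
    final String name;
    ImageStore(String name) { this.name = name; }
}

class ImageStorePicker {
    private final Random random = new Random();

    // Instead of rejecting the request when several image stores exist,
    // pick one of them at random, mirroring the behavior for private templates.
    ImageStore pickOne(List<ImageStore> stores) {
        if (stores == null || stores.isEmpty()) {
            throw new IllegalStateException("No image store available");
        }
        return stores.get(random.nextInt(stores.size()));
    }
}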
Add the upgrade logic to populate the template/volume store ref tables from the upload table. We won't be using the upload table anymore.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Implement the download URL expiration functionality for templates. Also persist the template download URLs after their creation.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
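A small sketch of how a persisted download URL could be checked for expiration after a configurable interval; the class and field names are assumptions for illustration, not the actual implementation.

import java.util.Date;

// Illustrative record of a persisted template download URL.
class TemplateDownloadUrl {
    final String url;
    final Date created;

    TemplateDownloadUrl(String url, Date created) {
        this.url = url;
        this.created = created;
    }

    // True when the URL is older than the configured expiration interval (in seconds).
    boolean isExpired(long expirationIntervalSeconds, Date now) {
        long ageMillis = now.getTime() - created.getTime();
        return ageMillis > expirationIntervalSeconds * 1000L;
    }
}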
Marked the new system template as dynamicallyScalable
- handled the upgrade case
- moved the "dynamicallyScalable" flag from the user_vm_details table to the vm_instance table to support dynamic scaling of system VMs
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
Changes:
- During host deletion, the host entry in the database gets removed before the disconnect task is processed.
- This causes the disconnect task to hit an NPE while trying to do the host state transition.
Some existing scenarios for root and data volume combinations were not working. These are:
a. Local root + Shared data
b. Shared root + Local data
Enabled these scenarios as part of this fix
Conflicts:
server/src/com/cloud/storage/VolumeManagerImpl.java
The request to migrate was reaching the ancient data motion strategy but was not acted on or forwarded
to the resource. The code was probably removed by a bad merge. Bringing it back.
The issue is that while calculating the used primary storage size, the updateResourceCount
API also counts the disk size of the virtual router VM created for that account, and
because of this the API returns an incorrect result.
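A sketch of the intended accounting: only user-VM volumes are summed, and volumes that belong to system VMs such as the virtual router are skipped. The Volume shape below is hypothetical and used only to show the idea.

import java.util.List;

// Hypothetical volume descriptor used only for this sketch.
class Volume {
    final long sizeBytes;
    final boolean attachedToSystemVm; // e.g. a virtual router

    Volume(long sizeBytes, boolean attachedToSystemVm) {
        this.sizeBytes = sizeBytes;
        this.attachedToSystemVm = attachedToSystemVm;
    }
}

class PrimaryStorageUsage {
    // Count only user-VM volumes; router (system VM) disks must not inflate the account's usage.
    static long usedPrimaryStorage(List<Volume> accountVolumes) {
        long total = 0;
        for (Volume v : accountVolumes) {
            if (!v.attachedToSystemVm) {
                total += v.sizeBytes;
            }
        }
        return total;
    }
}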
Changes:
- Passing the avoid set generated by the first deployment pass to the second try (see the sketch below).
- The second try, which searches over the entire zone again, is done when the first pass that uses a reserved plan fails to deploy on the reserved host.
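A rough sketch of the retry flow, assuming planner calls that collect unusable hosts into an avoid set; all names here are illustrative, not the actual planner API.

import java.util.HashSet;
import java.util.Set;

class DeploymentRetrySketch {
    // Hypothetical planner calls: return a host id or null, adding unusable hosts to 'avoid'.
    static Long planWithReservedHost(Set<Long> avoid) { return null; }
    static Long planOverZone(Set<Long> avoid) { return 42L; }

    static Long deploy() {
        Set<Long> avoid = new HashSet<>();
        Long host = planWithReservedHost(avoid);   // first pass, reserved plan
        if (host == null) {
            // The second try searches the whole zone but reuses the avoid set
            // built up during the first pass instead of starting from scratch.
            host = planOverZone(avoid);
        }
        return host;
    }
}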
Changes:
- Locking of the group and the save-reservation mechanism done by DPM
- Added an admin operation to clean up VM reservations
- DPM will also clean up VM reservations on startup
Available bytes were getting stored in the used bytes property of local storage pools. As a result, for newly added local pools CloudStack thought there was no space available and generated alerts.
Users should be able to delete/archive alerts and events by selecting a time period or by
choosing the alerts and events older than a date. Added the ability to choose a time period
as well.
Summary of changes in the fix:
- Optimized the host scan logic: instead of iterating over each cluster, the host scan is now done for a batch of clusters (see the sketch below)
- Made the host scan task interval configurable
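A sketch of scanning hosts for a batch of clusters at a time rather than per cluster; the batch size is an assumed stand-in for the new configuration and not the real config key.

import java.util.Arrays;
import java.util.List;

class HostScanBatchSketch {
    // Scan the hosts of a whole batch of clusters in one pass rather than per cluster.
    static void scanBatch(List<Long> clusterIds) {
        System.out.println("Scanning hosts of clusters " + clusterIds);
    }

    static void scanAll(List<Long> allClusterIds, int batchSize) {
        for (int i = 0; i < allClusterIds.size(); i += batchSize) {
            int end = Math.min(i + batchSize, allClusterIds.size());
            scanBatch(allClusterIds.subList(i, end));
        }
    }

    public static void main(String[] args) {
        scanAll(Arrays.asList(1L, 2L, 3L, 4L, 5L), 2); // assumed batch size of 2
    }
}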
only when the first rule is created on the IP and when the last rule is revoked on the
IP
The current suboptimal logic of IP assoc:
- On associating an IP to a guest network, an IPAssoc command is sent to the
corresponding network service providers of the network
- On every rule applied on an IP associated with the network, IP assoc is sent
to the network service providers
- On every rule deletion on an IP associated with the network, an IP assoc
command is sent to the network service providers
With this fix the logic of IP assoc is changed as below, which eliminates
execution of unnecessary and expensive IpAssocCommand resource commands:
- On associating an IP to a guest network, associate the IP only with the network.
Until a service is associated with the IP, don't send IP assoc
- On creation of the first rule on the IP, send IPAssoc to the corresponding
network service provider. Since the IP is now used for a service, IPAssoc
needs to be sent to the corresponding service provider
- On deletion of the last rule on the IP, send IPAssoc to the corresponding
network service provider. When the last rule is deleted, the IP has no
service associated with it, so send IP assoc to the service provider to
remove the IP association
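A condensed sketch of the new behavior: the expensive IpAssoc call to the provider is made only when the rule count on the IP goes from 0 to 1 or from 1 to 0. Names are illustrative and not the actual CloudStack rule-management code.

class IpAssocSketch {
    private int activeRuleCount = 0;

    // Stand-in for sending the expensive IpAssocCommand to the network service provider.
    private void sendIpAssoc(boolean associate) {
        System.out.println(associate ? "IpAssoc: associate" : "IpAssoc: disassociate");
    }

    void onRuleCreated() {
        activeRuleCount++;
        if (activeRuleCount == 1) {
            // First rule on the IP: a service now uses it, so tell the provider.
            sendIpAssoc(true);
        }
    }

    void onRuleRevoked() {
        activeRuleCount--;
        if (activeRuleCount == 0) {
            // Last rule gone: no service uses the IP anymore, remove the association.
            sendIpAssoc(false);
        }
    }
}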
Filter the detail map sent over the wire into a Map<String, String> before it is
processed underneath by the storage life cycle.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit f96b89f25e)
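A small illustration of the filtering step: keep only entries whose values are actual strings before handing the details to the storage lifecycle code. This is a sketch of the idea, not the committed code.

import java.util.HashMap;
import java.util.Map;

class DetailsFilterSketch {
    // Reduce an untyped detail map to Map<String, String>, dropping anything else.
    static Map<String, String> filterDetails(Map<String, Object> raw) {
        Map<String, String> filtered = new HashMap<>();
        for (Map.Entry<String, Object> e : raw.entrySet()) {
            if (e.getValue() instanceof String) {
                filtered.put(e.getKey(), (String) e.getValue());
            }
        }
        return filtered;
    }
}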
Description:
When retrieving the primary datastore, handle the case of non-existing datastores/hosts.
Throw an exception, handle it in the datastore management layer, and pass it onward
to the create storage pool API.
involved in the GSLB
add weights to each site participating in the GSLB. Traffic will be load
balanced across the sites based on the weights associated with each
site. If not specified, the weight of a site defaults to 1.
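A minimal sketch of weighted selection across GSLB sites, defaulting missing weights to 1; this illustrates the weighting idea only and is not the NetScaler-side implementation.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

class GslbWeightSketch {
    private final Random random = new Random();

    // Pick a site with probability proportional to its weight; missing weights default to 1.
    String pickSite(Map<String, Integer> siteWeights) {
        if (siteWeights.isEmpty()) {
            throw new IllegalArgumentException("No sites configured");
        }
        int total = 0;
        for (Integer w : siteWeights.values()) {
            total += (w == null || w <= 0) ? 1 : w;
        }
        int r = random.nextInt(total);
        for (Map.Entry<String, Integer> e : siteWeights.entrySet()) {
            int w = (e.getValue() == null || e.getValue() <= 0) ? 1 : e.getValue();
            if (r < w) {
                return e.getKey();
            }
            r -= w;
        }
        throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        Map<String, Integer> sites = new LinkedHashMap<>();
        sites.put("site-east", 3);
        sites.put("site-west", null); // no weight given, treated as 1
        System.out.println(new GslbWeightSketch().pickSite(sites));
    }
}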
Breaking down storage components among oss, nonoss and simulator
contexts. The default components are loaded by
OSS - applicationContext + componentContext
NonOSS - applicationContext + nonossComponentContext
Simulator - applicationContext + simulatorComponentContext
Provider beans are selectively overridden for simpler configuration.
Where possible beans are loaded by local reference.
Unfortunately <list merge="true"> does not work perfectly for merging the
provider beans, causing a bit of bloat. To be explored later.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Update ImageFormat enum to include VHDX format introduced with Hyper-V
Server 2012.
Remove existing Hyper-V plugin, because it does not work and is dead
code.
Remove references to existing Hyper-V plugin from config files.
Remove Hypervisor.HypervisorType.Hyperv special cases from manager code
that are unused or unsupported.
Specifically, there is no CIFS secondary storage class
"CifsSecondaryStorageResource". Also, the Hyper-V plugin's
ServerResource is contacted by the management server and not the other
way around.
Add Hyper-V support to the ListHypervisorsCmd API call
Signed-off-by: Edison Su <sudison@gmail.com>
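A trimmed illustration of extending an image-format enum with VHDX; the real ImageFormat enum in CloudStack carries more values than shown here.

// Illustrative subset only; not the full CloudStack ImageFormat enum.
enum ImageFormatSketch {
    QCOW2,
    VHD,
    VHDX,   // Hyper-V Server 2012 format added by this change
    OVA,
    ISO,
    RAW
}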
NAT does not work
Making an exception for portable IP, to handle the case where the data center the
portable IP is currently associated with is different from the destination data center;
also, on transfer to a new zone, transfer the portable IP association to the
new data center and physical network IDs.
CLOUDSTACK-3042 - handle scaling up of VM memory/CPU based on the presence of XS tools in the template
This also takes care of updating the VM after XS tools are installed in it and setting memory values accordingly to support dynamic scaling after a stop/start of the VM
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
for a number of commands participating in the VM deployment process, since parallel deployment is supported on the hypervisor side.
The behavior is controlled by the following global config variables:
"execute.in.sequence.hypervisor.commands" (false by default) sets/resets the synchronization for commands:
=========================
StartCommand
StopCommand
CreateCommand
CopyVolumeCommand
"execute.in.sequence.network.element.commands" (false by default) sets/resets the synchronization for commands:
==========================
DhcpEntryCommand
SavePasswordCommand
UserDataCommand
VmDataCommand
As a part of the fix, increased the global lock timeout to 30 mins in several VR scripts:
===========================
edithosts.sh
savepassword.sh
userdata.sh
to support situations where multiple concurrent calls to these scripts are being made.
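A rough sketch of what the two flags control: when the corresponding global config is false, commands of these types are sent without taking the per-resource serialization lock. Class and method names are illustrative only.

class CommandSequencingSketch {
    // Mirrors the new global configs; both default to false (no forced serialization).
    private final boolean executeInSequenceHypervisorCommands = false;
    private final boolean executeInSequenceNetworkElementCommands = false;

    // Stand-ins for sending a command with or without the serialization lock.
    private void sendSerialized(String command) { System.out.println("serialized: " + command); }
    private void sendParallel(String command)   { System.out.println("parallel:   " + command); }

    void sendHypervisorCommand(String command) {
        // StartCommand, StopCommand, CreateCommand, CopyVolumeCommand fall into this group.
        if (executeInSequenceHypervisorCommands) {
            sendSerialized(command);
        } else {
            sendParallel(command);
        }
    }

    void sendNetworkElementCommand(String command) {
        // DhcpEntryCommand, SavePasswordCommand, UserDataCommand, VmDataCommand fall into this group.
        if (executeInSequenceNetworkElementCommands) {
            sendSerialized(command);
        } else {
            sendParallel(command);
        }
    }
}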
has dedicated resources and the dedicated resources have all been consumed - use.system.public.ips and use.system.guest.vlans
Both configs are configurable at the account level too.
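A sketch of the account-level override pattern for use.system.public.ips and use.system.guest.vlans: use the account-scoped value when present, otherwise fall back to the global value. The lookup structures are hypothetical.

import java.util.HashMap;
import java.util.Map;

class AccountScopedConfigSketch {
    private final Map<String, Boolean> globalConfig = new HashMap<>();
    // accountId -> (config name -> value); stands in for per-account config storage.
    private final Map<Long, Map<String, Boolean>> accountConfig = new HashMap<>();

    boolean getBoolean(String name, long accountId, boolean defaultValue) {
        Map<String, Boolean> perAccount = accountConfig.get(accountId);
        if (perAccount != null && perAccount.containsKey(name)) {
            return perAccount.get(name);                        // account-level override wins
        }
        return globalConfig.getOrDefault(name, defaultValue);   // otherwise the global setting
    }
}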
Recurring snapshot schedule not showing up in UI
For some volumes the recurring snapshot schedule was not showing up in the UI because the active column was set to false. Since we don't use this column anymore, the active=true check in the listSnapshotPolicies call is removed.
combination prior to 3.0 release
The fix does the following:
- add the F5 network service provider into a physical network if there is an F5
deployed in the zone
- add an instance of the F5 network service provider
- add the SRX network service provider into a physical network if there is an
SRX deployed in the zone
- add an instance of the SRX network service provider
- upgrade all the guest networks to the network offering "Isolated with
external providers"
Added hypervisor type to CreateStoragePoolCmd & Storage pool responses.
DatastoreLifeCycle would consider hypervisor type while attaching datastore to zone.
ZoneWideStoragePoolAllocator would filter zone wide primary storage pools by hypervisor type along with tags in disk profile.
Hypervisor type is a mandatory parameter if scope is specified as ZONE while creating a primary storage pool.
As of now, KVM and VMware are allowed to use the ZoneWideStoragePoolAllocator.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
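A condensed sketch of the allocator filter: a zone-wide pool remains a candidate only when its hypervisor type matches the one in the disk profile (the existing tag checks are omitted). The types below are simplified stand-ins.

import java.util.ArrayList;
import java.util.List;

class ZoneWidePoolFilterSketch {
    enum HypervisorType { KVM, VMware, XenServer }

    // Minimal stand-in for a zone-wide storage pool record.
    static class StoragePool {
        final String name;
        final HypervisorType hypervisor;

        StoragePool(String name, HypervisorType hypervisor) {
            this.name = name;
            this.hypervisor = hypervisor;
        }
    }

    // Keep only pools whose hypervisor matches the requesting disk profile's hypervisor.
    static List<StoragePool> filterByHypervisor(List<StoragePool> zoneWidePools,
                                                HypervisorType requested) {
        List<StoragePool> result = new ArrayList<>();
        for (StoragePool pool : zoneWidePools) {
            if (pool.hypervisor == requested) {
                result.add(pool);
            }
        }
        return result;
    }
}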
DB changes to support hypervisor specific zone wide storage pool.
Added method findZoneWideStoragePoolsByHypervisor to PrimaryStorageDaoImpl to find suitable zone wide storage pool of specific hypervisor type.
Added column 'hypervisor' to table storage_pool. This column can be NULL. Used/populated only for zone wide primary storage pools.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Upgrade handling.
Detect legacy zones during db upgrade and perform data migration.
If legacy zone is detected the table 'cloud'.'legacy_zones' is populated.
If an existing zone has resources that all belong to a single VMware datacenter, then such a zone is not marked as a legacy zone; it is automatically associated with the specific VMware datacenter of the clusters inside the zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
This feature allows a user to deploy VMs only in the resources dedicated to his account or domain.
1. Resources(Zones, Pods, Clusters or hosts) can be dedicated to an account or domain.
Implemented 12 new APIs to dedicate/list/release resources:
- dedicateZone, listDedicatedZones, releaseDedicatedZone for a Zone.
- dedicatePod, listDedicatedPods, releaseDedicatedPod for a Pod.
- dedicateCluster, listDedicatedClusters, releaseDedicatedCluster for a Cluster
- dedicateHost, listDedicatedHosts, releaseDedicatedHost for a Host.
2. Once a resource (e.g. a pod) is dedicated to an account, other resources (e.g. clusters/hosts) inside it cannot be further dedicated.
3. Once a resource is dedicated to a domain, other resources inside it can be further dedicated to its sub-domains or accounts.
4. If any resource (e.g. a cluster) is dedicated to an account/domain, then the resources above it (e.g. the pod) cannot be dedicated to different accounts/domains (not belonging to the same domain).
5. To use Explicit dedication, user needs to create an Affinity Group of type 'ExplicitDedication'
6. A VM can be deployed with the above affinity group parameter as an input.
7. A new ExplicitDedicationProcessor has been added which processes affinity groups of type 'Explicit Dedication' for the deployment of a VM that demands dedicated resources.
This processor implements the AffinityGroupProcessor adapter and updates the avoid list (see the sketch after this list).
8. A VM requesting dedication will be deployed on dedicated resources if they are available to the user's account.
9. A VM requesting dedication can also be deployed on the dedicated resources available to the parent domains, but only if no dedicated resources are available to the current user's account or
domain.
10. A VM (without dedication) can be deployed on shared hosts but not on dedicated hosts.
11. To modify the dedication, the resource has to be released first.
12. Existing Private zone functionality has been redirected to Explicit dedication of zones.
13. Updated the db upgrade schema script. A new table "dedicated_resources" has been added.
14. Added the right permissions in commands.properties
15. Unit tests: For the new APIs and Service, added unit tests under : plugins/dedicated-resources/test/org/apache/cloudstack/dedicated/DedicatedApiUnitTest.java
16. Marvin Test: To dedicate host, create affinity group, deploy-vm, check if vm is deployed on the dedicated host.
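A bare-bones sketch of what the explicit-dedication processing amounts to: given the hosts dedicated to the deploying account (or its parent domains), every other host is added to the avoid set so the planner can only pick dedicated hosts. The data structures are placeholders, not the real AffinityGroupProcessor API.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

class ExplicitDedicationSketch {
    // Add all hosts that are NOT dedicated to the caller's account/domain to the avoid set,
    // leaving only the dedicated hosts as deployment candidates.
    static void updateAvoidList(List<Long> allHostIds,
                                Set<Long> hostsDedicatedToCaller,
                                Set<Long> avoid) {
        for (Long hostId : allHostIds) {
            if (!hostsDedicatedToCaller.contains(hostId)) {
                avoid.add(hostId);
            }
        }
    }

    public static void main(String[] args) {
        Set<Long> avoid = new HashSet<>();
        updateAvoidList(List.of(1L, 2L, 3L), Set.of(2L), avoid);
        System.out.println(avoid); // hosts 1 and 3 are avoided, host 2 is dedicated to the caller
    }
}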