* Update AncientDataMotionStrategy.java
fix: when secondary storage usage is > 90%, VOLUME migration across primary storages causes the migration to fail and the VOLUME to be lost
* Update AncientDataMotionStrategy.java
Volume is migrated across primary storages. If no secondary storage is available (or used capacity > 90%), the migration is canceled.
Before this modification, if secondary storage could not be found, copyVolumeBetweenPools returned null.
copyAsync considers answer = null to be a sign of successful task execution, so it deletes the VOLUME on the old primary storage. This is the root cause of the data loss, because the VOLUME was never migrated at all.
* code in comment removed
Co-authored-by: div8cn <35140268+div8cn@users.noreply.github.com>
Co-authored-by: Daan Hoogland <dahn@onecht.net>
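A minimal, self-contained sketch of the failure handling described above (class and method names are hypothetical, not the actual AncientDataMotionStrategy code): a null answer from the copy step is treated as a failure, so the volume on the old primary storage is never deleted when no migration took place.
```
// Hypothetical illustration of the null-answer handling described above;
// these are NOT the actual CloudStack classes.
class CopyAnswer {
    final boolean success;
    final String details;
    CopyAnswer(boolean success, String details) { this.success = success; this.details = details; }
}

public class CopyAsyncNullAnswerExample {
    // Returns an error message, or null when the copy really succeeded.
    static String handleCopyResult(CopyAnswer answer) {
        if (answer == null || !answer.success) {
            // No secondary storage found (or usage > 90%): the copy never ran,
            // so the volume on the old primary storage must NOT be deleted.
            return answer == null
                    ? "volume copy failed: no suitable secondary storage"
                    : "volume copy failed: " + answer.details;
        }
        // Only a genuine success may trigger cleanup of the source volume.
        return null;
    }

    public static void main(String[] args) {
        System.out.println(handleCopyResult(null));                       // failure, source volume kept
        System.out.println(handleCopyResult(new CopyAnswer(true, "ok"))); // null -> success
    }
}
```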
* 4.13:
Snapshot deletion issues (#3969)
server: Cannot list affinity group if there are hosts dedicated… (#4025)
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)
* Fixes snapshot deletion
* Remove legacy '@Component', it is not necessary in this bean/class.
* Fix log message missing %d and remove snapshot from DB
* Remove "dummy" boolean return statement
* Manage snapshot deletion for KVM + NFS (primary storage)
* checkstyle trailing spaces
* rename options strings to *_OPTION
* Fix typo on deleteSnapshotOnSecondaryStorage and enhance log message
* Move the snapshotDao.remove(snapshotId); (#4006)
* Fix deletesnapshot workflow to handle both snapshots created in primary storage and snapshots backed up to secondary storage
* Fix extra space
* refactor out separate handling methods for secondary and primary (reducing returns)
* return false on unexpected error or log when expected
* != instead of ==
* secondary instead of backup storage
* init to null
* Handle snapshot deletion on primary storage. When primary store ref not found for snapshot do not fail the operation.
* Fix debug levels on log messages
Co-authored-by: GabrielBrascher <gabriel@apache.org>
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
Co-authored-by: Harikrishna Patnala <harikrishna.patnala@gmail.com>
Co-authored-by: nvazquez <nicovazquez90@gmail.com>
* Remove constraint for NFS storage
* Add new property on agent.properties
* Add free disk space on the host prior template download
* Add unit tests for the free space check
* Fix free space check - retrieve available size in bytes
* Update default location for direct download
* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope
* Verify location for temporary download exists before checking free space
* In progress - refactor and extension
* Refactor and fix
* Last fixes and marvin tests
* Remove unused test file
* Improve logging
* Change default path for direct download
* Fix upload certificate
* Fix ISO failure after retry
* Fix metalink filename mismatch error
* Fix iso direct download
* Fix for direct download ISOs on local storage and shared mount point
* Last fix iso
* Fix VM migration with ISO
* Refactor volume migration to remove secondary storage intermediate
* Fix simulator issue
As previously described in PR #3929:
If the VM has an attached ISO, the migration fails with the error message "org.libvirt.LibvirtException: Cannot access storage file /mnt/b33e5a1d-e4ea-3465-b6ac-c98dc8ff8af0/207-2-cc5fd717-2d57-3bb3-bcf6-2c930268db6c.iso"
By default, once we create a security group we can't change its name.
This feature introduces a new API command "updateSecurityGroup"
which allows renaming a security group. However, the name of the
"default" security group cannot be changed.
This adds support for JDK11 in CloudStack 4.14+:
- Fixes code to build against JDK11
- Bump to Debian 9 systemvmtemplate with openjdk-11
- Fix Travis to run smoketests against openjdk-11
- Use maven provided jdk11 compatible mysql-connector-java
- Remove old agent init.d scripts
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Enable PVLAN support on L2 networks
* Fix prevent null pointer on details
* Add marvin tests
* Fixes from comments
* Fix: missing pvlan type on plugniccommand
* Fix checks on network creation for vlans overlap
* Fix remove prefix from secondary vlan id
* Improve checks on physical network for pvlans
* Fix compatibility with previous pvlan creation
* Fix shared networks backwards pvlan compatibility
* Add ui fix for pvlan type not passed to api
* Add check for isolated vlan id overlap
* Include check for dynamic vlan reserved for secondary vlan
* Fix marvin tests errors
* Fix redundant imports
* Skip marvin test for pvlan if dvswitch is not present
* spelling
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
* server: fix resource count of primary storage if some volumes are Expunged but not removed
Steps to reproduce the issue
(1) create a vm and stop it. check resource count of primary storage
(2) download volume. resource count of primary storage is not changed.
(3) expunge the vm, the volume will be in Expunged state as there is a volume snapshot on secondary storage. The resource count of primary storage decreases.
(4) update resource count of the account (or domain), the resource count of primary storage is reset to the value in step (2).
* New feature: Add support to destroy/recover volumes
* Add integration test for volume destroy/recover
* marvin: check resource count of more types
* translate messages to JP
* Update messages for CN
* translate message for NL
* fix two issues per Daan's comments
Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
The VM ingestion feature allows CloudStack to discover, on-board, and import existing VMs in an infrastructure. The feature currently works only for VMware, with a hypervisor-agnostic framework which may be extended for KVM and XenServer in the future.
Add a global setting to disable creating networks with same name in an account
Add a global setting to disable creating network without
mentioning the start and end IPv4 or IPv6 address
By default we can create networks with the same name in an account.
Sometimes we should not create networks with the same name.
This change adds a global setting which prevents creating networks with the same name.
The default value is true; set it to false to prevent creating networks with the same name.
Also, it is possible to create a shared network without mentioning the
start and end IPv4 or IPv6 address.
This change adds a global setting which prevents creating a shared
network without specifying the start and end IPv4 or IPv6 address.
If the disk size of the VM to be created is greater
than the volume size, then the exception message should
display the numeric value instead of the variable name.
Fixes #3191
When a template is registered, the code stores the md5sum of the downloaded file in the vm_template table. However, this downloaded file could be deleted after template installation if it is not an actual (.qcow2, .ova, etc.) file. When the user copies a template using the copyTemplate API, the actual template file will be copied across the image stores. Matching the checksum of the copied template file against the stored value from the vm_template table will then result in a mismatch.
The change sets an empty checksum value for the copied template while passing it to the download service, which allows skipping the wrong checksum check for the copy during install.
However, this results in a change of the checksum value for the concerned template entry in the vm_template table after template install.
Co-authored-by: dahn <daan.hoogland@gmail.com>
* marvin: check resource count of more types
* New feature: add flag resource.count.running.vms.only to count resource consumption of only running vms
Stopped VMs do not actually use CPU/RAM.
A new global configuration resource.count.running.vms.only is added to determine whether only the resources (cpu/memory) of running VMs (including Starting/Stopping) are taken into the calculation of resource consumption.
* Add integration test for resource count of only running vms
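A rough sketch of the accounting rule described above (the types and method here are made up for illustration, not the actual CloudStack implementation): when the flag is true, only VMs in Running, Starting or Stopping state contribute CPU/RAM to the account's resource count.
```
import java.util.Arrays;
import java.util.List;

public class RunningVmResourceCountExample {
    enum State { RUNNING, STARTING, STOPPING, STOPPED, DESTROYED }

    static class Vm {
        final State state; final int cpu; final int ramMb;
        Vm(State state, int cpu, int ramMb) { this.state = state; this.cpu = cpu; this.ramMb = ramMb; }
    }

    // CPU cores that count toward the account limit under resource.count.running.vms.only.
    static int countedCpu(List<Vm> vms, boolean runningVmsOnly) {
        int total = 0;
        for (Vm vm : vms) {
            boolean counts = !runningVmsOnly
                    || vm.state == State.RUNNING
                    || vm.state == State.STARTING
                    || vm.state == State.STOPPING;
            if (counts) {
                total += vm.cpu;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        List<Vm> vms = Arrays.asList(
                new Vm(State.RUNNING, 4, 4096),
                new Vm(State.STOPPED, 8, 8192));
        System.out.println(countedCpu(vms, false)); // 12 : default behaviour, stopped VM counted
        System.out.println(countedCpu(vms, true));  // 4  : only the running VM counted
    }
}
```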
Stop asking the user (in the upgrade documentation) to remove a trailing slash for the local KVM pool - do it here in the upgrade path - so it is no longer needed in the docs for the upgrade to 4.14 and onwards.
When starting a VM or migrating a VM (away from a host in maintenance), CloudStack will check the capacity of all hosts and choose one. If there are hundreds of hosts on the platform, this takes some seconds. By the time CloudStack chooses a host and starts/migrates the VM to it, the resource consumption of the host might have changed. This normally happens when we start/migrate multiple VMs.
It would be better to double-check the host capacity when starting a VM on a host.
This PR includes the fix for cpu core capacity when starting/migrating a VM.
When we calculate the resource consumption of a host, we need to take VMs in the following states into the calculation: Running, Starting, Stopping, Migrating (to the host), and VMs that are Migrating away from the host. This is because, when stopping a VM, the resources on the host are released only once the VM is stopped; and when migrating a VM, the resources on the destination host are increased before the migration starts, while the resources on the source host are decreased only after the migration succeeds.
In CloudStack, there is a task named CapacityChecked which runs every 5 minutes (capacity.check.period = 300000 ms by default). It recalculates the capacity of all hosts. However, it takes only VMs in Running and Starting into consideration. We have faced some issues in host maintenance due to this.
Steps to reproduce the issue
(1) migrate N vms from host A to host B; cpu/ram usage on host B increases before the migration.
(2) the capacity check recalculates the capacity of the hosts. The used capacity of host B will be reset to its original value (not including the VMs in Migrating state).
(3) migrate some more vms from other hosts to host B; the migrations are allowed by cloudstack (because the used capacity is incorrect). If the actual used memory exceeds the physical memory on the host, there might be some critical issues (for example, libvirt dies).
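A compile-ready sketch of the capacity rule described above (hypothetical names, not the actual CapacityManager code): VMs that are Running, Starting, Stopping, migrating in, or still migrating out all occupy capacity, and the check is repeated right before placing a VM because the planner's earlier snapshot may be stale.
```
import java.util.Arrays;
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class HostCapacityCheckExample {
    enum VmState { RUNNING, STARTING, STOPPING, MIGRATING_IN, MIGRATING_OUT, STOPPED }

    // States that still occupy capacity on a host, per the description above.
    static final Set<VmState> OCCUPIES_CAPACITY = EnumSet.of(
            VmState.RUNNING, VmState.STARTING, VmState.STOPPING,
            VmState.MIGRATING_IN,   // reserved on the destination before migration starts
            VmState.MIGRATING_OUT); // released on the source only after migration succeeds

    static class Vm {
        final VmState state; final int cpuCores;
        Vm(VmState state, int cpuCores) { this.state = state; this.cpuCores = cpuCores; }
    }

    // Double-check right before starting/migrating a VM onto the host.
    static boolean hasEnoughCpu(Iterable<Vm> vmsOnHost, int hostCpuCores, int requestedCores) {
        int used = 0;
        for (Vm vm : vmsOnHost) {
            if (OCCUPIES_CAPACITY.contains(vm.state)) {
                used += vm.cpuCores;
            }
        }
        return used + requestedCores <= hostCpuCores;
    }

    public static void main(String[] args) {
        List<Vm> vms = Arrays.asList(new Vm(VmState.RUNNING, 8), new Vm(VmState.MIGRATING_IN, 4));
        System.out.println(hasEnoughCpu(vms, 16, 4)); // true  : 12 + 4 fits
        System.out.println(hasEnoughCpu(vms, 16, 8)); // false : migrating-in VM is not forgotten
    }
}
```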
* Squash commits to a single commit and rebase against master
Update marvin tests to use white list
* Fix marvin test failure
* Add new marvin negative tests cases
* Remove hard-coded hypervisor types in marvin tests
* Fix build error after rebase and add hugepagesless
* Fix readability of python code
* Fix failing test
* Adding cleanup of vms for negative tests
* Bug fixes - change config checks properly and block extraconfig in details
* Trim to compare the keys
* CR comments
* Don't skip extraconfig without exception
Co-authored-by: Boris Stoyanov - a.k.a Bobby <bss.stoyanov@gmail.com>
* 4.13:
only update powerstate if sure it is the latest (#3743)
ui: fix migrate host form no host popup (#3682)
client: jetty session timeout set after server is started (#3658)
Increase DHCP lease time to infinite (#3662)
Currently in cloudstack, when we click on "Acquire New Ip", it will
randomly acquire IP from the pool. With this enhancement, it is
possible to select the IP from the drop down IP list of that network.
Same thing applies for a VPC as well.
* Service layer changes for new way of tracking maintenance progress
* Fixes after offline code review
* Fix marvin tests
* Change state name and add documentation
* Fix test
* Fix and add more unit tests for different cases
* Fix and enhance Marvin Tests
* Fixes for corner cases
* More fixes and logging
* UI fixes
* Some minor changes and reducing VMs on host for more contained tests
* Fixed ssh client auth problem causing test failure
* Code review changes + fixes + some more logging
* Fix flaky tests by adding delays between host states
* Added fetching only enabled hosts for tests
* Make port blocking KVM specific and refactor to handle failure
* Make failing migrations due to tagged host instead of port blocking
* Added additional check for migrating VMs
* Refactor to use single place for methods checking maintenance states
* Add missing HA config keys
* Change time value to seconds
* Change Integer to Long
* Using ConfigKey defaultValue
* Do some code refactoring
* Simplify code
If the volume is in "Expunged" state then it should not be
considered towards total resource count of "primarystoragetotal"
field.
Currently cloudstack takes the volume into the resource calculation even if it
is expunged. The volume itself doesn't exist in primary
storage and hence it should not be considered towards the resource
calculation.
Steps to reproduce the issue:
1 . Get the resource count of "primarystoragetotal" of a particular domain.
2 . Create a VM with 5GB root disk size and stop it.
3 . Now the value of "primarystoragetotal" should be the initial value plus 5.
4 . Navigate to "volumes" of the VM and select "Download Volume" option.
5 . Once the volume is downloaded, expunge the VM.
6 . Get the resource count of "primarystoragetotal". It will be the same value as in step 3,
but it should be the same as the initial value obtained in step 1.
With this fix, the value obtained at step 6 will be the same as in step 1.
In case an older SSVM is removed without changing its state from Up
to Destroyed/Removed etc., the SSVM may be randomly selected for image
store related operations. This fix ensures that endpoints for an image
store are found only from a set of SSVM hosts that are not removed.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Diagnostics are hard when a snapshot fails due to a null pointer, because no stack trace or location of the error is logged, e.g.:
2019-10-21 12:55:00,056 DEBUG [o.a.c.s.m.AncientDataMotionStrategy] (Work-Job-Executor-131:ctx-80420156 job-10033827/job-10033828 ctx-4864e2f5) (logid:21454564) copy snasphot failed: java.lang.NullPointerException
* server: Do NOT clean up dhcp and dns when stopping a vm
According to the comment in PR #3608, dhcp and dns entries are cleaned up only when a VM is expunged.
Revert part of commit 8fb388e931.
* server: cleanup dns/dhcp entries in removeNic instead of finalizeExpunge
- Removes CentOS6/el6 packaging (voting thread reference https://markmail.org/message/u3ka4hwn2lzwiero)
- Add upgrade path from 4.13 to 4.14
- Enable live storage migration support for KVM by default as el6 is deprecated
- PRs using live storage migration
#2997 KVM VM live migration with ROOT volume on file storage type
#2983 KVM live storage migration intra cluster from NFS source and destination
#2298 CLOUDSTACK-9620: Enhancements for managed storage
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a network IP range is removed, the "vlan" stays mapped on pod_vlan_map; therefore, the method that lists the VLANs by pod id will return null VLANS.
This PR adds proper verifications to avoid null pointer exception when deploying VRs on a pod with removed VLANs. The exception was caused on getPlaceholderNicForRouter.
Problem: In Vmware, appliances that have options that must be answered before deployment are configurable through the vSphere vCenter user interface, but this is not possible from the CloudStack user interface.
Root cause: CloudStack does not handle vApp configuration options during deployments if the appliance contains configurable options. These configurations are mandatory for VM deployment from the appliance on Vmware vSphere vCenter: Vmware detects that there are mandatory configurations that the administrator must set before deploying the VM from the appliance.
Solution:
On template registration, after it is downloaded to secondary storage, the OVF file is examined and OVF properties are extracted from the file when available.
OVF properties extracted from templates after being downloaded to secondary storage are stored on the new table 'template_ovf_properties'.
A new optional section is added to the VM deployment wizard in the UI:
If the selected template does not contain OVF properties, then the optional section is not displayed on the wizard.
If the selected template contains OVF properties, then the optional new section is displayed. Each OVF property is displayed and the user must complete every property before proceeding to the next section.
If any configuration property is empty, then a dialog is displayed indicating that there are empty properties which must be set before proceeding
The specific OVF properties set on deployment are stored on the 'user_vm_details' table with the prefix: 'ovfproperties-'.
The VM is configured with the vApp configuration section containing the values that the user provided on the wizard.
Fix a regression bug that affects KVM local storage migration. Some of the desired execution flows for KVM local storage migration had been altered to allow only managed storage to execute. Fixed by allowing both managed and non-managed storage to execute.
Fixes #3521
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods:
DataStore getRandomImageStore(List<DataStore> imageStores);
To get an image store for reading purposes. The threshold capacity check will not be used here.
DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
To get an image store for writing purposes. The threshold capacity check will be used here and the store with the most free space will be returned. If no store with used storage below the threshold is found, NULL will be returned.
List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
To get a list of image stores for writing purposes which fulfill the threshold capacity check.
Correspondingly, DataStoreManager methods have been refactored to return similar values for a given zone.
Fixes #3287 - NULL will be returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning random secondary storage for writing, the storage with the most free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storages will be mounted during preparation.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
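Pulling the three signatures above together, a compile-ready sketch is shown below; the DataStore marker interface and the ImageStoreSelector name are placeholders to keep the example self-contained, not the actual ImageStoreProviderManager declaration.
```
import java.util.List;

// Placeholder marker type so the sketch compiles on its own.
interface DataStore { }

interface ImageStoreSelector {
    // Pick a store for read operations; no threshold capacity check is applied.
    DataStore getRandomImageStore(List<DataStore> imageStores);

    // Pick a store for write operations; the threshold capacity check is applied
    // and the store with the most free space is returned, or null when every
    // store is filled beyond the threshold.
    DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);

    // All stores that pass the threshold capacity check, for write operations.
    List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
}
```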
Currently, when refreshing disk usage stats, all KVM agents are asked to collect stats for all volumes. In setups with multiple KVM hosts where managed storage is used, not all volumes are attached to all KVM hosts, so this results in a large number of warnings in the KVM agent logs. This change introduces a filter step in case managed storage is used, so that the management server only requests stats from each KVM agent for the volumes that are connected to that host.
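A loose sketch of the filter step described above (hypothetical types and field names): only volumes actually connected to a given host are included in that host's stats request.
```
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ManagedVolumeStatsFilterExample {
    static class Volume {
        final String path; final String attachedHostId;
        Volume(String path, String attachedHostId) { this.path = path; this.attachedHostId = attachedHostId; }
    }

    // Only ask a KVM host about managed volumes that are actually connected to it.
    static List<String> volumesForHost(List<Volume> managedVolumes, String hostId) {
        return managedVolumes.stream()
                .filter(v -> hostId.equals(v.attachedHostId))
                .map(v -> v.path)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Volume> vols = Arrays.asList(new Volume("vol-1", "host-a"), new Volume("vol-2", "host-b"));
        System.out.println(volumesForHost(vols, "host-a")); // [vol-1] -> no warnings about vol-2 on host-a
    }
}
```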
Fixes checkstyle issue caused by previous commit 6a511fce40
from PR #3466 where a minor review fix did not address this. Merging this
one to unblock few other PRs after running a local build test.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Add CephSnapshotStrategy to handle RBD revert (rollback) snapshot. In order to support RBD revert (rbd_rollback), this PR adds a CephSnapshotStrategy class to handle Ceph/RBD snapshot actions.
* Add revoke certificates API
* Add background task to sync certificates
* Fix marvin test and revoke certificate
* Fix certificate sent to hypervisor was missing headers
* Fix background task for uploading certificates to hosts
If projects have many resource tags, it will take a long time to list projects.
Removing resource tag information from project_view fixes the issue.
Fixes #3178
Problem: Users don't know what keys/values to enter for template and VM details.
Root Cause: The feature does not exist that can list possible details and options.
Solution: Based on the possible VM and template details handled by the
codebase, those details were refactored and a list API is introduced
that can return users those details along with possible values. When
users add details now, they will be presented with a list of key details
and their possible options if any.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixes #2742
The UI supported ordering VPC Offerings but the API did not have that
support implemented. This makes the change in the updateVPCOfferings
and listVPCOfferings API calls, along with the necessary database
changes for supporting sorting of VPC Offerings.
Problem: The disk offering change is not reflected in cloud_usage database table.
Root Cause: The resizeVolume API does not publish the volume disk offering change event to the
cloud_usage database table.
Solution: This issue has been fixed by refactoring the resizeVolume API to publish this disk offering change for volumes that are either in Allocated or Ready state.
Moves the method that publishes events for volumes in Ready state from
the VolumeStateListener class to the orchestrateResizeVolume method in
the VolumeApiService.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Not able to configure a sort order for the zones that are listed in various views in the UI.
Root Cause: There is no mechanism to accept a sort key for existing zones, or a UI widget that would allow listing zones in the UI in a certain order.
Solution: The order of zones listed in various views in the UI can now be configured through the newly added “sort_key” field for the zone. It can be set using the updateZone API by providing the “sort_key” parameter for a zone, or by reordering the items in the zones list in the UI. The UI has been updated to show ordering controls in the zones list view. Database changes include updating the “data_center” table by adding a “sort_key” column (containing integer values and defaulting to zero).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: Admins don’t want to charge for IP address usage on certain (shared) networks.
Root Cause: There is no flag or detail for admins to provide using UI or API when creating networks to specify if they want IP address usage of the network hidden.
Solution: A new boolean hideipaddressusage flag is added to the createNetwork API, and a checkbox in the ‘Add guest network’ UI, for root admins to specify if they want the shared network’s IP address usage to be hidden in the listUsageRecords API response. The provided flag is saved as the ‘hideIpAddressUsage’ detail in the cloud.network_details table for the network. For existing (shared) networks, root admins can also specify the same boolean API parameter hideipaddressusage with the updateNetwork API request to configure the behaviour for an existing network. When the detail/flag is true, the IP address usage for the (shared) network is not exported in the listUsageRecords API response. The listNetworks API response will include the details of a network for root admins only. (Note: usage is still recorded in the usage database but not returned by the listUsageRecords API.)
The API flag works for any kind of network via the API, but the checkbox is only shown while creating shared networks in the UI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Fixes tests path from old layout to standard maven in src/test/java/
- Removed duplicate SnapshotManagerImpl at old path `server/src/com...`
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548
Live storage migration on KVM under these conditions:
From source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.
Additional notes:
To use this feature, set storage_motion_supported=1 in the hypervisor_capability table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS6 and possibly Ubuntu as KVM hosts, but not with CentOS7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not CentOS7; however, it works with CentOS6.
Fix for CentOS 7:
Create a repo file in /etc/yum.repos.d/:
[qemu-kvm-rhev]
name=oVirt rebuilds of qemu-kvm-rhev
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
enabled=1
skip_if_unavailable=1
gpgcheck=0
yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64
Reboot host
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Problem: When a multi-disk OVA template is uploaded, only the root disk is recognized, and VMs deployed using such a template only get the root disk provisioned.
Root Cause: The template processor for multi-disk OVAs was not used in the template upload processor.
Solution: Added support for local multi-disk OVA template upload. After a multi-disk OVA template is
uploaded, the mechanism that worked on multi-disk OVA templates registered using a URL is now also used to discover and create data-disk templates in the cloud.vm_template table and on the secondary storage.
To enable SSL on SSVMs :
• Upload the certificates like you usually do via the API or UI->Infrastructure tab
• Set the global settings secstorage.encrypt.copy, secstorage.ssl.cert.domain to appropriate values
along with the CPVM ones
• Restart management server (no need to destroy/restart SSVM (or the ssvm agent))
Test cases:
- Upload template and check it creates multi-disk folders on secondary
storage and entries in cloud.vm_template table
- Upload template and kill/shutdown management server. Then restart MS
to check if template sync works
- Copy template across zone of an uploaded template
Signed-off-by: Rohit Yadav rohit.yadav@shapeblue.com
This PR is for deactivating Ehcache in CloudStack since it is not usable. The first commit removes the default RMI cache peering configured for multicast, which most of the time cannot work. It also requires an interface to be up, which is not always the case while developing offline.
The second commit removes the configuration that activates caching on some DAOs.
Problems
The code in CS does not seem to fit any caching mechanism, especially due to the homemade DAO code. The three main flaws are the following:
Entities are not expected to be shared
There is quite a lot of code with method calls passing entity ID values as long, which does some object fetching. Without caching, this behavior creates distinct objects each time an entity with the same ID is fetched. With the cache enabled, the same object is shared among those methods. It has been seen that this generates side effects where code still expected unchanged entity attributes after calling different methods, thus generating exceptions/bugs.
DAO update operations are using search queries
Some parts of the code update entities based on a search query, therefore the whole cache must be invalidated (see GenericDaoBase: public int update(UpdateBuilder ub, final SearchCriteria<?> sc, Integer rows);).
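A toy illustration of this second flaw (a plain HashMap stands in for the cache and a Predicate stands in for a SearchCriteria; this is not the GenericDaoBase code): an update driven by a query can touch rows whose IDs the cache layer cannot enumerate, so the only safe move is to drop the whole cache.
```
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class SearchUpdateCacheExample {
    static class Entity {
        final long id; String state;
        Entity(long id, String state) { this.id = id; this.state = state; }
    }

    static final Map<Long, Entity> db = new HashMap<>();    // stand-in for the database
    static final Map<Long, Entity> cache = new HashMap<>(); // ID-keyed entity cache

    // Update rows matched by an arbitrary predicate (stand-in for a SearchCriteria).
    // We cannot tell which cached IDs were touched without re-running the query,
    // so the whole cache is invalidated, as described above.
    static int updateWhere(Predicate<Entity> criteria, String newState) {
        int rows = 0;
        for (Entity e : db.values()) {
            if (criteria.test(e)) { e.state = newState; rows++; }
        }
        cache.clear(); // full invalidation
        return rows;
    }

    public static void main(String[] args) {
        db.put(1L, new Entity(1, "Ready"));
        cache.put(1L, new Entity(1, "Ready")); // would be stale without invalidation
        updateWhere(e -> "Ready".equals(e.state), "Expunged");
        System.out.println(cache.containsKey(1L)); // false: cache dropped
    }
}
```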
Entities based on views joining multiple tables
There are quite a lot of entities based on SQL views joining multiple entities into a single object. Enabling caching on those would require a mechanism to link and cross-remove related objects whenever one of the sub-entities is changed.
Final word
Based on the previously discussed points, the best approach IMHO would be to move out of the custom DAO framework in CS and use a well-known one (out of scope of this change, of course). It will handle caching well, as well as the joins made by the views in the code. It's not an easy change, but along the way it would fix a lot of issues and add a proven / robust framework to an important part of the code.
Since the CloudStack virtual router was redesigned in version 4.6, it has been observed that the DHCP leases file is not persistent across network operations. This causes conflicts with guest VMs' static IPs, causing these static IPs to not be renewed by the DHCP server running on isolated and VPC networks' virtual routers (dnsmasq). On stopping or destroying a VM, its dhcp/dns records are not removed from the virtual router, causing ghost effects.
Fixes #3272 Fixes #3354
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* DPDK vHost User mode selection
* SQL text field and DPDK classes refactor
* Fix NullPointerException after refactor
* Fix unit test
* Refactor details type
This commit simplifies the generateDestPath method and fixes an issue where an extra file, named 'null', was created on the target storage pool during VM local storage volume migration. Without this fix, the VM is migrated and there is no data loss; however, 193 KB is allocated for the unused file named 'null' and the file stays on the target storage.
Added changes for creating service offerings for specified domain(s) and zone(s).
Fixed checkAccess for disk offerings.
Fixed list APIs for disk and service offerings.
UI changes for creating disk, service offerings for specified domain(s) and zone(s).
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
Allows creating storage offerings associated with particular domain(s) and zone(s). In the create disk/storage offering form UI, a multi-select control has been added to select the desired zone(s), and the domain select element has been made multi-select.
createDiskOffering API has been modified to allow passing list of domain and zone IDs with keys domainids and zoneids respectively. These lists are stored in DB in cloud.disk_offering_details table with 'domainids' and 'zoneids' key as string of comma separated list of IDs. Response for create, update and list disk offering APIs will return domainids, domainnames, zoneids and zonenames in details object of offering.
listDiskOfferings API has been modified to allow passing zoneid to return only offerings which are associated with the zone.
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* Keep connection alive when on maintenance
* Refactor cancel maintenance and unit tests
* Add marvin tests
* Refactor
* Changing the way we get ssh credentials
* Add check on SSH restart and improve marvin tests
Problem: Custom compute offerings do not allow setting min and max values for CPU and RAM for custom VMs.
Root Cause: Custom compute offerings cannot be created with a given range of CPU number and memory; they allow only fixed values.
Solution: The createServiceOffering API has been modified to allow setting a defined range for CPU number and memory. Also, the UI form for compute offering creation gets a new field named 'compute offering type’ with the values Fixed, Custom Constrained and Custom Unconstrained. It allows the creation of compute offerings either with a fixed CPU speed and memory (fixed compute offering), with a range of CPU number and memory (custom constrained compute offering), or without predefined CPU number, CPU speed and memory (custom unconstrained compute offering).
To allow the user to set CPU number, CPU speed and memory during VM deployment, the UI form for VM deployment has been modified to provide controls to change these values. These controls are depicted in screenshots below for the custom constrained and custom unconstrained compute offering types.
Sample API calls using cmk to create a constrained service offering and deploy a VM using it:
create serviceoffering name=Constrained displaytext=Constrained customized=true mincpunumber=2 maxcpunumber=4 cpuspeed=400 minmemory=256 maxmemory=1024
deploy virtualmachine displayname=ConstrainedVM serviceofferingid=60f3e500-6559-40b2-9a61-2192891c2bd6 templateid=8e0f4a3e-601b-11e9-9df4-a0afbd4a2d60 zoneid=9612a0c6-ed28-4fae-9a48-6eb207af29e3 details[0].cpuNumber=3 details[0].memory=800
Signed-off-by: Abhishek Kumar <abhishek.kumar@shapeblue.com>
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
merging
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* skip getting used bytes for volumes that are not in Ready state
* updated log message
* filter snapshots by state backedup
* removed * import
* filter templates by state 'DOWNLOADED'
* refactored getUsedBytes to use O(1) queries
* querying for ready volumes instead of filtering in memory
* make listByStoreIdInReadyState more generic ex listByStoreIdAndState
* updated snapshot search criteria for listByStoreIdAndState
* updated template search criteria for listByPoolIdAndState
* fixed typo in search criteria for listByTemplateAndState
* fixed typo in search criteria for templates in listByPoolIdAndState
With this patch b766bf7
we started tracking disks in Attaching state so that other attach requests can fail gracefully. However, this missed the case where disks were in Allocated state but an attach was requested.
For the use case where users want to attach a disk that is in Allocated state but not Ready, we need the Allocated-Attaching transition as well. We must take care of returning to the original state - Allocated or Ready - when the attach request has completed.
For the use case of unstarted VMs, the disk must proceed as follows: Allocated -> Attaching -> Allocated. When the VM is started, the disk is "created" and a pool is assigned. For the use case of started VMs it's more trivial, and the disk proceeds as follows: Ready -> Attaching -> Ready.
Test this by creating a VM with "startvm=false", creating a disk and trying to attach it in Allocated state. It would give an exception on the latest 4.11, but is fixed by this patch.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
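A small sketch of the transitions described above (hypothetical enum, not the CloudStack Volume state machine): attaching is allowed from Allocated or Ready, and the volume returns to whichever of those states it started in once the attach completes.
```
public class VolumeAttachStateExample {
    enum VolumeState { ALLOCATED, READY, ATTACHING }

    // The state a volume should return to once the attach request completes.
    static VolumeState stateAfterAttach(VolumeState stateBeforeAttach) {
        switch (stateBeforeAttach) {
            case ALLOCATED: // unstarted VM: Allocated -> Attaching -> Allocated
            case READY:     // started VM:   Ready     -> Attaching -> Ready
                return stateBeforeAttach;
            default:
                throw new IllegalStateException("attach not allowed from " + stateBeforeAttach);
        }
    }

    public static void main(String[] args) {
        System.out.println(stateAfterAttach(VolumeState.ALLOCATED)); // ALLOCATED
        System.out.println(stateAfterAttach(VolumeState.READY));     // READY
    }
}
```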
Problem: Users are billed for destroyed VMs with VM snapshots because usage records do not reflect that the VM and its VM snapshots are removed.
Root Cause: The destroyVirtualMachine and expungeVirtualMachine APIs were removing VM snapshots but not generating the VMSNAPSHOT.DELETE usage event, due to which the VM snapshots were not marked as removed in the usage_vmsnapshot table.
Solution: The issue was fixed by emitting the proper usage event for all the VM snapshots of a VM that is destroyed.
* Migrate template to target host if needed.
Fix KVM VM local storage live migration by migrating its template to the
target host if needed.
* Address reviewer and add method that updates the DB template reference
* Remove deprecated Config.PrimaryStorageDownloadWait
* Code formating of @Inject to follow checkstyle
* feature: add libvirt / qemu io bursting
Adds the ability to set bursting features from libvirt / qemu
This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.
https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/
* updates per rafael et al
* The snapshot.backup.rightafter configuration variable was removed by:
SHA: 6bb0ca2f85
This adds it back, though named snapshot.backup.to.secondary now instead.
This global parameter, once set, will allow you to prevent automatic backups of
snapshots to secondary storage, unless they're actually needed.
Fixes #3096
* updates per review
* api: add command to list management servers
* api: add number of management servers in listInfrastructure command
* ui: add block for management servers on infra page
* api name resolution method cleanup
Offerings can co-exist where one does provide Security Grouping in the
network, but other guest Networks have no Security Grouping.
In V(X)LAN isolation environments the L2 separation is handled by V(X)LAN
and protection between Instances is handled by Security Grouping.
There are multiple scenarios possible where one network has Security Grouping
enabled because that is required in that network.
In another network in the same zone, it could be a choice to have
Security Grouping disabled and allow all traffic to flow.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations
* Fix indentation of marvin test file and reformat against PEP8
* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.
* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one
* Fix unhandled NPE during VM migration
* Revert back to distinct event descriptions for VM to host or storage pool migration
* Reformat test_primary_storage file against PEP-8 and Remove unused imports
* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
With IPv6 we are not using DHCP to allocate addresses; using
StateLess Address Auto Configuration (SLAAC), an Instance will calculate
its own address based on the Router Advertisements sent out by the
routers in the network.
This Advertisement contains the IPv6 subnet in use and
allows calculating the stable address the Instance will obtain based
on its MAC address.
The existing code is 'dead code': it has been written, but was never
used by any production code.
SLAAC only works properly with subnets that are exactly 64 bits in size.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
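For reference, the stable-address calculation mentioned above is the standard SLAAC/EUI-64 scheme (a generic illustration, not CloudStack's code): the 48-bit MAC is split in half, ff:fe is inserted in the middle, the universal/local bit of the first octet is flipped, and the result is appended to the 64-bit prefix.
```
public class Eui64Example {
    // Derive the 64-bit interface identifier from a MAC address (EUI-64).
    static String interfaceId(String mac) {
        String[] o = mac.split(":");
        int first = Integer.parseInt(o[0], 16) ^ 0x02; // flip the universal/local bit
        // aa:bb:cc:dd:ee:ff -> aabb:ccff:fedd:eeff (with the flipped first octet)
        return String.format("%02x%s:%sff:fe%s:%s%s", first, o[1], o[2], o[3], o[4], o[5]);
    }

    public static void main(String[] args) {
        // 2001:db8:0:1::/64 prefix + MAC 52:54:00:12:34:56
        System.out.println("2001:db8:0:1:" + interfaceId("52:54:00:12:34:56"));
        // -> 2001:db8:0:1:5054:00ff:fe12:3456
    }
}
```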
It's incorrect to use findIncludingRemovedBy and
listIncludingRemovedBy for the common list and find operations.
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
A corner case was found on 4.11.2 for #2493 leading to an infinite loop in state PrepareForMaintenance
To prevent such cases, in which failed migrations are detected but still running on the host, this feature adds a new cluster setting host.maintenance.retries which is the number of retries before marking the host as ErrorInMaintenance if migration errors persist.
How Has This Been Tested?
- 2 KVM hosts, pick one which has running VMs as H
- Block migrations ports on H to simulate failures on migrations:
iptables -I OUTPUT -j REJECT -m state --state NEW -m tcp -p tcp --dport 49152:49215 -m comment --comment 'test block migrations'
iptables -I OUTPUT -j REJECT -m state --state NEW -m tcp -p tcp --dport 16509 -m comment --comment 'test block migrations'
- Put host H in Maintenance
- Observe that host is indefinitely in PrepareForMaintenance state (after this fix it goes into ErrorInMaintenance after retrying host.maintenance.retries times)
This PR adds the possibility to select a checkbox that passes the parameter bypassvlanoverlapcheck to the createNetwork ajax request. The checkbox was added for the Guest Network as well as for the L2 Guest Network. For the L2 Guest Network, a backend check for the existence of the bypassvlanoverlapcheck flag was added.
* Allow KVM VM live migration with ROOT volume on file
* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests
* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
This commit allows deploying VMs with a specific IPv4 address.
DirectPodBasedNetworkGuru does not support requesting a custom
IP-Address while creating a new NIC/Instance, throwing the following
error:
Error 530: Does not support custom ip allocation at this time:
NicProfile[0-0-null-null-null
Some use-cases prefer the ability to request the IPv4 address which the
Instance will get.
This implementation adds unit test cases for coverage, and it was manually
tested in Basic Networking. I can perform more tests if requested.
With earlier work in Basic Networking and the security group provider IPv6 is
supported and we can allow IPv6 to be supplied in networks with SG enabled.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The view "service_offering_view" doesn't include removed SOs, as a result when SO is removed, the bug happens. The PR introduces a change for resource calculation changing "service_offering_view" to "service_offering" table which has all service offerings.
Must be fixed in:
4.12
4.11
Fixes: #3009
This adds a new API updateVmwareDc that allows admins to update the
VMware datacenter details of a zone. It also recursively updates
the cluster_details for any username/password updates
as well as updates the url detail in cluster_details table and guid
detail in the host_details table with any newly provided vcenter
domain/ip. The update API assumes that there is only one vCenter per
zone. And, since the username/password for each VMware host could be different
than what gets configured for vcenter at zone level, it does not update the
username/password in host_details.
Previously, one has to manually update the db with any new vcenter details for the zone.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Remove some unused Classes
These classes were deleted because they have no references in our code base. They are not in Spring execution flow nor instantiated with "new":
- com.cloud.agent.api.CheckStateAnswer
- com.cloud.agent.api.StartupVMMAgentCommand
- com.cloud.agent.api.routing.UserDataCommand
- remove from description at
com.cloud.configuration.Config.ExecuteInSequenceNetworkElementCommands
enum
- com.cloud.agent.api.storage.UpgradeDiskCommand
- com.cloud.agent.api.storage.CreatePrivateTemplateCommand
- com.cloud.agent.api.storage.DestroyAnswer
- Note: "FIXME: Should have an DestroyAnswer" at
com.cloud.storage.resource.StoragePoolResource
- com.cloud.agent.api.storage.UpgradeDiskAnswer
- com.cloud.agent.api.storage.ManageVolumeAvailabilityAnswer
- com.cloud.agent.api.storage.ManageVolumeAvailabilityCommand
- com.cloud.exception.UsageServerException
- com.cloud.info.SecStorageVmLoadInfo
- com.cloud.serializer.SerializerHelper
* PR#1448 update description of 'execute.in.sequence.network.element.commands' param
Update the description of the 'execute.in.sequence.network.element.commands' parameter to reflect an unused command that has been removed. The removed command class is 'UserDataCommand'.
* Add cloud schema to update SQL
This force stops old VRs when performing a rolling restart with
cleanup=true. This will ensure that VRs are powered off quickly rather than
waiting longer for the normal ACPI shutdown. During testing, it was found
that on VMware VM stops are slow compared to XenServer and KVM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This adds a global setting for admins who may not want the rolling
restart of routers or are seeing any issues around it. In future, this
setting may be removed.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes #2719 where the private gateway IP might be incorrectly
programmed on a guest network nic. The VR now checks ipassoc
requests by MAC address rather than the provided nic/device id, in case
the latter is wrong.
The root cause is that the device id information is lost when aggregated
commands are created upon starting of a new VPC VR; without the correct
device id in the ip_associations json it mis-programs the VR.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
These boolean-return methods are named "getXXX".
Other boolean-return methods are named "isXXX".
Considering these methods return boolean values, it is clearer and more consistent to rename them to "isXXX".
(rebase #2602 and #2816)
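A trivial before/after illustration of the naming convention referred to above (hypothetical field, not a specific CloudStack class):
```
public class BooleanAccessorNamingExample {
    private final boolean dynamicallyScalable = true;

    // Before: boolean accessor named like a regular getter.
    public boolean getDynamicallyScalable() { return dynamicallyScalable; }

    // After: renamed to the conventional isXXX form for boolean-returning methods.
    public boolean isDynamicallyScalable() { return dynamicallyScalable; }

    public static void main(String[] args) {
        System.out.println(new BooleanAccessorNamingExample().isDynamicallyScalable());
    }
}
```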
This fixes a regression introduced in #2799, by exporting $TYPE
before the `patch` is called to patch/extract archives for ssvm/cpvm.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The VMware router will be rebooted based on #2794; per the current config
the VRs on reboot will go through fsck checks, slowing down the deployment
process by a few seconds. This will ensure that fsck checks are done
on every 3rd boot of the VR. The `4` is used because the 1st boot is done
during the build of the systemvmtemplate appliance.
Add upgrade path for a new 4.11.2 systemvmtemplate.
Other changes:
- Add support for XS 7.5. Fixes #2834.
- Reboot VR only if mgmt gw is not pingable on vmware.
- Enable passive ftp by enabling nf_conntrack_helper. This is a change in behaviour since Linux 4.7.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add managed storage pool constraints to MigrateWithVolume API method
* Apply mike's suggestions
* Apply Mike's suggestion in a second review
* Mike's suggestions
* Confused bit
* just executeManagedStorageChecks
* Created methods `executeManagedStorageChecksWhenTargetStoragePoolNotProvided` and `executeManagedStorageChecksWhenTargetStoragePoolProvided`
* improve "executeManagedStorageChecksWhenTargetStoragePoolNotProvided"
* Fix "findVolumesThatWereNotMappedByTheUser" method
* Apply Mike's suggestion to improve "createMappingVolumeAndStoragePool" method
* Unit tests to cover modified code
* CLOUDSTACK-9473: storage pool capacity check when volume is resized or migrated
The storage pool checker is not being called on resize and migrate volume.
This may lead to an allocated storage percentage above 100%.
Setup:
1 VMware cluster with 2 Hosts.
Executed Steps:
Applied the following global settings:
storage.overprovisioning.factor = 1
pool.storage.allocated.capacity.disablethreshold = 1
pool.storage.capacity.disablethreshold = 1
Restarted management server
Executed resize and migrate volume and observed that the storage pool checker is not performed on resizeVolume and migrateVolume.
Result:
Root cause analysis shows storage pool checker is not called when doing migration and resizing.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
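A minimal sketch of the kind of allocation check described above, using the settings named in the setup (storage.overprovisioning.factor and pool.storage.allocated.capacity.disablethreshold); the method name and parameters are illustrative, not the actual StorageManager code.
```
public class PoolAllocationCheckExample {
    // Returns true when adding `requestedBytes` keeps the pool's allocated fraction
    // at or below the disable threshold (0..1), accounting for over-provisioning.
    static boolean allocationAllowed(long capacityBytes, long allocatedBytes, long requestedBytes,
                                     double overProvisioningFactor, double disableThreshold) {
        double usableBytes = capacityBytes * overProvisioningFactor;
        double futureAllocated = allocatedBytes + requestedBytes;
        return futureAllocated / usableBytes <= disableThreshold;
    }

    public static void main(String[] args) {
        long oneTb = 1L << 40;
        // Pool 90% allocated: a 200 GiB resize would exceed a 1.0 threshold -> rejected.
        System.out.println(allocationAllowed(oneTb, (long) (0.9 * oneTb), 200L << 30, 1.0, 1.0)); // false
        // Pool 50% allocated: the same resize fits -> allowed.
        System.out.println(allocationAllowed(oneTb, (long) (0.5 * oneTb), 200L << 30, 1.0, 1.0)); // true
    }
}
```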
In integration work for CCS I found that the service call UserVmService.destroyVm(long uuid, boolean expunge) does not honour the expunge flag. I traced it down to the implementation VirtualMachineManagerImpl.destroy(String vmUuid, boolean expunge).
Testing: manual testing so far; testing will pose some crosscutting challenges as the behaviour and implementation are separated by about five layers of abstraction.
* rename package from "src" to "org.apache.cloudstack.storage.vmsnapshot"
* Remove Empty test class "SnapshotDataFactoryTest"
* Remove redundant imports (importing a class from its own package)
* Fix the problem at #1740 when it loads all snapshots in the primary storage
While checking a problem with @mike-tutkowski at PR #1740, we noticed that the method `org.apache.cloudstack.storage.snapshot.SnapshotDataFactoryImpl.getSnapshots(long, DataStoreRole)` was not loading the snapshots in `snapshot_store_ref` table according to their storage role. Instead, it would return a list of all snapshots (even if they are not in the storage role sent as a parameter) saying that they are in the storage that was sent as parameter.
* Add unit test
* 4.11:
Changed the implementation of isVolumeOnManagedStorage(VolumeInfo) to check if the data store in question is for primary storage (and added a unit test from Daan Hoogland)
vmware: reboot VR after mac updates (#2794)
* Cleaup and code-formatting POM files
* Remove obsolete mycila license-maven-plugin
* Remove obsolete console-proxy/plugin project
* Move console-proxy-rdbconsole under console-proxy parent
* Use correct parent path for rdpconsole
* Order alphabetally items in setnextversion.sh
* Unifiy License header in POMs
* Alphabetic order of modules definition
* Extract all defined versions into parent pom
* Remove obsolete files: version-info.in, configure-info.in
* Remove redundant defaultGoal
* Remove useless checkstyle plugin from checkstyle project
* Order alphabetally items in pom.xml
* Add aditional SPACEs to fix debian build
* Don't execute checkstyle on parent projects
* Use UTF-8 encoding in building checkstyle project
* Extract plugin versions into properties
* Execute PMD plugin on all the projects with -Penablefindbugs
* Upgrade maven plugins to latest version
* Make sure to always look for apache parent pom from repository
* Fix incorrect version grep in debian packaging
* Fix rebase conflicts
* Fix rebase conflicts
* Remove PMD for now to be fixed on another PR
This PR fixes NPE with the provisionCertificateCmd when reconnect is set to True.
Also fixes the following Marvin test failures:
- test_certauthority_root.py
Fixes the version in pom etc. to be consistent with versioning pattern as X.Y.Z.0-SNAPSHOT after a minor release.
Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
This ensures that fewer mount points are made on hosts for either
primary storage pools or secondary storage pools.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.11:
comment on unencryption
ui: fix create VPC dialog box failure when zone is SG enabled (#2704)
CLOUDSTACK-10381: Fix password reset / reset ssh key with ConfigDrive
isisnot=
extra message
debug message
imports
update without decrypt doesn't work
set unsensitive attributes as not 'Secure'
remove old config artifacts from update path
Changes in PR #2508 have caused network restart to fail in a Nuage setup,
as the new VR takes the same IP as the old one, and the old VR is still running.
Nuage doesn't support multiple VMs having the same IP.
We delay provisioning the interfaces in VSD until the old VR interface is released.
* Create unit test cases for 'ConfigDriveBuilder' class
* add method 'getProgramToGenerateIso' as suggested by rohit and Daan
* fix encoding for base64 to StandardCharsets.US_ASCII
* fix MockServerTest.testIsMockServerCanUpgradeConnectionToSsl()
This is another method that is causing Jenkins to fail for almost a month
Example: A VM that uses managed storage is stopped. The VM is then started on a different host in the same cluster. The Start operation fails.
To get around this issue, you must either start the VM up on the same host or on a host in a different cluster.
The reason is due to a slightly erroneous check in VolumeOrchestrator.prepare.
To solve this issue, we should be checking if the cluster ID changes, not if the host ID changes.
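A tiny sketch of the corrected condition described above (hypothetical method names, not the actual VolumeOrchestrator code): the decision should react to a cluster change, not merely a host change, so starting a stopped VM on another host in the same cluster succeeds.
```
public class ManagedStoragePrepareCheckExample {
    // Erroneous check: any host change is treated as requiring storage re-preparation,
    // which breaks starting a stopped VM on another host in the SAME cluster.
    static boolean needsReprepareOldCheck(long lastHostId, long newHostId) {
        return lastHostId != newHostId;
    }

    // Fixed check, per the description above: only a cluster change matters.
    static boolean needsReprepareFixedCheck(long lastClusterId, long newClusterId) {
        return lastClusterId != newClusterId;
    }

    public static void main(String[] args) {
        // Same cluster (id 10), different host (101 -> 102):
        System.out.println(needsReprepareOldCheck(101, 102));   // true  (old, wrong)
        System.out.println(needsReprepareFixedCheck(10, 10));   // false (fixed)
    }
}
```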
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive ISO files on primary storage; the default is false, causing them to be hosted on secondary storage. The current support is limited on the hypervisor resource side and in the current implementation is limited to `KVM` only. The next big change is that the config drive is created at a temporary location by the management server and shipped to either the KVM or SSVM agent via the cmd-answer pattern, the data of which is not logged in logs. This saves us from adding a genisoimage dependency on the cloudstack-agent pkg.
The APIs to reset the ssh public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating the existing ISO. If there are objections I'll re-add the strategy of detaching+attaching a new config ISO as the way of updating. In the refactored implementation, the folder name is changed to lower-cased configdrive. And during VM start, migration or shutdown/removal, if primary storage is enabled for use, the KVM agent will handle cleanup tasks; otherwise the SSVM agent will handle them.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a user shuts down their VM from the guest OS (and VM HA is enabled), the VM just powers itself back on. Our environment is on KVM hosts.
CloudStack does not know the difference between a VM failing or being shutdown from within the guest OS.
This is a major pain point for all our users - especially since they don't pay for VMs when they are shut off. It is not intuitive for end-users to understand why they can't shut down VMs from within the guest OS, especially when they all come from (non-cloudstack) VMware and Hyper-V environments where this is not an issue.
However, if a host fails, we need VM HA to still work.
This PR creates a configuration option "ha.vm.restart.hostup". With this option set to false, if CloudStack sees a VM shut down out-of-band, but the host it was on is still online, then it won't power the VM back on. The logic is that since the host is online, the VM was most likely shut down from the guest OS.
For when a host actually fails, standard VM HA logic takes over and powers on VMs (if they have VM HA enabled) if the host they were on fails.
If that "ha.vm.restart.hostup" option is true (the default to match current functionality), it works like always, and even in-guest shutdowns of VMs causes CloudStack to power back on the VM.
* Primary Storage count for an account does not decrease when a Data Disk is deleted
When a data disk is created and not attached to a running VM, "deleteVolume" will not decrement the used primary storage count in the account's accounting information. The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method.
Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with the listAccounts API - It is the same as before deleting the data disk (but it should have gone back to the value from step 2!)
* formatting and cleanups
* fix imports that were wrongly changed during rebase
This fixes config drive to use VM's user provided host-name instead of
the internal VM instance ID for hostname related config in both
cloudstack and openstack metadata bundled in the ISO.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-9184: Fixes#2631 VMware dvs portgroup autogrowth
This deprecates the vmware.ports.per.dvportgroup global setting.
The vSphere Auto Expand feature (introduced in vSphere 5.0) will take
care of dynamically increasing/decreasing the dvPorts when running out
of distributed ports . But in case of vSphere 4.1/4.0 (If used), as this
feature is not there, the new default value (=> 8) have an impact in the
existing deployments. Action item for vSphere 4.1/4.0: Admin should
modify the global configuration setting "vmware.ports.per.dvportgroup"
from 8 to any number based on their environment because the proposal
default value of 8 would be very less without auto expand feature in
general. The current default value of 256 may not need immediate
modification after deployment, but 8 would be very less which means
admin need to update immediately after upgrade.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a rolling restart of VRs when networks are restarted
with cleanup option for isolated and VPC networks. A make redundant option is
shown for isolated networks now in UI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Using a hierarchy of database version rather than a flat
list of them. Adding a new schema upgrade path was really
cumbersome and error-prone, because we needed to maintain
a flat map of versions and their corresponding list of
upgrade paths (`DbUpgrade`). Instead we're using a logical
hierarchy structure of versions:
```
DatabaseVersionHierarchy.builder()
.next("4.0.0" , new Upgrade40to41())
.next("4.0.1" , new Upgrade40to41())
.next("4.0.2" , new Upgrade40to41())
.next("4.1.0" , new Upgrade410to420())
.next("4.1.1" , new Upgrade410to420())
.next("4.2.0" , new Upgrade420to421())
...
.next("4.2.1" , new Upgrade421to430())
.next("4.9.3.0" , new Upgrade4930to41000())
.next("4.10.0.0", new Upgrade41000to41100())
.next("4.11.0.0", new Upgrade41100to41110())
.build();
```
With this change, when we need to add a new version upgrade
path, we only need to add it in correct place in the hierarchy
rather than add that in dozens of places in `_upgradeMap`.
Supporting ConfigDrive user data on L2 networks.
Add UI checkbox to create L2 network offering with config drive.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10147 Disabled Xenserver Cluster can still deploy VM's. Added code to skip disabled clusters when selecting a host (#2442)
(cherry picked from commit c3488a51db)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10318: Bug on sorting ACL rules list in chrome (#2478)
(cherry picked from commit 4412563f19)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10284: Creating a snapshot from VM Snapshot generates error if hypervisor is not KVM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10221: Allow IPv6 when creating a Basic Network (#2397)
Since CloudStack 4.10 Basic Networking supports IPv6 and thus
should be allowed to be specified when creating a network.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(cherry picked from commit 9733a10ecd)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10214: Unable to remove local primary storage (#2390)
Allow admins to remove primary storage pool.
Cherry-picked from eba2e1d8a1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* dateutil: consistency of tzdate input and output (#2392)
Signed-off-by: Yoan Blanc <yoan.blanc@exoscale.ch>
Signed-off-by: Daan Hoogland <daan.hoogland@shapeblue.com>
(cherry picked from commit 2ad5202823)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10054:Volume download times out in 3600 seconds (#2244)
(cherry picked from commit bb607d07a9)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* When creating a new account (via domain admin) it is possible to select “root admin” as the role for the new user (#2606)
* create account with domain admin showing 'root admin' role
Domain admins should not be able to assign the role of root admin to new users. Therefore, the role ‘root admin’ (or any other of the same type) should not be visible to domain admins.
* License and formatting
* Break long sentence into multiple lines
* Fix wording of method 'getCurrentAccount'
* fix typo in variable name
* [CLOUDSTACK-10259] Missing float part of secondary storage data in listAccounts
* [CLOUDSTACK-9338] ACS not accounting resources of VMs with custom service offering
ACS is accounting the resources properly when deploying VMs with custom service offerings. However, there are other methods (such as updateResourceCount) that do not execute the resource accounting properly, and these methods update the resource count for an account in the database. Therefore, if a user deploys VMs with custom service offerings, and later this user calls the “updateResourceCount” method, it (the method) will only account for VMs with normal service offerings, and update this as the number of resources used by the account. This results in fewer resources being accounted for the given account than are actually in use. The problem becomes worse because if the user starts to delete these VMs, it is possible to reach negative values of resources allocated (breaking all of the resource limiting for accounts). This is a very serious attack vector for public cloud providers!
* [CLOUDSTACK-10230] User should not be able to use removed “Guest OS type” (#2404)
* [CLOUDSTACK-10230] User is able to change to “Guest OS type” that has been removed
Users are able to change the OS type of VMs to a “Guest OS type” that has been removed. This becomes a security issue when we try to force users to use HVM VMs (the Meltdown/Spectre mitigations). A removed “guest os type” should not be usable by any users in the cloud.
* Remove trailing lines that are breaking build due to checkstyle compliance
* Remove unused imports
* fix classes that were in the wrong folder structure
* Updates to capacity management
This moves db upgrade paths and checks around a new systemvmtemplate
for 4.11.1. The new systemvmtemplate, compared to the 4.11.0 template,
is slightly smaller and has meltdown/spectre fixes among a few other
security fixes from Debian and changes to cloud-early-config.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This adds support for XenServer 7.3 and 7.4, and XCP-ng 7.4, as hypervisor hosts. Fixes #2523.
This also fixes the issue of 4.11 VRs being stuck in Starting for up to 10 minutes before they come online.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* [CLOUDSTACK-10323] Allow changing disk offering during volume migration
This is a continuation of work developed on PR #2425 (CLOUDSTACK-10240), which provided root admins an override mechanism to move volumes between storage system types (local/shared) even when the disk offering would not allow such an operation. To complete the work, we will now provide a way for administrators to enter a new disk offering that can reflect the new placement of the volume. We will add an extra parameter to allow the root admin to specify a new disk offering for the volume. Therefore, when the volume is being migrated, it will be possible to replace the disk offering to reflect the new placement of the volume.
The API method will have the following parameters:
* storageid (required)
* volumeid (required)
* livemigrate (optional)
* newdiskofferingid (optional) – this is the new parameter
The expected behavior is the following:
* If “newdiskofferingid” is not provided, the current behavior is maintained. The override mechanism will also keep working as we have seen so far.
* If the “newdiskofferingid” is provided by the admin, we will execute the following checks:
** new disk offering mode (local/shared) must match the target storage mode. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** we will check if the new disk offering tags match the target storage tags. If it does not match, an exception will be thrown and the operator will receive a message indicating the problem.
** check if the target storage has the capacity for the new volume. If it does not have enough space, then an exception is thrown and the operator will receive a message indicating the problem.
** check if the size of the volume is the same as the size of the new disk offering. If it is not the same, we will ALLOW the change of the disk offering, and a warning message will be logged.
We execute the change of the Disk offering as soon as the migration of the volume finishes. Therefore, if an error happens during the migration and the volume remains in the original storage system, the disk offering will keep reflecting this situation.
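The checks above could be sketched roughly as follows; the types and helper names below are simplified stand-ins for illustration only, not the actual CloudStack classes:
```
import java.util.Set;

public class MigrateVolumeChecksSketch {
    // Simplified placeholders for the real CloudStack entities.
    record DiskOffering(boolean useLocalStorage, Set<String> tags, long diskSizeBytes) {}
    record StoragePool(boolean isLocal, Set<String> tags, long freeCapacityBytes) {}

    static void validateNewDiskOffering(DiskOffering newOffering, StoragePool targetPool, long volumeSizeBytes) {
        // 1) offering mode (local/shared) must match the target storage mode
        if (newOffering.useLocalStorage() != targetPool.isLocal()) {
            throw new IllegalArgumentException("New disk offering storage mode does not match the target storage pool");
        }
        // 2) offering tags must be satisfied by the target storage tags
        if (!targetPool.tags().containsAll(newOffering.tags())) {
            throw new IllegalArgumentException("New disk offering tags do not match the target storage pool tags");
        }
        // 3) target storage must have capacity for the volume
        if (targetPool.freeCapacityBytes() < volumeSizeBytes) {
            throw new IllegalArgumentException("Target storage pool does not have enough capacity");
        }
        // 4) size mismatch is allowed; only a warning is logged
        if (newOffering.diskSizeBytes() != volumeSizeBytes) {
            System.out.println("WARN: volume size differs from the new disk offering size; proceeding anyway");
        }
    }
}
```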
* Code formatting
* Adding a test to cover migration with new disk offering (#4)
* Adding a test to cover migration with new disk offering
* Update test_volumes.py
* Update test_volumes.py
* fix test_11_migrate_volume_and_change_offering
* Fix typo in Java doc
* CLOUDSTACK-10289: Config Drive Metadata: Use VM UUID instead of VM id
* CLOUDSTACK-10288: Config Drive Userdata: support for binary userdata
* CLOUDSTACK-10358: SSH keys are missing on Config Drive disk in some cases
* fix https://issues.apache.org/jira/browse/CLOUDSTACK-10356
* delete patch file
* Update ResourceCountDaoImpl.java
* fix some formatting
* fix code
* fix error message in VolumeOrchestrator
* add a null check statement
* delete unused imports
* use BooleanUtils to check Boolean
* fix error message
* delete unused function
* delete the deprecated function updateDomainCount
* add error log and throw exception in ProjectManagerImpl.java
* Add stack traces information
* update stack trace info
* update stack trace to make them consistent
* update stack traces
* update stacktraces
* update stacktraces for other similar situations
* fix some other situations
* enhance other situations
* Create database upgrade from 4.11.0.0 to 4.11.1.0 & VMWare version to OS mappings (#2490)
* Create database upgrade from 4.11.0.0 to 4.11.1.0. Add missing VMWare version to OS mapping SQL in the schema-41100to41110.sql.
* add unit test and add 4.11.0.0 entry to _upgradeMap
* upgrade 4.11.1 to 4.12 definition
* applied Nitin's comments
* CLOUDSTACK-10278 - WIP: need to test this script before creating a pull request
* CLOUDSTACK-10278 - added more idempotent stored procs and moved all lines that end with a semicolon in an existing proc onto one line, because com/cloud/utils/db/ScriptRunner.java executes the SQL as soon as it reads a line with a semicolon delimiter at the end.
* CLOUDSTACK-10278 - changed more SQL statements to call idempotent stored procs
Since CloudStack 4.10 Basic Networking supports IPv6 and thus
should be allowed to be specified when creating a network.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
CLOUDSTACK-10341: VR minor fixes to systemvmtemplate (#2468)
CLOUDSTACK-10340: Add setter to hypervisorType in VMInstanceVO (#2504)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The new CA framework introduced basic support for a comma-separated
list of management servers for the agent, which makes an external LB
unnecessary.
This extends that feature to implement LB sorting algorithms that
sort the management server list before it is sent to the agents.
This adds central intelligence in the management server and adds
additional enhancements to the Agent class to be algorithm aware and
have a background mechanism to check/fall back to the preferred management
server (assumed to be the first in the list). This supports any
indirect agent such as the KVM, CPVM and SSVM agents, and would
provide support for management server host migration during upgrade
(when, instead of in-place upgrades, new hosts are used to set up new management servers).
This FR introduces two new global settings:
- `indirect.agent.lb.algorithm`: The algorithm for the indirect agent LB.
- `indirect.agent.lb.check.interval`: The preferred host check interval
for the agent's background task that checks and switches to agent's
preferred host.
The indirect.agent.lb.algorithm setting supports the following algorithm options:
- static: use the list as provided.
- roundrobin: evenly spreads hosts across management servers based on
the host's id.
- shuffle: (pseudo) randomly sorts the list (not recommended for production).
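As a rough illustration of how these orderings could be produced (the class and method below are hypothetical, not the actual plugin code):
```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class IndirectAgentLbSketch {

    // Returns the management server list in the order the agent should try them.
    static List<String> order(String algorithm, List<String> msList, long hostId) {
        List<String> copy = new ArrayList<>(msList);
        if (copy.isEmpty()) {
            return copy;
        }
        switch (algorithm) {
            case "static":
                return copy;                      // use the list as provided
            case "roundrobin": {
                // rotate the list based on the host id so hosts spread evenly across servers
                int offset = (int) (hostId % copy.size());
                Collections.rotate(copy, -offset);
                return copy;
            }
            case "shuffle":
                Collections.shuffle(copy);        // pseudo-random order, not for production
                return copy;
            default:
                throw new IllegalArgumentException("Unknown algorithm: " + algorithm);
        }
    }

    public static void main(String[] args) {
        System.out.println(order("roundrobin", List.of("10.0.0.1", "10.0.0.2", "10.0.0.3"), 5L));
    }
}
```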
Any changes to the global settings `indirect.agent.lb.algorithm` and
`host` do not require restarting the management server(s) or the
agents. A message bus based system dynamically reacts to changes in these
global settings and propagates them to all connected agents.
The comma-separated management server list is propagated to agents in the
following cases:
- Addition of a host (including SSVM and CPVM system VMs).
- Connection or reconnection by the agents to a management server.
- After the admin changes the 'host' and/or the
'indirect.agent.lb.algorithm' global settings.
On the agent side, the 'host' setting is saved in its properties file as:
`host=<comma separated addresses>@<algorithm name>`.
First, the agent connects to the management server and sends its current
management server list, which is compared by the management server; in
case of a mismatch a new/updated list is sent for the agent to persist.
From the agent's perspective, the first address in the propagated list
will be considered the preferred host. A new background task can be
activated by configuring the `indirect.agent.lb.check.interval` which is
a cluster level global setting from CloudStack and admins can also
override this by configuring the 'host.lb.check.interval' in the
`agent.properties` file.
Every time the agent gets a management server host list and the algorithm,
the host-specific background check interval is also sent, and the agent
dynamically reconfigures the background task without needing a restart.
Note: The 'static' and 'roundrobin' algorithms strictly check for the
order as expected by them; however, the 'shuffle' algorithm just checks
for content and not the order of the comma-separated management server host addresses.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- new flag `-T, --use-timestamp` to use `timestamp` when POM version contains SNAPSHOT
- in the final artifacts (jar) name
- in the final package (rpm, deb) name
- in `/etc/cloudstack-release` file of SystemVMs
- in the Management Server > About dialog
- if there's a "branding" string in the POM version (e.g. `x.y.z.a-NAME[-SNAPSHOT]`),
the branding name will be used in the final generated pacakge name such as following:
- `cloudstack-management-x.y.z.a-NAME.NUMBER.el7.centos.x86_64`
- `cloudstack-management_x.y.z.a-NAME-NUMBER~xenial_all.deb`
- the branding string can be overridden with the newly added `-b, --brand` flag
- handle the new format version for VR version
- fix long opts (they were broken)
- tolerate and show a warning message for unrecognized flags
- usage help reformat
* Deprecate Version class in favor of CloudStackVersion
* CLOUDSTACK-8855 Improve Error Message for Host Alert State
* [CLOUDSTACK-9846] create column to save the content of alert messages
Remove declaration of throws CloudRuntimeException
I also removed some unused variables and comments left behind
This closes #837
* Isolate a problematic test "smoke/test_certauthority_root"
* Refactored nuage tests
Added simulator support for ConfigDrive
Allow all nuage tests to run against simulator
Refactored nuage tests to remove code duplication
* Move test data from test_data.py to nuage_test_data.py
Nuage test data is now contained in nuage_test_data.py instead of
test_data.py
Removed all nuage test data from test_data.py
* CLOUD-1252 fixed cleanup of vpc tier network
* Import libVSD into the codebase
* CLOUDSTACK-1253: Volumes are not expunged in simulator
* Fixed some merge issues in test_nuage_vsp_mngd_subnets test
* Implement GetVolumeStatsCommand in Simulator
* Add vspk as marvin nuagevsp dependency, after removing libVSD dependency
* correct libVSD files for license purposes
pep8 pyflakes compliant
* [CLOUDSTACK-10314] Add Text-Field to each ACL Rule
It is useful to have a text field (e.g. CHAR-256) added to each ACL rule, which allows entering a "reason" for each FW rule created. This is valuable for customer documentation, as well as a best practice for providing evidence when auditing the system
* Formatting to make check style happy and code clean ups
* [CLOUDSTACK-10240] ACS cannot migrate a volume from local to shared storage.
CloudStack is logically restricting the migration of volumes from local storage to shared storage and vice versa. This restriction is a logical one and can be removed for XenServer deployments. Therefore, we will enable migration of volumes between local and shared storage in XenServer independently of their service offering. This will work as an override mechanism to the disk offering used by volumes. If administrators want to migrate local volumes to a shared storage, they should be able to do so (the hypervisor already allows that). The same applies the other way around.
* Cleanups implemented while working on [CLOUDSTACK-10240]
* Fix test case test_03_migrate_options_storage_tags
The changes applied were:
- When loading hypervisors capabilities we must use "default" instead of nulls
- "Enable" storage migration for simulator hypervisor
- Remove restriction on "ClusterScopeStoragePoolAllocator" to find shared pools
4.10.0.0 users, when upgrading to 4.11.0.0, may face DB-related
discrepancies due to some PRs that were merged without moving their SQL
changes to the 4.10->4.11 upgrade path. The 4.10.0.0 users can run those
missing SQL statements manually and then upgrade to 4.11.0.0; since a
workaround like this is possible, this ticket is not marked a blocker. In
4.11.1.0+, we'll move those changes from the 4.9.3.0->4.10.0.0 upgrade path
to the 4.10.0.0->4.11.0.0 upgrade path. Ideally we should not be doing this,
but it will fix issues for a future 4.10.0.0 user who may want to
upgrade to 4.11.1.0 or 4.12.0.0+.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
ACS is accounting the resources properly when deploying VMs with custom service offerings. However, there are other methods (such as updateResourceCount) that do not execute the resource accounting properly, and these methods update the resource count for an account in the database. Therefore, if a user deploys VMs with custom service offerings, and later this user calls the “updateResourceCount” method, it (the method) will only account for VMs with normal service offerings, and update this as the number of resources used by the account. This results in fewer resources being accounted for the given account than are actually in use. The problem becomes worse because if the user starts to delete these VMs, it is possible to reach negative values of resources allocated (breaking all of the resource limiting for accounts). This is a very serious attack vector for public cloud providers!
Automate dynamic roles migration for missing props file
- In case commands.properties file is missing, enables dynamic roles.
- Adds a new -D or --default flag to migrate-dynamicroles.py script
to simply update the global setting and use the default role-rule
permissions.
- Add warning message, ask admins to move to dynamic roles during upgrade
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes a move-refactoring error introduced in #2283.
For instance, the class DatadiskTO is supposed to be in the com.cloud.agent.api.to package. However, the folder structure it was placed in is com.cloud.agent.api.api.to.
Skip tests for cloud-plugin-hypervisor-ovm3:
For some unknown reason, there are quite a lot of broken test cases for cloud-plugin-hypervisor-ovm3. They might have appeared after some dependency upgrade and were overlooked by the person updating them. I checked them to see if they could be fixed, but these tests are not developed in a clear and clean manner. On top of that, we do not see (at least I do not) people using the OVM3 hypervisor with ACS. Therefore, I decided to skip them.
Indentation corrected to use spaces instead of tabs in XML files
Remove the Maven standard module (which only a few were using) and get rid of Maven customization for the project structure.
- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
This fixes regression failures seen in Trillian, fixes NPEs that cause Travis related failures.
This also removes the aria2 dependency from rpms that require users to enable/install epel-release.
This finally updates the checksums for 4.11 systemvmtemplates in db upgrade path.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
IPv4 and IPv6 are two different protocols and the presence of IPv6
in a network does not mean that IPv4 aliases/multiple subnets should
not be configured or supported by the VR.
This if-statement was written almost 5 years ago in an attempt to
add IPv6 support to CloudStack but was never fully implemented.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Extending Config Drive support
* Added support for VMware
* Build configdrive.iso on ssvm
* Added support for VPC and Isolated Networks
* Moved implementation to new Service Provider
* UI fix: add support for urlencoded userdata
* Add support for building systemvm behind a proxy
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
During storage expunge, the primary storage space resource counter is not updated for the domain. This leads to a situation where the domain resource statistics for primary storage are overfilled (statistics only increase but never decrease).
The global scheduled task resourcecount.check.interval > 0 provides a workaround but does not truly fix the problem, because when accounts inside domains use primary storage allocation/deallocation intensively it can lead to blocked service operations.
NB: Unable to implement Marvin tests because Marvin places an odd primary storage volume size of 100 in the database when creating a VM from a template. That might be a sign that a new issue should be opened for that bug.
CloudStack volumes and templates are one single virtual disk in the case of the XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes, attaching an uploaded volume that contains multiple disks to a VM results in only one VMDK being attached to the VM.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks
This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created based on the other VMDK disks in the OVA file and attached to the instance.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows using templates and ISOs while avoiding secondary storage as an intermediate cache on KVM. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.
Template and ISO registration:
- When hypervisor is KVM, a checkbox is displayed with 'Direct Download' label.
- API methods registerTemplate and registerISO are both extended with this new parameter directdownload.
- On template or ISO registration, no download job is sent to the SSVM agent; CloudStack only persists an entry in template_store_ref indicating that the template or ISO has been marked as 'Direct Download' (bypassing Secondary Storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check if the template/ISO location can be reached. Metalinks are also supported by this feature. In the case of a metalink, it is fetched and a URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH
- After template or ISO is registered, it would be displayed in the UI
Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack delegates template downloading to the destination storage pool via the destination host, using a new pluggable download manager.
The download manager handles template downloading depending on the URL protocol. In the case of HTTP, request headers can be set by the user via vm_template_details. Those details should be persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE
In the case of HTTPS, a new API method, uploadTemplateDirectDownloadCertificate, is added to allow the user to import a client certificate into all KVM hosts' keystores before deployment.
After the template or ISO is downloaded to primary storage, the usual entry is persisted in template_spool_ref indicating the mapping between the template/ISO and the storage pool.
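As an illustration of the idea of a pluggable, protocol-aware downloader (the interface and names below are hypothetical, not the actual agent-side classes):
```
import java.net.URI;
import java.util.Map;

public class DirectDownloadDispatchSketch {

    interface Downloader {
        void download(URI url, String destinationPath, Map<String, String> headers);
    }

    // Illustrative selection by protocol; the real download managers differ.
    static Downloader select(URI url) {
        String scheme = url.getScheme().toLowerCase();
        switch (scheme) {
            case "http":
            case "https":
                return (u, dest, headers) -> {
                    // headers would come from vm_template_details entries with key HTTP_HEADER
                    // and value HEADERNAME:HEADERVALUE
                    System.out.println("HTTP(S) download of " + u + " to " + dest + " with headers " + headers);
                };
            case "nfs":
                return (u, dest, headers) -> System.out.println("NFS copy of " + u + " to " + dest);
            default:
                throw new IllegalArgumentException("Unsupported protocol: " + scheme);
        }
    }
}
```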
Steps to reproduce issue
Deploy a VM
Take snapshot of the root volume
Delete the snapshot
Before the garbage collector has run, shut down the VM and assign the VM to another user.
When the garbage collector executes, an NPE shows up in the logs.
This feature allows admins to dedicate a range of public IP addresses to the SSVM and CPVM, such that they can be subject to specific external firewall rules. The option to dedicate a public IP range to the System VMs (SSVM & CPVM) is added to the createVlanIpRange API method and the UI.
Solution:
Global setting 'system.vm.public.ip.reservation.mode.strictness' is added to determine if the use of the system VM reservation is strict (when true) or preferred (false), false by default.
When a range has been dedicated to System VMs, CloudStack should apply IPs from that range to
the public interfaces of the CPVM and the SSVM depending on global setting's value:
If the global setting is set to false: then CloudStack will use any unused and unreserved public IP
addresses for system VMs only when the pool of reserved IPs has been exhausted
If the global setting is set to true: then CloudStack will fail to deploy the system VM when the pool
of reserved IPs has been exhausted, citing the lack of available IPs.
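A minimal sketch of the strict vs. preferred allocation decision described above (class and method names here are illustrative, not the actual CloudStack code):
```
import java.util.List;
import java.util.Optional;

public class SystemVmIpReservationSketch {

    // Picks a public IP for a system VM under the strict/preferred reservation modes.
    static String allocate(boolean strictMode, List<String> reservedFreeIps, List<String> otherFreeIps) {
        Optional<String> reserved = reservedFreeIps.stream().findFirst();
        if (reserved.isPresent()) {
            return reserved.get();               // always prefer the dedicated range
        }
        if (strictMode) {
            // strict: fail when the reserved pool is exhausted
            throw new IllegalStateException("No IP available in the range dedicated to system VMs");
        }
        // preferred: fall back to any unused, unreserved public IP
        return otherFreeIps.stream().findFirst()
                .orElseThrow(() -> new IllegalStateException("No public IP available"));
    }
}
```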
UI Changes
Under Infrastructure -> Zone -> Physical Network -> Public -> IP Ranges, the 'Account' button label is renamed to 'Set reservation'.
When that button is clicked, the dialog displayed is also reworked, including a new checkbox 'System VMs' which indicates whether the range should be dedicated to the CPVM and SSVM, and a note indicating its usage.
When clicking the button for any created range, the UI dialog indicates whether the IP range is dedicated to system VMs or not.
The first PR (#1176) intended to solve #CLOUDSTACK-9025 was only tackling the problem for CloudStack deployments that use a single hypervisor type (restricted to XenServer). Additionally, the lack of information regarding that solution (poor documentation, test cases and description in PRs and the Jira ticket) led the code to be removed in #1124 after a long discussion and analysis in #1056. That piece of code seemed logicless (and it was!). It would receive a hostId and then change that hostId for another hostId of the zone without doing any checks; it was not even checking the hypervisor or the storage the host was plugged into.
The problem reported in #CLOUDSTACK-9025 is caused by partial snapshots that are taken in XenServer. This means we do not take a complete snapshot, but a partial one that contains only the modified data. This requires rebuilding the VHD hierarchy when creating a template out of the snapshot. The point is that the first hostId received is not a hostId, but a system VM ID (SSVM). That is why the code in #1176 fixed the problem for some deployment scenarios, but would cause problems for scenarios where we have multiple hypervisors in the same zone. We need to execute the creation of the VHD that represents the template on the hypervisor, so the VHD chain can be built using the parent links.
This commit changes the method com.cloud.hypervisor.XenServerGuru.getCommandHostDelegation(long, Command). From now on we replace the hostId that is intended to execute the “copy command” that will create the VHD of the template according to some conditions that were already in place. The idea is that starting with XenServer 6.2.0 hotFix ESP1004 we need to execute the command on the hypervisor host and not from the SSVM. Moreover, the method was improved to make it readable and understandable; test cases were also created assuring that from XenServer 6.2.0 hotFix ESP1004 and upward versions we change the hostId that will be used to execute the “copy command”.
Furthermore, we are not selecting a random host from a zone anymore. A new method called “findHostConnectedToSnapshotStoragePoolToExecuteCommand” was introduced in the HostDao; using this method we look for a host in the cluster that uses the storage pool holding the volume from which the snapshot was taken. By doing this, we guarantee that the host connected to the primary storage where all of the snapshot's parent VHDs are stored is used to create the template.
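A condensed sketch of the delegation decision described above; the types below are simplified placeholders, and only the HostDao method name comes from the text:
```
public class CommandHostDelegationSketch {

    interface HostDao {
        Long findHostConnectedToSnapshotStoragePoolToExecuteCommand(long snapshotId);
    }

    static long getCommandHostDelegation(long originalHostId, boolean isCopyCommandForSnapshotToTemplate,
                                          boolean hypervisorNeedsVhdRebuild, HostDao hostDao, long snapshotId) {
        if (!isCopyCommandForSnapshotToTemplate || !hypervisorNeedsVhdRebuild) {
            return originalHostId;   // keep the original target (e.g. the SSVM)
        }
        // Rebuilding the VHD chain must happen on a host that can see the snapshot's primary
        // storage, so pick a host from the cluster attached to that storage pool instead of
        // a random host of the zone.
        Long delegate = hostDao.findHostConnectedToSnapshotStoragePoolToExecuteCommand(snapshotId);
        return delegate != null ? delegate : originalHostId;
    }
}
```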
Consider using Disabled hosts when no Enabled hosts are found
This also closes #2317
This extends the work presented in #2048, which provided the ability to extend the management range.
Aim
This PR allows separating the management network subnet on which the SSVM and CPVM reside from the virtual routers' management subnet.
Detailed use case
PCI compliance requires that network elements are defined as ‘in scope’ or ‘out of scope’, for compliance purposes. The SSVM and CPVM are both in scope as they allow public HTTP or HTTPS connections. The virtual routers have been defined as out of scope as they have been placed entirely in a firewalled network's segment. However, all of the system VM types share management network. As SSVM and CPVM are both in scope this would bring the virtual routers into scope as well, requiring individual audits of every virtual router. As this is not practical, the ‘management network’ which the SSVM and CPVM are on, and the management network which the virtual routers are on, must be separated by a firewall.
Description
With this feature it is possible to dedicate a created range to the SSVM and CPVM (system VMs) and provide a VLAN ID for the range.
A new boolean global configuration is added: system.vm.management.ip.reservation.mode.strictness. If enabled, the use of System VMs management IP reservation is strict, preferred if not. Default value is false (preferred).
Strict reservation: System VMs should try to get a private IP from a range marked for system vms. If not available, deployment fails
Preferred reservation: System VMs will try to get a private IP from a range marked for system VMs. If not available, an IP from a range not marked for system VMs is taken.
In CLOUDSTACK-9886, we were reading ping.interval and ping.timeout using ConfigDao, which involves reading the DB directly. So, this is replaced with a ConfigKey-based approach.
Snapshot on primary storage not cleaned up after Storage migration. This happens in the following scenario:
Steps To Reproduce
Create an instance on the local storage on any host
Create a scheduled snapshot of the volume:
Wait until ACS has created the snapshot. ACS creates a snapshot on local storage and transfers it to secondary storage, but the latest snapshot on local storage will stay there. This is as expected.
Migrate the instance to another XenServer host with ACS UI and Storage Live Migration
The snapshot on the old host's local storage will not be cleaned up and stays on local storage, so local storage will fill up with unneeded snapshots.
Consider this scenario:
1. User launches a VM from Template and keep it running
2. Admin logs in and deletes that template [CloudPlatform does not check existing / running VMs etc. while the deletion is done]
3. User resets the VM
4. CloudPlatform fails to start the VM as it cannot find the corresponding template.
It throws an error such as:
java.lang.RuntimeException: Job failed due to exception Resource [Host:11] is unreachable: Host 11: Unable to start instance due to can't find ready template: 209 for data center 1
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:113)
at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:495)
The client is requesting better handling of this scenario. We need to check existing / running VMs when the template is deleted and warn the admin about the possible issue that may occur.
REPRO STEPS
==================
1. Launch a VM from a template and keep it running
2. Now delete that template
3. Reset the VM
4. CloudPlatform fails to start the VM as it cannot find the corresponding template.
EXPECTED BEHAVIOR
==================
CloudPlatform should throw some warning message when the template is deleted if that template is being used by existing / running VMs
ACTUAL BEHAVIOR
==================
CloudPlatform does not throw any warning.
Ran into an issue today where we passed both the "id" and "domainid" parameters into "listAccounts" and received a response despite the account id passed not belonging to the domainid passed.
Allow usage of "domainid" AND "id" in "listAccounts"
- Adding "AccountDoa::findActiveAccountById"
- Adding "AccountDaoImpl::findActiveAccountById"
- Removing seemingly pointless "listForDomain" parameter
- Updating "typeNEQ" value from "5" to "Account.ACCOUNT_TYPE_PROJECT"
(which is "5")
- Only attempt to load domain for "path" query parameter once
"searchForAccountsInternal" input validation logic pseudo-code:
- If "domainid" set, check immediately
- If "id" not set:
- and user is admin and "listall" is true
- if "domainid" not set, use caller domain id
- force "isrecursive" true
- else use caller account id
- Else if "domainid" and "name" set
- verify existence of account and that user has access
- Else:
- if "domainid" not set, locate account by "id"
- else, locate account by "id" and "domainid"
- verify account found and caller has access rights
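For illustration, the pseudo-code above could be rendered roughly as follows in Java; the DAO lookups and access checks are stubbed placeholders, not the real implementation:
```
public class ListAccountsValidationSketch {
    // Simplified placeholders; the real objects carry much more state.
    record Caller(long accountId, long domainId) {}
    record Account(long id, long domainId) {}

    static void validate(Long id, Long domainId, String name, boolean callerIsAdmin,
                         boolean listAll, Caller caller) {
        if (domainId != null) {
            checkDomainAccess(caller, domainId);            // check "domainid" immediately
        }
        if (id == null) {
            if (callerIsAdmin && listAll) {
                long effectiveDomain = (domainId != null) ? domainId : caller.domainId();
                listRecursively(effectiveDomain);           // force "isrecursive" behaviour
            } else {
                listByAccount(caller.accountId());          // list the caller's own account
            }
        } else if (domainId != null && name != null) {
            requireAccessible(findByNameAndDomain(name, domainId), caller);
        } else {
            Account account = (domainId == null) ? findById(id) : findByIdAndDomain(id, domainId);
            requireAccessible(account, caller);
        }
    }

    // Stubs standing in for DAO lookups and ACL checks.
    static void checkDomainAccess(Caller c, long domainId) {}
    static void listRecursively(long domainId) {}
    static void listByAccount(long accountId) {}
    static Account findById(long id) { return new Account(id, 1L); }
    static Account findByIdAndDomain(long id, long domainId) { return new Account(id, domainId); }
    static Account findByNameAndDomain(String name, long domainId) { return new Account(0L, domainId); }
    static void requireAccessible(Account a, Caller c) {
        if (a == null) throw new IllegalArgumentException("Account not found or not accessible");
    }
}
```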
Python's base64 requires that the string length is a multiple of 4 characters but
the Apache codec does not. The RFC states this is not mandatory, so the data should
not fail the VR script (vmdata.py).
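As a rough illustration of the padding concern (written in Java to match the other examples here, not the vmdata.py fix itself): padding the string up to a multiple of 4 characters keeps strict decoders, such as Python's base64 module, happy, while lenient codecs accept it either way.
```
import java.util.Base64;

public class Base64PaddingSketch {
    // Pads a base64 string to a multiple of 4 characters before decoding.
    // Python's base64 module requires the padded form; Apache commons-codec does not.
    static byte[] decodePadded(String data) {
        int remainder = data.length() % 4;
        String padded = remainder == 0 ? data : data + "=".repeat(4 - remainder);
        return Base64.getDecoder().decode(padded);
    }

    public static void main(String[] args) {
        // "ab" encodes to "YWI=" with padding; "YWI" is the unpadded 3-character form.
        System.out.println(new String(decodePadded("YWI")));   // prints: ab
    }
}
```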
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This includes test related fixes and code review fixes based on
reviews from @rafaelweingartner, @marcaurele, @wido and @DaanHoogland.
This also includes VMware disk-resize limitation bug fix based on comments
from @sateesh-chodapuneedi and @priyankparihar.
This also includes the final changes to systemvmtemplate and fixes to
code based on issues found via test failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This moves the systemvmtemplate migration logic from the previous upgrade path
to 4.10.0.0->4.11.0.0 upgrade path.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In default/fresh installations, the guest OS type for systemvms with id=15,
Debian 5 (32-bit), can cause memory allocation issues for the guest. Using
Other Linux 64-bit as the guest OS, systemvms get all the allocated RAM. This
avoids OOM-related kernel panics for certain VRs such as rVRs, lbvms etc.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes test failures around VMware with the new systemvmtemplate.
In addition:
- Does not skip rVR related test cases for VMware
- Removes rc.local
- Processes unprocessed cmd_line.json
- Fixed NPEs around VMware tests/code
- On VMware, use udevadm to reconfigure nic/mac address rather than rebooting
- Fix proper acpi shutdown script for faster systemvm shutdowns
- Give at least 256MB of swap for VRs to avoid OOM on VMware
- Fixes smoke tests for environment related failures
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactors and simplifies systemvm codebase file structures keeping
the same resultant systemvm.iso packaging
- Password server systemd script and new postinit script that runs
before sshd starts
- Fixes to keepalived and conntrackd config to make rVRs work again
- New /etc/issue featuring ascii based cloudmonkey logo/message and
systemvmtemplate version
- SystemVM python codebase linted and tested. Added pylint/pep to
Travis.
- iptables re-application fixes for non-VR systemvms.
- SystemVM template build fixes.
- Default secondary storage vm service offering boosted to have 2vCPUs
and RAM equal to console proxy.
- Fixes to several marvin based smoke tests, especially rVR related
tests. rVR tests to consider 3*advert_int+skew timeout before status
is checked.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Fixes timezone issue where dates show up as invalid in UI
- Introduces new event timeline listing/filtering of events
- Several UI improvements to add columns in list views
- Bulk operations support in instance list view to shutdown and destroy
multiple-selected VMs (limitation: after operation, redundant entries
may show up in the list view, refreshing VM list view fixes that)
- Align table thead/tbody to avoid splitting of tables
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactor cloud-early-config and make appliance specific scripts
- Make patching work without requiring restart of appliance and remove
postinit script
- Migrate to systemd, speedup booting/loading
- Takes about 5-15 seconds to boot on KVM, and 10-30 seconds for VMware and XenServer
- Appliance boots and works on KVM, VMware, XenServer and HyperV
- Update Debian9 ISO url with sha512 checksum
- Speedup console proxy service launch
- Enable additional kernel modules
- Remove unknown ssh key
- Update vhd-util URL as previous URL was down
- Enable sshd by default
- Use hostnamectl to add hostname
- Disable services by default
- Use existing log4j xml, patching not necessary by cloud-early-config
- Several minor fixes and file refactorings, removed dead code/files
- Removes inserv
- Fix dnsmasq config syntax
- Fix haproxy config syntax
- Fix smoke tests and improve performance
- Fix apache pid file path in cloud.monitoring per the new template
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows CloudStack administrators to create layer 2 networks in CloudStack. As these networks are purely layer 2, they don't require IP addresses or a Virtual Router; only a VLAN is necessary (provided by the administrator or assigned by CloudStack). Also, network services such as DNS and DHCP should be handled externally, as they are not provided by L2 networks.
As a consequence, a new Guest Network type is created within CloudStack: L2
Description:
Network offerings and networks support new guest type: L2.
L2 network offering creation allows the administrator to select Specify VLAN or let CloudStack assign it dynamically.
L2 network creation allows the administrator to specify a VLAN tag (if the network offering allows it) or simply create the network.
VM deployments on L2 networks:
VMs do not get IP addresses or any network services
No Virtual Router is deployed on the network
If Specify VLAN = true for the network offering, the network gets implemented using a dynamically assigned VLAN
UI changes
A new button is added on the Networks tab, available for admins, to allow L2 network creation
At present, the management IP range can only be expanded within the same subnet. Based on the existing range, either the last IP can be extended forward or the first IP can be extended backward, but we cannot add an entirely different range from the same subnet. So the expansion of the range is bound to the subnet, which is fixed. When the range gets exhausted and a user wants to deploy more system VMs, the operation will fail. The purpose of this feature is to expand the range of management network IPs within the existing subnet. It can also delete and list the IP ranges.
Please refer to the FS here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Expansion+of+Management+IP+Range
MySQLTransactionRollbackException is seen frequently in logs
Root Cause
Attempts to lock rows in the core data access layer of the database fail if there is a possibility of a deadlock. However, operations are not retried in case of a deadlock, so retries are introduced here.
Solution
Operations are retried after some wait time in case of a deadlock exception.
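A minimal sketch of such a retry wrapper (the retry count, wait time and exception handling in the actual data access layer may differ):
```
import java.util.concurrent.Callable;

public class DeadlockRetrySketch {
    // Retries an operation a few times with a short back-off when a deadlock-style
    // exception is raised; the exception type check is simplified for illustration.
    static <T> T runWithDeadlockRetry(Callable<T> operation, int maxAttempts, long waitMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                if (!isDeadlock(e) || attempt == maxAttempts) {
                    throw e;
                }
                last = e;
                Thread.sleep(waitMillis);        // wait before retrying the locked operation
            }
        }
        throw last;
    }

    static boolean isDeadlock(Exception e) {
        // MySQLTransactionRollbackException is a subclass of this JDBC exception type
        return e instanceof java.sql.SQLTransactionRollbackException;
    }
}
```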
When a template is copied back to a zone after it is deleted, the deleted field gets reset to null. The deleted field is added to the search on the template zone mapping table to take care of the existing mapping.
ISSUE :
VM start API Job returns success for failed Job
STEPS TO REPRODUCE :
Stop VM instance.
Mark the state of the associated router as Stopping in the DB
Execute startVirtualMachine Api from rest client.
The VM will not start, but the main job will still return the status as SUCCEEDED, whereas the sub-job for the VmWorkStart command will return the status as failed.
Related to 86bbe211f2 and CLOUDSTACK-494. Currently we cannot have several VPCs in one account with different VPN customer gateway configurations for the same gateway IP.
This commit adds support for passing IPv6 Addresses and/or Subnets as
Secondary IPs.
This is groundwork for CLOUDSTACK-9853, where IPv6 Subnets have to be
allowed in the Security Groups of Instances so we can add DHCPv6
Prefix Delegation.
Use ; instead of : for separating addresses, otherwise it would cause
problems with IPv6 Addresses.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Depending on the timezone in which you're running CS (timezones behind GMT), you could see some jobs marked as failed because the parent job got a null result despite its child job having successfully done the job. The child job got deleted by the CleanupTask ahead of time, due to a missing datetime conversion to the GMT timezone.
Jobs are failing with this message: Job failed with un-handled exception
The fix intends to correct any datetime used in the code that should be using the GMT timezone instead of the local one, since all DB datetimes should be stored in GMT.
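For example, a date can be formatted in GMT before it is used in DB comparisons (an illustrative snippet, not the exact code touched by the fix):
```
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class GmtDateSketch {
    // Formats a timestamp in GMT so that comparisons against DB values (stored in GMT)
    // are not shifted by the local timezone offset.
    static String toGmtString(Date date) {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        format.setTimeZone(TimeZone.getTimeZone("GMT"));
        return format.format(date);
    }

    public static void main(String[] args) {
        System.out.println(toGmtString(new Date()));
    }
}
```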
This fixes the following:
- Unchecked thread growth in RemoteEndHostEndPoint
- Potential NPE while finding EP for a storage/scope
Unbounded thread growth can be reproduced with following findings:
- Every unreachable template would produce 6 new threads (in a single
ScheduledExecutorService instance) spaced by 10 seconds
- Every reachable template url without the template would produce 1 new
thread (and one ScheduledExecutorService instance), it errors out quickly without
causing more thread growth.
- Every valid url will produce up to 10 threads as the same ep (endpoint
instance) will be reused to query upload/download (async callback)
progresses.
Every RemoteHostEndPoint instance creates its own
ScheduledExecutorService instance, which is why in the jstack dump we
see several threads that share the prefix RemoteHostEndPoint-{1..10}
(given the pool size is defined as 10, it uses suffixes 1-10).
This fixes the discovered thread leakage with following notes:
- Instead of a per-instance ScheduledExecutorService, a cached pool was
implemented, with `static` scope so it is reused
among other future RemoteHostEndPoint instances.
- It was not clear why we would want to wait when we have Answers returned
from the remote EP, and therefore a scheduled/delayed Runnable was
not required at all for processing answers. ScheduledExecutorService
was therefore not really required; moved to ExecutorService instead.
- Another benefit of using a cached pool is that it will shutdown
threads if they are not used in 60 seconds, and they get re-used for
future runnable submissions.
- Caveat: the executor service is still unbounded; however, the use-case
it serves (short jobs that check upload/download
progress) fits the case here.
- Refactored CmdRunner to not use/reference objects from parent class.
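The shape of the change could be sketched as follows (illustrative only; the real class carries much more logic):
```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EndpointExecutorSketch {
    // A single, static cached pool shared by every endpoint instance: idle threads are
    // reclaimed after 60 seconds and re-used for later answer-processing submissions,
    // instead of each instance creating its own ScheduledExecutorService.
    private static final ExecutorService EXECUTOR = Executors.newCachedThreadPool();

    void processAnswers(Runnable answerHandler) {
        // Answers are processed immediately; no scheduled delay is needed.
        EXECUTOR.submit(answerHandler);
    }
}
```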
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Tags field to be included in the listusagerecords response such that it can be used in billing report. E.g.
"tags":[
{"key":"city","value":"Toronto","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa0000031","domain":"ROOT"}
,
{"key":"region","value":"canada","resourcetype":"UserVm","resourceid":"a0cca906-f985-4b56-ad11-f33e59c4c733","account":"admin","domainid":"dec39eb8-4f81-11e7-8315-067fa0000031","domain":"ROOT"}
- Migrate to embedded Jetty server.
- Improve ServerDaemon implementation.
- Introduce a new server.properties file for easier configuration.
- Have a single /etc/default/cloudstack-management to configure env.
- Reduce shaded jar file, removing unnecessary dependencies.
- Upgrade to Spring 5.x, upgrade several jar dependencies.
- Does not shade and include mysql-connector, used from classpath instead.
- Upgrade and use bouncycastle as a separate un-shaded jar dependency.
- Remove tomcat related configuration and files.
- Have both embedded UI assets in uber jar and separate webapp directory.
- Refactor systemd and init scripts, cleanup packaging.
- Made cloudstack-setup-databases faster, using `urandom`.
- Remove unmaintained distro packagings.
- Moves creation and usage of the server keystore into the CA manager; this
deprecates the need to create/store cloud.jks in the conf folder and
the db.cloud.keyStorePassphrase in the db.properties file. This also
removes the need for the --keystore-passphrase in the
cloudstack-setup-encryption script.
- GZip contents dynamically in embedded Jetty
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* VSP ID Caching
* VSP call Statistics
* 5.0 Support
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Allow security policies to apply on port groups:
- Accepts security policies while creating network offering
- Deployed network will have security policies from the network offering
applied on the port group (in vmware environment)
- Global settings as fallback when security policies are not defined for a network
offering
- Default promiscuous mode security policy set to REJECT as it's the default
for standard/default vswitch
Portgroup vlan-trunking options for dvswitch: This allows admins to define
a network with comma-separated vlan ids and vlan
ranges such as vlan://200-400,21,30-50 and use the provided vlan range to
configure vlan-trunking for a portgroup in a dvswitch based environment.
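As an illustration, a vlan:// URI of that form could be expanded into individual VLAN ids roughly like this (a hypothetical helper, not the actual CloudStack parser):
```
import java.util.ArrayList;
import java.util.List;

public class VlanRangeParseSketch {
    // Expands a URI such as "vlan://200-400,21,30-50" into the individual VLAN ids.
    static List<Integer> expand(String uri) {
        String spec = uri.replaceFirst("^vlan://", "");
        List<Integer> vlans = new ArrayList<>();
        for (String part : spec.split(",")) {
            if (part.contains("-")) {
                String[] bounds = part.split("-");
                int start = Integer.parseInt(bounds[0]);
                int end = Integer.parseInt(bounds[1]);
                for (int v = start; v <= end; v++) {
                    vlans.add(v);
                }
            } else {
                vlans.add(Integer.parseInt(part));
            }
        }
        return vlans;
    }

    public static void main(String[] args) {
        System.out.println(expand("vlan://200-400,21,30-50").size());   // 223 ids
    }
}
```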
VLAN overlap checks are performed for:
- isolated network against existing shared and isolated networks
- dedicated vlan ranges for the physical/public network for the zone
- shared network against existing isolated network
Allow shared networks to bypass vlan overlap checks: This allows admins
to create shared networks with a `bypassvlanoverlapcheck` API flag
which when set to 'true' will create a shared network without
performing vlan overlap checks against isolated network and against
the vlans allocated to the datacenter's physical network (vlan ranges).
Notes:
- No vlan-range overlap checks are performed when creating shared networks
- Multiple vlan id/ranges should include the vlan:// scheme prefix
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This implements a CloudStack Prometheus exporter as a plugin that serves
metrics on an HTTP port.
New global settings:
1. prometheus.exporter.enable - (default: false), Enable the prometheus
exporter plugin, management server restart needed.
2. prometheus.exporter.port - (default: 9595), The prometheus exporter
server port.
3. prometheus.exporter.allowed.ips - (default: 127.0.0.1), List of comma
separated prometheus server ips (with no spaces) that should be allowed to
access the URLs.
The following list of metrics are provided per pop (zone) with the exporter:
• Per host:
o CPU cores: used, total
o CPU usage: used, total (in MHz)
o Memory usage: used, total (in MiBs)
o Total VMs running on the host
• CPU cores: allocated (per zone)
• CPU usage: allocated (per zone, in MHz)
• Memory usage: allocated (per zone, in MiBs)
• Hosts: online, offline, total
• VMs: in all states -- starting, running, stopping, stopped, destroyed,
expunging, migrating, error, unknown
• Volumes: ready, destroyed, total
• Primary Storage Pool: (Disk size) used, allocated, unallocated, total (in GiBs)
• Secondary Storage Pool: (Disk size) used, allocated, unallocated, total (in GiBs)
• Private IPs: allocated, total
• Public IPs: allocated, total
• Shared Network IPs: allocated, total
• VLANs: allocated, total
Additional metrics for the environment:
• Summed domain (level=1) limit for CPU cores
• Summed domain (level=1) limit for memory/ram
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10046 digest helper for calculating checksums
* CLOUDSTACK-10046 cleanup unused checksum code
* CLOUDSTACK-10046 padding method proof of concept
* CLOUDSTACK-10046 only compare checksums if old value is valid
* Adding positive and negative tests for md5, sha-1 and sha-256, for xen, vmware and kvm hypervisors.
KVM Results:
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 189, in test_02_1_create_template_with_checksum_sha1_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{sha-1}bf580a13f791d86acf3449a7b457a91a14389264" didn\'t match the given value, "{sha-1}someInvalidValue"\n']
=== TestName: test_02_1_create_template_with_checksum_sha1_negative | Status : SUCCESS ===
=== TestName: test_02_create_template_with_checksum_sha1 | Status : SUCCESS ===.
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 203, in test_03_1_create_template_with_checksum_sha256_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{SHA-256}efc03633f2b8f5db08acbcc5dc1be9028572dfd8f1c6c8ea663f0ef94b458c5" didn\'t match the given value, "{SHA-256}someInvalidValue"\n']
=== TestName: test_03_1_create_template_with_checksum_sha256_negative | Status : SUCCESS ===
=== TestName: test_03_create_template_with_checksum_sha256 | Status : SUCCESS ===
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 217, in test_04_1_create_template_with_checksum_md5_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{md5}ada77653dcf1e59495a9e1ac670ad95f" didn\'t match the given value, "{md5}someInvalidValue"\n']
=== TestName: test_04_1_create_template_with_checksum_md5_negative | Status : SUCCESS ===
=== TestName: test_04_create_template_with_checksum_md5 | Status : SUCCESS ===
* Adding additional test with no checksum added when registering template
Result:
test_05_create_template_with_no_checksum (integration.smoke.test_templates.TestCreateTemplateWithChecksum) ... === TestName: test_05_create_template_with_no_checksum | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 42.320s
OK
* Fixing negative tests exception handling
* Adding tests for ISO checksum validation and fixing a zero prefix failure test in templates
* CLOUDSTACK-10046 padding
* CLOUDSTACK-10046 usability additions
* yet another IDE artifact hindering checkstyle
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
Bug: https://issues.apache.org/jira/browse/CLOUDSTACK-9832
Detail:
When the VPC offering does not contain VpcVirtualRouter as a SourceNat provider,
then we will not add the interface in the public network to the VpcVR.
CLOUDSTACK-9832: Move isSrcNat check to VpcManager
* CLOUDSTACK-9899 adding a global setting for not checking URLs from the MS
* CLOUDSTACK-9899 refactor HttpTemplateDownloader constructor cleanup
* CLOUDSTACK-9899 refactor HttpTemplateDownloader.download() cleanup
* CLOUDSTACK-9899 add the new config key to configurable
* CLOUDSTACK-9899 refactor download method
* CLOUDSTACK-9899 less verbose setting comment
* CLOUDSTACK-9899 debug message to indicate checking happened
* CLOUDSTACK-9899 typo flase -> false
This causes VM deployment failures on a host that was disabled while adding the storage repository.
In the attachCluster function of the PrimaryDataStoreLifeCycle, we were only selecting hosts that are up and in the Enabled state. If we select all hosts that are up, it will populate the DB properly and fix this issue. Also added a unit test for the attachCluster function.
Fix: Block VMOperations if Host in PrepareForMaintenance mode. VM operations (Stop, Reboot, Destroy, Migrate to host) are not allowed when Host in PrepareForMaintenance mode.
Added ability to specify mac in deployVirtualMachine and
addNicToVirtualMachine api endpoints.
Validates mac address to be in the form of:
aa:bb:cc:dd:ee:ff , aa-bb-cc-dd-ee-ff , or aa.bb.cc.dd.ee.ff.
Ensures that mac address is a Unicast mac.
Ensures that the mac address is not already allocated for the
specified network.
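A self-contained sketch of that validation (format in one of the three accepted notations plus the unicast check on the first octet); class and method names are illustrative, not the actual CloudStack code:
```
import java.util.regex.Pattern;

public class MacAddressCheck {
    // Accepts aa:bb:cc:dd:ee:ff, aa-bb-cc-dd-ee-ff or aa.bb.cc.dd.ee.ff;
    // the backreference \2 forces one consistent separator.
    private static final Pattern MAC = Pattern.compile(
        "^([0-9a-fA-F]{2})([:.-])([0-9a-fA-F]{2})(\\2[0-9a-fA-F]{2}){4}$");

    static boolean isValidUnicastMac(String mac) {
        if (mac == null || !MAC.matcher(mac).matches()) {
            return false;
        }
        // Unicast: the least-significant bit of the first octet must be 0.
        int firstOctet = Integer.parseInt(mac.substring(0, 2), 16);
        return (firstOctet & 0x01) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isValidUnicastMac("02:00:4c:4f:4f:50")); // true
        System.out.println(isValidUnicastMac("01-00-5e-00-00-01")); // false, multicast bit set
    }
}
```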
This can happen when you stop a VM in one cluster and start a VM in another cluster. When the VM starts in a new cluster, we don't add a new VAG and hence it fails to start. This PR ensures that we call grantAccess to the VM that gets started which will fix the access issue.
Snapshots are not deleted resulting unexpected storage consumption in case of VMware.
Steps to reproduce this issue :
In VMware setup, create a snapshot of volume say Snap1.
After successful creation of snapshot Snap1, create a new snapshot of the same volume, say Snap2.
While Snap2 is in BackingUp state, delete Snap1.
Snap1 will disappear from Web UI, but when we check secondary storage, files associated with Snap1 still persists even after cleanup job is performed.
In snapshot_store_ref table in DB, Snap1 will be in ready state instead of Destroyed.
Also, in snapshots table, status of Snap1 will be Destroyed but removed column will be null and will never change to the date of snapshot removal.
Fix for this issue :
In VMware, snapshot chain is not maintained, instead full snapshot is taken every time.
So, it makes sense not to assign parent snapshot id for the snapshot. In this way, every snapshot will be individual and can be deleted successfully whenever required.
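A minimal illustration of the fix, assuming a SnapshotVO-like object with a parent-snapshot setter and a hypothetical helper; names are not the actual code:
```
// Sketch: VMware takes full snapshots, so don't chain them to a parent;
// each snapshot then stands alone and can be deleted at any time.
if (hypervisorType != HypervisorType.VMware) {
    snapshot.setParentSnapshotId(latestSnapshotIdOnVolume(volumeId)); // hypothetical helper
}
// For VMware the parent snapshot id is simply left unset.
```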
Host-HA offers investigation, fencing and recovery mechanisms for hosts that are,
for any reason, malfunctioning. It uses Activity and Health checks to determine
the current host state, based on which it may degrade a host or try to recover it. On
failing to recover it, it may try to fence the host.
The core feature is implemented in a hypervisor-agnostic way, with two separate
implementations of the driver/provider for the Simulator and KVM hypervisors. The
framework also allows other hypervisor-specific providers to be
implemented in the future.
The Host-HA provider implementation for KVM hypervisor uses the out-of-band
management sub-system to issue IPMI calls to reset (recover) or poweroff (fence)
a host.
The Host-HA provider implementation for Simulator provides a means of testing
and validating the core framework implementation.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. It
assumes in `NioClient` that a keystore with the known name `cloud.jks`,
if available, will be used for SSL negotiations and handshake.
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificate to CloudStack agents such as
the KVM, CPVM and SSVM agents and also for the management server for
peer clustering.
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
global setting. Newly provisioned agents (KVM/CPVM/SSVM etc) will get a
randomized comma-separated list to which they will attempt connection
or reconnection in the provided order. This removes the need for a TCP LB on
port 8250 (default) of the management server(s).
- All fresh deployments will enforce two-way SSL authentication where
connecting agents will be required to present certificates issued
by the 'root' CA plugin.
- Existing environments on upgrade will continue to use one-way SSL
authentication and connecting agents will not be required to present
certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing the provided
certificate payload into the Java keystore file.
- Agent security (keystore, certificates etc) is set up initially using
SSH, and later provisioning is handled via an existing agent connection
using command-answers. The supported clients and agents are limited to
CPVM, SSVM, and KVM agents, and clustered management servers (peering).
- Certificate revocation does not terminate an existing agent-mgmt server
connection; however, a revoked certificate is rejected during the SSL
handshake.
- The older `cloudstackmanagement.keystore` is deprecated and will no longer
be used by mgmt server(s) for SSL negotiations and handshake. New
keystores will be named `cloud.jks`; any additional SSL certificates
should not be imported into it for use with tomcat etc. The `cloud.jks`
keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on start-up only;
their validity is the same as that of the CA certificates.
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades bouncycastle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes issue of enabling dynamic roles based on the global setting
only. This also fixes application of the default role/permissions mapping
on upgrade from 4.8 and previous versions to 4.9+.
Previously, it would make an additional check to ensure commands.properties
is not in the classpath; however, this creates confusion for admins who
may skip/skim through the release notes/docs and assume that merely changing the
global setting was not enough.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows changing permission for existing role permissions, as those were static and could not be changed once created. It also provides the ability to change these permissions in the UI using a drop down menu for each permission rule, in which admin can select ‘Allow’ or ‘Deny’ permission.
Changes in the API:
This feature modifies behaviour of updateRolePermission API method:
New optional parameters ‘ruleid’ and ‘permission’ are introduced; they are mutually exclusive with the ‘ruleorder’ parameter. This defines two use cases:
Update role permission: ‘ruleid’ and ‘permission’ parameters needed
Update rules order: ‘ruleorder’ parameter needed
Parameter ‘ruleorder’ is now optional
updateRolePermission providing ‘ruleorder’ parameter should be sent via POST
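A small sketch of the mutual-exclusion check implied by the parameter rules above; the local variable names and exception type are assumptions, not the actual API layer code:
```
// Sketch: 'ruleorder' is mutually exclusive with 'ruleid' + 'permission'.
if (ruleOrder != null && (ruleId != null || permission != null)) {
    throw new IllegalArgumentException("ruleorder cannot be combined with ruleid/permission");
}
if (ruleOrder == null && (ruleId == null || permission == null)) {
    throw new IllegalArgumentException("provide either ruleorder, or both ruleid and permission");
}
```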
The default value of the account-level global config vmsnapshot.expire.interval is -1, which conforms to legacy behaviour. A positive value will expire the VM snapshots for the respective account in that many hours.
CloudStack has several background polling tasks that are spread across
the codebase, the aim of this work is to provide a single manager to
handle submission, execution and handling of background tasks. With
the framework implemented, existing oobm background task has been
refactored to use this manager.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
removed code which nullifies vm_instance_id
Also modified QueryManagerImpl to ignore volume which does not have uuid. This is to avoid duplicate volume listing.
(cherry picked from commit 3cced927c4)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In case of a VMware host failure, all the VMs, including stopped VMs, migrate
to the new host. For the stopped VMs the power host gets updated. This was
triggering HandlePowerStateReport, which finally calls updatePowerState,
updating update_time for the VM. This caused capacity to be reserved
for stopped VMs.
(cherry picked from commit 9d268c8cd5)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
store ref table so builtin template is never downloaded completely
In the handleSysTemplateDownload method, create the template entry only if no entry exists;
handleTemplateSync will take care of the other scenario.
(cherry picked from commit 929595c114)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Update the volume id in volume_store_ref table to newly created volume for migration
(cherry picked from commit 42b89278e9)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
as that snapshot will never be going to use again and also it will fill up primary storage
(cherry picked from commit 336df84f17)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Without this information an NPE might be triggered when starting a VR, SSVM or CP,
as this information is read from the 'nics' table.
During deployment we should set the IPv6 Gateway and CIDR for the NIC object so that
it is persisted to the database.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(cherry picked from commit f661b631a1)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This commit contains following changes
(1) add CPU CORE information in op_host_capacity
(2) add capacity name in the CapacityResponse
(3) add allocatedCapacity for CPU/MEMORY/CPU CORE for zones
(4) sort CapacityResponse by zonename and CapacityType
CLOUDSTACK-9669:egress destination cidr VR python script changes
CLOUDSTACK-9669:egress destination API and orchestration changes
CLOUDSTACK-9669: Added the ipset package in systemvm template
CLOUDSTACK-9669:Added licence header for new files
CLOUDSTACK-9669: replacing 0.0.0.0/0 with the network cidr
Adding an ipset member with 0.0.0.0/0 fails, so 0.0.0.0/0 is replaced with the network cidr.
As a source cidr, 0.0.0.0/0 is effectively the network cidr.
updated the default egress all cidr with network cidr
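A tiny sketch of the substitution described above; the method name is illustrative only:
```
// Sketch: ipset rejects 0.0.0.0/0 as a member, so the "allow all" source cidr
// is replaced with the network cidr before the rule is programmed.
static String normalizeSourceCidr(String sourceCidr, String networkCidr) {
    return "0.0.0.0/0".equals(sourceCidr) ? networkCidr : sourceCidr;
}
```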
The 'force' option provided with the stopVirtualMachine API command is
often assumed to be a hard shutdown sent to the hypervisor, when in fact
it is for CloudStack's internal use. CloudStack should be able to send
the 'hard' power-off request to the hosts.
When forced parameter on the stopVM API is true, power off (hard shutdown)
a VM. This uses initial changes from #1635 to pass the forced parameter
to hypervisor plugin via the StopCommand, and fixes force stop (poweroff)
handling for KVM, VMware and XenServer.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
given is running out of capacity. If host id is specified the deployment should happen
on the given host and it should fail if the host is out of capacity. We are retrying
deployment on the entire zone without the given host id if we fail once. The retry,
which will retry on other hosts, should only be attempted if host id isn't given.
Also, introduces global setting
allow.deploy.vm.if.deploy.on.given.host.fails with which old behaviour
can be restored
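A hedged sketch of the retry rule described above, with assumed names for the plan object, the config key and the retry helper:
```
// Sketch: retry zone-wide only when no host id was pinned, or when the new
// global setting explicitly allows ignoring a failed host-pinned deployment.
boolean hostPinned = plan.getHostId() != null;
boolean allowRetryAnyway = AllowDeployVmIfGivenHostFails.value(); // assumed ConfigKey
if (deploymentFailed && (!hostPinned || allowRetryAnyway)) {
    retryDeploymentWithoutPinnedHost(plan); // hypothetical helper
}
```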
A root volume can be replaced by a different root volume without the VM it belongs to being expunged.
From dev@:
For example: Let’s say we have a system VM running on NFS primary storage. We then put this primary storage into maintenance mode, which creates the system VM (with the same name) on a different primary storage (we do not create a new row in the cloud.vm_instance table for this VM). While this VM works, the original root disk of the system VM remains on the original primary storage and is not destroyed by the code in StorageManagerImpl.cleanupStorage(boolean) in 4.10 because 4.10 (as shown above) only considers non-root volumes for deletion. In the 4.9 version of the code, the original root disk is cleaned up in StorageManagerImpl.cleanupStorage(boolean). The problem with 4.10 relying on a root disk always being deleted when the VM it belongs to is deleted is that, in a situation like this, the system VM doesn’t get deleted at this point – it gets a new root disk that’s hosted by a different primary storage (so now its original root disk is stranded).
1. Removed XenServerGuestOsMemoryMap from CitrixHelper.java
This Java file held a static in-memory map named XenServerGuestOsMemoryMap, which was the source for XenServer dynamic memory values (max and min). These values were moved to the guest_os_details table.
2. DAO layer was modified to access these values.
3. VirtualMachineTo object was modified to populate the dynamic memory values.
4. The addGuestOs and UpdateGuestOS APIs have been modified to update memory values.
removed code which nullifies vm_instance_id
Also modified QueryManagerImpl to ignore volume which does not have uuid. This is to avoid duplicate volume listing.
This fixes the agreed upon url on download.cloudstack.org in various
sql files and misc scripts.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
[4.10-blocker] Fix error in restart network in 4.10.0.0 RC
The PR fixes the error in restart network:
2017-04-04 10:27:39,217 DEBUG [c.c.n.r.NetworkHelperImpl] (API-Job-Executor-2:ctx-08904854 job-29417 ctx-3405d3f2) (logid:19bbd6e6) Router requires upgrade. Unable to send command to router:9784, router template version : Cloudstack Release 4.10.0 Wed Feb 15 05:42:18 UTC 2017, minimal required version : 4.10.0.0
It works after changing minreq.sysvmtemplate.version from 4.10.0.0 to 4.10.0
* pr/2025:
Fix error in restart network in 4.10.0.0 RC
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9783: Improve metrics view performance
This improves the metrics view feature by improving the rendering performance
of metrics view tables, by re-implementing the logic at the backend and data
served via APIs. In large environments, the older implementation would
make several API calls that increase both network and database load.
List of APIs introduced for improving the performance that re-implement the frontend logic at backend:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
Pinging for review - @abhinandanprateek @DaanHoogland @borisstoyanov @karuturi @rashmidixit
Marvin test results:
=== TestName: test_list_clusters_metrics | Status : SUCCESS ===
=== TestName: test_list_hosts_metrics | Status : SUCCESS ===
=== TestName: test_list_infrastructure_metrics | Status : SUCCESS ===
=== TestName: test_list_pstorage_metrics | Status : SUCCESS ===
=== TestName: test_list_vms_metrics | Status : SUCCESS ===
=== TestName: test_list_volumes_metrics | Status : SUCCESS ===
=== TestName: test_list_zones_metrics | Status : SUCCESS ===
* pr/1944:
CLOUDSTACK-9783: Improve metrics view performance
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Fix for test_snapshots.py using nfs2 instead of nfs template
Fix for marvin test failure introduced in #1847
Cc: @borisstoyanov @rhtyd @karuturi
* pr/1961:
Fix for test failure
Fix for test_snapshots.py using nfs2 instead of nfs template
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
There are some VM deployment failures happening when multiple VMs are deployed at a time, mainly due to NetworkModel code that iterates over all the vlans in the pod. This causes each deployVM thread to hold the global lock on Network longer and causes delays. This delay in turn causes more threads to choose the same host and fail since capacity is not available on that host.
Following are some changes required to be done to reduce delays during VM deployments which in turn causes some vm deployment failures when multiple VMs are launched at a time.
In Planner, remove the clusters that do not contain a host with a matching service offering tag. This will save some iterations over clusters that don't have a matching tagged host.
In NetworkModel, do not query the vlans for the pod within the loop. Also optimized the logic to query the ip/ipv6
In DeploymentPlanningManagerImpl, do not process the affinity group if the plan has hostId provided.
* 4.9:
moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
[4.9] CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
The router.aggregation.command.each.timeout in global configuration is only applied on newly created KVM hosts.
For existing KVM hosts, changing the value will not be effective.
We need to propagate the configuration to existing hosts when the cloudstack-agent is connected.
* pr/1856:
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
In case of a VMware host failure, all the VMs, including stopped VMs, migrate
to the new host. For the stopped VMs the power host gets updated. This was
triggering HandlePowerStateReport, which finally calls updatePowerState,
updating update_time for the VM. This caused capacity to be reserved
for stopped VMs.
CLOUDSTACK 9601: Upgrade: change logic for update path for files
For going from version A to version D, it used to run the SQL files in
that order: A -> B -> C -> D -> A-cleanup -> B-cleanup -> C-cleanup ->
D-cleanup. If you had upgraded each version separately you would have
run A -> A-cleanup -> B -> B-cleanup -> C -> C-cleanup -> D ->
D-cleanup.
This changes the logic to follow the same path if you are jumping over
versions.
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
* pr/1768:
Upgrade: change logic for update path for files
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
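A self-contained illustration of the re-ordered upgrade path described in the change above; the helper and version labels are illustrative, not the actual DatabaseUpgradeChecker code:
```
import java.util.ArrayList;
import java.util.List;

public class UpgradePathDemo {
    // For each intermediate version run its schema scripts immediately followed
    // by its cleanup scripts, instead of all schema scripts first and all
    // cleanup scripts at the end.
    static List<String> buildPath(List<String> versions) {
        List<String> steps = new ArrayList<>();
        for (String v : versions) {
            steps.add(v);
            steps.add(v + "-cleanup");
        }
        return steps;
    }

    public static void main(String[] args) {
        // Prints [A, A-cleanup, B, B-cleanup, C, C-cleanup, D, D-cleanup]
        System.out.println(buildPath(List.of("A", "B", "C", "D")));
    }
}
```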
This improves the metrics view feature by improving the rendering performance
of metrics view tables, by reimplementing the logic at the backend and data
served via APIs. In large environments, the older implementation would
make several API calls that increase both network and database load.
List of APIs introduced for improving the performance:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests
NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle
only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
* pr/1825:
CLOUDSTACK-9660: NPE while destroying volumes during 1000 VMs deploy and destroy tests NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
ipv6: Set IPv6 CIDR and Gateway in 'nic' profile
Without this information an NPE might be triggered when starting a VR, SSVM or CP,
as this information is read from the 'nics' table.
During deployment we should set the IPv6 Gateway and CIDR for the NIC object so that
it is persisted to the database.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* pr/1927:
ipv6: Set IPv6 CIDR and Gateway in 'nic' profile
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
For going from version A to version D, it used to run the SQL files in
that order: A -> B -> C -> D -> A-cleanup -> B-cleanup -> C-cleanup ->
D-cleanup. If you had upgraded each version separately you would have
run A -> A-cleanup -> B -> B-cleanup -> C -> C-cleanup -> D ->
D-cleanup.
This changes the logic to follow the same path if you are jumping over
versions.
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
CLOUDSTACK-9766 : Executing deleteSnapshot api with already deleted snapshot does not throw any exception or failure message
If we try to delete a snapshot which is already deleted, no proper error appears in the log and it just tries to delete the snapshot again.
Steps to reproduce :
-------
1-create a snapshot
2-delete the snapshot
3-try to delete snapshot which is deleted in step 2
Expected Result
-------------
Result should show proper error message. Request for deleting already deleted snapshot should not be placed.
* pr/1924:
CLOUDSTACK-9766 : Executing deleteSnapshot api with already deleted snapshot does not throw any exception or failure message
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9574: Redesign storage views
## Part 1: Redesign storage tags
### Actual behavior
Primary storage tags are being saved as an entry on `storage_pool_details` with:
* name = TAG_NAME
* value = "true"
When a boolean property is defined in `storage_pool_details` and has value = "true", it is displayed as a tag.


### Goal
Redesign `Storage Tags` for Primary Storage view, to list only tags, as it is done in Host Tags (Hosts view).
## Part 2: Remove details from listImageStores API call response and UI
### Description
In the Secondary Storage view we propose removing the `Details` field, as the `Setting` tab lists details for a given image store. We also remove details from the response of the `listImageStores` API method.
* pr/1747:
CLOUDSTACK-9574: Redesign storage tags and remove details from listImageStores response and UI
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This closes #1644
* 4.9:
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable
Unhides snapshot.backup.rightafter from global configuration
If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage.
This is the same as PR #1644 applied to 4.9, as per @jburwell
* pr/1697:
CLOUDSTACK-4858 Honors the snapshot.backup.rightafter configuration variable Unhides snapshot.backup.rightafter from global configuration
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9738: [Vmware] Optimize vm expunge process for instances with vm snapshots
## Description
It was noticed that expunging instances with many vm snapshots took a lot of time, as the hypervisor received as many tasks as the instance had vm snapshots, apart from the delete vm task. We propose a way to optimize this process for instances with vm snapshots by sending only one delete task to the hypervisor, which will delete the vm and its snapshots.
## Use cases
1. deleteVMSnapshot -> no changes to current behavior
2. destroyVM with expunge=false -> no action on VMSnapshot is performed at the moment. When the VM cleanup thread is executed it will perform the same sequence as (3). If the instance is recovered before being expunged by the cleanup thread it will remain intact with the VMSnapshot chain present
3. destroyVM with expunge=true:
* VMSnapshot is marked with removed timestamp and state = Expunging in DB
* VM is deleted in HW
* pr/1905:
CLOUDSTACK-9738: [Vmware] Optimize vm expunge process for instances with vm snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Without this information an NPE might be triggered when starting a VR, SSVM or CP,
as this information is read from the 'nics' table.
During deployment we should set the IPv6 Gateway and CIDR for the NIC object so that
it is persisted to the database.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM
* pr/977:
Fixes for testing VM Snapshots on KVM. Related to PR 977
CLOUDSTACK-8746: vm snapshot implementation for KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
This PR is to address a few issues in #1600 (which was recently merged to master for 4.10).
In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one such scenario, the source snapshot in question is coming from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).
This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.
I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.
While fixing that, I noticed a couple more problems:
1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
I have corrected those issues, as well.
I then came across one more problem:
When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).
I went ahead and changed that, as well.
JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619
* pr/1749:
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
(1) add support to create/delete/revert vm snapshots on running vms with QCOW2 format
(2) add new API to create volume snapshot from vm snapshot
(3) delete metadata of vm snapshots before stopping/migrating and recover vm snapshots after starting/migrating
(4) enable deleting of a VM snapshot on a stopped vm or when the vm snapshot is not listed in the qcow2 image.
(5) enable smoke tests for vm snapshots on KVM
- Switches Travis to use jdk1.8
- Changes java-version to 1.8
- Change jdk/maven version to 1.8
- Switch to F5/java8 compatible library release
- Switch packaging to use jdk 1.8, and jre 1.8 in init/systemd scripts
- Switch systemvm to openjdk-8-jre
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9456: Migrate master to Spring 4.x
This change makes CloudStack use Spring 4:
```
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK7
- Bump servlet dependency version
- Migrates various xmls to use version independent schema uris
```
Outstanding issue:
- Testing of various non-standard plugins such as network and storage plugins etc.
Since this is a big change, pinging for review -- @jburwell @karuturi @wido @murali-reddy @abhinandanprateek @DaanHoogland @GaborApatiNagy @JayapalUradi @kishankavala @K0zka @nvazquez @rafaelweingartner @pyr and others
@blueorangutan package
* pr/1638:
CLOUDSTACK-9456: Update Spring version in maven poms
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9597: Should not fetch resource count for removed entity
Fetch the resource count by domain and account, excluding the removed ones.
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
* pr/1764:
CLOUDSTACK-9597: Should not fetch resource count for removed entity
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
An RVR network with cleanup, which is updated from the isolated network, fails.
Corrected the column name string issue.
This closes #1781
(cherry picked from commit 0f742e1723)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9683: system.vm.default.hypervisor will pin the hypervisor for VR too with this fix
* pr/1839:
CLOUDSTACK-9683: system.vm.default.hypervisor will pin the hypervisor for VR too with this fix
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
store ref table so builtin template is never downloaded completely
In the handleSysTemplateDownload method, create the template entry only if no entry exists;
handleTemplateSync will take care of the other scenario.
- Bump spring-framework version to 4.x and Jetty to version that runs with JDK8
- Bump servlet dependency version
- Migrate spring xmls to version 4, fixes schema locations that are 3.0
dependent in various xmls.
- Fix failing tests due to spring upgrade
(Thanks @marcaurele Marc-Aurèle Brothier for fixing them)
* Fix test DeploymentPlanningManagerImplTest
* Fix GloboDNS test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8908 After copying the template charging for that template is getting stopped
This is happening as the zone id is not part of the query. Zone id is added to the query and unit tests are also added.
* pr/896:
CLOUDSTACK-8908 After copying the template charging for that template is stopped
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
NPE is seen as VM destroy and storage cleanup threads try to remove the same root volume. Fix is to handle
only non-root volumes in storage cleanup thread, root volumes will be handled as part of VM destroy.
CLOUDSTACK-9627 Fix template sync for region store.
When using a region store like Swift or S3 as secondary storage,
to convert it to a `long`. This fix guards against that.
Before this fix, if you restart the management server, all the templates
would change to "NOT READY" because the code which syncs the NFS cache
and the object store crashes due to the above mentioned issue.
This PR fixes that.
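A minimal sketch of the guard, assuming the zone id is exposed as a nullable Long on the store scope; the helper methods are illustrative, not the actual sync code:
```
// Sketch: region-wide stores (Swift/S3) are not bound to a zone, so the zone id
// may be null; guard before unboxing it into a primitive long during template sync.
Long zoneId = store.getScope().getScopeId();   // may be null for region-wide stores
if (zoneId != null) {
    long zid = zoneId;                         // safe to unbox now
    syncTemplatesForZone(zid);                 // hypothetical per-zone sync
} else {
    syncTemplatesRegionWide();                 // hypothetical region-wide path
}
```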
* pr/1772:
CLOUDSTACK-9627:Fix template sync for region store
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Upgrades Maven dependency version to v1.55
- Fixes bouncycastle usages and issues
- Adds timeout to jetty/annotation scanning
- Fixes servlet issue, uses servlet 3.1.0
- Downgrade javassist used by reflections to fix annotation process errors
- Make console-proxy-rdp bc dependency same as rest of the codebase
- Picks up PR #1510 by Daan
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Introduced a global configuration flag 'cluster.threshold.enabled'. By default the flag is true.
If the value is false, then a VM can be started in a cluster even if the cluster thresholds are
crossed. However, for a new VM deployment the cluster threshold will always be honoured.
CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD Storage if vm has been removed
* pr/1710:
CLOUDSTACK-9538: FIX failure in Deleting Snapshot From Primary Storage RBD Storage if vm has been removed
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Support for underlay features (Source & Static NAT to underlay) with Nuage VSP SDN Plugin, including Marvin test coverage for the corresponding Source & Static NAT features on master. Moreover, our Marvin tests are written in such a way that they can validate our supported feature set with both the Nuage VSP SDN platform's overlay and underlay infra.
PR contents:
1) Support for Source NAT to underlay feature on master with Nuage VSP SDN Plugin.
2) Support for Static NAT to underlay feature on master with Nuage VSP SDN Plugin.
3) Marvin test coverage for Source & Static NAT to underlay on master with Nuage VSP SDN Plugin.
4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
5) PEP8 & PyFlakes compliance with our Marvin test code.
* pr/1580:
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Marvin tests for Source NAT and Static NAT features verification with NuageVsp (both overlay and underlay infra).
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>, Frank Maximus <frank.maximus@nuagenetworks.net>
CLOUDSTACK-9561 After domain/account deletion, snapshot taken by the
domain/account remains undeleted
While deleting the UserAccount, cleanup for the removed VMs/volumes does not
happen. For the removed VMs, snapshots don't get cleaned; only for running
VMs (volumes in ready state) does the cleanup happen.
When the VM is destroyed, the volume is marked as destroyed and later the storage
garbage collector performs the cleanup. But if we try to delete the domain/account
before the storage garbage collector runs, then it fails.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Made the changes to improve logging.
CLOUDSTACK-9465 Several log refactoring/improvement suggestions.
There are two scenarios of logging which need refactoring/improvement:
Method invocation replaced by variable
This means that in the logging code, the method invocation is pre-defined as a variable. For simplicity, the method invocation should be replaced by the variable.
Delete variable which must be null
The variable in the logging code is null, there is no need to put the variable there.
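Two tiny before/after examples of the patterns described above; the logger and vm objects are assumed, not taken from a specific class:
```
// 1) Method invocation replaced by the already-defined variable:
String vmName = vm.getInstanceName();
// before: s_logger.debug("Migrating VM " + vm.getInstanceName());
s_logger.debug("Migrating VM " + vmName);

// 2) Variable that is known to be null removed from the message:
// before: s_logger.warn("Unable to find host " + host);   // host is always null here
s_logger.warn("Unable to find host");
```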
* pr/1705:
Made the changes to improve logging.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR adds the ability to pass a new parameter, locationType,
to the “createSnapshot” API command. Depending on the locationType,
we decide where the snapshot should go in case of managed storage.
There are two possible values for the locationType param
1) `Standard`: The standard operation for managed storage is to
keep the snapshot on the device. For non-managed storage, this will
be to upload it to secondary storage. This option will be the
default.
2) `Archive`: Applicable only to managed storage. This will
keep the snapshot on the secondary storage. For non-managed
storage, this will result in an error.
The reason for implementing this feature is to avoid a single
point of failure for primary storage. Right now in case of managed
storage, if the primary storage goes down, there is no easy way
to recover data as all snapshots are also stored on the primary.
This feature allows us to mitigate that risk.
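A sketch of the decision table implied above; the enum and method are illustrative, not the actual createSnapshot code:
```
enum SnapshotLocationType { STANDARD, ARCHIVE }

// Returns true when the snapshot should stay on the primary/managed device.
static boolean keepOnPrimary(boolean managedStorage, SnapshotLocationType type) {
    if (type == SnapshotLocationType.ARCHIVE) {
        if (!managedStorage) {
            throw new IllegalArgumentException("Archive applies only to managed storage");
        }
        return false;   // archive: copy to secondary storage
    }
    // Standard: managed storage keeps it on the device,
    // non-managed storage uploads it to secondary storage.
    return managedStorage;
}
```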
On shared storage failure, KVM hosts were accepted by the mgmt server with the
host state as Up, even though there was no primary/shared storage available on
them. This patch offers a quick fix by throwing an exception in the storage monitor
which connects storage pool on host. The failure is trapped by agent manager
that disconnects the agent without any investigation.
Based on lab tests, the KVM agent may take up to 2 minutes to attempt an NFS mount when
the storage is inaccessible (firewalled, or shutdown) before returning back with
an error. It is safe to assume that this won't add pressure on mgmt server due to
several reconnection attempts, and KVM agent would retry reconnection every 2
minutes.
Such KVM hosts, where failure happens due to storage issues, will be
briefly put in Alert state but will mostly be in Connecting state, during which
the KVM host attempts to mount/reconfigure the NFS storage pool.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Unhides snapshot.backup.rightafter from global configuration
If snapshot.backup.rightafter is set to false (defaults to true), snapshots are
not backed up to secondary storage
Adding support for cross-cluster storage migration for managed storage when using XenServer
This PR adds support for cross-cluster storage migration of VMs that make use of managed storage with XenServer.
Managed storage is when you have a 1:1 mapping between a virtual disk and a volume on a SAN (in the case of XenServer, an SR is placed on this SAN volume and a single virtual disk placed in the SR).
Managed storage allows features such as storage QoS and SAN-side snapshots to work (sort of analogous to VMware VVols).
This PR focuses on enabling VMs that are using managed storage to be migrated across XenServer clusters.
I have successfully run the following tests on this branch:
TestVolumes.py
TestSnapshots.py
TestVMSnapshots.py
TestAddRemoveHosts.py
TestVMMigrationWithStorage.py (which is a new test that is being added with this PR)
* pr/1671:
Adding support for cross-cluster storage migration for managed storage when using XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Added checks to prevent network update when the router state is unknown or when
the new offering removes a service that is in use.
Added a new param forced to the updateNetwork API. The network will
undergo a forced update when this param is set to true.
CLOUDSTACK-8751 Clean network config like firewall rules etc, when network services are removed during network update.
Conflicts:
engine/schema/src/com/cloud/upgrade/DatabaseUpgradeChecker.java
engine/schema/test/com/cloud/upgrade/DatabaseUpgradeCheckerTest.java
tools/marvin/setup.py
This fixes class names to make things consistent as per the 4.9 PR on master.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Renames schema-490to491*.sql to schema490to4910*.sql
* Renames the Upgrade490to491 class to Upgrade490to4910
* Removes the unused s_logger contant from Upgrade490to4910
* Updates the version in tools/marvin/setup to 4.9.1.0-SNAPSHOT
Updating pom.xml version numbers for release 4.8.2.0-SNAPSHOT
Often, patch and security releases do not require schema migrations or
data migrations. However, if an empty upgrade class and associated
scripts are not defined, the upgrade process will break. With this
change, if a release does not have an upgrade, a noop DbUpgrade is added
to the upgrade path. This approach allows the upgrade to proceed and
for the database to properly reflect the installed version. This change
should make the release process simpler as RMs no longer need to
remember to create this boilerplate code when starting a new release.
Beginning with the 4.8.2.0 and 4.9.1.0 releases, the project will
formally adopt a four (4) position release number to properly accommodate
releases that contain only CVE fixes. The DatabaseUpgradeChecker and
Version classes made assumptions that they would always parse and
compare three (3) position version numbers. This change adds the
CloudStackVersion value object that supports both three (3) and four (4)
version numbers. It encapsulates version comparison logic, as well as,
the rules to allow three (3) and four (4) to interoperate.
* Modifies DatabaseUpgradeChecker to derive an upgrade path for
a version that was not explicitly specified. It determines the
first release before it with database migrations and uses that
release's upgrade list as the basis for the version being calculated. A
noop upgrade is then added to the list which causes no schema changes
or data migrations, but will update the database to the version.
* Adds unit tests for the upgrade path calculation logic in
DatabaseUpgradeChecker
* Removes dummy upgrade logic for the 4.8.2.0 introduced in previous
versions of this patch
* Introduces the CloudStackVersion value object which parses and
compares three (3) and four (4) position version numbers. This class
is intended to replace com.cloud.maint.Version.
* Adds the junit-dataprovider dependency -- allowing test data to be
concisely generated separately from the execution of a test case.
Used extensively in the CloudStackVersionTest.
Signed-off-by: John Burwell <meaux@cockamamy.net>
/cc @rhtyd @karuturi
* pr/1654:
Adds support for four position versions and optional db upgrades
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Often, patch and security releases do not require schema migrations or
data migrations. However, if an empty upgrade class and associated
scripts are not defined, the upgrade process will break. With this
change, if a release does not have an upgrade, a noop DbUpgrade is added
to the upgrade path. This approach allows the upgrade to proceed and
for the database to properly reflect the installed version. This change
should make the release process simpler as RMs no longer need to
remember to create this boilerplate code when starting a new release.
Beginning with the 4.8.2.0 and 4.9.1.0 releases, the project will
formally adopt a four (4) position release number to properly accommodate
releases that contain only CVE fixes. The DatabaseUpgradeChecker and
Version classes made assumptions that they would always parse and
compare three (3) position version numbers. This change adds the
CloudStackVersion value object that supports both three (3) and four (4)
version numbers. It encapsulates version comparison logic, as well as,
the rules to allow three (3) and four (4) to interoperate.
* Modifies DatabaseUpgradeChecker to derive an upgrade path for
a version that was not explicitly specified. It determines the
first release before it with database migrations and uses that
release's upgrade list as the basis for the version being calculated. A
noop upgrade is then added to the list which causes no schema changes
or data migrations, but will update the database to the version.
* Adds unit tests for the upgrade path calculation logic in
DatabaseUpgradeChecker
* Removes dummy upgrade logic for the 4.8.2.0 introduced in previous
versions of this patch
* Introduces the CloudStackVersion value object which parses and
compares three (3) and four (4) position version numbers. This class
is intended to replace com.cloud.maint.Version.
* Adds the junit-dataprovider dependency -- allowing test data to be
concisely generated separately from the execution of a test case.
Used extensively in the CloudStackVersionTest.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
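A self-contained sketch of comparing three- and four-position version numbers by padding the missing position with zero; this is an illustration of the idea, not the actual CloudStackVersion implementation:
```
import java.util.Arrays;

public class VersionCompareDemo {
    static int compare(String a, String b) {
        return Arrays.compare(parse(a), parse(b));
    }

    // Accepts "x.y.z" or "x.y.z.w"; the missing fourth position defaults to 0.
    private static int[] parse(String version) {
        String[] parts = version.split("\\.");
        if (parts.length != 3 && parts.length != 4) {
            throw new IllegalArgumentException("expected 3 or 4 positions: " + version);
        }
        int[] result = new int[4];
        for (int i = 0; i < parts.length; i++) {
            result[i] = Integer.parseInt(parts[i]);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(compare("4.9.1", "4.9.1.0"));  // 0 -> treated as equal
        System.out.println(compare("4.8.2.0", "4.9.0"));  // negative -> older
    }
}
```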
In the 4.1.0-4.2.0 db upgrade path, it creates new tables to store secondary
(nfs) storage in image_store table and volumes in volume_store_ref table. In
the upgrade path, it first tries to migrate NFS storage pool where it excludes
storage pools which have been removed, but it migrates all the volumes without
checking if their storage pools have been removed. This causes fk constraint
failure as the volume/row being inserted refers to a storage pool which does
not exist in the image_store table.
The fix migrates all the nfs storage pools to image_store including removed
storage pools and in doing so migrates with the 'removed' field. This fixes
db upgrade for old pre-4.0 and 4.0/4.1 CloudStack clouds.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
[4.9/LTS] Add upgrade path from 4.9.0 to 4.9.1, change version to 4.9.1.0-SNAPSHOT
This adds db upgrade path from 4.9.0 to 4.9.1 and fixes a typo in default user role description (CLOUDSTACK-9449)
/cc @karuturi @jburwell -- this will cause issues when fwd-merged to master, I can do the fwd-merging if you would like to avoid fixing the conflicts yourself
@blueorangutan package
* pr/1646:
Updating pom.xml version numbers for release 4.9.1.0-SNAPSHOT
cloudstack: upgrade path from 4.9.0 to 4.9.1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Adds db upgrade path from 4.9.0 to 4.9.1
- CLOUDSTACK-9449: Fix typo in default user role description
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Adds role_id column to cloud_usage.account, fixes UsageDaoImpl to insert
Accounts with role_id from account table.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9238: Fix URL length to 2048 for all url fields in VO
I will update the PR to add max field length in the API commands too
* pr/1567:
API: update url field max length
not needed on host table
Fix URL length to 2048 for all url fields in VO
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9368: Fix for Support configurable NFS version for Secondary Storage mounts
## Description
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9368
This pull request addresses a problem introduced in #1361 in which the NFS version couldn't be changed after host resources were configured on startup (for hosts using `VmwareResource`); as the host parameters didn't include the `nfs.version` key, it was set to `null`.
## Proposed solution
In this proposed solution `nfsVersion` would be passed in `NfsTO` through `CopyCommand` to `VmwareResource`, which will check whether the NFS version is already configured. If not, it will use the one sent in the command and set it on its storage processor and storage handler. After that setup, it will proceed with executing the command.
* pr/1518:
CLOUDSTACK-9368: Fix for Support configurable NFS version for Secondary Storage mounts
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Dynamically load drivers before creating our DB connections
Solution to the mailing thread titled "MySQL : No suitable driver found for jdbc:mysql".
It doesn't harm that we explicitly load the MySQL driver, and for those who use a commons-dbcp version < 1.4 this would fix it as well. Since JDBC 4.0, the JDBC driver can auto-register itself, but in some weird cases (like ours) it's not working. Therefore we need to explicitly load the JDBC driver.
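A minimal sketch of the explicit driver registration described above; the connection URL and credentials are placeholders:
```
import java.sql.Connection;
import java.sql.DriverManager;

public class LoadDriverDemo {
    public static void main(String[] args) throws Exception {
        // Explicitly register the MySQL driver for environments where
        // JDBC 4.0 auto-registration does not kick in (e.g. old commons-dbcp).
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/cloud", "cloud", "secret")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```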
* pr/1553:
Dynamic loading of DB driver + support for other DB providers
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Remodeling of Nuage VSP Plugin + CLOUDSTACK-9294
Hi all,
We've remodeled the Nuage VSP plugin to use the same model as VMWare is using (non-OSS). Before, we had a runtime dependency on the Nuage Client; this has been changed to a compile-time dependency instead, for multiple reasons (build management, readability, maintainability, ...)
We've adapted the code so it now uses model objects defined in the Nuage client instead of passing a list of parameters to the Nuage client. This is a lot more readable, and a lot more maintainable.
I've had a chat with @DaanHoogland about this approach, and he told me that ACS is trying to move away from the whole non-OSS approach. We're looking into the Juniper approach, we would set up a custom maven repository which would host the required dependencies for the Nuage VSP plugin.
Any remarks or suggestions are always welcome :)
* pr/1494:
Nuage VSP : Extending Marvin test coverage
Nuage VSP : Fix for NPE while cleaning up account when there are still resources belonging to that account
CLOUDSTACK-9294 : Make sure to remove VR from VSD when removing the VPC
CLOUDSTACK-9242 : Remodel Nuage VSP plugin
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Bug-ID: CLOUDSTACK-8870: Skip external device usage collection if no external devices exist
The external network device usage monitor thread runs every 5 mins by default (based on the global config external.network.stats.interval) and runs a coalesce query to acquire a lock. When no external devices exist, there is no need to run usage collection.
Added test case to verify that usage collection task is not run when there are no External LB or External FW
* pr/846:
Bug-ID: CLOUDSTACK-8870: Skip external device usage collection if no external devices exist
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-6928: fix issue disk I/O throttling not applied
Disk IO throttling (for KVM) is not applied in the merge of 4.2.
Tests passed:
(1) start vm
(2) attach volume
(3) start vm with volume
(4) migrate vm (with volume)
* pr/1410:
CLOUDSTACK-6928: fix issue disk I/O throttling not applied
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9348: NioConnection improvements
Reopened PR with squashed changes for a re-review and testing after https://github.com/apache/cloudstack/pull/1493 and subsequent PRs got reverted
* pr/1549:
CLOUDSTACK-9348: NioConnection improvements
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Taking fast and efficient volume snapshots with XenServer (and your storage provider)
A XenServer storage repository (SR) and virtual disk image (VDI) each have UUIDs that are immutable.
This poses a problem for SAN snapshots if you intend to mount the underlying snapshot SR alongside the source SR (duplicate UUIDs).
VMware has a solution for this called re-signaturing (so, in other words, the snapshot UUIDs can be changed).
This PR only deals with the CloudStack side of things, but it works in concert with a new XenServer storage manager created by CloudOps (this storage manager enables re-signaturing of XenServer SR and VDI UUIDs).
I have written Marvin integration tests to go along with this, but cannot yet check those into the CloudStack repo as they rely on SolidFire hardware.
If anyone would like to see these integration tests, please let me know.
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9281
Here's a video I made that shows this feature in action:
https://www.youtube.com/watch?v=YQ3pBeL-WaA&list=PLqOXKM0Bt13DFnQnwUx8ZtJzoyDV0Uuye&index=13
* pr/1403:
Faster logic to see if a cluster supports resigning
Support for backend snapshots with XenServer
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9366: Capacity of one zone-wide primary storage ignored
Disable and Remove Host operation disables the primary storage capacity.
Steps to replicate:
Base Condition: There exists a host and storage pool with same id
Steps:
1. Find a host and storage pool having same id
2. Disable the host
3. CPU(1) and MEMORY(0) capacity in op_host_capacity for above host is disabled
4. STORAGE(3) capacity in op_host_capacity for storage pool with id same as above host is also disabled
RCA:
The 'host_id' column in the 'op_host_capacity' table is used for storing both the storage pool id (for STORAGE capacity) and the host id (MEMORY and CPU). While disabling a HOST we also disable the capacity associated with the host.
Ideally, while disabling capacity we should only disable MEMORY and CPU capacity, but we are not doing so.
Code Path:
ResourceManagerImpl.doDeleteHost() -> ResourceManagerImpl.resourceStateTransitTo() -> CapacityDaoImpl.updateCapacityState(null, null, null, host.getId(), capacityState.toString())
updateCapacityState disables all entries which match the host_id. This will also disable an entry having a storage pool id the same as that of the host id.
Changes:
introduced a new capacityType parameter in the updateCapacityState method and the necessary changes to add a capacity_type clause in the SQL
also fixed incorrect SQL builder logic (an unused code path, so it never surfaced)
Added marvin test to check host and storagepool capacity when host is disabled
Test Result:
```
Before Fix:
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Enabled | MEMORY | 8589934592 | HOST |
| 3 | Enabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
9 rows in set (0.00 sec)
Disable Host 3 from UI.
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Disabled | MEMORY | 8589934592 | HOST |
| 3 | Disabled | CPU | 32000 | HOST |
| 3 | Disabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
After Fix:
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Enabled | MEMORY | 8589934592 | HOST |
| 3 | Enabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
3 rows in set (0.01 sec)
Disable Host 3 from UI.
mysql> select ohc.host_id, ohc.`capacity_state`, case capacity_type when 0 then 'MEMORY' when 1 then 'CPU' ELSE 'STORAGE' END as 'capacity_type' , total_capacity, case capacity_type when 0 then 'HOST' when 1 then 'HOST' ELSE 'STORAGE POOL' END as 'HOST/STORAGE POOL' from op_host_capacity ohc where host_id=3;
+---------+----------------+---------------+----------------+-------------------+
| host_id | capacity_state | capacity_type | total_capacity | HOST/STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
| 3 | Disabled | MEMORY | 8589934592 | HOST |
| 3 | Disabled | CPU | 32000 | HOST |
| 3 | Enabled | STORAGE | 2199023255552 | STORAGE POOL |
+---------+----------------+---------------+----------------+-------------------+
3 rows in set (0.00 sec)
Sudhansus-MAC:cloudstack sudhansu$ nosetests-2.7 --with-marvin --marvin-config=setup/dev/advanced.cfg test/integration/component/maint/test_capacity_host_delete.py
==== Marvin Init Started ====
=== Marvin Parse Config Successful ===
=== Marvin Setting TestData Successful===
==== Log Folder Path: /tmp//MarvinLogs//Apr_22_2016_22_42_27_X4VBWD. All logs will be available here ====
=== Marvin Init Logging Successful===
==== Marvin Init Successful ====
===final results are now copied to: /tmp//MarvinLogs/test_capacity_host_delete_9RHSNB===
Sudhansus-MAC:cloudstack sudhansu$ cat /tmp//MarvinLogs/test_capacity_host_delete_9RHSNB/results.txt
test_01_op_host_capacity_disable_host (integration.component.maint.test_capacity_host_delete.TestHosts) ... === TestName: test_01_op_host_capacity_disable_host | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 0.168s
OK
```
* pr/1516:
CLOUDSTACK-9366: Capacity of one zone-wide primary storage ignored
Signed-off-by: Will Stevens <williamstevens@gmail.com>
dynamic-roles: packaging improvements
On some MySQL server envs, this may cause a SQL statement error, though
I was unable to reproduce it. Since it's not needed and an order by 'sort_order'
is enough, we can safely remove it.
/cc @swill @anshulgangwar @DaanHoogland and others
* pr/1551:
migrate-dynamicroles: use mysql.connector due to #1054
packaging: don't bundle systemvm.zip in rpms
packaging: backup commands.properties as it does not exist in new rpms
dynamic-roles: remove unnecessary order by ID
Signed-off-by: Will Stevens <williamstevens@gmail.com>
introduced a new capacityType parameter in the updateCapacityState method and the necessary changes to add a capacity_type clause in the SQL
also fixed incorrect SQL builder logic (an unused code path, so it never surfaced)
Added marvin test to check host and storagepool capacity when host is disabled
Added conditions to ensure the capacity_type is added only when capacity_type length is greater than 0.
Added checks in marvin test to ensure the capacity exists for a host before disabling it.
Added checks to avoid index out of range exception
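As a rough illustration of those last changes (this is not the actual CapacityDaoImpl code; only the table/column names come from the description above), the capacity_type clause is only appended when at least one capacity type was requested:
```
// Illustrative sketch only: builds the UPDATE statement for op_host_capacity and appends
// the capacity_type IN (...) clause only when capacity types were actually passed in,
// which also avoids the index-out-of-range case for an empty list.
public class CapacityStateSqlSketch {
    static String buildUpdateCapacityStateSql(short[] capacityTypes) {
        StringBuilder sql = new StringBuilder(
                "UPDATE op_host_capacity SET capacity_state = ? WHERE host_id = ?");
        if (capacityTypes != null && capacityTypes.length > 0) {
            sql.append(" AND capacity_type IN (");
            for (int i = 0; i < capacityTypes.length; i++) {
                sql.append(i == 0 ? "?" : ", ?");
            }
            sql.append(")");
        }
        return sql.toString();
    }
}
```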
- Unit test to demonstrate denial of service attack
The NioConnection uses blocking handlers for various events such as connect,
accept, read, and write. If a client connects to NioServer (used by the
agent manager to service agents on port 8250) but fails to participate in the SSL
handshake or just sits idle, it blocks the main IO/selector loop in
NioConnection. Such a client could be either malicious or aggressive.
This unit test demonstrates such a malicious client performing a
denial-of-service attack on NioServer that prevents it from serving any other client.
- Use non-blocking SSL handshake
- Uses non-blocking socket config in NioClient and NioServer/NioConnection
- Scalable connectivity from agents and peer clustered-management server
- Removes blocking ssl handshake code with a non-blocking code
- Protects from denial-of-service issues that can degrade mgmt server responsiveness
due to an aggressive/malicious client
- Uses separate executor services for handling ssl handshakes
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
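A minimal sketch of the approach described above, assuming a plain java.nio setup (this is not the actual NioConnection code; class and method names are illustrative): the selector thread only accepts the connection and hands the potentially slow SSL handshake to a dedicated executor.
```
// Sketch only: the selector thread accepts the socket and delegates the SSL handshake
// to a separate executor, so one idle or malicious client cannot stall the main IO loop.
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandshakeOffload {
    private final ExecutorService sslHandshakeExecutor = Executors.newCachedThreadPool();

    void onAccept(final SocketChannel channel) throws Exception {
        channel.configureBlocking(false);          // keep the channel non-blocking
        sslHandshakeExecutor.submit(() -> {
            try {
                doSslHandshake(channel);           // may take long, or never finish
                registerForRead(channel);          // only then join the selector loop
            } catch (Exception e) {
                closeQuietly(channel);             // a stuck client only costs one pooled task
            }
        });
    }

    void doSslHandshake(SocketChannel ch) throws Exception { /* SSLEngine wrap/unwrap loop */ }
    void registerForRead(SocketChannel ch) { /* register OP_READ with the selector */ }
    void closeQuietly(SocketChannel ch) { try { ch.close(); } catch (Exception ignored) { } }
}
```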
* 4.7:
Fix Sync of template.properties in Swift
Configure rVPC for router.redundant.vrrp.interval advert_int setting
Have rVPCs use the router.redundant.vrrp.interval setting
Resolve conflict as forceencap is already in master
Split the cidr lists so we won't hit the iptables-restore limits
Check the existence of 'forceencap' parameter before use
Do not load previous firewall rules as we replace everything anyway
Wait for dnsmasq to finish restart
Remove duplicate spaces, and thus duplicate rules.
Restore iptables at once using iptables-restore instead of calling iptables numerous times
Add iptables conversion script.
On some MySQL server envs, this may cause a SQL statement error, though
I was unable to reproduce it. Since it's not needed, an order by 'sort_order'
is enough, we can safely remove it.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This reverts commit 7ce0e10fbc, reversing
changes made to 29ba71f2db.
This was reverted because it seemed to be related to an issue
when doing a DeployDC, causing an `addHost` error.
CLOUDSTACK-9265 cleanup around httpclient versions
Some cleanup done:
- replaced HttpStatus from org.apache.commons.httpclient with that from org.apache.http
- removed unthrown HttpException
- left auto reformat in place
* pr/1385:
CLOUDSTACK-9265 cleanup around httpclient versions
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster
This PR addresses the following JIRA ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-8813
The problem is that there needs to be notifications sent when a host is added to, about to be removed from, and removed from a cluster.
Such notifications can be used for many purposes. For example, it can allow storage plug-ins to update ACLs on their storage systems. Also, it can allow us to clean up IQNs from ESXi hosts that are no longer needed.
* pr/816:
CLOUDSTACK-8813: Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster
Signed-off-by: Will Stevens <williamstevens@gmail.com>
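A hypothetical shape for such a listener (the real CloudStack interface and its parameters may differ; this only illustrates the three notification points described above):
```
// Illustrative listener contract for the three cluster-membership events.
public interface HostClusterMembershipListener {
    void onHostAdded(long hostId, long clusterId);            // host joined a cluster
    void onHostAboutToBeRemoved(long hostId, long clusterId); // e.g. clean up ACLs/IQNs first
    void onHostRemoved(long hostId, long clusterId);          // host left the cluster
}
```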
Support access to a host’s out-of-band management interface (e.g. IPMI, iLO,
DRAC, etc.) to manage host power operations (on/off etc.) and query the current
power state in CloudStack.
Given the wide range of out-of-band management interfaces such as iLO and iDRAC,
the service implementation allows for development of separate drivers as plugins.
This feature comes with an ipmitool-based driver that uses
ipmitool (http://linux.die.net/man/1/ipmitool) to communicate with any
out-of-band management interface that supports IPMI 2.0.
This feature allows following common use-cases:
- Restarting stalled/failed hosts
- Powering off under-utilised hosts
- Powering on hosts for provisioning or to increase capacity
- Allowing system administrators to see the current power state of the host
For testing this feature `ipmisim` can be used:
https://pypi.python.org/pypi/ipmisim
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
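For a rough idea of what an ipmitool-based driver does under the hood, a minimal sketch (the wrapper class is hypothetical; only the ipmitool command line shown is standard):
```
// Sketch only: query chassis power state over the LAN interface via the ipmitool CLI.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class IpmitoolPowerQuery {
    public static String queryPowerState(String host, String user, String password) throws Exception {
        Process p = new ProcessBuilder("ipmitool", "-I", "lanplus",
                "-H", host, "-U", user, "-P", password, "chassis", "power", "status").start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            return r.readLine();   // e.g. "Chassis Power is on"
        }
    }
}
```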
This feature allows root administrators to define new roles and associate API
permissions to them.
A limited form of role-based access control for the CloudStack management server
API is provided through a properties file, commands.properties, embedded in the
WAR distribution. Therefore, customizing API permissions requires unpacking the
distribution and modifying this file consistently on all servers. The old system
also does not permit the specification of additional roles.
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dynamic+Role+Based+API+Access+Checker+for+CloudStack
DB-Backed Dynamic Role Based API Access Checker for CloudStack brings following
changes, features and use-cases:
- Moves the API access definitions from commands.properties to the mgmt server DB
- Allows defining custom roles (such as a read-only ROOT admin) beyond the
current set of four (4) roles
- All roles will resolve to one of the four known roles types (Admin, Resource
Admin, Domain Admin and User) which maintains this association by requiring
all new defined roles to specify a role type.
- Allows changes to roles and API permissions per role at runtime including additions or
removal of roles and/or modifications of permissions, without the need
of restarting management server(s)
Upgrade/installation notes:
- The feature will be enabled by default for new installations, existing
deployments will continue to use the older static role based api access checker
with an option to enable this feature
- During fresh installation or upgrade, the upgrade paths will add four default
roles based on the four default role types
- For ease of migration, at the time of upgrade commands.properties will be used
to add existing set of permissions to the default roles. cloud.account
will have a new role_id column which will be populated based on default roles
as well
Dynamic-roles migration tool: scripts/util/migrate-dynamicroles.py
- Allows admins to migrate to the dynamic role based checker at a future date
- Performs a harder one-way migrate and update
- Migrates rules from existing commands.properties file into db and deprecates it
- Enables an internal hidden switch to enable dynamic role based checker feature
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
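The four base role types mentioned above can be pictured as a simple enum; this is only an illustration of the concept, not the exact class from the code base:
```
// Every dynamically defined role resolves to one of these four known role types.
public enum BaseRoleType {
    Admin,
    ResourceAdmin,
    DomainAdmin,
    User
}
```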
CLOUDSTACK-8901: PrepareTemplate job thread hard-coded to max 8 threads The thread pool was hardcoded to use 8 threads,
com.cloud.template.TemplateManagerImpl.configure(String, Map<String, Object>):
_preloadExecutor = Executors.newFixedThreadPool(8, new NamedThreadFactory("Template-Preloader"));
Added the change to pick threadpool size from configuration.
* pr/880:
CLOUDSTACK-8901: PrepareTemplate job thread hard-coded to max 8 threads
Signed-off-by: Will Stevens <williamstevens@gmail.com>
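A minimal sketch of the described change, with a hypothetical configuration key (the real key name and config plumbing differ): the pool size is read from configuration instead of being hardcoded to 8.
```
// Sketch only: size the template preload pool from configuration, falling back to 8.
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TemplatePreloaderPoolSizing {
    static ExecutorService createPreloadExecutor(Map<String, String> configs) {
        int poolSize = Integer.parseInt(configs.getOrDefault("template.preloader.pool.size", "8"));
        return Executors.newFixedThreadPool(poolSize);
    }
}
```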
CID-1338387: Deletion of method endPointSelector.selectHypervisorHost
Following the discussions and analysis presented on PR #1056 created by @DaanHoogland,
this PR is intended to push the changes that were discussed there regarding the removal of the endPointSelector.selectHypervisorHost method.
* pr/1124:
Deletion of method endPointSelector.selectHypervisorHost
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9348: Use non-blocking SSL handshake in NioConnection/Link
- Uses non-blocking socket config in NioClient and NioServer/NioConnection
- Scalable connectivity from agents and peer clustered-management server
- Removes blocking ssl handshake code with a non-blocking code
- Protects from denial-of-service issues that can degrade mgmt server responsiveness
due to an aggressive/malicious client
- Uses separate executor services for handling connect/accept events
Changes are covered by the NioTest so I did not write a new test; advise how we can improve this. Further, I tried to invest time in writing a benchmark test to reproduce a degraded server but could not make it deterministic (it sometimes fails/passes, but not always). Review, CI testing and feedback requested /cc @swill @jburwell @DaanHoogland @wido @remibergsma @rafaelweingartner @GabrielBrascher
* pr/1493:
CLOUDSTACK-9348: Use non-blocking SSL handshake
CLOUDSTACK-9348: Unit test to demonstrate denial of service attack
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-8302: Removing snapshots on RBD
Snapshot removal implemented if the primary datastore is RBD
https://issues.apache.org/jira/browse/CLOUDSTACK-8302
* pr/1230:
CLOUDSTACK-8302 - Cleanup snapshot on KVM with RBD. Snapshot removal implemented on RBD. 1. On the management side: when a new snapshot is created, check whether the primary storage is RBD; if so, do not remove the record from cloud.snapshot_store_ref that links to the Ceph image via the 'install_path' field. 2. On the management side: when removing a snapshot, also send a 'DeleteCommand' to the agent. 3. On the agent side: implemented the method 'public Answer deleteSnapshot(final DeleteCommand cmd)'
Signed-off-by: Will Stevens <williamstevens@gmail.com>
- Uses non-blocking socket config in NioClient and NioServer/NioConnection
- Scalable connectivity from agents and peer clustered-management server
- Removes blocking ssl handshake code with a non-blocking code
- Protects from denial-of-service issues that can degrade mgmt server responsiveness
due to an aggressive/malicious client
- Uses separate executor services for handling ssl handshakes
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8847: ListServiceOfferings is returning incompatible tagged offerings when called with VM id
When calling listServiceOfferings with a VM id as parameter, it returns incompatible tagged offerings. It should only list compatible tagged offerings. Compatible means the new service offering should contain all the tags of the existing service offering (the existing offering is a SUBSET of the new offering). If that is the case, the offering should appear in the result and the VM can be upgraded to it.
* pr/1321:
CLOUDSTACK-8847: ListServiceOfferings is returning incompatible tagged offerings when called with VM id
Signed-off-by: Will Stevens <williamstevens@gmail.com>
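The compatibility rule stated above ("existing offering SUBSET of new offering") boils down to a superset check on the tag sets; a small illustrative sketch, not the actual ListServiceOfferings code:
```
// Sketch only: an upgrade target is compatible if its tag set contains every tag
// of the current offering (comma-separated tag strings, as in service offerings).
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OfferingTagCompatibility {
    static boolean isUpgradeCompatible(String currentTags, String newTags) {
        Set<String> current = splitTags(currentTags);
        Set<String> candidate = splitTags(newTags);
        return candidate.containsAll(current);   // existing tags must all be present
    }

    static Set<String> splitTags(String tags) {
        if (tags == null || tags.trim().isEmpty()) {
            return new HashSet<>();
        }
        return new HashSet<>(Arrays.asList(tags.split(",")));
    }
}
```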
engine/schema: fix upgrade path to work with MySQL 5.7
Found this issue when using MySQL 5.7 with Ubuntu 16.04. The upgrade path fix removes an invalid `IGNORE` param that is now deprecated; in the upgrade path we run the alter statement to add an index only if it does not exist, so we're good.
For MySQL 5.7, we'll also need to update the docs at some point to include `server-id` along with other parameters. Some of the SQL statements used throughout engine/schema don't adhere to SQL 99 standard which is enforced by default in MySQL 5.7, therefore the following sql-mode (for backward compatibility with mysql 5.6 modes) will be necessary for anyone willing to use MySQL 5.7 (until we fix codebase wide raw and generated sql statements to be SQL99 compliant):
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,NO_ENGINE_SUBSTITUTION"
server-id = 1
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
/cc @swill @jburwell @agneya2001 @wido @DaanHoogland and others
* pr/1517:
engine/schema: fix upgrade path to work with MySQL 5.7
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Snapshot removal implemented on RBD.
1. On the management side: when a new snapshot is created, check whether the primary storage is RBD;
if so, do not remove the record from cloud.snapshot_store_ref that links to the Ceph
image via the 'install_path' field.
2. On the management side: when removing a snapshot, also send a 'DeleteCommand' to the agent.
3. On the agent side: implemented the method 'public Answer deleteSnapshot(final DeleteCommand cmd)'
Found this issue when using MySQL 5.7 with Ubuntu 16.04 with following settings:
sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION,ERROR_FOR_DIVISION_BY_ZERO,NO_ZERO_DATE,NO_ZERO_IN_DATE,NO_ENGINE_SUBSTITUTION"
server-id = 1
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Updated most dependencies to latest minor releases, EXCEPT:
- Gson 2.x
- Major spring framework version
- Servlet version
- Embedded jetty version
- Mockito version (beta)
- Mysql lib minor version upgrade (breaks mysql-ha plugin)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9130: Make RebootCommand similar to start/stop/migrate agent commands w.r.t. "execute in sequence" flag
RebootCommand now behaves in the same way as start/stop/migrate agent commands w.r.t. sequential/parallel execution.
* pr/1200:
CLOUDSTACK-9130: Make RebootCommand similar to start/stop/migrate agent commands w.r.t. "execute in sequence" flag. RebootCommand now behaves in the same way as start/stop/migrate agent commands w.r.t. sequential/parallel execution.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
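For context, the "execute in sequence" behaviour of an agent command is driven by its executeInSequence() result; the sketch below is only illustrative (the class and the constructor flag stand in for the real command and global setting):
```
// Illustrative only: the agent framework asks each command whether it must be serialized;
// after this change RebootCommand answers like start/stop/migrate do, based on a flag.
public class RebootLikeCommand {
    private final boolean executeInSequence;

    public RebootLikeCommand(boolean executeInSequence) {
        this.executeInSequence = executeInSequence;
    }

    public boolean executeInSequence() {
        return executeInSequence;   // false => may run in parallel with other commands
    }
}
```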
CLOUDSTACK-9196: Fixing null pointer exception when vm meta data is synced on upgraded setup
https://issues.apache.org/jira/browse/CLOUDSTACK-9196
NullPointerException can occur if XenServer reports non-existing VM in cloud DB.
* pr/1274:
CLOUDSTACK-9196: Fixing null pointer exception when vm meta data is synced on upgraded setup.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Removed unused variables from "NetworkStateListener" class
We removed the following variables from "com.cloud.network.NetworkStateListener":
. UsageEventDao _usageEventDao
. NetworkDao _networkDao
We changed the EventBus s_eventBus variable to private, changed the constructor not to use those variables, and applied this change in the classes com.cloud.network.IpAddressManagerImpl and org.apache.cloudstack.engine.orchestration.NetworkOrchestrator
* pr/1261:
Removed unused variables from class NetworkStateListener
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixed return type Void to void in DataMotionStrategy.
The main changes are:
- Changing the methods' return type from Void to void.
- Removal of the method Void copyAsync(DataObject srcData, DataObject
destData, AsyncCompletionCallback<CopyCommandResult> callback) that was
never used.
We noticed that methods from that class are using the return type Void
with a capital V. This means each such method has to return a null value at the
end.
Removed trim lines from XenServerStorageMotionStrategy.
The trim lines were removed from XenServerStorageMotionStrategy.
* pr/969:
Changed return of methods from DataMotionStrategy, Void to void
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
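A before/after sketch of the return-type change (parameter types simplified so the snippet is self-contained):
```
// Before: the Void (capital V) return type forces an artificial "return null;".
interface DataMotionStrategyBefore {
    Void copyAsync(Object srcData, Object destData, Object callback);
}

// After: plain void, no dummy return value needed.
interface DataMotionStrategyAfter {
    void copyAsync(Object srcData, Object destData, Object callback);
}
```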
CLOUDSTACK-9185: [VMware DRS] VM sync failed with exception due to out-of-band changes
Summary: The target "ClusteredVirtualMachineManagerImpl.HandlePowerStateReport" invoked during the VM power state sync is not found as HandlePowerStateReport was not implemented in ClusteredVirtualMachineManagerImpl and was private in VirtualMachineManagerImpl, which was resulting in InvocationTargetException. Changed HandlePowerStateReport() in VirtualMachineManagerImpl to protected.
* pr/1256:
CLOUDSTACK-9185: [VMware DRS] VM sync failed with exception due to out-of-band changes
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8860: improve error messages in VM deployment code path.
Improved the error messages in the VM deployment code path: added some more data to the error messages and also fixed some error messages that used internal ids so they use uuids instead.
* pr/864:
CLOUDSTACK-8860: improve error messages in VM deployment code path.
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.7:
CLOUDSTACK-9154 - Sets the pub interface down when all guest nets are gone
CLOUDSTACK-9187 - Makes code ready for more something like ethXXXX, if we ever get that far
CLOUDSTACK-9188 - Reads network GC interval and wait from configDao
CLOUDSTACK-9187 - Fixes interface allocation to VRRP instances
CLOUDSTACK-9187 - Adds test to cover multiple nics and nic removal
CLOUDSTACK-9154 - Adds test to cover nics state after GC
CLOUDSTACK-9154 - Returns the guest interface that is marked as added
Conflicts:
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/NetworkOrchestrator.java
[4.7] Critical VPCVR issues fixed: CLOUDSTACK-9154; CLOUDSTACK-9187; and CLOUDSTACK-9188
This PR applies the same fixes as PR #1259, but against branch 4.7.
Please refer to PR #1259 for the test results and all the comments already made there.
Issues fixed are:
* CLOUDSTACK-9154: rVPC doesn't recover from cleaning up of network garbage collector
* CLOUDSTACK-9187: rVPC routers in Master/Master due to concurrency problem when writing the keepalivd.conf
* CLOUDSTACK-9188: NetworkGarbageCollector is not using gc.interval and gc.wait from settings
Those changes have been covered by 2 new tests added to ```smoke/test_vpc_redundant.py```:
* test_04_rvpc_network_garbage_collector_nics
* test_05_rvpc_multi_tiers
The test ```test_04_rvpc_network_garbage_collector_nics``` depends on the global settings network.gc.interval and network.gc.wait. If one wants the test to run quicker, please change the settings (default is 600 seconds for each) and restart the Management Server before running the tests. I would suggest setting them to 60 seconds.
In addition, the NetworkGarbageCollector was redefining the settings mentioned above instead of reading their values through ConfigDao. Due to that, the settings were not being applied properly and the test was waiting too long to check the VPC routers.
* pr/1277:
CLOUDSTACK-9154 - Sets the pub interface down when all guest nets are gone
CLOUDSTACK-9187 - Makes code ready for more something like ethXXXX, if we ever get that far
CLOUDSTACK-9188 - Reads network GC interval and wait from configDao
CLOUDSTACK-9187 - Fixes interface allocation to VRRP instances
CLOUDSTACK-9187 - Adds test to cover multiple nics and nic removal
CLOUDSTACK-9154 - Adds test to cover nics state after GC
CLOUDSTACK-9154 - Returns the guest interface that is marked as added
Signed-off-by: Remi Bergsma <github@remi.nl>
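The gist of the CLOUDSTACK-9188 part is simply reading network.gc.interval and network.gc.wait through the config layer instead of redefining them; a simplified sketch in which a plain Map stands in for ConfigDao:
```
// Sketch only: look up the GC settings by their global-setting keys, with the 600s defaults.
import java.util.Map;

public class NetworkGcSettings {
    static int gcIntervalSeconds(Map<String, String> configDao) {
        return Integer.parseInt(configDao.getOrDefault("network.gc.interval", "600"));
    }

    static int gcWaitSeconds(Map<String, String> configDao) {
        return Integer.parseInt(configDao.getOrDefault("network.gc.wait", "600"));
    }
}
```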
* 4.7:
Implement CheckHealthCommand for NSX controllers
Fix log message that refers to agent, not host
Prevent NullPointerException when host does not belong to a pod
Add Health Check Command to NSX plugin
The NSX plugin does not support the HealthCheckCommand. Instead it fakes a PingCommand as a call to the control cluster status API.
However, we have seen in production that the management server will sometimes find the NSX controller to be behind on ping, and that will trigger a HealthCheckCommand which returns an unsupported command answer.
Once this happens the controller is put into Alert state and will not recover until the management server is restarted.
In addition, during the investigation, there was a null pointer exception due to the fact that the NSX controllers do not live in a pod.
This PR tries to address those two issues.
* pr/1293:
Implement CheckHealthCommand for NSX controllers
Fix log message that refers to agent, not host
Prevent NullPointerException when host does not belong to a pod
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.7:
Fix unable to setup more than one Site2Site VPN Connection
FIX S2S VPN rVPC: Check only redundant routers in state MASTER
PEP8 of integration/smoke/test_vpc_vpn
Add S2S VPN test for Redundant VPC
Make integration/smoke/test_vpc_vpn Hypervisor independent
FIX VPN: non-working ipsec commands
[UI] MADNESS
[DB] Add force_encap field to s2s_customer_gateway table
[ROUTER] Add forceencaps field to python router ipsec config method
[TEST] unittest needs rework
[MARVIN] Add forceencap field to VpnCustomerGateway class in marvin base
[CORE] Add Force UDP Encapsulation option to Site2Site VPN
CLOUDSTACK-9186: Root admin cannot see VPC created by Domain admin user
CLOUDSTACK-9192: UpdateVpnCustomerGateway is failing
CLOUDSTACK-6485 prevent ip assignment of private gw iface
CLOUDSTACK-9204 Do not error when staticroute is already gone
make both check lines consistent
CLOUDSTACK-9181 Prevent syntax error in checkrouter.sh
CLOUDSTACK-9202 Bump ssh timeout
[4.7] ADD Force UDP encapsulation option to Site2Site VPN
This PR adds the option to enable forced UDP encapsulation of ESP packets during the setup of a site2site VPN. This option enables the 'forceencaps' option in the openswan ipsec config:
https://wiki.strongswan.org/projects/strongswan/wiki/ConnSection
* pr/1317:
[UI] MADNESS
[DB] Add force_encap field to s2s_customer_gateway table
[ROUTER] Add forceencaps field to python router ipsec config method
[TEST] unittest needs rework
[MARVIN] Add forceencap field to VpnCustomerGateway class in marvin base
[CORE] Add Force UDP Encapsulation option to Site2Site VPN
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.7:
CLOUDSTACK-9220 Sort list of domains on Domain tab in UI
Admin cannot see VMs on port forwarding page
Fix mariadb related listCapacity bug (CLOUDSTACK-8966)
CLOUDSTACK-9213 - Split the ACL rules using comma instead of dash.
CLOUDSTACK-9213 - Formatting the code
Summary: The target "ClusteredVirtualMachineManagerImpl.HandlePowerStateReport" invoked during the VM power state sync is not found as HandlePowerStateReport was not implemented in ClusteredVirtualMachineManagerImpl and was private in VirtualMachineManagerImpl, which was resulting in InvocationTargetException. Changed HandlePowerStateReport() in VirtualMachineManagerImpl to protected.
CLOUDSTACK-9134: set device_id as the first device_id not in use instead of nic count
When we restart VPC tiers, the old nics will be removed and a new nic created.
However, the device_id was set to the nic count, which may already be in use.
This commit uses the first device_id not in use as the device_id of the new nic.
This issue also happens when we add multiple networks to a vm and remove them.
* pr/1209:
CLOUDSTACK-9134: set device_id as the first device_id not in use instead of nic count
Signed-off-by: Daan Hoogland <daan@onecht.net>
When we restart VPC tiers, the old nics will be removed and a new nic created.
However, the device_id was set to the nic count, which may already be in use.
This commit uses the first device_id not in use as the device_id of the new nic.
This issue also happens when we add multiple networks to a vm and remove them.
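A minimal sketch of the device_id selection described above (illustrative only; the real code works on the VM's NicVO records rather than a plain list of integers):
```
// Sketch only: pick the first device id not already taken by the VM's existing nics,
// instead of using the nic count, which can collide after nics have been removed.
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class DeviceIdAllocator {
    static int firstFreeDeviceId(List<Integer> usedDeviceIds) {
        Set<Integer> used = new TreeSet<>(usedDeviceIds);
        int candidate = 0;
        while (used.contains(candidate)) {
            candidate++;
        }
        return candidate;
    }
}
```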
Quota service while allowing for scalability will make sure that the cloud is
not exploited by attacks, careless use and program errors. To address this
problem, we propose to employ a quota-enforcement service that allows resource
usage within certain bounds as defined by policies and available quotas for
various entities. Quota service extends the functionality of usage server to
provide a measurement for the resources used by the accounts and domains using a
common unit referred to as cloud currency in this document. It can be configured
to ensure that your usage won’t exceed the budget allocated to accounts/domain
in cloud currency. It will let users know how much of the cloud resources they are
using. It will help the cloud admins, if they want, to ensure that a user does
not go beyond the allocated quota. Per usage cycle, if an account is found to be
exceeding its quota then it is locked. Locking an account means that it will not
be able to initiate a new resource allocation request, whether it is more
storage or an additional IP. Needless to say, the quota service as well as any action
on the account is configurable.
Changes from Github code review:
- Added marvin test for quota plugin API
- removed unused commented code
- debug messages in debug enabled check
- checks for nulls, fixed access to member variables and feature
- changes based on PR comments
- unit tests for UsageTypes
- unit tests for all Cmd classes
- unit tests for all service and manager impls
- try-catch-finally or try-with-resource in dao impls for failsafe db switching
- remove dead code
- add missing quota calculation case (regression fixed)
- replace tabs with spaces in pom.xmls
- quota: though default value for quota_calculated is 0, the usage server
makes it null while entering usage entries. Flipping the condition so
as to account for that.
- quotatypes: fix NPE in quota type
- quota framework test fixes
- made statement period configurable
- changed default email templates to reflect the fact that exhausted quota may not result in a locked account
- added quotaUpdateCmd that refreshes quota balances and sends alerts and statements
- report quotaSummary command returns quota balance, quota usage and state for all account
- made UI framework changes to allow for text area input in edit views
- process usage entries that have greater than 0 usage
- process quota entries only if the tariff is non-zero
- if there are credit entries but no balance entry create a dummy balance entry
- remove any credit entries that are before the last balance entry
when displaying balance statement
- on a rerun the last balance is now getting added
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Quota+Service+-+FS
PR: https://github.com/apache/cloudstack/pull/768
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9105: Logging enhancement: Handle/reference to track API calls end to end in the MS logs
Added logid to logging framework, now all API call logs can be tracked with this id end to end
* pr/1167:
CLOUDSTACK-9105: Logging enhancement: Handle/reference to track API calls end to end in the MS logs Added logid to logging framework, now all API call logs can be tracked with this id end to end
Signed-off-by: Daan Hoogland <daan@onecht.net>
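The underlying technique is a thread-local diagnostic context: attach an id when the API call starts and have the log pattern print it on every line. A sketch using log4j 1.x MDC (the helper class and the key name are illustrative, not the exact CloudStack implementation):
```
// Sketch only: correlate all log lines of one API call via a thread-local id.
import org.apache.log4j.MDC;

import java.util.UUID;

public class ApiCallLogContext {
    public static String begin() {
        String logId = UUID.randomUUID().toString().substring(0, 8);
        MDC.put("logid", logId);     // pattern layouts can then print %X{logid}
        return logId;
    }

    public static void end() {
        MDC.remove("logid");
    }
}
```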
CLOUDSTACK-9047 rename enums
Make enums adhere to best-practice naming conventions
* pr/1049:
CLOUDSTACK-9046 rename enums to adhere to naming conventions
CLOUDSTACK-9046 renamed enums in kvm plugin
CLOUDSTACK-9047 use 'State's only with context: there are more types called 'State' (or to be called so, but now 'state'), so remove imports and prepend their enclosing class/context to them.
Signed-off-by: Daan Hoogland <daan@onecht.net>
CLOUDSTACK-9107: Description of global config agent.load.threshold and the actual behavior are not in sync
Made configuration parameter description and behavior in sync.
* pr/1172:
CLOUDSTACK-9107: Description of global config agent.load.threshold and the actual behavior are not in sync Made configuration parameter description and behavior in sync.
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.6:
CLOUDSTACK-9075 - Uses the same vlan since it should have been already released
CLOUDSTACK-9075 - Adds VPC static routes test
CLOUDSTACK-9075 - Covers Private GW ACL with Redundant VPCs
CLOUDSTACK-9075 - Add method to get list of Physical Networks per zone
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
CLOUDSTACK-6276 Fixing affinity groups for projects
With some contributions from @resmo and @ustcweizhou.
This closes https://github.com/apache/cloudstack/pull/508
To test manually (need at least 2 hosts):
Create a project
Create an affinity group in that project
Deploy a vm with that affinity group
Deploy a second vm with that affinity group
They should be on different hosts
Ran old and new tests for affinity groups on the simulator
Test create affinity group as admin in project ... === TestName: test_01_admin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as domain admin for projects ... === TestName: test_02_doadmin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as user for projects ... === TestName: test_03_user_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) for projects ... === TestName: test_4_user_create_aff_grp_existing_name_for_project | Status : SUCCESS ===
ok
#Delete Affinity Group by id. ... === TestName: test_01_delete_aff_grp_by_id | Status : SUCCESS ===
ok
#Delete Affinity Group by id should fail for user not in project ... === TestName: test_02_delete_aff_grp_by_id_another_user | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_01_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups with more vms than hosts. ... === TestName: test_02_deploy_vm_anti_affinity_group_fail_on_not_enough_hosts | Status : SUCCESS ===
ok
List affinity group for a vm for projects ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm for projects ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id for projects ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name for projects ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id for projects ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name for projects ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group for projects ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 16 tests in 581.706s
OK
Deploy vm as Admin in Affinity Group belonging to regular user (should fail) ... === TestName: test_01_deploy_vm_another_user | Status : SUCCESS ===
ok
Create Affinity Group as admin for regular user ... === TestName: test_02_create_aff_grp_user | Status : SUCCESS ===
ok
List Affinity Groups as admin for all the users ... === TestName: test_03_list_aff_grp_all_users | Status : SUCCESS ===
ok
List Affinity Groups belonging to admin user ... === TestName: test_04_list_all_admin_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing account id and domain id ... === TestName: test_05_list_all_users_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing group id ... === TestName: test_06_list_all_users_aff_grp_by_id | Status : SUCCESS ===
ok
Delete Affinity Group belonging to regular user ... === TestName: test_07_delete_aff_grp_of_other_user | Status : SUCCESS ===
ok
Test create affinity group as admin ... === TestName: test_01_admin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as domain admin ... === TestName: test_02_doadmin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as user ... === TestName: test_03_user_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) ... === TestName: test_04_user_create_aff_grp_existing_name | Status : SUCCESS ===
ok
Test create affinity group with existing name but within different account ... === TestName: test_05_create_aff_grp_same_name_diff_acc | Status : SUCCESS ===
ok
Test create affinity group of non-existing type ... === TestName: test_06_create_aff_grp_nonexisting_type | Status : SUCCESS ===
ok
Delete Affinity Group by name ... === TestName: test_01_delete_aff_grp_by_name | Status : SUCCESS ===
ok
Delete Affinity Group as admin for an account ... === TestName: test_02_delete_aff_grp_for_acc | Status : SUCCESS ===
ok
Delete Affinity Group which has vms in it ... === TestName: test_03_delete_aff_grp_with_vms | Status : SUCCESS ===
ok
Delete Affinity Group with id which does not belong to this user ... === TestName: test_05_delete_aff_grp_id | Status : SUCCESS ===
ok
Delete Affinity Group by name which does not belong to this user ... === TestName: test_06_delete_aff_grp_name | Status : SUCCESS ===
ok
Delete Affinity Group by id. ... === TestName: test_08_delete_aff_grp_by_id | Status : SUCCESS ===
ok
Root admin should be able to delete affinity group of other users ... === TestName: test_09_delete_aff_grp_root_admin | Status : SUCCESS ===
ok
Deploy VM without affinity group ... === TestName: test_01_deploy_vm_without_aff_grp | Status : SUCCESS ===
ok
Deploy VM by aff grp name ... === TestName: test_02_deploy_vm_by_aff_grp_name | Status : SUCCESS ===
ok
Deploy VM by aff grp id ... === TestName: test_03_deploy_vm_by_aff_grp_id | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_04_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
Deploy vms by affinity group id ... === TestName: test_05_deploy_vm_by_id | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by name ... === TestName: test_06_deploy_vm_aff_grp_of_other_user_by_name | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by id ... === TestName: test_07_deploy_vm_aff_grp_of_other_user_by_id | Status : SUCCESS ===
ok
Deploy vm in multiple affinity groups ... === TestName: test_08_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy multiple vms in multiple affinity groups ... === TestName: test_09_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy VM by aff grp name and id ... === TestName: test_10_deploy_vm_by_aff_grp_name_and_id | Status : SUCCESS ===
ok
List affinity group for a vm ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupnames ... === TestName: test_02_update_aff_grp_by_names | Status : SUCCESS ===
ok
Update the list of affinityGroups for vm which is not associated ... === TestName: test_03_update_aff_grp_for_vm_with_no_aff_grp | Status : SUCCESS ===
ok
Update the list of Affinity Groups to empty list ... SKIP: Skip - Failing - work in progress
Update the list of Affinity Groups on running vm ... === TestName: test_05_update_aff_grp_on_running_vm | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 42 tests in 976.432s
OK (SKIP=1)
* pr/1134:
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.6:
CLOUDSTACK-9022: move storage.cleanup related global configurations to StorageManager
CLOUDSTACK-9022: keep Destroyed volumes for sometime
Conflicts:
server/src/com/cloud/storage/StorageManagerImpl.java
Removed unused code from the EngineHostDao implementation. After analysing the code within the EngineHostDaoImpl class, we noticed that the methods:
countBy;
findByGuid;
findAndUpdateDirectAgentToLoad;
findAndUpdateApplianceToLoad;
markHostsAsDisconnected;
listAllUpAndEnabledNonHAHosts;
findLostHosts;
getRunningHostCounts;
getNextSequence;
countRoutingHostsByDataCenter;
updateResourceState;
findByTypeNameAndZoneId;
findHypervisorHostInCluster;
lockOneRandomRow;
And variables:
status_logger;
state_logger;
Have no usage. Thus, in order to clean up the code, we decided to remove them from EngineHostDaoImpl and its interface (EngineHostDao).
All of EngineHostDaoImpl's attributes were set to private, given that they are not accessed by any other class.
* pr/942:
Removed unused code from the EngineHostDao Implementation.
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.6:
CLOUDSTACK-9020: UI enhancements from metrics view
CLOUDSTACK-9064: The users should be able to create multiple Guest Shared Networks in same Vlan ID, same Physical Network and same network, just with a different IP ranges.
CLOUDSTACK-9078: Gave scripts executable permissions.
CLOUDSTACK-9076: Changed ownership of directory /var/lib/cloudstack to cloud.
Cannot list vlanipranges by keyword
[4.7] CLOUDSTACK-8958: add dedicated ips to domain (account for now)
For now, we dedicate an ip pool to an account; however, other accounts in the same domain cannot fetch ips from this ip pool.
By dedicating the ip pool to a domain, accounts in the domain can fetch public ips from the same ip pool.
* pr/1007:
CLOUDSTACK-8958: throw an exception if project account cannot be found
CLOUDSTACK-8958: add dedicated ips to domain (account for now)
Signed-off-by: Remi Bergsma <github@remi.nl>
The main changes are:
• Changing the methods' return type from “Void” to “void”.
• Removal of the method “Void copyAsync(DataObject srcData, DataObject
destData, AsyncCompletionCallback callback)” that was never used. We
noticed that methods from that class are using the return type Void with
a capital V. This means that method has to return a null value at the end.
CLOUDSTACK-9062: Improve S3 implementation.
The S3 implementation is far from finished; this commit focuses on the basics.
- Upgrade AWS SDK to latest version.
- Rewrite S3 Template downloader.
- Rewrite S3Utils utility class.
- Improve addImageStoreS3 API command.
- Split various classes for convenience.
- Various minor improvements and code optimizations.
A side effect of the new AWS SDK is that it, by default, uses the V4 signature. Therefore I added an option to specify the Signer, so it stays compatible with previous versions.
Please review thoroughly, both code inspection and (automated) integration tests. Currently no integration tests are available specifically for S3. Therefore the implementation needs to be tested manually, for now...
What I tested:
- Greenfield install -> will download latest systemvm template automatically to S3.
- Upload a template/iso
- Download a template/iso
- Restart of management server -> list available templates -> doesn't download them again if available.
* pr/1083:
CLOUDSTACK-9062: Improve S3 implementation.
Signed-off-by: Remi Bergsma <github@remi.nl>
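The signer option mentioned above maps onto the AWS SDK for Java v1 ClientConfiguration.setSignerOverride knob; a hedged sketch (the wrapper class, endpoint and credentials are placeholders, not CloudStack code):
```
// Sketch only: newer AWS SDKs default to the V4 signature, so an explicit signer
// override keeps older S3-compatible endpoints working.
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;

public class S3ClientWithSigner {
    static AmazonS3Client create(String accessKey, String secretKey, String endpoint, String signer) {
        ClientConfiguration cfg = new ClientConfiguration();
        if (signer != null && !signer.isEmpty()) {
            cfg.setSignerOverride(signer);   // e.g. "S3SignerType" for the old V2 signature
        }
        AmazonS3Client client = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey), cfg);
        client.setEndpoint(endpoint);
        return client;
    }
}
```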
* 4.6:
CLOUDSTACK-9053 security upgrade as per COLLECTIONS-580
CLOUDSTACK-9055: fix NPE in updating Redundant State of VPC networks
CLOUDSTACK-9057 remove old system vm upgrade code
The S3 implementation is far from finished; this commit focuses on the basics.
- Upgrade AWS SDK to latest version.
- Rewrite S3 Template downloader.
- Rewrite S3Utils utility class.
- Improve addImageStoreS3 API command.
- Split various classes for convenience.
- Various minor improvements and code optimisations.
A side effect of the new AWS SDK is that it, by default, uses the V4 signature. Therefore I added an option to specify the Signer, so it stays compatible with previous versions.
* 4.6:
more poms didn't get updated with script
implemented upgrade path from 4.6.0 to 4.6.1
checkstyle pom didn't get updated with script
debian: add 4.6.1-snapshot to changelog
Updating pom.xml version numbers for release 4.6.1-SNAPSHOT
Updating pom.xml version numbers for release 4.6.0
CLOUDSTACk-9002: VM deployment is successful even when dhcp entry command fails - Fixed
Reason: The return value of the call to the accept() function in the applyRules() function of BasicNetworkTopology.java was not checked for success or failure. As a result, even if it failed, an exception was not thrown and VM deployment went ahead without any errors.
Fix: Added the necessary checks.
* pr/995:
CLOUDSTACk-9002: VM deployment is successful even when dhcp entry command fails - Fixed
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8825, CLOUDSTACK-8824 : Fixed issues if vm.allocation.algorithm is set to firstfitleastconsumed
Fixed the following issues when vm.allocation.algorithm is set to firstfitleastconsumed:
1. VM deployment failure if there is only ZWPS in the setup
2. VM migration is impossible from the UI
To test
1. Create setup with ZWPS only
2. set vm.allocation.algorithm to firstfitleastconsumed in global settings
3. deploy virtual machine
observation: vm deployment will fail
After this fix it will pass
second scenario
1. Create Cloudstack Setup with two hosts (As it needs setup for migration)
2. Try migrating VM from UI
Observation: There will be error response in logs with nothing available in UI
After fix it will pass
Regarding BVT I am not sure whether there exists tests for firstfitleastconsumed vm allocation algorithm.
* pr/787:
CLOUDSTACK-8825, CLOUDSTACK-8824 : Fixed following issues if vm.allocation.algorithm is set to firstfitleastconsumed 1. VM deployment failure if there is only ZWPS in setup 2. VM migration is impossible from UI
Signed-off-by: Remi Bergsma <github@remi.nl>
FIX: Ovm3 physical network traffic labels to work.
The labeling was broken. Only labels assigned at zone creation
were used; changing labels was not working. Tested by changing
a label and checking it; labels at zone creation still work.
As a bonus, fixed the consistency of KVM in Dutch compared to other
traffic labels in Dutch, and copied in the OVM3 translated label
in other languages based on the other traffic labels in those languages.
* pr/964:
FIX: Ovm3 physical network traffic labels to work.
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it will throw an NPE.
* pr/976:
CLOUDSTACK-8964: Can't create volume from snapshot of a removed volume
Signed-off-by: Remi Bergsma <github@remi.nl>
This issue happens on KVM as well.
This is because the volume info is missing in the CopyCommand once the volume has been removed.
When the KVM agent tries to process the command, it will throw an NPE.
The labeling was broken. Only labels assigned at zone creation
were used; changing labels was not working. Tested by changing
a label and checking it.
As a bonus, fixed the consistency of KVM in Dutch compared to other
traffic labels in Dutch and copied in the OVM3 translated label
in other languages.
There are 2 things which have been changed.
* We look at power_state_update_time instead of update_time. It didn't make sense to me at all to look at update_time.
* Due to a DB update optimisation, powerState will only be updated if < MAX_CONSECUTIVE_SAME_STATE_UPDATE_COUNT. That is why we cannot rely on this information unless we make sure it is up to date.
[4.6][BLOCKER]CLOUDSTACK-8763: Resolved POD/ZONE deletion failure.
Instead of having both checkIfPodIsDeletable() and checkIfZoneIsDeletable() have their own SQL query, I've refactored them so they use DAO SQL queries.
This resolves the SQL Exception thrown by both classes.
Test to confirm working order:
- deploy ACS
- Add zones / pods. -> Try to delete without resources. -> Direct success.
- Add resources to zones / pods. -> Try to delete with resources in the pod / zone. -> Correct exception thrown. (The error message states why it cannot remove the zone / pod, i.e. there are still hosts or VMs or .... )
* pr/845:
Added unit tests for checkIfPodIsDeletable() and checkIfZoneIsDeletable().
Updated Dao classes with correct field names.
Refactored checkIfZoneIsDeletable().
Added findByDc(long dcId) to VolumeDao and VolumeDaoImpl.
Added countIPs(long dcId, boolean onlyCountAllocated) to IPAddressDao and IPAddressDaoImpl.
Added countIPs(long dcId, boolean onlyCountAllocated) to DataCenterIpAddressDao and DataCenterIpAddressDaoImpl.
Refactored checkIfPodIsDeletable().
Added findByPodId(Long podId) to HostDao and HostDaoImpl.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
CLOUDSTACK-8851 Redundant VR getting started in the same cluster or host
We are not populating the deployment destination of the previous rVR in the avoid set of the rVR that is being created. This was resulting in both rVRs sometimes getting deployed to the same host even when one
could have gone to a different one.
Now we are updating the avoid set of the deployment plan to fix this issue.
* pr/839:
CLOUDSTACK-8851 Redundant VR getting started in the same cluster or host even when there are suitable hosts available
Signed-off-by: Remi Bergsma <github@remi.nl>
Cloudstack 8816 entityuuid missing in some of the events
In some of the events generated, the entity uuid was missing, making it difficult to find the entity. Fixed the same.
Tested it on a rabbitmq instance.
Here are the events before and after the fix:
Before
--------------------------------------------------------------------------------
routing_key: management-server.ActionEvent.ACCOUNT-DELETE.Account.*
exchange: cloudstack-events
message_count: 2
payload:
{"eventDateTime":"2015-09-04 17:59:24 +0530","status":"Scheduled","description":"deleting User test4 (id: 28) and accountId \u003d 28","event":"ACCOUNT.DELETE","Account":"c09e2e81-8edc-4c27-b072-25005b522b63","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 304
payload_encoding: string
redelivered: False
--------------------------------------------------------------------------------
routing_key: management-server.AsyncJobEvent.complete.Account.*
exchange: cloudstack-events
message_count: 0
payload: {"cmdInfo":"{\"id\":\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\",\"response\":\"json\",\"sessionkey\":\"5ig1ItP2_5v-mgY4cVJbJN5hw_w\",\"ctxDetails\":\"
{\\\"interface com.cloud.user.Account\\\":\\\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\\\"}
\",\"cmdEventType\":\"ACCOUNT.DELETE\",\"expires\":\"2015-09-07T11:11:56+0000\",\"ctxUserId\":\"2\",\"signatureversion\":\"3\",\"httpmethod\":\"GET\",\"uuid\":\"9dd3abc2-3f8b-4852-aa60-a74b234acb13\",\"ctxAccountId\":\"2\",\"ctxStartEventId\":\"447\"}","instanceType":"Account","jobId":"5004989d-0cde-4922-8afa-66bf38b75ea7","status":"SUCCEEDED","processStatus":"0","commandEventType":"ACCOUNT.DELETE","resultCode":"0","command":"org.apache.cloudstack.api.command.admin.account.DeleteAccountCmd","jobResult":"org.apache.cloudstack.api.response.SuccessResponse/null/
{\"success\":true}
","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 914
payload_encoding: string
redelivered: False
--------------------------------------------------------------------------------
After
--------------------------------------------------------------------------------
routing_key: management-server.ActionEvent.ACCOUNT-DELETE.Account.e5e2db91-414d-484c-99d5-c4e265c14ad8
exchange: cloudstack-events
message_count: 13
payload: {"eventDateTime":"2015-09-07 17:32:26 +0530","status":"Completed","description":"Successfully completed deleting account. Account Id: 45","event":"ACCOUNT.DELETE","entityuuid":"e5e2db91-414d-484c-99d5-c4e265c14ad8","entity":"com.cloud.user.Account","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 344
payload_encoding: string
redelivered: True
--------------------------------------------------------------------------------
routing_key: management-server.AsyncJobEvent.complete.Account.e5e2db91-414d-484c-99d5-c4e265c14ad8
exchange: cloudstack-events
message_count: 12
payload: {"cmdInfo":"{\"id\":\"e5e2db91-414d-484c-99d5-c4e265c14ad8\",\"response\":\"json\",\"sessionkey\":\"8AJVbn8HIpg5LZ_VaVfSPs_QN2k\",\"ctxDetails\":\"{\\\"interface com.cloud.user.Account\\\":\\\"e5e2db91-414d-484c-99d5-c4e265c14ad8\\\"}\",\"cmdEventType\":\"ACCOUNT.DELETE\",\"expires\":\"2015-09-07T12:17:42+0000\",\"ctxUserId\":\"2\",\"signatureversion\":\"3\",\"httpmethod\":\"GET\",\"uuid\":\"e5e2db91-414d-484c-99d5-c4e265c14ad8\",\"ctxAccountId\":\"2\",\"ctxStartEventId\":\"465\"}","instanceType":"Account","instanceUuid":"e5e2db91-414d-484c-99d5-c4e265c14ad8","jobId":"0bb08486-6d9f-4e9f-bfef-b7463c42e71b","status":"SUCCEEDED","processStatus":"0","commandEventType":"ACCOUNT.DELETE","resultCode":"0","command":"org.apache.cloudstack.api.command.admin.account.DeleteAccountCmd","jobResult":"org.apache.cloudstack.api.response.SuccessResponse/null/{\"success\":true}","account":"bd73dc2e-35c0-11e5-b094-d4ae52cb9af0","user":"bd7ea748-35c0-11e5-b094-d4ae52cb9af0"}
payload_bytes: 968
payload_encoding: string
redelivered: True
--------------------------------------------------------------------------------
* pr/782:
CLOUDSTACK-8816 Systemvm reboot event doesnt have uuids. Fixed the same
CLOUDSTACK-8816: Project UUID is not showing for some of operations in RabbitMQ.
CLOUDSTACK-8816: entity uuid missing in create network event
CLOUDSTACK-8816: instance uuid is missing in events for delete account
CLOUDSTACK-8816 Fixed entityUuid missing in some cases is events
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
CLOUDSTACK-8822 - Replacing Runnable by Callable
That's the first part of the refactor, which will touch the ManagedContextRunnable and all its subclasses. However, in order to reduce impact, the first part comprises the Agent and Nio related classes (NioConnection, NioServer and NioClient).
* All the sub-classes were also updated according to the changes in the super-classes
* Improved exception handling
* There were also code formatting changes
Changes were structural and the NioTest covered them without the need to modify the unit test.
This PR is quite extensive. Please, wait for the Test Report in order to proceed with further review.
ping @remibergsma @miguelaferreira @bhaisaab @karuturi @wido @DaanHoogland @borisroman @K0zka
Cheers,
Wilder
* pr/805:
CLOUDSTACK-8822 - Replacing Runnable by Callable in the Taks and NioConnection classes
Signed-off-by: wilderrodrigues <wrodrigues@schubergphilis.com>
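For readers unfamiliar with the difference, a small self-contained example of why Callable helps here: it can throw checked exceptions and return a result, so failures surface through the Future instead of being swallowed inside the task (generic Java, not CloudStack code):
```
// Unlike Runnable.run(), Callable.call() may throw and may return a value;
// the caller observes both through the Future.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableVsRunnable {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Callable<Boolean> task = () -> {
            // work that may throw; any exception is captured by the Future
            return true;
        };
        Future<Boolean> result = pool.submit(task);
        System.out.println("task succeeded: " + result.get());  // get() rethrows any failure
        pool.shutdown();
    }
}
```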
Guys, can you review it? A few things need to be discussed:
(1) this supports KVM/QCOW2 only. Anyone want to implement for other Hypervisor/format ?
(2) The original data volume (on primary storage) will be removed.
(3) The script uses the default timeout in libvirtComputingResource. Do we need to add one in global configuration (like copy.volume.wait or backup.snapshot.wait, create.volume.from.snapshot.wait)
(4) In scripts/storage/qcow2/managesnapshot.sh, I use "qemu-img convert -f qcow2 -O qcow2" to copy the snapshot from secondary to primary (hence there is no base image file), instead of "cp -f"; this is because convert was faster than cp in my testing.
* pr/732:
CLOUDSTACK-5863: revert volume snapshot for KVM/QCOW2
Signed-off-by: Wei Zhou <w.zhou@tech.leaseweb.com>
This reverts commit cd7218e241, reversing
changes made to f5a7395cc2.
Reason for Revert:
noredist build failed with the below error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on project cloud-plugin-hypervisor-vmware: Compilation failure
[ERROR] /home/jenkins/acs/workspace/build-master-noredist/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[484,12] error: non-static variable logger cannot be referenced from a static context
[ERROR] -> [Help 1]
even the normal build is broken as reported by @koushik-das on dev list
http://markmail.org/message/nngimssuzkj5gpbz
CLOUDSTACK-8773 : NPE in CheckRouterTask, when a DomainRouter happens to be expunged at the same time
* pr/745:
CLOUDSTACK-8773 : NPE in CheckRouterTask, when a DomainRouter happens to be expunged at the same time
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
CLOUDSTACK-8754: VM migration triggered by dynamic scaling is failing
This is caused by a serialization failure for the VmWorkMigrateForScale object. Replaced the DeployDestination member present in VmWorkMigrateForScale with serializable types.
Haven't added any unit test as I couldn't find a clean way to do it. Simply adding a test to do (de)serialization won't help catch any new member addition to the type that breaks serializability.
* pr/725:
CLOUDSTACK-8754: VM migration triggered by dynamic scaling is failing This is caused by serialization failure for VmWorkMigrateForScale object. Replaced DeployDestination member present in VmWorkMigrateForScale with serializable types.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This patch makes it possible to expose VM UUID to subsystems, this can be
useful for implementing VM Snapshots for KVM in future.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fix for the NicVO.java regression.
Renamed set*() methods to correct naming.
* pr/726:
Fix for the NicVO.java regression.
Signed-off-by: Remi Bergsma <github@remi.nl>
This is caused by serialization failure for VmWorkMigrateForScale object. Replaced DeployDestination member
present in VmWorkMigrateForScale with serializable types.
This reverts commit 180afe52e5.
This broke the build on master:
master build broken with the below error http://jenkins.buildacloud.org/job/build-master-slowbuild/2101/consoleText
```
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/jenkins/acs/workspace/build-master-slowbuild/plugins/hypervisors/xenserver/test/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixRequestWrapperTest.java:[1581,51] error: constructor CreateVMSnapshotCommand in class CreateVMSnapshotCommand cannot be applied to given types;
[ERROR] required: String,String,VMSnapshotTO,List<VolumeObjectTO>,String
found: String,VMSnapshotTO,List<VolumeObjectTO>,String
reason: actual and formal argument lists differ in length
/home/jenkins/acs/workspace/build-master-slowbuild/plugins/hypervisors/xenserver/test/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixRequestWrapperTest.java:[1623,53] error: no suitable constructor found for RevertToVMSnapshotCommand(String,VMSnapshotTO,List<VolumeObjectTO>,String)
[INFO] 2 errors
[INFO] -------------------------------------------------------------
```
This was PR #717 towards 4.5
This patch makes it possible to expose VM UUID to subsystems, this can be
useful for implementing VM Snapshots for KVM in future.
This was PR #717 towards 4.5
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 0062ff2672)
Signed-off-by: Remi Bergsma <github@remi.nl>
Changed methodnames according to Nic.java refactor.
Fixed NicVO.java due to regression from Nic.java refactor.
Fixed VmWareGuru.java after Nic.java refactor.
See issue CLOUDSTACK-8736 for ongoing effort to clean up network code.
Cloudstack 8656: do away with more silently ignoring exceptions.
A lot of messages added.
Some restructuring for test exception assertions and try-with-resource blocks.
* pr/654: (29 commits)
CLOUDSTACK-8656: more logging instead of sysout
CLOUDSTACK-8656: use catch block for validation
CLOUDSTACK-8656: class in json specified not found
CLOUDSTACK-8656: removed unused classes
CLOUDSTACK-8656: restructure of tests
CLOUDSTACK-8656: reorganise sychronized block
CLOUDSTACK-8656: restructure tests to ensure exception throwing
CLOUDSTACK-8656: validate the throwing of ServerApiException
CLOUDSTACK-8656: logging ignored exceptions
CLOUDSTACK-8656: try-w-r removes need for empty catch block
CLOUDSTACK-8656: try-w-r instead of clunckey close-except
CLOUDSTACK-8656: deal with empty SQLException catch block by try-w-r
CLOUDSTACK-8656: unnecessary close construct removed
CLOUDSTACK-8656: message about timed buffer logging
CLOUDSTACK-8656: message about invalid number from store
CLOUDSTACK-8656: move cli test tool to separate file
CLOUDSTACK-8656: exception is the rule for some tests
CLOUDSTACK-8656: network related exception logging
CLOUDSTACK-8656: reporting ignored exceptions in server
CLOUDSTACK-8656: log in case we are on a platform not supporting UTF8
...
Signed-off-by: Remi Bergsma <github@remi.nl>
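A generic example of the try-with-resources pattern applied throughout these commits (not taken from the CloudStack code base): the statement and result set are closed automatically, and the SQLException is logged rather than silently ignored:
```
// Try-with-resources removes the old empty catch blocks around close() and makes
// real failures visible instead of swallowing them.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TryWithResourceExample {
    static long countHosts(Connection conn) {
        try (PreparedStatement pstmt = conn.prepareStatement("SELECT COUNT(*) FROM host");
             ResultSet rs = pstmt.executeQuery()) {
            return rs.next() ? rs.getLong(1) : 0L;
        } catch (SQLException e) {
            // log instead of silently ignoring
            System.err.println("Unable to count hosts: " + e.getMessage());
            return 0L;
        }
    }
}
```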
* pr/547:
CLOUDSTACK-8601. VMFS storage added as local storage can be re-added as shared storage. Fail addition of a VMFS shared storage pool in case it has already been added as local storage in CS.
Signed-off-by: Mike Tutkowski <mike.tutkowski@solidfire.com>
* pr/649:
CLOUDSTACK-8656: checkstyle no longer used import removed
CLOUDSTACK-8656: messages on SQL exception in DbUtils!
CLOUDSTACK-8656: replace empty catch block on close by try-with-resource
CLOUDSTACK-8656: 30x legacy upgrade code exception messages
CLOUDSTACK-8656: removed redundant implements
CLOUDSTACK-8656: silent close failure of clustering socket log as info
CLOUDSTACK-8656: try with resource to eliminate empty catch clauses
CLOUDSTACK-8656: log messages on exception in legacy sql upgrade code
CLOUDSTACK-8656: removed unused input stream: there was code to close a stream that was never created
CLOUDSTACK-8656: info on error closing peering channels
CLOUDSTACK-8656: messages on errors closing streams for local templates
CLOUDSTACK-8656: handle template properties loading
Signed-off-by: Daan Hoogland <daan@onecht.net>
* pr/603:
coverity: try-with-resource and restructure in upgrade datacenter
extra try-w-r
coverity issues in old upgrade code
Signed-off-by: Daan Hoogland <daan@onecht.net>
* pr/604:
coverity 1116563: resource count leak for accounts
coverity 1116562: resource count resource leak
coverity 1116612: update network cidrs firewall rules and acls
coverity 1116610: upgrade cluster overprovisioning details
coverity 1212194: reuse of prepared statements in try-block and of course have them autoclosed
coverity 1225199: vmware dc upgrade
coverity 1288575: replace all close with try-with-resource not strictly necessary in all but one case. done consequently.
Signed-off-by: Daan Hoogland <daan@onecht.net>