Increases the allowed max heap and PermGen memory flags for the maven-surefire plugin.
This fixes unit test failures in cloud-server.
(cherry picked from commit 54d6d11c16)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Updating pom.xml version numbers for release 4.6.2-SNAPSHOT
Set next version in 4.6 release branch to version 4.6.2-SNAPSHOT.
Using `./tools/build/setnextversion.sh`.
Ping @bhaisaab @DaanHoogland before we merge this: how will we be creating the upgrade paths from 4.6.2 to 4.7? After this PR is merged, we need to manually do a forward-merge and make sure we keep the pom versions in master/4.7, much like in #1071.
* pr/1186:
Fixed typo in iam/pom.xml
Updating pom.xml version numbers for release 4.6.2-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
[4.6] CLOUDSTACK-4787 - vmware disk controllers
Same as #1131 (see that PR for screenshots etc.)
* pr/1132:
CLOUDSTACK-4787: Allow users to select disk controller for VM/template
CLOUDSTACK-4787 Allow selection of scsi controller type in vSphere
Signed-off-by: Daan Hoogland <daan@onecht.net>
- Changed the NetworkTopologyContext class just to make the private member accessible from the test
- Added a test class to cover the positive scenario of the VpcVirtualRouterElementTest.applyVpnUsers() method.
- Covering when there is either no VPC or no routers.
- Checks the result of a call against the previous result. Either both are true or the method returns false.
- Do not throw exceptions, because some calls are not handling/rethrowing them; it would cause runtime problems.
- Calling list.addAll(Arrays.asList(new String[]{})) will cause problems when trying to cast the list.toArray() result into an array of String.
It would only work if, instead of calling addAll(), I passed it straight into the constructor:
e.g. List<String> l = new ArrayList<>(Arrays.asList(new String[]{}));
String[] s = (String[]) l.toArray();
But I did not like that implementation because it would require 2 arrays of String and combining them at the end.
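For reference, a minimal sketch of the typed toArray(T[]) overload, which sidesteps the Object[] cast problem entirely (standalone demo, not the CloudStack code):
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ToArrayDemo {
    public static void main(String[] args) {
        List<String> l = new ArrayList<String>(Arrays.asList("a", "b"));
        l.addAll(Arrays.asList(new String[] {"c"}));
        // The typed overload fills (or allocates) a String[] directly,
        // so no cast from Object[] is needed and no ClassCastException occurs.
        String[] s = l.toArray(new String[0]);
        System.out.println(Arrays.toString(s)); // [a, b, c]
    }
}
```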
- Use the router to retrieve the instance ID
- Check if the VPC is redundant in order to reuse the private gateway address.
- Brings the private gateway interfaces up.
- It was causing problems because NICs were expected to be plugged before they actually existed; this happened only in rVPC cases.
- Applies ACL items to routers only after the Pvt GW is set up.
CLOUDSTACK-9025: Fixed can't create usable template from snapshot in Xenserver and Vmware
https://issues.apache.org/jira/browse/CLOUDSTACK-9025
The fix also reverts the commit below, as that solution made assumptions about the hypervisor which are not applicable in the case of XenServer and VMware.
Revert "CLOUDSTACK-8964: Can't create template or volume from snapshot"
This reverts commit ccf5d75cfb.
Testing:
Able to deploy a VM successfully from a template created from a linked-clone snapshot on XenServer.
* pr/1176:
CLOUDSTACK-9025: Fixed can't create usable template from snapshot in Xenserver and Vmware
Signed-off-by: Daan Hoogland <daan@onecht.net>
CLOUDSTACK-9101: fix some issues in resize volume
(1) Fix issue: volume size is not updated even if the operation succeeds
(2) Add UI support for root volume resize
(3) Resize of a qcow2-type ROOT volume of a stopped VM does not really work
see https://issues.apache.org/jira/browse/CLOUDSTACK-9101
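For context, a minimal sketch of the offline resize operation on KVM (an assumption about the mechanism: qemu-img can grow a qcow2 image while the VM is stopped; the path and helper are hypothetical, not the actual fix):
```java
import java.io.IOException;

class Qcow2ResizeDemo {
    // For a stopped VM the image can be grown offline with qemu-img;
    // a running VM would instead need a live block resize via libvirt.
    static void grow(String imagePath, long newSizeBytes) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("qemu-img", "resize", imagePath, Long.toString(newSizeBytes))
                .inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("qemu-img resize failed for " + imagePath);
        }
    }
}
```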
* pr/1161:
CLOUDSTACK-9101: resize root volume of stopped vm on KVM
CLOUDSTACK-9101: add UI support for root volume resize
CLOUDSTACK-9101: update volume size after resizevolume
Signed-off-by: Daan Hoogland <daan@onecht.net>
The fix also reverts the commit below, as that solution made assumptions about the hypervisor which are not applicable in the case of XenServer and VMware.
Revert "CLOUDSTACK-8964: Can't create template or volume from snapshot"
This reverts commit ccf5d75cfb.
CLOUDSTACK-6276 Fixing affinity groups for projects
With some contributions from @resmo and @ustcweizhou.
This closes https://github.com/apache/cloudstack/pull/508
To test manually (need at least 2 hosts):
Create a project
Create an affinity group in that project
Deploy a vm with that affinity group
Deploy a second vm with that affinity group
They should be on different hosts
Ran old and new tests for affinity groups on the simulator
Test create affinity group as admin in project ... === TestName: test_01_admin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as domain admin for projects ... === TestName: test_02_doadmin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as user for projects ... === TestName: test_03_user_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) for projects ... === TestName: test_4_user_create_aff_grp_existing_name_for_project | Status : SUCCESS ===
ok
#Delete Affinity Group by id. ... === TestName: test_01_delete_aff_grp_by_id | Status : SUCCESS ===
ok
#Delete Affinity Group by id should fail for user not in project ... === TestName: test_02_delete_aff_grp_by_id_another_user | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_01_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups with more vms than hosts. ... === TestName: test_02_deploy_vm_anti_affinity_group_fail_on_not_enough_hosts | Status : SUCCESS ===
ok
List affinity group for a vm for projects ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm for projects ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id for projects ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name for projects ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id for projects ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name for projects ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group for projects ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 16 tests in 581.706s
OK
Deploy vm as Admin in Affinity Group belonging to regular user (should fail) ... === TestName: test_01_deploy_vm_another_user | Status : SUCCESS ===
ok
Create Affinity Group as admin for regular user ... === TestName: test_02_create_aff_grp_user | Status : SUCCESS ===
ok
List Affinity Groups as admin for all the users ... === TestName: test_03_list_aff_grp_all_users | Status : SUCCESS ===
ok
List Affinity Groups belonging to admin user ... === TestName: test_04_list_all_admin_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing account id and domain id ... === TestName: test_05_list_all_users_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing group id ... === TestName: test_06_list_all_users_aff_grp_by_id | Status : SUCCESS ===
ok
Delete Affinity Group belonging to regular user ... === TestName: test_07_delete_aff_grp_of_other_user | Status : SUCCESS ===
ok
Test create affinity group as admin ... === TestName: test_01_admin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as domain admin ... === TestName: test_02_doadmin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as user ... === TestName: test_03_user_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) ... === TestName: test_04_user_create_aff_grp_existing_name | Status : SUCCESS ===
ok
Test create affinity group with existing name but within different account ... === TestName: test_05_create_aff_grp_same_name_diff_acc | Status : SUCCESS ===
ok
Test create affinity group of non-existing type ... === TestName: test_06_create_aff_grp_nonexisting_type | Status : SUCCESS ===
ok
Delete Affinity Group by name ... === TestName: test_01_delete_aff_grp_by_name | Status : SUCCESS ===
ok
Delete Affinity Group as admin for an account ... === TestName: test_02_delete_aff_grp_for_acc | Status : SUCCESS ===
ok
Delete Affinity Group which has vms in it ... === TestName: test_03_delete_aff_grp_with_vms | Status : SUCCESS ===
ok
Delete Affinity Group with id which does not belong to this user ... === TestName: test_05_delete_aff_grp_id | Status : SUCCESS ===
ok
Delete Affinity Group by name which does not belong to this user ... === TestName: test_06_delete_aff_grp_name | Status : SUCCESS ===
ok
Delete Affinity Group by id. ... === TestName: test_08_delete_aff_grp_by_id | Status : SUCCESS ===
ok
Root admin should be able to delete affinity group of other users ... === TestName: test_09_delete_aff_grp_root_admin | Status : SUCCESS ===
ok
Deploy VM without affinity group ... === TestName: test_01_deploy_vm_without_aff_grp | Status : SUCCESS ===
ok
Deploy VM by aff grp name ... === TestName: test_02_deploy_vm_by_aff_grp_name | Status : SUCCESS ===
ok
Deploy VM by aff grp id ... === TestName: test_03_deploy_vm_by_aff_grp_id | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_04_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
Deploy vms by affinity group id ... === TestName: test_05_deploy_vm_by_id | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by name ... === TestName: test_06_deploy_vm_aff_grp_of_other_user_by_name | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by id ... === TestName: test_07_deploy_vm_aff_grp_of_other_user_by_id | Status : SUCCESS ===
ok
Deploy vm in multiple affinity groups ... === TestName: test_08_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy multiple vms in multiple affinity groups ... === TestName: test_09_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy VM by aff grp name and id ... === TestName: test_10_deploy_vm_by_aff_grp_name_and_id | Status : SUCCESS ===
ok
List affinity group for a vm ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupnames ... === TestName: test_02_update_aff_grp_by_names | Status : SUCCESS ===
ok
Update the list of affinityGroups for vm which is not associated ... === TestName: test_03_update_aff_grp_for_vm_with_no_aff_grp | Status : SUCCESS ===
ok
Update the list of Affinity Groups to empty list ... SKIP: Skip - Failing - work in progress
Update the list of Affinity Groups on running vm ... === TestName: test_05_update_aff_grp_on_running_vm | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 42 tests in 976.432s
OK (SKIP=1)
* pr/1134:
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
Signed-off-by: Remi Bergsma <github@remi.nl>
[4.6] Cannot list vlanipranges by keyword
Before change:
cloudmonkey> list vlanipranges keyword=118
: Caught: com.mysql.jdbc.JDBC4PreparedStatement@18f36b6e: SELECT vlan.id, vlan.vlan_id, vlan.vlan_gateway, vlan.vlan_netmask, vlan.ip6_gateway, vlan.ip6_cidr, vlan.data_center_id, vlan.description, vlan.ip6_range, vlan.network_id, vlan.physical_network_id, vlan.vlan_type, vlan.uuid, vlan.removed, vlan.created FROM vlan WHERE ( OR vlan.description LIKE ** NOT SPECIFIED ** ) AND vlan.removed IS NULL ORDER BY vlan.id ASC LIMIT 0, 500
After change:
cloudmonkey> list vlanipranges keyword='118'
count = 1
vlaniprange:
id = 0d80fd9c-cd6b-4f99-96c6-261420e75f58
account = system
domain = ROOT
domainid = 2044762d-c4a5-11e3-8379-005056ac4490
......
* pr/1085:
Cannot list vlanipranges by keyword
Signed-off-by: Remi Bergsma <github@remi.nl>
- Adds new controller types in the UI for selecting the root disk controller while
registering templates
- Fixes a bug so as not to override the disk controller type if provided in the details (either
VM details or template details)
(cherry picked from commit c7d67628b3)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
commit #7
So far only one controller (SCSI or IDE) is supported in CloudStack; this is an existing
limitation. Added support for a 2nd IDE controller. Support adding an IDE
virtual disk to a VM. Also added a check for whether the VM is running, as an IDE virtual disk cannot be attached
to a VM while it is running. If a user detaches a virtual disk on a lower unit number of a controller,
then a subsequent attach operation should find a free unit number on the controller and attach
the virtual disk there.
commit #6
Let the controllers of existing VMs continue without flipping: the current busInfo retrieved from
the chain_info field of the volume record in the database is preferred over
controller settings from any configuration setting.
commit #5
Editing the global configuration param vmware.root.disk.controller to the osdefault value results
in loss of the previous root disk controller type. Hence the root disk's controller type for legacy
VMs is unknown after that modification by the user. If the VM is stopped/started then we could get this
information from the bus info of the existing volume. But if the user resets the VM and then tries to start it,
the existing bus info would be lost, so the existing disk info is not available to depend on.
Use lsilogic or a generic SCSI controller for the ROOT disk of legacy VMs if reset.
commit #4
Avoid adding additional (>1) SCSI controllers to system VMs. While attaching a volume to a legacy VM,
don't use the osdefault option, which is applicable only to VMs created with the option enabled; use the
legacy data disk controller type (lsilogic).
commit #3
If the root disk's controller type is SCSI and the data disk controller type resolves
to any of the SCSI sub-types, then the data disk controller type falls back to the root disk controller itself. This
ensures data volumes are accessible in all cases, as the controller of the root volume is reliable,
which means the VM has a supported controller. It also avoids mixing SCSI controller sub-types in a user instance.
Also translates the disk controller type scsi to lsilogic.
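A minimal sketch of that fallback rule (hypothetical helper; the sub-type names are the usual vSphere controller types, not the actual CloudStack code):
```java
import java.util.Arrays;
import java.util.List;

class ControllerFallbackDemo {
    static final List<String> SCSI_SUBTYPES =
            Arrays.asList("lsilogic", "lsisas1068", "buslogic", "pvscsi");

    // If the root disk uses a SCSI sub-type and the data disk also resolves
    // to one, reuse the root disk's controller so sub-types never mix.
    static String dataDiskController(String rootType, String dataType) {
        if ("scsi".equals(dataType)) {
            dataType = "lsilogic"; // translate generic scsi to lsilogic
        }
        if (SCSI_SUBTYPES.contains(rootType) && SCSI_SUBTYPES.contains(dataType)) {
            return rootType;
        }
        return dataType;
    }
}
```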
commit #2
Support auto detection of the recommended virtual disk controller type for a specific guest OS.
commit #1
Support granular controller types. Add support for controller types in template registration as well.
Fix white spaces.
Removed stale HEAD merge lines
Removed tail of merge lines
Fixed VmwareResource, removing storage commands that moved to VmwareStorageProcessor.
Removed stale controller code that is present in the processor.
Fixed check style errors.
Fixed injection.
Tested with Linux and Windows templates. Unable to run ISO-based tests due to a few bugs in the register ISO area.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
(cherry picked from commit a4cc987a6f)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The fix for CLOUDSTACK-8816 introduced a regression that removed the
first-class entities in the event bus description property. This is
because everything was changed to use the Class as a key... everything
but the populateFirstClassEntities method in ActionEventUtils.
CLOUDSTACK-9006 - ListTemplates API returns result in inconsistent order when called concurrently
The order of templates returned in the response is based on a field called sortkey, and by default the value for the field is set to 0.
With more than 1000 templates, we tried listing the templates with different page sizes concurrently, and we noticed the results being inconsistent.
Thus we added a secondary ORDER BY clause on the tempZonePair column to the list templates call to make sure the results are consistent.
The addOrderBy method of the Filter class was also not appending a comma if we added more ORDER BY clauses.
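A minimal sketch of why the secondary key stabilizes paging (standalone demo with a hypothetical Template class, not the CloudStack Filter API):
```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class StableOrderingDemo {
    static class Template {
        final long id;      // unique tie-breaker, playing the role of tempZonePair
        final int sortKey;  // defaults to 0 for most templates
        Template(long id, int sortKey) { this.id = id; this.sortKey = sortKey; }
    }

    // With sortKey alone, rows sharing the default 0 may come back in any
    // order, so concurrent callers can see different pages. A unique
    // secondary key makes the ordering total and the pages repeatable.
    static void sortStably(List<Template> templates) {
        Collections.sort(templates, new Comparator<Template>() {
            public int compare(Template a, Template b) {
                int bySortKey = Integer.compare(a.sortKey, b.sortKey);
                return bySortKey != 0 ? bySortKey : Long.compare(a.id, b.id);
            }
        });
    }
}
```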
* pr/1009:
CLOUDSTACK-9006 - ListTemplates API returns result in inconsistent order when called concurrently
CLOUDSTACK-9006 - ListTemplates API returns result in inconsistent order when called concurrently
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8902 Restart Network fails in EIP/ELB zone
The restart network was failing when using an external load balancer. The failure was because of a NumberFormatException: when BroadcastDomainType.getValue(guestConfig.getBroadcastUri()) is executed, it returns the string "untagged", and we were trying to parse it as a long.
This happens only when the vlan uri is vlan://untagged; in other cases, where there is a number (a VLAN tag) instead of untagged, this used to succeed. Although we were converting the number to long, we were not really using it: we converted it to long and then back to string when creating the IpAddressTO. So I removed this unnecessary conversion to fix the issue at hand.
I did a manual restart of the network and checked for this NumberFormatException in an EIP/ELB setup.
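A minimal sketch of the failure mode (standalone demo; the broadcast value is taken from the vlan://untagged case above):
```java
public class VlanParseDemo {
    public static void main(String[] args) {
        String broadcastValue = "untagged"; // value behind vlan://untagged
        try {
            long vlanTag = Long.parseLong(broadcastValue);
            System.out.println("tag = " + vlanTag);
        } catch (NumberFormatException e) {
            // Exactly what broke the restart: the value is only numeric
            // for tagged VLANs (e.g. vlan://118).
            System.out.println("not numeric: " + broadcastValue);
        }
    }
}
```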
* pr/898:
CLOUDSTACK-8902 Restart Network fails in EIP/ELB zone
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9016 Fail to create VM instance within VPC
Bug link: https://issues.apache.org/jira/browse/CLOUDSTACK-9016.
CS does not allocate an IP of the form x.x.x.1 to a guest VM. We seem to incorrectly assume that the first IP in the subnet belongs to the gateway.
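A minimal sketch of the corrected assumption (hypothetical helper names, not the actual CloudStack code):
```java
class GatewayCheckDemo {
    // Wrong: skip any address ending in .1, assuming it is the gateway.
    static boolean isGatewayOld(String ip) {
        return ip.endsWith(".1");
    }

    // Right: only skip the address actually configured as the gateway,
    // so x.x.x.1 stays allocatable when the gateway is elsewhere.
    static boolean isGateway(String ip, String configuredGateway) {
        return ip.equals(configuredGateway);
    }
}
```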
* pr/1020:
CLOUDSTACK-9016: Deploy vm with gateway ip address in VPC
CLOUDSTACK-9016 Fail to create VM instance within VPC
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9002: VM deployment is successful even when dhcp entry command fails - Fixed
Reason: The return value of the call to the accept() function in the applyRules() function of BasicNetworkTopology.java was not checked for success or failure. As a result, even when it failed, no exception was thrown and VM deployment went ahead without any errors.
Fix: Added the necessary checks.
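A minimal sketch of the added check (hypothetical names, not the actual BasicNetworkTopology code):
```java
interface RuleVisitor {
    boolean accept(String routerName); // false means the command failed
}

class ApplyRulesDemo {
    static void applyRules(RuleVisitor visitor, String routerName) {
        boolean applied = visitor.accept(routerName);
        if (!applied) {
            // Without this check the deployment continued silently
            // even though the dhcp entry command had failed.
            throw new IllegalStateException("Failed to apply rules on " + routerName);
        }
    }
}
```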
* pr/995:
CLOUDSTACK-9002: VM deployment is successful even when dhcp entry command fails - Fixed
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8940: Wrong value is inserted into nics table netmask field when creating a VM - Fixed
Problem: When creating a VM in a shared network with no services, the netmask value is stored in the table in CIDR format, unlike other cases where it is stored as a plain string in the format xxx.xxx.xxx.xxx. The netmask column in the nics table has a length of 15 chars, which gets violated if the CIDR exceeds it (the max CIDR length can be 18).
Fix: Before storing the netmask, convert it from CIDR to native format.
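A minimal sketch of the conversion (an assumption: this mirrors the intent of the fix, not the actual CloudStack NetUtils code):
```java
public class CidrToNetmask {
    // Convert a prefix length (e.g. 24) to dotted-quad form (255.255.255.0),
    // which is at most 15 chars and therefore fits the nics.netmask column.
    static String netmaskFromPrefix(int prefixLength) {
        long mask = prefixLength == 0 ? 0 : (0xFFFFFFFFL << (32 - prefixLength)) & 0xFFFFFFFFL;
        return String.format("%d.%d.%d.%d",
                (mask >> 24) & 0xFF, (mask >> 16) & 0xFF,
                (mask >> 8) & 0xFF, mask & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(netmaskFromPrefix(24)); // 255.255.255.0
    }
}
```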
* pr/916:
CLOUDSTACK-8940: Wrong value is inserted into nics table netmask field when creating a VM - Fixed
Signed-off-by: Remi Bergsma <github@remi.nl>
Fixed: Network Update from RVR offering to Standalone offering fails
Problem: Moving an RVR network offering to standalone leaves the status of the VRs as UNKNOWN and the Redundant Router flag marked YES.
Fix: The network's isRedundant flag was not getting updated.
* pr/818:
CLOUDSTACK-8844: Network Update from RVR offering to Standalone offering fails - Fixed
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8793 Enable s2s VPN connection for projects
* pr/879:
CLOUDSTACK-8793 Added project id to create vpn customer gateway, and to the impl of list vpn connections and list vpn customer gateways
Signed-off-by: Remi Bergsma <github@remi.nl>
Pass LbProtocol down to the HAProxyConfigurator
This will let us specify a new load balancer protocol (tcp-proxy) which enables HAProxy's `send-proxy` functionality.
`send-proxy` / [the PROXY protocol][1] will send the real connection origin IP through to the servers behind HAProxy, without requiring any protocol-specific changes (such as HTTP header rewriting).
[1]: http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt
This is also in line with what [Amazon ELB now supports][2].
[2]: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
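For illustration, what a PROXY protocol v1 header looks like on the wire, per the spec linked above (hypothetical addresses):
```java
public class ProxyProtocolDemo {
    public static void main(String[] args) {
        // The proxy prepends one text line before the client's bytes:
        // PROXY <TCP4|TCP6> <src-ip> <dst-ip> <src-port> <dst-port>\r\n
        String header = "PROXY TCP4 203.0.113.7 10.0.0.5 56324 80\r\n";
        String[] parts = header.trim().split(" ");
        // The backend reads the real client address without any
        // HTTP-specific rewriting.
        System.out.println("real client ip = " + parts[2]); // 203.0.113.7
    }
}
```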
* pr/848:
Pass LbProtocol down to the HAProxyConfigurator
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8962: Dedicated cluster is used for virtual routers that belong to non-dedicated account
Earlier, the deployment planner was not handling the case of virtual routers (in explicit dedication);
it was only handling instance VMs/user VMs.
Added code to check the case of virtual routers.
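A minimal sketch of the idea (hypothetical types; the real planner works on host/cluster candidates):
```java
class DedicationCheckDemo {
    enum VmType { USER, DOMAIN_ROUTER }

    // Earlier only user VMs were checked against explicit dedication, so a
    // router owned by a non-dedicated account could land on a dedicated
    // cluster. Routers must now pass the same ownership check.
    static boolean mustAvoidDedicatedCluster(VmType type, long ownerAccountId,
                                             long dedicatedToAccountId) {
        boolean checked = (type == VmType.USER || type == VmType.DOMAIN_ROUTER);
        return checked && ownerAccountId != dedicatedToAccountId;
    }
}
```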
* pr/945:
CLOUDSTACK-8962: Dedicated cluster is used for virtual routers that belong to non-dedicated account
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8911: VM start job got stuck in loop looking for suitable host
The VM instance creation job gets stuck in a loop when VMs require local storage, some hosts have reached the max guest limit, and the remaining hosts do not have enough storage available. This happens because the hosts that reached the max guest limit were not getting added to the avoid list, and hence neither was the cluster (see the sketch after the repro steps).
Verified the fix on my local setup.
Repro steps:
1. Take an environment with a single cluster and 2 hosts.
2. Change the max guest limit for the hypervisor such that one host reaches the max guest limit.
3. Change thresholds so that the other host does not have enough storage. If required, create a VM with a sufficiently big disk.
4. Now deploy a VM with local storage.
5. The cluster will never be put in the avoid set and the job will keep looking for a suitable host.
6. Once we increase the max guest limit, the VM will deploy, or will fail if there is a lack of storage.
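A minimal sketch of the avoid-set bookkeeping described above (hypothetical names, not the actual planner code):
```java
import java.util.HashSet;
import java.util.Set;

class AvoidSetDemo {
    final Set<Long> avoidHosts = new HashSet<Long>();
    final Set<Long> avoidClusters = new HashSet<Long>();

    // The fix: a host that hit its max guest limit must also enter the
    // avoid set, so that once every host of a cluster is avoided the
    // cluster is avoided too and the planner loop can terminate.
    void markHostUnusable(long hostId, long clusterId, Set<Long> clusterHosts) {
        avoidHosts.add(hostId);
        if (avoidHosts.containsAll(clusterHosts)) {
            avoidClusters.add(clusterId);
        }
    }
}
```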
* pr/895:
CLOUDSTACK-8911: VM start job got stuck in loop looking for suitable host
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8816 Some of the events do not have resource uuids
The key objects in the context map are sometimes Strings and sometimes objects. This causes missing uuids when an entity put in the context map with key entity.toString() is queried with key entity.
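A minimal sketch of the key mismatch (standalone demo, not the actual CallContext code):
```java
import java.util.HashMap;
import java.util.Map;

public class ContextKeyDemo {
    public static void main(String[] args) {
        Map<Object, Object> context = new HashMap<Object, Object>();
        Object entity = new Object();
        // Stored under entity.toString() ...
        context.put(entity.toString(), "uuid-1234");
        // ... but queried under the entity itself: the lookup misses
        // and the event goes out without its resource uuid.
        System.out.println(context.get(entity));            // null
        System.out.println(context.get(entity.toString())); // uuid-1234
    }
}
```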
Testing:
Manually tested by deploying a VM and checking that the created events in RabbitMQ now have uuids.
Events before and after the change are posted at https://issues.apache.org/jira/browse/CLOUDSTACK-8816?focusedCommentId=14805239
Unit tests:
```
$ mvn -pl :cloud-api test -Dtest=CallContextTest
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running org.apache.cloudstack.context.CallContextTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.152 sec - in org.apache.cloudstack.context.CallContextTest
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.445 s
[INFO] Finished at: 2015-09-18T14:58:53+05:30
[INFO] Final Memory: 55M/448M
[INFO] ------------------------------------------------------------------------
```
* pr/849:
CLOUDSTACK-8816 added missing events
CLOUDSTACK-8816: fixed missing resource uuid in delete network cmd
CLOUDSTACK-8816: fixed missing resource uuid in destroy vm event
Cloudstack-8816: Fixed missing resource uuid in delete snapshot events
CLOUDSTACK-8816: some of the events do not have resource uuids
CLOUDSTACK-8816: some of the events do not have resource uuids
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8917 : Instance tab takes a long time to load with 12K VMs
Modified the SQL that is used for retrieving the VM count.
In the load test environment, listVirtualMachines takes 8-11 sec to load. This environment has around 12K active VMs. The total number of rows is 190K.
The performance bottleneck in the listVirtualMachines command is fetching the count and the distinct VMs.
{noformat}
// search vm details by ids
Pair<List<UserVmJoinVO>, Integer> uniqueVmPair = _userVmJoinDao.searchAndCount(sc, searchFilter);
Integer count = uniqueVmPair.second();
{noformat}
This takes 95% of the total time.
To fetch the count and the distinct VMs we are using the SQLs below.
Query 1:
{noformat}
SELECT DISTINCT(user_vm_view.id) FROM user_vm_view WHERE user_vm_view.account_type != 5 AND user_vm_view.display_vm = 1 AND user_vm_view.removed IS NULL ORDER BY user_vm_view.id ASC LIMIT 0, 20
{noformat}
Query 2:
select count(distinct id) from user_vm_view WHERE user_vm_view.account_type != 5 AND user_vm_view.display_vm = 1 AND user_vm_view.removed IS NULL
Query 2 is a problematic query.
If we rewrite the query as mentioned below then it will be ~2x faster.
select count(*) from (select distinct id from user_vm_view WHERE user_vm_view.account_type != 5 AND user_vm_view.display_vm = 1 AND user_vm_view.removed IS NULL) as temp;
MySQL test results:
With 134 active VMs (total rows: 349)
mysql> select count(*) from vm_instance;
+----------+
| count(*) |
+----------+
| 349 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from user_vm_view;
+----------+
| count(*) |
+----------+
| 135 |
+----------+
1 row in set (0.02 sec)
mysql> select count(distinct id) from user_vm_view WHERE user_vm_view.account_type != 5 AND user_vm_view.display_vm = 1 AND user_vm_view.removed IS NULL;
+--------------------+
| count(distinct id) |
+--------------------+
| 134 |
+--------------------+
1 row in set (0.02 sec)
mysql> select count(*) from (select distinct id from user_vm_view WHERE user_vm_view.account_type != 5 AND user_vm_view.display_vm = 1 AND user_vm_view.removed IS NULL) as temp;
+----------+
| count(*) |
+----------+
| 134 |
+----------+
1 row in set (0.01 sec)
With 14326 active VMs (total rows: 195660)
mysql> select count(*) from vm_instance;
+----------+
| count(*) |
+----------+
| 195660 |
+----------+
1 row in set (0.04 sec)
mysql> select count(*) from user_vm_view;
+----------+
| count(*) |
+----------+
| 41313 |
+----------+
1 row in set (4.55 sec)
mysql> select count(distinct id) from user_vm_view WHERE user_vm_view.account_type != 5 AND user_vm_view.display_vm = 1 AND user_vm_view.removed IS NULL;
+--------------------+
| count(distinct id) |
+--------------------+
| 14326 |
+--------------------+
1 row in set (7.39 sec)
mysql> select count(*) from (select distinct id from user_vm_view WHERE user_vm_view.account_type != 5 AND user_vm_view.display_vm = 1 AND user_vm_view.removed IS NULL) as temp;
+----------+
| count(*) |
+----------+
| 14326 |
+----------+
1 row in set (2.08 sec)
UI test Results:
Before:

After:

* pr/894:
CLOUDSTACK-8917 : Instance tab takes long time to load with 12K active VM (total vms: 190K)
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>