This purges all cookies on logout, including multiple sessionkey
cookies if passed. On login, it restricts the sessionkey cookie
(httponly) to the / path.
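A minimal sketch of the behaviour described above, using the standard javax.servlet.http.Cookie API; the helper class and method names are illustrative, not the exact patch.

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SessionCookieHelper {

    // On login: restrict the sessionkey cookie to the "/" path and mark it httponly.
    public static void addSessionKeyCookie(HttpServletResponse resp, String sessionKey) {
        Cookie cookie = new Cookie("sessionkey", sessionKey);
        cookie.setPath("/");
        cookie.setHttpOnly(true);
        resp.addCookie(cookie);
    }

    // On logout: purge every cookie sent by the client, including duplicate sessionkey cookies.
    public static void purgeCookies(HttpServletRequest req, HttpServletResponse resp) {
        Cookie[] cookies = req.getCookies();
        if (cookies == null) {
            return;
        }
        for (Cookie cookie : cookies) {
            Cookie expired = new Cookie(cookie.getName(), "");
            expired.setPath("/");
            expired.setMaxAge(0); // max-age 0 tells the browser to delete the cookie
            resp.addCookie(expired);
        }
    }
}
```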
Fixes #4136
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This change will ensure that B&R APIs are not exported if the feature
is not enabled in any of the zones.
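A hedged sketch of what such a guard can look like in the manager's getCommands() method; the helper and command class names below are assumptions based on this description, not the actual diff.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: isBackupFrameworkEnabledInAnyZone() and the Cmd class names
// are assumptions, not the actual patch.
@Override
public List<Class<?>> getCommands() {
    final List<Class<?>> cmds = new ArrayList<>();
    if (!isBackupFrameworkEnabledInAnyZone()) {
        return cmds; // feature disabled in every zone: export no B&R APIs
    }
    cmds.add(ListBackupOfferingsCmd.class);
    cmds.add(CreateBackupCmd.class);
    // ... remaining Backup & Recovery command classes
    return cmds;
}
```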
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
While migrating a VM, hosts dedicated to other accounts/domains are also shown as 'Suitable' for migration in the popup, which is obviously wrong.
The same issue occurs with the findHostsForMigration API.
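A hedged sketch of the kind of filter findHostsForMigration could apply; the DAO and accessor names are assumptions.

```java
// Sketch: drop hosts that are dedicated to a different account/domain from the
// "suitable" list (DAO and accessor names are assumptions, not the actual patch).
List<HostVO> filterDedicatedHosts(List<HostVO> candidates, long vmAccountId, long vmDomainId) {
    List<HostVO> suitable = new ArrayList<>();
    for (HostVO host : candidates) {
        DedicatedResourceVO dedication = dedicatedResourceDao.findByHostId(host.getId());
        boolean dedicatedToOther = dedication != null
                && ((dedication.getAccountId() != null && dedication.getAccountId() != vmAccountId)
                    || (dedication.getAccountId() == null && dedication.getDomainId() != vmDomainId));
        if (!dedicatedToOther) {
            suitable.add(host);
        }
    }
    return suitable;
}
```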
If we resize a volume of a VM running on a host that is not Up or not Enabled, the job will be scheduled on another host. The volume is then resized with "qemu-img resize" instead of "virsh blockresize", and the image may be corrupted after the resize.
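To illustrate the distinction: the live path should go through "virsh blockresize" on the host that runs the VM, while "qemu-img resize" is only safe for an offline image. The wrapper below is a hedged sketch; the command strings are real, the surrounding method is illustrative.

```java
// Sketch: choose the resize mechanism based on whether the disk is attached to a
// running domain on this host.
String buildResizeCommand(boolean vmRunningOnThisHost, String domainName, String diskPath, long newSizeBytes) {
    if (vmRunningOnThisHost) {
        // live resize through libvirt, so the guest and QEMU stay consistent
        return String.format("virsh blockresize %s %s %dB", domainName, diskPath, newSizeBytes);
    }
    // offline image: only safe when no running QEMU process holds the file
    return String.format("qemu-img resize %s %d", diskPath, newSizeBytes);
}
```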
Adds missing fields to the following APIs (a response-field sketch follows the list):
osdisplayname in listVirtualMachines
vpcofferingname in listVpcs
vpcname in listPublicIpAddresses
vpcname in listPrivateGateways
vpcname in listVpnGateways
templatename, podname in listRouters
templatename, podname in listSystemVms
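Adding such a field typically means a new annotated member on the corresponding response class plus a setter called where the response is built. A hedged sketch for vpcname on the public IP response; the description text is an assumption, the annotation pattern is the standard CloudStack one.

```java
// Sketch of a response-class addition (pattern used by CloudStack response objects;
// the description text is an assumption).
@SerializedName("vpcname")
@Param(description = "VPC name of the IP address, if any")
private String vpcName;

public void setVpcName(String vpcName) {
    this.vpcName = vpcName;
}
```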
Fixes: #4161
Adds the listall parameter to listLdapConfigurations.
If set to true and no domainid is specified, all LDAP configurations are listed irrespective of the linked domain.
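A hedged sketch of how the parameter could be declared on the command class; the annotation pattern is the standard CloudStack one, the description string is an assumption.

```java
// Sketch: boolean listall parameter on ListLdapConfigurationsCmd
// (the description text is an assumption).
@Parameter(name = ApiConstants.LIST_ALL, type = CommandType.BOOLEAN,
           description = "If set to true and no domainid is specified, list all LDAP configurations irrespective of the linked domain")
private Boolean listAll;

public boolean listAll() {
    return listAll != null && listAll;
}
```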
Fixes the wrong count in the listAffinityGroup API, which was returning the count of AffinityGroupJoinVO records.
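The point of the fix, sketched in a hedged way (helper and accessor names are illustrative): the count should reflect distinct affinity groups, not the number of join rows.

```java
// Sketch: derive the group count from distinct affinity group ids rather than
// from the number of AffinityGroupJoinVO rows (names are illustrative).
int countDistinctGroups(List<AffinityGroupJoinVO> joinRecords) {
    Set<Long> groupIds = new HashSet<>();
    for (AffinityGroupJoinVO record : joinRecords) {
        groupIds.add(record.getId()); // multiple join rows can exist per group
    }
    return groupIds.size();
}
```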
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes an issue where an instance fails to deploy due to a null pointer when using an L2 Guest Network with DefaultL2NetworkOfferingConfigDrive on Xenserver. It also fixes migrating an instance to another host.
This has been tested by:
- Creating an L2 Guest network, using DefaultL2NetworkOfferingConfigDrive as the network offering.
- Deploying an instance using the L2 Guest network created.
- Migrating the instance away from the host and back
When a VPC is restarted, the networks in the VPC are not restarted. This PR adds logic to restart any networks in the VPC that need a restart when the VPC is restarted.
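A hedged sketch of the added logic; the DAO and service method names are assumptions, not the actual patch.

```java
// Sketch: when a VPC restart is requested, also restart the tier networks that
// are flagged as needing a restart (method names are assumptions).
private void restartVpcNetworks(long vpcId, boolean cleanup) throws Exception {
    for (NetworkVO network : networkDao.listByVpc(vpcId)) {
        if (network.isRestartRequired()) {
            networkService.restartNetwork(network.getId(), cleanup);
        }
    }
}
```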
Fixes #3816
The BackupSync task would switch between databases to update backup usage
metrics in the cloud_usage.usage_backup table. The current framework
and its usage in ManagedContext cause database connection
(LegacyTransaction) leaks. When the thread runs faster, the issue is
easily reproducible and can be confirmed via heap dump analysis or JMX
MBeans. This is fixed by moving the backup usage data update to the
usage server: usage events are published instead of switching between
databases in a local thread inside a ManagedContextRunnable.
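A hedged sketch of the new flow: the sync task publishes a usage event on the management-server side and lets the usage server persist cloud_usage.usage_backup, instead of opening a cloud_usage connection itself. The event type constant, accessor names, and argument order below are assumptions.

```java
// Sketch: publish a usage event from the BackupSync task instead of switching to
// the cloud_usage database (event constant, accessors and argument order are assumptions).
UsageEventUtils.publishUsageEvent(
        EventTypes.EVENT_VM_BACKUP_USAGE_METRIC,   // usage event type (assumed constant)
        backup.getAccountId(),
        backup.getZoneId(),
        backup.getVmId(),
        "Backup-Usage-" + backup.getVmId(),
        backup.getOfferingId(),
        null,                                      // template id, unused here
        backupSize,                                // payload the usage server turns into usage_backup rows
        Backup.class.getSimpleName(),
        backup.getUuid());
```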
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Root cause:
Even though the dynamic scaling job is handled in the VmWork job queue, which ensures serialization of multiple jobs, the database updates and the generation of usage events happen outside the job queue.
Solution:
Moved all updates into the job queue.
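A hedged sketch of the idea: perform the offering update and the usage event inside the queued VmWork handler so they serialize with other jobs on the same VM. Method and field names are assumptions, not the actual patch.

```java
// Sketch: do the service-offering update and the usage event inside the VmWork
// job handler, in one transaction (names are assumptions).
private Void orchestrateUpgradeVm(final VmWorkUpgrade work) {
    return Transaction.execute(new TransactionCallback<Void>() {
        @Override
        public Void doInTransaction(TransactionStatus status) {
            upgradeRunningVirtualMachine(work.getVmId(), work.getNewServiceOfferingId());
            publishDynamicScalingUsageEvent(work.getVmId(), work.getNewServiceOfferingId());
            return null;
        }
    });
}
```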
First, I tested all the scenarios to verify that nothing is broken:
- Scaling a running VM with a normal compute offering
- Scaling a stopped VM with a normal compute offering
- Scaling a running VM with a custom compute offering
- Scaling a stopped VM with a custom compute offering
- Scaling a stopped/running VM between a custom compute offering and a normal compute offering, and combinations among these; checked that the custom parameters are populated or deleted accordingly based on the offering the VM is scaled to
Since this is a corner scenario, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
Fixes#4090
When trying to migrate a VM across 2 clusters, if a snapshot has been deleted and garbage collection has run to update the removed field, it is not possible to migrate the instance due to a null pointer.
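A hedged sketch of the guard this implies: skip snapshots whose store reference has already been cleared by garbage collection before dereferencing it during the migration checks. DAO and accessor names are assumptions.

```java
// Sketch: guard against snapshots whose store reference was already garbage
// collected before dereferencing it (DAO/accessor names are assumptions).
for (SnapshotVO snapshot : snapshotDao.listByVolumeId(volume.getId())) {
    SnapshotDataStoreVO storeRef = snapshotStoreDao.findBySnapshot(snapshot.getId(), DataStoreRole.Image);
    if (storeRef == null) {
        continue; // snapshot was deleted and GC already cleared the store entry
    }
    // ... existing migration eligibility checks
}
```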
When a volume is migrated between data stores, CloudStack keeps the original UUID and changes the path of the volume.
When the volume is not found by the given path, the agent throws a CloudRuntimeException, but it is not caught in LibvirtGetVolumeStatsCommandWrapper.java.
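A hedged sketch of catching the exception per volume in the wrapper so a single missing path does not fail the whole stats command; the surrounding structure and helper signature are illustrative.

```java
// Sketch: catch the per-volume CloudRuntimeException so one missing path does not
// fail the whole GetVolumeStatsCommand (surrounding structure is illustrative).
HashMap<String, VolumeStatsEntry> statEntry = new HashMap<>();
for (String volumeUuid : cmd.getVolumeUuids()) {
    try {
        VolumeStatsEntry entry = getVolumeStat(libvirtComputingResource, conn, volumeUuid, cmd.getPoolUuid());
        statEntry.put(volumeUuid, entry);
    } catch (CloudRuntimeException e) {
        // volume path changed after migration and is not found on this pool; skip it
        s_logger.warn("Cannot get volume stats for " + volumeUuid + ": " + e.getMessage());
    }
}
return new GetVolumeStatsAnswer(cmd, "", statEntry);
```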
When expunging a Running VM, the VM is stopped with forcestop=false, which does not make sense. We should honor the vm.destroy.forcestop global setting, or always set forcestop=true.
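A hedged sketch of honoring the setting; the global setting name is from the description above, while the ConfigKey wiring and the stop helper shown here are assumptions about the surrounding code.

```java
// Sketch: honor the vm.destroy.forcestop global setting when expunging a Running VM
// (the ConfigKey wiring and stop helper are assumptions).
static final ConfigKey<Boolean> VmDestroyForcestop = new ConfigKey<>("Advanced", Boolean.class,
        "vm.destroy.forcestop", "false",
        "On destroy, force-stop takes this value", true);

void stopBeforeExpunge(VirtualMachine vm) throws Exception {
    boolean forceStop = VmDestroyForcestop.value();
    stopVirtualMachine(vm.getId(), forceStop); // previously effectively hard-coded to forcestop=false
}
```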
When scripts/vm/hypervisor/kvm/kvmvmactivity.sh is called with an incorrect file name, an error is printed which is then interpreted as output from the script.
When an incorrect file name is passed the script prints out:
stat: cannot stat ‘b51d7336-d964-44ee-be60-bf62783dabc’: No such file or directory
=====> DEAD <======
The checkingHB() process in KVMHAVMActivityChecker.java expects just
=====> DEAD <======
but receives the unexpected error message as well, and so treats the VM as alive.
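One possible hardening on the Java side, shown as a hedged sketch (not necessarily the actual patch): scan the script output line by line for the exact marker, so stray error lines such as the stat message do not mask the result.

```java
// Sketch: decide "dead" by scanning the script output line by line for the exact
// marker, so stray error lines do not mask the result (one possible hardening,
// not necessarily the actual patch).
private boolean vmMarkedDead(String scriptOutput) {
    if (scriptOutput == null) {
        return false;
    }
    for (String line : scriptOutput.split("\\R")) {
        if ("=====> DEAD <======".equals(line.trim())) {
            return true;
        }
    }
    return false;
}
```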
When a guest VM adds a secondary NIC, it gets the wrong hostname "infiniteh" from the DHCP server.
infiniteh --> infinite
cat /etc/dhcphosts.txt
02:00:0b:ef:00:04,set:192_168_4_18,192.168.4.18,gumd-tes3,infiniteh
* changed template and binaries iso to public links
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* iso state check and timeout fixes
refactoring
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
* changed timeouts
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>