If a VM has a last host_id recorded, CloudStack will first try to start the VM on that host. However, while the host tag is checked there, the guest OS preference is not. A new VM, by contrast, is deployed to the preferred host as we expect.
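A hedged sketch of the intended ordering; the types and helpers below are hypothetical, not CloudStack's actual deployment planner classes. It only illustrates the point that the last host should have to pass the same checks (host tag and guest OS preference) as any other candidate:

```java
// Hypothetical sketch -- types and helpers are illustrative, not CloudStack's
// actual planner. The point: the last host must pass BOTH checks, not just
// the host-tag check.
import java.util.List;

class LastHostPlannerSketch {
    record Host(List<String> tags, String preferredGuestOsCategory) {}
    record Vm(String requiredHostTag, String guestOsCategory) {}

    static boolean suitable(Host h, Vm vm) {
        return h.tags().contains(vm.requiredHostTag())                        // checked today
                && h.preferredGuestOsCategory().equals(vm.guestOsCategory()); // the missing check
    }

    static Host pick(Vm vm, Host lastHost, List<Host> candidates) {
        if (lastHost != null && suitable(lastHost, vm)) {
            return lastHost; // prefer the last host only when it passes all checks
        }
        // New VMs already go through the full filter and land on a preferred host.
        return candidates.stream().filter(h -> suitable(h, vm)).findFirst().orElse(null);
    }
}
```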
Fixes: #3554 (comment)
After a few hours of running with InfluxDB configured, CloudStack hangs due to an OutOfMemoryError being raised. The exception happens at com.cloud.server.StatsCollector.writeBatches(StatsCollector.java:1510):
2020-08-12 21:19:00,972 ERROR [c.c.s.StatsCollector] (StatsCollector-6:ctx-0a4cfe6a) (logid:03a7ba48) Error trying to retrieve host stats
java.lang.OutOfMemoryError: unable to create new native thread
...
at org.influxdb.impl.BatchProcessor.<init>(BatchProcessor.java:294)
at org.influxdb.impl.BatchProcessor$Builder.build(BatchProcessor.java:201)
at org.influxdb.impl.InfluxDBImpl.enableBatch(InfluxDBImpl.java:311)
at com.cloud.server.StatsCollector.writeBatches(StatsCollector.java:1510)
at com.cloud.server.StatsCollector$AbstractStatsCollector.sendMetricsToInfluxdb(StatsCollector.java:1351)
at com.cloud.server.StatsCollector$HostCollector.runInContext(StatsCollector.java:522)
Context on InfluxDB batching: enabling batch mode on InfluxDB speeds up writes considerably, but it requires caution to avoid zombie threads.
Solution: the batching feature creates an internal thread pool that must be shut down explicitly; therefore, it is important to call influxDB.close().
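A minimal sketch of the fix pattern with the influxdb-java client; the connection values and measurement below are placeholders:

```java
// Every enableBatch() spawns an internal scheduler thread (the BatchProcessor
// in the stack trace above), so the client must be closed when the batch is
// done -- otherwise each collection cycle leaks threads until native thread
// creation fails.
import java.util.concurrent.TimeUnit;

import org.influxdb.InfluxDB;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class InfluxBatchExample {
    public static void main(String[] args) {
        InfluxDB influxDB = InfluxDBFactory.connect("http://127.0.0.1:8086", "user", "password");
        try {
            influxDB.setDatabase("cloudstack");
            // Batching starts a background flush thread.
            influxDB.enableBatch(2000, 100, TimeUnit.MILLISECONDS);
            influxDB.write(Point.measurement("host_stats")
                    .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
                    .addField("cpu_utilization", 0.42)
                    .build());
        } finally {
            // Flushes pending points and shuts down the batch thread pool --
            // the call the StatsCollector was missing.
            influxDB.close();
        }
    }
}
```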
This PR aims to fix the issue below:
1. Create a network offering for an isolated network (services: Dns/Dhcp/Userdata) and enable it
2. Create an isolated network with the new offering
3. Create a VM
4. Check the guest IP of the virtual router
5. Restart the network with cleanup
6. Check the guest IP of the new virtual router
The IPs in step 4 and step 6 should be the same, but they are actually different.
The "hypervisor" field in listvmsnapshot response will
be used in primate to enable/disable creating snapshot
from vm snapshot functionality.
Creating snpashot from vm snapshot will be enabled only if
hypervisor is KVM
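An illustrative sketch of how such a response field is typically exposed (CloudStack serializes API responses with Gson); the class name and gating method below are hypothetical, not the PR's actual change:

```java
// Illustrative only -- a response field plus the KVM-only gate the UI applies.
import com.google.gson.annotations.SerializedName;

public class VmSnapshotResponseSketch {
    @SerializedName("hypervisor")
    private String hypervisor; // e.g. "KVM", "XenServer", "VMware"

    public void setHypervisor(String hypervisor) {
        this.hypervisor = hypervisor;
    }

    // The gate Primate applies: the action is offered only on KVM.
    public boolean canCreateSnapshotFromVmSnapshot() {
        return "KVM".equals(hypervisor);
    }
}
```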
When the static route service is not available on the VPC and a static route is created, the static route is created in the Revoked state.
Currently, the UI doesn't distinguish between active and revoked static routes.
This PR adds the missing state filter to the list routes command and lists only active routes in the UI.
It also ignores revoked routes when a private gateway is being removed, clearing out those inactive routes before the gateway is removed.
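A hedged sketch of the filtering idea; the types and names below are illustrative stand-ins, not CloudStack's actual DAO code:

```java
// Only routes in the Active state are returned to the UI; Revoked entries are
// skipped (and cleaned up before a private gateway is removed).
import java.util.List;
import java.util.stream.Collectors;

class StaticRouteFilterSketch {
    enum State { Active, Revoked }
    record StaticRoute(String cidr, State state) {}

    static List<StaticRoute> listActive(List<StaticRoute> all) {
        return all.stream()
                .filter(r -> r.state() == State.Active)
                .collect(Collectors.toList());
    }
}
```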
Fixes #2908
This fixes an issue where the virtual size was counted twice when the disk is a linked-clone root disk. The virtual size of the root disk (the first in the chain) must be used.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Adding the following fixes so Primate can work without issues:
- Adding pagination for listNetworkAclLists
- Adding pagination for listRoles
- Returning the mshost uuid rather than the msid in the list hosts response
- Allowing listVirtualMachinesMetrics to respect hostid
- Returning all details in the template response
On logout, this purges all cookies, including multiple sessionkey cookies if they were passed. On login, this restricts the sessionkey cookie (HttpOnly) to the / path.
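A minimal sketch of the described cookie handling using the standard Servlet API; the class and method names are illustrative:

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

class SessionCookieSketch {
    // On logout: expire every cookie the browser sent, including any
    // duplicate sessionkey cookies.
    static void purgeAllCookies(HttpServletRequest req, HttpServletResponse resp) {
        Cookie[] cookies = req.getCookies();
        if (cookies == null) {
            return;
        }
        for (Cookie c : cookies) {
            Cookie expired = new Cookie(c.getName(), "");
            expired.setMaxAge(0); // instructs the browser to drop it
            expired.setPath("/"); // match the path it was set on
            resp.addCookie(expired);
        }
    }

    // On login: scope the HttpOnly sessionkey cookie to the "/" path.
    static void setSessionKeyCookie(HttpServletResponse resp, String sessionKey) {
        Cookie c = new Cookie("sessionkey", sessionKey);
        c.setHttpOnly(true); // not readable from JavaScript
        c.setPath("/");
        resp.addCookie(c);
    }
}
```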
Fixes #4136
Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
This change ensures that the Backup & Recovery (B&R) APIs are not exported if the feature
is not enabled in any of the zones.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When migrating a VM, the popup also lists hosts dedicated to other accounts/domains as "Suitable" for migration, which is obviously wrong.
The same issue occurs with the findHostsForMigration API.
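An illustrative sketch of the expected filtering, with hypothetical types: hosts dedicated to a different account or domain should never be reported as "Suitable" migration targets.

```java
import java.util.List;
import java.util.stream.Collectors;

class MigrationTargetFilterSketch {
    // null dedication fields = host not dedicated to anyone.
    record Host(String name, Long dedicatedAccountId, Long dedicatedDomainId) {}

    static List<Host> suitableFor(long accountId, long domainId, List<Host> candidates) {
        return candidates.stream()
                .filter(h -> h.dedicatedAccountId() == null || h.dedicatedAccountId() == accountId)
                .filter(h -> h.dedicatedDomainId() == null || h.dedicatedDomainId() == domainId)
                .collect(Collectors.toList());
    }
}
```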
If we resize a volume of a VM running on a host that is not Up or not Enabled, the job is scheduled to another, healthy host. The volume is then resized by "qemu-img resize" instead of "virsh blockresize", and the image might be corrupted after the resize.
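A hedged sketch of the distinction the fix relies on; this is not the actual CloudStack agent code, which wires the resize through its own storage layer:

```java
// A volume attached to a running VM must be grown through the hypervisor
// ("virsh blockresize"), while "qemu-img resize" is only safe when no VM has
// the image open. Scheduling the resize job on a host other than the one
// running the VM silently takes the unsafe qemu-img path.
import java.io.IOException;

class VolumeResizeSketch {
    static void resize(String domainName, boolean vmRunningHere, String volumePath,
                       long newSizeGiB) throws IOException, InterruptedException {
        final String size = newSizeGiB + "G";
        final ProcessBuilder pb = vmRunningHere
                // Online resize via the hypervisor; QEMU updates the open image safely.
                ? new ProcessBuilder("virsh", "blockresize", domainName, volumePath, size)
                // Offline resize; corrupts the image if a VM still has it open.
                : new ProcessBuilder("qemu-img", "resize", volumePath, size);
        pb.inheritIO().start().waitFor();
    }
}
```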
Adding missing fields in the following APIs:
- osdisplayname in listVirtualMachines
- vpcofferingname in listVpcs
- vpcname in listPublicIpAddresses
- vpcname in listPrivateGateways
- vpcname in listVpnGateways
- templatename, podname in listRouters
- templatename, podname in listSystemVms
Fixes: #4161
Fixes the wrong count in the listAffinityGroups API.
The API was returning the count of AffinityGroupJoinVO records rather than the number of affinity groups.
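A hedged sketch of the counting bug, with hypothetical types: the join view has one row per (group, member) pair, so counting rows overstates the number of groups; the count must be over distinct group IDs.

```java
import java.util.List;

class AffinityGroupCountSketch {
    record AffinityGroupJoinRow(long groupId, long memberId) {}

    static long groupCount(List<AffinityGroupJoinRow> joinRows) {
        // Wrong: joinRows.size() -- that is what the API used to return.
        return joinRows.stream()
                .map(AffinityGroupJoinRow::groupId)
                .distinct()
                .count();
    }
}
```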
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
This PR fixes an issue where an instance fails to deploy due to a null pointer exception when using an L2 Guest Network with DefaultL2NetworkOfferingConfigDrive on XenServer. It also fixes migrating such an instance to another host.
This has been tested by:
- Creating an L2 Guest network, using DefaultL2NetworkOfferingConfigDrive as the network offering.
- Deploying an instance using the L2 Guest network created.
- Migrating the instance away from the host and back
When a VPC is restarted, the networks in the VPC are not restarted. This PR adds logic to restart, as part of the VPC restart, those networks in the VPC that need a restart (see the sketch below).
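An illustrative sketch of the added logic, using hypothetical helper names rather than CloudStack's actual network manager API:

```java
// When a VPC restart is requested, walk its tier networks and restart the
// ones flagged as needing a restart.
import java.util.List;

class VpcRestartSketch {
    interface Network {
        boolean isRestartRequired();
        void restart(boolean cleanup);
    }

    static void restartVpcNetworks(List<Network> vpcNetworks, boolean cleanup) {
        for (Network n : vpcNetworks) {
            if (n.isRestartRequired()) {
                n.restart(cleanup); // previously the tiers were skipped entirely
            }
        }
    }
}
```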
Fixes #3816
The BackupSync task would switch between databases to update backup usage
metrics in the cloud_usage.usage_backup table. The current framework and
its use inside a ManagedContext cause database connection
(LegacyTransaction) leaks. When the thread runs faster, the issue is
easily reproducible and can be confirmed via heap dump analysis or JMX
MBeans. This fixes the leak by moving the backup usage data updates to
the usage server: usage events are published instead of switching between
databases in a local thread inside a ManagedContextRunnable (see the
sketch below).
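A heavily hedged sketch of the direction of the fix, not the actual CloudStack code; the UsageEventBus type and method shown are hypothetical stand-ins for CloudStack's usage event publishing:

```java
// Instead of the BackupSync thread opening a second (usage) database
// connection itself, it only publishes a usage event; the usage server owns
// the cloud_usage tables and persists the record on its side.
class BackupUsageSketch {
    interface UsageEventBus {
        void publish(String eventType, long accountId, long zoneId, long backupId, long sizeBytes);
    }

    static void recordBackupUsage(UsageEventBus bus, long accountId, long zoneId,
                                  long backupId, long sizeBytes) {
        // No cross-database transaction here -- just an event. The usage server
        // consumes it and updates cloud_usage.usage_backup, so no
        // LegacyTransaction connections leak from the management server thread.
        bus.publish("BACKUP.USAGE", accountId, zoneId, backupId, sizeBytes);
    }
}
```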
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Root cause:
Even though the dynamic scaling job is handled in the VM work job queue, which ensures that multiple jobs are serialized, the database updates and the generation of usage events happen outside the job queue.
Solution:
Moved all updates into the job queue; the sketch below illustrates the serialization idea.
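An illustrative sketch of the serialization idea only, not CloudStack's actual VM work job framework:

```java
// Route every scale operation for a given VM through one queue so the DB
// update and the usage event are emitted inside the same serialized job,
// never concurrently for the same VM.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ScaleVmJobQueueSketch {
    // One single-threaded executor per VM id = per-VM serialization.
    private final Map<Long, ExecutorService> queues = new ConcurrentHashMap<>();

    void submitScaleJob(long vmId, Runnable dbUpdateAndUsageEvent) {
        queues.computeIfAbsent(vmId, id -> Executors.newSingleThreadExecutor())
              .submit(dbUpdateAndUsageEvent); // update + usage event run inside the job
    }
}
```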
First, I tested all the scenarios to check that nothing is broken:
- Scaling a running VM with a normal compute offering
- Scaling a stopped VM with a normal compute offering
- Scaling a running VM with a custom compute offering
- Scaling a stopped VM with a custom compute offering
- Scaling a stopped/running VM between a custom compute offering and a normal compute offering, and combinations among these; checked that the custom parameters are populated or deleted accordingly, based on the offering to which the VM is scaled
Since this is a corner case, I could not test the exact point where two usage events are recorded at the same time for two different API calls on the same VM.
* 4.13:
Snapshot deletion issues (#3969)
server: Cannot list affinity group if there are hosts dedicated… (#4025)
server: Search zone-wide storage pool when allocation algorithm is firstfitleastconsumed (#4002)