By default only the Integers between -128 and 127 are cached (unless the upper
bound is overridden via the java.lang.Integer.IntegerCache.high system property).
If the inbound or outbound values fall outside that range, the reference
comparison won't work.
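For illustration, a minimal Java snippet showing the pitfall:

    public class IntegerCacheDemo {
        public static void main(String[] args) {
            Integer a = 127, b = 127;
            System.out.println(a == b);      // true: both references come from the cache
            Integer c = 128, d = 128;
            System.out.println(c == d);      // false: two distinct boxed objects
            System.out.println(c.equals(d)); // true: always compare boxed values with equals()
        }
    }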
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
- minor resource leak cleaned up
- cpu-speed reading method extracted
- test added
- logging added in case of exception
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
This saves us a lot of code and libvirt is probably a better
place to do this.
libvirt-java now has the support we want, so we can now resize volumes
with libvirt.
(C)LVM volumes can't be resized using libvirt, so we have to
invoke a resize script for that.
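For illustration, a minimal libvirt-java sketch of the resize path (the
connection URI, pool name, and volume name are placeholders):

    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;
    import org.libvirt.StorageVol;

    public class VolResizeSketch {
        public static void main(String[] args) throws LibvirtException {
            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolLookupByName("default"); // placeholder pool
            StorageVol vol = pool.storageVolLookupByName("myvolume");   // placeholder volume
            vol.resize(20L * 1024 * 1024 * 1024, 0);                    // grow to 20 GiB, no flags
            conn.close();
        }
    }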
By default the client_mount_timeout setting in librados is 300 seconds,
but that causes the connect to the Ceph cluster to block for 5 minutes
if the Ceph cluster is not available.
This patch is not ideal, but it mitigates the problem for now.
At a later point all this librados/librbd code should go back to libvirt
again, but the current versions of libvirt in the distributions are
too old for all the features we require.
For now this should prevent the CloudStack agent from blocking for 5 minutes
when the Ceph cluster isn't available.
This is also tracked at the Ceph tracker: http://tracker.ceph.com/issues/6507
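A minimal sketch of the mitigation with rados-java (the monitor address, key,
and the 30-second value are illustrative):

    import com.ceph.rados.Rados;

    public class CephConnectSketch {
        public static void main(String[] args) throws Exception {
            Rados rados = new Rados("admin");
            rados.confSet("mon_host", "10.0.0.1");   // placeholder monitor address
            rados.confSet("key", "AQ...");           // placeholder cephx key
            // Fail fast instead of blocking for the 300-second librados default.
            rados.confSet("client_mount_timeout", "30");
            rados.connect();
        }
    }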
To obtain network read/write statistics, multiply the sample duration by the
average of the particular performance metric obtained over the sample period.
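A worked example of that calculation, assuming the metric is an average rate
in KBps (the numbers are illustrative):

    public class NetworkStatsExample {
        public static void main(String[] args) {
            long sampleDurationSecs = 20; // illustrative sampling interval
            long avgRxKBps = 150;         // average of the metric over that interval
            long bytesReceived = avgRxKBps * 1024L * sampleDurationSecs;
            System.out.println(bytesReceived + " bytes received in the sample"); // 3072000
        }
    }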
Added a fix for the exception and the listing; details are mentioned under the bug.
After the fix, the simulator works fine.
Signed-off-by: Santhosh Edukulla <Santhosh.Edukulla@citrix.com>
Signed-off-by: Koushik Das <koushik@apache.org>
Introduced by:
commit ac65f8fddf
Author: Hugo Trippaers <htrippaers@schubergphilis.com>
Date: Mon Jan 20 18:03:02 2014 +0100
CLOUDSTACK-5884 make getTargetSwitch(NicTO nicTo) do all the work to select
switch name, type and vlan token. Change preference to use the tags set on the
physical network.
the management server was restarted. The template.properties file created for the
template has the format field in upper-case. This caused the template service to
not recognise the format, and it removed the entry from the template_store_ref
table in the db. Fixed the format field in template.properties.
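A minimal sketch of case-insensitive handling of the format field; the property
key and the recognized values are illustrative, not the exact template service
code:

    import java.io.FileReader;
    import java.util.Properties;

    public class TemplateFormatSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new FileReader("template.properties"));
            // Normalize case so "VHD" and "vhd" are treated the same.
            String format = props.getProperty("format", "").trim().toLowerCase();
            boolean recognized = format.equals("vhd") || format.equals("qcow2")
                    || format.equals("ova");
            System.out.println("format=" + format + " recognized=" + recognized);
        }
    }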
A null pointer exception was generated when a VolumeTO object was
serialized to create an answer object. If local storage is used, the uri
field will be null. Added null checks for it.
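A minimal sketch of the guard; the uri handling here is an illustrative
stand-in for the actual VolumeTO serialization:

    import java.net.URI;

    public class UriGuardSketch {
        // Returns null instead of throwing when local storage leaves uri unset.
        static String safePath(String uri) {
            return (uri == null) ? null : URI.create(uri).getPath();
        }

        public static void main(String[] args) {
            System.out.println(safePath("nfs://host/export/path")); // /export/path
            System.out.println(safePath(null));                     // null, no NPE
        }
    }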
We hit these exceptions whenever a session that the management server held with
the old 5.1 vCenter server was used to make resource calls to the new 5.5 vCenter.
Validate a vCenter session context before it is used to make a resource call,
and if the context is invalid, discard it and retrieve a new one.
During the invalidation of an old context, handle the context disconnect better
by catching the appropriate exception and returning a newly created context.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareContextFactory.java
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareSecondaryStorageResourceHandler.java
vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContext.java
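A hedged sketch of the validate-then-renew pattern; the probe call and factory
below are illustrative stand-ins, not the actual VmwareContext or
VmwareContextFactory API:

    public class ContextRenewSketch {
        interface Context {
            void probe() throws Exception; // any cheap vCenter call to test the session
            void close();
        }

        static Context getValidatedContext(Context current,
                java.util.function.Supplier<Context> factory) {
            try {
                current.probe();           // throws if the session is stale or invalid
                return current;
            } catch (Exception e) {
                try { current.close(); } catch (Exception ignore) { }
                return factory.get();      // discard and build a fresh context
            }
        }
    }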
For a detached volume, don't try to find the associated VM on the hypervisor/peer hypervisor host.
By default create a worker VM to perform snapshot operations.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
on hyperv. There were multiple issues here. Upload volume was actually
failing because the post-download check for the vhd on the cifs share was
unsuccessful. Also, the agent code wasn't parsing the volume path correctly;
fixed that too.
Don't package the OVF and VMDK files into an OVA after a template is created from a volume.
Since the packaging process involves reading from and writing to the NFS mount, it doubles the amount of data that needs to be moved around.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
Instead of injecting an instance of VolumeOrchestrationService into VmwareResource, we now populate the command object (MigrateVolumeCommand here) with the required information. Thus we don't need the volume orchestration service to query that information from the resource.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
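A sketch of the approach; the fields are illustrative, not the actual
MigrateVolumeCommand members:

    // The management server resolves everything up front and ships it in the
    // command, so the resource never needs a service lookup.
    public class MigrateVolumeCommandSketch {
        private final String volumePath;
        private final String targetPoolUuid;
        private final long volumeSizeBytes; // pre-resolved on the server side

        public MigrateVolumeCommandSketch(String volumePath, String targetPoolUuid,
                long volumeSizeBytes) {
            this.volumePath = volumePath;
            this.targetPoolUuid = targetPoolUuid;
            this.volumeSizeBytes = volumeSizeBytes;
        }
    }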
is attached, a hard disk drive is created on the scsi controller. On detach,
the data disk is removed from the drive but the disk drive is left behind.
On reattach, the agent was again trying to create a disk drive while one was
already present. Fixed the agent code to look up the disk drive while
attaching and to create a drive for attaching a data disk only if none is
found.
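A hedged model of the look-up-before-create fix; the types are illustrative
stand-ins for the Hyper-V agent objects:

    import java.util.HashMap;
    import java.util.Map;

    public class ScsiAttachSketch {
        private final Map<String, String> drivesByDisk = new HashMap<>();

        // Reuse a drive left behind by an earlier detach; create one only if absent.
        String attach(String diskPath) {
            String drive = drivesByDisk.get(diskPath);
            if (drive == null) {
                drive = "drive:" + diskPath;
                drivesByDisk.put(diskPath, drive);
            }
            return drive;
        }
    }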
The agent was always creating a disk with image format vhdx, but the
CloudStack management server defaults to image format vhd for Hyper-V.
Updated the agent code to be consistent with what CS expects. All disks
are now created with image format vhd.
When a VM is not running, the existing code is unable to retrieve the associated cluster's id. Now we try to get this information using the previous host where the VM was running.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java
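A minimal sketch of the fallback; the accessors mirror the idea rather than
the exact VMwareGuru code:

    public class ClusterLookupSketch {
        // Prefer the VM's current host; when it is stopped, fall back to the
        // last host it ran on to recover the cluster id.
        static Long resolveHostId(Long currentHostId, Long lastHostId) {
            return (currentHostId != null) ? currentHostId : lastHostId;
        }

        public static void main(String[] args) {
            System.out.println(resolveHostId(null, 42L)); // 42: stopped VM, previous host
        }
    }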
DetachISO succeeds even though the detach operation fails because the cdrom is locked by the VM, since it is mounted inside the VM.
Detect whether the cdrom is locked. If it is, fail the detach operation and warn the user to unmount it before detaching the iso/cdrom device.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
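A hedged sketch of the guard; the lock probe is an illustrative stand-in for
the actual VirtualMachineMO check:

    public class DetachIsoSketch {
        // Fail fast with an actionable message instead of reporting success.
        static String detachIso(boolean cdromLockedByGuest) {
            if (cdromLockedByGuest) {
                return "Failed: cdrom is locked because it is mounted inside the "
                     + "guest. Unmount it in the guest OS, then retry the detach.";
            }
            // ... perform the actual device reconfiguration here ...
            return "OK";
        }
    }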
deployment fails because of that, as CloudStack tries to deploy it on a host which is
actually down. An investigator wasn't present for Hyper-V, so CloudStack wasn't able to
determine the status of the host. Wrote an investigator for Hyper-V which checks with
other hosts in the cluster for the status of the host being investigated.
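A hedged sketch of the peer-check idea; the interface below is an illustrative
stand-in for the CloudStack investigator SPI:

    import java.util.List;

    public class HypervInvestigatorSketch {
        interface PeerQuery {
            // TRUE/FALSE when the peer can tell, null when it cannot.
            Boolean ask(String peerHost, String targetHost);
        }

        static Boolean isHostAlive(String target, List<String> peers, PeerQuery query) {
            for (String peer : peers) {
                Boolean alive = query.ask(peer, target);
                if (alive != null) {
                    return alive;  // first definitive answer wins
                }
            }
            return null;           // undetermined; HA tries other investigators
        }
    }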
host is put in maintenance mode. The migrate flag wasn't set to true in
the maintain answer. This caused CloudStack not to schedule a migration
work item for the VMs on the host. Made a change to set the migrate flag to
true in the maintain answer.