Add executeInVR() with timeout interface to VirtualRouterDeployer
An AggregationControlCommand with Action.Finish may take longer than a normal command,
since it executes all the aggregated commands in one run, which may result in an
SSH timeout for SshHelper or other mechanisms that communicate with the VR.
Introduce a new executeInVR() interface with an added timeout period to wait for
FinishAggregationCommand to complete execution.
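A minimal sketch of the extended interface under these assumptions (the interface name comes from this change; the return type and timeout parameter shown here are placeholders, not the actual CloudStack signatures):

    // Illustrative only; the real VirtualRouterDeployer may declare different
    // parameter and return types.
    public interface VirtualRouterDeployerSketch {

        // Existing style of call: runs a script in the VR with the default SSH timeout.
        String executeInVR(String routerIp, String script, String args);

        // Added overload: the caller passes a timeout so a long-running
        // FinishAggregationCommand, which replays all aggregated commands in one
        // execution, does not hit the default SSH timeout.
        String executeInVR(String routerIp, String script, String args, long timeoutSeconds);
    }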
to flow rules and applies them on the bridge
Add an event subscriber in OvsTunnelManager that listens to
replaceNetworkAcl events. On such an event, it sends the updated policy info to all
the hosts in the VPC.
- get the hosts a VPC spans, given the VPC id
- get the VMs in the VPC
- get the hosts a network spans
- get the VPCs a host is part of
- get the VMs of a VPC on a host (see the sketch after this list)
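As a rough illustration of the helpers above (the interface and method names are assumptions, not the actual CloudStack signatures):

    import java.util.List;

    // Hypothetical lookup interface; illustrative only.
    public interface VpcTopologyLookupSketch {
        List<Long> listHostsSpannedByVpc(long vpcId);            // hosts the VPC spans
        List<Long> listVmsInVpc(long vpcId);                     // VMs in the VPC
        List<Long> listHostsSpannedByNetwork(long networkId);    // hosts a network spans
        List<Long> listVpcsOnHost(long hostId);                  // VPCs a host is part of
        List<Long> listVpcVmsOnHost(long vpcId, long hostId);    // VMs of a VPC on a host
    }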
Introduces the capability to build a physical topology representation of a
VPC. This JSON file is encapsulated in
OvsVpcPhysicalTopologyConfigCommand and is used to send the full topology
to hypervisor hosts. On the hypervisor this JSON config can be used to set up
tunnels, configure the bridge, add flow rules, etc.
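A hedged sketch of the kind of topology document the command could carry; only the command class name comes from this change, the field names and helper class below are assumptions:

    import com.google.gson.Gson;
    import java.util.List;

    // Illustrative shape of the topology JSON; not the actual CloudStack schema.
    public class VpcTopologyJsonSketch {
        static class Host     { long id; String ip; }
        static class Network  { String uuid; long greKey; }
        static class Vm       { String uuid; long hostId; List<String> nicNetworkUuids; }
        static class Topology { String vpcUuid; List<Host> hosts; List<Network> networks; List<Vm> vms; }

        // The JSON string would be wrapped in OvsVpcPhysicalTopologyConfigCommand
        // and parsed on the hypervisor to set up tunnels, bridges and flow rules.
        static String toJson(Topology topology) {
            return new Gson().toJson(topology);
        }
    }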
Ovs GURU changed to use a different broadcast scheme, VS://vpcid.gerkey, for the
networks in a VPC that use distributed routing.
Each VIF and tunnel interface is set up to carry the network UUID in its other/options
config.
storage pool (SMB) and attached to a running VM can be live migrated to another shared storage
pool. Also, a VM and its volumes can be live migrated to another host and storage pool, respectively.
2) Corrected some logging in MidoNetPublicNetworkGuru - removed the .toString() call on the objects in the log body, as toString() is called on the object by default when using log4j
CLOUDSTACK-4762 : Enabling VGPU support for XenServer.
This feature enables the GPU-passthrough and vGPU functionality.
With the help of this feature, admins/users will be able to leverage
the power of the GPU graphics unit by deploying a virtual machine with GPU or
vGPU support, or by changing the service offering of an existing VM
at any later point in time. These GPU/vGPU enabled VMs are able to run
graphical applications.
For now, this feature is only supported with the XenServer hypervisor, but
can be extended to add support for other hypervisors.
In vCenter 5.5, once a volume is migrated, the VMDKs are renamed to match the name of the VM.
Update volume path for every volume belonging to the VM to the corresponding new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/org/apache/cloudstack/storage/motion/VmwareStorageMotionStrategy.java
In vCenter 5.5, once a volume is migrated, the VMDKs are renamed to match the name of the VM.
If a volume has been renamed upon migration, update its volumePath to the new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
With VirtIO enabled on KVM, FreeBSD 10 supports VirtIO for both the
network and the disks. This frees us from IDE and E1000, which should
also improve performance.
By default, all network disks are in RAW format. Gluster works fine with
QCOW2, which has some advantages.
Disks are by default in QCOW2 format. It is possible to run into
a mismatch, where the disk is in QCOW2 format, but QEMU gets started
with format=raw. This causes the virtual machines to lock up on boot.
Failures to start a virtual machine can be verified by checking the log
of the virtual machine and comparing it with the output of 'qemu-img info'.
In /var/log/libvirt/qemu/<VM>.log find the URL for the drive:
-drive file=gluster+tcp://...,format=raw,..
Compare this with the 'qemu-img info' output of the same file, mounted
under /mnt/<pool-uuid>/<img-uuid>:
# qemu-img info /mnt/<pool-uuid>/<img-uuid>
...
file format: qcow2
...
This change passes the format when creating a disk located on RBD
(RAW only) or Gluster (QCOW2).
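The idea, sketched below with illustrative names (the real change lives in the KVM/libvirt storage code and uses different types):

    // Illustrative only: choose the format per pool type instead of always raw.
    enum PoolTypeSketch { RBD, GLUSTERFS, OTHER }
    enum DiskFormatSketch { RAW, QCOW2 }

    class DiskFormatSelectorSketch {
        static DiskFormatSketch formatFor(PoolTypeSketch type) {
            switch (type) {
                case RBD:       return DiskFormatSketch.RAW;   // RBD only supports RAW
                case GLUSTERFS: return DiskFormatSketch.QCOW2; // Gluster-backed disks are QCOW2
                default:        return DiskFormatSketch.RAW;   // placeholder for other pools
            }
        }
    }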
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The support for Gluster as Primary Storage is mostly based on the
implementation for NFS. Like NFS, libvirt can address a Gluster environment
through the 'netfs' pool-type.
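As an illustration, creating such a 'netfs' pool through libvirt-java could look roughly like this (host name, volume name and mount path are made-up examples):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class GlusterNetfsPoolSketch {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            // 'netfs' pool backed by a Gluster volume; values are examples only.
            String xml =
                "<pool type='netfs'>" +
                "  <name>gluster-primary</name>" +
                "  <source>" +
                "    <host name='gluster.example.com'/>" +
                "    <dir path='primary-volume'/>" +
                "    <format type='glusterfs'/>" +
                "  </source>" +
                "  <target><path>/mnt/gluster-primary</path></target>" +
                "</pool>";
            StoragePool pool = conn.storagePoolCreateXML(xml, 0);
            pool.refresh(0);
        }
    }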
PrepareForMigrationCommand, so that the destination hypervisor can
mount the pool. This further exposed an issue for KVM where the ISO
was not getting cleaned up upon successful migration; fixed as well.
By default, only the Integers between -128 and 127 are cached (unless overridden by the java.lang.Integer.IntegerCache.high system property).
If the inbound or outbound values are higher, the reference comparison won't work.
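A small demonstration of the pitfall (standard Java behaviour, not CloudStack-specific code):

    public class IntegerCacheDemo {
        public static void main(String[] args) {
            Integer a = 127, b = 127;
            System.out.println(a == b);      // true: both point at the cached instance
            Integer c = 1000, d = 1000;
            System.out.println(c == d);      // false: outside the cache, distinct objects
            System.out.println(c.equals(d)); // true: value comparison is what is needed
        }
    }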
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
- minor resource leak cleaned up
- cpu-speed reading method extracted
- test added
- logging added in case of exception
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
This saves us a lot of code and libvirt is probably a better
place to do this.
libvirt-java now has the support we want, so we can now resize volumes
with libvirt.
(C)LVM volumes can't be resized using libvirt, so we have to
invoke a resize script for that.
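For illustration, resizing through libvirt-java could look roughly like this (the pool and volume names are placeholders):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;
    import org.libvirt.StorageVol;

    public class ResizeVolumeSketch {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            StoragePool pool = conn.storagePoolLookupByName("primary");  // placeholder pool
            StorageVol vol = pool.storageVolLookupByName("disk-volume"); // placeholder volume
            vol.resize(20L * 1024 * 1024 * 1024, 0);  // grow to 20 GiB; flags = 0
        }
    }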
By default the client_mount_timeout setting in librados is 300 seconds,
but that causes the connection to the Ceph cluster to block for 5 minutes
if the Ceph cluster is not available.
This patch is not ideal, but it mitigates the problem for now.
At a later point all this librados/librbd code should go back to libvirt
again, but the current versions of libvirt in the distributions are
too old for all the features we require.
For now this should prevent the CloudStack agent from blocking for 5 minutes
when the Ceph cluster isn't available.
This is also tracked at the Ceph tracker: http://tracker.ceph.com/issues/6507
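A hedged sketch with rados-java of lowering the timeout before connecting (the option value and credentials below are examples, not the exact values used by this patch):

    import com.ceph.rados.Rados;

    public class RadosTimeoutSketch {
        public static void main(String[] args) throws Exception {
            Rados rados = new Rados("admin");                    // example cephx user
            rados.confSet("mon_host", "mon.example.com");        // example monitor
            rados.confSet("key", "AQ...==");                     // example key, elided
            rados.confSet("client_mount_timeout", "30");         // fail fast instead of 300s
            rados.connect();                                     // now blocks far less than 5 minutes
        }
    }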
To obtain network read/write statistics, multiply the sample duration by the
average of the particular performance metric obtained over the sample period.
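A worked example of that calculation (numbers are illustrative; the metric is assumed to be reported as an average rate in KBps over the sample window):

    public class NetworkStatsSketch {
        public static void main(String[] args) {
            double avgReadKBps = 120.0;     // average of the read metric over the sample
            int sampleDurationSec = 300;    // length of the sample period in seconds
            double readKB = avgReadKBps * sampleDurationSec;
            System.out.println(readKB + " KB read during the sample period"); // 36000.0 KB
        }
    }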