Add executeInVR() with timeout interface to VirtualRouterDeployer
AggregationControlCommand with Action.Finish may take longer than a normal
command, since it executes all the queued commands in one run, and it may
result in an SSH timeout for SshHelper or any other mechanism communicating
with the VR.
Introduce a new executeInVR() interface with an added timeout period to wait
for the FinishAggregationCommand to complete execution.
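A minimal sketch of the overload, assuming illustrative parameter names (the
real signature may differ):

    public interface VirtualRouterDeployer {
        ExecutionResult executeInVR(String routerIp, String script, String args);
        // Overload with an explicit timeout for long-running commands
        // such as FinishAggregationCommand.
        ExecutionResult executeInVR(String routerIp, String script, String args,
                int timeoutSeconds);
    }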
introduce 'RegionLevelVpc' as a capability of the 'Connectivity' service. Add
support for createVPCOffering to take 'regionlevelvpc' as a capability
of the 'connectivity' service.
introduces a new capability 'StretchedL2Subnet' for the 'Connectivity'
service. Also adds support to the createNetworkOffering API to allow the
StretchedL2Subnet capability for the Connectivity service.
adds a check to ensure the 'Connectivity' service provider supports the
'StretchedL2Subnet' and 'RegionLevelVpc' capabilities when specified in
createNetworkOffering and createVpcOffering respectively (see the sketch below)
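A hedged sketch of what that check could look like; the provider capability
lookup helper is hypothetical:

    // Reject the offering when 'Connectivity' is requested with a
    // capability its provider does not advertise.
    Map<Capability, String> requested = serviceCapabilityMap.get(Service.Connectivity);
    if (requested != null && requested.containsKey(Capability.RegionLevelVpc)) {
        Map<Capability, String> supported =
                getProviderCapabilities(provider, Service.Connectivity); // hypothetical helper
        if (supported == null || !supported.containsKey(Capability.RegionLevelVpc)) {
            throw new InvalidParameterValueException(
                    "Provider does not support the RegionLevelVpc capability"
                    + " of the Connectivity service");
        }
    }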
enable ovs plug-in to support both StretchedL2Subnet and RegionLevelVpc
capabilities
make zone id an optional parameter in createVpc; zone id can be null only
if the VPC offering supports region-level VPC
in a region-level VPC, allow the network/tier to be created in any zone of
the region
keep zoneid as required param for createVpc
skip the external guest network guru if the 'Connectivity' service is present
in the network offering
fix build break in contrail manager
permit VMs to be created in a different zone than the one in which the network
is created, if the network supports stretched L2 subnet
add integration tests for region level VPC
rebase to master
Conflicts:
setup/db/db/schema-430to440.sql
to flow rules and applies them on the bridge
add an event subscriber in OvsTunnelManager that listens for
replaceNetworkAcl events. On an event it sends the updated policy info to all
the hosts in the VPC. New lookup helpers (see the sketch after this list):
- get the hosts the VPC spans, given a VPC id
- get the VMs in the VPC
- get the hosts a network spans
- get the VPCs a host is part of
- get the VMs of a VPC on a host
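A rough sketch of the wiring, with hypothetical topic and helper names:

    // On an ACL-replace event, push the updated policy to every host
    // the VPC spans.
    messageBus.subscribe("Network.AclReplaced", new MessageSubscriber() { // topic name illustrative
        @Override
        public void onPublishMessage(String senderAddress, String subject, Object args) {
            long vpcId = (Long) args;
            for (Long hostId : getVpcSpannedHostIds(vpcId)) {               // hypothetical DAO helper
                agentMgr.easySend(hostId, buildRoutingPolicyCommand(vpcId)); // hypothetical
            }
        }
    });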
introduces the capability to build a physical topology representation of a
VPC. This JSON is encapsulated in
OvsVpcPhysicalTopologyConfigCommand, and is used to send the full topology
to hypervisor hosts. On the hypervisor this JSON config can be used to set up
tunnels, configure the bridge, add flow rules, etc.
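For illustration, the topology could be assembled and serialized along these
lines (the holder type is a stand-in, not the actual schema):

    // Illustrative data holder; Gson turns it into the JSON that
    // OvsVpcPhysicalTopologyConfigCommand carries to the hosts.
    class VpcTopology {
        String vpcUuid;
        List<String> hostIps;      // hosts the VPC spans
        List<String> networkUuids; // tiers in the VPC
        List<String> vmIps;        // guest VMs per tier
    }

    VpcTopology topology = buildTopologyFor(vpcId); // hypothetical builder
    String json = new com.google.gson.Gson().toJson(topology);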
Ovs guru to use a different broadcast scheme, VS://vpcid.grekey, for the
networks in a VPC that use distributed routing.
Each VIF and tunnel interface carries the network UUID in its other/options
config.
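For illustration, the broadcast URI for such a tier might be built like this
(variable and setter names assumed):

    // The VPC id plus the GRE key identify the VPC-wide broadcast domain.
    URI broadcastUri = URI.create("vs://" + vpcId + "." + greKey);
    network.setBroadcastUri(broadcastUri);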
storage pool (SMB) and attached to a running VM can be live migrated to another shared storage
pool. Also a VM and its volumes can be live migrated to another host and storage pool, respectively.
2) Corrected some logging in MidoNetPublicNetworkGuru - removed the .toString() call on the objects in the log body, as toString is called on the object by default when using log4j
CLOUDSTACK-4762 : Enabling VGPU support for XenServer.
This feature enables the GPU-passthrough and vGPU functionality.
With the help of this feature, admins/users will be able to leverage
the power of the GPU graphics unit by deploying a virtual machine with GPU
or vGPU support, or by changing the service offering of an existing VM
at any later point in time. These GPU/vGPU-enabled VMs are able to run
graphical applications.
For now, this feature is only supported with the XenServer hypervisor but
can be extended to add support for other hypervisors.
Moving default transport for console proxy, SSVM to http.
See
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Realhost+IP+changes
for more info.
jlk ported Amogh's patch for 4.3 to master - the code base is different
enough that the patch had multiple issues.
Author: Amogh Vasekar <amogh.vasekar@citrix.com>
Signed-off-by: John Kinsella <jlk@stratosec.co>
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
Update volume path for every volume belonging to the VM to the corresponding new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/org/apache/cloudstack/storage/motion/VmwareStorageMotionStrategy.java
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
If a volume has been renamed upon migration update its volumePath to that of the new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
With VirtIO enabled on KVM, FreeBSD 10 supports VirtIO for both the
network and the disks. This frees us from IDE and E1000, which should
also improve performance.
By default all network disks are in RAW format. Gluster works fine with
QCOW2, which has some advantages.
Disks are by default in QCOW2 format. It is possible to run into
a mismatch, where the disk is in QCOW2 format, but QEMU gets started
with format=raw. This causes the virtual machines to lock up on boot.
Failures to start a virtual machine can be verified by checking the log
of the virtual machine and comparing it with the output of 'qemu-img info'.
In /var/log/libvirt/qemu/<VM>.log find the URL for the drive:
-drive file=gluster+tcp://...,format=raw,..
Compare this with the 'qemu-img info' output of the same file, mounted
under /mnt/<pool-uuid>/<img-uuid>:
# qemu-img info /mnt/<pool-uuid>/<img-uuid>
...
file format: qcow2
...
This change passes the format when creating a disk located on RBD
(RAW only) or Gluster (QCOW2).
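Simplified, the idea on the libvirt side is to derive the driver type from
the pool type instead of hardcoding raw (the surrounding disk-definition
builder code is omitted):

    // Pick the libvirt driver type that matches the actual image format.
    String driverType = (poolType == StoragePoolType.RBD) ? "raw" : "qcow2";
    String driverXml = "<driver name='qemu' type='" + driverType + "' cache='none'/>";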
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The support for Gluster as Primary Storage is mostly based on the
implementation for NFS. Like NFS, libvirt can address a Gluster environment
through the 'netfs' pool-type.
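For reference, a 'netfs'-backed Gluster pool is defined in libvirt roughly
like this (host name and paths are placeholders):

    <pool type='netfs'>
      <name>gluster-primary</name>
      <source>
        <host name='gluster.example.org'/>
        <dir path='/primary-volume'/>
        <format type='glusterfs'/>
      </source>
      <target>
        <path>/mnt/gluster-primary</path>
      </target>
    </pool>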
return null. Changed to retrieve the first entry in the map.
Removed the ExecutionException try/catch, as it would prevent the
unit test from giving accurate information on exceptions. Avoid catching
checked exceptions in a unit test; use the expected keyword on @Test
instead, e.g.:
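    // JUnit 4: the test passes only if ExecutionException is thrown.
    @Test(expected = ExecutionException.class)
    public void testFailingCommand() throws Exception {
        future.get(); // call under test that is expected to fail
    }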
PrepareForMigrationCommand, so that the destination hypervisor can
mount the pool. This further exposed an issue for KVM where the ISO
was not getting cleaned up upon successful migration; fixed as well.
When projects/domains are created, our plugin code won't get invoked. The
contrail plugin uses the event infrastructure provided by CloudStack to
receive these events and handle them accordingly. Domains/projects must be
created before a virtual network/VM object can be created in the contrail
implementation, hence the plugin needs a way to get notified about
project/domain events.
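A heavily hedged sketch of the subscription (the topic, event type strings,
and helpers are hypothetical):

    eventBus.subscribe(projectAndDomainTopic, new EventSubscriber() {
        @Override
        public void onEvent(Event event) {
            // Mirror CloudStack project/domain lifecycle into contrail.
            if ("PROJECT.CREATE".equals(event.getEventType())) {
                createContrailProject(event); // hypothetical helper
            } else if ("DOMAIN.CREATE".equals(event.getEventType())) {
                createContrailDomain(event);  // hypothetical helper
            }
        }
    });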
By default only the Integers between -128 and 127 are cached (unless overridden by the java.lang.Integer.IntegerCache.high system property).
If the inbound or outbound values fall outside this range, reference comparison won't work:
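    Integer a = 127, b = 127;
    System.out.println(a == b);      // true: both refer to the cached instance
    Integer c = 1000, d = 1000;
    System.out.println(c == d);      // false: two distinct boxed objects
    System.out.println(c.equals(d)); // true: value comparison always works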
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
- minor resource leak cleaned up
- cpu-speed reading method extracted
- test added
- logging added in case of exception
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Replaced HttpClient#execute(HttpUriRequest) with
HttpClient#execute(HttpUriRequest, ResponseHandler<T>).
The former requires an extra EntityUtils#consume(HttpEntity).
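Roughly, the change trades this pattern:

    HttpResponse response = client.execute(request);
    String body = EntityUtils.toString(response.getEntity());
    EntityUtils.consume(response.getEntity());

for the handler variant, which consumes the entity and releases the
connection itself:

    String body = client.execute(request, new BasicResponseHandler());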
This saves us a lot of code and libvirt is probably a better
place to do this.
libvirt-java now has the support we want, so we can now resize volumes
with libvirt.
(C)LVM volumes can't be resized using libvirt, so we have to
invoke a resize script for that.
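Roughly, assuming a libvirt-java version that exposes virStorageVolResize:

    StorageVol vol = pool.storageVolLookupByName(volumeName);
    vol.resize(newSizeBytes, 0); // flags=0: plain grow
    // (C)LVM can't go through libvirt; shell out to the resize script instead.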
By default the client_mount_timeout setting in librados is 300 seconds,
but that causes connecting to the Ceph cluster to block for 5 minutes
if the Ceph cluster is not available.
This patch is not ideal, but it mitigates the problem for now.
At a later point all this librados/librbd code should go back to libvirt
again, but the current versions of libvirt in the distributions are
too old for all the features we require.
For now this should prevent the CloudStack agent from blocking for 5 minutes
when the Ceph cluster isn't available.
This is also tracked at the Ceph tracker: http://tracker.ceph.com/issues/6507
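The mitigation amounts to lowering the timeout before connecting, along
these lines using rados-java (the 30-second value is illustrative):

    Rados r = new Rados(cephAuthUser);
    r.confSet("mon_host", cephMonHost);
    r.confSet("key", cephAuthKey);
    r.confSet("client_mount_timeout", "30"); // default of 300s blocks for 5 minutes
    r.connect();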
To obtain network read/write statistics, multiply the sample duration by the
average of the particular performance metric obtained over the sample period.
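For example, with an assumed 60-second sample window and an averaged metric
in KBps:

    long sampleDurationSecs = 60;
    double avgKBps = 512.0; // averaged performance metric over the window
    long bytesTransferred = (long) (avgKBps * 1024 * sampleDurationSecs);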