1. Fixed JSON response deserialization. While creating a mock, a JSON string can be passed which will be deserialized into a response object and returned from the agent layer.
For example, for a mock corresponding to StopCommand, a response like {"com.cloud.agent.api.StopAnswer":{"result":false,"wait":0}} can be passed.
2. Ability to mock PingCommand (returned as part of the getCurrentStatus() agent method). As part of this, a mocked VM state report can be returned.
For example: {"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"newStates":{},"_hostVmStateReport":{"v-2-VM":{"state":"PowerOn","host":"SimulatedAgent.e6df7732-69b2-429b-9b6a-3e24dddfa2e0"},"i-2-5-VM":{"state":"PowerOff","host":"SimulatedAgent.e6df7732-69b2-429b-9b6a-3e24dddfa2e0"}},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType":"Routing","hostId":3,"contextMap":{},"wait":0}}
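As a rough sketch of how such a wrapped answer can be deserialized (assuming Gson, which CloudStack uses for agent serialization; this helper is illustrative, not the simulator's actual code):

    import com.google.gson.Gson;
    import com.google.gson.JsonElement;
    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;
    import java.util.Map;

    public class MockAnswerParser {
        private static final Gson GSON = new Gson();

        // Turn {"com.cloud.agent.api.StopAnswer":{...}} into an instance of
        // the class named by the outer key.
        public static Object parse(String json) throws ClassNotFoundException {
            JsonObject wrapper = new JsonParser().parse(json).getAsJsonObject();
            Map.Entry<String, JsonElement> entry = wrapper.entrySet().iterator().next();
            Class<?> answerClass = Class.forName(entry.getKey());
            return GSON.fromJson(entry.getValue(), answerClass);
        }
    }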
in the XenServer pool for GRE tunnel networks
The fix uses the XenServer-recommended way,
Network.other_config:assume_network_is_shared=true,
which ensures the bridge is created automatically on hosts in the pool for
GRE tunnel networks. The fix also gets rid of the error-prone custom logic
that ensured the bridge was created by plugging a VIF connected to the
GRE tunnel network into dom0.
Conflicts:
plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
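A minimal sketch of the approach with the XenServer Java SDK (assuming conn is an authenticated XenAPI Connection; the lookup is simplified):

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.Network;

    // Mark the GRE tunnel network as shared so XAPI creates the bridge
    // automatically on every host in the pool.
    void markNetworkShared(Connection conn, String networkUuid) throws Exception {
        Network network = Network.getByUuid(conn, networkUuid);
        network.removeFromOtherConfig(conn, "assume_network_is_shared");
        network.addToOtherConfig(conn, "assume_network_is_shared", "true");
    }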
xen cluster
XenServer does not create a bridge automatically when a VIF from a domU is
connected to an internal network, so there is logic to force bridge creation
by creating a VIF in dom0 connected to the GRE tunnel network. But there was
no logic to delete that VIF after the bridge got created. This fix ensures
the dom0 VIF is deleted once at least one domU VIF is connected to the
network.
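A sketch of that cleanup with the XenServer Java SDK (names simplified; assume conn is an authenticated Connection and network is the GRE tunnel network):

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.Network;
    import com.xensource.xenapi.VIF;

    // Delete the dom0 VIF once at least one domU VIF is on the network;
    // the domU VIF keeps the bridge alive from then on.
    void cleanupDom0Vif(Connection conn, Network network) throws Exception {
        VIF dom0Vif = null;
        boolean hasDomUVif = false;
        for (VIF vif : network.getVIFs(conn)) {
            if (vif.getVM(conn).getIsControlDomain(conn)) {
                dom0Vif = vif;
            } else {
                hasDomUVif = true;
            }
        }
        if (dom0Vif != null && hasDomUVif) {
            dom0Vif.unplug(conn);
            dom0Vif.destroy(conn);
        }
    }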
1. Adding the missing Template/Volume URL expiration functionality
2. Improvement - While deleting the volume during expiration, use rm -rf since VMware volumes now live in a directory
3. Improvement - Use the standard Answer so that the error gets logged in case deletion of the expiration link didn't work
4. Improvement - In case of a domain change, expire the old URLs
In situations where libvirt lost the storage pool, the KVM Agent will re-create the
storage pool in libvirt.
This can happen when libvirt is restarted, for example.
The object returned internally was missing essential information like the sourceDir
(aka the Ceph pool), the monitor IPs, cephx information and such.
In this case the first operation on the newly created pool would fail; all operations
afterwards would succeed.
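A sketch of re-creating the pool with the full source information through libvirt-java (the XML and values are illustrative, not the Agent's exact code):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    // Re-create an RBD pool with all source details (Ceph pool, monitor,
    // cephx auth) so the first operation on it does not fail.
    StoragePool recreateRbdPool(Connect conn, String name, String cephPool,
            String monHost, String authUser, String secretUuid) throws Exception {
        String xml =
            "<pool type='rbd'>" +
            "<name>" + name + "</name>" +
            "<source>" +
            "<name>" + cephPool + "</name>" +
            "<host name='" + monHost + "' port='6789'/>" +
            "<auth username='" + authUser + "' type='ceph'>" +
            "<secret uuid='" + secretUuid + "'/>" +
            "</auth>" +
            "</source>" +
            "</pool>";
        StoragePool pool = conn.storagePoolDefineXML(xml, 0);
        pool.create(0);
        return pool;
    }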
The proxy-ARP add/del is done on firewall rule add/del.
The proxy-ARP rule is deleted only when no static NAT or destination NAT rule is using the IP.
When there is a static NAT or PF rule together with a firewall rule:
a. Delete the firewall rule. The proxy-ARP delete is skipped because the rule is used by the static NAT rule.
b. After deleting the firewall rule, if we disable static NAT there is no way to delete the proxy-ARP rule.
On VM expunge we delete the firewall rules first, then the static NAT rules. This caused stale proxy-ARP
rules.
With this fix, the proxy-ARP rule is added/deleted on static NAT/PF rule add/del.
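Conceptually, the delete path now consults every rule type before dropping the proxy-ARP entry; a sketch with placeholder helpers (none of these method names are CloudStack's):

    // Sketch only: remove the proxy-ARP entry only when no rule type
    // still uses the public IP.
    abstract class ProxyArpCleanup {
        abstract boolean isIpUsedByStaticNat(String ip);
        abstract boolean isIpUsedByPortForwarding(String ip);
        abstract boolean isIpUsedByFirewall(String ip);
        abstract void deleteProxyArpEntry(String ip);

        void onRuleDeleted(String publicIp) {
            if (!isIpUsedByStaticNat(publicIp)
                    && !isIpUsedByPortForwarding(publicIp)
                    && !isIpUsedByFirewall(publicIp)) {
                deleteProxyArpEntry(publicIp);
            }
        }
    }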
to get nic info of 0.0.0.0. To get it, it iterates through all NICs and returns the last NIC in
the list if none matches the IP address. If that last NIC doesn't have a unicastAddress, the
Hyper-V agent will fail to start. We don't need the IP address during initialization; it gets
initialized later with the StartupCommand.
last VM from the VPC is deleted on a host
OVS distributed routing: ensure the bridge is deleted when the last VM from the
VPC is deleted on a host. This fix ensures that the bridge is
destroyed.
the folder column. For an SMB share the SMB credentials are in the query string of the path.
Before adding the path, the SMB share's query string should be cleaned up.
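A minimal sketch of that cleanup (the method name is illustrative):

    // Strip the query string (which carries the SMB credentials) before
    // persisting the path in the folder column.
    static String cleanSmbPath(String path) {
        int q = path.indexOf('?');
        return q < 0 ? path : path.substring(0, q);
    }

For example, cleanSmbPath("cifs://host/share?user=u&password=p") yields cifs://host/share.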
that case CloudStack was not doing anything and not updating the state of the VMs to Stopped.
Now the agent returns an empty host VM state report, and the management server will then update the
VM state to Stopped (instead of not acting upon the returned state).
not created already when an OvsVpcPhysicalTopologyConfigCommand update is
received.
Currently, if the tunnel creation fails there is no retry logic. The fix
uses OvsVpcPhysicalTopologyConfigCommand updates as an opportunity to ensure
proper tunnels are established between the hosts.
Check if the switch name detected from the traffic label for management, storage, and control traffic is null before falling back to the default value.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
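In essence the guard amounts to this sketch (getSwitchNameFromTrafficLabel is a placeholder for the label parsing):

    // Fall back to the default virtual switch only when the traffic label
    // did not yield a switch name.
    String resolveSwitchName(String trafficLabel, String defaultSwitch) {
        String name = getSwitchNameFromTrafficLabel(trafficLabel);
        return (name != null) ? name : defaultSwitch;
    }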
We used to create the snapshot after the copy from Secondary Storage,
but it could be that we never use the snapshot.
Now we check if the snapshot exists prior to performing the cloning operation
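With the rados-java bindings the existence check can look roughly like this (the exact binding API is an assumption):

    import com.ceph.rbd.Rbd;
    import com.ceph.rbd.RbdImage;
    import com.ceph.rbd.jna.RbdSnapInfo;
    import java.util.List;

    // Return true if the named snapshot already exists on the image, so
    // creating it again before cloning can be skipped.
    boolean snapshotExists(Rbd rbd, String imageName, String snapName) throws Exception {
        RbdImage image = rbd.open(imageName);
        try {
            List<RbdSnapInfo> snaps = image.snapList();
            for (RbdSnapInfo snap : snaps) {
                if (snapName.equals(snap.name)) {
                    return true;
                }
            }
            return false;
        } finally {
            rbd.close(image);
        }
    }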
Since we use qemu-img to copy from RBD to Secondary Storage we no
longer have to force RAW images, but can stick with QCOW2.
When the snapshot backups are in QCOW2 format they can easily be deployed
again when restoring from a backup.
The KVMStorageProcessor no longer has a hardcoded if-statement which sets
RBD volumes to RAW; this is now handled in the LibvirtStorageAdaptor.
The Management Server still sends QCOW2 as the format. That's a fix for later.
templatefilter="shared" is used, we see public templates also being
included in the list. This commit reverts the listTemplates behavior to the
old 4.3 logic, without using the consistent interpretation of list parameters
adopted in the new IAM model.
invalid password is provided.
- AccountManager now works using accountId instead of accountType in
the following methods too:
- isResourceDomainAdmin()
- isAdmin()
OvsVpcPhysicalTopologyConfigCommand and OvsVpcRoutingPolicyConfigCommand
fix ensures only the latest updates are applied (new OpenFlow rules) to the
bridge enabled for distributed routing.
fix the mismatch between ovs-host-setup and ovs_host_setup used by the Libvirt resource and
scripts
plug the NIC into the OVS bridges created for the tunnel network.
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/OvsVifDriver.java
In case some environments have different performance, or we find some commands
take too long to execute, a global configuration item is introduced to
specify the "time out in seconds per command in aggregation commands".
By default it's 3 seconds. If the admin feels that is too long, it can be adjusted to as
low as 1 second, which still runs well on my machine.
Conflicts:
setup/db/db/schema-430to440.sql
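A sketch of how such a global setting is typically declared with CloudStack's ConfigKey framework (the key name and wiring here are illustrative, not necessarily the exact ones introduced):

    import org.apache.cloudstack.framework.config.ConfigKey;

    // Global setting: timeout in seconds applied per command when a batch
    // of aggregated commands is executed in the VR (default 3 seconds).
    static final ConfigKey<Integer> AggregationCommandEachTimeout =
            new ConfigKey<Integer>("Advanced", Integer.class,
                    "router.aggregation.command.each.timeout", "3",
                    "Timeout in seconds per command in aggregation commands",
                    true);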
Added a new flag 'checkBeforeCleanup' to StopCommand, based on which a check is done to see if the VM is running on the HV host.
If the VM is running, it is not stopped and the operation bails out.
Also modified the MS code to call StopCommand with the appropriate value for the flag based on the context.
Currently it is only set to 'true' when called from the new vmsync logic based on the power state of the VM. For the rest it
is set to 'false', meaning no change in behaviour.
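On the resource side the handling amounts to something like this sketch (isVmRunningOnHost and stopVm are placeholders; the accessor name on StopCommand is assumed):

    import com.cloud.agent.api.Answer;
    import com.cloud.agent.api.StopCommand;

    // If asked to check first, refuse to stop a VM that is still running
    // on the hypervisor host, so vmsync does not kill a live VM.
    Answer execute(StopCommand cmd) {
        if (cmd.checkBeforeCleanup() && isVmRunningOnHost(cmd.getVmName())) {
            return new Answer(cmd, false,
                    "VM " + cmd.getVmName() + " is still running on the host; bailing out");
        }
        return stopVm(cmd);
    }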
This reduces the amount of time and storage it takes dramatically. We no longer
do a full copy, but a sparse copy. The destination image is still in RAW
format, but we only copy over the used blocks.
Qemu is also better at doing this than us doing it in Java code.
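The underlying operation is essentially a qemu-img convert, which skips unallocated blocks; invoked from Java it might look like this (paths are illustrative):

    // qemu-img convert only copies allocated blocks, so the RAW destination
    // is written sparsely instead of block-for-block.
    Process p = new ProcessBuilder(
            "qemu-img", "convert", "-O", "raw",
            "/mnt/secondary/template.qcow2",
            "/mnt/primary/volume.raw")
            .inheritIO().start();
    if (p.waitFor() != 0) {
        throw new RuntimeException("qemu-img convert failed");
    }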
Otherwise an RBDException will be thrown with the message that the snapshot
isn't protected.
modified: plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
clusters
across the pool members, an internal network that is created is visible to all the
members, but the bridge is not necessarily created. This fix enables
plugging a temp VIF connected to the internal network into dom0 and then
unplugging it; this action creates a bridge for the network on the host.
This saves the step of writing to a temporary image in /tmp first before
writing to RBD.
This is possible due to a new version of librbd. With the rbd_default_format
setting we can now force qemu-img to create format 2 RBD images.
This is available since Ceph version 0.67.5 (Dumpling).
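qemu's RBD driver accepts Ceph configuration options appended to the rbd: target, so the direct write can be sketched like this (pool, image and monitor values are illustrative):

    // Write straight to RBD, forcing format 2 images via rbd_default_format,
    // instead of staging a temporary image in /tmp first.
    Process p = new ProcessBuilder(
            "qemu-img", "convert", "-O", "raw",
            "/mnt/secondary/template.qcow2",
            "rbd:cloudstack/new-image:rbd_default_format=2:mon_host=10.0.0.1")
            .inheritIO().start();
    if (p.waitFor() != 0) {
        throw new RuntimeException("qemu-img convert to RBD failed");
    }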
introduce 'RegionLevelVpc' as a capability of the 'Connectivity' service. Add
support for createVPCOffering to take 'regionlevelvpc' as a capability
of the 'Connectivity' service.
introduces a new capability 'StretchedL2Subnet' for the 'Connectivity'
service. Also adds support to the createNetworkOffering API to allow the
StretchedL2Subnet capability for the Connectivity service.
adds a check to ensure the 'Connectivity' service provider supports the
'StretchedL2Subnet' and 'RegionLevelVpc' capabilities when specified in
createNetworkOffering and createVpcOffering respectively
enable the ovs plug-in to support both the StretchedL2Subnet and RegionLevelVpc
capabilities
make zone id an optional parameter in createVpc; zone id can be null only
if the VPC offering supports region-level VPC
in a region-level VPC, let the network/tier be created in any zone of
the region
keep zoneid as a required param for createVpc
skip the external guest network guru if the 'Connectivity' service is present in the
network offering
fix build break in contrail manager
permit VMs to be created in a different zone than the one in which the network is
created, if the network supports a stretched L2 subnet
add integration tests for region level VPC
rebase to master
Conflicts:
setup/db/db/schema-430to440.sql
Conflicts:
api/src/org/apache/cloudstack/api/ApiConstants.java
engine/schema/src/com/cloud/network/vpc/VpcVO.java
setup/db/db/schema-430to440.sql
Add an executeInVR() with timeout interface to VirtualRouterDeployer.
AggregationControlCommand with Action.Finish may take longer than a normal command,
since it executes all the commands in one go, and it may result in an
SSH timeout for SshHelper or other mechanisms communicating with the VR.
Introduce a new executeInVR() interface with an added timeout period for waiting
for FinishAggregationCommand to complete execution.
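The extended interface can be sketched as follows (the signatures are inferred from the description; the timeout is per call, in seconds):

    import com.cloud.utils.ExecutionResult;

    public interface VirtualRouterDeployer {
        ExecutionResult executeInVR(String routerIp, String script, String args);

        // Overload with a per-call timeout for long-running aggregation
        // finish commands.
        ExecutionResult executeInVR(String routerIp, String script, String args,
                int timeoutSeconds);
    }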
to flow rules and applies them on the bridge
add an event subscriber in OvsTunnelManager that listens to
replaceNetworkAcl events. On an event, it sends the updated policy info to all
the hosts in the VPC
- get the hosts a VPC spans, given a VPC id
- get the VMs in the VPC
- get the hosts a network spans
- get the VPCs a host is part of
- get the VMs of a VPC on a host
introduces the capability to build a physical topology representation of a
VPC. This JSON file is encapsulated in
OvsVpcPhysicalTopologyConfigCommand, and is used to send the full topology
to hypervisor hosts. On the hypervisor this JSON config can be used to set up
tunnels, configure the bridge, add flow rules, etc.
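Illustratively, the topology can be modeled as simple POJOs serialized with Gson and carried in the command (all field names here are hypothetical):

    import com.google.gson.Gson;
    import java.util.List;

    // Hypothetical shape of the VPC physical topology payload.
    class VpcTopology {
        String vpcId;
        List<Host> hosts;     // hosts the VPC spans, with tunnel endpoint IPs
        List<Tier> tiers;     // networks/tiers and their GRE keys
        List<Vm> vms;         // VMs with their host and network placement

        static class Host { String uuid; String tunnelIp; }
        static class Tier { String uuid; String greKey; }
        static class Vm { String uuid; String hostUuid; List<String> nicNetworkUuids; }

        // Serialize once and ship the same JSON config to every host.
        String toJson() {
            return new Gson().toJson(this);
        }
    }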
Ovs GURU, to use a different broadcast scheme VS://vpcid.gerkey for the
networks in a VPC that use distributed routing
each VIF and tunnel interface to carry the network UUID in the other/options
config
storage pool (SMB) and attached to a running VM can be live migrated to another shared storage
pool. Also, a VM and its volumes can be live migrated to another host and storage pool respectively.
2) Corrected some logging in MidoNetPublicNetworkGuru - removed the .toString() method call on the objects in the log body, as toString() is called on the object by default when using log4j
CLOUDSTACK-4762 : Enabling VGPU support for XenServer.
This feature is to enable the GPU-passthrough and vGPU functionality.
With the help of this feature, admins/users will be able to leverage
the power of the GPU graphics unit by deploying a virtual machine with GPU or
vGPU support, or by changing the service offering of an existing VM
at any later point in time. These GPU/vGPU enabled VMs are able to run
graphical applications.
For now, this feature is only supported with the XenServer hypervisor, but
it can be extended to add support for other hypervisors.
Moving the default transport for console proxy and SSVM to HTTP.
See
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Realhost+IP+changes
for more info.
jlk ported Amogh's patch for 4.3 to master - the code base is different
enough that the patch has multiple issues.
Author: Amogh Vasekar <amogh.vasekar@citrix.com>
Signed-off-by: John Kinsella <jlk@stratosec.co> 1394398017 -0700
In vCenter 5.5, once a volume is migrated, the VMDKs are renamed to match the name of the VM.
Update the volume path for every volume belonging to the VM to the corresponding new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/org/apache/cloudstack/storage/motion/VmwareStorageMotionStrategy.java
In vCenter 5.5, once a volume is migrated, the VMDKs are renamed to match the name of the VM.
If a volume has been renamed upon migration, update its volumePath to that of the new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
With VirtIO enabled on KVM, FreeBSD 10 supports VirtIO for both the
network and the disks. This frees us from IDE and E1000, which should
also improve performance.
By default all network disks are in RAW format. Gluster works fine with
QCOW2, which has some advantages.
Disks are by default in QCOW2 format. It is possible to run into
a mismatch, where the disk is in QCOW2 format but QEMU gets started
with format=raw. This causes the virtual machines to lock up on boot.
Failures to start a virtual machine can be verified by checking the log
of the virtual machine and comparing it with the output of 'qemu-img info'.
In /var/log/libvirt/qemu/<VM>.log find the URL for the drive:
-drive file=gluster+tcp://...,format=raw,..
Compare this with the 'qemu-img info' output of the same file, mounted
under /mnt/<pool-uuid>/<img-uuid>:
# qemu-img info /mnt/<pool-uuid>/<img-uuid>
...
file format: qcow2
...
This change passes the format when creating a disk located on RBD
(RAW only) or Gluster (QCOW2).
Signed-off-by: Niels de Vos <ndevos@redhat.com>
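The manual check described above can be automated with a small helper that parses 'qemu-img info' (sketch; intended for comparing the on-disk format with what QEMU was started with):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Return the "file format:" value reported by qemu-img info for an
    // image, e.g. "qcow2" or "raw".
    static String detectImageFormat(String imagePath) throws Exception {
        Process p = new ProcessBuilder("qemu-img", "info", imagePath).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("file format:")) {
                    return line.substring("file format:".length()).trim();
                }
            }
        }
        return null;
    }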