CLOUDSTACK-3317 - DVS does not support management/storage network
Added support for Management and Storage Network traffic over VMware DVS in CloudStack deployments. Also added support for storage VLAN over dvPortGroup.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CLOUDSTACK-3317 - DVS does not support management/storage network
Use a non-zero dvport count while updating dvportgroups of system traffic.
Updated the configuration compare logic to avoid the update dvportgroup operation unless required.
This improves VM/network deployment speed, as the number of update calls is reduced. Also improved logging.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Adding unit tests in class HypervisorHostHelperTest
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Added license header to new file being added to repo/branch.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Assertions are not enabled at runtime; the correct way is to throw and handle an exception without killing the app
Signed-off-by: Daan Hoogland <daan.hoogland@gmail.com>
This closes #362
VmwareResource.java:315, MS_SHOULD_BE_FINAL, Priority: High
com.cloud.hypervisor.vmware.resource.VmwareResource.s_serviceContext isn't final but should be
VmwareResource.java:331, MS_SHOULD_BE_FINAL, Priority: High
com.cloud.hypervisor.vmware.resource.VmwareResource.s_powerStatesTable isn't final but should be
Signed-off-by: Daan Hoogland <daan.hoogland@gmail.com>
This closes #363
KVM hosts which are actually up, but whose agents are shut down, should be put
in the Disconnected state. This avoids getting their VMs HA'd, and commands
such as deploying a VM will exclude that host, saving us from errors.
The improvement is that we first try to contact the KVM host itself. If that fails,
we assume it is disconnected, and then ask its KVM neighbours if they can
check its status. If all of the KVM neighbours tell us that it's Down and we're
unable to reach the KVM host, then the host is possibly down. In case any of the
KVM neighbours tell us that it's Up but we're unable to reach the KVM host, then
we can be sure that the agent is offline but the host is running.
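A minimal sketch of that decision logic, with hypothetical names (the real check lives in the KVM investigator code):

    import java.util.List;

    enum HostStatus { UP, DOWN, DISCONNECTED }

    class NeighbourCheck {
        // Decide the host state from direct reachability plus neighbour reports.
        HostStatus investigate(boolean hostReachable, List<HostStatus> neighbourReports) {
            if (hostReachable) {
                return HostStatus.UP;
            }
            // Any neighbour still sees the host: only the agent is offline,
            // so report Disconnected and the VMs are not HA'd.
            for (HostStatus reported : neighbourReports) {
                if (reported == HostStatus.UP) {
                    return HostStatus.DISCONNECTED;
                }
            }
            // Unreachable directly and all neighbours say Down: possibly down.
            return HostStatus.DOWN;
        }
    }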
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Signed-off-by: wilderrodrigues <wrodrigues@schubergphilis.com>
This closes #340
Executing the tests in an environment where Libvirt is also installed
caused errors.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This closes #342
Maven was building the project and executing the tests just fine because it uses "-sourcepath". However,
with IDEs and javac it would fail.
Signed-off-by: wilderrodrigues <wrodrigues@schubergphilis.com>
This closes #317
Pull the average CPU utilization between polling intervals
instead of using values since boot
(cherry picked from commit 04176eaf17)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
This closes #297
Passing the file argument in the XML breaks EL 7.1; the fix removes
the argument, passing rombar='off' with its file argument as an empty string.
This closes #290
(cherry picked from commit aafa0c80b3)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
EL7 has different output for 'free'; use /proc/meminfo instead of an external tool to be
more consistent across distros
(cherry picked from commit 212a05a345)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
Removing real IPs from the tests because they cause a long running time for LibvirtComputingResourceTest
- On a local machine it takes 1.977 sec, but in a KVM test environment it takes 257.879 sec
Fixing typo on LibvirtRequestWrapper
- Replace linbvirtCommands by libvirtCommands on LibvirtRequestWrapper
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This closes #255
- The test was okay, but when running in an environment where a /root/.ssh/id_rsa existed, it would return true and then fail
- We now mock the calls to methods that return the key paths, instead of relying on the static variables
- Adding LibvirtNetworkElementCommandWrapper and LibvirtStorageSubSystemCommandWrapper
- 2 unit tests added
- KVM hypervisor plugin with 22.2% coverage
I also refactored the StorageSubSystemCommand interface into an abstract class
- Removed the pseudo-multiple-inheritance implementation
- StorageSubSystemCommand was an interface unrelated to the Command class,
yet its implementations were extending the Command class anyway. The whole structure is better now.
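A minimal sketch of the new shape, with simplified names (the real classes live in the agent API):

    // Before, StorageSubSystemCommand was an interface unrelated to Command,
    // yet its implementations extended Command anyway. Now the relationship
    // is explicit in the type hierarchy:
    abstract class Command {
        abstract boolean executeInSequence();
    }

    abstract class StorageSubSystemCommand extends Command {
        private boolean executeInSequence;

        void setExecuteInSequence(final boolean inSequence) {
            executeInSequence = inSequence;
        }

        @Override
        boolean executeInSequence() {
            return executeInSequence;
        }
    }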
- Adding LibvirtPvlanSetupCommandWrapper
- 6 unit tests added
- KVM hypervisor plugin with 21% coverage
From the 6 tests added, 2 were extra tests to increase the coverage of the LibvirtStopCommandWrapper
- Increased from 35% to 78.7%
- Adding LibvirtCopyVolumeCommandWrapper
Refactoring the LibvirtUtilitiesHelper
- Changing method name
Did not add any test to this commit due to the refactor mentioned above.
Will proceed and add the tests
- Gave it a better, more suggestive name, since I have now added other methods to the class.
- It makes it easier to mock objects and get better coverage of the classes
- Adding LibvirtBackupSnapshotCommandWrapper, LibvirtCreatePrivateTemplateFromVolumeCommandWrapper and LibvirtManageSnapshotCommandWrapper
- 3 unit tests added
- KVM hypervisor plugin with 18.3% coverage
Fewer tests were added to those classes because the code is quite complex and way too long.
The tests added are just covering the new flow, to make sure it works fine. I will come back to those classes later.
- Adding LibvirtOvsDestroyBridgeCommandWrapper, LibvirtOvsSetupBridgeCommandWrapper
- 4 unit tests added
- KVM hypervisor plugin with 13.9% coverage
More tests added to cover LibvirtPrepareForMigrationCommandWrapper
- Coverage of this wrapper brought from 37% to 90.6%
- 4 new tests added
- Adding LibvirtCheckConsoleProxyLoadCommandWrapper, LibvirtConsoleProxyLoadCommandWrapper, LibvirtWatchConsoleProxyLoadCommandWrapper and CitrixConsoleProxyLoadCommandWrapper
- 2 unit tests added
- KVM hypervisor plugin with 12% coverage
Refactored the CommandWrapper interface in order to remove executeProxyLoadScan, which is now
implemented by subclasses.
- Adding LibvirtGetHostStatsCommandWrapper
- 1 unit test added
- KVM hypervisor with 10.5% coverage
Tests are a bit limited on this one because of the current implementation. Will clean it up later in a separate branch
- Adding LibvirtGetVmStatsCommandWrapper
- 3 unit tests
Refactored the LibvirtConnection by surrounding it with a wrapper.
- Makes it easier to cover the static/native calls
- Added better coverage to StopCommand tests
- Adding LibvirtStopCommandWrapper
- LibvirtRequestWrapper
- 1 unit test
Refactored the RequestWrapper to make it better.
- Changes also applied to the CitrixRequestWrapper
The Linux kernel supports vmxnet3; allowing it in the KVM plugin lets us
run ESX hosts on KVM hosts using CloudStack, with the vmxnet3 NIC
passed as the VM's nicAdapter detail
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit e02d787f30)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This improvement checks the "guest.cpu.features" property, which is a space-separated
list of CPU features specific to a host. When set, it
will add <feature policy='require' name='{{feature-you-listed}}'/> in the
<cpu> section of the generated VM spec XML.
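A minimal sketch of how the property could be expanded into those elements (hypothetical helper, not the actual LibvirtComputingResource code):

    // Turn "vmx aes ssse3" into <feature policy='require' name='...'/> lines
    // for the <cpu> section of the generated VM XML.
    static String cpuFeaturesToXml(final String guestCpuFeatures) {
        final StringBuilder xml = new StringBuilder();
        if (guestCpuFeatures != null && !guestCpuFeatures.trim().isEmpty()) {
            for (final String feature : guestCpuFeatures.trim().split("\\s+")) {
                xml.append("<feature policy='require' name='").append(feature).append("'/>\n");
            }
        }
        return xml.toString();
    }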
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit ea7fd37783)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The only artifact resolved from libvirt.org was org.libvirt:libvirt:0.5.1.
This artifact is now available from Maven's default central repository.
This closes #180
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
As suggested by Wido on the dev ML, changing the repo to eu.ceph.com to avoid
build failures. Will revert if ceph.com is up again.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit c9fd57fff3)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CentOS 7 does not ship with ifconfig anymore. We should use ip commands instead.
This also works on older versions, like CentOS 6 and Ubuntu 12.x/14.x, that we
support.
This closes #165
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- use sticky chmod 1777 on the mountpoint
- remove dead code
(cherry picked from commit eea716b791)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
1. provide compatibility with the Big Cloud Fabric (BCF) controller
L2 Connectivity Service in both VPC and non-VPC modes
2. virtual network terminology updates: VNS --> BCF_SEGMENT
3. uses HTTPS with trust-always certificate handling
4. topology sync support with BCF controller
5. support multiple (two) BCF controllers with HA
6. support VM migration
7. support Firewall, Static NAT, and Source NAT with NAT enabled option
8. add VifDriver for Indigo Virtual Switch (IVS)
This closes #151
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- With the changes added by the rVPC work, the bump priority became deprecated.
This commit includes a refactor to get it removed from the following resources:
* Java classes
* domain_router table - removing the is_priority_bumpup column
* Fixing unit tests
All changes were tested with:
XenServer 6.2 running under our VMWare zone
CloudStack Management Server running on MacBook Pro
MySQL running on MacBook Pro
Storage Type: Local
Started the refactor of the XenServer56Resource class
- Unit test added
Changing the CitrixRequestWrapper in order to cope with multiple resource classes and commands
Still have to remove a few methods from CitrixResourceBase
- Unit tests added
In the executeRequest I needed to keep the following:

    // We need this one because the StorageSubSystemCommand is from another hierarchy.
    if (cmd instanceof StorageSubSystemCommand) {
        return storageHandler.handleStorageCommands((StorageSubSystemCommand) cmd);
    }
- Added basic tests
- Changed the way RebootCommand gets called from RebootRouterCommand
- Made a couple of methods public in the CitrixResourceBase and its subclasses
SonarQube is great, but it has no context and can be wrong. In this
case File.separator is nice if you're running platform-independent;
the OVM3 agent, however, is NOT platform-independent and will break
if we feed it Windows separators.
The copy command reply can return a null size, so check and set the values
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit 53ca0b1861)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fix: RTNETLINK errors
- The Management Server health check was trying to create an already existing interface
- Changes in update_config.py, cs_guestnetwork.py, merger.py
Fix: replace RROUTER_LOG in CsRedundant.py per the log file location
Fix: guest net address association during router restart
- Changes in NicProfileHelper, NicProfileHelperImpl
Fix: the aggregationExecution() method in VirtualNetworkApplianceManagerImpl
- Do not send an AggregationControlCommand to a non-configured router
Some classes have been formatted.
Added a source column to the user table.
Source now has only two values, UNKNOWN and LDAP, with UNKNOWN being the
default, and is an enum in com.cloud.User.
When the source is UNKNOWN, the old method of authenticating against all
the available authenticators is used. If a source is available, only
that particular authenticator will be used.
Added overloaded methods in AccountService for createUserAccount and
createUser with the source specified.
(cherry picked from commit 5da733072e)
MigrateVMWithVolumes -
1. If the ESXi host version is below 5.1, ensure the destination datastore(s) is mounted on the source host, then migrate the storage, and finally migrate the VM.
If the destination storage(s) is not mounted on the source host,
- in case of NFS storage, mount the storage(s);
- in case of VMFS storage, fail the migration request.
2. If the ESXi host version is 5.1 or above, simultaneously migrate the VM and its storage to the destination host and storage(s) respectively, for both NFS and VMFS storage.
This is a plugin that adds OVM3 support, ranging from 3.3.1 to 3.3.2. Basic
functionality is in here, advanced networking etc.
Snapshots only work when a VM is stopped for now, due to the semantics of OVM's raw
image implementation (so snapshots should work on a storage level underneath the
hypervisor, shrug)
This closes #113
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When there is a large VR configuration (aggregate commands), copying data to the VR using the vmops plugin failed
because of the ARG_MAX size limitation. The configuration data size is around 300KB.
Updated this to create a file on the host by scp with the file contents. This will create the file on the host.
Then copy the file from the host to the VR using the vmops createFileInDomr method.
On the host the file gets created in /tmp/ with the name VR-<UUID>.cfg; once it is copied to the VR, this file is removed.
Refactored to use XPath expressions to check the generated domain XML rather than string comparison.
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Earlier, addition of multiple hosts with local storage failed due to the
same local storage UUID being used when the storage path is the same.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
(cherry picked from commit bf17f640c6)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Fixed some issues with the polling logic to handle scenarios when SSVM is destroyed/down.
Also changed the status of the volume_store_ref entries so that subsequent operations work fine.
KVMStoragePoolManager is a singleton in practice; any plugin
or extension of LibvirtComputingResource will need to act on
the specific instance of KVMStoragePoolManager that LibvirtComputingResource
has initialized. Therefore, expose this variable for those who
wish to call storage commands from plugins or extensions.
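The exposure itself is just an accessor on LibvirtComputingResource, roughly (getter and field names assumed):

    // Return the one KVMStoragePoolManager this resource initialized, so
    // plugins and extensions act on the same instance and its pools.
    public KVMStoragePoolManager getStoragePoolMgr() {
        return _storagePoolMgr;
    }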
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
1. If a VM by the same name exists on a different cluster in the VMware DC, unregister the existing VM and continue with the VM start.
2. If VM start succeeds, delete VM files associated with the unregistered VM.
3. If VM start fails, re-register the unregistered VM.
For AttachVolume/DetachVolume API command, improve user error message in case of RuntimeException by throwing the exception instead of 'Unexpected Exception'.
On calling GetUploadParamsForTemplate, persist the metadata to the DB,
validate the account limits and increment the appropriate limits, and
encode the metadata on the management server using a preshared key.
By default, the iptables rules are updated to ACCEPT egress traffic.
If the network egress default policy is false, CS removes the ACCEPT rule and adds the DROP rule, which
is the egress default rule when there are no other egress rules.
If the CS network egress default policy is true, CS won't configure any default rule for egress, because the
router already came up accepting egress traffic. If there are already egress rules for the network, then the
egress rules get applied on the VR.
For an isolated network without the firewall service, the VR by default allows egress traffic (guest network --> public network)
FindBugs will report an error on this, as it expects true/false for a Boolean value.
But we have a different meaning for null, so it is a false positive from FindBugs
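For context, the intentional three-state use of Boolean looks like this (illustrative only):

    // null means "not specified", which is distinct from TRUE and FALSE;
    // FindBugs flags this pattern, but here the third state is intentional.
    static String egressRule(final Boolean egressDefault) {
        if (egressDefault == null) {
            return "no default egress rule configured";
        }
        return egressDefault ? "ACCEPT egress by default" : "DROP egress by default";
    }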
This closes #39
Look for a VM in vCenter based on both the vCenter name and CS internal name (required in case 'vm.instancename.flag' is enabled).
During Attach Volume and Volume Migration, for lookup and other operations use VM's name as obtained from vCenter instead of using the name set in the agent command.
GPU enabled hosts from non-GPU VM deployment.
Cluster reordering is based on the number of unique host tags in a cluster;
the cluster with the most unique host tags will be put at the end of the list.
Hosts with GPU capability get tagged with implicit tags defined by the
global config param 'implicit.host.tags' at the time of host discovery.
Also added the FirstFitPlannerTest unit test file.
Clearly show if a volume is found and if not, that the pool is being refreshed
and the fetch is tried again.
Due to my commit b53a9dcc9f the chance of a volume
not being found is slightly bigger, but the performance gain is enormous on larger
deployments.
This is why we clearly have to log that we are refreshing the pool information
when a volume is not found.
It could be that a volume is created on host A and a few seconds later host B tries
to access the volume. In that case host B's libvirt doesn't know about the volume
yet and has to refresh the pool before it does.
(cherry picked from commit 4ee82f1f40)
modules can use it.
(cherry picked from commit b6401b04f22b0a5b686c7c477da4c6e0fd18df84)
Conflicts:
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalKickStartServiceImpl.java
GPU enabled hosts from non-GPU VM deployment.
Cluster reordering is based on the number of unique host tags in a cluster;
the cluster with the most unique host tags will be put at the end of the list.
Hosts with GPU capability get tagged with implicit tags defined by the
global config param 'implicit.host.tags' at the time of host discovery.
Also added the FirstFitPlannerTest unit test file.
(cherry picked from commit 39fe766c2b)
Support offline volume resize on ESX by creating a worker VM, attaching the unattached volume to it, and then resizing it.
(cherry picked from commit 65ed25b7a6)
While expunging a volume, CS chooses the endpoint to perform the delete operation by selecting any host that has the storage containing the volume mounted on it.
Instead, if the volume to be deleted is attached to a VM, the endpoint chosen by CCP should be the host that contains the VM.
(cherry picked from commit f1e3e83bbf)
If the CCP thread-local context used to handle connections to a vCenter is being re-used, validate that the context corresponds to the right vCenter API session.
(cherry picked from commit 6b06970366)
On larger (especially RBD) storage pools this can take a lot of
time, slowing down operations like creating volumes.
The getStorageStats command will still ask for a pool to be refreshed, so
that the management server has accurate information about the storage pools.
On larger deployments, with thousands of volumes in one pool, this should
significantly improve storage-related operations
(cherry picked from commit b53a9dcc9f)
While cleaning up the vmsync changes, the CheckVirtualMachine command was updated to return the
power state of the VM. The change was missed for Hyper-V. This caused migration to
fail in CloudStack even though it succeeded on the hypervisor. Updated the
Hyper-V agent code to return the CloudStack-equivalent power state in the check virtual
machine answer.
(cherry picked from commit 5350e61187)
Clearly show if a volume is found and if not, that the pool is being refreshed
and the fetch is tried again.
Due to my commit b53a9dcc9f the chance of a volume
not being found is slightly bigger, but the performance gain is enormous on larger
deployments.
This is why we clearly have to log that we are refreshing the pool information
when a volume is not found.
It could be that a volume is created on host A and a few seconds later host B tries
to access the volume. In that case host B's libvirt doesn't know about the volume
yet and has to refresh the pool before it does.
GPU enabled hosts from non-GPU VM deployment.
Cluster reordering is based on the number of unique host tags in a cluster;
the cluster with the most unique host tags will be put at the end of the list.
Hosts with GPU capability get tagged with implicit tags defined by the
global config param 'implicit.host.tags' at the time of host discovery.
Also added the FirstFitPlannerTest unit test file.
While expunging a volume, CS chooses the endpoint to perform the delete operation by selecting any host that has the storage containing the volume mounted on it.
Instead, if the volume to be deleted is attached to a VM, the endpoint chosen by CCP should be the host that contains the VM.
If the CCP thread-local context used to handle connections to a vCenter is being re-used, validate that the context corresponds to the right vCenter API session.
On larger (especially RBD) storage pools this can take a lot of
time, slowing down operations like creating volumes.
The getStorageStats command will still ask for a pool to be refreshed, so
that the management server has accurate information about the storage pools.
On larger deployments, with thousands of volumes in one pool, this should
significantly improve storage-related operations
While cleaning up the vmsync changes, the CheckVirtualMachine command was updated to return the
power state of the VM. The change was missed for Hyper-V. This caused migration to
fail in CloudStack even though it succeeded on the hypervisor. Updated the
Hyper-V agent code to return the CloudStack-equivalent power state in the check virtual
machine answer.
Fix the bug by removing the CreateEntityDownloadURLCommand from the host delegation. This results in the same SSVM creating the symlink and the same public IP being used for generating the URL on the MS.
1. While destroying a ROOT volume, do the lookup of the associated VM under the DC, not just the cluster.
2. In case of VMware, during VM start, if a volume is being recreated there is no need to detach the old volume, because
we now expunge it immediately and don't wait for the storage cleanup task to run.
During VM start while configuring its disk devices, obtain the matching disk for a volume in storage
using both the volume's path and volume's datastore information.
While adding a host to an existing cluster which uses Nexus 1000v as a network backend, skip validation of the Nexus VSM, as it was already done while adding that cluster.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
determined. To exclude the hosted system we filter the result on Caption='Virtual Machine',
but this string is locale-dependent, so it may not work properly for locales other
than English. To overcome this we now use the ProcessId >= 0 filter
For ResizeVolume API command -
1. If hypervisor resource throws an exception, handle the NPE thrown by the job framework.
2. Improve user error message in case of RuntimeException by throwing the exception instead of 'Unexpected Exception'.
We don't need an external script to investigate the format of the RBD volume;
we only have to ask libvirt to resize the volume, and it will ask librbd to
do so.
1. Fixed JSON response deserialization. While creating a mock, a JSON string can be passed which will be deserialized into a response object and returned from the agent layer.
For e.g. for a mock corresponding to StopCommand, a response like "{"com.cloud.agent.api.StopAnswer":{"result":false,"wait":0}}" can be passed.
2. Ability to mock PingCommand (returned as part of the getCurrentStatus() agent method). As part of this, a mocked VM state report can be returned.
For e.g. {"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"newStates":{},"_hostVmStateReport":{"v-2-VM":{"state":"PowerOn","host":"SimulatedAgent.e6df7732-69b2-429b-9b6a-3e24dddfa2e0"},"i-2-5-VM":{"state":"PowerOff","host":"SimulatedAgent.e6df7732-69b2-429b-9b6a-3e24dddfa2e0"}},"_gatewayAccessible":true,"_vnetAccessible":true,"hostType":"Routing","hostId":3,"contextMap":{},"wait":0}}
in the XenServer pool for GRE tunnel networks
The fix uses the XenServer-recommended way,
Network.other_config:assume_network_is_shared=true,
which ensures the bridge is created automatically on hosts in the pool for
GRE tunnel networks. The fix also gets rid of the error-prone custom logic that ensured
the bridge was created by plugging a VIF into dom0 connected to the
GRE tunnel network.
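With the XenServer Java bindings, setting that flag on the tunnel network is roughly (sketch; exception handling omitted):

    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.Network;

    // Mark the GRE tunnel network as shared so XenServer auto-creates the
    // bridge on every host in the pool.
    void markNetworkShared(final Connection conn, final Network tunnelNetwork) throws Exception {
        tunnelNetwork.removeFromOtherConfig(conn, "assume_network_is_shared");
        tunnelNetwork.addToOtherConfig(conn, "assume_network_is_shared", "true");
    }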
Conflicts:
plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/CitrixResourceBase.java
xen cluster
XenServer does not create a bridge automatically when a VIF from domU is connected
to an internal network. So there is logic to force bridge creation by
creating a VIF in dom0 connected to the GRE tunnel network. But there was no
logic to delete the VIF after the bridge gets created. So this fix ensures the
VIF is deleted when at least one domU VIF is connected to the
network.
In situations where libvirt lost the storage pool, the KVM agent will re-create the
storage pool in libvirt.
This could be when libvirt is restarted, for example.
The object returned internally was missing essential information like the sourceDir
(aka the Ceph pool), the monitor IPs, cephx information and such.
In this case the first operation on this newly created pool would fail. All operations
afterwards would succeed.
to get nic info of 0.0.0.0. To get it, it iterates through all NICs and returns the last NIC in
the list if none matches the IP address. In case the last NIC doesn't have a unicastAddress,
the Hyper-V agent will fail to start. We don't need the IP address during initialization; it gets
initialized with the StartupCommand later.
last VM from the VPC is deleted on a host
OVS distributed routing: ensure the bridge is deleted when the last VM from the
VPC on a host is deleted. This fix ensures that the bridge is
destroyed.
the folder column. For an SMB share the SMB credentials are in the query string of the path.
Before adding the path, the SMB share's query string should be cleaned up.
that case cloudstack was not doing anything and not updating the state of the VMs to Stopped.
Now the agent returns an empty host VM state report. The management server will then update the
VM state to Stopped (instead of not acting upon the returned state).
Check if the switch name detected from the traffic label for management, storage, and control traffic is null before falling back to the default value.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
We used to create the snapshot after the copy from Secondary Storage,
but it could be that we never use the snapshot.
Now we check if the snapshot exists prior to performing the cloning operation
Since we use qemu-img to copy from RBD to Secondary Storage, we no
longer have to force RAW images, but can stick with QCOW2
When the snapshot backups are in QCOW2 format, they can easily be deployed
again when restoring from a backup
The KVMStorageProcessor no longer has a hardcoded if-statement which sets
RBD volumes to RAW; this is now handled in the LibvirtStorageAdaptor.
The Management Server still sends QCOW2 as the format. That's a fix for later.
Fix the mismatch of ovs-host-setup vs. ovs_host_setup used by the libvirt resource and
scripts;
plug the NIC into the OVS bridges created for the tunnel network.
Conflicts:
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/OvsVifDriver.java
In case some environments have different performance, or some commands
take too long to execute, one global configuration item is introduced to
specify the "timeout in seconds per command in aggregation commands".
By default it's 3 seconds. If the admin feels that's too long, it can be adjusted to as
low as 1 second, which still runs well on my machine.
Conflicts:
setup/db/db/schema-430to440.sql
Added a new flag 'checkBeforeCleanup' to StopCommand, based on which a check is done to see if the VM is running on the HV host.
If the VM is running, it is not stopped and the operation bails out.
Also modified the MS code to call the StopCommand with the appropriate value for the flag based on the context.
Currently it is only set to 'true' when called from the new vmsync logic based on the power state of the VM. For the rest it
is set to 'false', meaning no change in behaviour.
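The agent-side guard then looks roughly like this (sketch; accessor, helper and answer-constructor shapes assumed):

    // If asked to check first, and the VM is still running on this host,
    // do not stop it -- bail out so the power-state-based vmsync logic
    // does not kill a live VM.
    if (cmd.checkBeforeCleanup() && isVmRunning(vmName)) {
        return new StopAnswer(cmd, "VM " + vmName + " is still running on host, bailing out", false);
    }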
This reduces the amount of time and storage it takes dramatically. We no longer
do a full copy, but a sparse copy. The destination image is still in RAW
format, but we only copy over used blocks.
Qemu is also better at doing this than we are at doing it in Java code.
Otherwise an RBDException will be thrown with the message that the snapshot
isn't protected.
modified: plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/LibvirtStorageAdaptor.java
clusters
Across the pool members an internal network that is created is visible to all the
members, but the bridge is not necessarily created. This fix enables
plugging a temp VIF connected to the internal network into dom0 and then
unplugging it; this action creates a bridge on the host for the network.
This saves the step of writing to a temporary image in /tmp first before
writing to RBD.
This is possible due to a new version of librbd. With the rbd_default_format
setting we can now force qemu-img to create format 2 RBD images.
This is available since Ceph version 0.67.5 (Dumpling).
Add an executeInVR() interface with a timeout to VirtualRouterDeployer
An AggregationControlCommand with Action.Finish may take longer than a normal command,
since it executes all the commands in one run, and it may result in an
SSH timeout for SshHelper or any other mechanism communicating with the VR.
Introduce a new executeInVR() interface with an added timeout period for waiting for the
FinishAggregationCommand to complete execution.
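The extended deployer interface is then roughly (sketch based on the description above):

    public interface VirtualRouterDeployer {
        ExecutionResult executeInVR(String routerIp, String script, String args);

        // Overload with an explicit timeout for long-running aggregated
        // executions, e.g. AggregationControlCommand with Action.Finish.
        ExecutionResult executeInVR(String routerIp, String script, String args, int timeoutSeconds);
    }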
to flow rules and applies them on the bridge
Add an event subscriber in OvsTunnelManager that listens to
replaceNetworkAcl events. On an event, it sends the updated policy info to all
the hosts in the VPC:
- get the hosts on which a VPC spans, given a VPC id
- get the VMs in the VPC
- get the hosts on which a network spans
- get the VPCs of which a host is part
- get the VMs of a VPC on a host
Introduces the capability to build a physical topology representation of a
VPC. This JSON file is encapsulated in
OvsVpcPhysicalTopologyConfigCommand and is used to send the full topology
to hypervisor hosts. On the hypervisor this JSON config can be used to set up
tunnels, configure bridges, add flow rules etc
OVS guru: use the different broadcast scheme VS://vpcid.gerkey for the
networks in a VPC that use distributed routing;
each VIF and tunnel interface carries the network UUID in the other/options
config
storage pool (SMB) and attached to a running vm can be live migrated to another shared storage
pool. Also a vm and its volumes can be live migrated to another host and storage pool respectively.
2) Corrected some logging in MidoNetPublicNetworkGuru - removed the .toString method call on the objects in the log body, as toString is called on the object by default when using log4j
CLOUDSTACK-4762 : Enabling VGPU support for XenServer.
This feature enables the GPU-passthrough and vGPU functionality.
With the help of this feature, admins/users will be able to leverage
the power of the GPU graphics unit by deploying a virtual machine with GPU or
vGPU support, or by changing the service offering of an existing VM
at any later point in time. These GPU/vGPU enabled VMs are able to run
graphical applications.
For now, this feature is only supported with the XenServer hypervisor, but
it can be extended to add support for other hypervisors.
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
Update volume path for every volume belonging to the VM to the corresponding new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/org/apache/cloudstack/storage/motion/VmwareStorageMotionStrategy.java
In vCenter 5.5, once a volume is migrated the VMDKs are renamed to match the name of the VM.
If a volume has been renamed upon migration, update its volumePath to that of the new disk filename.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
With VirtIO enabled on KVM, FreeBSD 10 supports VirtIO for both the
network and the disks. This frees us from IDE and E1000, which should
also improve performance.
By default all network disks are in RAW format. Gluster works fine with
QCOW2, which has some advantages.
Disks are by default in QCOW2 format. It is possible to run into
a mismatch, where the disk is in QCOW2 format but QEMU gets started
with format=raw. This causes the virtual machines to lock up on boot.
Failures to start a virtual machine can be verified by checking the log
of the virtual machine, and compare the output of 'qemu-img info'.
In /var/log/libvirt/qemu/<VM>.log find the URL for the drive:
-drive file=gluster+tcp://...,format=raw,..
Compare this with the 'qemu-img info' output of the same file, mounted
under /mnt/<pool-uuid>/<img-uuid>:
# qemu-img info /mnt/<pool-uuid>/<img-uuid>
...
file format: qcow2
...
This change passes the format when creating a disk located on RBD
(RAW only) and Gluster (QCOW2).
Signed-off-by: Niels de Vos <ndevos@redhat.com>
The support for Gluster as Primary Storage is mostly based on the
implementation for NFS. Like NFS, libvirt can address a Gluster environment
through the 'netfs' pool-type.
PrepareForMigrationCommand, so that the destination hypervisor can
mount the pool. This further exposed an issue for KVM where the ISO
was not getting cleaned up upon successful migration; fixed as well.
By default only the Integers between -128 and 127 are cached (unless overridden by the java.lang.Integer.IntegerCache.high system property).
If the inbound or outbound values are higher, the reference comparison won't work.
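The underlying pitfall, illustrated with plain Java:

    Integer a = 127, b = 127;
    System.out.println(a == b);      // true: both references come from the Integer cache

    Integer c = 1000, d = 1000;
    System.out.println(c == d);      // false: distinct objects outside -128..127
    System.out.println(c.equals(d)); // true: value comparison is what we want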
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
- minor resource leak cleaned up
- cpu-speed reading method extracted
- test added
- logging added in case of exception
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
This saves us a lot of code and libvirt is probably a better
place to do this.
libvirt-java now has the support we want, so we can now resize volumes
with libvirt.
(C)LVM volumes can't be resized using libvirt, so we have to
invoke a resize script for that.
By default the client_mount_timeout setting in librados is 300 seconds,
but that causes the connection to the Ceph cluster to block for 5 minutes
if the Ceph cluster is not available.
This patch is not ideal, but it mitigates the problem for now.
At a later point all this librados/librbd code should go back to libvirt
again, but the current versions of libvirt in the distributions are
too old for all the features we require.
For now this should prevent the CloudStack agent blocking for 5 minutes
when the Ceph cluster isn't available.
This is also tracked at the Ceph tracker: http://tracker.ceph.com/issues/6507
To obtain network read/write statistics, multiply the sample duration by the
average of the particular performance metric obtained over the sample period.
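In code form the calculation is simply (illustrative only):

    // total over the window = sample duration * average of the metric samples,
    // e.g. kilobytes transferred if the metric is a KBps rate
    static double totalOverWindow(final long[] samples, final double sampleDurationSeconds) {
        double sum = 0;
        for (final long v : samples) {
            sum += v;
        }
        final double average = sum / samples.length;
        return sampleDurationSeconds * average;
    }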
Added a fix for the exception and listing. Details are mentioned under the bug.
Post the fix, the simulator works fine.
Signed-off-by: Santhosh Edukulla <Santhosh.Edukulla@citrix.com>
Signed-off-by: Koushik Das <koushik@apache.org>
Introduced by:
commit ac65f8fddf
Author: Hugo Trippaers <htrippaers@schubergphilis.com>
Date: Mon Jan 20 18:03:02 2014 +0100
CLOUDSTACK-5884 make getTargetSwitch(NicTO nicTo) do all the work to select
switch name, type and vlan token. Change preference to use the tags set on the
physical network.
the management server was restarted. The template.properties file created for the
template has the format field in upper-case. This caused the template service
not to recognise the format, and it removed the entry from the template_store_ref
table in the DB. Fixed the format field in template.properties.
A null pointer exception was generated when a VolumeTO object was
serialized to create an answer object. If local storage is used, the uri
field will be null. Added null checks for the same.
We hit these exceptions whenever a management-server-held session that was established with the old 5.1 vCenter server
is used to make resource calls to the new 5.5 vCenter.
Validate a vCenter session context before it is used to make a resource call,
and if the context is invalid then discard it and retrieve a new one.
During the invalidation of an old context, handle the context disconnect better
by catching the appropriate exception and returning a newly created context.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareContextFactory.java
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareSecondaryStorageResourceHandler.java
vmware-base/src/com/cloud/hypervisor/vmware/util/VmwareContext.java
For a detached volume, don't try to find the associated VM on the hypervisor/peer hypervisor host.
By default create a worker VM to perform snapshot operations.
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
on hyperv. There were multiple issues here. Upload volume was actually
failing because the post-download check for the VHD on the CIFS share was
unsuccessful. Also, the agent code wasn't parsing the volume path correctly.
Fixed that too.
Don't package the OVF and VMDK files into an OVA after a template is created from a volume.
Since the packaging process involves reading from and writing to the NFS mount, it doubles the amount of data that needs to be moved around
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
Instead of injecting an object of VolumeOrchestrationService into VmwareResource, we now populate the command object (MigrateVolumeCommand here) with the required information. Thus we don't need the volume orchestration service to query that information from the resource.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
is attached, a hard disk drive is created on the SCSI controller. On detach,
the data disk is removed from the drive but the disk drive is left behind.
On reattach, the agent was again trying to create a disk drive while one was
already present. Fixed the agent code to look up the disk drive while
attaching, and only if one is not found to create the drive for
attaching a data disk.
The agent was always creating a disk with image format vhdx, but the
CloudStack management server defaults to image format vhd for Hyper-V.
Updated the agent code to be consistent with what CS expects. All disks
are now created with image format vhd.
When a VM is not running, the existing code is unable to retrieve the associated cluster's id. Now we try to get this information using the previous host where the VM was running.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java
DetachIso succeeds even though the detach operation fails, because the CD-ROM is locked by the VM, as it was mounted inside the VM.
Detect whether the CD-ROM is locked or not. If locked, fail the detach operation and warn the user to unmount it before detaching the ISO/CD-ROM device.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Conflicts:
plugins/hypervisors/vmware/src/com/cloud/hypervisor/vmware/resource/VmwareResource.java
plugins/hypervisors/vmware/src/com/cloud/storage/resource/VmwareStorageProcessor.java
vmware-base/src/com/cloud/hypervisor/vmware/mo/VirtualMachineMO.java
deployment fails because of that, as cloudstack tries to deploy it on a host which is
actually down. An investigator wasn't present for Hyper-V, so cloudstack wasn't able to
determine the status of the host. Wrote an investigator for Hyper-V which checks with
other hosts in the cluster for the status of the host being investigated.
host is put in maintenance mode. The migrate flag wasn't set to true in
the maintain answer. This caused cloudstack not to schedule a migration
work item for the VMs on the host. Made a change to set the migrate flag to
true in the maintain answer.
brought back up after being down for a few hours, snapshot jobs do not get
triggered, with the reason "there is other active snapshot tasks on the
instance to which the volume is attached".
The systemvm ISO file is copied only when a systemvm or router VM is to be started on
a host. The file gets copied to the secondary storage. The mount point used is the one
that has permissions for a regular user to mount a share.
CLOUDSTACK-5275: The failure was because a secondary storage wasn't available when the
host was added. When a setup is done through the wizard, the hosts get added before the
secondary storage. CS was trying to copy the systemvm ISO to secondary storage and it used to
fail if it wasn't available. Made a change to copy the ISO only when a systemvm is
being started on a host.
CLOUDSTACK-5202: Made changes to clean up mount points on stop and start.
All three are related fixes, so putting them in one commit.
While VRs upgrade, cloudstack should still be able to work with older
/newer versions of the scripts within VRs. To allow this, the simulator
needs to send the version strings in the domr version response, or the
VR start is interrupted.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
In VMware, during VM start the existing disk information is used to configure the VM. So even if a new disk is created using the new template, the VM continues to use the old disk.
Once the old root disk is marked for destroy, force-expunge it and sync the new disk into the VM folder before VM start
- the result of dividing a long by a long resulted in loss of precision, both for network and IO
- unit tests included
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
Replace vlanId with the broadcast URI to support VXLAN, to identify whether the id is a VLAN ID or a VNI
Signed-off-by: ynojima <mail@ynojima.net>
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
Fixing rebase issues after integrating with wmi v2 implementation.
Removing the executable attribute from some files.
Remove the unused wmi v1 interface file.
Unit test for DestroyCommand implementation in hyperv agent.
Fixed VM state changes w.r.t wmi version 2 changes
If a VM is already running, deploy virtual machine shouldn't fail and throw an exception.
Don't run vhd-util on templates which are present on CIFS. Hyper-V uses CIFS as secondary storage
Add a SCSI controller by default. This is needed so that data volumes can be added/removed
on a running vm.
Remove the hard coded path in the agent code.
RAT fixes for the hyperv agent. Added the missing headers in files where they were missing.
Copy the iso to the secondary storage and let the hypervisor agent know of its
location during setup. The agent will copy it over once it handles the setup
command.
Changes for attaching the systemvm ISO to the virtual router while booting it -
part 2. The agent copies over the systemvm ISO during setup. When a
virtual router is being booted, it attaches the ISO to it.
Hyperv unit tests for the agent. Unit tests are written using NSubstitute and XUnit and
they test the create, stop and start commands in the agent.
Fix to make sure the hyperv agent and the functional tests are working after the unit tests update.
Fixing the warnings while running unit tests for the hyperv agent.
Added a new switch for functional tests.
Update the unit test to create a fake vhd file on the fly and run the test. The file is removed when the test completes.
Fix for functional tests. The test was failing to build on java 1.6.
Fix to bring up SSVM and Console Proxy systemvms
Fix to discover the seeded template to bring up the systemvms for the first startup, and fixed UNC path issues
Fixed the UNC path for copying the files from CIFS, and from seeded template
Fixed the issues for SSVM and CPVM to wait until they get configured and then return the status. Made the checksum method return true.
Fixed the HypervDirectConnect resource to figure out the status of systemvms. Need to fix this issue by connecting to the public/control IP instead of the local IP
The checksum is failing for the copied system VM images; currently bypassing it.
Implemented commands that are required for the VR to boot up and VM deployment to work
Modified the hyperv agent code to deploy the VR with boot args; boot args are passed to the VR using the KVP Exchange component.
Fix for the VR to boot up and get configured with boot args; fixed an issue in VolumeOrchestrator
Implemented SetFirewallRulesCommand in the HyperV resource
Implemented the VR network commands to provide the necessary services from the VR
Fixed the hyperv local storage path URL-encode issue; encoding converts a space to '+'
architecture allows additional functionality to be easily added. Incorporating the plugin in CloudStack will allow
the community to participate in improving the features available with Hyper-V. The plugin uses a Direct Connect
Agent architecture described here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Progress
Add ability to pass kvp data via the key cloudstack-vm-userdata
Rearrange code to make it clearer what .NET objects are being used.
Test failures are easier to deal with if test key is not deleted.
Acquire management/pod ip for control ip when VR deploys in HyperV
Fixed deletion of VMs on the hyperv host when the mgmt server gets restarted due to HA
Implementation for the attach ISO command. Attaches an ISO to a given VM.
Detail: getPhysicalDisk() was not matching on volumes with .raw, so
instead we set the disk format to QCOW2.
BUG-ID: CLOUDSTACK-5018
Bugfix-for:
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1383287538 -0700
Now a VPN connection can be created as "passive", which enables the
remote peer to initiate the connection. So it's possible for a VPC VR to
establish the connection to another VPC VR of CloudStack.
A test case is also included.
The test case creates 2 VPCs and uses VPN to connect them.
1) vxlan will use the bridge scheme 'brvx-<vni>'. Multiple physical networks can host the guest
traffic type with vxlan isolation, so long as they don't use the same VNI range.
2) Guest traffic labels can be a physical interface if a bridge by the given name is not found.
Normally we take the traffic label name, find the matching bridge, then resolve that to a
physical interface, and then create guest bridges on that interface. Now we can just
specify the interface.
When a ROOT volume is created from base template, if a folder already exists for the ROOT volume's VM then replace the old ROOT disk files with the new one.
The simulator uses the default planners of cloudstack and does not
require a separate planner context (as of now). This was just c&p from
baremetal planners.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
XS 6.1/6.2 introduce a new virtual platform, so there are two virtual platforms; the Windows PV driver version must match the virtual platform.
This patch tracks PV driver versions in VM details and template details.
Anthony
Detail: Checks for other Ethernet interface names use startsWith(),
whereas the p1p1-style interface uses a regex that doesn't allow for
trailing characters, and so blocks VLAN IDs. Fixed.
BUG-ID: CLOUDSTACK-4884
Bugfix-for: 4.2.1
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1381965250 -0700
Introduction of a new Transaction API that is more consistent with the style
of Spring's transaction management. The existing Transaction class was renamed
to TransactionLegacy. All of the non-DAO code in the management server has been
updated to use the new Transaction API.
I don't think the host kernel version has any bearing on it. The original code
was tested with CentOS 6.3 and 6.4, but it seems to succeed or fail per host,
e.g. a fast host might work and a slow host might not. I was getting intermittent
failures with Ubuntu 12.04.3 prior to this patch.
These changes are a joint effort between Edison and me to refactor some
of the code around snapshotting VM volumes and creating
templates/volumes from VM volume snapshots. In general, we were working
towards allowing PrimaryDataStoreDrivers to create snapshots on primary
storage and not requiring the snapshots to be transferred to secondary
storage.
High level changes:
- Added a uuid to NfsTO, SwiftTO & S3TO to cut down on the requirement for
PrimaryDataStoreTO and ImageStoreTO, which don't really serve much of a
purpose
- Initial work towards enabling reverting a VM volume from snapshots
- Added hypervisor commands for introducing and forgetting new hypervisor
objects (snapshots, templates & volumes)
Signed-off-by: Edison Su <sudison@gmail.com>
ACS is now comprised of a hierarchy of Spring application contexts.
Each plugin can contribute configuration files to add to an existing
module or create its own module.
Additionally, for the mgmt server, ACS custom AOP is no longer used
and instead we use Spring AOP to manage interceptors.
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context. This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add a modifyvxlan.sh script that handles bridge/vxlan interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resources still say 'VLAN' in the log even if VXLAN is used
- In the UI, "Network - GuestNetworks" doesn't display the VNI
-- The VLAN ID field displays "N/A"
- Documentation!
Signed-off-by: Toshiaki Hatano <haeena@haeena.net>
Libvirt reports:
org.libvirt.LibvirtException: Storage volume not found: no storage vol
with matching name
in some cases, if the volume is created on one KVM host while accessed
from another host.
It's possibly due to concurrent (read/write) access to storage.
The current fix is to try several times, waiting 30 seconds for
each retry.
If the issue is still there, then we need to sync the storage pool access
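The retry has roughly this shape with libvirt-java (sketch; constants and names illustrative):

    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;
    import org.libvirt.StorageVol;

    static final int RETRIES = 3;
    static final long WAIT_MS = 30_000L;

    // Retry the lookup, refreshing the pool and waiting between attempts, in
    // case another host created the volume and libvirt has not seen it yet.
    StorageVol getVolumeWithRetry(final StoragePool pool, final String volName) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return pool.storageVolLookupByName(volName);
            } catch (LibvirtException e) {
                if (attempt >= RETRIES) {
                    throw e;
                }
                pool.refresh(0);       // re-scan the pool contents
                Thread.sleep(WAIT_MS); // give the storage time to settle
            }
        }
    }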
CLOUDSTACK-4457:
CLOUDSTACK-4459:
Harden KVM getVolume. It's possible that a volume created on one KVM host won't show up on another host; try more times to refresh the storage pool if the volume doesn't show up
Conflicts:
engine/storage/integration-test/test/org/apache/cloudstack/storage/test/FakeDriverTestConfiguration.java
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
There still exist two issues after Edison's commits.
(1) Migration from new hosts to old hosts failed.
The bridge name on an old host is set to cloudVirBr* if network.bridge.name.schema is set to 3.0 in /etc/cloudstack/agent/agent.properties, but the actual bridge name is breth*-* after running cloudstack-agent-upgrade.
(2) All ports of VMs (Basic zone, or Advanced zone with security groups) on old hosts are open, because the iptables rules are bound to the device (bridge) name, which is changed by cloudstack-agent-upgrade.
After this, the KVM upgrade steps:
a. Install 4.2 cloudstack agent on each kvm host
b. Run "cloudstack-agent-upgrade". This script will upgrade all the existing bridge names to the new bridge names, and update the related firewall rules.
c. install a libvirt hook:
c1. mkdir /etc/libvirt/hooks
c2. cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
c3. chmod +x /etc/libvirt/hooks/qemu
c4. service libvirtd restart
c5. service cloudstack-agent restart
Signed-off-by: Wei Zhou <w.zhou@leaseweb.com>
The migrate method from libvirt supports passing down a different XML for the running
instance to the target hypervisor.
This enables VNC to bind to the private IP address of the hypervisor, and during
migration this will be changed to the private IP address of the target host.
This way VNC doesn't listen world-wide and is much safer.
When secondary storage is mounted read-only, changing permissions of files on it will fail. But we should still stick to the current mount point instead of
returning the wrong mount point /mnt/sec
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add a modifyvxlan.sh script that handles bridge/vxlan interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses same tables that VLAN uses to store vNet allocation status.
Known Issue:
- Some resources still say 'VLAN' in the log even if VXLAN is used
- In the UI, "Network - GuestNetworks" doesn't display the VNI
-- The VLAN ID field displays "N/A"
This failed due to a RAW -> QCOW2 conversion (again).
The current code still makes too many assumptions about everything always
being QCOW2, while that is not always true.
UI support for baremetal PXE server
CloudStack CLOUDSTACK-1364
UI support for baremetal DHCP server
Conflicts:
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BareMetalPingServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalKickStartServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalPxeManagerImpl.java
KVM - Create template from volume
Vmware - Create template from volume / Create template from snapshot
Send the physical size in the CopyCommand, which accordingly will populate the template_store_ref and the usage_event tables with the right physical size
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
XS: Creating templates from volume - send the physical size in the CopyCommand, which accordingly will populate the template_store_ref and the usage_event tables with the right physical size
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
If all the VM's volumes are on a zone-wide primary storage pool, then live migration of the VM does not involve storage migration; hence the MigrateVM API is called instead of MigrateVMWithVolume. So far PrepareForMigrationCommand handled scenarios of a VM moving across hosts within a cluster, but with zone-wide primary storage in the picture this command needs to handle scenarios of a VM moving across clusters. Try to find the VM in the datacenter if it is not found within the cluster.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
The simulator should revert back to CLOUD_DB after its operations on
SIMULATOR_DB, or the cloudstack connections go to the simulator instead
of the cloud DB.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 3d39716c8f)
Although libvirt supports resizing RBD volumes (and other formats), the
Java bindings (libvirt-java) don't.
Right now we use the Java bindings for librbd to handle the resizing for us,
but in the future this should be done by libvirt rather than these
Java bindings.
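With the rados-java bindings the resize is along these lines (sketch; pool, image and auth values hypothetical, error handling omitted):

    import com.ceph.rados.IoCTX;
    import com.ceph.rados.Rados;
    import com.ceph.rbd.Rbd;
    import com.ceph.rbd.RbdImage;

    // Resize an RBD image directly through librbd instead of libvirt-java.
    Rados rados = new Rados("admin");
    rados.confSet("mon_host", "10.0.0.1");
    rados.confSet("key", "<cephx-secret>");
    rados.connect();
    IoCTX io = rados.ioCtxCreate("cloudstack-pool");
    Rbd rbd = new Rbd(io);
    RbdImage image = rbd.open("volume-uuid");
    image.resize(21474836480L); // grow the image to 20 GiB
    rbd.close(image);
    rados.ioCtxDestroy(io);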
- ManagementServerSimulatorImpl is not injected by the default context.
The configureSimulatorCmd API was loaded as part of it. Use
SimulatorManagerImpl as a PluggableService to inject the configureSimulator
API.
- Remove unused ManagementServerSimulatorImpl.
- Rename ConfigureSimulator to ConfigureSimulatorCmd for uniformity with
all API Cmds
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 0c294a50a8)
Make VSM-specific input parameters optional while adding a VMware cluster where no traffic is chosen to use the Nexus 1000v dvSwitch, even when the cloud-level vSwitch is Nexus 1000v.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CS used to access the VNC server in XenServer dom0 to get the VM console; now CS moves to using the XenServer console API. The getvncport plugin is not needed any more.
Remove the code related to getvncport in XenServer
While finding the endpoint to send the migrate volume command, in case of a zone-wide storage pool, the storage engine picks a host randomly among all connected hosts. Hence a host may be from a different cluster t
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Detail: CloudStack tries to stop VMs all the time, for all sorts of reasons,
but usually just to get into a known state. Libvirt throws a
'Domain not found' exception when attempting to stop a VM that doesn't exist,
which causes problems when troubleshooting real issues. 'Domain not found'
should equate to success when trying to stop.
BUG-ID: CLOUDSTACK-4011
Bugfix-for: 4.2
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1375825281 -0600
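A minimal sketch of that behavior with the libvirt-java bindings (the helper name is illustrative):

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    // Treat "Domain not found" as success: the VM is already gone, which is
    // exactly the state a stop request is trying to reach.
    static boolean stopVm(Connect conn, String vmName) {
        try {
            Domain dm = conn.domainLookupByName(vmName);
            dm.destroy();
            return true;
        } catch (LibvirtException e) {
            if (e.getMessage() != null && e.getMessage().contains("Domain not found")) {
                return true;
            }
            return false;
        }
    }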
Detail: KVM recently got a patch that did away with a few dozen ssh calls
when programming virtual router (CLOUDSTACK-3163), saving several seconds
for each vm served by the virtual router when the router is rebooted. This
patch updates Xen to use the same method, and cleans up the old script refs.
Reviewed-by: Sheng Yang, Prasanna Santhanam
The method getCommandHostDelegation(long hostId, Command cmd) got overridden in VmwareGuru.java as part of
commit bfe30cd2e3. Earlier there was no HV-specific implementation, and copying a
volume from secondary to primary worked fine. With the VMware-specific change, the code was getting hit even
in the case of XS and other hypervisors and failed with an NPE.
Now there is a check in the VMware implementation to verify that the HV is of type VMware.
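The guard amounts to something like the following (simplified; the real override lives in VmwareGuru and its exact signature may differ):

    // Skip VMware-specific delegation for hosts of any other hypervisor type,
    // so XS and friends never reach the code path that used to NPE.
    public long getCommandHostDelegation(long hostId, Command cmd) {
        HostVO host = hostDao.findById(hostId);
        if (host == null || host.getHypervisorType() != Hypervisor.HypervisorType.VMware) {
            return hostId; // keep the original target host
        }
        // ... VMware-specific delegation logic ...
        return hostId;
    }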
Marked the new system template as dynamicallyScalable
- handled the upgrade case
- moved the "dynamicallyScalable" flag from user_vm_details to the vm_instance table to support dynamic scaling of system VMs
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
If the details have information on cores per socket, create the instance accordingly. The vm record
is populated with information on how many cores should be allowed per socket.
When adding a host to the pool it may take a little longer to join the pool. Increased the time the
resource waits and checks to make sure the host has joined the pool as a slave.
Include the createTemplateFromSnapshot in the storageprocessor.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit e8383121c60e7c815ba1990df1e8a440f7d87188)
Send a success answer of type CopyCmdAnswer with the full snapshot backup path in the secondary store (image store role).
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
- Move the vnetBridge cleanup function from LibvirtComputingResource to BridgeVifDriver,
-- since only BridgeVifDriver has to handle this event
- LibvirtComputingResource now properly calls VifDriver.unplug() when it receives an UnPlugCommand (a sketch of the dispatch follows below)
- Remove the broken and no-longer-used method getVnet(String) from VirtualMachineName
- Remove the broken and no-longer-used method getVnet() from StopCommand
- Remove the unused constructor StopCommand(VirtualMachine, String, boolean) from StopCommand
- Remove the unused member vnet from StopCommand
- Remove the unused member _modifyVlanPath from OvsVifDriver
Tested with 2 KVM hosts and confirmed it correctly manipulates vnetBridge on start, stop, migrate, plug, and unplug events
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
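The unplug dispatch is essentially the following (a sketch, assuming the getAllVifDrivers() helper and libvirt's InterfaceDef):

    // Give every configured VifDriver a chance to clean up each interface of
    // the stopping VM; only the driver that owns the bridge will act on it.
    for (InterfaceDef iface : ifaces) {
        for (VifDriver vifDriver : getAllVifDrivers()) {
            vifDriver.unplug(iface);
        }
    }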
Fix the name of the interface id in the extra config, so the nic is looked up using the nic uuid instead of the vm id.
getVlanInfo should return null when the nic is plugged into an Lswitch network.
On a distributed portgroup all VMs should be connected to a single
portgroup, with a specific vlan set on each port. On a standard switch each nic should have its own portgroup with a dedicated vlan (not related to CS vlans).
For legacy zones, insist on the old model of asking for credentials for each cluster being added.
For non-legacy zones, check if username & password are provided. If either or both are missing, try to retrieve & use the credentials from the database, which were provided earlier while adding the VMware DC to the zone.
This lets the user skip specifying credentials while adding a VMware cluster to a zone, if the credentials are the same as those provided while adding the VMware DC to the zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Description:
a) Fixing an NPE when a wrong path is provided for the primary datastore.
b) No error dialog shows up in the GUI when a wrong path is provided;
after the NPE fix - propagating the exception upward.
c) If the KVM agent is down, an invalid datastore gets logged in the
storage_pool table and doesn't get removed, so it shows up
in the GUI in the list of datastores - fixing this as well.
The listVmwareDcs API is added. It takes one mandatory parameter, zoneId, and
returns the list of VMware DCs associated with the specified zone.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
The following hotadd memory checks are included now:
1) Check if the guest operating system supports memory hotadd.
2) Check if the virtual machine is using hardware version 7 or later before enabling memory hotadd.
The following hotadd CPU checks are included now:
1) Check if the guest operating system supports cpu hotadd.
2) Check if the virtual machine is using hardware version 8 or later.
3) Check if the virtual machine has only 1 core per socket. If the hardware version is 7, only 1 core per socket is supported; hot adding multi-core vcpus is not allowed on hardware version 7.
A sketch of these checks as code follows below.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
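One reading of those checks as code (a sketch; the helper names and the integer hardware version are illustrative - ESXi reports versions as strings like "vmx-08"):

    // Memory hotadd: guest support plus hardware version 7 or later.
    static boolean memoryHotAddAllowed(boolean guestSupportsIt, int hwVersion) {
        return guestSupportsIt && hwVersion >= 7;
    }

    // CPU hotadd: guest support plus hardware version 8 or later; on
    // version 7 only single-core-per-socket hot add is possible.
    static boolean cpuHotAddAllowed(boolean guestSupportsIt, int hwVersion, int coresPerSocket) {
        if (!guestSupportsIt) return false;
        if (hwVersion >= 8) return true;
        return hwVersion == 7 && coresPerSocket == 1;
    }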
The problem was that in CloudStack, when a VM is stopped it gets destroyed on the host. For a
Windows VM, the timeoffset (which can be set by changing the timezone from within the VM) is stored
in the platform:timeoffset attribute of the vm record; this information is lost when the VM is destroyed.
Made a change to read and persist the platform:timeoffset vm attribute when an instance is stopped.
The value is persisted in the user_vm_details table. When the VM is started again the attribute is
set for the vm instance that gets created.
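A sketch of the stop-time half with the XenServer SDK bindings (the DAO call is illustrative):

    // Read platform:timeoffset from the vm record before it is destroyed
    // and stash it so the next start can re-apply it.
    Map<String, String> platform = vm.getPlatform(conn);
    String timeOffset = platform.get("timeoffset");
    if (timeOffset != null) {
        userVmDetailsDao.persist(vmId, "timeoffset", timeOffset); // illustrative DAO call
    }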
Retrieve the maximum hotadd memory limit & hotadd memory increment size from the running VM's configuration to validate the dynamic scale memory limit while scaling up a VM.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
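The validation boils down to arithmetic like this (a sketch; names are illustrative):

    // Reject a scale-up request that exceeds the hotadd ceiling or does not
    // land on the increment boundary reported by the running VM's config.
    static void validateMemoryScaleUp(long requestedMb, long currentMb, long maxHotAddMb, long incrementMb) {
        if (requestedMb > maxHotAddMb) {
            throw new IllegalArgumentException("requested " + requestedMb
                    + " MB exceeds the hotadd limit of " + maxHotAddMb + " MB");
        }
        if (incrementMb > 0 && (requestedMb - currentMb) % incrementMb != 0) {
            throw new IllegalArgumentException("memory must grow in multiples of "
                    + incrementMb + " MB");
        }
    }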
Now the ESXi server license is fetched to see whether the HOTPLUG feature is licensed or not.
Throw an exception if the feature is not licensed.
Also added the FeatureKeyConstants enum type to maintain the list of features to be checked for licensing.
Check if the associated VMware DC matches the one specified in the API params. This check yields success as the association exists between the same entities (zone and VMware DC).
This scenario results if the addVmwareDc API is called more than once with the same parameters. For such a scenario, return a success response with the VmwareDatacenter object read from the database. Also return success if addVmwareDc is called again with the same zone and VMware DC where the zone already has resources (clusters etc.) added to it.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
On adding secondary storage there is no need to report back to the resourceManager since
the secondary storage is no longer a directly connected host.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
Currently it is very hard to see which tasks are pending in CloudStack
that need to be executed by xapi. Ideally we would like to have a
central overview of all tasks. But for now, being able to
enable trace logging should give some insight into what is going on.
- writeToFile removed since there were no references to it
- readFileAsString replaced with FileUtils.readFileToString (example below)
- minor code duplication removed in the dependent method getNicStats
- unit test added
Signed-off-by: Laszlo Hornyak <laszlo.hornyak@gmail.com>
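The replacement is a one-liner with commons-io (the path is illustrative):

    import java.io.File;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import org.apache.commons.io.FileUtils;

    // commons-io handles opening, buffering, and closing the file;
    // readFileToString throws IOException on failure.
    String contents = FileUtils.readFileToString(new File("/proc/net/dev"), StandardCharsets.UTF_8);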
instead of a dynamically configured one.
Refactor bits of the HypervisorHostHelper to better
support multiple BroadcastDomainTypes
Clean up some parts of the code that were using tab indents and remove some dead code.
Read the global configuration setting while configuring VmwareManager.
Also enabling autoExpand feature for supported distributed virtual switch versions.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
The file cloudstack-oss.asf/client/target/cloud-client-ui-4.2.0-SNAPSHOT/WEB-INF/classes/scripts/util/ipmi.py
is not marked as executable, so we cannot execute it directly.
Invoke "/usr/bin/python" for it. The location of the python file may be different
on different machines.
1. Baremetal doesn't have secondary storage, so we don't need to check it.
2. The new "AddBaremetalHostCmd" hasn't been used by the UI, so keep the validity
checking out for now. "AddHostCmd" still works.
3. Baremetal hasn't implemented the multiple IP range feature (CLOUDSTACK-702);
return true for now for a single range.
Introduced the SimulatorStorageProcessor to handle image store related
commands. Right now only mock implementations are provided: there is no error
handling or logging, and runtime exception scenarios are limited. SystemVMs are
able to start up, but the default built-in template has trouble getting
to the Ready state.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
CLOUDSTACK-3042 - handle scaling up of VM memory/CPU based on the presence of XS tools in the template
This also takes care of updating the VM after XS tools are installed in the VM, setting memory values accordingly to support dynamic scaling after a stop/start of the VM.
Signed-off-by: Abhinandan Prateek <aprateek@apache.org>
Removed forced sequencing for a number of commands participating in the VM deployment process, as parallel deployment is supported on the hypervisor side (see the sketch at the end of this note).
The behavior is controlled by global config variables:
"execute.in.sequence.hypervisor.commands" (false by default) sets/resets the synchronization for these commands:
=========================
StartCommand
StopCommand
CreateCommand
CopyVolumeCommand
"execute.in.sequence.network.element.commands" (false by default) sets/resets the synchronization for commands:
==========================
DhcpEntryCommand
SavePasswordCommand
UserDataCommand
VmDataCommand
As a part of the fix, increased the global lock timeout to 30 mins in several VR scripts:
===========================
edithosts.sh
savepassword.sh
userdata.sh
to support situations when multiple concurrent calls to the script are being made.
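On the agent side, each command class decides sequencing via Command.executeInSequence(); a sketch of wiring that to the new globals (the static setter and its wiring are illustrative):

    // When the global flag is false, the management server may dispatch this
    // command type to the hypervisor in parallel instead of serializing it.
    public class StartCommand extends Command {
        private static boolean s_executeInSequence = false;

        // populated at startup from "execute.in.sequence.hypervisor.commands"
        public static void setExecuteInSequence(boolean value) {
            s_executeInSequence = value;
        }

        @Override
        public boolean executeInSequence() {
            return s_executeInSequence;
        }
    }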