To make sure that a qcow2 image won't be corrupted by the snapshot deletion procedure that is performed after copying the snapshot to a secondary store, I'd propose to put the VM into a suspended state.
Additional reference: https://bugzilla.redhat.com/show_bug.cgi?id=920020#c5
Fixes #3193
* Improvements to uploading direct download certificates
* Move upload direct download certificate logic to KVM plugin
* Extend unit test certificate expiration days
* Add marvin tests and command to revoke certificates
* Review comments
* Do not include revoke certificates API
Since the CloudStack virtual router was redesigned in version 4.6, it has been observed that the DHCP leases file is not persistent across network operations. This causes conflicts with guest VMs' static IPs, which then fail to be renewed by the DHCP server (dnsmasq) running on isolated and VPC networks' virtual routers. On stopping or destroying a VM, its dhcp/dns records are not removed from the virtual router, causing ghost effects.
Fixes #3272, Fixes #3354
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
We do NOT always reserve VMware CPU/RAM resources; we reserve them only when the "vmware.reserve.cpu" or "vmware.reserve.mem" setting is set to TRUE, and when we do, we do so regardless of whether overprovisioning is active or not. Verified for both system VMs and user VMs.
* DPDK vHost User mode selection
* SQL text field and DPDK classes refactor
* Fix NullPointerException after refactor
* Fix unit test
* Refactor details type
This adds a memory used column to the instance metrics view. It also
fixes a bug for VMware due to which incorrect memory usage was returned.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When I used SandyBridge as a custom CPU in my testing, the VM failed to start due to the following error:
```
org.libvirt.LibvirtException: unsupported configuration: guest and host CPU are not compatible: Host CPU does not provide required features: avx, xsave, aes, tsc-deadline, x2apic, pclmuldq
```
With this patch, it works with the following settings in agent.properties:
```
guest.cpu.mode=custom
guest.cpu.model=SandyBridge
guest.cpu.features=-avx -xsave -aes -tsc-deadline -x2apic -pclmuldq
```
The VM CPU is then defined as below:
```
<cpu mode='custom' match='exact'>
<model fallback='allow'>SandyBridge</model>
<feature policy='disable' name='avx'/>
<feature policy='disable' name='xsave'/>
<feature policy='disable' name='aes'/>
<feature policy='disable' name='tsc-deadline'/>
<feature policy='disable' name='x2apic'/>
<feature policy='disable' name='pclmuldq'/>
</cpu>
```
On first startup, the management server creates a random SSH
keypair using ssh-keygen and saves it in the database. The command did
not request keys in PEM format, which is no longer the default format
generated by the latest ssh-keygen tool.
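As a sketch of the fix (the exact flags and key path here are illustrative, not the shipped code), forcing PEM output makes the generated key parseable regardless of the installed OpenSSH version:
```java
import java.io.IOException;

public class SshKeypairSketch {
    // Illustrative: ssh-keygen -m PEM forces the classic PEM container,
    // since newer ssh-keygen releases default to the OpenSSH key format.
    public static void generate() throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "ssh-keygen", "-t", "rsa", "-m", "PEM", "-N", "", "-f", "/tmp/id_rsa.cloud");
        pb.inheritIO();
        pb.start().waitFor();
    }
}
```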
The systemvmtemplate always needs re-building whenever there is a change
in the cloud-early-config file. This tries to fix that by introducing a
stage-2 bootstrap.sh into which the hypervisor detection specific changes
are refactored/moved. The initial cloud-early-config now only patches
before the other scripts are called.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes the issue that VM with VMsnapshots fails to start after
extract volume is done on a stopped VM, on VMware.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Fixes PR #3146 db cleanup to the correct 4.12->4.13 upgrade path
- Fixes failing unit test due to jdk specific changes after forward
merging
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new patching script for patching systemvms on KVM
using qemu-guest-agent that runs inside the systemvm on startup. This
also removes the vport device which was previously used by the legacy
patching script and instead uses the modern and new uniform guest
agent vport for host-guest communication.
Also updates the systemvmtemplate build config to use the latest Debian
9.9.0 iso.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* ubuntu16: fix unable to add host if cloudbrX is not configured
While adding an Ubuntu 16.04 host with a native eth0 (cloudbrX is not configured),
the operation failed and I got the following error in /var/log/cloudstack/agent/setup.log
```
DEBUG:root:execute:ifconfig eth0
DEBUG:root:[Errno 2] No such file or directory
File "/usr/lib/python2.7/dist-packages/cloudutils/serviceConfig.py", line 38, in configration
result = self.config()
File "/usr/lib/python2.7/dist-packages/cloudutils/serviceConfig.py", line 211, in config
super(networkConfigUbuntu, self).cfgNetwork()
File "/usr/lib/python2.7/dist-packages/cloudutils/serviceConfig.py", line 108, in cfgNetwork
device = self.netcfg.getDefaultNetwork()
File "/usr/lib/python2.7/dist-packages/cloudutils/networkConfig.py", line 53, in getDefaultNetwork
pdi = networkConfig.getDevInfo(dev)
File "/usr/lib/python2.7/dist-packages/cloudutils/networkConfig.py", line 157, in getDevInfo
elif networkConfig.isBridge(dev) or networkConfig.isOvsBridge(dev):
```
The issue is caused by commit 9c7cd8c248
2017-09-19 16:45 Sigert Goeminne ● CLOUDSTACK-10081: CloudUtils getDevInfo function will now return "bridge" instead o
* ubuntu16: Stop service libvirt-bin.socket while adding a host
The libvirt-bin.socket service is started when adding an Ubuntu 16.04 host:
DEBUG:root:execute:sudo /usr/sbin/service libvirt-bin start
However, the libvirt-bin service will be broken by it after restarting.
Stopping the libvirt-bin.socket service fixes the issue.
An example is given as below.
```
root@node32:~# /etc/init.d/libvirt-bin restart
[ ok ] Restarting libvirt-bin (via systemctl): libvirt-bin.service.
root@node32:~# virsh list
error: failed to connect to the hypervisor
error: no valid connection
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
root@node32:~# systemctl stop libvirt-bin.socket
root@node32:~# /etc/init.d/libvirt-bin restart
[ ok ] Restarting libvirt-bin (via systemctl): libvirt-bin.service.
root@node32:~# virsh list
Id Name State
----------------------------------------------------
```
* ubuntu16: Disable libvirt default network
By default, libvirt creates the default network virbr0 on KVM hypervisors.
If a VM uses the same IP range 192.168.122.0/24, there will be issues.
In some cases, running tcpdump inside the VM shows the IP of the KVM hypervisor as the source IP.
* Mock Scanner instead of scanning the computer running the test.
This allows non-Linux machines to run the tests without scanning for a
non-existent /proc/meminfo.
* Fix libvirt wrapper unit tests on 'other' platforms (#3)
* Fix XenServer Security Groups 'vmops' script
- fix tokens = line.split(':') to tokens = line.split(';')
- fix expected tokens size from 5 to 4
- enhance logs
- remove unused vmops script. The XCP patch points to the vmops script
on the parent folder [1]. Thus, all XenServer versions are considering
the vmops script located at [2].
- fix UI ipv4/ipv6 cidr validator to allow a list of cidrs.
Fixing issue: #3192 Security Group rules not applied at all for
XenServer 6.5 / Advanced Zone
https://github.com/apache/cloudstack/issues/3192
* Update security group rules after VM migration
Add security group rules on target host
Cause: vmops script expected secondary IPs as "0;" but received "0:"
Remove security group network rules on source host.
Cause: destroy_network_rules_for_vm function on vmops script was not
called when migrating VM
* Add unit tests and address reviewers
* Keep iotune section in the VM's XML after live migration
When live migrating a KVM VM between local storage pools, the VM loses the
<iotune> section of its XML and therefore has no IO limitations.
This commit removes the piece of code that deletes the <iotune> section
in the XML.
* Add test for replaceStorage in LibvirtMigrateCommandWrapper
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* Fix Javadoc for method replaceIpForVNCInDescFile
* feature: add libvirt / qemu io bursting
Adds the ability to set bursting features from libvirt / qemu.
This allows you to utilize the temporary iops and bytes "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.
https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/
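For illustration, a disk's <iotune> section using the base and burst elements could look like this (values are placeholders; see the libvirt domain XML documentation for the full element list):
```
<disk type='file' device='disk'>
  <iotune>
    <total_iops_sec>500</total_iops_sec>
    <total_bytes_sec>52428800</total_bytes_sec>
    <total_iops_sec_max>2000</total_iops_sec_max>
    <total_bytes_sec_max>104857600</total_bytes_sec_max>
    <total_iops_sec_max_length>60</total_iops_sec_max_length>
  </iotune>
</disk>
```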
* updates per rafael et al
* api: add command to list management servers
* api: add number of management servers in listInfrastructure command
* ui: add block for management servers on infra page
* api name resolution method cleanup
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations
* Fix indentation of marvin test file and reformat against PEP8
* Fix a few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.
* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one
* Fix unhandled NPE during VM migration
* Revert back to distinct event descriptions for VM to host or storage pool migration
* Reformat test_primary_storage file against PEP-8 and Remove unused imports
* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
The KVM Agent had two mechanisms for reporting its capabilities
and memory to the Management Server.
On startup it would ask libvirt the amount of Memory the Host has
and subtract and add the reserved and overcommit memory.
However, when the HostStats were reported to the Management Server,
these two configured values on the Agent were no longer reflected
in the statistics, thus showing all the available memory in the
Agent/Host to the Management Server.
This commit unifies this by using the same logic on Agent Startup
and during statistics reporting.
This was reported by a 4GB Hypervisor with the setting host.reserved.mem.mb=1024:
memory=3069636608, reservedMemory=1073741824
The GUI (thus API) would then show:
Memory Total 2.86 GB
This way the Agent properly 'lies' to the Management Server about its
capabilities in terms of Memory.
This is very helpful if you want to overprovision or undercommit machines
for various reasons.
Overcommitting can be done when KSM or ZSwap or a fast SWAP device is
installed in the machine.
Underprovisioning is done when the Host might run other tasks than a KVM
hypervisor, for example when it runs in a hyperconverged setup with Ceph.
In addition, many internal values have been changed from a Double to a Long
and now store the amount in bytes instead of kilobytes.
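Conceptually, the unified calculation looks like this sketch (hypothetical helper, not the actual Agent code):
```java
public final class HostMemorySketch {
    // The same formula is applied at agent startup and in the periodic host
    // statistics, so the management server always sees a consistent value.
    public static long reportedMemoryBytes(long libvirtTotalBytes,
                                           long reservedBytes,
                                           long overcommitBytes) {
        // e.g. a ~4GB host with host.reserved.mem.mb=1024 reports ~2.86 GB
        return libvirtTotalBytes - reservedBytes + overcommitBytes;
    }
}
```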
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* security group: Replace deprecated optparse by argparse
Starting with Python 2.7 the optparse library has been deprecated
in favor of argparse.
This commit replaces the use of optparse with argparse.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* security group: Remove LXC support from security_group.py
LXC does not work and has been partially removed from CloudStack already
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* security group: Refactor libvirt code
Use a single function which properly throws an Exception when the
connection to libvirt fails.
Also simplify some logic, make it PEP-8 compatible and remove an unused
function from the code.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* security group: Raise Exception on execute() failure
If the executed command exits with a non-zero exit status we should
still return the output of the command, but also raise an Exception.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* security group: Use a function to determine the physical device of a bridge
We cannot safely assume that the first device listed under a bridge is the
physical device.
With VXLAN isolation a vnet device can be attached to a bridge prior to the
vxlanXXXX device being attached.
We need to filter out those devices and then fetch the physical device attached
to the bridge.
In addition, use the 'bridge' command instead of 'brctl'. 'bridge' is part of the
iproute2 utils just like 'ip' and should be considered the new default.
This command is also available on EL6 and does not break any backwards compatibility.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* security group: --set is deprecated, use --match-set
These messages are seen in the KVM Agent log:
--set option deprecated, please use --match-set
Functionality does not change
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* security group: PEP-8 and indentation fixes
There were a lot of styling problems in the code:
- Missing or excess whitespace
- CaMelCaSe function names and variables
- 2-space indentation instead of 4 spaces
This commit addresses those issues.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The additional queues can enhance the performance of the VirtIO SCSI disk,
and it is recommended to set this to the number of vCPUs an Instance is assigned.
The optional queues attribute specifies the number of queues for the
controller. For best performance, it's recommended to specify a value matching
the number of vCPUs. Since 1.0.5 (QEMU and KVM only)
Source: https://libvirt.org/formatdomain.html#elementsVirtio
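As an illustration, the resulting controller definition for a 4-vCPU Instance would be:
```
<controller type='scsi' model='virtio-scsi'>
  <driver queues='4'/>
</controller>
```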
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The static method syncVolumeToRootFolder() from VmwareStorageLayoutHelper.java:146 was being called incorrectly, leading to an infinite recursive call that ends in a StackOverflowError. This PR fixes this.
```
public static void syncVolumeToRootFolder(DatacenterMO dcMo, DatastoreMO ds, String vmdkName, String vmName) throws Exception {
    syncVolumeToRootFolder(dcMo, ds, vmdkName, null);
}
```
->
```
public static void syncVolumeToRootFolder(DatacenterMO dcMo, DatastoreMO ds, String vmdkName, String vmName) throws Exception {
    syncVolumeToRootFolder(dcMo, ds, vmdkName, vmName, null);
}
```
* Allow KVM VM live migration with ROOT volume on file
* Allow KVM VM live migration with ROOT volume on file
- Add JUnit tests
* Address reviewers and change some variable names to ease future
implementation (developers can easily guess the name and use
autocomplete)
Added dummy and lo devices to be treated as normal bridge slave devices.
Fixes #2998
Added two more device names (lo* and dummy*). Implemented tests. Code was refactored.
Improved path concatenation code from "+" to Paths.get.
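For example (path and variable names here are illustrative, not the actual patch), the concatenation style changes to:
```java
import java.nio.file.Paths;

public class BridgePathSketch {
    public static String bridgePath(String dev) {
        // previously: "/sys/class/net" + "/" + dev + "/bridge"
        return Paths.get("/sys/class/net", dev, "bridge").toString();
    }
}
```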
If a host has many routes this can be an order of magnitude faster than printing
all the routes and grepping for the default.
In some situations the host might have a large number of routes due to
dynamic routing protocols like OSPF or BGP being used.
In addition, fix a couple of log lines which were logging messages at
DEBUG where WARN and ERROR should be used.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
When vxlan://untagged is used for public (or guest) network, use the
default public/guest bridge device same as how vlan://untagged works.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
These additional RBD features allow for faster lookups of how much space an RBD
image is using, while the exclusive locking prevents two VMs from writing
to the same RBD image at the same time.
These are the default features used by Ceph for any new RBD image.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This adds a new API updateVmwareDc that allows admins to update the
VMware datacenter details of a zone. It also recursively updates
the cluster_details for any username/password updates
as well as updates the url detail in cluster_details table and guid
detail in the host_details table with any newly provided vcenter
domain/ip. The update API assumes that there is only one vCenter per
zone. And, since the username/password for each VMware host may differ from
what gets configured for vCenter at the zone level, it does not update the
username/password in host_details.
Previously, one had to manually update the db with any new vCenter details for the zone.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
On actual testing, I could see that the kvmheartbeat.sh script fails on NFS
server failure and only stops the agent. Any HA VMs could be launched
on different hosts, and recovery of the NFS server could lead to a state
where an HA-enabled VM runs on two hosts and can potentially cause
disk corruption. In most cases, VM disk corruption will be worse than
VM downtime. I've kept the sleep interval between checks/rounds but
reduced it to 10s. The change in behaviour was introduced in #2722.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Prevents errors while migrating VM from ISO:
Test 1: Deploy VM from ISO -> Live migrate VM to another host -> ERROR
Test 2: Register ISO using Direct Download on KVM -> Deploy VM from ISO -> Live migrate VM to another host -> ERROR
- Prevent NullPointerException when migrating a VM deployed from an ISO
- Prevent mounting secondary storage for direct download ISOs on KVM
Since we support only Ubuntu 16.04+ on master/4.12+, we can now use
the libvirt service name `libvirtd` for all distributions. This also
fixes an optional package name for libvirtd installation on Debian 9+.
Fixes #2909
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Windows has support for several paravirt features that it will use when running on Hyper-V, Microsoft's hypervisor. These features are called enlightenments. Many of the features are similar to paravirt functionality that exists with Linux on KVM (virtio, kvmclock, PV EOI, etc.)
Nowadays QEMU/KVM can also enable support for several Hyper-V enlightenments. When enabled, Windows VMs running on KVM will use many of the same paravirt optimizations they would use when running on Hyper-V.
A number of years ago, a PR was introduced that added a good portion of the code to enable this feature set, but it was never completed. This PR enables the existing features. The previous patch set detailed in #1013 also included the tests.
By selecting Windows PV, the enlightenment additions will be applied to the libvirt configuration. This is supported on Windows Server 2008 and beyond, so all currently supported versions of Windows Server.
In our testing, we've seen benchmark improvements of around 20-25% running on Centos 7 hosts and it is also supported on Centos/RHEL 6.5 and later. Testing on Ubuntu would be appreciated.
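For illustration, the kind of enlightenment flags involved look like this in the libvirt domain XML (an assumed subset; the exact set enabled may differ):
```
<features>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
```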
This tries to provide a threshold-based fix for #2873 where swappiness of the VR is not used until the last resort. By limiting swappiness unless actually needed, VR system degradation can be avoided in most cases. The other change is to not start baremetal-vr by default on all VRs; according to the spec https://cwiki.apache.org/confluence/display/CLOUDSTACK/Baremetal+Advanced+Networking+Support only VMware VRs need to run it, and only as the last step of the setup/completion, so we don't need to run it all the time.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
[VMware] VM is not accessible after migration across clusters.
Once a VM is successfully started, don't delete the files associated with the unregistered VM if the files are in a storage that is being used by the new VM.
Attempt to unregister a VM in another DC only if there is a host associated with the VM.
This closes #556
When an Instance is (attempted to be) started on a KVM Host the Agent
should not worry about the allocated memory on this host.
To make a proper judgement we need to take more into account:
- Memory Overcommit ratio
- Host reserved memory
- Host overcommit memory
The Management Server has all the information and the DeploymentPlanner
has to make the decision whether an Instance should and can be started on a
Host, not the host itself.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This fixes #2763 by moving a post cert-renewal class for the kvm
plugin/hypervisor to src/main/java. The regression is due to the change
in file-system layout from the maven standard refactoring on master; the
issue was not caught during forward-merging of a PR from the 4.11 branch.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Cleanup and code-formatting of POM files
* Remove obsolete mycila license-maven-plugin
* Remove obsolete console-proxy/plugin project
* Move console-proxy-rdbconsole under console-proxy parent
* Use correct parent path for rdpconsole
* Alphabetically order items in setnextversion.sh
* Unify License header in POMs
* Alphabetic order of modules definition
* Extract all defined versions into parent pom
* Remove obsolete files: version-info.in, configure-info.in
* Remove redundant defaultGoal
* Remove useless checkstyle plugin from checkstyle project
* Alphabetically order items in pom.xml
* Add additional SPACEs to fix debian build
* Don't execute checkstyle on parent projects
* Use UTF-8 encoding in building checkstyle project
* Extract plugin versions into properties
* Execute PMD plugin on all the projects with -Penablefindbugs
* Upgrade maven plugins to latest version
* Make sure to always look for apache parent pom from repository
* Fix incorrect version grep in debian packaging
* Fix rebase conflicts
* Fix rebase conflicts
* Remove PMD for now to be fixed on another PR
This is a new feature for CS that allows Admin users improved
troubleshooting of network issues in CloudStack hosted networks.
Description: For troubleshooting purposes, CloudStack administrators may wish to execute network utility commands remotely on system VMs, or request system VMs to ping/traceroute/arping to specific addresses over specific interfaces. An API command to provide such functionalities is being developed without altering any existing APIs. The targeted system VMs for this feature are the Virtual Router (VR), Secondary Storage VM (SSVM) and the Console Proxy VM (CPVM).
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+Remote+Diagnostics+API
ML discussion:
https://markmail.org/message/xt7owmb2c6iw7tva
Fixes the version in pom etc. to be consistent with versioning pattern as X.Y.Z.0-SNAPSHOT after a minor release.
Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
This ensures that fewer mount points are made on hosts for either
primary storage pools or secondary storage pools.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Now the KVM agent checks whether a storage pool is already mounted before calling storagePoolCreateXML().
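A minimal sketch of the guard using the libvirt Java bindings (assumed wiring; the real change lives in the KVM storage code):
```java
import org.libvirt.Connect;
import org.libvirt.LibvirtException;
import org.libvirt.StoragePool;

public class PoolGuardSketch {
    // Only create/start the pool when libvirt does not already know it,
    // avoiding a duplicate mount of the same storage pool.
    public static StoragePool getOrCreatePool(Connect conn, String name, String poolXml)
            throws LibvirtException {
        try {
            return conn.storagePoolLookupByName(name);
        } catch (LibvirtException e) {
            return conn.storagePoolCreateXML(poolXml, 0);
        }
    }
}
```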
Signed-off-by: Kai Takahashi <k-takahashi@creationline.com>
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive iso files on primary storage. The default is false, causing them to be hosted on secondary storage. The current support is limited on the hypervisor resource side and in the current implementation is limited to `KVM` only. The next big change is that the config drive is created at a temporary location by the management server and shipped to either the KVM or SSVM agent via the cmd-answer pattern, the data of which is not logged. This saves us from adding a genisoimage dependency on the cloudstack-agent pkg.
The APIs to reset the ssh public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating an existing ISO. If there are objections I'll reinstate the strategy of detach+attach of a new config iso as the way of updating. In the refactored implementation, the folder name is changed to lower-cased configdrive. During VM start, migration or shutdown/removal, if primary storage is enabled for use, the KVM agent will handle cleanup tasks; otherwise the SSVM agent will handle them.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In 4.11.0, I added the ability to online migrate volumes from NFS to managed storage. This actually works for Ceph to managed storage in a private 4.8 branch, as well. I thought I had brought along all of the necessary code from that private 4.8 branch to make Ceph to managed storage functional in 4.11.0, but missed one piece (which is fixed by this PR).
* CLOUDSTACK-9184: Fixes#2631 VMware dvs portgroup autogrowth
This deprecates the vmware.ports.per.dvportgroup global setting.
The vSphere Auto Expand feature (introduced in vSphere 5.0) will take
care of dynamically increasing/decreasing the dvPorts when running out
of distributed ports . But in case of vSphere 4.1/4.0 (If used), as this
feature is not there, the new default value (=> 8) have an impact in the
existing deployments. Action item for vSphere 4.1/4.0: Admin should
modify the global configuration setting "vmware.ports.per.dvportgroup"
from 8 to any number based on their environment because the proposal
default value of 8 would be very less without auto expand feature in
general. The current default value of 256 may not need immediate
modification after deployment, but 8 would be very less which means
admin need to update immediately after upgrade.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10147 Disabled Xenserver Cluster can still deploy VM's. Added code to skip disabled clusters when selecting a host (#2442)
(cherry picked from commit c3488a51db)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10318: Bug on sorting ACL rules list in chrome (#2478)
(cherry picked from commit 4412563f19)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10284: Creating a snapshot from VM Snapshot generates error if hypervisor is not KVM.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10221: Allow IPv6 when creating a Basic Network (#2397)
Since CloudStack 4.10 Basic Networking supports IPv6 and thus
should be allowed to be specified when creating a network.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
(cherry picked from commit 9733a10ecd)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10214: Unable to remove local primary storage (#2390)
Allow admins to remove primary storage pool.
Cherry-picked from eba2e1d8a1
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* dateutil: consistency of tzdate input and output (#2392)
Signed-off-by: Yoan Blanc <yoan.blanc@exoscale.ch>
Signed-off-by: Daan Hoogland <daan.hoogland@shapeblue.com>
(cherry picked from commit 2ad5202823)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10054:Volume download times out in 3600 seconds (#2244)
(cherry picked from commit bb607d07a9)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* When creating a new account (via domain admin) it is possible to select “root admin” as the role for the new user (#2606)
* create account with domain admin showing 'root admin' role
Domain admins should not be able to assign the role of root admin to new users. Therefore, the role ‘root admin’ (or any other of the same type) should not be visible to domain admins.
* License and formatting
* Break long sentence into multiple lines
* Fix wording of method 'getCurrentAccount'
* fix typo in variable name
* [CLOUDSTACK-10259] Missing float part of secondary storage data in listAccounts
* [CLOUDSTACK-9338] ACS not accounting resources of VMs with custom service offering
ACS is accounting the resources properly when deploying VMs with custom service offerings. However, there are other methods (such as updateResourceCount) that do not execute the resource accounting properly, and these methods update the resource count for an account in the database. Therefore, if a user deploys VMs with custom service offerings, and later this user calls the “updateResourceCount” method, it (the method) will only account for VMs with normal service offerings, and update this as the number of resources used by the account. This will result in a smaller number of resources to be accounted for the given account than the real used value. The problem becomes worse because if the user starts to delete these VMs, it is possible to reach negative values of resources allocated (breaking all of the resource limiting for accounts). This is a very serious attack vector for public cloud providers!
* [CLOUDSTACK-10230] User should not be able to use removed “Guest OS type” (#2404)
* [CLOUDSTACK-10230] User is able to change to “Guest OS type” that has been removed
Users are able to change the OS type of VMs to “Guest OS type” that has been removed. This becomes a security issue when we try to force users to use HVM VMs (Meltdown/Spectre thing). A removed “guest os type” should not be usable by any users in the cloud.
* Remove trailing lines that are breaking build due to checkstyle compliance
* Remove unused imports
* fix classes that were in the wrong folder structure
* Updates to capacity management
This adds support for using Ubuntu 18.04 as a KVM host. In addition,
when the hypervisor version key is missing, the UI now displays
the host OS and version detail, which is useful to show the KVM host
OS and version.
When cache mode 'none' is used for empty cdrom drives, systemvms
and guest VMs fail to start on newer libvirtd such as on Ubuntu bionic.
The fix is to ensure that cachemode is not declared for empty drives
upon starting of the VM. A similar issue is logged at Red Hat:
https://bugzilla.redhat.com/show_bug.cgi?id=1342999
The workaround is to ensure that we don't configure cachemode for
cdrom devices at all. This also fixes a live VM migration issue.
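An empty cdrom definition then carries no cache attribute on its driver element, e.g. (illustrative):
```
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>
```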
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The three methods are named "setXXX"; actually, they are not simple setters or getters.
They have been renamed to "generateXXX" per dahn's comments.
This adds support for XenServer 7.3 and 7.4, and XCP-ng 7.4, as hypervisor hosts. Fixes #2523.
This also fixes the issue of 4.11 VRs being stuck in starting for up to 10 minutes before they come up online.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* fix https://issues.apache.org/jira/browse/CLOUDSTACK-10356
* del patch file
* Update ResourceCountDaoImpl.java
* fix some format
* fix code
* fix error message in VolumeOrchestrator
* add null check stmt
* delete unused class import
* use BooleanUtils to check Boolean
* fix error message
* delete unused function
* delete the deprecated function updateDomainCount
* add error log and throw exception in ProjectManagerImpl.java
This extends securing of KVM hosts to securing of libvirt on KVM
host as well for TLS enabled live VM migration. To simplify implementation
securing of host implies that both host and libvirtd processes are
secured with management server's CA plugin issued certificates.
Based on whether keystore and certificates files are available at
/etc/cloudstack/agent, the KVM agent determines whether to use TLS or
TCP based URIs for live VM migration. It is also enforced that a secured
host will allow live VM migration to/from other secured hosts, and an
unsecured host will allow live VM migration to/from other unsecured
hosts only.
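Conceptually, the URI selection is (sketch; the keystore file name/path is illustrative):
```java
import java.io.File;

public class MigrationUriSketch {
    // Choose a TLS or TCP migration URI based on whether the CA-plugin
    // provisioned keystore/certificates are present on the host.
    public static String migrationUri(String destHostIp) {
        boolean secured = new File("/etc/cloudstack/agent/cloud.jks").exists();
        return String.format(secured ? "qemu+tls://%s/system" : "qemu+tcp://%s/system", destHostIp);
    }
}
```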
Post upgrade, the KVM agent on startup will expose its security state
(the secured detail is sent as true or false) to the management server, which
gets saved in host_details for the host. This host detail can be accessed
via the listHosts response, and in the UI unsecured KVM hosts will show
up with the host state of ‘unsecured’. Further, a button has been added
that allows admins to provision/renew certificates to KVM hosts and can
be used to secure any unsecured KVM host.
The `cloudstack-setup-agent` was modified to accept a new flag `-s`
which will reconfigure libvirtd with following settings:
listen_tcp=0
listen_tls=1
tcp_port="16509"
tls_port="16514"
auth_tcp="none"
auth_tls="none"
key_file = "/etc/pki/libvirt/private/serverkey.pem"
cert_file = "/etc/pki/libvirt/servercert.pem"
ca_file = "/etc/pki/CA/cacert.pem"
For a connected KVM host agent, when the certificates are
renewed/provisioned, a background task is scheduled that waits until all
of the agent tasks finish, after which the libvirt process is restarted and
finally the agent is restarted via AgentShell.
There are no API or DB changes.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Add stack traces information
* update stack trace info
* update stack trace to make them consistent
* update stack traces
* update stacktraces
* update stacktraces for other similar situations
* fix some other situations
* enhance other situations
* [CLOUDSTACK-10226] CloudStack is not importing Local storage properly
CloudStack is importing as Local storage any XenServer SR that is of type LVM or EXT. This causes a problem when one wants to use both direct attached storage and local storage. Moreover, CloudStack was not importing all of the local storage that a host has available when local storage is enabled; it was only importing the first SR it sees.
To fix the first problem we started ignoring SRs that have the flag shared=true when discovering local storages. SRs configured to be shared are used as direct attached storage, and therefore should not be imported again as local ones.
To fix the second problem, we started loading all local storage and importing it into ACS accordingly.
* Cleanups and formatting
* [CLOUDSTACK-10241] Duplicated file SRs being created in XenServer pools
Due to a race condition between multiple management servers, in some rare cases, CloudStack is creating multiple file SRs to the same secondary folder. This causes a problem when introducing the SR to the XenServer pools, as “there will be VDIs with duplicated UUIDs“. The VDIs are the same, but they are seen in different SRs, and therefore cause an error.
The solution to avoid race conditions between management servers is to use a deterministic srUuid for the file SR to be created (we are leaving XenServer with the burden of managing race conditions). The UUID is based on the SR file path and is generated using UUID#nameUUIDFromBytes. Therefore, if there is an SR with the generated UUID, this means that some other management server has just created it. An exception will occur and it will contain a message saying 'Db_exn.Uniqueness_constraint_violation'. In these unlikely events, we catch the exception and use the method retrieveAlreadyConfiguredSrWithoutException to get the SR that has already been created for the given mount point.
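The deterministic UUID derivation is essentially the following (sketch; the exact input string is an assumption):
```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class SrUuidSketch {
    // Every management server derives the same UUID for the same SR file
    // path, so concurrent creation attempts collide deterministically
    // ('Db_exn.Uniqueness_constraint_violation') instead of duplicating SRs.
    public static UUID srUuidFor(String srFilePath) {
        return UUID.nameUUIDFromBytes(srFilePath.getBytes(StandardCharsets.UTF_8));
    }
}
```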
Several fixes addressed:
- Detach ISO fails when trying to detach a direct download ISO
- Fix for metalink support on SSVM agents (this closes CLOUDSTACK-10238)
- Reinstall VM from bypassed registered template (this closes CLOUDSTACK-10250)
- Fix upload certificate error message even though operation was successful
- Fix metalink download, checksum retry logic and metalink SSVM downloader
* Refactored nuage tests
Added simulator support for ConfigDrive
Allow all nuage tests to run against simulator
Refactored nuage tests to remove code duplication
* Move test data from test_data.py to nuage_test_data.py
Nuage test data is now contained in nuage_test_data.py instead of
test_data.py
Removed all nuage test data from nuage_test_data.py
* CLOUD-1252 fixed cleanup of vpc tier network
* Import libVSD into the codebase
* CLOUDSTACK-1253: Volumes are not expunged in simulator
* Fixed some merge issues in test_nuage_vsp_mngd_subnets test
* Implement GetVolumeStatsCommand in Simulator
* Add vspk as marvin nuagevsp dependency, after removing libVSD dependency
* correct libVSD files for license purposes
pep8 pyflakes compliant
* Update the displayText of the XenServer ISO when it already exists in the DB
Besides updating the ISO display text, I also created unit test cases for the 'createXenServerToolsIsoEntryInDatabase' and 'getActualIsoTemplate' methods.
* Formatting and cleanups for checkstyle of changed classes
There is a race condition in the monitoring of the migration process on KVM. If the monitor wakes up in the tight window after the migration succeeds, but before the migration thread terminates, the monitor will get a LibvirtException “Domain not found: no domain with matching uuid” when checking the migration status. This in turn causes CloudStack to sync the VM state to Stopped, issuing a defensive StopCommand to ensure it is correctly synced.
Fix: Prevent LibvirtException: "Domain not found" caused by the call to dm.getInfo()
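A sketch of the fix (the surrounding monitor loop is assumed and elided):
```java
import org.libvirt.Domain;
import org.libvirt.DomainInfo;
import org.libvirt.LibvirtException;

public class MigrationMonitorSketch {
    // Treat a vanished domain during migration monitoring as "migration
    // finished" instead of letting the exception trigger a VM state sync.
    static boolean stillMigrating(Domain dm) throws LibvirtException {
        try {
            DomainInfo info = dm.getInfo();
            return info.state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING;
        } catch (LibvirtException e) {
            if (e.getMessage() != null && e.getMessage().contains("Domain not found")) {
                return false; // domain already moved to the destination host
            }
            throw e;
        }
    }
}
```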
CLOUDSTACK-10269: On deletion of role set name to null (#2444)
CLOUDSTACK-10146 checksum in java instead of script (#2405)
CLOUDSTACK-10222: Clean snapshots from primary storage when taking (#2398)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When user creates a snapshot (manual or recurring), snapshot remains on
the primary storage, even if the snapshot is transferred successfully to
secondary storage. This is causing issues because XenServer can only hold
a limited number of snapshots in its VDI chain, preventing the user from
creating new snapshots after some time, when too many old snapshots are
present on the primary storage.
This fixes a move-refactoring error introduced in #2283.
For instance, the class DatadiskTO is supposed to be in the com.cloud.agent.api.to package. However, the folder structure it was placed in corresponds to com.cloud.agent.api.api.to.
Skip tests for cloud-plugin-hypervisor-ovm3:
For some unknown reason, there are quite a lot of broken test cases for cloud-plugin-hypervisor-ovm3. They might have appeared after some dependency upgrade and were overlooked by the person updating them. I checked to see if they could be fixed, but these tests are not developed in a clear and clean manner. On top of that, we do not see (at least I don't) people using the OVM3 hypervisor with ACS. Therefore, I decided to skip them.
Indentation corrected to use spaces instead of tabs in XML files
Remove the maven standard module (which only a few were using) and get rid of maven customization for the project structure.
- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
The Libvirt documentation specifies the requirement of using an XML namespace
when having custom metadata in the Domain XML. The Nuage extension metadata was not
adhering to this specification, and the latest Libvirt version ignores it in that case.
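For illustration, namespaced metadata in the domain XML looks like this (element name and namespace URI are hypothetical placeholders):
```
<metadata>
  <nuage:interfaces xmlns:nuage="http://nuagenetworks.net/libvirt/metadata">
    <!-- extension-specific metadata -->
  </nuage:interfaces>
</metadata>
```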
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
Extending Config Drive support
* Added support for VMware
* Build configdrive.iso on ssvm
* Added support for VPC and Isolated Networks
* Moved implementation to new Service Provider
* UI fix: add support for urlencoded userdata
* Add support for building systemvm behind a proxy
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
CloudStack volumes and templates are one single virtual disk in the case of XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than 1 disk and launches an instance using this template, only the first disk is attached to the new instance and other disks are ignored.
Similarly with uploaded volumes, attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks
This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than 1 disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk and volumes should be created based on the other VMDK disks in the OVA file and attached to the instance.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows using templates and ISOs while avoiding secondary storage as an intermediate cache on KVM. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.
Template and ISO registration:
- When hypervisor is KVM, a checkbox is displayed with 'Direct Download' label.
- API methods registerTemplate and registerISO are both extended with this new parameter directdownload.
- On template or ISO registration, no download job is sent to SSVM agent, CloudStack would only persist an entry on template_store_ref indicating that template or ISO has been marked as 'Direct Download' (bypassing Secondary Storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check if the template/ISO location can be reached. Metalinks are also supported by this feature. In the case of a metalink, it is fetched and the URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH
- After template or ISO is registered, it would be displayed in the UI
Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack would delegate template downloading to destination storage pool via destination host by a new pluggable download manager.
Download manager would handle template downloading depending on URL protocol. In case of HTTP, request headers can be set by the user via vm_template_details. Those details should be persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE
In the case of HTTPS, a new API method uploadTemplateDirectDownloadCertificate is added to allow users to import a client certificate into all KVM hosts' keystores before deployment.
After the template or ISO is downloaded to primary storage, the usual entry is persisted in template_spool_ref, indicating the mapping between the template/ISO and the storage pool.
This happens when the root disk size is overridden. The primary storage limit check should be performed based on the overridden size instead of the template size. Root disk resize tests were also enabled to run on the simulator.
This feature allows admins to dedicate a range of public IP addresses to the SSVM and CPVM, so that they can be subject to specific external firewall rules. The option to dedicate a public IP range to the system VMs (SSVM & CPVM) is added to the createVlanIpRange API method and the UI.
Solution:
Global setting 'system.vm.public.ip.reservation.mode.strictness' is added to determine if the use of the system VM reservation is strict (when true) or preferred (false), false by default.
When a range has been dedicated to System VMs, CloudStack should apply IPs from that range to
the public interfaces of the CPVM and the SSVM depending on global setting's value:
If the global setting is set to false: then CloudStack will use any unused and unreserved public IP
addresses for system VMs only when the pool of reserved IPs has been exhausted
If the global setting is set to true: then CloudStack will fail to deploy the system VM when the pool
of reserved IPs has been exhausted, citing the lack of available IPs.
UI Changes
Under Infrastructure -> Zone -> Physical Network -> Public -> IP Ranges, the 'Account' button label is changed to 'Set reservation'.
When that button is clicked, the dialog displayed is also refactored, including a new checkbox 'System VMs' which indicates whether the range should be dedicated to the CPVM and SSVM, and a note indicating its usage.
When clicking the button for any created range, the UI dialog displayed indicates whether the IP range is dedicated to system VMs or not.
The first PR(#1176) intended to solve #CLOUDSTACK-9025 was only tackling the problem for CloudStack deployments that use single hypervisor types (restricted to XenServer). Additionally, the lack of information regarding that solution (poor documentation, test cases and description in PRs and Jira ticket) led the code to be removed in #1124 after a long discussion and analysis in #1056. That piece of code seemed logicless (and it was!). It would receive a hostId and then change that hostId for other hostId of the zone without doing any check; it was not even checking the hypervisor and storage in which the host was plugged into.
The problem reported in #CLOUDSTACK-9025 is caused by partial snapshots that are taken in XenServer. This means we do not take a complete snapshot, but a partial one that contains only the modified data. This requires rebuilding the VHD hierarchy when creating a template out of the snapshot. The point is that the first hostId received is not a hostId, but a system VM ID (SSVM). That is why the code in #1176 fixed the problem for some deployment scenarios, but would cause problems for scenarios where we have multiple hypervisors in the same zone. We need to execute the creation of the VHD that represents the template on the hypervisor, so the VHD chain can be built using the parent links.
This commit changes the method com.cloud.hypervisor.XenServerGuru.getCommandHostDelegation(long, Command). From now on we replace the hostId that is intended to execute the “copy command” that will create the VHD of the template according to some conditions that were already in place. The idea is that starting with XenServer 6.2.0 hotFix ESP1004 we need to execute the command in the hypervisor host and not from the SSVM. Moreover, the method was improved to make it readable and understandable; test cases were also created assuring that from XenServer 6.2.0 hotFix ESP1004 and upward versions we change the hostId that will be used to execute the “copy command”.
Furthermore, we are not selecting a random host from a zone anymore. A new method was introduced in the HostDao called “findHostConnectedToSnapshotStoragePoolToExecuteCommand”; using this method we look for a host in the cluster that uses the storage pool holding the volume from which the snapshot was taken. By doing this, we guarantee that the host connected to the primary storage where all of the snapshot's parent VHDs are stored is used to create the template.
Consider using Disabled hosts when no Enabled hosts are found
This also closes #2317
* Cleanup and Improve NetUtils
This class had many unused methods, inconsistent names and redundant code.
This commit cleans up code, renames a few methods and constants.
The global/account setting 'api.allowed.source.cidr.list' is set
to 0.0.0.0/0,::/0 by default to preserve the current behavior and thus
allow API calls for accounts from all IPv4 and IPv6 subnets.
Users can set it to a comma-separated list of IPv4/IPv6 subnets to
restrict API calls for Admin accounts to certain parts of their network(s).
This is to improve Security. Should an attacker steal the Access/Secret key
of an account he/she still needs to be in a subnet from where accounts are
allowed to perform API calls.
This is a good security measure for APIs which are connected to the public internet.
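For example (illustrative subnets), restricting API calls to one IPv4 network and one IPv6 prefix:
```
api.allowed.source.cidr.list=192.168.0.0/16,2001:db8::/32
```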
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This includes test related fixes and code review fixes based on
reviews from @rafaelweingartner, @marcaurele, @wido and @DaanHoogland.
This also includes VMware disk-resize limitation bug fix based on comments
from @sateesh-chodapuneedi and @priyankparihar.
This also includes the final changes to systemvmtemplate and fixes to
code based on issues found via test failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Resize for VMware root disk should only be performed during VM start
when vmware.create.full.clone is true i.e. the disk chain length is one.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactors and simplifies systemvm codebase file structures keeping
the same resultant systemvm.iso packaging
- Password server systemd script and new postinit script that runs
before sshd starts
- Fixes to keepalived and conntrackd config to make rVRs work again
- New /etc/issue featuring ascii based cloudmonkey logo/message and
systemvmtemplate version
- SystemVM python codebase linted and tested. Added pylint/pep to
Travis.
- iptables re-application fixes for non-VR systemvms.
- SystemVM template build fixes.
- Default secondary storage vm service offering boosted to have 2vCPUs
and RAM equal to console proxy.
- Fixes to several marvin based smoke tests, especially rVR related
tests. rVR tests to consider 3*advert_int+skew timeout before status
is checked.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This ports PR #1470 by @remibergsma.
Make the generated json files unique to prevent concurrency issues:
The json files now have UUIDs to prevent them from getting overwritten
before they've been executed. This prevents config from being pushed to the wrong
router.
2016-02-25 18:32:23,797 DEBUG [c.c.a.t.Request] (AgentManager-Handler-1:null) (logid:) Seq 2-4684025087442026584: Processing: { Ans: , MgmtId: 90520732674657, via: 2, Ver: v1, Flags: 10, [{"com.cloud.agent.api.routing.GroupA
nswer":{"results":["null - success: null","null - success: [INFO] update_config.py :: Processing incoming file => vm_dhcp_entry.json.4ea45061-2efb-4467-8eaa-db3d77fb0a7b\n[INFO] Processing JSON file vm_dhcp_entry.json.4ea4506
1-2efb-4467-8eaa-db3d77fb0a7b\n"],"result":true,"wait":0}}] }
On the router:
2016-02-25 18:32:23,416 merge.py __moveFile:298 Processed file written to /var/cache/cloud/processed/vm_dhcp_entry.json.4ea45061-2efb-4467-8eaa-db3d77fb0a7b.gz
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
com.cloud.hypervisor.hyperv.resource.HypervDummyResourceBase
class and change the log message in
com.cloud.hypervisor.hyperv.discoverer.HypervServerDiscoverer
Otherwise we send down a 'null' to a ProcessBuilder in Java instead of a String and this
causes an NPE.
We should first check if the Instance has an IPv6 address before sending it there.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* CLOUDSTACK-10160: Fix typo in Libvirt XML definition for Virtio-SCSI
The attribute for the XML element 'controller' should be 'model' and
not 'mode'.
Source: https://libvirt.org/formatdomain.html#elementsControllers
A scsi controller has an optional attribute model, which is one of
'auto', 'buslogic', 'ibmvscsi', 'lsilogic', 'lsisas1068', 'lsisas1078',
'virtio-scsi' or 'vmpvscsi'.
In the current state a regular SCSI device is attached and not a Virtio-SCSI
device.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* CLOUDSTACK-10160: Add UnitTest for LibvirtVMDef.SCSIDef
To make sure the XML output string is correct
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This commit adds support for passing IPv6 Addresses and/or Subnets as
Secondary IPs.
This is groundwork for CLOUDSTACK-9853 where IPv6 Subnets have to be
allowed in the Security Groups of Instances so we can add DHCPv6
Prefix Delegation.
Use ; instead of : for separating addresses, otherwise it would cause
problems with IPv6 Addresses.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
* CLOUDSTACK-9972: Enhance listVolume API to include physical size and utilization.
Also fixed pool, cluster and pod info
* CLOUDSTACK-9972: Fix volume_view and duplicate API constant
* CLOUDSTACK-9972: Backport Do not allow vms to be deployed on hosts that are in disabled pod
* CLOUDSTACK-9972: Fix localization missing keys
* CLOUDSTACK-9972: Fix sql path
- Migrate to embedded Jetty server.
- Improve ServerDaemon implementation.
- Introduce a new server.properties file for easier configuration.
- Have a single /etc/default/cloudstack-management to configure env.
- Reduce shaded jar file, removing unnecessary dependencies.
- Upgrade to Spring 5.x, upgrade several jar dependencies.
- Does not shade and include mysql-connector, used from classpath instead.
- Upgrade and use bouncycastle as a separate un-shaded jar dependency.
- Remove tomcat related configuration and files.
- Have both embedded UI assets in uber jar and separate webapp directory.
- Refactor systemd and init scripts, cleanup packaging.
- Made cloudstack-setup-databases faster, using `urandom`.
- Remove unmaintained distro packagings.
- Moves creation and usage of server keystore in CA manager, this
deprecates the need to create/store cloud.jks in conf folder and
the db.cloud.keyStorePassphrase in db.properties file. This also
remove the need of the --keystore-passphrase in the
cloudstack-setup-encryption script.
- GZip contents dynamically in embedded Jetty
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Allow security policies to apply on port groups:
- Accepts security policies while creating network offering
- Deployed network will have security policies from the network offering
applied on the port group (in vmware environment)
- Global settings as fallback when security policies are not defined for a network
offering
- Default promiscuous mode security policy set to REJECT as it's the default
for standard/default vswitch
Portgroup vlan-trunking options for dvswitch: This allows admins to define
a network with comma separated vlan id and vlan
range such as vlan://200-400,21,30-50 and use the provided vlan range to
configure vlan-trunking for a portgroup in dvswitch based environment.
VLAN overlap checks are performed for:
- isolated network against existing shared and isolated networks
- dedicated vlan ranges for the physical/public network for the zone
- shared network against existing isolated network
Allow shared networks to bypass vlan overlap checks: This allows admins
to create shared networks with a `bypassvlanoverlapcheck` API flag
which when set to 'true' will create a shared network without
performing vlan overlap checks against isolated network and against
the vlans allocated to the datacenter's physical network (vlan ranges).
Notes:
- No vlan-range overlap checks are performed when creating shared networks
- Multiple vlan id/ranges should include the vlan:// scheme prefix
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Commit enables a new feature for the KVM hypervisor whose purpose is to virtually increase the amount of RAM available beyond the actual limit.
There is a new parameter in agent.properties: host.overcommit.mem.mb, which adds the specified amount of RAM to what is actually available. It is meant to be used together with KSM and ZSwap, features which extend RAM with deduplication and compression.
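For example (illustrative value), advertising an extra 2 GB of RAM on a host where KSM/ZSwap is enabled:
```
host.overcommit.mem.mb=2048
```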
The watchdog timer adds functionality where the Hypervisor can detect if an
instance has crashed or stopped functioning.
When the Instance has the 'watchdog' daemon running it will send heartbeats
to the /dev/watchdog device.
If these heartbeats are no longer received by the HV it will reset the Instance.
If the Instance never sends the heartbeats the HV does not take action. It only
takes action if it stops sending heartbeats.
This is supported since Libvirt 0.7.3 and can be defined in the XML format as
described in the docs: https://libvirt.org/formatdomain.html#elementsWatchdog
To the 'devices' section this will be added:
<watchdog model='i6300esb' action='reset'/>
In the agent.properties the action to be taken can be defined:
vm.watchdog.action=reset
The same goes for the model. The Intel i6300esb is however the most commonly used:
vm.watchdog.model=i6300esb
Signed-off-by: Wido den Hollander <wido@widodh.nl>
If there are multiple files with the same name on a VMware datastore, the search operation may select any one of them during volume-related operations, including volume attach/detach, volume download, and volume snapshot.
When using NetApp as the backup solution, a .snapshot folder exists on the datastore, and sometimes files from this folder get selected during volume operations, causing the operation to fail. Because of this wrong file selection, the following exception can be observed during volume deletion:
2017-02-23 19:39:05,750 ERROR [c.c.s.r.VmwareStorageProcessor] (DirectAgent-304:ctx-a1dbf5d8 ac.local) delete volume failed due to Exception: java.lang.RuntimeException
Message: Cannot delete file [4cbcd46d44c53f5c8244c0aad26a97e1] .snapshot/hourly.2017-02-23_1605/r-97-VM/ROOT-97.vmdk
To fix this behavior I have added a global configuration named vmware.search.exclude.folders, which accepts a comma-separated list of folder paths to exclude.
I have also added a unit test for the new method.
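For example, to exclude the NetApp .snapshot folder via the standard updateConfiguration API (the value shown is illustrative):
```
cmk updateConfiguration name=vmware.search.exclude.folders value=.snapshot
```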
- All tests should pass on KVM, Simulator
- Add test cases covering FSM state transitions and actions
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Removed three bg thread tasks, uses FSM event-trigger based scheduling
- On successful recovery, kicks VM HA
- Improves overall HA scheduling and task submission, lower DB access
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Host-HA offers investigation, fencing and recovery mechanisms for hosts that for
any reason are malfunctioning. It uses Activity and Health checks to determine
the current host state, based on which it may degrade a host or try to recover it. On
failing to recover it, it may try to fence the host.
The core feature is implemented in a hypervisor-agnostic way, with two separate
implementations of the driver/provider for the Simulator and KVM hypervisors. The
framework also allows for the implementation of other hypervisor-specific
providers in the future.
The Host-HA provider implementation for KVM hypervisor uses the out-of-band
management sub-system to issue IPMI calls to reset (recover) or poweroff (fence)
a host.
The Host-HA provider implementation for Simulator provides a means of testing
and validating the core framework implementation.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. The
framework assumes in `NioClient` that a keystore, if available, with the
known name `cloud.jks` will be used for SSL negotiations and handshake.
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificates to CloudStack agents such as
the KVM, CPVM and SSVM agents and also for the management server for
peer clustering.
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
global setting. Newly provisioned agents (KVM/CPVM/SSVM etc.) will get a
randomized comma-separated list to which they will attempt connection
or reconnection in the provided order. This removes the need for a TCP LB on
port 8250 (default) of the management server(s).
- All fresh deployments will enforce two-way SSL authentication, where
connecting agents will be required to present certificates issued
by the 'root' CA plugin.
- Existing environments will, on upgrade, continue to use one-way SSL
authentication, and connecting agents will not be required to present
certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing the provided
certificate payload into the java keystore file.
- Agent security (keystore, certificates etc.) is set up initially using
SSH, and later provisioning is handled via an existing agent connection
using command-answers. The supported clients and agents are limited to
CPVM, SSVM, and KVM agents, and clustered management servers (peering).
- Certificate revocation does not revoke an existing agent-mgmt server
connection; however, a revoked certificate is rejected if used during an SSL
handshake.
- The older `cloudstackmanagement.keystore` is deprecated and will no longer
be used by mgmt server(s) for SSL negotiations and handshake. New
keystores will be named `cloud.jks`; any additional SSL certificates
should not be imported into it for use with tomcat etc. The `cloud.jks`
keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on start-up only;
their validity is the same as that of the CA certificates.
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
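A hedged sketch of exercising these APIs with the CloudMonkey CLI (UUIDs, serials, and parameter names beyond the API names above are assumptions):
```
cmk listCaProviders
cmk listCaCertificate
# issue a client certificate for an agent host
cmk issueCertificate domain=agent1.example.com
# provision the issued certificate to a host (UUID is a placeholder)
cmk provisionCertificate hostid=<host-uuid>
# revoke a client certificate by its serial
cmk revokeCertificate serial=1a2b3c
```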
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades the BouncyCastle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Since libvirt 1.2.2, libvirt properly creates volumes
using RBD format 2.
We can use libvirt to create the volumes, which strips a bit of
code from the CloudStack Agent's responsibility.
RBD format 2 is already used by all volumes created by CloudStack.
This format is the most recent format of RBD and is still actively
being developed.
This removes the support for Ubuntu 12.04 as that does not have the
proper libvirt version available.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
Issue
=====
While listing datacenters associated with a zone, only zone ID validation is required.
There is no need for additional checks, such as whether the zone is a legacy zone or not.
Fix
===
Removed the unnecessary checks on the zone ID; now just checking whether a zone with the specified ID exists.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
(cherry picked from commit 0ef1c17541)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Summary: CLOUDSTACK-8921
The snapshot_store_ref table should store the actual size of the backed-up snapshot in secondary storage.
Calling SR scan to make sure size is updated correctly
(cherry picked from commit 4e4b67cd96)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
With XenServer & Local SR (Db_exn.Uniqueness_constraint_violation):
removed the host UUID from the SR label so that any host which has access to
the SR (all the hosts in the same pool) can reuse the same SR.
(cherry picked from commit 1aa6a72bc7)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
as that snapshot will never be used again and would otherwise fill up primary storage
(cherry picked from commit 336df84f17)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Updated hardcoded value with max data volumes limit from hypervisor capabilities.
(cherry picked from commit 93f5b6e8a3)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Libvirt / Qemu (KVM) does not collect statistics about these either.
On some systems it might even yield an 'internal error' from libvirt
when attempting to gather block statistics from such devices.
Ubuntu 16.04 (Xenial), for example, has an issue with this.
Skip them when looping through all devices.
Signed-off-by: Wido den Hollander <wido@widodh.nl>
The 'force' option provided with the stopVirtualMachine API command is
often assumed to be a hard shutdown sent to the hypervisor, when in fact
it is for CloudStack's internal use. CloudStack should be able to send
the 'hard' power-off request to the hosts.
When forced parameter on the stopVM API is true, power off (hard shutdown)
a VM. This uses initial changes from #1635 to pass the forced parameter
to hypervisor plugin via the StopCommand, and fixes force stop (poweroff)
handling for KVM, VMware and XenServer.
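For example (the VM UUID is a placeholder):
```
# With forced=true the StopCommand now results in a hard power-off on the host
cmk stopVirtualMachine id=<vm-uuid> forced=true
```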
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.9:
Do not set gateway to 0.0.0.0 for windows clients
CLOUDSTACK-9904: Fix log4j to have @AGENTLOG@ replaced
ignore bogus default gateway: when a shared network is secondary, the default gateway gets overwritten by a bogus one; dnsmasq does the right thing and replaces it with its own default, which is not good for us, so check for '0.0.0.0'
Activate NioTest following changes in CLOUDSTACK-9348 PR #1549
CLOUDSTACK-9828: GetDomRVersionCommand fails to get the correct version as output. The fix returns the output as a single command instead of appending the output from two commands
CLOUDSTACK-3223 Exception observed while creating CPVM in VMware Setup with DVS
CLOUDSTACK-9787: Fix wrong return value in NetUtils.isNetworkAWithinNetworkB
with XenServer & Local SR (Db_exn.Uniqueness_constraint_violation)
removed the host uuid from SR label so that any host which has access to
the SR(all the hosts in the same pool) can reuse the same SR
1. Removed XenServerGuestOsMemoryMap from CitrixHelper.java
This java file was holding a static in-memory map named XenServerGuestOsMemoryMap, which was the source for the XenServer dynamic memory values (max and min). These values were moved to the guest_os_details table.
2. The DAO layer was modified to access these values.
3. The VirtualMachineTO object was modified to populate the dynamic memory values.
4. The addGuestOs and updateGuestOs APIs have been modified to update memory values.
This fixes log4j xml to have @AGENTLOG@ replaced with values defined
in build/replace.properties.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Test scenarios:
- Enable cluster HA after VR is created. Now stop and start VR and check its restart priority, should be High.
- Enable cluster HA before VR is created. Now create some VM and verify that VR created must have High restart priority.
CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with the correct template physical size in the template_size column. Updated the template_spool_ref table with the correct template (VMware OVA file) size.
* pr/1880:
CLOUDSTACK-9720: [VMware] template_spool_ref table is not getting updated with correct template physical size in template_size column.
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Bug-ID: CLOUDSTACK-8880: calculate free memory on the host before deploying a VM: free memory = total memory - (all VM memory). With memory over-provisioning set to 1, when the mgmt server starts VMs in parallel on one host, the memory allocated on that KVM host can be larger than its actual physical memory.
Fixed by checking free memory on the host before starting a VM.
Added a test case to check memory usage on the host.
Verified VM deploy on a host with enough capacity and also without capacity.
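A quick worked example of the check (numbers are illustrative):
```
# total host memory            : 32768 MB
# memory of already-placed VMs : 8192 + 8192 + 8192 = 24576 MB
# free memory                  : 32768 - 24576 = 8192 MB
# => another 16384 MB VM is rejected; an 8192 MB VM still fits
```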
* pr/847:
Bug-ID: CLOUDSTACK-8880: calculate free memory on host before deploying Vm. free memory = total memory - (all vm memory)
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM. Updated the hardcoded value with the max data volumes limit from hypervisor capabilities.
* pr/1953:
CLOUDSTACK-9794: Unable to attach more than 14 devices to a VM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug configurable
Jira
===
CLOUDSTACK-9698 [VMware] Make hardcoded wait timeout for NIC adapter hotplug configurable
Description
=========
Currently ACS waits for 15 seconds (hard-coded) for a hot-plugged NIC in a VR running on VMware to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of VMware NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable will be helpful for admins in such scenarios. This is specific to VRs running on the VMware hypervisor.
Also, if VMware introduces new NIC adapter types in the future which take longer to be detected by the guest OS, it is good to have the flexibility of
configuring the wait timeout as a fallback mechanism.
Fix
===
Introduce a new configuration parameter (via ConfigKey), "vmware.nic.hotplug.wait.timeout": the wait timeout (in milliseconds) for a hot-plugged NIC of a VM to be detected by the guest OS. This replaces the hard-coded timeout and gives admins flexibility in the scenarios listed above.
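For example, to raise the timeout to 30 seconds (the value is illustrative):
```
cmk updateConfiguration name=vmware.nic.hotplug.wait.timeout value=30000
```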
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
* pr/1861:
CLOUDSTACK-9698 Make the wait timeout for NIC adapter hotplug as configurable
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
* 4.9:
moved logrotate from cron.daily to cron.hourly for vpcrouter in cloud-early-config
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
[4.9] CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent. The router.aggregation.command.each.timeout in global configuration is only applied on newly created KVM hosts.
For existing KVM hosts, changing the value is not effective.
We need to propagate the configuration to existing hosts when cloudstack-agent connects.
* pr/1856:
CLOUDSTACK-9569: propagate global configuration router.aggregation.command.each.timeout to KVM agent
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This adds support for virtio-scsi on KVM hosts, either
for guests that are associated with a new os_type of 'Other PV Virtio-SCSI (64-bit)',
or when a VM or template is registered with a detail parameter rootDiskController=scsi.
Update the CloudStack add-template dialog to allow selecting rootDiskController with KVM
Update CloudStack KVM virtio-scsi to enable discard=unmap
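A hedged example of registering a template with that detail via CloudMonkey (the URL and UUIDs are placeholders; the details map syntax is an assumption):
```
cmk registerTemplate name=centos-scsi displaytext=centos-scsi \
    url=http://example.com/centos.qcow2 format=QCOW2 hypervisor=KVM \
    ostypeid=<os-type-uuid> zoneid=<zone-uuid> \
    details[0].rootDiskController=scsi
```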
Currently ACS waits for 15 seconds (hard-coded) for a hot-plugged NIC in the VR to be detected by the guest OS.
The time taken to detect a hot-plugged NIC in the guest OS depends on the type of NIC adapter (E1000, VMXNET3, E1000e, etc.)
and on the guest OS itself. In uncommon scenarios NIC detection may take longer than 15 seconds;
in such cases the NIC hotplug is treated as a failure, which results in VPC tier configuration failure.
Making the wait timeout for NIC adapter hotplug configurable will be helpful for admins in such scenarios.
Also, if VMware introduces new NIC adapter types in the future which take longer to be detected by the guest OS, it is good to have the flexibility of
configuring the wait timeout as a fallback mechanism.
Signed-off-by: Sateesh Chodapuneedi <sateesh.chodapuneedi@accelerite.com>
* 4.9:
CLOUDSTACK-9746 system-vm: logrotate config causes critical failures
CLOUDSTACK-9788: Fix exception listNetworks with pagesize=0
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Fix HVM VM restart bug in XenServer
CLOUDSTACK-9363: Fix HVM VM restart bug in XenServer. Here is the longer description of the problem:
By default XenServer limits HVM guests to only 4 disks. Two of those are reserved for the ROOT disk (deviceId=0) and CD ROM (device ID=3) which means that we can only attach 2 data disks. This limit however is removed when Xentools is installed on the guest. The information that a guest has Xentools installed and can handle more than 4 disks is stored in the VM metadata on XenServer. When a VM is shut down, Cloudstack removes the VM and all the metadata associated with the VM from XenServer. Now, when you start the VM again, even if it has Xentools installed, it will default to only 4 attachable disks.
Now this problem manifests itself when you have an HVM VM and you stop and start it with more than 2 data disks attached. The VM fails to start, and the only way to start it is to detach the extra disks and reattach them after the VM starts.
In this fix, I am removing the check which is done before creating a `VBD` which enforces this limit. This will not affect current workflow and will fix the HVM issue.
@koushik-das this is related to the "autodetect" feature that you introduced a while back (https://issues.apache.org/jira/browse/CLOUDSTACK-8826). I would love your review on this fix.
* pr/1829:
Fix HVM VM restart bug in XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8663: Fixed various issues to allow VM snapshots and volume snapshots to exist together
Reverting VM to disk only snapshot in Xenserver corrupts VM
Stale NFS secondary storage on XS leads to volume creation failure from snapshot
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM
* pr/977:
Fixes for testing VM Snapshots on KVM. Related to PR 977
CLOUDSTACK-8746: vm snapshot implementation for KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9359: IPv6 for Basic Networking. This PR is a proposal for adding very basic IPv6 to Basic Networking. The main goal of this PR is that the API returns a valid IPv6 address over which the Instance is reachable.
The GUI will show the IPv6 address after deployment of the Instance.

If the vlan table has a proper IPv6 CIDR configured, the DirectPodBasedNetworkGuru will calculate the IPv6 address the Instance will obtain using EUI-64 and SLAAC: https://tools.ietf.org/search/rfc4862
In this case the _vlan_ table contained:
<pre>mysql> select * from vlan \G
*************************** 1. row ***************************
id: 1
uuid: 90e0716c-5261-4992-bb9d-0afd3006f476
vlan_id: vlan://untagged
vlan_gateway: 172.16.0.1
vlan_netmask: 255.255.255.0
description: 172.16.0.10-172.16.0.250
vlan_type: DirectAttached
data_center_id: 1
network_id: 204
physical_network_id: 200
ip6_gateway: 2001:980:7936:112::1
ip6_cidr: 2001:980:7936:112::/64
ip6_range: NULL
removed: NULL
created: 2016-07-19 20:39:41
1 row in set (0.00 sec)
mysql></pre>
It will then log:
<pre>2016-10-04 11:42:44,998 DEBUG [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Found IPv6 CIDR 2001:980:7936:112::/64 for VLAN 1
2016-10-04 11:42:45,009 INFO [c.c.n.g.DirectPodBasedNetworkGuru] (Work-Job-Executor-1:ctx-1975ec54 job-186/job-187 ctx-0d967d88) (logid:275c4961) Calculated IPv6 address 2001:980:7936:112:4ba:80ff:fe00:e9 using EUI-64 for NIC 6a05deab-b5d9-4116-80da-c94b48333e5e</pre>
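For reference, a sketch of how EUI-64 yields that interface identifier (the MAC address is inferred back from the logged address):
```
MAC address         : 06:ba:80:00:00:e9
split, insert fffe  : 06ba:80ff:fe00:00e9
flip the U/L bit    : 04ba:80ff:fe00:00e9
prefix + IID        : 2001:980:7936:112:4ba:80ff:fe00:e9
```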
The template has to be configured accordingly:
- No IPv6 Privacy Extensions
- Use SLAAC
- Follow RFC4862
This is also described in: https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+Basic+Networking
The next steps after this will be:
- Security Grouping to prevent IPv6 Address Spoofing
- Security Grouping to filter ICMP, UDP and TCP traffic
* pr/1700:
CLOUDSTACK-676: IPv6 In -and Egress filtering for Basic Networking
CLOUDSTACK-676: IPv6 Basic Security Grouping for KVM
CLOUDSTACK-9359: IPv6 for Basic Networking with KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9619: Updates for SAN-assisted snapshots. This PR is to address a few issues in #1600 (which was recently merged to master for 4.10).
In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one such scenario, the source snapshot in question is coming from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).
This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.
I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.
While fixing that, I noticed a couple more problems:
1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
I have corrected those issues, as well.
I then came across one more problem:
When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).
I went ahead and changed that, as well.
JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619
* pr/1749:
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This commit implements basic Security Grouping for KVM in
Basic Networking.
It does not implement full Security Grouping yet, but it does:
- Prevent IP-Address source spoofing
- Allow DHCPv6 clients, but disallow DHCPv6 servers
- Disallow Instances to send out Router Advertisements
The Security Grouping allows ICMPv6 packets as described by RFC4890
as they are essential for IPv6 connectivity.
Following RFC4890 it allows:
- Router Solicitations
- Router Advertisements (incoming only)
- Neighbor Advertisements
- Neighbor Solicitations
- Packet Too Big
- Time Exceeded
- Destination Unreachable
- Parameter Problem
- Echo Request
ICMPv6 is an essential part of IPv6; without it, connectivity will break or be very
unreliable.
For now it allows any UDP and TCP packet to be sent in to the Instance, which
effectively opens up the firewall completely.
Future commits will implement Security Grouping further, allowing control of UDP and TCP
ports for IPv6 as can be done with IPv4.
Regardless of the egress filtering (which can't be done yet) it will always allow outbound DNS
to port 53 over UDP or TCP.
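A hedged ip6tables-style sketch of the RFC4890 allowances (not the actual generated rules; the chain names are illustrative):
```
# Allow essential ICMPv6 per RFC4890 (subset shown)
ip6tables -A sg-in  -p icmpv6 --icmpv6-type router-solicitation     -j ACCEPT
ip6tables -A sg-in  -p icmpv6 --icmpv6-type neighbour-solicitation  -j ACCEPT
ip6tables -A sg-in  -p icmpv6 --icmpv6-type neighbour-advertisement -j ACCEPT
ip6tables -A sg-in  -p icmpv6 --icmpv6-type packet-too-big          -j ACCEPT
ip6tables -A sg-in  -p icmpv6 --icmpv6-type echo-request            -j ACCEPT
# Disallow the Instance acting as a router
ip6tables -A sg-out -p icmpv6 --icmpv6-type router-advertisement    -j DROP
```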
Signed-off-by: Wido den Hollander <wido@widodh.nl>