As discovered and discussed in #2376, adding a short delay between stopping
the VM and reverting the VM snapshot makes the
`test_change_service_offering_for_vm_with_snapshots` test case pass. The
suspect here is the userVMDao or the background vmsync task, which may not
update the VM state to PowerOff in time.
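For reference, a minimal Marvin-style sketch of that workaround, polling the VM state instead of using a fixed sleep (the helper name and polling values are illustrative, not part of this change):
```
import time

def wait_for_vm_state(apiclient, vm, expected_state="Stopped", timeout=300, interval=10):
    """Poll listVirtualMachines until the VM reports the expected state or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        vms = vm.list(apiclient, id=vm.id)
        if vms and vms[0].state == expected_state:
            return True
        time.sleep(interval)
    return False

# Inside the test, before reverting the VM snapshot:
# vm.stop(apiclient)
# assert wait_for_vm_state(apiclient, vm), "VM did not reach the Stopped state in time"
```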
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes regression failures seen in Trillian and fixes NPEs that cause Travis-related failures.
This also removes the aria2 dependency from the RPMs, which required users to enable/install epel-release.
Finally, this updates the checksums for the 4.11 systemvm templates in the DB upgrade path.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enabled creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow resizing a volume on managed storage on KVM if the volume in question is either not attached to a VM or attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (this is not currently possible with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
Extended Config Drive support:
* Added support for VMware
* Build configdrive.iso on the SSVM
* Added support for VPC and Isolated Networks
* Moved the implementation to a new Service Provider
* UI fix: added support for URL-encoded userdata
* Added support for building the systemvm behind a proxy
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
CloudStack volumes and templates are a single virtual disk in the case of the XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM, including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes: attaching an uploaded volume that contains multiple disks to a VM results in only one VMDK being attached to the VM.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks
This behavior needs to be improved on VMware to support OVA files with multiple disks for both uploaded volumes and templates: if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created from the other VMDK disks in the OVA file and attached to the instance.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes regression introduced in PR #2295:
- Pass assign=true to fetch new public IP
- Use wait_until instead of sleep+wait in tests
- Loop through list of public IP ranges to match the systemvm gateway
- Fix potential NPE seen when adding simulator host(s)
- Removes the aria2 installation from setup_agent.sh using yum; it is already
a dependency of the cloudstack-agent package
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR introduces several features and fixes some bugs:
- account tags feature
- fixed resource tags bugs that occurred during tag search (wrong entries were found because of MySQL string-to-number conversion, see #905; this PR does more and also fixes resource access, a vulnerability when listing resource tags)
- some marvin improvements (speed, sanity)
Improved resource tags code:
1. Enhanced listTags security
2. Added support for account tags (account tags are required to support tags common for all users of an account)
3. Improved the tag management code (refactoring and cleanup)
Marvin:
1. Fixed the Marvin wait timeout between async polls, decreasing the polling interval and improving CI speed.
2. Fixed /tmp/ to /tmp in zone configuration files.
3. Changed + to os.path.join in the log class.
4. Changed + to os.path.join in the deployDataCenter class.
5. Fixed typos in tag tests.
6. Modified Tags base class delete method.
Deploy Datacenter script:
1. Improved deployDatacenter. Added a logdir option to specify where the script places its results.
ConfigurationManagerImpl:
1. Added logging to ConfigurationManagerImpl to log when a VLAN is not found. Added test stubs for tags. Found an accidental exception while running the simulator after CI.
tests_tags.py:
1. Fixed stale undeleted tags.
2. Changed region:India to scope:TestName.
This feature allows using templates and ISOs while avoiding secondary storage as an intermediate cache on KVM. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.
Template and ISO registration:
- When the hypervisor is KVM, a checkbox is displayed with the 'Direct Download' label.
- The API methods registerTemplate and registerISO are both extended with a new parameter, directdownload.
- On template or ISO registration, no download job is sent to the SSVM agent; CloudStack only persists an entry in template_store_ref indicating that the template or ISO has been marked as 'Direct Download' (bypassing secondary storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check whether the template/ISO location can be reached. Metalinks are also supported by this feature: a metalink is fetched and a URL check is performed on each of its URLs.
- The checksum should be provided as indicated in #2246: {ALGORITHM}CHKSUMHASH
- After the template or ISO is registered, it is displayed in the UI
Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack delegates template downloading to the destination storage pool via the destination host, using a new pluggable download manager.
The download manager handles template downloading depending on the URL protocol. In the case of HTTP, request headers can be set by the user via vm_template_details. Those details are persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE
In the case of HTTPS, a new API method uploadTemplateDirectDownloadCertificate is added to allow the user to import a client certificate into all KVM hosts' keystores before deployment.
After the template or ISO is downloaded to primary storage, the usual entry is persisted in template_spool_ref indicating the mapping between the template/ISO and the storage pool.
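For illustration, a minimal sketch of registering a 'Direct Download' template through the API. The directdownload parameter, the {ALGORITHM}HASH checksum form and the HTTP_HEADER detail are taken from the description above; the unauthenticated integration-API endpoint, the call_api helper and the placeholder values are assumptions.
```
import requests

# Hypothetical unauthenticated integration-API endpoint; a real deployment needs
# the proper host/port and signed (or session-authenticated) requests.
API = "http://mgmt-server:8096/client/api"

def call_api(command, **params):
    params.update(command=command, response="json")
    return requests.get(API, params=params).json()

# Register a KVM template marked as 'Direct Download' (bypassing secondary storage).
resp = call_api("registerTemplate",
                name="centos7-direct",
                displaytext="CentOS 7 (direct download)",
                url="http://repo.example.com/centos7.qcow2",
                checksum="{SHA-256}<hash>",   # {ALGORITHM}CHKSUMHASH format, as noted above
                format="QCOW2",
                hypervisor="KVM",
                ostypeid="<os-type-uuid>",
                zoneid="<zone-uuid>",
                directdownload="true")
print(resp)

# For HTTP downloads that need custom request headers, the feature persists template
# details with key HTTP_HEADER and value HEADERNAME:HEADERVALUE (via vm_template_details).
```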
Steps to reproduce the issue:
Deploy a VM
Take snapshot of the root volume
Delete the snapshot
Before the garbage collector has run, shut down the VM and assign the VM to another user.
When the garbage collector executes, an NPE shows up in the logs.
This happens when the root disk size is overridden. The primary storage limit check should be performed based on the overridden size instead of the template size. Enabled root disk resize tests to run on the simulator as well.
This feature allows admins to dedicate a range of public IP addresses to the SSVM and CPVM, such that they can be subject to specific external firewall rules. The option to dedicate a public IP range to the System VMs (SSVM & CPVM) is added to the createVlanIpRange API method and the UI.
Solution:
Global setting 'system.vm.public.ip.reservation.mode.strictness' is added to determine if the use of the system VM reservation is strict (when true) or preferred (false), false by default.
When a range has been dedicated to System VMs, CloudStack should apply IPs from that range to
the public interfaces of the CPVM and the SSVM depending on the global setting's value:
If the global setting is set to false, CloudStack will use any unused and unreserved public IP
addresses for system VMs only when the pool of reserved IPs has been exhausted.
If the global setting is set to true, CloudStack will fail to deploy the system VM when the pool
of reserved IPs has been exhausted, citing the lack of available IPs.
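A rough sketch of dedicating a range to the system VMs through the API; the forsystemvms parameter name, the endpoint and the placeholder values are assumptions for illustration only.
```
import requests

API = "http://mgmt-server:8096/client/api"   # hypothetical integration-API endpoint

# Dedicate a public IP range to the SSVM/CPVM.
requests.get(API, params={
    "command": "createVlanIpRange", "response": "json",
    "zoneid": "<zone-uuid>", "vlan": "untagged",
    "gateway": "10.1.1.1", "netmask": "255.255.255.0",
    "startip": "10.1.1.10", "endip": "10.1.1.20",
    "forvirtualnetwork": "true",
    "forsystemvms": "true",            # parameter name assumed
})

# Strict vs. preferred behaviour is controlled by the global setting described above.
requests.get(API, params={
    "command": "updateConfiguration", "response": "json",
    "name": "system.vm.public.ip.reservation.mode.strictness",
    "value": "true",
})
```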
UI Changes
Under Infrastructure -> Zone -> Physical Network -> Public -> IP Ranges, the 'Account' button label is changed to 'Set reservation'.
When that button is clicked, the displayed dialog is also updated, including a new 'System VMs' checkbox which indicates whether the range should be dedicated to the CPVM and SSVM, and a note indicating its usage.
When clicking the button for any created range, the UI dialog indicates whether the IP range is dedicated to system VMs or not.
While creating the response object for the 'listDomain' API, several database calls are triggered to fetch details like the parent domain, project limit, IP limit, etc. These database calls are triggered for each record found by the main fetch query, which slows down the response.
Fix:
The number of database transactions is reduced to improve the response time of the listDomain API.
In CLOUDSTACK-9886, ping.interval and ping.timeout were read using configdao, which reads the DB directly. This has been replaced with a ConfigKey-based approach.
Using cloudmonkey, invoking the update template API call does not display the isdynamicallyscalable field as part of its template response.
Fix:
The isdynamicallyscalable field of org.apache.cloudstack.api.response.TemplateResponse is now populated in the newUpdateResponse method of server/src/com/cloud/api/query/dao/TemplateJoinDaoImpl.java.
Unit test:
The unit test server/test/com/cloud/api/query/dao/TemplateJoinDaoImplTest.java testNewUpdateResponse() verifies that the TemplateResponse is populated correctly.
Marvin test:
The Marvin nosetest integration/smoke/test_templates.py test_02_edit_template(self) confirms that the template_response.isdynamicallyscalable field is populated with the correct value.
Test scenario:
Using cloudmonkey, when invoking the 'update template' API call, it should now display the isdynamicallyscalable field as part of its template response.
Description: For volumes on secondary storage (uploaded volumes), the usage is not accounted for.
The fix is implemented as follows:
A new usage type is added for volumes on secondary storage: VOLUME_SECONDARY (id=26)
A new storage type, 'Volume', is defined.
When a volume is uploaded and the usage server next executes, an entry is added to the usage_storage helper table for each volume uploaded since the usage server last ran.
When the uploaded volume is attached, the 'deleted' column in the usage_storage table is set to the corresponding time-stamp.
Two entries are then added to the cloud_usage table, with usage_type=26 and usage_type=6 (volume usage on primary): one for the duration the volume was on primary and the other for the duration it was on secondary.
An entry is added to the volume_usage helper table to account for primary storage. From the next execution of the usage server onwards, only usage entries with usage_type=6 are added.
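For illustration, the two kinds of usage records could be queried roughly as follows; the endpoint and date values are assumptions, while type=26 (VOLUME_SECONDARY) and type=6 (volume on primary) come from the description above.
```
import requests

API = "http://mgmt-server:8096/client/api"   # hypothetical integration-API endpoint

for usage_type in (26, 6):   # 26 = volume on secondary storage, 6 = volume on primary storage
    resp = requests.get(API, params={
        "command": "listUsageRecords", "response": "json",
        "startdate": "2017-10-01", "enddate": "2017-10-02",
        "type": usage_type,
    }).json()
    print(usage_type, resp)
```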
This includes test related fixes and code review fixes based on
reviews from @rafaelweingartner, @marcaurele, @wido and @DaanHoogland.
This also includes VMware disk-resize limitation bug fix based on comments
from @sateesh-chodapuneedi and @priyankparihar.
This also includes the final changes to systemvmtemplate and fixes to
code based on issues found via test failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes test failures around VMware with the new systemvmtemplate.
In addition:
- Does not skip rVR-related test cases for VMware
- Removes rc.local
- Processes unprocessed cmd_line.json
- Fixes NPEs around VMware tests/code
- On VMware, uses udevadm to reconfigure the nic/mac address rather than rebooting
- Fixes the ACPI shutdown script for faster systemvm shutdowns
- Gives at least 256MB of swap to VRs to avoid OOM on VMware
- Fixes smoke tests for environment-related failures
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
On XenServer, both redundant routers' VIFs were getting deleted when any
PF rule was removed from any of the acquired public IPs. This fix
ensures that lastIp is set to `false` when processed by hypervisor
resources, to avoid removing VIFs when VPCs have a source NAT IP.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Several systemvmtemplate optimizations
- Uses new macchinina template for running smoke tests
- Switch to latest Debian 9.3.0 release for systemvmtemplate
- Introduce a new `get_test_template` that uses a tiny test template
such as macchinina, as defined in test_data.py
- rVR related fixes and improvements
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactors and simplifies systemvm codebase file structures keeping
the same resultant systemvm.iso packaging
- Password server systemd script and new postinit script that runs
before sshd starts
- Fixes to keepalived and conntrackd config to make rVRs work again
- New /etc/issue featuring ascii based cloudmonkey logo/message and
systemvmtemplate version
- SystemVM python codebase linted and tested. Added pylint/pep to
Travis.
- iptables re-application fixes for non-VR systemvms.
- SystemVM template build fixes.
- Default secondary storage VM service offering boosted to have 2 vCPUs
and RAM equal to the console proxy's.
- Fixes to several marvin based smoke tests, especially rVR related
tests. rVR tests to consider 3*advert_int+skew timeout before status
is checked.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This ports the S2S config test by @swill from #2190, with additional
changes to make it robust and environment agnostic.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactor cloud-early-config and make appliance specific scripts
- Make patching work without requiring restart of appliance and remove
postinit script
- Migrate to systemd, speedup booting/loading
- Takes about 5-15 seconds to boot on KVM, and 10-30 seconds for VMware and XenServer
- Appliance boots and works on KVM, VMware, XenServer and HyperV
- Update Debian9 ISO url with sha512 checksum
- Speedup console proxy service launch
- Enable additional kernel modules
- Remove unknown ssh key
- Update vhd-util URL as previous URL was down
- Enable sshd by default
- Use hostnamectl to set the hostname
- Disable services by default
- Use existing log4j xml; patching by cloud-early-config is no longer necessary
- Several minor fixes and file refactorings, removed dead code/files
- Removes insserv
- Fix dnsmasq config syntax
- Fix haproxy config syntax
- Fix smoke tests and improve performance
- Fix apache pid file path in cloud.monitoring per the new template
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The `assignDedicateIpAddress` method previously marked the newly fetched
IP as allocated, but now it does not do that. This fails for VPCs
where source NAT IPs remain in the Allocating state instead of Allocated after
creation.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows CloudStack administrators to create layer 2 networks in CloudStack. As these networks are purely layer 2, they don't require IP addresses or a Virtual Router; only a VLAN is necessary (provided by the administrator or assigned by CloudStack). Network services such as DNS and DHCP must be handled externally, as they are not provided by L2 networks.
As a consequence, a new Guest Network type is created within CloudStack: L2
Description:
Network offerings and networks support a new guest type: L2.
L2 network offering creation allows the administrator to select Specify VLAN or let CloudStack assign the VLAN dynamically.
L2 network creation allows the administrator to specify a VLAN tag (if the network offering allows it) or simply create the network.
VM deployments on L2 networks:
VMs do not get IP addresses or any network services
No Virtual Router is deployed on the network
If Specify VLAN = true for the network offering, the network gets implemented using a dynamically assigned VLAN
UI changes
A new button is added on the Networks tab, available to admins, to allow L2 network creation
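For illustration, a rough sketch of creating an L2 offering and an L2 network through the API; the endpoint and the exact parameter names/values are assumptions.
```
import requests

API = "http://mgmt-server:8096/client/api"   # hypothetical integration-API endpoint

# L2 network offering where the admin specifies the VLAN.
requests.get(API, params={
    "command": "createNetworkOffering", "response": "json",
    "name": "L2-specify-vlan", "displaytext": "L2 network, admin-specified VLAN",
    "guestiptype": "L2", "traffictype": "Guest",
    "specifyvlan": "true",
})

# L2 network created from that offering, providing the VLAN tag.
requests.get(API, params={
    "command": "createNetwork", "response": "json",
    "name": "l2-net-100", "displaytext": "L2 network on VLAN 100",
    "zoneid": "<zone-uuid>", "networkofferingid": "<offering-uuid>",
    "vlan": "100",
})
```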
* CLOUDSTACK-9972: Enhance listVolume API to include physical size and utilization.
Also fixed pool, cluster and pod info
* CLOUDSTACK-9972: Fix volume_view and duplicate API constant
* CLOUDSTACK-9972: Backport: do not allow VMs to be deployed on hosts that are in a disabled pod
* CLOUDSTACK-9972: Fix localization missing keys
* CLOUDSTACK-9972: Fix sql path
- Migrate to embedded Jetty server.
- Improve ServerDaemon implementation.
- Introduce a new server.properties file for easier configuration.
- Have a single /etc/default/cloudstack-management to configure env.
- Reduce shaded jar file, removing unnecessary dependencies.
- Upgrade to Spring 5.x, upgrade several jar dependencies.
- Does not shade and include mysql-connector, used from classpath instead.
- Upgrade and use bouncycastle as a separate un-shaded jar dependency.
- Remove tomcat related configuration and files.
- Have both embedded UI assets in uber jar and separate webapp directory.
- Refactor systemd and init scripts, cleanup packaging.
- Made cloudstack-setup-databases faster, using `urandom`.
- Remove unmaintained distro packagings.
- Moves creation and usage of the server keystore into the CA manager; this
removes the need to create/store cloud.jks in the conf folder and
the db.cloud.keyStorePassphrase in the db.properties file. It also
removes the need for the --keystore-passphrase option in the
cloudstack-setup-encryption script.
- GZip contents dynamically in embedded Jetty
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* VSP ID Caching
* VSP call Statistics
* 5.0 Support
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
* CLOUDSTACK-10046 digest helper for calculating checksums
* CLOUDSTACK-10046 cleanup unused checksum code
* CLOUDSTACK-10046 padding method proof of concept
* CLOUDSTACK-10046 only compare checksums if old value is valid
* Adding positive and negative tests for md5, sha-1 and sha-256, for xen, vmware and kvm hypervisors.
KVM Results:
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 189, in test_02_1_create_template_with_checksum_sha1_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{sha-1}bf580a13f791d86acf3449a7b457a91a14389264" didn\'t match the given value, "{sha-1}someInvalidValue"\n']
=== TestName: test_02_1_create_template_with_checksum_sha1_negative | Status : SUCCESS ===
=== TestName: test_02_create_template_with_checksum_sha1 | Status : SUCCESS ===.
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 203, in test_03_1_create_template_with_checksum_sha256_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{SHA-256}efc03633f2b8f5db08acbcc5dc1be9028572dfd8f1c6c8ea663f0ef94b458c5" didn\'t match the given value, "{SHA-256}someInvalidValue"\n']
=== TestName: test_03_1_create_template_with_checksum_sha256_negative | Status : SUCCESS ===
=== TestName: test_03_create_template_with_checksum_sha256 | Status : SUCCESS ===
Negative Test Passed - Exception Occurred Under template download ['Traceback (most recent call last):\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 217, in test_04_1_create_template_with_checksum_md5_negative\n self.download(self.apiclient, template.id)\n', ' File "/Users/bstoyanov/Documents/sb2/cloudstack/test/integration/smoke/test_templates.py", line 260, in download\n template.status)\n', 'Exception: Failed to download template: status - Failed post download script: checksum "{md5}ada77653dcf1e59495a9e1ac670ad95f" didn\'t match the given value, "{md5}someInvalidValue"\n']
=== TestName: test_04_1_create_template_with_checksum_md5_negative | Status : SUCCESS ===
=== TestName: test_04_create_template_with_checksum_md5 | Status : SUCCESS ===
* Adding additional test with no checksum added when registering template
Result:
test_05_create_template_with_no_checksum (integration.smoke.test_templates.TestCreateTemplateWithChecksum) ... === TestName: test_05_create_template_with_no_checksum | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 42.320s
OK
* Fixing negative tests exception handling
* Adding tests for ISO checksum validation and fixing a zero prefix failure test in templates
* CLOUDSTACK-10046 padding
* CLOUDSTACK-10046 usability additions
* yet another IDE artifact hindering checkstyle
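For reference, a small Python sketch that produces checksums in the {ALGORITHM}HASH form seen in the test output above (the helper name and paths are illustrative):
```
import hashlib

def prefixed_checksum(path, algorithm="sha-256"):
    """Return a checksum in the {ALGORITHM}HASH form used when registering templates/ISOs."""
    digest = hashlib.new(algorithm.replace("-", ""))   # 'sha-256' -> hashlib's 'sha256'
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return "{%s}%s" % (algorithm, digest.hexdigest())

# e.g. '{sha-1}bf580a13...' or '{md5}ada77653...' as in the test results above
print(prefixed_checksum("/tmp/template.qcow2", "sha-1"))
print(prefixed_checksum("/tmp/template.qcow2", "md5"))
```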
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
Bug: https://issues.apache.org/jira/browse/CLOUDSTACK-9832
Detail:
When the VPC offering does not contain VpcVirtualRouter as a SourceNat provider,
then we will not add the interface in the public network to the VpcVR.
CLOUDSTACK-9832: Move isSrcNat check to VpcManager
This causes VM deployment failures on a host that was disabled while the storage repository was being added.
In the attachCluster function of PrimaryDataStoreLifeCycle, only hosts that are up and in the Enabled state were being selected. Selecting all hosts that are up populates the DB properly and fixes this issue. Also added a unit test for the attachCluster function.
Added the ability to specify a MAC address in the deployVirtualMachine and
addNicToVirtualMachine API endpoints.
Validates that the MAC address is in one of the forms
aa:bb:cc:dd:ee:ff, aa-bb-cc-dd-ee-ff, or aa.bb.cc.dd.ee.ff.
Ensures that the MAC address is a unicast MAC.
Ensures that the MAC address is not already allocated on the
specified network.
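A minimal sketch of the kind of validation described above (the regex and helper are illustrative, not the actual server-side code, which is in Java):
```
import re

# aa:bb:cc:dd:ee:ff, aa-bb-cc-dd-ee-ff or aa.bb.cc.dd.ee.ff, with a consistent separator
MAC_RE = re.compile(r"^[0-9a-fA-F]{2}(?P<sep>[:\-.])([0-9a-fA-F]{2}(?P=sep)){4}[0-9a-fA-F]{2}$")

def is_valid_unicast_mac(mac):
    """Accept only the three documented formats and only unicast addresses."""
    if not MAC_RE.match(mac):
        return False
    first_octet = int(mac[0:2], 16)
    # the least-significant bit of the first octet marks multicast addresses
    return (first_octet & 0x01) == 0

print(is_valid_unicast_mac("02:42:ac:11:00:02"))   # True  (unicast, colon form)
print(is_valid_unicast_mac("01:00:5e:00:00:01"))   # False (multicast)
print(is_valid_unicast_mac("02:42-ac.11:00:02"))   # False (mixed separators)
```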
It appears that asserts.equal() compares a boolean True with the string 'True', which causes the issue; it is probably related to API changes in recent PRs. The comparison is fixed using str.lower() so it passes.
Strangely, the tests pass when run from PyCharm CE (the IDE seems to resolve the type issue during comparison), but they fail when run from the command line.
After fixing this, results came back as expected:
Configure a PF rule with private port range 20-25 and public port range 20-25.
Trigger the updatePortForwardingRule API.
The API fails with the following error: "Unable to update the private port of port forwarding rule as the rule has port range".
Solution:
Allow the port range to be modified.
- All tests should pass on KVM, Simulator
- Add test cases covering FSM state transitions and actions
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Host-HA offers investigation, fencing and recovery mechanisms for hosts that are
malfunctioning for any reason. It uses activity and health checks to determine the
current host state, based on which it may degrade a host or try to recover it. If
recovery fails, it may try to fence the host.
The core feature is implemented in a hypervisor-agnostic way, with two separate
implementations of the driver/provider for the Simulator and KVM hypervisors. The
framework also allows other hypervisor-specific providers to be implemented in
the future.
The Host-HA provider implementation for KVM hypervisor uses the out-of-band
management sub-system to issue IPMI calls to reset (recover) or poweroff (fence)
a host.
The Host-HA provider implementation for Simulator provides a means of testing
and validating the core framework implementation.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. The
framework adds the assumption in `NioClient` that a keystore, if available
with the known name `cloud.jks`, will be used for SSL negotiation and
handshake.
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificates to CloudStack agents such as
the KVM, CPVM and SSVM agents and also for the management server for
peer clustering.
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
global setting. Newly provisioned agents (KVM/CPVM/SSVM etc.) will get a
randomized comma-separated list to which they will attempt connection
or reconnection in the provided order. This removes the need for a TCP LB on
port 8250 (default) of the management server(s).
- All fresh deployments will enforce two-way SSL authentication where
connecting agents will be required to present certificates issued
by the 'root' CA plugin.
- Existing environments will, on upgrade, continue to use one-way SSL
authentication and connecting agents will not be required to present
certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing the provided
certificate payload into the Java keystore file.
- Agent security (keystore, certificates etc.) is set up initially using
SSH, and later provisioning is handled via an existing agent connection
using command-answers. The supported clients and agents are limited to
CPVM, SSVM, and KVM agents, and clustered management server (peering).
- Certificate revocation does not revoke an existing agent-mgmt server
connection; however, a revoked certificate used during the SSL
handshake is rejected.
- Older `cloudstackmanagement.keystore` is deprecated and will no longer
be used by mgmt server(s) for SSL negotiations and handshake. New
keystores will be named `cloud.jks`; any additional SSL certificates
should not be imported into it for use with tomcat etc. The `cloud.jks`
keystore is strictly used for agent-server communications.
- The management server keystore is validated and renewed on startup only;
its validity is the same as that of the CA certificates.
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades the bouncycastle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows changing the permission of existing role permissions, which were previously static and could not be changed once created. It also provides the ability to change these permissions in the UI using a drop-down menu for each permission rule, in which the admin can select 'Allow' or 'Deny'.
Changes in the API:
This feature modifies behaviour of updateRolePermission API method:
New optional parameters 'ruleid' and 'permission' are introduced; they are mutually exclusive with the 'ruleorder' parameter. This defines two use cases:
Update role permission: ‘ruleid’ and ‘permission’ parameters needed
Update rules order: ‘ruleorder’ parameter needed
Parameter ‘ruleorder’ is now optional
updateRolePermission calls providing the 'ruleorder' parameter should be sent via POST
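A rough sketch of the two use cases through the API; the endpoint and placeholder UUIDs are assumptions, while ruleid/permission/ruleorder follow the description above.
```
import requests

API = "http://mgmt-server:8096/client/api"   # hypothetical integration-API endpoint

# Use case 1: change a single rule's permission ('ruleid' + 'permission').
requests.get(API, params={
    "command": "updateRolePermission", "response": "json",
    "roleid": "<role-uuid>",
    "ruleid": "<rule-permission-uuid>",
    "permission": "deny",
})

# Use case 2: reorder a role's rules ('ruleorder'), sent via POST as noted above.
requests.post(API, data={
    "command": "updateRolePermission", "response": "json",
    "roleid": "<role-uuid>",
    "ruleorder": "<rule-uuid-1>,<rule-uuid-2>,<rule-uuid-3>",
})
```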
CloudStack has several background polling tasks spread across
the codebase; the aim of this work is to provide a single manager to
handle the submission, execution and tracking of background tasks. With
the framework implemented, the existing out-of-band management background task has been
refactored to use this manager.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Return an exception if both parameters are missing.
This fixes an NPE in AffinityGroupServiceImpl.updateVMAffinityGroups() when the list was null.
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
This allows native CloudStack users to change their password in the UI when LDAP
is enabled. Overall changes:
- A new usersource field is returned in the listUsers response
- Removed the LDAP check in the UI, replaced with a check based on the user source
- DB changes to include user.source in user_view
- Changed the UI error message for non-native users trying to change their password
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The 'force' option provided with the stopVirtualMachine API command is
often assumed to be a hard shutdown sent to the hypervisor, when in fact
it is for CloudStack's internal use. CloudStack should be able to send
a 'hard' power-off request to the hosts.
When the forced parameter of the stopVM API is true, power off (hard shutdown)
the VM. This uses initial changes from #1635 to pass the forced parameter
to the hypervisor plugin via the StopCommand, and fixes force-stop (power-off)
handling for KVM, VMware and XenServer.
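For illustration, a minimal sketch of the API usage; the endpoint and placeholder UUID are assumptions, the forced parameter is described above.
```
import requests

API = "http://mgmt-server:8096/client/api"   # hypothetical integration-API endpoint

# forced=true now results in a hard power-off on the hypervisor (KVM, VMware, XenServer).
requests.get(API, params={
    "command": "stopVirtualMachine", "response": "json",
    "id": "<vm-uuid>", "forced": "true",
})

# Without forced (or with forced=false) the default graceful guest shutdown is attempted.
requests.get(API, params={
    "command": "stopVirtualMachine", "response": "json",
    "id": "<vm-uuid>",
})
```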
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
As of now, CloudStack can automatically import LDAP users to a domain or an
account based on the configuration. However, any new users in LDAP
aren't automatically reflected; the admin has to manually import them
again.
This feature enables the admin to map an LDAP group/OU to a CloudStack domain
so that any changes are reflected in ACS as well.
This fixes the agreed-upon URL on download.cloudstack.org in various
SQL files and misc scripts.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- commented some occurrences of cloud.com as being harmless
* examples
* identifiers (internal)
- changed the URL for vhd-util download
- changed comments from 'cloud.com' to 'Apache CloudStack'
* 4.9:
CLOUDSTACK-9876: Removed test test_01_test_vm_volume_snapshot as we no longer have that restriction; after the fix for CLOUDSTACK-8663, VM and volume snapshots are allowed to exist together
This adds support for virtio-scsi on KVM hosts, either
for guests that are associated with a new os_type of 'Other PV Virtio-SCSI (64-bit)',
or when a VM or template is registered with the detail parameter rootDiskController=scsi.
Updated the CloudStack add-template dialog to allow selecting rootDiskController for KVM.
Updated CloudStack KVM virtio-scsi to enable discard=unmap.
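For illustration, a rough sketch of registering a KVM template that requests virtio-scsi via the rootDiskController detail; the endpoint, placeholder values and the details[0] map syntax shown are assumptions.
```
import requests

API = "http://mgmt-server:8096/client/api"   # hypothetical integration-API endpoint

requests.get(API, params={
    "command": "registerTemplate", "response": "json",
    "name": "debian9-scsi",
    "displaytext": "Debian 9 with virtio-scsi root disk",
    "url": "http://repo.example.com/debian9.qcow2",
    "format": "QCOW2", "hypervisor": "KVM",
    "ostypeid": "<os-type-uuid>", "zoneid": "<zone-uuid>",
    "details[0].rootDiskController": "scsi",   # map-parameter syntax assumed
})
```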
This improves the metrics view feature by improving the rendering performance
of metrics view tables, reimplementing the logic at the backend and serving
the data via APIs. In large environments, the older implementation would
make several API calls, increasing both network and database load.
List of APIs introduced for improving the performance:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.9:
CLOUDSTACK-9691: Added test list_snapshots_with_removed_data_store
CLOUDSTACK-9691: Fixed an unhandled exception in the list snapshot command when a related primary store is deleted
Updated StrongSwan VPN Implementation. This PR is a merge of @jayapalu's changes in #872 and the changes I had to make to get the functionality working.
I have done pretty extensive testing of this code so far and we are looking to be in pretty good shape. One thing to note is that a `Diffie-Hellman` group **is required** in order for this feature to work correctly. It is not highlighted in the tests below, but I have shown that `PFS` is not required for this feature to work. In #872 I have shown a more exhaustive set of tests of this code, but I have limited this set of tests to a recommended `IKE` and `ESP` configuration in order to reduce the noise and test the other areas of functionality.
**Test Results**
I am testing this functionality by creating two VPCs with VMs in each and creating a S2S VPN connection between the two VPCs. Then I SSH into a VM in one VPC and I ping the private IP of a VM in the other VPC. Then I tear it down and try a different configuration.
_Setup_
```
VPC 1 VPC 2
===== =====
VPN Gateway VPN Gateway
VPN Customer Gateway VPN Customer Gateway
VPN Connection <---> VPN Connection
- Passive = True - Passive = False
```
_Legend_
`SKIP` => At least one of the VPN Connections did not come up, so no test was run.
`OK` => The ping test was successful over the S2S VPN connection.
`FAIL` => The ping test failed over the S2S VPN connection.
`Passive` => Specifies if either the `<vpc_1> : <vpc_2>` sides of the VPN Connection is set to passive.
`Conn State` => Specifies the connection status of the `<vpc_1> : <vpc_2>` VPN Connection in the UI.
`Requires Reset` => If the ping test does not result in an `OK`, then a VPN Connection Reset is performed on either `<vpc_1> : <vpc_2>` sides of the VPN Connection based on which side is not showing `Connected`. The results in the `Status` column is the final result after the reset is performed.
_Results_
```
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| Status | IKE & ESP | DPD | Encap | IKE Life | ESP Life | Passive | Conn State | Requires Reset |
+========+======================+=======+=======+==========+==========+===============+=============================+================+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | True | 86400 | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | | | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | False : False | Connected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | True : True | Disconnected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | True | False | 86400 | 3600 | False : True | Connected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | False : False | Connected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | True : False | Disconnected : Connected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | True : True | Disconnected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| OK | aes128-sha1;modp1536 | False | False | 86400 | 3600 | False : True | Connected : Disconnected | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| SKIP | aes128-sha1 | True | False | 86400 | 3600 | True : False | Disconnected : Error | True : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| SKIP | aes128-sha1 | False | False | 86400 | 3600 | True : False | Disconnected : Error | True : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| FAIL | aes128-sha1 | True | False | 86400 | 3600 | True : True | Disconnected : Disconnected | True : True |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
| SKIP | aes128-sha1 | True | False | 86400 | 3600 | False : False | Connected : Error | False : False |
+--------+----------------------+-------+-------+----------+----------+---------------+-----------------------------+----------------+
```
* pr/1741:
complete implementation of the StrongSwan VPN feature
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
Marvin test to verify that adding TCP ports 500, 4500 and 1701 in VPN should not fail. Please refer to the JIRA ticket for more details:
https://issues.apache.org/jira/browse/CLOUDSTACK-9117
Following is the result info:
Test to add TCP Port Forwarding rule for specific ports(500,1701 and 4500) in VPN ... === TestName: test_08_add_TCP_PF_Rule_In_VPN | Status : SUCCESS ===
ok
---
Ran 1 test in 166.799s
OK
* pr/1183:
Marvin test to verify that adding TCP ports 500,4500 and 1701 in vpn should not fail Bug-Id: CS-43653 Reviewed-by: Self
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-8717: Failed to start instance after restoring the running instance. Changed the PR title and commit message.
In continuation with PR #1411 and #667
* pr/1416:
CLOUDSTACK-8717: Failed to start instance after restoring the running instance
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
[4.10] CLOUDSTACK-8746: VM Snapshotting implementation for KVM
* pr/977:
Fixes for testing VM Snapshots on KVM. Related to PR 977
CLOUDSTACK-8746: vm snapshot implementation for KVM
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9619: Updates for SAN-assisted snapshots. This PR addresses a few issues in #1600 (which was recently merged to master for 4.10).
In StorageSystemDataMotionStrategy.performCopyOfVdi we call getSnapshotDetails. In one such scenario, the source snapshot in question is coming from secondary storage (when we are creating a new volume on managed storage from a snapshot of ours that's on secondary storage).
This usually worked in the regression tests due to a bit of "luck": We retrieve the ID of the snapshot (which is on secondary storage) and then try to pull out its StorageVO object (which is for primary storage). If you happen to have a primary storage that matches the ID (which is the ID of a secondary storage), then getSnapshotDetails populates its Map<String, String> with inapplicable data (that is later ignored) and you don't easily see a problem. However, if you don't have a primary storage that matches that ID (which I didn't today because I had removed that primary storage), then a NullPointerException is thrown.
I have fixed that issue by skipping getSnapshotDetails if the source is coming from secondary storage.
While fixing that, I noticed a couple more problems:
1) We can invoke grantAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
2) We can invoke revokeAccess on a snapshot that's actually on secondary storage (this doesn't amount to much because the VolumeServiceImpl ignores the call when it's not for a primary-storage driver).
I have corrected those issues, as well.
I then came across one more problem:
When using a SAN snapshot and copying it to secondary storage or creating a new managed-storage volume from a snapshot of ours on secondary storage, we attach to the SR in the XenServer code, but detach from it in the StorageSystemDataMotionStrategy code (by sending a message to the XenServer code to perform an SR detach). Since we know to detach from the SR after the copy is done, we should detach from the SR in the XenServer code (without that code having to be explicitly called from outside of the XenServer logic).
I went ahead and changed that, as well.
JIRA Ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-9619
* pr/1749:
CLOUDSTACK-9619: Updates for SAN-assisted snapshots
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
(1) add support to create/delete/revert VM snapshots on running VMs with the QCOW2 format
(2) add a new API to create a volume snapshot from a VM snapshot (see the sketch below)
(3) delete the metadata of VM snapshots before stopping/migrating and recover the VM snapshots after starting/migrating
(4) enable deleting a VM snapshot when the VM is stopped or when the VM snapshot is not listed in the QCOW2 image
(5) enable smoke tests for VM snapshots on KVM
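A rough sketch of the workflow through the API; the endpoint, placeholder UUIDs and the exact command/parameter names are assumptions based on the items above.
```
import requests

API = "http://mgmt-server:8096/client/api"   # hypothetical integration-API endpoint

def call(command, **params):
    params.update(command=command, response="json")
    return requests.get(API, params=params).json()

# (1) Take a VM snapshot of a running QCOW2-backed KVM instance.
call("createVMSnapshot", virtualmachineid="<vm-uuid>", snapshotmemory="false")

# (2) Derive a volume snapshot from the VM snapshot (command name assumed).
call("createSnapshotFromVMSnapshot",
     volumeid="<root-volume-uuid>", vmsnapshotid="<vm-snapshot-uuid>")

# Revert to or delete the VM snapshot later.
call("revertToVMSnapshot", vmsnapshotid="<vm-snapshot-uuid>")
call("deleteVMSnapshot", vmsnapshotid="<vm-snapshot-uuid>")
```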
* The CloudStack root pom change to use Amazon WS 11.1.16
caused our client to fail, as it was depending on classes
which are no longer present.
The latest client version uses Gson instead.
* increase robustness of nuagevsp tests
- test_nuage_internal_dns - move vm2 creation upwards
- test_nuage_static_nat - delete the vm in a test step to avoid a SUT restriction
BUG-ID: CLOUDSTACK-9729
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Bugfix-for: master
The test_vpc_vpn test uses a CIDR that overlaps with the base test environment's
CIDR, causing intermittent failures. This changes the CIDR so that it does not overlap
with the underlying infra, avoiding future failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Due to OS/hypervisor/environmental configuration, detaching a disk/device
using libvirt can be successful without updating the domain configuration (xml).
This leads to reattachment failure as the device is blocked until the next
reboot. This fixes a specific environment case by performing stop/start on
the VM only in case of KVM, which will recreate a fresh domain config (xml)
as KVM VMs have transient domain configs (xmls don't persist).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates.
Using a "list templates templatefilter=all" API call, any domain admin can see all templates of all domains in ACS. The information returned includes the account and domain of the template's owner.
The template data shows what the VM is using, and any hints from the label would give an advantage in choosing attack vectors. The account and domain can possibly be used in a brute-force attack to guess passwords and login information.
Test Scenario:
created two accounts in different domains.
```
mysql> select account_id,username,api_key from user where id in (4,5);
+------------+-----------+----------------------------------------------------------------------------------------+
| account_id | username | api_key |
+------------+-----------+----------------------------------------------------------------------------------------+
| 4 | sudadmin1 | 3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg |
| 5 | sudadmin | N5uHVOrg1Ek1F1a_5OXTz4WpLG3ewHqcbPUSBjQ-2CTJdxmUe2go0S8fyqH4Np0scYiehYg2KqthZXCWEyKx1A |
+------------+-----------+----------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)
mysql> select account_name,domain_id from account where id in (4,5);
+--------------+-----------+
| account_name | domain_id |
+--------------+-----------+
| sudadmin | 2 |
| sudadmin1 | 3 |
+--------------+-----------+
2 rows in set (0.00 sec)
```
User sudadmin registered a private template named 'Debian'.
http://10.147.59.107:8080/client/api?apikey=N5uHVOrg1Ek1F1a_5OXTz4WpLG3ewHqcbPUSBjQ-2CTJdxmUe2go0S8fyqH4Np0scYiehYg2KqthZXCWEyKx1A&command=listTemplates&templatefilter=self&signature=ODt7zEWCLL20z1FT%2FIkd1molRaM%3D
listTemplate with "templatefilter=self", lists the newly registered template.
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>1</count>
<template>
<id>51026d32-60ee-4e25-8ffd-3fa3c57fc14c</id>
<name>Debian</name>
<displaytext>Debian</displaytext>
<ispublic>false</ispublic>
<created>2016-11-10T17:18:00-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>false</isfeatured>
<crossZones>false</crossZones>
<ostypeid>38c1fc84-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>Debian GNU/Linux 7(64-bit)</ostypename>
<account>sudadmin</account>
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<status>Download Complete</status>
<size>2621440000</size>
<templatetype>USER</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>SUDDOMAIN</domain>
<domainid>a350c00d-4048-4876-ae09-74ad4b7bb28c</domainid>
<isextractable>false</isextractable>
<checksum>e87a6d7291b999c92baa9623c9c3c207</checksum>
<details>{hypervisortoolsversion=xenserver61}</details>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>false</isdynamicallyscalable>
</template>
</listtemplatesresponse>
```
User: sudadmin1
listTemplate with "templatefilter=self" does not list any template.
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=self&signature=RfKsdg3RxDkqJotbTlHU2RdbdPA%3D
`<listtemplatesresponse cloud-stack-version="4.8.0"/>
`
NO TEMPLATES
**listTemplate with "templatefilter=all" lists all templates**
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=all&signature=l5tubfyABT67d1jY702dvtZODbc%3D
Result:
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>3</count>
<template>
<id>38451a02-a687-11e6-a8c8-06f654000053</id>
<name>CentOS 5.6(64-bit) no GUI (XenServer)</name>
<displaytext>CentOS 5.6(64-bit) no GUI (XenServer)</displaytext>
<ispublic>true</ispublic>
....
</template>
<template>
<id>51026d32-60ee-4e25-8ffd-3fa3c57fc14c</id>
<name>Debian</name>
<displaytext>Debian</displaytext>
<ispublic>false</ispublic>
<created>2016-11-10T17:18:00-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>false</isfeatured>
<crossZones>false</crossZones>
<ostypeid>38c1fc84-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>Debian GNU/Linux 7(64-bit)</ostypename>
**<account>sudadmin</account>**
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<size>2621440000</size>
<templatetype>USER</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>SUDDOMAIN</domain>
<domainid>a350c00d-4048-4876-ae09-74ad4b7bb28c</domainid>
<isextractable>false</isextractable>
<checksum>e87a6d7291b999c92baa9623c9c3c207</checksum>
<details>{hypervisortoolsversion=xenserver61}</details>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>false</isdynamicallyscalable>
</template>
<template>
<id>5f6af7bb-d965-4b9b-ab45-6d455b0d6bbe</id>
<name>SystemVM Template (XenServer)</name>
<displaytext>SystemVM Template (XenServer)</displaytext>
<ispublic>false</ispublic>
.....
</template>
</listtemplatesresponse>
```
**After Fix:**
http://10.147.59.107:8080/client/api?apikey=3qeSuWadNzUFZ_i6c6zbwafjM3Eo0TWpkHw3En9jNsg5Ditk2N18DnbbL2quBYQ7FsdXQ8rwxbyFlE8vyUTwEg&command=listTemplates&templatefilter=all&signature=l5tubfyABT67d1jY702dvtZODbc%3D
```
<listtemplatesresponse cloud-stack-version="4.8.0">
<count>1</count>
<template>
<id>38451a02-a687-11e6-a8c8-06f654000053</id>
<name>CentOS 5.6(64-bit) no GUI (XenServer)</name>
<displaytext>CentOS 5.6(64-bit) no GUI (XenServer)</displaytext>
<ispublic>true</ispublic>
<created>2016-11-10T09:32:44-0500</created>
<isready>true</isready>
<passwordenabled>false</passwordenabled>
<format>VHD</format>
<isfeatured>true</isfeatured>
<crossZones>true</crossZones>
<ostypeid>38a2bfd6-a687-11e6-a8c8-06f654000053</ostypeid>
<ostypename>CentOS 5.6 (64-bit)</ostypename>
<account>system</account>
<zoneid>25fa5b74-d4c2-4bad-8e3a-ceffcd10985e</zoneid>
<zonename>z1</zonename>
<size>21474836480</size>
<templatetype>BUILTIN</templatetype>
<hypervisor>XenServer</hypervisor>
<domain>ROOT</domain>
<domainid>383e0ea6-a687-11e6-a8c8-06f654000053</domainid>
<isextractable>true</isextractable>
<checksum>905cec879afd9c9d22ecc8036131a180</checksum>
<sshkeyenabled>false</sshkeyenabled>
<isdynamicallyscalable>true</isdynamicallyscalable>
</template>
</listtemplatesresponse>
```
The bug has been fixed considering the points below:
1. templatefilter=all or isofilter=all is applicable only to admins and domain admins.
2. With templatefilter=all or isofilter=all, the visibility of templates in the system is as follows:
- the admin should be able to see all templates/ISOs in the system.
- a domain admin should be able to see all public templates and templates under its domain tree (including sub-domains).
- a domain admin in a project context should be able to see all public templates, templates registered
as the project account, and templates which are shared (using the updateTemplatePermission API) with the project account.
Also modified "test/integration/component/test_escalation_listTemplateDomainAdmin.py".
This marvin test was written for this scenario, but "templatefilter=all" was not used for the second account.
* pr/1763:
CLOUDSTACK-9594: reverted changes introduced in CLOUDSTACK-9376
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates of all domains
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9584: Fix intermittent test failure in `test_volumes`. The component/test_volume failures happen when the disk offering is randomly selected to be a custom one. This fixes that.
* pr/1822:
CLOUDSTACK-9584: Fix intermittent test failure in `test_volumes`
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9637: Template create from snapshot does not populate vm_template_details
**ISSUE**
============
Template create from snapshot does not populate vm_template_details
**REPRO STEPS**
==================
1. Register a template A and specify property:
Root disk controller: scsi
NIC adapter type: E1000
Keyboard type: us
2. Create a vm instance from template A
3. Take volume snapshot for vm instance
4. Delete VM instance
5. Switch to "Storage->Snapshots" and convert the snapshot to a template B
6. Observe that template B does not inherit the properties from template A; the table vm_template_details is empty
**SOLUTION**: Retrieve and add source template details to VMTemplateVO.
Before Fix:
```
mysql> select id,name,source_template_id from vm_template where id=202;
+-----+--------+--------------------+
| id | name | source_template_id |
+-----+--------+--------------------+
| 202 | Debian | NULL |
+-----+--------+--------------------+
1 row in set (0.00 sec)
mysql> select * from vm_template_details where template_id=202;
+----+-------------+--------------------+-------+---------+
| id | template_id | name | value | display |
+----+-------------+--------------------+-------+---------+
| 1 | 202 | keyboard | us | 1 |
| 2 | 202 | nicAdapter | E1000 | 1 |
| 3 | 202 | rootDiskController | scsi | 1 |
+----+-------------+--------------------+-------+---------+
3 rows in set (0.00 sec)
mysql> select id,name,source_template_id from vm_template where source_template_id=202;
+-----+----------------+--------------------+
| id | name | source_template_id |
+-----+----------------+--------------------+
| 203 | derived-debian | 202 |
+-----+----------------+--------------------+
1 row in set (0.00 sec)
mysql> select * from vm_template_details where template_id=203;
Empty set (0.00 sec)
```
After Fix:
```
mysql> select id,name,source_template_id from vm_template where source_template_id=202;
+-----+--------------------------+--------------------+
| id | name | source_template_id |
+-----+--------------------------+--------------------+
| 203 | derived-debian | 202 |
| 204 | debian-derived-after-fix | 202 |
+-----+--------------------------+--------------------+
2 rows in set (0.00 sec)
mysql> select * from vm_template_details where template_id=204;
+----+-------------+--------------------+-------+---------+
| id | template_id | name | value | display |
+----+-------------+--------------------+-------+---------+
| 4 | 204 | keyboard | us | 1 |
| 5 | 204 | nicAdapter | E1000 | 1 |
| 6 | 204 | rootDiskController | scsi | 1 |
+----+-------------+--------------------+-------+---------+
3 rows in set (0.00 sec)
```
**Marvin Test :** test_template_from_snapshot_with_template_details.py
**Result:**
```
test_01_create_template_snampshot (integration.component.test_template_from_snapshot_with_template_details.TestCreateTemplate) ... === TestName: test_01_create_template_snampshot | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 864.523s
OK
```
* pr/1805:
CLOUDSTACK-9637: Template create from snapshot does not populate vm_template_details
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9403 : Support for shared networks in Nuage VSP plugin
This is the first phase of support for shared networks in CloudStack through the NuageVsp Network Plugin. A shared network is a type of virtual network that is shared between multiple accounts, i.e. a shared network can be accessed by virtual machines that belong to many different accounts. This basic functionality will be supported with the below common use case:
- a shared network can be used for monitoring purposes. A shared network can be assigned to a domain and can be used for monitoring VMs belonging to all accounts in that domain.
With the current implementation in the NuageVsp plugin, each shared network needs its own unique IP address range and cannot overlap with another shared network.
In VSD, it is implemented in the following manner:
- In order to have tenant isolation for shared networks, we create a Shared L3 Subnet for each shared network and instantiate it across the relevant enterprises. A shared network will only exist under an enterprise when it is needed, i.e. when the first VM is spun up under that ACS domain inside that shared network.
PR contents:
1) Support for shared networks with tenant isolation on master with Nuage VSP SDN Plugin.
2) Marvin test coverage for shared networks on master with Nuage VSP SDN Plugin.
3) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
4) PEP8 & PyFlakes compliance with our Marvin test code.
* pr/1579:
CLOUDSTACK-9403: Support for shared networks in Nuage VSP plugin
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
-when processing a static nat rule, add a mangle table rule to mark the traffic
from the guest vm when it has an associated static nat rule, so that traffic gets
routed using the route table of the device which has the public ip associated
-fix the case where nic_device_id is empty when an ip is getting disassociated,
resulting in an empty deviceid in ips.json
-add utility methods in CsRule and CsRoute to add 'ip rule' and 'ip route' rules respectively
-ensure traffic from all public interfaces is connection-marked with the device number, and restored
for the reverse traffic; use the connection mark number to do a device-specific routing table lookup and
fill the device-specific routing table with a default route
-component tests for testing multiple public interfaces of the VR (see the illustrative policy-routing sketch below)
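To make the routing bullets concrete, here is an illustrative sketch of the kind of mangle/`ip rule`/`ip route` commands involved; the table name, mark value and addresses are hypothetical and this is not the VR's actual CsRule/CsRoute output.
```python
# Illustrative only: mark guest traffic, then use the mark to select a
# device-specific routing table that carries its own default route.
def public_iface_routing_cmds(dev="eth2", dev_num=2, gateway="172.16.1.1", guest_ip="10.1.1.10"):
    table = f"Table_{dev}"
    return [
        # mangle rule: mark traffic from the static-NAT guest with the device number
        f"iptables -t mangle -A PREROUTING -s {guest_ip} -j MARK --set-mark {dev_num}",
        # 'ip rule': marked traffic is looked up in the device-specific table
        f"ip rule add fwmark {dev_num} table {table}",
        # 'ip route': the device-specific table gets its own default route
        f"ip route add default via {gateway} dev {dev} table {table}",
    ]

if __name__ == "__main__":
    print("\n".join(public_iface_routing_cmds()))
```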
ensure the VLAN used for createPrivateGateway is determined after the guest
networks in the VPC are created, so that we skip the VLAN allocated for a guest
network for the private network of the vpc gateway
In marvin, the VirtualMachine object's nic attribute does not have the NICs in any
particular order in the array, so check the isdefault attribute to get the non-default NIC;
the earlier test assumed that nic[0] is the default NIC, which is not always true
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Multiple Internal LB rules (more than one Internal LB rule with the same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is added to the corresponding InternalLbVm instance, it replaces the existing one. Thus, traffic corresponding to these unresolved (old) Internal LB rules is getting dropped by the InternalLbVm instance.
PR contents:
1) Fix for this bug.
2) Marvin test coverage for Internal LB feature on master with native ACS setup (component directory) including validations for this bug fix.
3) Enhancements on our existing Internal LB Marvin test code (nuagevsp plugins directory) to validate this bug fix.
4) PEP8 & PyFlakes compliance with the added Marvin test code.
* pr/1577:
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Support for underlay features (Source & Static NAT to underlay) with the Nuage VSP SDN Plugin, including Marvin test coverage for the corresponding Source & Static NAT features on master. Moreover, our Marvin tests are written in such a way that they can validate our supported feature set with both the Nuage VSP SDN platform's overlay and underlay infra.
PR contents:
1) Support for Source NAT to underlay feature on master with Nuage VSP SDN Plugin.
2) Support for Static NAT to underlay feature on master with Nuage VSP SDN Plugin.
3) Marvin test coverage for Source & Static NAT to underlay on master with Nuage VSP SDN Plugin.
4) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
5) PEP8 & PyFlakes compliance with our Marvin test code.
* pr/1580:
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Marvin tests for Source NAT and Static NAT features verification with NuageVsp (both overlay and underlay infra).
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>, Frank Maximus <frank.maximus@nuagenetworks.net>
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor
## Introduction
[JIRA TICKET](https://issues.apache.org/jira/browse/CLOUDSTACK-9379)
It is desired to support nested virtualization at the VM level for the VMware hypervisor. The current behaviour supports enabling/disabling nested virtualization globally by modifying the global config `'vmware.nested.virtualization'`. The goal is to improve this feature by providing control at the VM level instead of only a global control.
A new global configuration is added to enable/disable VM-level nested virtualization control: `'vmware.nested.virtualization.perVM'`. Default value: false
After a vm deployment or start command, the vm params include the `'nestedVirtualizationFlag'` key and its value is:
- true -> nested virtualization enabled
- false -> nested virtualization disabled
**We determine whether nested virtualization is enabled or disabled by examining these 3 values (see the sketch after the list below):**
- **(1)** global configuration `'vmware.nested.virtualization'` value
- **(2)** global configuration `'vmware.nested.virtualization.perVM'` value
- **(3)** `'nestedVirtualizationFlag'` value in `user_vm_details` if present, `null` if not.
Using these 3 values, the use cases are:
- **(1)** = TRUE, **(2)** = TRUE, **(3)** is null -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = TRUE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = TRUE, **(2)** = FALSE, **(3)** indifferent -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** is null -> _DISABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = TRUE -> _ENABLED_
- **(1)** = FALSE, **(2)** = TRUE, **(3)** = FALSE -> _DISABLED_
- **(1)** = FALSE, **(2)** = FALSE, **(3)** indifferent -> _DISABLED_
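A compact way to read the table above is the following illustrative Python sketch (names are descriptive only; the actual logic lives in the VMware plugin's Java code): when the per-VM control is enabled and a VM-level flag exists it wins, otherwise the global setting applies.
```python
# Illustrative sketch of the decision table above; not the plugin's actual code.
def nested_virtualization_enabled(global_nv, per_vm_nv, vm_flag):
    """global_nv = (1), per_vm_nv = (2), vm_flag = (3) or None when unset."""
    if per_vm_nv and vm_flag is not None:
        return vm_flag       # per-VM value wins when per-VM control is enabled
    return global_nv         # otherwise fall back to the global setting

# The eight cases from the table:
assert nested_virtualization_enabled(True, True, None) is True
assert nested_virtualization_enabled(True, True, True) is True
assert nested_virtualization_enabled(True, True, False) is False
assert nested_virtualization_enabled(True, False, None) is True
assert nested_virtualization_enabled(False, True, None) is False
assert nested_virtualization_enabled(False, True, True) is True
assert nested_virtualization_enabled(False, True, False) is False
assert nested_virtualization_enabled(False, False, None) is False
```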
* pr/1542:
CLOUDSTACK-9379: Support nested virtualization at VM level on VMware Hypervisor
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
When a guest VM has NICs in more than one guest network, set the tag for
each host in /etc/dhcphosts.txt, and use the tag to add an exception in
/etc/dhcpopts.txt to prevent sending the default route and dns server when the nic is in a non-default network.
This was the behaviour with edithosts.sh prior to 4.6.
Added a new test case test_router_dhcp_opts to test CloudStack's use of the DHCP option files (a minimal illustration follows).
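For illustration, the dnsmasq-style entries involved might be generated along these lines; the exact field order and option handling in the VR's edithosts code may differ, so treat this as a sketch only.
```python
# Hedged sketch: tag the dhcphosts entry for a NIC on a non-default network and
# emit dhcpopts exceptions so option 3 (router) and option 6 (dns-server) are
# sent empty, i.e. not handed to the guest on that NIC.
def dhcp_entries_for_non_default_nic(mac, ip, hostname):
    tag = ip.replace(".", "_")
    dhcphosts_line = f"{mac},set:{tag},{ip},{hostname},infinite"
    dhcpopts_lines = [f"tag:{tag},3", f"tag:{tag},6"]
    return dhcphosts_line, dhcpopts_lines

host_line, opts_lines = dhcp_entries_for_non_default_nic("02:00:4c:7f:00:01", "10.1.2.15", "vm-nondefault")
```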
CLOUDSTACK-9594: API "list templates templatefilter=all" reveals all templates of all domains
The bug has been fixed considering the points below:
1. templatefilter=all or isofilter=all is applicable only to admin
and domain admin.
2. With templatefilter=all or isofilter=all, the visibility
of templates in the system is as follows:
a. admin should be able to see all templates/ISOs in the system.
b. domain admin should be able to see all public templates and
templates under its domain tree (including sub-domains).
c. domain admin in a project context should be able to see all public
templates, templates registered as the project account, and templates
which are shared (using the updateTemplatePermission API) with the project account.
Modified
"test/integration/component/test_escalation_listTemplateDomainAdmin.py".
This marvin test was written for this scenario, but "templatefilter=all" was not used
for the second account.
-'re' meta chars were causing VPN user add/delete to fail
-there is no real use of python 're' in the CsFile.py utility methods searchString and deleteLine;
replacing with a regular string search instead
-modifying the smoke test for VPN user add/delete to cover all permissible chars
CLOUDSTACK-9438: Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9438
### Introduction
From #1361 it was possible to configure NFS version for secondary storage mount.
However, changing the NFS version requires inserting a new detail into the `image_store_details` table, with `name = 'nfs.version'` and `value = X` where X is the desired NFS version, and then restarting the management server for the changes to take effect.
Our improvement aims to make NFS version changeable from UI, instead of previously described workflow.
### Proposed solution
Basically, the NFS version is defined as an image store ConfigKey; this implied:
* Adding a new Config scope: **ImageStore**
* Make `ImageStoreDetailsDao` class to extend `ResourceDetailsDaoBase` and `ImageStoreDetailVO` implement `ResourceDetail`
* Insert `'display'` column on `image_store_details` table
* Extending `ListCfgsCmd` and `UpdateCfgCmd` to support **ImageStore** scope, which implied:
** Injecting `ImageStoreDetailsDao` and `ImageStoreDao` on `ConfigurationManagerImpl` class, on `cloud-server` module.
### Important
It is important to mention that the `ImageStoreDaoImpl` and `ImageStoreDetailsDaoImpl` classes were moved from the `cloud-engine-storage` to the `cloud-engine-schema` module in order for Spring to find those beans to inject into `ConfigurationManagerImpl` in the `cloud-server` module.
We had these maven dependencies between modules:
* `cloud-server --> cloud-engine-schema`
* `cloud-engine-storage --> cloud-secondary-storage --> cloud-server`
As `ImageStoreDaoImpl` and `ImageStoreDetailsDao` were defined in `cloud-engine-storage`, and they were needed in the `cloud-server` module to be injected into `ConfigurationManagerImpl`, adding a dependency from `cloud-server` to `cloud-engine-storage` would have introduced a dependency cycle. To avoid this cycle, we moved those classes to the `cloud-engine-schema` module.
* pr/1615:
CLOUDSTACK-9438: Fix for CLOUDSTACK-9252 - Make NFS version changeable in UI
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9504: System VMs on Managed Storage
This PR makes it easier to spin up system VMs on managed storage.
Managed storage is when you have a dedicated volume on a SAN for a particular virtual disk (making it easier to deliver QoS).
For example, with this PR, you'd likely have a single virtual disk for a system VM. On XenServer, that virtual disk resides by itself in a storage repository (no other virtual disks share this storage repository).
It was possible in the past to spin up system VMs that used managed storage, but this PR facilitates the use case by making changes to the System Service Offering dialog (and by putting in some parameter checks in the management server).
JIRA ticket: https://issues.apache.org/jira/browse/CLOUDSTACK-9504
* pr/1642:
Added support for system VMs to make use of managed storage
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
fix to ensure only physical network with guest traffic is picked up for
creating a private network for vpc private gateway
Signed-off-by: Murali Reddy <muralimmreddy@gmail.com>
This closes #1724
- Switches to macchinina as template for VM in the tests
- Modifies the ostype of the macchinina template to 'Other Linux (64-bit)'
- Check template download status, fixes Nonetype iterable issue
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
As per previous discussions and the ticket, a template deletion may result in failure
(exception thrown) for templates that are not properly downloaded. In the tearDown
method, a template deletion may be attempted, but on failure we may ignore it,
as account deletion/tearDown will retry cleaning up the resources owned by the account.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- test_01_test_vm_volume_snapshot not supported for Xen, tests keep failing
- Skip snapshot tests for centos6/kvm as snapshot is not supported by older
qemu-img versions
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
SSH to the VR for vmware goes via the mgmt server and uses ssh keys at the
/var/cloudstack path. Add suitable checks to tests failing on vmware.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Handle case where physical network instance does not have vlan attribute
- Handle case where listIso response may not have status attribute
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The quota integration test requires special setup and is moved to plugins
directory as in 4.9 and master branch.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9505: Fix test_deploy_vgpu_enabled tests cleanup
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9505
## Description
Clean up resources after running `test_deploy_vgpu_enabled.py`. Although the test passes, resources are left behind and need to be cleaned up.
* pr/1693:
CLOUDSTACK-9505: Fix test_deploy_vgpu_enabled tests cleanup
Signed-off-by: John Burwell <meaux@cockamamy.net>
Switched to the official SolidFire SDK for Python
SolidFire has recently released an official SDK for Python.
I have converted all of the SolidFire integration tests over to making use of this new SDK.
For testing, I re-ran each test and observed success.
https://pypi.python.org/pypi/solidfire-sdk-python/1.1.0.92
* pr/1689:
Switched to the official SolidFire SDK for Python
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9480, CLOUDSTACK-9495 fix egress rule incorrect behavior
When 'default egress policy' is set to 'allow' in the network offering, any egress rule that is added will 'deny' the traffic, overriding the default behaviour.
Conversely, when 'default egress policy' is set to 'deny' in the network offering, any egress rule that is added will 'allow' the traffic, overriding the default behaviour.
While this works as expected for 'tcp' and 'udp', for the 'icmp' protocol the action is always set to ALLOW. This patch keeps the behaviour consistent across all protocols, as sketched below.
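In other words, the intended semantics, which the patch now applies uniformly to tcp, udp and icmp, can be summarised in a tiny illustrative sketch (not the actual rule-programming code):
```python
# Illustrative: an egress rule always inverts the network offering's default
# egress policy, regardless of protocol (tcp, udp or icmp).
def egress_rule_action(default_egress_policy_allow: bool) -> str:
    return "DENY" if default_egress_policy_allow else "ALLOW"

assert egress_rule_action(True) == "DENY"    # default allow -> added rule denies
assert egress_rule_action(False) == "ALLOW"  # default deny  -> added rule allows
```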
Results of running test/integration/component/test_egress_fw_rules.py. Without the patch, the test_02_egress_fr2 test was failing. This patch fixes the test_02_egress_fr2 scenario.
-----------------------------------------------------------------------------------------------------
Test By-default the communication from guest n/w to public n/w is NOT allowed. ... === TestName: test_01_1_egress_fr1 | Status : SUCCESS ===
ok
Test By-default the communication from guest n/w to public n/w is allowed. ... === TestName: test_01_egress_fr1 | Status : SUCCESS ===
ok
Test Allow Communication using Egress rule with CIDR + Port Range + Protocol. ... === TestName: test_02_1_egress_fr2 | Status : SUCCESS ===
ok
Test Allow Communication using Egress rule with CIDR + Port Range + Protocol. ... === TestName: test_02_egress_fr2 | Status : SUCCESS ===
ok
Test Communication blocked with network that is other than specified ... === TestName: test_03_1_egress_fr3 | Status : SUCCESS ===
ok
Test Communication blocked with network that is other than specified ... === TestName: test_03_egress_fr3 | Status : SUCCESS ===
ok
Test Create Egress rule and check the Firewall_Rules DB table ... === TestName: test_04_1_egress_fr4 | Status : SUCCESS ===
ok
Test Create Egress rule and check the Firewall_Rules DB table ... === TestName: test_04_egress_fr4 | Status : SUCCESS ===
ok
Test Create Egress rule and check the IP tables ... SKIP: Skip
Test Create Egress rule and check the IP tables ... SKIP: Skip
Test Create Egress rule without CIDR ... === TestName: test_06_1_egress_fr6 | Status : SUCCESS ===
ok
Test Create Egress rule without CIDR ... === TestName: test_06_egress_fr6 | Status : SUCCESS ===
ok
Test Create Egress rule without End Port ... === TestName: test_07_1_egress_fr7 | Status : EXCEPTION ===
ERROR
Test Create Egress rule without End Port ... === TestName: test_07_egress_fr7 | Status : SUCCESS ===
ok
Test Port Forwarding and Egress Conflict ... SKIP: Skip
Test Port Forwarding and Egress Conflict ... SKIP: Skip
Test Delete Egress rule ... === TestName: test_09_1_egress_fr9 | Status : SUCCESS ===
ok
Test Delete Egress rule ... === TestName: test_09_egress_fr9 | Status : SUCCESS ===
ok
Test Invalid CIDR and Invalid Port ranges ... === TestName: test_10_1_egress_fr10 | Status : SUCCESS ===
ok
Test Invalid CIDR and Invalid Port ranges ... === TestName: test_10_egress_fr10 | Status : SUCCESS ===
ok
Test Regression on Firewall + PF + LB + SNAT ... === TestName: test_11_1_egress_fr11 | Status : SUCCESS ===
ok
Test Regression on Firewall + PF + LB + SNAT ... === TestName: test_11_egress_fr11 | Status : SUCCESS ===
ok
Test Reboot Router ... === TestName: test_12_1_egress_fr12 | Status : SUCCESS ===
ok
Test Reboot Router ... === TestName: test_12_egress_fr12 | Status : EXCEPTION ===
ERROR
Test Redundant Router : Master failover ... === TestName: test_13_1_egress_fr13 | Status : SUCCESS ===
ok
Test Redundant Router : Master failover ... === TestName: test_13_egress_fr13 | Status : SUCCESS ===
ok
-----------------------------------------------------------------------------------------------------
* pr/1666:
fix egress rule incorrect behavior
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9480: Egress Firewall: Incorrect use of Allow/Deny for ICMP
The fix ensures ICMP, TCP, and UDP are handled similarly w.r.t. the egress rule action.
CLOUDSTACK-9495: Egress rules functionality broken when protocol=all is specified
When protocol=all was specified, the CIDR was ignored. The fix ensures that if a CIDR is specified
it is always used in configuring the iptables rules.
2 new test cases to test /32 CIDR
Adding support for cross-cluster storage migration for managed storage when using XenServer
This PR adds support for cross-cluster storage migration of VMs that make use of managed storage with XenServer.
Managed storage is when you have a 1:1 mapping between a virtual disk and a volume on a SAN (in the case of XenServer, an SR is placed on this SAN volume and a single virtual disk placed in the SR).
Managed storage allows features such as storage QoS and SAN-side snapshots to work (sort of analogous to VMware VVols).
This PR focuses on enabling VMs that are using managed storage to be migrated across XenServer clusters.
I have successfully run the following tests on this branch:
TestVolumes.py
TestSnapshots.py
TestVMSnapshots.py
TestAddRemoveHosts.py
TestVMMigrationWithStorage.py (which is a new test that is being added with this PR)
* pr/1671:
Adding support for cross-cluster storage migration for managed storage when using XenServer
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
CLOUDSTACK-9428: Fix for CLOUDSTACK-9211 - Improve performance of 3D GPU support in cloud-plugin-hypervisor-vmware
JIRA TICKET: https://issues.apache.org/jira/browse/CLOUDSTACK-9428
### Introduction
On #1310, passing the vRAM size to support 3D GPU was addressed on VMware. It was found that it could be improved to increase performance by reducing extra API calls, as described below.
### Improvement
On VMware, `VmwareResource` manages execution of `StartCommand`. Before sending the power-on command to the ESXi hypervisor, the vm is configured by calling the `reconfigVMTask` web method on the vSphere client's `VimPortType` web service.
It was found that we were calling this method twice when passing the vRAM size, as it implied creating a new vm config spec that only edited the video card specs and making an extra call to `reconfigVMTask`.
We propose removing the extra web service call by adjusting the vm's config spec. This way the video card gets properly configured (when passing the vRAM size) in the same configure call, increasing performance.
### Use case (passing vRAM size)
* Deploy a new VM, let its id be X
* Stop VM
* Execute SQL, where X is vm's id and Z is vRAM size (in kB):
```
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'mks.enable3d', 'true');
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'mks.use3dRenderer', 'automatic');
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'svga.autodetect', 'false');
INSERT INTO cloud.user_vm_details (vm_id, name, value) VALUES (X, 'svga.vramSize', Z);
```
* Start VM
* pr/1605:
CLOUDSTACK-9428: Add marvin test
CLOUDSTACK-9428: Fix for CLOUDSTACK-9211 - Improve performance
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
DNS on the VR should not be publicly accessible as it may be prone to DNS
amplification/reflection attacks. This fixes the issue by only allowing the VR
DNS (port 53) to be accessible from the guest network cidr, as per the fix in:
https://issues.apache.org/jira/browse/CLOUDSTACK-6432
- Only allows guest network cidrs to query VR DNS on port 53.
- Includes marvin smoke test that checks the VR DNS accessibility checks from
guest and non-guest network.
- Fixes the Marvin sshClient to avoid using the ssh agent when a password is provided;
previously some environments may have seen a 'No existing session' exception without
this fix.
- Adds a new dnspython dependency that is used to perform dns resolutions in the
tests (a minimal example of such a check follows).
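A minimal sketch of the kind of resolution check the new test performs with dnspython is shown below; `vr_ip` and the queried name are placeholders, and the real smoke test may structure this differently.
```python
# Hedged sketch: ask the VR's DNS (port 53) for an A record. From a guest
# network source this should succeed; from a non-guest source it should now
# be blocked by the VR firewall.
import dns.resolver

def vr_resolves(vr_ip, name="apache.org", timeout=5):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [vr_ip]
    resolver.lifetime = timeout
    try:
        answers = resolver.query(name, "A")   # dnspython 1.x API; 2.x uses resolve()
        return len(answers) > 0
    except Exception:
        return False
```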
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Often, patch and security releases do not require schema migrations or
data migrations. However, if an empty upgrade class and associated
scripts are not defined, the upgrade process will break. With this
change, if a release does not have an upgrade, a noop DbUpgrade is added
to the upgrade path. This approach allows the upgrade to proceed and
for the database to properly reflect the installed version. This change
should make the release process simpler as RMs no longer need to
remember to create this boilerplate code when starting a new release.
Beginning with the 4.8.2.0 and 4.9.1.0 releases, the project will
formally adopt a four (4) position release number to properly accommodate
releases that contain only CVE fixes. The DatabaseUpgradeChecker and
Version classes made assumptions that they would always parse and
compare three (3) position version numbers. This change adds the
CloudStackVersion value object that supports both three (3) and four (4)
position version numbers. It encapsulates version comparison logic, as well as
the rules to allow three (3) and four (4) position versions to interoperate (see the sketch after the list below).
* Modifies DatabaseUpgradeChecker to derive an upgrade path for
a version that was not explicitly specified. It determines the
first release before it with database migrations and uses
that release's list as the basis for the version being calculated. A
noop upgrade is then added to the list, which causes no schema changes
or data migrations but updates the database to the new version.
* Adds unit tests for the upgrade path calculation logic in
DatabaseUpgradeChecker
* Removes dummy upgrade logic for the 4.8.2.0 introduced in previous
versions of this patch
* Introduces the CloudStackVersion value object which parses and
compares three (3) and four (4) position version numbers. This class
is intended to replace com.cloud.maint.Version.
* Adds the junit-dataprovider dependency -- allowing test data to be
concisely generated separately from the execution of a test case.
Used extensively in the CloudStackVersionTest.
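The interoperation rule can be illustrated with a small sketch (Python here for brevity; the real implementation is the Java CloudStackVersion class, and the exact padding rule is an assumption): a missing fourth position is treated as 0, so 4.8.2 and 4.8.2.0 compare as equal.
```python
# Illustrative sketch of comparing three- and four-position version numbers.
def parse_version(version):
    parts = [int(p) for p in version.split(".")]
    if len(parts) not in (3, 4):
        raise ValueError("expected a 3 or 4 position version number")
    return tuple(parts + [0] * (4 - len(parts)))  # pad to four positions

assert parse_version("4.8.2") == parse_version("4.8.2.0")
assert parse_version("4.9.1.0") > parse_version("4.9.0.1")
assert parse_version("4.8.2.1") > parse_version("4.8.2")
```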
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
In several of the list_acl tests, the tests run for the simulator only; in the
(class) setup, domains and accounts are created for the test. When the
tests end, the (class) teardown methods delete and remove these resources.
Due to dependence of one of the resources on the other, domain2 on domain1,
domain2 needs to be removed/cleaned up before domain1. Due to this issue,
several Travis test runs have failed in the past such as:
https://travis-ci.org/apache/cloudstack/jobs/152610967
https://travis-ci.org/apache/cloudstack/jobs/152610968
Changing the order of cleanup fixes the tests.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
in BVT
Bug-Id: CLOUDSTACK-9328
[CLOUDSTACK-9328]: Made changes as per the review comment from Shwetaag
[CLOUDSTACK-9328]: Made changes based on the CI results
[CLOUDSTACK-9296] Start ipsec for client VPN
This fix starts the IPSEC daemon when enabling client-side vpn.
* pr/1423:
[CLOUDSTACK-9296] Start ipsec for client VPN
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.7:
CLOUDSTACK-9376: Restrict listTemplates API with filter=all for root admin
CLOUDSTACK-9369: Restrict default login to ldap/native users
Add lsb-release dependency to mgmt server and agent on Debian/Ubuntu.
Emit template UUID and class type over event bus when deleting templates.
- Restricts default login auth handler to ldap and native-cloudstack users
- Refactors and create re-usable method to find domain by id/path
- Adds unit test for refactored method in DomainManagerImpl
- Adds smoke test for login handler
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9388: Remove string conversion in assertion statement
Remove the string conversion in the assertion statement, since the start port parameter in the listFirewall API response is of type integer.
Test Result:
=========
"Checking firewall rules deletion after static NAT disable ... === TestName: test_01_firewall_rules_port_fw | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 153.974s
OK
* pr/1561:
CLOUDSTACK-9388: Remove string conversion in assertion statement
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9365 : updateVirtualMachine with userdata should not error when a VM is attached to multiple networks from which one or more doesn't support userdata
* pr/1523:
Marvin script for cloudstack-9365
CLOUDSTACK-9365 : updateVirtualMachine with userdata should not error when a VM is attached to multiple networks from which one or more doesn't support userdata
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Introduced a new capacityType parameter in the updateCapacityState method and made the necessary changes to add a capacity_type clause to the SQL;
also fixed incorrect SQL builder logic (an unused code path, which is why it never surfaced).
Added a marvin test to check host and storage pool capacity when a host is disabled.
Added conditions to ensure the capacity_type clause is added only when the capacity_type length is greater than 0.
Added checks in the marvin test to ensure capacity exists for a host before disabling it.
Added checks to avoid index-out-of-range exceptions.
- Fixes oobm integration test to skip if known ipmitool bug is hit
- Fixes ProcessTest unit test case to use sleep
- Removes redundant unit test that covers code in ProcessTest
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster
This PR addresses the following JIRA ticket:
https://issues.apache.org/jira/browse/CLOUDSTACK-8813
The problem is that there need to be notifications sent when a host is added to, about to be removed from, and removed from a cluster.
Such notifications can be used for many purposes. For example, it can allow storage plug-ins to update ACLs on their storage systems. Also, it can allow us to clean up IQNs from ESXi hosts that are no longer needed.
* pr/816:
CLOUDSTACK-8813: Notify listeners when a host has been added to a cluster, is about to be removed from a cluster, or has been removed from a cluster
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9299: Out-of-band Management for CloudStack
Support access to a host's out-of-band management interface (e.g. IPMI, iLO,
DRAC, etc.) to manage host power operations (on/off etc.) and querying current
power state in CloudStack.
Given the wide range of out-of-band management interfaces such as iLO and iDRAC,
the service implementation allows for the development of separate drivers as plugins.
This feature comes with an ipmitool-based driver that uses
ipmitool (http://linux.die.net/man/1/ipmitool) to communicate with any
out-of-band management interface that supports IPMI 2.0.
This feature allows following common use-cases:
- Restarting stalled/failed hosts
- Powering off under-utilised hosts
- Powering on hosts for provisioning or to increase capacity
- Allowing system administrators to see the current power state of the host
For testing this feature, please install `ipmitool` (using yum/apt/brew) and `ipmisim`:
https://pypi.python.org/pypi/ipmisim
The default ipmitool location is assumed to be /usr/bin; if this is different in your env, please fix the global setting. See the FS for details on the various global settings.
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
/cc @jburwell @swill @abhinandanprateek @murali-reddy @borisstoyanov
* pr/1502:
maven: ignore utils/testsmallfileinactive for rat checking
CLOUDSTACK-9378: Fix for #1497
HypervisorUtilsTest: increase timeout to 8 seconds
travis: Use patched version of ipmitool for tests
CLOUDSTACK-9299: Out-of-band Management for CloudStack
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.8:
CLOUDSTACK-9287 - Improve test by checking if pvt gw is removed and fix typos
Handle private gateways more reliably
CLOUDSTACK-9287 - Fix RVR public interface
CLOUDSTACK-9287 - Add integration test to cover the private gateway related changes
CLOUDSTACK-9287 - Refactor the interface state configuration
CLOUDSTACK-9287 - Check if the nic profile has already been removed from a certain router
CLOUDSTACK-9287 - Bring up the private gw interface on state change to master
CLOUDSTACK-9287 - Make sure private gw interface is not used for default gw
CLOUDSTACK-9287 - Add integration test to cover the private gw interface/mac address issues
CLOUDSTACK-9287 - Put private gateway interface down on backup router
CLOUDSTACK-9287 - Generate new mac address if router is redundant and nic profile exists
Add private gateway IP to router initialization config
apply static routes on change to master state
* 4.7:
CLOUDSTACK-9287 - Improve test by checking if pvt gw is removed and fix typos
Handle private gateways more reliably
CLOUDSTACK-9287 - Fix RVR public interface
CLOUDSTACK-9287 - Add integration test to cover the private gateway related changes
CLOUDSTACK-9287 - Refactor the interface state configuration
CLOUDSTACK-9287 - Check if the nic profile has already been removed from a certain router
CLOUDSTACK-9287 - Bring up the private gw interface on state change to master
CLOUDSTACK-9287 - Make sure private gw interface is not used for default gw
CLOUDSTACK-9287 - Add integration test to cover the private gw interface/mac address issues
CLOUDSTACK-9287 - Put private gateway interface down on backup router
CLOUDSTACK-9287 - Generate new mac address if router is redundant and nic profile exists
Add private gateway IP to router initialization config
apply static routes on change to master state
CLOUDSTACK-9287 - Fix unique mac address per rVPC routerThis is work by @wilderrodrigues, see PR #1413 It contains important fixes and I think it needs to be included so I send the PR again.
* pr/1483:
CLOUDSTACK-9287 - Improve test by checking if pvt gw is removed and fix typos
CLOUDSTACK-9287 - Fix RVR public interface
CLOUDSTACK-9287 - Add integration test to cover the private gateway related changes
CLOUDSTACK-9287 - Refactor the interface state configuration
CLOUDSTACK-9287 - Check if the nic profile has already been removed from a certain router
CLOUDSTACK-9287 - Bring up the private gw interface on state change to master
CLOUDSTACK-9287 - Make sure private gw interface is not used for default gw
CLOUDSTACK-9287 - Add integration test to cover the private gw interface/mac address issues
CLOUDSTACK-9287 - Put private gateway interface down on backup router
CLOUDSTACK-9287 - Generate new mac address if router is redundant and nic profile exists
Signed-off-by: Will Stevens <williamstevens@gmail.com>
- For out-of-band management feature (CLOUDSTACK-9299) use patched version of
ipmitool that would work on trusty travis machines
- The ipmitool used is from xenial/16.04 release with patch from RedHat
https://bugzilla.redhat.com/show_bug.cgi?id=1286035
- Installs ipmitool from the xenial repositories to get all the dependencies
and then installs the patched deb version
- Skip test if the known failure occurs
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Support access to a host’s out-of-band management interface (e.g. IPMI, iLO,
DRAC, etc.) to manage host power operations (on/off etc.) and querying current
power state in CloudStack.
Given the wide range of out-of-band management interfaces such as iLO and iDRAC,
the service implementation allows for the development of separate drivers as plugins.
This feature comes with an ipmitool-based driver that uses
ipmitool (http://linux.die.net/man/1/ipmitool) to communicate with any
out-of-band management interface that supports IPMI 2.0.
This feature allows following common use-cases:
- Restarting stalled/failed hosts
- Powering off under-utilised hosts
- Powering on hosts for provisioning or to increase capacity
- Allowing system administrators to see the current power state of the host
For testing this feature `ipmisim` can be used:
https://pypi.python.org/pypi/ipmisim
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows root administrators to define new roles and associate API
permissions to them.
A limited form of role-based access control for the CloudStack management server
API is provided through a properties file, commands.properties, embedded in the
WAR distribution. Therefore, customizing API permissions requires unpacking the
distribution and modifying this file consistently on all servers. The old system
also does not permit the specification of additional roles.
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dynamic+Role+Based+API+Access+Checker+for+CloudStack
DB-Backed Dynamic Role Based API Access Checker for CloudStack brings following
changes, features and use-cases:
- Moves the API access definitions from commands.properties to the mgmt server DB
- Allows defining custom roles (such as a read-only ROOT admin) beyond the
current set of four (4) roles
- All roles will resolve to one of the four known role types (Admin, Resource
Admin, Domain Admin and User); this association is maintained by requiring
all newly defined roles to specify a role type.
- Allows changes to roles and API permissions per role at runtime including additions or
removal of roles and/or modifications of permissions, without the need
of restarting management server(s)
Upgrade/installation notes:
- The feature will be enabled by default for new installations, existing
deployments will continue to use the older static role based api access checker
with an option to enable this feature
- During fresh installation or upgrade, the upgrade paths will add four default
roles based on the four default role types
- For ease of migration, at the time of upgrade commands.properties will be used
to add existing set of permissions to the default roles. cloud.account
will have a new role_id column which will be populated based on default roles
as well
Dynamic-roles migration tool: scripts/util/migrate-dynamicroles.py
- Allows admins to migrate to the dynamic role based checker at a future date
- Performs a harder one-way migrate and update
- Migrates rules from existing commands.properties file into db and deprecates it
- Enables an internal hidden switch to enable dynamic role based checker feature
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9351: Add ids parameter to resource listing API calls
## General behaviour
A new parameter is added to each method; its type is a list of IDs of the entity, and it is mutually exclusive with id. (Similar to the <code>id</code> and <code>ids</code> parameters in the <code>listVirtualMachines</code> method.)
### API Methods affected
* <code>listTemplates</code>: new parameter **<code>ids</code>**, mutually exclusive with <code>id</code>
* <code>listVolumes</code>: new parameter **<code>ids</code>**, mutually exclusive with <code>id</code>
* <code>listSnapshots</code>: new parameter **<code>ids</code>**, mutually exclusive with <code>id</code>
* <code>listVMSnapshots</code>: new parameter **<code>vmsnapshotids</code>**, mutually exclusive with <code>vmsnapshotid</code> (see the usage sketch below)
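As a usage illustration, a Marvin-style sketch (not part of the change itself) might pass the new parameter as a comma-separated list of UUIDs; it assumes an authenticated `apiclient` and `Volume.list` from `marvin.lib.base`.
```python
# Hedged sketch: list only the volumes whose UUIDs are given, using the new
# mutually-exclusive `ids` parameter instead of repeated single-id calls.
from marvin.lib.base import Volume

def list_selected_volumes(apiclient, volume_uuids):
    return Volume.list(apiclient, ids=",".join(volume_uuids), listall=True) or []
```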
* pr/1497:
CLOUDSTACK-9351: Add marvin test and add it to travis file
CLOUDSTACK-9351: Add ids parameter to resource listing API calls
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9350: KVM HA - Fix CheckOnHost for Local storage
- Also skip HA on VMs that are using local storage
* pr/1496:
CLOUDSTACK-9350: KVM-HA- Fix CheckOnHost for Local storage - Also skip HA on VMs that are using local storage
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Bump ssh retries to prevent false positives of test_loadbalance
No more false positives after this change.
```
[root@cs1 integration]# cat /tmp//MarvinLogs/test_loadbalance_TR5RJD/results.txt
Test to create Load balancing rule with source NAT ... === TestName: test_01_create_lb_rule_src_nat | Status : SUCCESS ===
ok
Test to create Load balancing rule with non source NAT ... === TestName: test_02_create_lb_rule_non_nat | Status : SUCCESS ===
ok
Test for assign & removing load balancing rule ... === TestName: test_assign_and_removal_lb | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 3 tests in 930.418s
OK
```
* pr/1473:
bump ssh retries to prevent false positives of test_loadbalance
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9323: Fix cancel host maintenance
Fix cancel host maintenance so that if maintenance is cancelled the host comes back to a normal state gracefully.
Added marvin tests for host maintenance.
* pr/1454:
CLOUDSTACK-9323: Fix cancel maintenance so that if maintenance is cancelled the host comes back to a normal state gracefully. Added marvin tests for host maintenance.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.8:
Removed sleeps and used validateList as requested.
Added required_hardware="false" attr above test_02_root_volume_attach_detach
Modified test_volumes.py to include a hypervisor test for root attach/detach testing
Let hypervisor type KVM and Simulator detach root volumes. Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
* 4.7:
Removed sleeps and used validateList as requested.
Added required_hardware="false" attr above test_02_root_volume_attach_detach
Modified test_volumes.py to include a hypervisor test for root attach/detach testing
Let hypervisor type KVM and Simulator detach root volumes. Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
CLOUDSTACK-9322: Support for Internal LB functionality with Nuage VSP SDN Plugin including Marvin test coverage
Task: https://issues.apache.org/jira/browse/CLOUDSTACK-9322
PR contents:
1) UI changes to support LB provider InternalLbVm during VPC offering creation.
2) Bug fix for CLOUDSTACK-9320.
3) Nuage VSP SDN Plugin related enhancements for VPC network functionality.
4) Marvin test coverage for Internal LB support on master with Nuage VSP SDN Plugin.
5) Enhancements on our existing Marvin test code (nuagevsp plugins directory).
6) PEP8 & PyFlakes compliance with our Marvin test code.
Test run:
CloudStack$ nosetests --with-marvin --marvin-config=nuage_ant.cfg test/integration/plugins/nuagevsp/ -a tags=nuagevsp
Test results:
Test user data and password reset functionality with Nuage VSP SDN plugin ... === TestName: test_nuage_UserDataPasswordReset | Status : SUCCESS ===
ok
Test Nuage VSP VPC Offering with different combinations of LB service providers ... === TestName: test_01_nuage_internallb_vpc_Offering | Status : SUCCESS ===
ok
Test Nuage VSP VPC Network Offering with and without Internal LB service ... === TestName: test_02_nuage_internallb_vpc_network_offering | Status : SUCCESS ===
ok
Test Nuage VSP VPC Networks with and without Internal LB service ... === TestName: test_03_nuage_internallb_vpc_networks | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with different combinations of Internal LB rules ... === TestName: test_04_nuage_internallb_rules | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality by performing (wget) traffic tests within a VPC ... === TestName: test_05_nuage_internallb_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with different LB algorithms by performing (wget) traffic tests ... === TestName: test_06_nuage_internallb_algorithms_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with restarts of VPC network components by performing (wget) ... === TestName: test_07_nuage_internallb_vpc_network_restarts_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with InternalLbVm appliance operations by performing (wget) ... === TestName: test_08_nuage_internallb_appliance_operations_traffic | Status : SUCCESS ===
ok
Test Basic VPC Network Functionality with Nuage VSP SDN plugin ... === TestName: test_nuage_vpc_network | Status : SUCCESS ===
ok
Test Nuage VSP SDN plugin with basic Isolated Network functionality ... === TestName: test_nuage_vsp | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 11 tests in 12094.705s
OK
Test run logs:
[results.txt](https://github.com/apache/cloudstack/files/187587/results.txt)
[runinfo.txt](https://github.com/apache/cloudstack/files/187588/runinfo.txt)
Test config file:
[nuage_ant.txt](https://github.com/apache/cloudstack/files/222711/nuage_ant.txt)
Note: Attached the Marvin config file as .txt instead of .cfg.
PEP8 & PyFlakes Compliance:
CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/nuageTestCase.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_password_reset.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vpc_internal_lb.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vpc_network.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vsp.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
#CLOUDSTACK-9322
* pr/1452:
CLOUDSTACK-9322 : Marvin tests for Internal Lb with Nuage VSP
CLOUDSTACK-9320 : InternalLBVM is not getting destroyed when the last Internal Load Balancer rule is removed for the corresponding source IP address
CLOUDSTACK-9322 : Changes to support InternalLbVm with Nuage VSP plugin
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-8745 : verify usage after root disk migration
put storage in maintenance mode and start ha vm and check usage ... === TestName: test_ha_with_storage_maintenance | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 842.294s
OK
* pr/713:
CLOUDSTACK-8745 : verify usage after root disk migration
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Test to create vpn customer gateway with hostname
While adding a vpn customer gateway for a site-to-site vpn connection, CS should also accept a host name apart from the gateway ip address. It should not be restricted to just an ip address.
* pr/1308:
Added a few validation steps after adding a vpn customer gateway with hostname. Changes are as per the review comments in PR #1308
Test to verify CS-45057 Bug-Id: CS-45057 Reviewed-by: Self
Signed-off-by: Will Stevens <williamstevens@gmail.com>
New test to validate starting vm after nic removal and attach
Please refer to bug CLOUDSTACK-9219 for more details.
Test Results:
==========
Test to verify vm start after NIC removal and reattach ... === TestName: test_30_remove_nic_reattach | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 277.478s
OK
* pr/1326:
New test to validate starting vm after nic removal and attach Bug-Id: CLOUDSTACK-9219
Signed-off-by: Will Stevens <williamstevens@gmail.com>
[CLOUDSTACK-9218] Test to verify restart network after master VR destroyed
Please refer to CLOUDSTACK-9218 for more details.
Test Results:
===========
Test restarting RvR network without cleanup after destroying master VR ... === TestName: test_restart_ntwk_MVR_destroyed | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 581.194s
OK
* pr/1323:
Added new test to verify restart network after destroying master VR Bug-Id: CLOUDSTACK-9218
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
CLOUDSTACK-9066: Update testpath to delete account after deleting VMs of that account
In the testpath_snapshot_hardning.py testpath, the account was getting cleared prior to the VMs of that account; while cleaning up the account, the VMs in that account also get deleted, so clearing the VMs afterwards raises a "No VM found" exception.
* pr/1078:
CLOUDSTACK-9066: Update testpath to delete account after deleting VMs of that account
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-9140: Testcase to verify if Dedicated cluster is used for virtual routers that belong to non dedicated account
* pr/1218:
CLOUDSTACK-9140: Testcase to verify if Dedicated cluster is used for virtual routers that belong to non dedicated account --Adding verification steps to make sure that vm and VR are being deployed on dedicated cluster
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-9026: Modifying testpath for adding missing parameter
Adding the service_offering creation that is currently missing in the testpath_storage_migration.py testpath.
* pr/1031:
CLOUDSTACK-9026: Modifying testpath for adding missing parameter
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-9091: Update testpath for parameter issues
In the testpath_volume_snapshot testpath, the create-volume-from-snapshot step passes the zoneid parameter to the base-class function, but that function does not take it as a separate parameter; it takes it from "services", so this updates it.
* pr/1130:
CLOUDSTACK-9091: Update testpath for parameter issues
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-9128: Testcase to verify the physical_size attribute of the snapshot_store_ref table
Verify if the physical_size attribute of the snapshot_store_ref table stores the actual physical size of the snapshot.
* pr/1199:
CLOUDSTACK-9128: Testcase to verify if snapshot_store_ref table stores actual size of back snapshot in secondary storage
Signed-off-by: sanjeev <sanjeev@apache.org>
Squashing two commits into a single commit
CLOUDSTACK-8717: Failed to start instance after restoring the running instance
-Modified code to add the tag to only one cluster-wide SP
-Added validateList function
-Added code to clear tags in the tearDown class
* pr/1411:
Squashing two commits in to single commit
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-8717: Failed to start instance after restoring the running instance
-Modified code to add the tag to only one cluster-wide SP
-Added validateList function
-Added code to clear tags in the tearDown class
CLOUDSTACK-8717: Failed to start instance after restoring the running instance
On a setup with two cluster-wide primary storages, verify restoring a running instance. (While restoring an instance, the root disk may get created on another primary storage.)
* pr/667:
CLOUDSTACK-8717: Failed to start instance after restoring the running instance -Modified code to add the tag to only one cluster-wide SP -Added validateList function -Added code to clear tags in the tearDown class
CLOUDSTACK-8717: Failed to start instance after restoring the running instance
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-8728: Testcase to verify if the VR's IP changes if it is destroyed and re-created in a basic zone.
* pr/684:
CLOUDSTACK-8728: Testcase to verify if the VR's IP changes if it is destroyed and re-created in a Basic Zone -Merging the testcase into the existing testcase by adding support for basic zone -Merging two commits into a single commit
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-9012: automation of coreos feature test path
https://issues.apache.org/jira/browse/CLOUDSTACK-9012
Automated a full scenario of coreos guest OS support.
It includes registering the coreos templates available at http://dl.openvm.eu/cloudstack/coreos/x86_64/
1. based on hypervisor types of zone
2. creating ssh key pair
3. creating a sample user data
4. creating a coreos virtual machine using this ssh keypair and userdata
5. verifying ssh access to the coreos machine using the keypair and the core username
6. verifying the userdata is applied on the virtual machine and the service requested in the sample data is actually running
7. Verifying userdata in router vm as well
* pr/1011:
added suggested changes to coreos automation
automation of cores feature test path
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-9121: Adding VmSnapshot validation in testpath_revert_snap.py
In testpath_revert_snap.py, there was no code to check if the VM snapshot was created or not, hence adding code for snapshot validation.
* pr/1190:
CLOUDSTACK-9121: Adding VmSnapshot validation in testpath_revert_snap.py
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-8895: Verify if storage on storage pool can be attached to VM
Test case to verify if a data volume uploaded to a storage pool (cluster-wide storage pool) is available for attachment to a virtual machine, and also check that after attachment the volume is in the correct storage pool.
* pr/869:
CLOUDSTACK-8895: Verify if storage can be selected when attaching uploaded data volume to VM
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-8731: checking usage event for delete volume
Checking the usage event for delete volume. I have incorporated all the suggested changes.
* pr/1048:
CLOUDSTACK-8731-checking usage event for delete volume
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9161: fix the quota marvin test
1. Create a dummy user, as the existing user may already have stale quota data
2. fix the tests to use the dummy user
3. a boundary condition was revealed and fixed for a new user where
quota service has never run and created bootstrap entries
* pr/1240:
CLOUDSTACK-9161: fix the quota marvin test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.7:
CLOUDSTACK-9154 - Sets the pub interface down when all guest nets are gone
CLOUDSTACK-9187 - Makes code ready for more something like ethXXXX, if we ever get that far
CLOUDSTACK-9188 - Reads network GC interval and wait from configDao
CLOUDSTACK-9187 - Fixes interface allocation to VRRP instances
CLOUDSTACK-9187 - Adds test to cover multiple nics and nic removal
CLOUDSTACK-9154 - Adds test to cover nics state after GC
CLOUDSTACK-9154 - Returns the guest iterface that is marked as added
Conflicts:
engine/orchestration/src/org/apache/cloudstack/engine/orchestration/NetworkOrchestrator.java
[4.7] Critical VPC VR issues fixed: CLOUDSTACK-9154, CLOUDSTACK-9187, and CLOUDSTACK-9188
This PR applies the same fixes as in PR #1259, but against branch 4.7.
Please refer to PR #1259 for the test results and all the comments already made there.
Issues fixed are:
* CLOUDSTACK-9154: rVPC doesn't recover from cleaning up of network garbage collector
* CLOUDSTACK-9187: rVPC routers in Master/Master due to concurrency problem when writing the keepalivd.conf
* CLOUDSTACK-9188: NetworkGarbageCollector is not using gc.interval and gc.wait from settings
Those changes have been covered by 2 new tests added to ```smoke/test_vpc_redundant.py```:
* test_04_rvpc_network_garbage_collector_nics
* test_05_rvpc_multi_tiers
The test ```test_04_rvpc_network_garbage_collector_nics``` depends on the global settings network.gc.interval and network.gc.wait. If one wants the test to run quicker, please change the settings (the default is 600 seconds for each) and restart the Management Server before running the tests; I would suggest setting them to 60 seconds (a small sketch of doing this via the API follows).
In addition, the NetworkGarbageCollector was redefining the settings mentioned above and not reading their values through ConfigDao. Because of that, the settings were not being applied properly and the test was waiting too long to check the VPC routers.
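For convenience, lowering those settings through the API could look roughly like this Marvin-style sketch (assuming `Configurations.update` from `marvin.lib.base` and an admin `apiclient`; a management server restart is still needed afterwards):
```python
# Hedged sketch: drop the network GC interval/wait to 60 seconds so
# test_04_rvpc_network_garbage_collector_nics completes faster.
from marvin.lib.base import Configurations

def speed_up_network_gc(apiclient, seconds="60"):
    Configurations.update(apiclient, name="network.gc.interval", value=seconds)
    Configurations.update(apiclient, name="network.gc.wait", value=seconds)
```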
* pr/1277:
CLOUDSTACK-9154 - Sets the pub interface down when all guest nets are gone
CLOUDSTACK-9187 - Makes code ready for more interfaces, something like ethXXXX, if we ever get that far
CLOUDSTACK-9188 - Reads network GC interval and wait from configDao
CLOUDSTACK-9187 - Fixes interface allocation to VRRP instances
CLOUDSTACK-9187 - Adds test to cover multiple nics and nic removal
CLOUDSTACK-9154 - Adds test to cover nics state after GC
CLOUDSTACK-9154 - Returns the guest interface that is marked as added
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9001: Modifying snapshot results validation
Currently, snapshot results validation is based on the length of the snapshot result, but if snapshot creation fails, the returned None object has no "len" attribute, hence modifying the validation.
* pr/994:
CLOUDSTACK-9001: Modifying snapshot results validation in testpath_uuid_event testpath
Signed-off-by: Remi Bergsma <github@remi.nl>
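A minimal sketch of the None-safe validation described in the CLOUDSTACK-9001 entry above; the helper name and messages are illustrative:
```python
# Hedged sketch: guard against a failed snapshot creation returning None,
# which has no len(), before checking the list length.
def validate_snapshot_result(testcase, snapshots):
    testcase.assertIsNotNone(snapshots, "Snapshot creation failed: got None")
    testcase.assertTrue(len(snapshots) > 0, "Snapshot list is empty")
```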
CLOUDSTACK-9005: Modifying tearDown function
Modifying the tearDown function to check that the data volume is in the detached state before deleting the volume.
* pr/1000:
CLOUDSTACK-9005: Modifying tearDown function
Signed-off-by: Remi Bergsma <github@remi.nl>
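A minimal sketch of the tearDown behaviour described in the CLOUDSTACK-9005 entry above, assuming Marvin's Volume and VirtualMachine helpers; the helper name is illustrative:
```python
# Hedged sketch: make sure the data volume is detached before deleting it.
from marvin.lib.base import Volume

def safe_delete_data_volume(apiclient, vm, volume):
    listed = Volume.list(apiclient, id=volume.id)
    if listed and getattr(listed[0], "virtualmachineid", None):
        vm.detach_volume(apiclient, volume)   # still attached, detach first
    volume.delete(apiclient)
```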
* 4.7:
CLOUDSTACK-9222 Prevent cloud.log.1 filling up the disk
Add integration test for restartVPC with cleanup, and Private Gateway enabled.
Nullpointer Exception in NicProfileHelperImpl
NicProfileHelperImpl NullPointerException when ipVO is null
When a VPC has a private gateway and one would like to restart the VPC with **cleanup**, it would fail.
This PR adds a NullPointer check and verifies it with an integration test.
```
test_01_vpc_privategw_acl (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_01_vpc_privategw_acl | Status : SUCCESS ===
ok
test_02_vpc_privategw_static_routes (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_02_vpc_privategw_static_routes | Status : SUCCESS ===
ok
test_03_vpc_privategw_restart_vpc_cleanup (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_03_vpc_privategw_restart_vpc_cleanup | Status : SUCCESS ===
ok
test_04_rvpc_privategw_static_routes (integration.smoke.test_privategw_acl.TestPrivateGwACL) ... === TestName: test_04_rvpc_privategw_static_routes | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 4 tests in 2945.055s
OK
```
* pr/1328:
Add integration test for restartVPC with cleanup, and Private Gateway enabled.
Nullpointer Exception in NicProfileHelperImpl
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.7:
Fix unable to setup more than one Site2Site VPN Connection
FIX S2S VPN rVPC: Check only redundant routers in state MASTER
PEP8 of integration/smoke/test_vpc_vpn
Add S2S VPN test for Redundant VPC
Make integration/smoke/test_vpc_vpn Hypervisor independent
FIX VPN: non-working ipsec commands
[UI] MADNESS
[DB] Add force_encap field to s2s_customer_gateway table
[ROUTER] Add forceencaps field to python router ipsec config method
[TEST] unittest needs rework
[MARVIN] Add forceencap field to VpnCustomerGateway class in marvin base
[CORE] Add Force UDP Encapsulation option to Site2Site VPN
CLOUDSTACK-9186: Root admin cannot see VPC created by Domain admin user
CLOUDSTACK-9192: UpdateVpnCustomerGateway is failing
CLOUDSTACK-6485 prevent ip assignment of private gw iface
CLOUDSTACK-9204 Do not error when staticroute is already gone
make both check lines consistent
CLOUDSTACK-9181 Prevent syntax error in checkrouter.sh
CLOUDSTACK-9202 Bump ssh timeout
- Refactors the set_backup, set_master and set_fault methods to have better names for the variables
- Increase the sleep on the test in order to wait for the routers to be ready. It's now 3 times the GC settings
1. Create a dummy user, as the existing user may already have stale quota data
2. Fix the tests to use the dummy user
3. A boundary condition was revealed and fixed for a new user for whom the quota service has never run and created bootstrap entries
Quota Marvin: If the quota plugin is not enabled skip tests
Quota Service: Enable quota plugin in zone setup configuration
Quota: Moving test_quota.py the test to test/integration/plugins
In most automated environments this test case will not run, as it requires a management server restart to enable the plugin. Due to this requirement, moving it to the plugins folder. This condition is already documented in the test case.
CLOUDSTACK-9123 - As a developer I want test_internal_lb.py to work with multiple hypervisors
This PR makes test_internal_lb.py work with other hypervisor types.
- Get hypervisor information from testClient.getHypervisorInfo()
- Adds XenServer, HyperV and VMware hypervisors
- Makes sure the OS type is "Other PV (64-bit)"
* pr/1204:
CLOUDSTACK-9123 - Make test compliant with multiple hypervisors
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.6:
CLOUDSTACK-9106 - Makes Enum name compliant with Java code conventions.
CLOUDSTACK-9106 - Adds a test to cover the changes in the applyVpnUsers() method
CLOUDSTACK-9106 - Makes the router commands call more consistent.
CLOUDSTACK-9106 - Enables private gateway tests on Redundant VPCs
CLOUDSTACK-9106 - Refactor the createPrivateNicProfileForGateway() method
CLOUDSTACK-9106 - Reduces the amount of iterations through the routers of a VPC
Add support for not (re)starting server after cloud-setup-management.
Closed PRs that will not be considered for merge:
This closes #1158
This closes #1097
Prevent live-lock in NSX API client
The NSX API client relies on a sequence of responses to identify the need to authenticate and to follow redirects. In order to avoid live-locks, the NSX API client has a counter that will abort the execution after 5 consecutive requests that do not produce a Success (200) response.
When an NSX controller enters a faulty state it can allow authentication requests but deny any other API call. In such cases the NSX API client will consider the denied request a reason to follow a redirect and will enter into a live-lock (because the actual redirection is not happening).
This PR changes the NSX API client to not reset its counter on a Success response from an authentication request. That is, the counter is only reset if another type of API call yields a Success response (a sketch of this counter logic follows this entry).
In addition, this PR also:
* changes the configuration of the license-maven-plugin to ignore files generated by pmd
* moves the NSX marvin test to a plugins folder
* refactors the NSX marvin test to reduce duplication
* adds an extra test case to the NSX marvin test that checks that NSX tunnels are properly created
* pr/1178:
Test NSX tunnel in guest network
Refactor test cases to reduce duplication
Use logger to print debug messages during test
Move NSX integration test to new plugins folder
Ignore pmd generated files during license check
Fix NSX rest client to not reset execution counter after a login
Add test for NSX plugin that simulates a live lock
Signed-off-by: Remi Bergsma <github@remi.nl>
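A Python sketch of the counter behaviour described in the NSX entry above (the real client is Java, so this only illustrates the rule: a successful login no longer resets the retry counter, only a successful non-login API call does). The send() and login() callables are illustrative stand-ins:
```python
# Hedged sketch of the live-lock guard; send() and login() stand in for the
# real NSX client calls and are illustrative.
MAX_ATTEMPTS = 5

def execute_with_guard(send, login, request):
    attempts = 0
    while attempts < MAX_ATTEMPTS:
        response = send(request)
        if response.status == 200:
            return response   # only a real API success ends the loop
        if response.status == 401:
            login()           # a faulty controller may accept logins while
                              # denying everything else, so a successful
                              # login does NOT reset the attempts counter
        attempts += 1
    raise RuntimeError("Aborting after %d unsuccessful attempts" % MAX_ATTEMPTS)
```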
The quota service, while allowing for scalability, will make sure that the cloud is not exploited by attacks, careless use and program errors. To address this problem, we propose to employ a quota-enforcement service that allows resource usage within certain bounds as defined by policies and available quotas for various entities. The quota service extends the functionality of the usage server to provide a measurement for the resources used by accounts and domains using a common unit referred to as cloud currency in this document. It can be configured to ensure that usage won't exceed the budget allocated to accounts/domains in cloud currency. It will let users know how much of the cloud resources they are using. It will help the cloud admins, if they want, to ensure that a user does not go beyond their allocated quota. Per usage cycle, if an account is found to be exceeding its quota then it is locked. Locking an account means that it will not be able to initiate a new resource allocation request, whether it is more storage or an additional IP. Needless to say, the quota service as well as any action on the account is configurable.
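A toy sketch of the enforcement idea described above; usage types, tariffs and locking are of course handled by the quota plugin itself, so every name and number here is illustrative only:
```python
# Hedged sketch: convert raw usage into "cloud currency" via per-type tariffs
# and lock the account once the allocated quota is exhausted.
def enforce_quota(account, usage_records, tariffs, quota_limit):
    spent = sum(record.raw_usage * tariffs.get(record.usage_type, 0)
                for record in usage_records)
    balance = quota_limit - spent
    if balance < 0:
        # Over budget for this usage cycle: no new resource allocation
        # requests (storage, IPs, VMs, ...) are accepted until unlocked.
        account.locked = True
    return balance
```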
Changes from Github code review:
- Added marvin test for quota plugin API
- removed unused commented code
- debug messages in debug enabled check
- checks for nulls, fixed access to member variables and feature
- changes based on PR comments
- unit tests for UsageTypes
- unit tests for all Cmd classes
- unit tests for all service and manager impls
- try-catch-finally or try-with-resource in dao impls for failsafe db switching
- remove dead code
- add missing quota calculation case (regression fixed)
- replace tabs with spaces in pom.xmls
- quota: though the default value for quota_calculated is 0, the usage server makes it null while entering usage entries. Flipping the condition so as to account for that.
- quotatypes: fix NPE in quota type
- quota framework test fixes
- made statement period configurable
- changed default email templates to reflect the fact that exhausted quota may not result in a locked account
- added quotaUpdateCmd that refreshes quota balances and sends alerts and statements
- report quotaSummary command returns quota balance, quota usage and state for all accounts
- made UI framework changes to allow for text area input in edit views
- process usage entries that have greater than 0 usage
- process quota entries only if tariff is non-zero
- if there are credit entries but no balance entry create a dummy balance entry
- remove any credit entries that are before the last balance entry
when displaying balance statement
- on a rerun the last balance is now getting added
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Quota+Service+-+FS
PR: https://github.com/apache/cloudstack/pull/768
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.6:
CLOUDSTACK-9075 - Uses the same vlan since it should have been already released
CLOUDSTACK-9075 - Adds VPC static routes test
CLOUDSTACK-9075 - Covers Private GW ACL with Redundant VPCs
CLOUDSTACK-9075 - Add method to get list of Physical Networks per zone
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
CLOUDSTACK-6276 Fixing affinity groups for projects
With some contributions from @resmo and @ustcweizhou.
This closes https://github.com/apache/cloudstack/pull/508
To test manually (need at least 2 hosts):
Create a project
Create an affinity group in that project
Deploy a vm with that affinity group
Deploy a second vm with that affinity group
They should be on different hosts
Ran old and new tests for affinity groups on the simulator
Test create affinity group as admin in project ... === TestName: test_01_admin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as domain admin for projects ... === TestName: test_02_doadmin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as user for projects ... === TestName: test_03_user_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) for projects ... === TestName: test_4_user_create_aff_grp_existing_name_for_project | Status : SUCCESS ===
ok
#Delete Affinity Group by id. ... === TestName: test_01_delete_aff_grp_by_id | Status : SUCCESS ===
ok
#Delete Affinity Group by id should fail for user not in project ... === TestName: test_02_delete_aff_grp_by_id_another_user | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_01_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups with more vms than hosts. ... === TestName: test_02_deploy_vm_anti_affinity_group_fail_on_not_enough_hosts | Status : SUCCESS ===
ok
List affinity group for a vm for projects ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm for projects ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id for projects ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name for projects ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id for projects ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name for projects ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group for projects ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 16 tests in 581.706s
OK
Deploy vm as Admin in Affinity Group belonging to regular user (should fail) ... === TestName: test_01_deploy_vm_another_user | Status : SUCCESS ===
ok
Create Affinity Group as admin for regular user ... === TestName: test_02_create_aff_grp_user | Status : SUCCESS ===
ok
List Affinity Groups as admin for all the users ... === TestName: test_03_list_aff_grp_all_users | Status : SUCCESS ===
ok
List Affinity Groups belonging to admin user ... === TestName: test_04_list_all_admin_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing account id and domain id ... === TestName: test_05_list_all_users_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing group id ... === TestName: test_06_list_all_users_aff_grp_by_id | Status : SUCCESS ===
ok
Delete Affinity Group belonging to regular user ... === TestName: test_07_delete_aff_grp_of_other_user | Status : SUCCESS ===
ok
Test create affinity group as admin ... === TestName: test_01_admin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as domain admin ... === TestName: test_02_doadmin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as user ... === TestName: test_03_user_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) ... === TestName: test_04_user_create_aff_grp_existing_name | Status : SUCCESS ===
ok
Test create affinity group with existing name but within different account ... === TestName: test_05_create_aff_grp_same_name_diff_acc | Status : SUCCESS ===
ok
Test create affinity group of non-existing type ... === TestName: test_06_create_aff_grp_nonexisting_type | Status : SUCCESS ===
ok
Delete Affinity Group by name ... === TestName: test_01_delete_aff_grp_by_name | Status : SUCCESS ===
ok
Delete Affinity Group as admin for an account ... === TestName: test_02_delete_aff_grp_for_acc | Status : SUCCESS ===
ok
Delete Affinity Group which has vms in it ... === TestName: test_03_delete_aff_grp_with_vms | Status : SUCCESS ===
ok
Delete Affinity Group with id which does not belong to this user ... === TestName: test_05_delete_aff_grp_id | Status : SUCCESS ===
ok
Delete Affinity Group by name which does not belong to this user ... === TestName: test_06_delete_aff_grp_name | Status : SUCCESS ===
ok
Delete Affinity Group by id. ... === TestName: test_08_delete_aff_grp_by_id | Status : SUCCESS ===
ok
Root admin should be able to delete affinity group of other users ... === TestName: test_09_delete_aff_grp_root_admin | Status : SUCCESS ===
ok
Deploy VM without affinity group ... === TestName: test_01_deploy_vm_without_aff_grp | Status : SUCCESS ===
ok
Deploy VM by aff grp name ... === TestName: test_02_deploy_vm_by_aff_grp_name | Status : SUCCESS ===
ok
Deploy VM by aff grp id ... === TestName: test_03_deploy_vm_by_aff_grp_id | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_04_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
Deploy vms by affinity group id ... === TestName: test_05_deploy_vm_by_id | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by name ... === TestName: test_06_deploy_vm_aff_grp_of_other_user_by_name | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by id ... === TestName: test_07_deploy_vm_aff_grp_of_other_user_by_id | Status : SUCCESS ===
ok
Deploy vm in multiple affinity groups ... === TestName: test_08_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy multiple vms in multiple affinity groups ... === TestName: test_09_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy VM by aff grp name and id ... === TestName: test_10_deploy_vm_by_aff_grp_name_and_id | Status : SUCCESS ===
ok
List affinity group for a vm ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupnames ... === TestName: test_02_update_aff_grp_by_names | Status : SUCCESS ===
ok
Update the list of affinityGroups for vm which is not associated ... === TestName: test_03_update_aff_grp_for_vm_with_no_aff_grp | Status : SUCCESS ===
ok
Update the list of Affinity Groups to empty list ... SKIP: Skip - Failing - work in progress
Update the list of Affinity Groups on running vm ... === TestName: test_05_update_aff_grp_on_running_vm | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 42 tests in 976.432s
OK (SKIP=1)
* pr/1134:
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9080: Resource limits for primary storage aren't respected during attach
The primary store resource limit check is not performed while attaching a volume to a VM. Added the same.
Also added a Marvin test case to verify the same.
Testing:
BEFORE
No error is shown in UI when trying to attach a volume even after reaching the resource limits.
```
mysql> select * from resource_limit where type="primary_storage";
+----+-----------+------------+-----------------+-------------+
| id | domain_id | account_id | type | max |
+----+-----------+------------+-----------------+-------------+
| 10 | NULL | 4 | primary_storage | 21474836480 |
+----+-----------+------------+-----------------+-------------+
1 row in set (0.00 sec)
mysql> select * from resource_count where account_id=4 and type='primary_storage';
+----+------------+-----------+-----------------+-------------+
| id | account_id | domain_id | type | count |
+----+------------+-----------+-----------------+-------------+
| 63 | 4 | NULL | primary_storage | 48318382080 |
+----+------------+-----------+-----------------+-------------+
1 row in set (0.00 sec)
```
AFTER
The following error message is shown in the UI and the volume is not attached.
The resource limits stay the same:
```
mysql> select * from resource_limit where type="primary_storage";
+----+-----------+------------+-----------------+-------------+
| id | domain_id | account_id | type | max |
+----+-----------+------------+-----------------+-------------+
| 10 | NULL | 4 | primary_storage | 21474836480 |
+----+-----------+------------+-----------------+-------------+
1 row in set (0.01 sec)
mysql> select * from resource_count where account_id=4 and type='primary_storage';
+----+------------+-----------+-----------------+-------------+
| id | account_id | domain_id | type | count |
+----+------------+-----------+-----------------+-------------+
| 63 | 4 | NULL | primary_storage | 48318382080 |
+----+------------+-----------+-----------------+-------------+
1 row in set (0.00 sec)
```
Marvin test: nosetests --with-marvin --marvin-config=setup/dev/advanced.cfg --zone=xen-zone0 --hypervisor=xenserver test/integration/component/test_ps_resource_limits_volume.py
before the change
```
# do ... === TestName: test_attach_volume_exceeding_primary_limits | Status : FAILED ===
AssertionError: Resource count 23 should match with the expected resource count 22\n
```
After the change
```
# do ... === TestName: test_attach_volume_exceeding_primary_limits | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 1178.354s
OK
```
* pr/1107:
CLOUDSTACK-9080: Resource limits for Primary arent respected during attach.
Signed-off-by: Remi Bergsma <github@remi.nl>
The primary store resource limit check is not performed while attaching a volume to a VM. Added the same.
Also added a Marvin test case to verify the same.
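A minimal sketch of the check described above; the real implementation lives in the Java resource-limit service, so this only illustrates the comparison that was missing on the attach path:
```python
# Hedged sketch: reject the attach if it would push the account's
# primary_storage count past its configured limit (-1 means unlimited).
def check_primary_storage_limit(current_count_bytes, volume_size_bytes, limit_bytes):
    if limit_bytes == -1:
        return
    if current_count_bytes + volume_size_bytes > limit_bytes:
        raise Exception("Maximum resources of type 'primary_storage' "
                        "for the account would be exceeded")
```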
* 4.6:
CLOUDSTACK-9015 - Delete public IP in order to get both IP and NAT rule removed.
CLOUDSTACK-9015 - Add test to cover the rVPC routers stop/start/reboot scenario
CLOUDSTACK-9015 - Make sure the Backup router can talk to the Master router after a stop/start/reboot
[4.6.1] CLOUDSTACK-9015 - Redundant VPC Virtual Router's state is BACKUP & BACKUP or MASTER & MASTER
This PR closes #1064.
All the details can be found in the original PR, which won't be merged because it was created against master. Once this PR is closed, the original one will also be closed.
* pr/1070:
CLOUDSTACK-9015 - Delete public IP in order to get both IP and NAT rule removed.
CLOUDSTACK-9015 - Add test to cover the rVPC routers stop/start/reboot scenario
CLOUDSTACK-9015 - Make sure the Backup router can talk to the Master router after a stop/start/reboot
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8925 - Default allow for egress rules is not being configured properly in VR iptables rules
This PR fixes the router default policy for egress. When the default is DENY, the router still allows outgoing connections.
The test component/test_routers_network_ops.py was improved to cover that case as well. The results were:
Test redundant router internals ... === TestName: test_01_isolate_network_FW_PF_default_routes_egress_true | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: test_02_isolate_network_FW_PF_default_routes_egress_false | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 4 tests in 3636.656s
OK
/tmp//MarvinLogs/test_routers_network_ops_QDL429/results.txt (END)
* pr/1023:
CLOUDSTACK-8925 - Implement the default egress DENY/ALLOW properly
CLOUDSTACK-8925 - Improve the default egress tests in order to cover newly entered rules
CLOUDSTACK-8925 - Add egress dataset to test_data.py
CLOUDSTACK-8925 - Drop the traffic when default egress is set to false
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-9007 - Write test to check that /etc/dhcphosts.txt doesn't contain duplicate IPs
This PR contains a test that covers the fix in PR #981.
The test does the following:
* Creates account, service offering, network offering, network
* Deploys two virtual machines
- Each machine with a pre-assigned IP
* Creates two FW and PF rules
* Checks that SSH into the VMs works
* Checks default routes from both VMs
* Checks that the /etc/dhcphosts.txt contains 1 entry per VM IP
* Destroys/Expunges 1 VM
* Creates a new VM with the same IP as the destroyed one
* Checks that the /etc/dhcphosts.txt contains 1 entry per VM IP
* pr/1002:
CLOUDSTACK-9007 - Add test check that /etc/dhcphosts.txt doesn't contain duplicate IPs
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8991 - IP address is not removed from VR even after disabling static NAT
This PR fixes the public IP removal from the virtual routers. It also improves the existing test_network.py.
* pr/989:
CLOUDSTACK-8991 - Process the IPs that have been removed
CLOUDSTACK-8991 - Remove public IP from interface in case add = false
CLOUDSTACK-8991 - Make sure the public IP is removed from the router before checking
Signed-off-by: Remi Bergsma <github@remi.nl>
smoke/test_internal_lb.py: Fix template not ready error
Add wait for template download
Refactored template section of services
Added some extra logging in the setup phase
* pr/971:
Add wait for template download Refactored template section of services
Signed-off-by: Remi Bergsma <github@remi.nl>
Fix error message in test_isolate_network_FW_PF_default_routes
While running test_isolate_network_FW_PF_default_routes it is expected that SSH'ing into a VM does not work immediately. However, when it fails (as expected) with the following error
====Trying SSH Connection: Host:192.168.23.12 Uer:root
Port:22 RetryCnt:1===
Traceback (most recent call last):
File "/home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/marvin/sshClient.py", line 121, in createConnection timeout=self.timeout)
File "/home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/paramiko/client.py", line 251, in connect retry_on_signal(lambda: sock.connect(addr))
File /home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/paramiko/util.py", line 270, in retry_on_signal return function()
File /home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/paramiko/client.py", line 251, in <lambda> retry_on_signal(lambda: sock.connect(addr))
File "/usr/lib64/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args)
error: [Errno 113] No route to host
it would try to print a message that generates an actual error:
```
DEBUG: ====Trying SSH Connection: Host:192.168.23.12 User:root
Port:22 RetryCnt:0===
test_isolate_network_FW_PF_default_routes
(integration.component.test_routers_network_ops.TestIsolatedNetworks):
CRITICAL: EXCEPTION: test_isolate_network_FW_PF_default_routes:
Traceback (most recent call last):,
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()',
File "/home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/test/integration/component/test_routers_network_ops.py", line 448, in test_isolate_network_FW_PF_default_routes self.fail("Failed to SSH into VM - %s" % (public_ip.ipaddress.ipaddress)),
"AttributeError: 'unicode' object has no attribute 'ipaddress'"
```
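A minimal sketch of the message fix implied by the traceback above: cope with public_ip being either a plain string or a Marvin IP-address object. The helper name is illustrative:
```python
# Hedged sketch: build the failure message without assuming public_ip has an
# .ipaddress attribute (it may already be a plain unicode string).
def ssh_failure_message(public_ip):
    if isinstance(public_ip, basestring):      # Python 2, as Marvin used then
        address = public_ip
    else:
        address = public_ip.ipaddress.ipaddress
    return "Failed to SSH into VM - %s" % address
```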
* pr/972:
Fix error message in test_isolate_network_FW_PF_default_routes
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8935 - Cannot remove [r]VPC networks due to RTNETLINK error
This PR fixes the "sequence item 0: expected string, NoneType found" error found in the CsDhcp.py file when attempting to remove a network from a VPC.
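A minimal sketch of the filtering idea from the CsDhcp.py fix; the helper name is illustrative:
```python
# Hedged sketch: drop None/empty DNS entries before joining them, which is
# what triggered "sequence item 0: expected string, NoneType found".
def join_dns_servers(dns_servers):
    valid = [entry for entry in dns_servers if entry]
    return ",".join(valid)
```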
* pr/967:
CLOUDSTACK-8935 - Filter the DNS list because it might contain 1 None entry which breaks the code.
CLOUDSTACK-8935 - Clean up network resources in the right order
CLOUDSTACK-8935 - Do not retry 60 times when we expect the SSH to fail
CLOUDSTACK-8935 - Check if the key is available in the dictionary
CLOUDSTACK-8935 - Add a check to avoid exception related to None value
Signed-off-by: Remi Bergsma <github@remi.nl>
While running test_isolate_network_FW_PF_default_routes it is expected that SSH'ing into a VM does not work immediately. However, when it fails (as expected) with the following error
====Trying SSH Connection: Host:192.168.23.12 Uer:root
Port:22 RetryCnt:1===
Traceback (most recent call last):
File "/home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/marvin/sshClient.py", line 121, in createConnection timeout=self.timeout)
File "/home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/paramiko/client.py", line 251, in connect retry_on_signal(lambda: sock.connect(addr))
File /home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/paramiko/util.py", line 270, in retry_on_signal return function()
File /home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/venv/lib/python2.7/site-packages/paramiko/client.py", line 251, in <lambda> retry_on_signal(lambda: sock.connect(addr))
File "/usr/lib64/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args)
error: [Errno 113] No route to host
it would try to print a message that generates an actual error:
DEBUG: ====Trying SSH Connection: Host:192.168.23.12 User:root
Port:22 RetryCnt:0===
test_isolate_network_FW_PF_default_routes
(integration.component.test_routers_network_ops.TestIsolatedNetworks):
CRITICAL: EXCEPTION: test_isolate_network_FW_PF_default_routes:
Traceback (most recent call last):,
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()',
File "/home/jenkins/workspace/mccloud/mct-run-marvin-tests@2/test/integration/component/test_routers_network_ops.py", line 448, in test_isolate_network_FW_PF_default_routes self.fail("Failed to SSH into VM - %s" % (public_ip.ipaddress.ipaddress)),
"AttributeError: 'unicode' object has no attribute 'ipaddress'"
CLOUDSTACK-8933 SSVm and CPVM do not survive a reboot from API
This closes PR #930 as well.
I rebased @bvbharat's PR with the latest master and tested the SSVM/CPVM and the routers: rVPC, VPC, VR and RVR.
* pr/959:
CLOUDSTACK-8933 - Improves the test internals of the SSVM/CPVM
CLOUDSTACK-8933 - Replace infinite loop by a for loop
CLOUDSTACK-8933 SSVm and CPVM do not survive a reboot from API
This closes #930
Signed-off-by: Remi Bergsma <github@remi.nl>
Add optional fields: iprange and fordisplay to Marvin base.py class method Vpn.create
Add optional field: passive to Marvin base.py class method Vpn.createVpnConnection
- Do not use the API call because it will read what is in the database, which might not have been updated yet
* Check the status in the router directly instead
- Remove all the sleeps
- It was working before because the Routers were restarting about 10 times for each operation
e.g. adding a VM to a network or acquiring a new IP.
- Adding stat_rules of internal LB to iptables
We needed one extra rule in the INPUT chain
- With keepalived fixed they should not be needed anymore. So first reducing them drastically
- I am now making a backup of the template file, writing to the template file and comparing it with the existing configuration
- The template file is recovered after the process
- I also check if the process is running
- I fixed a bug in the compare method
- I am now updating the configuration variable once the file content is flushed to disk
- Due to an issue with VPC routers (CLOUDSTACK-8935) we are not able to destroy networks before destroying the routers
- Added a forcestop/destroy routers inside the tearDown to make sure it passes. The issue will be addressed in a separate PR
- Make sure the routers list is cleaned after destroy_routers() is called
- Populate routers list after the router is recreated
- Add egress tests in order to check if VMs can reach the outside world
- Increase the wait when testing redundant routers: they fight to become master
- Make sure the clean up is done properly
- That's not the place to fix the default routes for redundant VPC,
- Adding tests to cover PF and FW in isolated networks
* Will still add some tests for egress as well
- The cidr was replaced by the single IP, which broke the feature.
- Wait during the transition from master to backup, otherwise the test fails due to wrong state
CLOUDSTACK-8726: Automation for quickly attaching multiple data disks to a new VM
Attach multiple Volumes simultaneously to a Running VM ... === TestName: test_attach_multiple_volumes | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 196.931s
OK
* pr/683:
changed the testcase skip code into setup method
Imparting changes mentioned by nitt10prashant
Automation for multiple disk attachments to instance
Signed-off-by: sanjeev <sanjeev@apache.org>
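A hedged Marvin-style sketch of what the CLOUDSTACK-8726 automation above does: create several data volumes and attach them to one running VM. The services dictionary and helper name are illustrative:
```python
# Hedged sketch: attach multiple data disks to a running VM.
from marvin.lib.base import Volume

def attach_data_disks(apiclient, vm, diskofferingid, services, count=4):
    volumes = []
    for _ in range(count):
        volume = Volume.create(apiclient, services,
                               zoneid=vm.zoneid,
                               account=vm.account,
                               domainid=vm.domainid,
                               diskofferingid=diskofferingid)
        vm.attach_volume(apiclient, volume)   # attach while the VM is running
        volumes.append(volume)
    return volumes
```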
CLOUDSTACK-8756: Incorrect guest OS mapping in CCP 4.2.1-6 for CentOS 5.9
Check bug CLOUDSTACK-8756 for more details.
* pr/728:
CLOUDSTACK-8756:Incorrect guest os mapping in CCP 4.2.1-6 for CentOS 5.9
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
CLOUDSTACK-8688 - default policies for INPUT and FORWARD should be set to DROP instead of ACCEPT
- In order to be able to access the routers via the link-local interface, we have to add rules with NEW and ESTABLISHED state
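A hedged sketch of the rules described above; the real change lives in the virtual router's configuration scripts, so the interface name and the use of subprocess here are illustrative only:
```python
# Hedged sketch: default-DROP INPUT/FORWARD policies plus a stateful rule so
# the link-local interface on the router stays reachable.
import subprocess

def harden_router_filter(link_local_dev="eth0"):
    rules = [
        ["iptables", "-P", "INPUT", "DROP"],
        ["iptables", "-P", "FORWARD", "DROP"],
        ["iptables", "-A", "INPUT", "-i", link_local_dev,
         "-m", "state", "--state", "NEW,ESTABLISHED", "-j", "ACCEPT"],
    ]
    for rule in rules:
        subprocess.check_call(rule)
```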
* pr/765:
CLOUDSTACK-8688 - Adding Marvin tests in order to cover the fixes applied
CLOUDSTACK-8688 - default policies for INPUT and FORWARD should be set to DROP instead of ACCEPT
Signed-off-by: wilderrodrigues <wrodrigues@schubergphilis.com>
- Changed and refactored the utils.get_process_status() function
- Adding 2 tests: test_01_single_VPC_iptables_policies and test_02_routervm_iptables_policies
CLOUDSTACK-8759 - Destroying VPC router results in a new unusable VPC router
How to reproduce the problem:
1. Stop/Destroy the VPC router
2. Add a virtual machine to one of the VPC tiers - it will trigger a VPC router creation
3. Router is created, but the NICs are not configured
How to recover without this fix:
1. Stop/destroy the VPC router and restart the VPC
Side effects: private gateways could be misconfigured.
Root cause:
In the VpcNetworkHelperImpl.configureDefaultNics() method, the guest network NIC was added to the map prior to the control and public NICs. The order in the map should not matter; however, in the LibvirtComputingResource.createVifs() method there is logic that relies on the device index - the array index - in order to create the control NIC. I advise a refactor of the data model in order to be able to identify the NIC type instead of relying on the array index.
An integration test was added to cover the fix:
* test_vpc_router_nics.py
Environment:
Management Server running on CentOS 7.1
KVM host running on CentOS 7.1
CloudStack Agent/Common 4.6.0-SNAPSHOT
Executing the test:
```
nosetests --with-marvin --marvin-config=/data/shared/marvin/mct-zone2-kvm2-ISOLATED.cfg -s -a tags=advanced,required_hardware=true component/test_vpc_router_nics.py
```
Remark: during the SSH there might be stack traces on the console due to the connection retry. It takes some time to get the PF rules in place and reach the VMs. So, just let the test run until the end.
```
Test results:
Create a vpc with two networks with two vms in each network ... === TestName: test_01_VPC_nics_after_destroy | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 1 test in 774.020s
OK
/tmp//MarvinLogs/test_vpc_router_nics_VH6E9S/results.txt (END)
```
* pr/773:
CLOUDSTACK-8759 - Fix guest nic allocation
CLOUDSTACK-8759 - Adding a marvin test in order to cover the fix
CLOUDSTACK-8759 - The guest nic has to be added after the control nic
Signed-off-by: Remi Bergsma <github@remi.nl>
- The test will create a VPC, add 2 tiers, 2 VMs, ACL, PF and SSH into the VM
- Then it will stop the router, destroy the router, add another VM to 1 tier and check that we can reach all the VMs
Made the following fixes in the simulator:
- Support for ScaleVmCommand/NetworkRulesVmSecondaryIpCommand in resource layer
- Added support for scaling up a running VM in simulator
- Fixed some method names not following convention
In order to test PR #725 using simulator some of these changes are needed.
Based on the way the HV check is present in the scale VM API, I had to explicitly put a simulator-related check in place to allow support. The ideal way would be to remove all these HV-specific checks from the code and make them configuration (by putting them in the hypervisor_capabilities table in the DB). But that would be a bigger effort, outside the scope of this PR.
Signed-off-by: Koushik Das <koushik@apache.org>
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone-wide primary storage
On VMware with a zone-wide primary storage and more than two clusters, verify successful creation of snapshots multiple times.
* pr/665:
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone wide primary Storage
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8723: Verify API call "listUsageRecords" returns usage of new volume created after restore VM
After restoring a running VM, the current ROOT disk gets destroyed and a new ROOT disk gets created.
This test case checks whether the volume usage of this newly created volume is listed by the listUsageRecords API.
* pr/675:
CLOUDSTACK-8723: Verify API call "listUsageRecords" returns usage of new volume created after restore VM
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8757: FTP modules are not loaded in VR
Check bug CLOUDSTACK-8757 for more details.
* pr/729:
CLOUDSTACK-8757:FTP modules are not loaded in VR
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone wide primary Storage
- Removing redundant code
- Added validate list function for list snapshot operation
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone wide primary Storage
Removed duplicate test data related to VM properties. Modified tests dependent on it.
Removed duplicate service offerings from test data and modified tests dependent on it.
Bug-Id: CLOUDSTACK-8617
This closes #644
CLOUDSTACK-8689: Verify effect of changing value of XenServer max guest limits on previously added hosts - Adding check for empty list and incrementing maxguestlimit accordingly
CLOUDSTACK-8689: Verify effect of changing value of XenServer max guest limits on previously added hosts - As the test case changes the maxguestlimits global setting it will also affect other test cases, hence moving it to the component/maint folder
This closes #638
As part of volume sync, which runs during SSVM start-up, the volume_store_ref entry was getting deleted. Volume GC relies on this entry to move the volume to the Destroyed state.
Since the entry was getting deleted, the GC thread never moved the volume from UploadError/UploadAbandoned to Destroyed. The fix is to not remove the volume_store_ref entry as part of volume sync and let the GC thread handle the clean-up.
This closes #611
- We use no-preempt mode with state set as EQUAL on both nodes, no need to have priorities set up
- Do not add IPs as comments to the configuration. If a new guest interface is added, the file will change anyway.
- This was used in the past when keepalived would restart for each new interface added
- Removed the long sleep from the tests: we now sleep 5 seconds per PF rule added
CLOUDSTACK-8616 - Fix keepalived.ts/2 files comparison
- Add call to set_fault() in case of router transits to that state
- Removing commented out code
CLOUDSTACK-8616 - Fixing check_heartbeat.sh.templ
CLOUDSTACK-8616 - Call set_fault from the check_heartbeat.sh script
Signed-off-by: wilderrodrigues <wrodrigues@schubergphilis.com>
Listing the hosts for migration for an instance with a tag
Added marvin test for this issue.
Steps
1. Create a Compute service offering with the tag.
2. Create a Guest VM with the compute service offering
created above.
3. Find hosts to migrate the VM created above
Validations
1. Ensure that the offering is created with the tag. The listServiceOfferings API should show the tag.
2. The findHostsForMigration cmd should list both suitable and not-suitable hosts (see the sketch below).
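A hedged Marvin-style sketch of the second validation: ask findHostsForMigration for the tagged VM and split the result into suitable and not-suitable hosts. The response field name used below is an assumption:
```python
# Hedged sketch: list hosts for migration of a VM deployed with a tagged
# compute offering; "suitableformigration" is assumed to be the flag returned.
from marvin.cloudstackAPI import findHostsForMigration

def hosts_for_migration(apiclient, vm_id):
    cmd = findHostsForMigration.findHostsForMigrationCmd()
    cmd.virtualmachineid = vm_id
    hosts = apiclient.findHostsForMigration(cmd) or []
    suitable = [h for h in hosts if getattr(h, "suitableformigration", False)]
    unsuitable = [h for h in hosts if not getattr(h, "suitableformigration", False)]
    return suitable, unsuitable
```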