The maven command for finding the current version might need to download packages, and without batch mode the download progress output would end up in the variable. Added '-B' to enable batch mode.
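For illustration, a minimal sketch of the kind of invocation this refers to (assuming the version is read via the maven help plugin; the exact command in the script may differ):
    # -B suppresses interactive/progress output that would otherwise pollute the captured value
    version=$(mvn -B help:evaluate -Dexpression=project.version 2>/dev/null | grep -Ev '^\[|Download')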
This fixes a regression introduced in #2799, by exporting $TYPE
before the `patch` is called to patch/extract archives for ssvm/cpvm.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The VMware router will be rebooted based on #2794; per the current config
the VRs on reboot will go through fsck checks, slowing down the deployment
process by a few seconds. This change ensures that fsck checks are done only
on every 3rd boot of the VR. The `4` is used because the 1st boot is done
during the build of the systemvmtemplate appliance.
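For reference, a sketch of the mechanism, assuming it is implemented via the ext filesystem's maximum-mount-count (device path is illustrative; the real value is set in the template build scripts):
    # run fsck only after this many mounts; the boot during the template build consumes the first count
    tune2fs -c 4 /dev/sda1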
Add upgrade path for a new 4.11.2 systemvmtemplate.
Other changes:
- Add support for XS 7.5. Fixes #2834.
- Reboot VR only if mgmt gw is not pingable on vmware.
- Enable passive FTP by enabling nf_conntrack_helper. This is a change in behaviour since Linux 4.7.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Cleanup and code-formatting of POM files
* Remove obsolete mycila license-maven-plugin
* Remove obsolete console-proxy/plugin project
* Move console-proxy-rdbconsole under console-proxy parent
* Use correct parent path for rdpconsole
* Order items alphabetically in setnextversion.sh
* Unify license header in POMs
* Alphabetic order of modules definition
* Extract all defined versions into parent pom
* Remove obsolete files: version-info.in, configure-info.in
* Remove redundant defaultGoal
* Remove useless checkstyle plugin from checkstyle project
* Order items alphabetically in pom.xml
* Add additional SPACEs to fix debian build
* Don't execute checkstyle on parent projects
* Use UTF-8 encoding in building checkstyle project
* Extract plugin versions into properties
* Execute PMD plugin on all the projects with -Penablefindbugs
* Upgrade maven plugins to latest version
* Make sure to always look for apache parent pom from repository
* Fix incorrect version grep in debian packaging
* Fix rebase conflicts
* Fix rebase conflicts
* Remove PMD for now to be fixed on another PR
This is a new feature for CS that allows Admin users improved
troubleshooting of network issues in CloudStack hosted networks.
Description: For troubleshooting purposes, CloudStack administrators may wish to execute network utility commands remotely on system VMs, or request system VMs to ping/traceroute/arping to specific addresses over specific interfaces. An API command to provide such functionalities is being developed without altering any existing APIs. The targeted system VMs for this feature are the Virtual Router (VR), Secondary Storage VM (SSVM) and the Console Proxy VM (CPVM).
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+Remote+Diagnostics+API
ML discussion:
https://markmail.org/message/xt7owmb2c6iw7tva
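A hypothetical usage sketch via cloudmonkey (the command and parameter names here are assumptions based on the FS, since the API is still being developed):
    cloudmonkey run diagnostics targetid=<systemvm-uuid> type=ping ipaddress=8.8.8.8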
Fixes the version in pom etc. to be consistent with versioning pattern as X.Y.Z.0-SNAPSHOT after a minor release.
Signed-off-by: Khosrow Moossavi <khos2ow@gmail.com>
* packaging: use libuuid x86_64 package for cloudstack-common
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 64-bit links are packaged
* post scan filter to exclude libuuid.so.1
* Revert "packaging: use libuuid x86_64 package for cloudstack-common"
This reverts commit b3fb8957fe.
* post scan filter to exclude libuuid.so.1 (centos63)
* revert removal of 32 bit support for vhd-util libs
This fixes the incorrect xenstore-read binary path, which caused the systemvm
to fail to be patched/started correctly on XenServer. The other fix is to keep
the xen-domU flag that may be returned by virt-what. This won't change the
cmdline being consumed, since the mgmt server side (java) code sets the boot
args both in xenstore and as PV args. The systemvm's /boot is ext2, which can
be booted by PyGrub on both old and recent XenServer versions.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
On patching, the global cacerts keystore is imported into the 'cloud' service
specific local keystore. This fixes #2541.
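Illustrative only - the kind of import this performs (paths and passwords are placeholders, not the ones used by the patching scripts):
    keytool -importkeystore -srckeystore $JAVA_HOME/jre/lib/security/cacerts -srcstorepass changeit \
            -destkeystore /usr/local/cloud/systemvm/certs/cloud.jks -deststorepass <passphrase> -noprompt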
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* CLOUDSTACK-10289: Config Drive Metadata: Use VM UUID instead of VM id
* CLOUDSTACK-10288: Config Drive Userdata: support for binary userdata
* CLOUDSTACK-10358: SSH keys are missing on Config Drive disk in some cases
* dependencies update
* Add extra blank line required by ...!?
* fix W605 invalid escape sequence and more blank lines
* print all installed python packages versions
This reduces systemvmtemplate size by 600MB and installs nftables,
updates iptables. This also fixes a failing smoke test.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-10341: VR minor fixes to systemvmtemplate (#2468)
CLOUDSTACK-10340: Add setter to hypervisorType in VMInstanceVO (#2504)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Fixes rsyslog: fix config error in rsyslog.conf
Feb 26 08:09:54 r-413-VM liblogging-stdlog[19754]: action '*' treated as ':omusrmsg:*' - please use ':omusrmsg:*' syntax instead, '*' will not be supported in the future [v8.24.0 try http://www.rsyslog.com/e/2184 ]
Feb 26 08:09:54 r-413-VM liblogging-stdlog[19754]: error during parsing file /etc/rsyslog.conf, on or before line 95: warnings occured in file '/etc/rsyslog.conf' around line 95 [v8.24.0 try http://www.rsyslog.com/e/2207 ]
- Run apache2 only after cloud-postinit
- Increase /run size for VR with 256M RAM
root@r-395-VM:~# systemctl daemon-reload
Failed to reload daemon: Refusing to reload, not enough space available on /run/systemd. Currently, 15.8M are free, but a safety buffer of 16.0M is enforced.
tmpfs 23M 6.5M 16M 29% /run
- new flag `-T, --use-timestamp` to use `timestamp` when POM version contains SNAPSHOT
- in the final artifacts (jar) name
- in the final package (rpm, deb) name
- in `/etc/cloudstack-release` file of SystemVMs
- in the Management Server > About dialog
- if there's a "branding" string in the POM version (e.g. `x.y.z.a-NAME[-SNAPSHOT]`),
the branding name will be used in the final generated package name, such as the following:
- `cloudstack-management-x.y.z.a-NAME.NUMBER.el7.centos.x86_64`
- `cloudstack-management_x.y.z.a-NAME-NUMBER~xenial_all.deb`
- branding string can be overridden with the newly added `-b, --brand` flag (see the usage sketch after this list)
- handle the new format version for VR version
- fix long opts (they were broken)
- tolerate and show a warning message for unrecognized flags
- usage help reformat
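A hypothetical invocation showing the new flags (script path and distribution flag are assumptions; check the script's usage help for the real set):
    ./packaging/package.sh --distribution centos7 --use-timestamp --brand NAME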
* Deprecate Version class in favor of CloudStackVersion
* Refactored nuage tests
Added simulator support for ConfigDrive
Allow all nuage tests to run against simulator
Refactored nuage tests to remove code duplication
* Move test data from test_data.py to nuage_test_data.py
Nuage test data is now contained in nuage_test_data.py instead of
test_data.py
Removed all nuage test data from test_data.py
* CLOUD-1252 fixed cleanup of vpc tier network
* Import libVSD into the codebase
* CLOUDSTACK-1253: Volumes are not expunged in simulator
* Fixed some merge issues in test_nuage_vsp_mngd_subnets test
* Implement GetVolumeStatsCommand in Simulator
* Add vspk as marvin nuagevsp dependency, after removing libVSD dependency
* correct libVSD files for license purposes
pep8 pyflakes compliant
This deprecates and removes TLS 1.0 and 1.1 from the preferred list of
protocols and keeps only TLSv1.2.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* 4.11:
CLOUDSTACK-10306: Upgrade to VMware 6.5 vim jar dependency (#2467)
CLOUDSTACK-10298: fix for recreation of an earlier deleted Nuage managed network (#2460)
Remove the maven standard module (which only a few projects were using) and get rid of the maven customization for the project structure.
- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml
Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
Extending Config Drive support
* Added support for VMware
* Build configdrive.iso on ssvm
* Added support for VPC and Isolated Networks
* Moved implementation to new Service Provider
* UI fix: add support for urlencoded userdata
* Add support for building systemvm behind a proxy
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
CloudStack volumes and templates are a single virtual disk in the case of the XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes: attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks
This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created from the other VMDK disks in the OVA file and attached to the instance.
Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This enables security updates in the preseed file, purges old
kernels, and increases the maximum /boot partition size. Build failures
were found due to insufficient space in /boot. Tested with packer+qemu
on Ubuntu 17.10.
Also silently remove xmas cloudstack cloudmonkey logo without hurting
anyone's sentiments (no monkeys were harmed in this commit ;).
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Perform a dist-upgrade to upgrade all packages especially the linux
kernel before building the systemvmtemplate.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This PR introduces several features and fixes some bugs:
- account tags feature
- fixed resource tags bugs which happened during tag search (wrong entries were found because of mysql string-to-number translation - see #905); this PR does more and also fixes resource access (a vulnerability when listing resource tags)
- some marvin improvements (speed, sanity)
Improved resource tags code:
1. Enhanced listTags security
2. Added support for account tags (account tags are required to support tags common for all users of an account)
3. Improved the tag management code (refactoring and cleanup)
Marvin:
1. Fixed Marvin wait timeout between async polls to decrease the polling interval and improve CI speed.
2. Fixed /tmp/ to /tmp in zone configuration files.
3. Fixed + to os.path.join in log class.
4. Fixed + to os.path.join in deployDataCenter class.
5. Fixed typos in tag tests.
6. Modified Tags base class delete method.
Deploy Datacenter script:
1. Improved deployDatacenter. Added a logdir option to specify where the script places the results of evaluation.
ConfigurationManagerImpl:
1. Added logging to ConfigurationManagerImpl to log when a vlan is not found. Added test stubs for tags. Found an accidental exception during simulator runs after CI.
tests_tags.py:
1. Fixed stale undeleted tags.
2. Changed region:India to scope:TestName.
This feature allows using templates and ISOs while avoiding secondary storage as an intermediate cache on KVM. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.
Template and ISO registration:
- When hypervisor is KVM, a checkbox is displayed with 'Direct Download' label.
- API methods registerTemplate and registerISO are both extended with the new parameter directdownload (see the usage sketch after this list).
- On template or ISO registration, no download job is sent to the SSVM agent; CloudStack only persists an entry in template_store_ref indicating that the template or ISO has been marked as 'Direct Download' (bypassing secondary storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check whether the template/ISO location can be reached. Metalinks are also supported by this feature. In case of a metalink, it is fetched and the URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH
- After template or ISO is registered, it would be displayed in the UI
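A cloudmonkey sketch of registering a direct-download template on KVM (all values are placeholders; checksum uses the {ALGORITHM}HASH form noted above):
    cloudmonkey register template name=centos7-dd displaytext=centos7-dd format=QCOW2 \
        hypervisor=KVM zoneid=<zone-uuid> ostypeid=<ostype-uuid> \
        url=http://server/centos7.qcow2 checksum="{SHA-256}<hash>" directdownload=true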
Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack delegates template downloading to the destination storage pool via the destination host, using a new pluggable download manager.
The download manager handles template downloading depending on the URL protocol. In case of HTTP, request headers can be set by the user via vm_template_details. Those details should be persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE
In case of HTTPS, a new API method uploadTemplateDirectDownloadCertificate is added to allow the user to import a client certificate into all KVM hosts' keystores before deployment.
After the template or ISO is downloaded to primary storage, the usual entry is persisted in template_spool_ref indicating the mapping between the template/ISO and the storage pool.
This feature allows admins to dedicate a range of public IP addresses to the SSVM and CPVM, such that they can be subject to specific external firewall rules. The option to dedicate a public IP range to the system VMs (SSVM & CPVM) is added to the createVlanIpRange API method and the UI.
Solution:
Global setting 'system.vm.public.ip.reservation.mode.strictness' is added to determine if the use of the system VM reservation is strict (when true) or preferred (false), false by default.
When a range has been dedicated to System VMs, CloudStack should apply IPs from that range to
the public interfaces of the CPVM and the SSVM depending on global setting's value:
If the global setting is set to false: then CloudStack will use any unused and unreserved public IP
addresses for system VMs only when the pool of reserved IPs has been exhausted
If the global setting is set to true: then CloudStack will fail to deploy the system VM when the pool
of reserved IPs has been exhausted, citing the lack of available IPs.
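For example, strict reservation can be turned on via the existing updateConfiguration API (cloudmonkey syntax shown; the setting name is as listed above):
    cloudmonkey update configuration name=system.vm.public.ip.reservation.mode.strictness value=true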
UI Changes
Under Infrastructure -> Zone -> Physical Network -> Public -> IP Ranges, the 'Account' button label is changed to 'Set reservation'.
When that button is clicked, the dialog displayed is also reworked, including a new checkbox 'System VMs' which indicates whether the range should be dedicated to the CPVM and SSVM, and a note indicating its usage.
When clicking the button for any created range, the UI dialog displayed indicates whether the IP range is dedicated to system VMs or not.
This includes test related fixes and code review fixes based on
reviews from @rafaelweingartner, @marcaurele, @wido and @DaanHoogland.
This also includes VMware disk-resize limitation bug fix based on comments
from @sateesh-chodapuneedi and @priyankparihar.
This also includes the final changes to systemvmtemplate and fixes to
code based on issues found via test failures.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- This migrates the current systemvmtemplate build system from
veewee/virtualbox to packer and qemu based.
- This also introduces and updates a CentOS7 built-in template.
- Remove old appliance build scripts and files.
- Adds iftop package (CLOUDSTACK-9785)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes test failures around VMware with the new systemvmtemplate.
In addition:
- Does not skip rVR related test cases for VMware
- Removes rc.local
- Processes unprocessed cmd_line.json
- Fixed NPEs around VMware tests/code
- On VMware, use udevadm to reconfigure nic/mac address than rebooting
- Fix proper acpi shutdown script for faster systemvm shutdowns
- Give at least 256MB of swap for VRs to avoid OOM on VMware
- Fixes smoke tests for environment related failures
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Several systemvmtemplate optimizations
- Uses new macchinina template for running smoke tests
- Switch to latest Debian 9.3.0 release for systemvmtemplate
- Introduce a new `get_test_template` that uses a tiny test template
such as macchinina, as defined in test_data.py
- rVR related fixes and improvements
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactors and simplifies systemvm codebase file structures keeping
the same resultant systemvm.iso packaging
- Password server systemd script and new postinit script that runs
before sshd starts
- Fixes to keepalived and conntrackd config to make rVRs work again
- New /etc/issue featuring ascii based cloudmonkey logo/message and
systemvmtemplate version
- SystemVM python codebase linted and tested. Added pylint/pep to
Travis.
- iptables re-application fixes for non-VR systemvms.
- SystemVM template build fixes.
- Default secondary storage vm service offering boosted to have 2vCPUs
and RAM equal to console proxy.
- Fixes to several marvin based smoke tests, especially rVR related
tests. rVR tests to consider 3*advert_int+skew timeout before status
is checked.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Refactor cloud-early-config and make appliance specific scripts
- Make patching work without requiring restart of appliance and remove
postinit script
- Migrate to systemd, speedup booting/loading
- Takes about 5-15s to boot on KVM, and 10-30seconds for VMware and XenServer
- Appliance boots and works on KVM, VMware, XenServer and HyperV
- Update Debian9 ISO url with sha512 checksum
- Speedup console proxy service launch
- Enable additional kernel modules
- Remove unknown ssh key
- Update vhd-util URL as previous URL was down
- Enable sshd by default
- Use hostnamectl to add hostname
- Disable services by default
- Use existing log4j xml, patching not necessary by cloud-early-config
- Several minor fixes and file refactorings, removed dead code/files
- Removes inserv
- Fix dnsmasq config syntax
- Fix haproxy config syntax
- Fix smoke tests and improve performance
- Fix apache pid file path in cloud.monitoring per the new template
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Load the nf_conntrack_ipv6 module for IPv6 connection tracking on SSVM
- Move systemd services to /etc and enable services after they have been
installed
- Disable most services by default and enable in cloud-early-config
- Start services after enabling them using systemd
- In addition remove /etc/init.d/cloud as this is no longer needed and
done by systemd
- Accept DOS/MBR as file format for ISO images as well
Under Debian 7 the 'file' command would return:
debian-9.1.0-amd64-netinst.iso: ISO 9660 CD-ROM filesystem data UDF filesystem data
Under Debian 9 however it will return
debian-9.1.0-amd64-netinst.iso: DOS/MBR boot sector
This would make the HTTPTemplateDownloader in the Secondary Storage VM refuse the ISO as
a valid template because it's not a correct format.
Changes this behavior so that it accepts both.
This allows us to use Debian 9 as a System VM template.
Not sure though if enabling them is enough for systemd to still start them
on first boot
Signed-off-by: Wido den Hollander <wido@widodh.nl>
SystemVM changes to work on Debian 9
- Migrate away from chkconfig to systemctl
- Remove xenstore-utils override deb pkg
- Fix runlevel in sysv scripts for systemd
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows CloudStack administrators to create layer 2 networks in CloudStack. As these networks are purely layer 2, they don't require IP addresses or a Virtual Router; only a VLAN is necessary (provided by the administrator or assigned by CloudStack). Also, network services such as DNS and DHCP should be handled externally, as they are not provided by L2 networks.
As a consequence, a new Guest Network type is created within CloudStack: L2
Description:
Network offerings and networks support new guest type: L2.
L2 network offering creation allows the administrator to select Specify VLAN or let CloudStack assign it dynamically.
L2 network creation allows the administrator to specify a VLAN tag (if the network offering allows it) or simply create the network.
VM deployments on L2 networks:
VMs do not get IP addresses or any network services
No Virtual Router deployed on network
If Specify VLAN = true for network offering, network gets implemented using a dynamically assigned VLAN
UI changes
A new button is added on Networks tab, available for admins, to allow L2 networks creation
- Migrate to embedded Jetty server.
- Improve ServerDaemon implementation.
- Introduce a new server.properties file for easier configuration.
- Have a single /etc/default/cloudstack-management to configure env.
- Reduce shaded jar file, removing unnecessary dependencies.
- Upgrade to Spring 5.x, upgrade several jar dependencies.
- Does not shade and include mysql-connector, used from classpath instead.
- Upgrade and use BouncyCastle as a separate un-shaded jar dependency.
- Remove tomcat related configuration and files.
- Have both embedded UI assets in uber jar and separate webapp directory.
- Refactor systemd and init scripts, cleanup packaging.
- Made cloudstack-setup-databases faster, using `urandom`.
- Remove unmaintained distro packagings.
- Moves creation and usage of the server keystore into the CA manager; this
deprecates the need to create/store cloud.jks in the conf folder and
the db.cloud.keyStorePassphrase in the db.properties file. This also
removes the need for the --keystore-passphrase option in the
cloudstack-setup-encryption script.
- GZip contents dynamically in embedded Jetty
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* VSP ID Caching
* VSP call Statistics
* 5.0 Support
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Added ability to specify mac in deployVirtualMachine and
addNicToVirtualMachine api endpoints.
Validates mac address to be in the form of:
aa:bb:cc:dd:ee:ff , aa-bb-cc-dd-ee-ff , or aa.bb.cc.dd.ee.ff.
Ensures that mac address is a Unicast mac.
Ensures that the mac address is not already allocated for the
specified network.
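A bash sketch of the two checks described above (format and unicast bit); this is only illustrative, the real validation is done in the management server:
    mac="02:00:4c:7f:01:9e"
    # format check for the aa:bb:cc:dd:ee:ff form (the API also accepts '-' and '.' separators)
    echo "$mac" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$' || echo "bad format"
    # unicast check: the least-significant bit of the first octet must be 0
    first=$((16#$(echo "$mac" | cut -d: -f1)))
    (( first & 1 )) && echo "multicast - rejected" || echo "unicast - ok"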
Steps to reproduce:
Configure a PF rule with private port range 20-25 and public port range 20-25.
Trigger the updatePortForwardingRule API.
The API fails with the following error: "Unable to update the private port of port forwarding rule as the rule has port range".
Solution:
The port range now gets modified.
- All tests should pass on KVM, Simulator
- Add test cases covering FSM state transitions and actions
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. The
framework adds the assumption in `NioClient` that a keystore, if available
with the known name `cloud.jks`, will be used for SSL negotiations and
handshake.
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificate to CloudStack agents such as
the KVM, CPVM and SSVM agents and also for the management server for
peer clustering.
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
global setting. Newly provisioned agents (KVM/CPVM/SSVM etc) will get a
randomized comma-separated list to which they will attempt connection
or reconnection in the provided order. This removes the need for a TCP LB on
port 8250 (default) of the management server(s).
- All fresh deployments will enforce two-way SSL authentication where
connecting agents will be required to present certificates issued
by the 'root' CA plugin.
- Existing environments on upgrade will continue to use one-way SSL
authentication and connecting agents will not be required to present
certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing the provided
certificate payload into the java keystore file.
- Agent security (keystore, certificates etc) is set up initially using
SSH, and later provisioning is handled via an existing agent connection
using command-answers. The supported clients and agents are limited to
CPVM, SSVM, and KVM agents, and clustered management servers (peering).
- Certificate revocation does not revoke an existing agent-mgmt server
connection; however, a revoked certificate is rejected when used during an SSL
handshake.
- The older `cloudstackmanagement.keystore` is deprecated and will no longer
be used by mgmt server(s) for SSL negotiations and handshake. New
keystores will be named `cloud.jks`; any additional SSL certificates
should not be imported into it for use with tomcat etc. The `cloud.jks`
keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on start up only;
their validity is the same as that of the CA certificates.
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
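A cloudmonkey sketch of the typical flow with the default provider (host id is a placeholder; exact parameter names may differ, see the API docs):
    cloudmonkey list caproviders
    cloudmonkey list cacertificate provider=root
    cloudmonkey provision certificate hostid=<host-uuid> provider=root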
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades the BouncyCastle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This allows native CloudStack users to change password in UI when LDAP
is enabled. Overall changes:
- A new usersource field is returned in the listUsers response
- Removed ldap check in the UI, replaced with check based on user source
- DB changes to include user.source in user_view
- Changed UI error message for non-native users trying to change password
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9669:egress destination cidr VR python script changes
CLOUDSTACK-9669:egress destination API and orchestration changes
CLOUDSTACK-9669: Added the ipset package in systemvm template
CLOUDSTACK-9669:Added licence header for new files
CLOUDSTACK-9669: replacing 0.0.0.0/0 with the network cidr
Adding an ipset member with 0.0.0.0/0 fails, so 0.0.0.0/0 is replaced with the network cidr.
As a source cidr, 0.0.0.0/0 is effectively the network cidr.
Updated the default egress "all" cidr to the network cidr.
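For illustration, the failing vs. working ipset operations and how such a set is typically referenced from iptables (set and chain names here are made up; the VR scripts use their own):
    ipset create egress_dst hash:net
    ipset add egress_dst 0.0.0.0/0      # rejected: ipset does not accept the zero-prefix network
    ipset add egress_dst 10.1.1.0/24    # the guest network cidr is added instead
    iptables -A FW_EGRESS_RULES -m set --match-set egress_dst src -j ACCEPT   # direction flag depends on the rule being built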
As of now, CloudStack can automatically import LDAP users based on the
configuration to a domain or an account. However, any new users in LDAP
aren't automatically reflected. The admin has to manually import them
again.
This feature enables the admin to map an LDAP group/OU to a CloudStack domain
so that any changes are reflected in ACS as well.
This fixes the agreed upon url on download.cloudstack.org in various
sql files and misc scripts.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- commented some occurrences of cloud.com as being harmless
* examples
* identifiers (internal)
- changed the URL for vhd-util download
- changed comments from 'cloud.com' to 'Apache CloudStack'
[dvswitch blocker] CLOUDSTACK-9591: Fix systemvmtemplate to not include network details
This removes nic/network specific details while exporting the systemvmtemplate for vmware (ova file). Having this causes the ssvms to not deploy in dvswitch-based vmware environments that have no vswitch portgroups (dummy etc). Tested this on a local Trillian env.
* pr/2022:
CLOUDSTACK-9591: Fix guest VM ovf xml to remove network nodes
CLOUDSTACK-9591: Fix systemvmtemplate to not include network details
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
This removes nic/network specific details while exporting the systemvmtemplate
for vmware (ova file). Having this causes the ssvms to not deploy in
dvswitch-based vmware environments that have no vswitch portgroups (dummy etc).
Tested this on a local Trillian env.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This adds support for virtio-scsi on KVM hosts, either
for guests that are associated with a new os_type of 'Other PV Virtio-SCSI (64-bit)',
or when a VM or template is registered with a detail parameter rootDiskController=scsi.
Update cloudstack add template dialog to allow for selecting rootDiskController with KVM
Update cloudstack kvm virtio-scsi to enable discard=unmap
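To verify the resulting guest definition on a KVM host, one can inspect the domain XML for the virtio-scsi controller and the discard setting (domain name is illustrative):
    virsh dumpxml i-2-10-VM | grep -E 'virtio-scsi|discard'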
This improves the metrics view feature by improving the rendering performance
of metrics view tables, reimplementing the logic at the backend with data
served via APIs. In large environments, the older implementation would
make several API calls, which increases both network and database load.
List of APIs introduced for improving the performance:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
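Each new call can be driven like its non-metrics counterpart, e.g. via cloudmonkey (illustrative):
    cloudmonkey list hostsmetrics
    cloudmonkey list volumesmetrics
    cloudmonkey list zonesmetrics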
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Switches Travis to use jdk1.8
- Changes java-version to 1.8
- Change jdk/maven version to 1.8
- Switch to F5/java8 compatible library release
- Switch packaging to use jdk 1.8, and jre 1.8 in init/systemd scripts
- Switch systemvm to openjdk-8-jre
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
XenServer 7 Support
This PR adds support for XenServer 7. I have manually done the following tests:
- Create a new cluster with XenServer7
- Add Primary storage: Should create an SR on XS7
- Add another XS7 host to the Pool
- Add host2 to Cloudstack
- Create VM1 from template
- Create VM2 from template
- Ping/SSH VM1 to VM2 and vice-versa
- Stop/Delete/Expunge VM2
- Create Data disk
- Attach it to VM1
- Create VM snapshot of VM1
- Restore VM snapshot of VM1
- Delete VM snapshot of VM1
- Create Volume snapshot of Datadisk
- Create volume snapshot of Root disk
- Create new template from snapshot of root disk
- Create volume from snapshot of datadisk
- Detach datadisk volume
- Delete datadisk volume
- Acquire a public IP
- Create a static nat to VM1
- Live migrate VM1 while traffic on VM
- Delete VM1
* pr/1711:
[CLOUDSTACK-9662] Add support for XenServer 7
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Upgrades Maven dependency version to v1.55
- Fixes BouncyCastle usages and issues
- Adds timeout to jetty/annotation scanning
- Fixes servlet issue, uses servlet 3.1.0
- Downgrade javassist used by reflections to fix annotation process errors
- Make console-proxy-rdp bc dependency same as rest of the codebase
- Picks up PR #1510 by Daan
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This fixes build_asf.sh release script to update checkstyle pom.xml with the
provided new version.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Multiple Internal LB rules (more than one Internal LB rule with the same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file. Moreover, each time a new Internal LB rule is added to the corresponding InternalLbVm instance, it replaces the existing one. Thus, traffic corresponding to these un-resolved (old) Internal LB rules is dropped by the InternalLbVm instance.
PR contents:
1) Fix for this bug.
2) Marvin test coverage for Internal LB feature on master with native ACS setup (component directory) including validations for this bug fix.
3) Enhancements to our existing Internal LB Marvin test code (nuagevsp plugins directory) to validate this bug fix.
4) PEP8 & PyFlakes compliance with the added Marvin test code.
* pr/1577:
CLOUDSTACK-9321 : Multiple Internal LB rules (more than one Internal LB rule with same source IP address) are not getting resolved in the corresponding InternalLbVm instance's haproxy.cfg file
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Support for underlay features (Source & Static NAT to underlay) with the Nuage VSP SDN Plugin, including Marvin test coverage for the corresponding Source & Static NAT features on master. Moreover, our Marvin tests are written in such a way that they can validate our supported feature set with both the Nuage VSP SDN platform's overlay and underlay infra.
PR contents:
1) Support for Source NAT to underlay feature on master with Nuage VSP SDN Plugin.
2) Support for Static NAT to underlay feature on master with Nuage VSP SDN Plugin.
3) Marvin test coverage for Source & Static NAT to underlay on master with Nuage VSP SDN Plugin.
4) Enhancements to our existing Marvin test code (nuagevsp plugins directory).
5) PEP8 & PyFlakes compliance with our Marvin test code.
* pr/1580:
CLOUDSTACK-9402 : Support for underlay features (Source & Static NAT to underlay) in Nuage VSP plugin
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9402 : Marvin tests for Source NAT and Static NAT features verification with NuageVsp (both overlay and underlay infra).
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>, Frank Maximus <frank.maximus@nuagenetworks.net>
This commit adds an additional VirtIO channel with the name
'org.qemu.guest_agent.0' to all Instances.
With the Qemu Guest Agent the Hypervisor gains more control over the Instance if
these tools are present inside the Instance, for example:
* Power control
* Flushing filesystems
* Fetching Network information
In the future this should allow safer snapshots on KVM since we can instruct the
Instance to flush the filesystems prior to snapshotting the disk.
More information: http://wiki.qemu.org/Features/QAPI/GuestAgent
Keep in mind that on Ubuntu AppArmor still needs to be disabled since the default
AppArmor profile doesn't allow libvirt to write into /var/lib/libvirt/qemu
This commit does not add any communication methods through API-calls, it merely
adds the channel to the Instances and installs the Guest Agent in the SSVMs.
With the addition of the Qemu Guest Agent channel, a second channel appears in /dev
on an SSVM as a VirtIO port.
The order in which the ports are defined in the XML matters for the naming inside
the SSVM, so by not relying on /dev/vportXX but looking for a static name the
SSVM still boots properly if the order in the XML definition is changed.
An SSVM with both ports attached will have something like this:
root@v-215-VM:~# ls -l /dev/virtio-ports
total 0
lrwxrwxrwx 1 root root 11 May 13 21:41 org.qemu.guest_agent.0 -> ../vport0p2
lrwxrwxrwx 1 root root 11 May 13 21:41 v-215-VM.vport -> ../vport0p1
root@v-215-VM:~# ls -l /dev/vport*
crw------- 1 root root 251, 1 May 13 21:41 /dev/vport0p1
crw------- 1 root root 251, 2 May 13 21:41 /dev/vport0p2
root@v-215-VM:~#
In this case the SSVM port points to /dev/vport0p1, but if the order in the XML
is different it might point to /dev/vport0p2
By looking for a portname with a pre-defined pattern in /dev/virtio-ports we
do not rely on the order in the XML definition.
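Once the channel is present and the agent runs inside the guest, the hypervisor side can talk to it, for example (domain name is illustrative; per the note above, CloudStack itself does not issue these calls yet):
    virsh qemu-agent-command v-215-VM '{"execute":"guest-ping"}'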
Signed-off-by: Wido den Hollander <wido@widodh.nl>
This PR adds the ability to pass a new parameter, locationType,
to the `createSnapshot` API command. Depending on the locationType,
we decide where the snapshot should go in the case of managed storage.
There are two possible values for the locationType param
1) `Standard`: The standard operation for managed storage is to
keep the snapshot on the device. For non-managed storage, this will
be to upload it to secondary storage. This option will be the
default.
2) `Archive`: Applicable only to managed storage. This will
keep the snapshot on the secondary storage. For non-managed
storage, this will result in an error.
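A cloudmonkey sketch of the new parameter (volume id is a placeholder):
    cloudmonkey create snapshot volumeid=<volume-uuid> locationtype=Standard
    cloudmonkey create snapshot volumeid=<volume-uuid> locationtype=Archive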
The reason for implementing this feature is to avoid a single
point of failure for primary storage. Right now in case of managed
storage, if the primary storage goes down, there is no easy way
to recover data as all snapshots are also stored on the primary.
This feature allows us to mitigate that risk.
[CLOUDSTACK-9337] Enhance vcenter.py to create data center in vcenter server automatically
These changes have been made to support vmware deployments in CI.
For CI to create a cloudstack setup with vmware, it is required to create the datacenter, cluster and hosts in the vcenter server before adding them in cloudstack. Added a few methods in vcenter.py to perform these tasks.
Please refer to CLOUDSTACK-9337 for more details.
* pr/1464:
[CLOUDSTACK-9337] Enhance vcenter.py to create data center in vcenter server automatically (Programmatically)
Signed-off-by: Rajani Karuturi <rajani.karuturi@accelerite.com>
- Switches to macchinina as template for VM in the tests
- Modifies the ostype of the macchinina template to 'Other Linux (64-bit)'
- Check template download status, fixes Nonetype iterable issue
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Handle case where physical network instance does not have vlan attribute
- Handle case where listIso response may not have status attribute
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Move the localization resource files from Java Properties format to JSON Key-Value format
Change the Transifex sync script to handle JSON resource files instead of Properties files
Update the README
Remove old version from the Transifex configuration file
Remove unused gen-l10n.py script and update the ui/pom.xml to remove the execution of this script
Add the Transifex config for next version of CS (4.10)
* pr/1619:
Add the Transifex config for next version of CS (4.10)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
We use some JSP files just for translation of strings in the UI. This is
achievable purely in JavaScript. This removes those JSPs and simplifies
translation usage and workflow (purely JS based). The l10n js (dictionary)
files are generated from the existing messages.properties files during the
client-ui code generation phase.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
* Renames schema-490to491*.sql to schema490to4910*.sql
* Renames the Upgrade490to491 class to Upgrade490to4910
* Removes the unused s_logger constant from Upgrade490to4910
* Updates the version in tools/marvin/setup to 4.9.1.0-SNAPSHOT
DNS on the VR should not be publicly accessible as it may be prone to DNS
amplification/reflection attacks. This fixes the issue by only allowing the VR
DNS (port 53) to be accessible from the guest network cidr, as per the fix in:
https://issues.apache.org/jira/browse/CLOUDSTACK-6432
- Only allows guest network cidrs to query VR DNS on port 53.
- Includes marvin smoke test that checks the VR DNS accessibility checks from
guest and non-guest network.
- Fixes Marvin sshClient to avoid using the ssh agent when a password is provided;
previously some environments may have seen a 'No existing session' exception without
this fix.
- Adds a new dnspython dependency that is used to perform dns resolutions in the
tests.
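Illustrative iptables rules of the kind applied on the VR, allowing DNS only from the guest network cidr (10.1.1.0/24 and eth0 are placeholders):
    iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p udp --dport 53 -j ACCEPT
    iptables -A INPUT -i eth0 -s 10.1.1.0/24 -p tcp --dport 53 -j ACCEPT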
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Often, patch and security releases do not require schema migrations or
data migrations. However, if an empty upgrade class and associated
scripts are not defined, the upgrade process will break. With this
change, if a release does not have an upgrade, a noop DbUpgrade is added
to the upgrade path. This approach allows the upgrade to proceed and
for the database to properly reflect the installed version. This change
should make the release process simpler as RMs no longer need to
remember to create this boilerplate code when starting a new release.
Beginning with the 4.8.2.0 and 4.9.1.0 releases, the project will
formally adopt a four (4) position release number to properly accommodate
releases that contain only CVE fixes. The DatabaseUpgradeChecker and
Version classes made assumptions that they would always parse and
compare three (3) position version numbers. This change adds the
CloudStackVersion value object that supports both three (3) and four (4)
position version numbers. It encapsulates version comparison logic, as well as
the rules that allow three (3) and four (4) position versions to interoperate.
* Modifies DatabaseUpgradeChecker to derive an upgrade path for
a version that was not explicitly specified. It determines the first
release before it with database migrations and uses that release's upgrade
list as the basis for the list for the version being calculated. A
noop upgrade is then added to the list which causes no schema changes
or data migrations, but will update the database to the version.
* Adds unit tests for the upgrade path calculation logic in
DatabaseUpgradeChecker
* Removes dummy upgrade logic for the 4.8.2.0 introduced in previous
versions of this patch
* Introduces the CloudStackVersion value object which parses and
compares three (3) and four (4) position version numbers. This class
is intended to replace com.cloud.maint.Version.
* Adds the junit-dataprovider dependency -- allowing test data to be
concisely generated separately from the execution of a test case.
Used extensively in the CloudStackVersionTest.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Marvin: Fix codegenerator to work with API discovery
This fixes the Marvin cloudstackAPI generator to work with a live running mgmt server's api discovery.
* pr/1599:
marvin: fix codegeneration against API discovery endpoint
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This makes the commands.xml based codegeneration equivalent to the
API discovery endpoint based codegeneration.
This fixes the fields that the (api discovery based) codegenerator should
produce in the generated python classes (cmd and response classes per
api/module). The issue was that the autogenerated cloudstackAPI differed between
api-based and apidocs-based code generation. With this fix the generated classes
match exactly, thereby allowing us to go with either method to generate
cloudstackAPI.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Update base debian iso to version 7.11
- Upgrade ruby version to 2.3.0 (latest/stable)
- Fix Gemfile
- Update README
- Fix openswan pkg name with the same version
- Remove cloud-cleanup as it's not available
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces two new cloudstack packages: marvin and integration-tests.
The two packages will make it easier for CI systems to install Marvin for a
specific cloudstack release/build and run integration tests that are specific
for that version/build.
- maven: add explicit juniper-contrail-api maven repository
- marvin: build source distribution for both install and package mvn phases
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Dynamically load drivers before creating our DB connections
Solution to the mailing thread titled "MySQL : No suitable driver found for jdbc:mysql".
It doesn't harm that we explicitly load the MySQL driver, and for those who use a commons-dbcp version < 1.4 this fixes it as well. Since JDBC 4.0, the JDBC driver can auto-register itself, but in some weird cases (like ours) it's not working. Therefore we need to explicitly load the JDBC driver.
* pr/1553:
Dynamic loading of DB driver + support for other DB providers
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9373: Class methods over-shadowing instance methods
We have some methods in base.py that are named the same.
Per my findings below, Python methods in a class should not be named the same even if one is a class method and the other is an instance method.
The solution discussed on dev@ is to remove the instance versions (reasons are listed in the e-mail text, which is included in the JIRA ticket).
https://issues.apache.org/jira/browse/CLOUDSTACK-9373
* pr/1528:
CLOUDSTACK-9373: Removing a few instance methods where there are class methods that are overshadowing them
Signed-off-by: Will Stevens <williamstevens@gmail.com>
- For out-of-band management feature (CLOUDSTACK-9299) use patched version of
ipmitool that would work on trusty travis machines
- The ipmitool used is from xenial/16.04 release with patch from RedHat
https://bugzilla.redhat.com/show_bug.cgi?id=1286035
- Installs ipmitool from xenial repositories to get all the dependencies
and then install patched deb version
- Skip test if the known failure occurs
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Support access to a host’s out-of-band management interface (e.g. IPMI, iLO,
DRAC, etc.) to manage host power operations (on/off etc.) and querying current
power state in CloudStack.
Given the wide range of out-of-band management interfaces such as iLO and iDRAC,
the service implementation allows for the development of separate drivers as plugins.
This feature comes with an ipmitool based driver that uses
ipmitool (http://linux.die.net/man/1/ipmitool) to communicate with any
out-of-band management interface that supports IPMI 2.0.
This feature allows the following common use-cases:
- Restarting stalled/failed hosts
- Powering off under-utilised hosts
- Powering on hosts for provisioning or to increase capacity
- Allowing system administrators to see the current power state of the host
For testing this feature `ipmisim` can be used:
https://pypi.python.org/pypi/ipmisim
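The driver essentially wraps invocations of this kind (host and credentials are placeholders):
    ipmitool -I lanplus -H 10.0.0.5 -U admin -P password chassis power status
    ipmitool -I lanplus -H 10.0.0.5 -U admin -P password chassis power on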
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
- Removes commands.properties file
- Fixes apidocs and marvin to be independent of commands.properties usage
- Removes bundling of commands.properties in deb/rpm packaging
- Removes file references across codebase
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows root administrators to define new roles and associate API
permissions to them.
A limited form of role-based access control for the CloudStack management server
API is provided through a properties file, commands.properties, embedded in the
WAR distribution. Therefore, customizing API permissions requires unpacking the
distribution and modifying this file consistently on all servers. The old system
also does not permit the specification of additional roles.
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dynamic+Role+Based+API+Access+Checker+for+CloudStack
DB-Backed Dynamic Role Based API Access Checker for CloudStack brings following
changes, features and use-cases:
- Moves the API access definitions from commands.properties to the mgmt server DB
- Allows defining custom roles (such as a read-only ROOT admin) beyond the
current set of four (4) roles
- All roles will resolve to one of the four known roles types (Admin, Resource
Admin, Domain Admin and User) which maintains this association by requiring
all new defined roles to specify a role type.
- Allows changes to roles and API permissions per role at runtime including additions or
removal of roles and/or modifications of permissions, without the need
of restarting management server(s)
Upgrade/installation notes:
- The feature will be enabled by default for new installations, existing
deployments will continue to use the older static role based api access checker
with an option to enable this feature
- During fresh installation or upgrade, the upgrade paths will add four default
roles based on the four default role types
- For ease of migration, at the time of upgrade commands.properties will be used
to add existing set of permissions to the default roles. cloud.account
will have a new role_id column which will be populated based on default roles
as well
Dynamic-roles migration tool: scripts/util/migrate-dynamicroles.py
- Allows admins to migrate to the dynamic role based checker at a future date
- Performs a harder one-way migrate and update
- Migrates rules from existing commands.properties file into db and deprecates it
- Enables an internal hidden switch to enable dynamic role based checker feature
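A cloudmonkey sketch of defining a custom read-only ROOT admin role with the new role APIs (uuid is a placeholder; rules are evaluated in order):
    cloudmonkey create role name="Read-Only Admin" type=Admin description="list-only root admin"
    cloudmonkey create rolepermission roleid=<role-uuid> rule="list*" permission=allow
    cloudmonkey create rolepermission roleid=<role-uuid> rule="*" permission=deny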
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Marvin: Replace a timer.sleep(30) with polling logic
https://issues.apache.org/jira/browse/CLOUDSTACK-9374
From the ticket:
In the base.py file, there is a Host class with a delete instance method.
This method first attempts to transition the host into the maintenance resource state.
The first step in this process is to transition the host into the prepare-for-maintenance resource state.
A while later, the host can be transitioned completely into the maintenance resource state.
In an attempt to wait for this transition to occur, the delete method has a timer.sleep(30) call.
The hope is that the host will have transitioned from the prepare-for-maintenance resource state to the maintenance resource state within 30 seconds, but this does not always happen.
We should correct this problem by putting in logic to query the management server for the resource state of the host. If it's in the expected state, move on; else, sleep for a bit and try again (up to a certain limit).
* pr/1529:
Replace a timer.sleep(30) with polling logic
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9350: KVM-HA - Fix CheckOnHost for Local storage
- KVM-HA - Fix CheckOnHost for Local storage
- Also skip HA on VMs that are using local storage
* pr/1496:
CLOUDSTACK-9350: KVM-HA- Fix CheckOnHost for Local storage - Also skip HA on VMs that are using local storage
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.8:
Update L10N resource files with 4.8 strings from Transifex (20160504)
Force "translator" mode with the transifex client.
Update Transifex client config file for 4.8 resources/L10N ref. (generated by Tx client)
CLOUDSTACK-9323: Fix cancel host maintenance
Fix cancel host maintenance so that if maintenance is cancelled the host comes back to a normal state gracefully.
Added marvin tests for host maintenance.
* pr/1454:
CLOUDSTACK-9323: Fix Cancel maintenance so that if maintenance is cancelled the host comes back to a normal state gracefully. Added marvin tests for host maintenance.
Signed-off-by: Will Stevens <williamstevens@gmail.com>
* 4.8:
Removed sleeps and used validateList as requested.
Added required_hardware="false" attr above test_02_root_volume_attach_detach
Modified test_volumes.py to include a hypervisor test for root attach/detach testing
Let hypervisor type KVM and Simulator detach root volumes. Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
* 4.7:
Removed sleeps and used validateList as requested.
Added required_hardware="false" attr above test_02_root_volume_attach_detach
Modified test_volumes.py to include a hypervisor test for root attach/detach testing
Let hypervisor type KVM and Simulator detach root volumes. Updated test_volumes.py to include a test for detaching and reattaching a root volume from a vm. I also had to update base.py to allow attach_volume to have the parameter deviceid to be passed as needed.
4.9 mvn version safe upgrade only
Upgrades maven dependency versions that can be safely upgraded without breaking console-proxy/crypto usage.
Bisected changes from: https://github.com/apache/cloudstack/pull/1397
cc @swill @DaanHoogland
* pr/1510:
maven: fix dependency version support by JDK7
further maven dependency updates from Daan
framework/quota: fix checkstyle issue
maven: Upgrade dependency versions
Signed-off-by: Will Stevens <williamstevens@gmail.com>
BUG-ID: CLOUDSTACK-9331: added code in marvin framework & new config file for advanced Baremetal support
Added code in the marvin framework and a new config file to support baremetal advanced testcases
* pr/1458:
BUG-ID:CLOUDSTACK-9331:added code in marvin frame&new config file to support baremetal advanced testcases
Signed-off-by: Will Stevens <williamstevens@gmail.com>
CLOUDSTACK-9322: Support for Internal LB functionality with Nuage VSP SDN Plugin including Marvin test coverage
Task: https://issues.apache.org/jira/browse/CLOUDSTACK-9322
PR contents:
1) UI changes to support LB provider InternalLbVm during VPC offering creation.
2) Bug fix for CLOUDSTACK-9320.
3) Nuage VSP SDN Plugin related enhancements for VPC network functionality.
4) Marvin test coverage for Internal LB support on master with Nuage VSP SDN Plugin.
5) Enhancements to our existing Marvin test code (nuagevsp plugins directory).
6) PEP8 & PyFlakes compliance with our Marvin test code.
Test run:
CloudStack$ nosetests --with-marvin --marvin-config=nuage_ant.cfg test/integration/plugins/nuagevsp/ -a tags=nuagevsp
Test results:
Test user data and password reset functionality with Nuage VSP SDN plugin ... === TestName: test_nuage_UserDataPasswordReset | Status : SUCCESS ===
ok
Test Nuage VSP VPC Offering with different combinations of LB service providers ... === TestName: test_01_nuage_internallb_vpc_Offering | Status : SUCCESS ===
ok
Test Nuage VSP VPC Network Offering with and without Internal LB service ... === TestName: test_02_nuage_internallb_vpc_network_offering | Status : SUCCESS ===
ok
Test Nuage VSP VPC Networks with and without Internal LB service ... === TestName: test_03_nuage_internallb_vpc_networks | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with different combinations of Internal LB rules ... === TestName: test_04_nuage_internallb_rules | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality by performing (wget) traffic tests within a VPC ... === TestName: test_05_nuage_internallb_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with different LB algorithms by performing (wget) traffic tests ... === TestName: test_06_nuage_internallb_algorithms_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with restarts of VPC network components by performing (wget) ... === TestName: test_07_nuage_internallb_vpc_network_restarts_traffic | Status : SUCCESS ===
ok
Test Nuage VSP VPC Internal LB functionality with InternalLbVm appliance operations by performing (wget) ... === TestName: test_08_nuage_internallb_appliance_operations_traffic | Status : SUCCESS ===
ok
Test Basic VPC Network Functionality with Nuage VSP SDN plugin ... === TestName: test_nuage_vpc_network | Status : SUCCESS ===
ok
Test Nuage VSP SDN plugin with basic Isolated Network functionality ... === TestName: test_nuage_vsp | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 11 tests in 12094.705s
OK
Test run logs:
[results.txt](https://github.com/apache/cloudstack/files/187587/results.txt)
[runinfo.txt](https://github.com/apache/cloudstack/files/187588/runinfo.txt)
Test config file:
[nuage_ant.txt](https://github.com/apache/cloudstack/files/222711/nuage_ant.txt)
Note: Attached the Marvin config file as .txt instead of .cfg.
PEP8 & PyFlakes Compliance:
CloudStack$ pep8 --max-line-length=150 test/integration/plugins/nuagevsp/*.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/nuageTestCase.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_password_reset.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vpc_internal_lb.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vpc_network.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/test_nuage_vsp.py
CloudStack$ pyflakes test/integration/plugins/nuagevsp/*.py
#CLOUDSTACK-9322
* pr/1452:
CLOUDSTACK-9322 : Marvin tests for Internal Lb with Nuage VSP
CLOUDSTACK-9320 : InternalLBVM is not getting destroyed when the last Internal Load Balancer rule is removed for the corresponding source IP address
CLOUDSTACK-9322 : Changes to support InternalLbVm with Nuage VSP plugin
Signed-off-by: Will Stevens <williamstevens@gmail.com>
Updated most dependencies to latest minor releases, EXCEPT:
- Gson 2.x
- Major spring framework version
- Servlet version
- Embedded jetty version
- Mockito version (beta)
- Mysql lib minor version upgrade (breaks mysql-ha plugin)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Updated test_volumes.py to include a test for detaching and reattaching a root volume from a VM. I also had to update base.py to allow attach_volume to accept an optional deviceid parameter as needed.
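A minimal sketch of that base.py change (the VirtualMachine class is stripped down here for illustration; the actual helper's signature may differ):

```python
# Illustrative sketch only; the real change lives in Marvin's base.py.
from marvin.cloudstackAPI import attachVolume

class VirtualMachine:
    def __init__(self, items):
        self.__dict__.update(items)

    def attach_volume(self, apiclient, volume, deviceid=None):
        """Attach a volume to this VM, optionally at a specific device id."""
        cmd = attachVolume.attachVolumeCmd()
        cmd.id = volume.id
        cmd.virtualmachineid = self.id
        if deviceid is not None:       # only send the parameter when the caller asks for it
            cmd.deviceid = deviceid
        return apiclient.attachVolume(cmd)
```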
- Migrate to trusty based Travis VMs
- Increase tests across five build matrices
- Fix xunit-reader output, include time
- Fix pip/python usage, pkg installation
- Build CloudStack in parallel with -T4
- Deploy database with optimized global settings
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9012: automation of coreos feature test path. https://issues.apache.org/jira/browse/CLOUDSTACK-9012
Automated a full scenario of coreos guest OS support. It includes:
1. registering the coreos templates available at http://dl.openvm.eu/cloudstack/coreos/x86_64/ based on the hypervisor types of the zone
2. creating an ssh key pair
3. creating sample user data
4. creating a coreos virtual machine using this ssh keypair and userdata
5. verifying ssh access to the coreos machine using the keypair and the 'core' username
6. verifying that the userdata is applied on the virtual machine and that the service requested in the sample data is actually running
7. verifying the userdata on the router VM as well
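A condensed outline of that flow in Marvin terms (resource names, service dictionaries and keyword arguments below are placeholders, not the actual test code):

```python
# Condensed outline of the automated coreos scenario; service dicts, file paths
# and keyword arguments are placeholders rather than the exact test code.
from marvin.lib.base import Template, SSHKeyPair, VirtualMachine

def run_coreos_scenario(apiclient, services, zone):
    # 1. Register a coreos template matching the zone's hypervisor and wait for it
    template = Template.register(apiclient, services["coreos_template"],
                                 zoneid=zone.id, hypervisor=zone.hypervisor)
    template.download(apiclient)

    # 2. Create an ssh key pair for the 'core' user
    keypair = SSHKeyPair.create(apiclient, name="coreos-keypair")

    # 3./4. Deploy a coreos VM with the key pair and a sample cloud-config userdata
    services["virtual_machine"]["userdata"] = (
        "#cloud-config\ncoreos:\n  units:\n"
        "    - name: nginx.service\n      command: start\n")
    vm = VirtualMachine.create(apiclient, services["virtual_machine"],
                               templateid=template.id, zoneid=zone.id,
                               keypair=keypair.name)

    # 5./6. Log in as 'core' with the key pair and check the requested unit is running
    vm.username = "core"
    ssh = vm.get_ssh_client(keyPairFileLocation="coreos-keypair.pem")
    assert "active" in " ".join(ssh.execute("systemctl is-active nginx"))
    return vm
```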
* pr/1011:
added suggested changes to coreos automation
automation of coreos feature test path
Signed-off-by: sanjeev <sanjeev@apache.org>
CLOUDSTACK-9161: fix the quota marvin test
1. Create a dummy user, as an existing user may already have stale quota data
2. Fix the tests to use the dummy user
3. A boundary condition was revealed and fixed for a new user for whom the quota service has never run and created bootstrap entries
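A minimal sketch of the dummy-user idea (account and service names are placeholders):

```python
# Illustrative: a fresh throw-away account ensures no stale quota records from
# earlier runs can skew the assertions; names below are placeholders.
from marvin.lib.base import Account

def setup_dummy_quota_account(apiclient, services, domain):
    account = Account.create(apiclient, services["account"], domainid=domain.id)
    # ...run the quota statement/balance checks against account.name here...
    return account   # callers should clean it up with account.delete(apiclient) in teardown
```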
* pr/1240:
CLOUDSTACK-9161: fix the quota marvin test
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9041: Modifying template creation from snapshot function. In the create_from_snapshot function of the Template class there is no parameter to specify whether the template should be public, so by default it creates private templates.
Hence this parameter is added.
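A sketch of the kind of change described (a stripped-down Template class for illustration; the new parameter is assumed to map to the createTemplate command's ispublic flag):

```python
# Illustrative version of Template.create_from_snapshot with an optional
# "ispublic" flag; without it, createTemplate defaults to a private template.
from marvin.cloudstackAPI import createTemplate

class Template:
    def __init__(self, items):
        self.__dict__.update(items)

    @classmethod
    def create_from_snapshot(cls, apiclient, snapshot, services, ispublic=False):
        """Create a template from a snapshot; ispublic controls its visibility."""
        cmd = createTemplate.createTemplateCmd()
        cmd.name = services["name"]
        cmd.displaytext = services["displaytext"]
        cmd.ostypeid = services["ostypeid"]
        cmd.snapshotid = snapshot.id
        cmd.ispublic = ispublic   # previously there was no way to request a public template
        return cls(apiclient.createTemplate(cmd).__dict__)
```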
* pr/1041:
CLOUDSTACK-9041: Modifying template creation from snapshot function in base.py
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.7:
Fix unable to setup more than one Site2Site VPN Connection
FIX S2S VPN rVPC: Check only redundant routers in state MASTER
PEP8 of integration/smoke/test_vpc_vpn
Add S2S VPN test for Redundant VPC
Make integration/smoke/test_vpc_vpn Hypervisor independent
FIX VPN: non-working ipsec commands
[UI] MADNESS
[DB] Add force_encap field to s2s_customer_gateway table
[ROUTER] Add forceencaps field to python router ipsec config method
[TEST] unittest needs rework
[MARVIN] Add forceencap field to VpnCustomerGateway class in marvin base
[CORE] Add Force UDP Encapsulation option to Site2Site VPN
CLOUDSTACK-9186: Root admin cannot see VPC created by Domain admin user
CLOUDSTACK-9192: UpdateVpnCustomerGateway is failing
CLOUDSTACK-6485 prevent ip assignment of private gw iface
CLOUDSTACK-9204 Do not error when staticroute is already gone
make both check lines consistent
CLOUDSTACK-9181 Prevent syntax error in checkrouter.sh
CLOUDSTACK-9202 Bump ssh timeout
1. Create a dummy user, as an existing user may already have stale quota data
2. Fix the tests to use the dummy user
3. A boundary condition was revealed and fixed for a new user for whom the quota service has never run and created bootstrap entries
Quota Marvin: If the quota plugin is not enabled skip tests
Quota Service: Enable quota plugin in zone setup configuration
Quota: Moving test_quota.py the test to test/integration/plugins
In most automated environments this test case will not run, as it
requires a management server restart to enable the plugin. Due to
this requirement it is moved to the plugin folder. This condition is
already documented in the test case.
build_asf.sh: fix debian changelog altering. The script already adds a new entry, so we shouldn't replace the
previous one. That is done only in the next-version script.
As discussed with @DaanHoogland
* pr/1239:
build_asf.sh: fix debian changelog altering
Signed-off-by: Remi Bergsma <github@remi.nl>
L10N update before 4.7.0 RC1. Some sentences (mainly for the new quota plugin) need to be translated before the 4.7.0 release candidate (planned for 14 December).
https://www.transifex.com/ke4qqq/CloudStack_UI/47xmessagesproperties/
Stats:
German (Germany) 100%
French (France) 100%
Portuguese (Brazil) 98%
Norwegian Bokmål (Norway) 98%
Dutch (Netherlands) 98%
Chinese (China) 97%
Japanese (Japan) 97%
Hungarian 96%
(only languages over 90%)
Ping me after adding the L10N strings into transifex, I will update this PR.
Thanks
cc @remibergsma @agneya2001 @DaanHoogland @nvazquez @barunhalder1234 @K0zka
* pr/1217:
Update L10N resource files with 4.7 strings from Transifex (20151211)
Signed-off-by: Remi Bergsma <github@remi.nl>
The Quota service, while allowing for scalability, will make sure that the cloud is
not exploited by attacks, careless use and program errors. To address this
problem, we propose to employ a quota-enforcement service that allows resource
usage within certain bounds as defined by policies and available quotas for
various entities. The Quota service extends the functionality of the usage server to
provide a measurement for the resources used by accounts and domains, using a
common unit referred to as cloud currency in this document. It can be configured
to ensure that usage won't exceed the budget allocated to accounts/domains
in cloud currency. It lets users know how much of the cloud resources they are
using, and it helps cloud admins, if they want, to ensure that a user does
not go beyond their allocated quota. If, in a usage cycle, an account is found to be
exceeding its quota, it is locked. Locking an account means that it will not
be able to initiate a new resource allocation request, whether it is for more
storage or an additional IP. Needless to say, the quota service as well as any action
on the account is configurable.
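A back-of-the-envelope illustration of the enforcement idea described above (the classes and field names are made up for clarity, not the plugin's actual code):

```python
# Illustrative quota-enforcement cycle: price usage records into "cloud currency",
# deduct from the account's credit balance, and lock the account when exhausted.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class UsageRecord:
    usage_type: str
    raw_usage: float          # e.g. running-VM hours, GB-months of storage

@dataclass
class Account:
    balance: float            # remaining credit in "cloud currency"
    locked: bool = False

def apply_usage_cycle(account: Account, records: List[UsageRecord],
                      tariffs: Dict[str, float]) -> Account:
    """Price each usage record by its tariff and deduct it from the balance."""
    for rec in records:
        tariff = tariffs.get(rec.usage_type, 0.0)
        if tariff <= 0 or rec.raw_usage <= 0:
            continue                              # zero-tariff / zero-usage entries are skipped
        account.balance -= rec.raw_usage * tariff
    if account.balance < 0:
        account.locked = True                     # a locked account cannot start new allocations
    return account
```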
Changes from Github code review:
- Added marvin test for quota plugin API
- removed unused commented code
- debug messages in debug enabled check
- checks for nulls, fixed access to member variables and feature
- changes based on PR comments
- unit tests for UsageTypes
- unit tests for all Cmd classes
- unit tests for all service and manager impls
- try-catch-finally or try-with-resource in dao impls for failsafe db switching
- remove dead code
- add missing quota calculation case (regression fixed)
- replace tabs with spaces in pom.xmls
- quota: though default value for quota_calculated is 0, the usage server
makes it null while entering usage entries. Flipping the condition so
as to account for that.
- quotatypes: fix NPE in quota type
- quota framework test fixes
- made statement period configurable
- changed default email templates to reflect the fact that exhausted quota may not result in a locked account
- added quotaUpdateCmd that refreshes quota balances and sends alerts and statements
- report quotaSummary command returns quota balance, quota usage and state for all accounts
- made UI framework changes to allow for text area input in edit views
- process usage entries that have greater than 0 usage
- process quota entries only if the tariff is non-zero
- if there are credit entries but no balance entry create a dummy balance entry
- remove any credit entries that are before the last balance entry
when displaying balance statement
- on a rerun the last balance is now getting added
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Quota+Service+-+FS
PR: https://github.com/apache/cloudstack/pull/768
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
CLOUDSTACK-9051: update nic IP address of stopped VM. This provides the feature to change the IP address of a NIC on a stopped VM via API and UI.
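A hedged example of driving the new capability from a Marvin-style client (the auto-generated updateVmNicIp command wrapper and the nicid/ipaddress parameter names are assumptions here, inferred from the description above):

```python
# Illustrative use of the new API from a Marvin test; assumes the auto-generated
# command wrapper marvin.cloudstackAPI.updateVmNicIp and a stopped VM.
from marvin.cloudstackAPI import updateVmNicIp

def change_nic_ip(apiclient, nic_id, new_ip):
    cmd = updateVmNicIp.updateVmNicIpCmd()
    cmd.nicid = nic_id            # NIC of the stopped VM
    cmd.ipaddress = new_ip        # desired IP inside the network's CIDR
    return apiclient.updateVmNicIp(cmd)
```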
* pr/1086:
CLOUDSTACK-9051: reprogram network as a part of vm nic ip update
CLOUDSTACK-9051: add unit tests for UpdateVmNicIp
CLOUDSTACK-9051: update nic IP address of stopped vm
Signed-off-by: Daan Hoogland <daan@onecht.net>
* 4.6:
CLOUDSTACK-9075 - Uses the same vlan since it should have been already released
CLOUDSTACK-9075 - Adds VPC static routes test
CLOUDSTACK-9075 - Covers Private GW ACL with Redundant VPCs
CLOUDSTACK-9075 - Add method to get list of Physical Networks per zone
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
CLOUDSTACK-6276 Fixing affinity groups for projects. With some contributions from @resmo and @ustcweizhou.
This closes https://github.com/apache/cloudstack/pull/508
To test manually (need at least 2 hosts):
Create a project
Create an affinity group in that project
Deploy a vm with that affinity group
Deploy a second vm with that affinity group
They should be on different hosts
Ran old and new tests for affinity groups on the simulator
Test create affinity group as admin in project ... === TestName: test_01_admin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as domain admin for projects ... === TestName: test_02_doadmin_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group as user for projects ... === TestName: test_03_user_create_aff_grp_for_project | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) for projects ... === TestName: test_4_user_create_aff_grp_existing_name_for_project | Status : SUCCESS ===
ok
#Delete Affinity Group by id. ... === TestName: test_01_delete_aff_grp_by_id | Status : SUCCESS ===
ok
#Delete Affinity Group by id should fail for user not in project ... === TestName: test_02_delete_aff_grp_by_id_another_user | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_01_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups with more vms than hosts. ... === TestName: test_02_deploy_vm_anti_affinity_group_fail_on_not_enough_hosts | Status : SUCCESS ===
ok
List affinity group for a vm for projects ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm for projects ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id for projects ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name for projects ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id for projects ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name for projects ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group for projects ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 16 tests in 581.706s
OK
Deploy vm as Admin in Affinity Group belonging to regular user (should fail) ... === TestName: test_01_deploy_vm_another_user | Status : SUCCESS ===
ok
Create Affinity Group as admin for regular user ... === TestName: test_02_create_aff_grp_user | Status : SUCCESS ===
ok
List Affinity Groups as admin for all the users ... === TestName: test_03_list_aff_grp_all_users | Status : SUCCESS ===
ok
List Affinity Groups belonging to admin user ... === TestName: test_04_list_all_admin_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing account id and domain id ... === TestName: test_05_list_all_users_aff_grp | Status : SUCCESS ===
ok
List Affinity Groups belonging to regular user passing group id ... === TestName: test_06_list_all_users_aff_grp_by_id | Status : SUCCESS ===
ok
Delete Affinity Group belonging to regular user ... === TestName: test_07_delete_aff_grp_of_other_user | Status : SUCCESS ===
ok
Test create affinity group as admin ... === TestName: test_01_admin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as domain admin ... === TestName: test_02_doadmin_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group as user ... === TestName: test_03_user_create_aff_grp | Status : SUCCESS ===
ok
Test create affinity group that exists (same name) ... === TestName: test_04_user_create_aff_grp_existing_name | Status : SUCCESS ===
ok
Test create affinity group with existing name but within different account ... === TestName: test_05_create_aff_grp_same_name_diff_acc | Status : SUCCESS ===
ok
Test create affinity group of non-existing type ... === TestName: test_06_create_aff_grp_nonexisting_type | Status : SUCCESS ===
ok
Delete Affinity Group by name ... === TestName: test_01_delete_aff_grp_by_name | Status : SUCCESS ===
ok
Delete Affinity Group as admin for an account ... === TestName: test_02_delete_aff_grp_for_acc | Status : SUCCESS ===
ok
Delete Affinity Group which has vms in it ... === TestName: test_03_delete_aff_grp_with_vms | Status : SUCCESS ===
ok
Delete Affinity Group with id which does not belong to this user ... === TestName: test_05_delete_aff_grp_id | Status : SUCCESS ===
ok
Delete Affinity Group by name which does not belong to this user ... === TestName: test_06_delete_aff_grp_name | Status : SUCCESS ===
ok
Delete Affinity Group by id. ... === TestName: test_08_delete_aff_grp_by_id | Status : SUCCESS ===
ok
Root admin should be able to delete affinity group of other users ... === TestName: test_09_delete_aff_grp_root_admin | Status : SUCCESS ===
ok
Deploy VM without affinity group ... === TestName: test_01_deploy_vm_without_aff_grp | Status : SUCCESS ===
ok
Deploy VM by aff grp name ... === TestName: test_02_deploy_vm_by_aff_grp_name | Status : SUCCESS ===
ok
Deploy VM by aff grp id ... === TestName: test_03_deploy_vm_by_aff_grp_id | Status : SUCCESS ===
ok
test DeployVM in anti-affinity groups ... === TestName: test_04_deploy_vm_anti_affinity_group | Status : SUCCESS ===
ok
Deploy vms by affinity group id ... === TestName: test_05_deploy_vm_by_id | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by name ... === TestName: test_06_deploy_vm_aff_grp_of_other_user_by_name | Status : SUCCESS ===
ok
Deploy vm in affinity group of another user by id ... === TestName: test_07_deploy_vm_aff_grp_of_other_user_by_id | Status : SUCCESS ===
ok
Deploy vm in multiple affinity groups ... === TestName: test_08_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy multiple vms in multiple affinity groups ... === TestName: test_09_deploy_vm_multiple_aff_grps | Status : SUCCESS ===
ok
Deploy VM by aff grp name and id ... === TestName: test_10_deploy_vm_by_aff_grp_name_and_id | Status : SUCCESS ===
ok
List affinity group for a vm ... === TestName: test_01_list_aff_grps_for_vm | Status : SUCCESS ===
ok
List multiple affinity groups associated with a vm ... === TestName: test_02_list_multiple_aff_grps_for_vm | Status : SUCCESS ===
ok
List affinity groups by id ... === TestName: test_03_list_aff_grps_by_id | Status : SUCCESS ===
ok
List Affinity Groups by name ... === TestName: test_04_list_aff_grps_by_name | Status : SUCCESS ===
ok
List Affinity Groups by non-existing id ... === TestName: test_05_list_aff_grps_by_non_existing_id | Status : SUCCESS ===
ok
List Affinity Groups by non-existing name ... === TestName: test_06_list_aff_grps_by_non_existing_name | Status : SUCCESS ===
ok
List affinity group should list all for a vms associated with that group ... === TestName: test_07_list_all_vms_in_aff_grp | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupids ... === TestName: test_01_update_aff_grp_by_ids | Status : SUCCESS ===
ok
Update the list of affinityGroups by using affinity groupnames ... === TestName: test_02_update_aff_grp_by_names | Status : SUCCESS ===
ok
Update the list of affinityGroups for vm which is not associated ... === TestName: test_03_update_aff_grp_for_vm_with_no_aff_grp | Status : SUCCESS ===
ok
Update the list of Affinity Groups to empty list ... SKIP: Skip - Failing - work in progress
Update the list of Affinity Groups on running vm ... === TestName: test_05_update_aff_grp_on_running_vm | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 42 tests in 976.432s
OK (SKIP=1)
* pr/1134:
CLOUDSTACK-6276 Removing unused parameter in integration test for projects
CLOUDSTACK-6276 Removing unused parameter in integration test
CLOUDSTACK-6276 Fixing affinity groups for projects
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.6:
Use version for RC branch name instead of branch
make sure all files are updated with the new version
Update L10N resource files with 4.6 strings from Transifex (20151129)
Fix secondary storage not working with swift
CLOUDSTACK-9083: Add disk serial to kvm virt xml
CLOUDSTACK-9048: Fix typo for public network description
* pr/1051:
CLOUDSTACK-9048: Fix typo for public network description
Signed-off-by: Remi Bergsma <github@remi.nl>
* 4.6:
more poms didn't get updated with script
implemented upgrade path from 4.6.0 to 4.6.1
checkstyle pom didn't get updated with script
debian: add 4.6.1-snapshot to changelog
Updating pom.xml version numbers for release 4.6.1-SNAPSHOT
Updating pom.xml version numbers for release 4.6.0
squashed commit for dockerfiles part#2 including comments from PR#910. This PR replaces PR#910 and includes fixes from the comments on PR#910.
This PR simplifies the download of systemvm templates, works with docker-compose, and enables the integration-api port and the use of local storage without reconfiguring CloudStack.
Rebased on Oct 28.
* pr/999:
squashed commit for dockerfiles part#2 including comments from PR#910
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8925 - Default allow for Egress rules is not being configured properly in VR iptables rules. This PR fixes the router's default policy for egress: when the default is DENY, the router still allows outgoing connections.
The test component/test_routers_network_ops.py was improved to cover that case as well; a simplified sketch of the intended iptables behaviour follows the test results below. The results were:
Test redundant router internals ... === TestName: test_01_isolate_network_FW_PF_default_routes_egress_true | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: test_02_isolate_network_FW_PF_default_routes_egress_false | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true | Status : SUCCESS ===
ok
Test redundant router internals ... === TestName: test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false | Status : SUCCESS ===
ok
----------------------------------------------------------------------
Ran 4 tests in 3636.656s
OK
/tmp//MarvinLogs/test_routers_network_ops_QDL429/results.txt
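A simplified sketch of the intended behaviour mentioned above (plain iptables calls for illustration; the chain name is indicative and the router actually manages this through its own helper scripts):

```python
# Illustrative only: apply the network offering's default egress policy as the
# catch-all rule in the VR's egress chain.
import subprocess

def apply_default_egress_policy(allow_by_default, chain="FW_EGRESS_RULES"):
    """Traffic not matched by an explicit egress rule ends up here: ACCEPT when
    the offering's default is allow, DROP when the default is deny."""
    target = "ACCEPT" if allow_by_default else "DROP"
    subprocess.check_call(["iptables", "-t", "filter", "-A", chain, "-j", target])
```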
* pr/1023:
CLOUDSTACK-8925 - Implement the default egress DENY/ALLOW properly
CLOUDSTACK-8925 - Improve the default egress tests in order to cover newly entered rules
CLOUDSTACK-8925 - Add egress dataset to test_data.py
CLOUDSTACK-8925 - Drop the traffic when default egress is set to false
Signed-off-by: Remi Bergsma <github@remi.nl>
CLOUDSTACK-8960: Remove Citrix Resources from test_data.py. Replace URLs related to templates and ISOs with ones accessible to everybody in the community.
I have copied all the templates and ISOs to my webspace at http://people.apache.org/~sanjeev/ so they can be accessed from anywhere.
* pr/939:
CLOUDSTACK-8960: Remove Citrix Resources from test_data.py. Replace URLs related to templates and ISOs with the ones accessible to everybody in the community
Signed-off-by: Remi Bergsma <github@remi.nl>
Add optional fields: iprange and fordisplay to Marvin base.py class method Vpn.create
Add optional field: passive to Marvin base.py class method Vpn.createVpnConnection
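A sketch of how optional fields like these are typically threaded through a base.py helper (illustrative, not the exact diff; it assumes the createRemoteAccessVpn command accepts the iprange and fordisplay parameters named above):

```python
# Illustrative: only set optional API parameters when the caller supplies them.
from marvin.cloudstackAPI import createRemoteAccessVpn

class Vpn:
    def __init__(self, items):
        self.__dict__.update(items)

    @classmethod
    def create(cls, apiclient, publicipid, account=None, domainid=None,
               iprange=None, fordisplay=None):
        cmd = createRemoteAccessVpn.createRemoteAccessVpnCmd()
        cmd.publicipid = publicipid
        if account:
            cmd.account = account
        if domainid:
            cmd.domainid = domainid
        if iprange is not None:
            cmd.iprange = iprange          # new optional field
        if fordisplay is not None:
            cmd.fordisplay = fordisplay    # new optional field
        return cls(apiclient.createRemoteAccessVpn(cmd).__dict__)
```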
CLOUDSTACK-8756: Incorrect guest os mapping in CCP 4.2.1-6 for CentOS 5.9. Check bug CLOUDSTACK-8756 for more details.
* pr/728:
CLOUDSTACK-8756: Incorrect guest os mapping in CCP 4.2.1-6 for CentOS 5.9
Signed-off-by: Rajani Karuturi <rajani.karuturi@citrix.com>
- Refactored the utils.get_process_status() function
- Adding 2 tests: test_01_single_VPC_iptables_policies and test_02_routervm_iptables_policies
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone-wide primary storage. On VMware, with a zone-wide primary storage and more than two clusters, verify successful creation of snapshots multiple times.
* pr/665:
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone wide primary Storage
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Example of problem:
ATTENTION: Merging pull request #731 from remibergsma/centos7-kvm into 'master' branch in 5 seconds. CTRL+c to abort..
-n 5
-n 4
-n 3
-n 2
-n 1
-n 0
Should be compatible with more environments if printf is used instead.
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone wide primary Storage
- Removed redundant code
- Added validateList function for the list snapshot operation
CLOUDSTACK-8716: Verify creation of snapshot from volume when the task is performed repeatedly in zone wide primary Storage
Removed duplicate test data related to vm properties. Modified tests dependent on it.
Removed duplicate service offerings from test data and modified tests dependent on it.
Bug-Id: CLOUDSTACK-8617
This closes #644
- Adding the keepalived installation in the right script; the change had previously been added to buildsystemvm.sh, which is no longer used.
Signed-off-by: wilderrodrigues <wrodrigues@schubergphilis.com>
* Move config options to SAML plugin
This moves all configuration options from Config.java to SAML auth manager. This
allows us to use the config framework.
* Make SAML2UserAuthenticator validate SAML token in httprequest
* Make logout API use ConfigKeys defined in saml auth manager
* Before doing SAML auth, cleanup local states and cookies
* Fix configurations in 4.5.1 to 4.5.2 upgrade path
* Fail if idp has no sso URL defined
* Add a default set of SAML SP cert for testing purposes
Now to enable and use saml, one needs to do a deploydb-saml after doing a deploydb
* UI remembers login selections, IDP server
- CLOUDSTACK-8458:
* On UI show dropdown list of discovered IdPs
* Support SAML Federation, where there may be more than one IdP
- New datastructure to hold metadata of SP or IdP
- Recursive processing of IdP metadata
- Fix login/logout APIs to get new interface and metadata data structure
- Add org/contact information to metadata
- Add new API: listIdps that returns list of all discovered IdPs
- Refactor and cleanup code and tests
- CLOUDSTACK-8459:
* Add HTTP-POST binding to SP metadata
* Authn requests must use either HTTP POST/Artifact binding
- CLOUDSTACK-8461:
* Use unspecified x509 cert as a fallback encryption/signing key
In case an IdP's metadata does not clearly say whether its certificates are to be
used for signing or encryption, fall back to using the
unspecified key itself.
- CLOUDSTACK-8462:
* SAML Auth plugin should not do authorization
This removes the logic to create users if they don't exist. It now strictly
assumes that users have already been created/imported/authorized by admins.
As per SAML v2.0 spec section 4.1.2, the SP provider should create authn requests using
either HTTP POST or HTTP Artifact binding to transfer the message through a
user agent (browser in our case). The use of HTTP Redirect was one of the reasons
why this plugin failed to work for some IdP servers that enforce this.
* Add new User Source
By reusing the source field, we can find out whether a user has been SAML enabled or not.
The limitation is that once a user is, say, imported via LDAP and then SAML
enabled, they won't be able to use LDAP for authentication.
* UI should allow users to pass in domain they want to log into, though it is
optional and needed only when a user has accounts across domains with same
username and authorized IDP server
* SAML users need to be authorized before they can authenticate
- New column entity to track saml entity id for a user
- Reusing source column to check if user is saml enabled or not
- Add new source types, saml2 and saml2disabled
- New table saml_token to solve the issue of multiple users across domains and
to enforce security by tracking authn token and checking the samlresponse for
the tokens
- Implement API: authorizeSamlSso to enable/disable saml authentication for a
user
- Stubs to implement saml token flushing/expiry
- CLOUDSTACK-8463:
* Use username attribute specified in global setting
Use username attribute defined by admin from a global setting
In case of encrypted assertion/attributes:
- Decrypt them
- Check signature if provided to check authenticity of message using IdP's
public key and SP's private key
- Loop through attributes to find the username
- CLOUDSTACK-8538:
* Add new global config for SAML request sig algorithm
- CLOUDSTACK-8539:
* Add metadata refresh timer task and token expiring
- Fix domain path and save it to saml_tokens
- Expire hour old saml tokens
- Refresh metadata based on timer task
- Fix unit tests
This closes #489
(cherry picked from commit 20ce346f3a)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Conflicts:
client/WEB-INF/classes/resources/messages_hu.properties
plugins/hypervisors/xenserver/src/com/cloud/hypervisor/xenserver/resource/wrapper/xenbase/CitrixCheckHealthCommandWrapper.java
plugins/user-authenticators/saml2/src/org/apache/cloudstack/api/command/SAML2LoginAPIAuthenticatorCmd.java
ui/scripts/ui-custom/login.js
Revert the fix that throws a failed test on exception; it will be fixed in a separate branch so that all the other fixes can be committed now without raising failures on an old intermittent issue that was previously hidden.
Signed-off-by: Daan Hoogland <daan@onecht.net>
Add urandom as random source in before_script.sh
Remove commented lines in before_script.sh
Remove commented lines in install.sh
Signed-off-by: Daan Hoogland <daan@onecht.net>