Commit Graph

341 Commits

Author SHA1 Message Date
Rohit Yadav 36166046cf
ScaleIO: Storage Plugin (Phase 0+1) (#77)
* scaleio: prototype storage plugin

- plugin skeleton
- add storage pool, create/attach data disk

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>

* kvm: attach disk example

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>

* Updated ScaleIO storage plugin to support Volume operations

* ScaleIO storage plugin - Support for VM operations and other updates

* ScaleIO storage pool plugin changes

- Added validation to check existing ScaleIO storage pool and update capacity details
- Updated ScaleIO volume resize to round the requested size up to the 8 GB boundary (see the sketch below)
- Added support for setting ScaleIO storage pool statistics (bandwidthLimitInKbps, iopsLimit)
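
The boundary rounding mentioned above could look like this minimal Java sketch (helper name and placement are illustrative, not the plugin's actual code):

    // Hypothetical helper: ScaleIO/PowerFlex provisions volumes in 8 GiB
    // units, so a requested size is rounded up to the next boundary.
    public final class ScaleIOSizeUtil {
        private static final long EIGHT_GIB = 8L * 1024 * 1024 * 1024;

        public static long roundUpToBoundary(long requestedBytes) {
            long units = (requestedBytes + EIGHT_GIB - 1) / EIGHT_GIB; // ceiling division
            return units * EIGHT_GIB;
        }
    }

For example, a 10 GB request would be provisioned as 16 GiB.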

* Fixed IOPS validation and volume size update when resizing ScaleIO volume

* Removed connect/disconnect disk changes from ScaleIO storage adaptor
- The ScaleIO datastore driver maps/unmaps the ScaleIO volume (from the MS) using grant/revoke access
- Mapping/unmapping the ScaleIO volume from the storage adaptor is therefore not required

* Updated connect disk to wait for the ScaleIO volume to become available on the KVM host

* Updated ScaleIO storage provider, pool type, url scheme and related parameters to the new "PowerFlex" brand

* Fixed size rounding issue while creating PowerFlex volume and added validations to PowerFlex Gateway API client

* Updated host sdc connection check for ScaleIO/PowerFlex pool on host connect

* Updated volume snapshots support for volumes on ScaleIO/PowerFlex storage pool, and added some validations for ScaleIO disks in the host

* Added primary storage level configurable setting "storage.pool.disk.wait" to wait for disk availability

- Configures how long to wait for the disk to become available in the host before performing any operation on it; introduced mainly for ScaleIO/PowerFlex storage pools, but usable with other managed storages
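
A minimal sketch of such a wait on the agent side (class and method names are illustrative, not the shipped code):

    import java.io.File;
    import java.util.concurrent.TimeUnit;

    final class DiskWaiter {
        // Poll until the mapped disk appears on the host, bounded by the
        // configured storage.pool.disk.wait value (in seconds).
        static void waitForDisk(String diskPath, int diskWaitSeconds) throws InterruptedException {
            long deadline = System.currentTimeMillis() + TimeUnit.SECONDS.toMillis(diskWaitSeconds);
            File disk = new File(diskPath);
            while (!disk.exists() && System.currentTimeMillis() < deadline) {
                Thread.sleep(1000); // re-check once per second
            }
            if (!disk.exists()) {
                throw new RuntimeException("Disk " + diskPath + " not available after " + diskWaitSeconds + "s");
            }
        }
    }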

* Enabled template spooling to the ScaleIO/PowerFlex storage pool and creating VMs from the spooled template.
Added ScaleIO SDC limits support for volumes using the offering parameters bandwidthLimitInKbps and iopsLimit.

* Added support for VM snapshots on ScaleIO/PowerFlex storage pool
Minor improvements for IOPS (SDC Limits) configuration

* Updated access for ScaleIO/PowerFlex volumes on VM Start and Stop
Added primary storage level configurable setting "storage.pool.client.timeout" for storage API client
Enabled cluster wide storage pool support for ScaleIO/PowerFlex storage
Minor improvements for ScaleIO/PowerFlex disk access in the KVM host

* Added support for direct download of templates (raw, qcow2) on ScaleIO/PowerFlex storage pool

* Added support for config drives in host cache for KVM

- Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
- Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
- Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
- Added new parameter "vm.configdrive.host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path for config drives

* Updated disk access while migrating the VM with volumes on ScaleIO/PowerFlex storage pool
Changed the parameter "vm.configdrive.host.cache.location" to "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties to specify the host cache path
Changes to create config drives in the "/config" directory under the host cache path
Changes to support migrating a VM with a config drive on the host cache path

* Additional changes to support migrating a VM with a config drive on the host cache

* Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
Updated full deployment destination for preparing the network(s) on VM start

* Propagate the direct download certificates uploaded to the newly added KVM hosts

* Code improvements for ScaleIO/PowerFlex storage plugin

* Updated storage stats collection and tests for ScaleIO/PowerFlex storage plugin

* Fix for template size of direct download templates on capacity check for ScaleIO/PowerFlex storage pool
Updated data object grant and revoke access for connected SDCs to ScaleIO/PowerFlex storage pool

* Discover the template size for direct download templates using any available host from the zones specified on template registration

When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

* Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

* Ensure the volume to be expunged is expunge-ready during storage cleanup

* Do not set the storage migration flag for the volumes on zone wide PowerFlex/ScaleIO pool when listing the hosts available for cross-cluster migration

* Release the VM resources when a VM is synced to Stopped state on PowerReportMissing (after the grace period)

* Added alerts for PowerFlex/ScaleIO SDC disconnection on the host(s)

* Retry VM deployment/start when the host cannot access volume/template on the ScaleIO/PowerFlex storage

* Changes to find a potential host that can access the ScaleIO/PowerFlex storage pool

* Updated ScaleIO/PowerFlex storage pool stats for checking the available capacity and usage

* Updated ScaleIO/PowerFlex volumes naming convention to avoid the naming conflicts on sharing

* Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand

- Do not trigger any DeleteCommand for never-used or downloaded templates, as these don't exist on the datastore and cannot be deleted from it

* Updated ScaleIO/PowerFlex storage pool capacity stats

* Cleanup unused templates and host entries on PowerFlex/ScaleIO storage pool deletion

* Check whether the router filesystem is writable before performing health checks

- Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
- The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so the test runs at both locations.

* Updated the router filesystem writable check to use a script instead of command execution

- Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
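
The check itself is the Python script named above; as an illustration of the same probe idea, a hedged Java equivalent:

    import java.io.File;
    import java.io.IOException;

    final class WritableCheck {
        // Try to create and remove a probe file in the given directory.
        static boolean isWritable(String dir) {
            File probe = new File(dir, ".writable_probe");
            try {
                return probe.createNewFile() && probe.delete(); // clean up the probe
            } catch (IOException e) {
                return false; // read-only or failing filesystem
            }
        }

        public static void main(String[] args) {
            // the health checks touch both partitions mentioned above
            System.out.println("/var/cache/cloud writable: " + isWritable("/var/cache/cloud"));
            System.out.println("/root writable: " + isWritable("/root"));
        }
    }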

* Update volume stats (physical and virtual size) for the volumes on PowerFlex/ScaleIO storage pool

Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2020-10-07 16:02:02 +05:30
pavanaravapalli d4b537efa7
UEFI Implementation: Enabled UEFI support for guest VMs on KVM and VMware hypervisors; enabled boot mode (Legacy, Secure) support for UEFI boot, with known caveats. (#3638)
Co-authored-by: Pavan Kumar Aravapalli <pavan_aravapalli@accelerite.com>
Co-authored-by: dahn <daan.hoogland@shapeblue.com>
2020-03-13 20:56:26 +01:00
Nicolas Vazquez 73122fd0a9
[KVM] Direct download agnostic of the storage provider (#3828)
* Remove constraint for NFS storage

* Add new property on agent.properties

* Add free disk space on the host prior template download

* Add unit tests for the free space check

* Fix free space check - retrieve available size in bytes
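
The free-space check boils down to comparing usable bytes at the download location against the template size; a hedged sketch (names are illustrative):

    import java.io.File;

    final class FreeSpaceCheck {
        static boolean hasEnoughSpace(String downloadDir, long templateSizeBytes) {
            long availableBytes = new File(downloadDir).getUsableSpace(); // available size in bytes
            return availableBytes >= templateSizeBytes;
        }
    }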

* Update default location for direct download

* Improve the method to retrieve hosts to retry on depending on the destination pool type and scope

* Verify location for temporary download exists before checking free space

* In progress - refactor and extension

* Refactor and fix

* Last fixes and marvin tests

* Remove unused test file

* Improve logging

* Change default path for direct download

* Fix upload certificate

* Fix ISO failure after retry

* Fix metalink filename mismatch error

* Fix iso direct download

* Fix for direct download ISOs on local storage and shared mount point

* Last fix iso

* Fix VM migration with ISO

* Refactor volume migration to remove secondary storage intermediate

* Fix simulator issue
2020-03-06 19:56:54 +01:00
Wei Zhou ce894238d9
vpc: add bypassvlanoverlapcheck parameter when create private g… (#3899) 2020-02-23 21:21:08 +00:00
Nicolas Vazquez ce896a477d
[Vmware] Enable PVLAN support on L2 networks (#3732)
* Enable PVLAN support on L2 networks

* Fix prevent null pointer on details

* Add marvin tests

* Fixes from comments

* Fix: missing pvlan type on PlugNicCommand

* Fix checks on network creation for vlans overlap

* Fix remove prefix from secondary vlan id

* Improve checks on physical network for pvlans

* Fix compatibility with previous pvlan creation

* Fix shared networks backwards pvlan compatibility

* Add ui fix for pvlan type not passed to api

* Add check for isolated vlan id overlap

* Include check for dynamic vlan reserved for secondary vlan

* Fix marvin tests errors

* Fix redundant imports

* Skip marvin test for pvlan if dvswitch is not present

* spelling

Co-authored-by: Andrija Panic <45762285+andrijapanicsb@users.noreply.github.com>
2020-02-07 15:43:01 +01:00
Abhishek Kumar 0f5b0e67f8
VM ingestion (#3606)
The VM ingestion feature allows CloudStack to discover, on-board, and import existing VMs in an infrastructure. The feature currently works only with VMware, with a hypervisor-agnostic framework which may be extended to KVM and XenServer in the future.
2020-02-03 15:43:52 +01:00
Wei Zhou ac581d1546
New feature: Resource count (CPU/RAM) takes only running VMs into calculation (#3760)
* marvin: check resource count of more types

* New feature: add flag resource.count.running.vms.only to count resource consumption of only running vms

Stopped VMs do not actually consume CPU/RAM.
A new global configuration, resource.count.running.vms.only, is added to determine whether only running VMs (including those in Starting/Stopping state) are taken into account when calculating resource (CPU/memory) consumption.
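
The counting rule reduces to a state filter; a sketch with illustrative types (not the shipped code):

    import java.util.EnumSet;
    import java.util.List;
    import java.util.Set;

    final class RunningVmResourceCount {
        enum State { Running, Starting, Stopping, Stopped }
        record Vm(State state, int cpu, long ramMb) {}

        private static final Set<State> COUNTED = EnumSet.of(State.Running, State.Starting, State.Stopping);

        // With resource.count.running.vms.only enabled, skip VMs that are
        // not Running/Starting/Stopping when summing RAM.
        static long ramMbToCount(List<Vm> vms, boolean runningVmsOnly) {
            return vms.stream()
                      .filter(vm -> !runningVmsOnly || COUNTED.contains(vm.state()))
                      .mapToLong(Vm::ramMb)
                      .sum();
        }
    }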

* Add integration test for resource count of only running vms
2020-01-30 10:36:50 +01:00
Abhishek Kumar b2db8979f2 server: fix for respecting secondary storage threshold limit (#3480)
Retrieval of an image store using ImageStoreProviderManager has been refactored by introducing three different methods (collected in the interface sketch below):

- DataStore getRandomImageStore(List<DataStore> imageStores);
  To get an image store for reading purposes. The threshold capacity check will not be used here.
- DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);
  To get an image store for writing purposes. The threshold capacity check will be used here and the store with the most free space will be returned. If no store with used storage below the threshold is found, NULL will be returned.
- List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
  To get a list of image stores for writing purposes which fulfill the threshold capacity check.

Correspondingly DataStoreManager methods have been refactored to return similar values for a given zone.
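
Collected as an interface, the three methods read as follows (signatures are from this commit message; the DataStore stub is only for self-containment):

    import java.util.List;

    interface DataStore { /* stub for CloudStack's image store abstraction */ }

    interface ImageStoreProviderManager {
        // Reads can go to any store, so no threshold capacity check.
        DataStore getRandomImageStore(List<DataStore> imageStores);

        // Writes: threshold check applied; the store with the most free
        // space is returned, or NULL if every store is over the threshold.
        DataStore getImageStoreWithFreeCapacity(List<DataStore> imageStores);

        // Writes: every store that passes the threshold capacity check.
        List<DataStore> listImageStoresWithFreeCapacity(List<DataStore> imageStores);
    }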

Fixes #3287 - NULL value will be returned when secondary storage is needed for writing but there is no store with free space.
Fixes #3041 - Rather than returning random secondary storage for writing, the storage with max free space will be returned.
Fixes #3478 - For migration on VMware, all writable secondary storages will be mounted during preparation.

Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2019-07-31 15:37:59 +05:30
Nicolas Vazquez 0fbf5006b8 kvm: live storage migration intra cluster from NFS source and destination (#2983)
Feature Specification: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95653548

Live storage migration on KVM under these conditions:

From source and destination hosts within the same cluster
From NFS primary storage to NFS cluster-wide primary storage
Source NFS and destination NFS storage mounted on hosts
In order to enable this functionality, the database should be updated to enable the live storage migration capability for KVM if the previous conditions are met. This is due to existing conflicts between qemu and libvirt versions. This has been tested on CentOS 6 hosts.

Additional notes:

To use this feature set storage_motion_supported=1 in the hypervisor_capabilities table for KVM. This is not done by default, as the feature may not work in some environments; read below.
This feature of online storage+VM migration for KVM will only work with CentOS 6 and possibly Ubuntu as KVM hosts, but not with CentOS 7, due to:
https://bugs.centos.org/view.php?id=14026
https://bugzilla.redhat.com/show_bug.cgi?id=1219541
On CentOS 7 the error we see is: "error: unable to execute QEMU command 'migrate': this feature or command is not currently supported" (reference https://ask.openstack.org/en/question/94186/live-migration-unable-to-execute-qemu-command-migrate/). Reading through various lists, it looks like the migrate feature with qemu may be available with paid versions of RHEL-EV but not with CentOS 7; however, it works with CentOS 6.
Fix for CentOS 7:

Create a repo file in /etc/yum.repos.d/:

    [qemu-kvm-rhev]
    name=oVirt rebuilds of qemu-kvm-rhev
    baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/el7Server/
    mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-el7Server
    enabled=1
    skip_if_unavailable=1
    gpgcheck=0

Then install the qemu-kvm-ev packages and reboot the host:

    yum install qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64 qemu-kvm-ev-2.3.0-29.1.el7.x86_64 qemu-img-ev-2.3.0-29.1.el7.x86_64

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-10 15:35:26 +05:30
Rohit Yadav bbc0ae873d
storage: post process locally uploaded multi-disk ova template (#3215)
Problem: When a multi-disk OVA template is uploaded, only the root disk is recognized, and VMs deployed using such a template only get the root disk provisioned.
Root Cause: The template processor for multi-disk OVAs was not used in the template upload processor.
Solution: Added support for local multi-disk OVA template upload. After a multi-disk OVA template is
uploaded, the mechanism that worked on multi-disk OVA templates registered via URL is now also used to discover and create data-disk templates in the cloud.vm_template table and on the secondary storage.

To enable SSL on SSVMs :
• Upload the certificates like you usually do via the API or UI->Infrastructure tab
• Set the global settings secstorage.encrypt.copy, secstorage.ssl.cert.domain to appropriate values
along with the CPVM ones
• Restart management server (no need to destroy/restart SSVM (or the ssvm agent))

Test cases:
- Upload template and check it creates multi-disk folders on secondary 
storage and entries in cloud.vm_template table
- Upload template and kill/shutdown management server. Then restart MS
to check if template sync works
- Copy template across zone of an uploaded template

Signed-off-by: Rohit Yadav rohit.yadav@shapeblue.com
2019-06-05 23:07:40 +05:30
Rohit Yadav b2b99ca63e Merge remote-tracking branch 'origin/4.11' into 4.12
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-03 17:15:41 +05:30
Nicolas Vazquez c9ce3e2344 router: Persistent DHCP leases file on VRs and cleanup /etc/hosts on VM deletion (#3351)
Since the CloudStack virtual router was redesigned in version 4.6, it has been observed that the DHCP leases file is not persistent across network operations. This causes conflicts with guest VMs' static IPs, as these static IPs are not renewed by the DHCP server (dnsmasq) running on isolated and VPC networks' virtual routers. On stopping or destroying a VM, its dhcp/dns records are not removed from the virtual router, causing ghost effects.

Fixes #3272
Fixes #3354

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2019-06-03 17:04:16 +05:30
Nathan Johnson 637cc6ec4e feature: add libvirt / qemu io bursting (#3133)
* feature: add libvirt / qemu io bursting

Adds the ability to set bursting features from libvirt / qemu

This allows you to utilize the iops and bytes temporary "burst" mode
introduced with libvirt 2.4 and improved upon with libvirt 2.6.

https://blogs.igalia.com/berto/2016/05/24/io-bursts-with-qemu-2-6/
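
The burst knobs map to the *_max attributes of libvirt's <iotune> disk element; a hedged example with invented values:

    final class IoTuneExample {
        // Plain *_sec values are the sustained limits; the *_sec_max
        // values are the temporary burst ceilings described in the post.
        static final String IOTUNE = """
                <iotune>
                  <total_bytes_sec>10485760</total_bytes_sec>
                  <total_bytes_sec_max>52428800</total_bytes_sec_max>
                  <total_iops_sec>500</total_iops_sec>
                  <total_iops_sec_max>2000</total_iops_sec_max>
                </iotune>
                """;
    }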

* updates per rafael et al
2019-02-04 19:47:44 -02:00
GabrielBrascher 460d3127ec Fix conflict and merge forward PR #3122 from 4.11 to master (4.12) 2019-02-04 19:24:59 -02:00
Nathan Johnson bf805d1483 Add back ability to disable backup of snapshot to secondary (#3122)
* The snapshot.backup.rightafter configuration variable was removed by:

SHA: 6bb0ca2f85

This adds it back, though named snapshot.backup.to.secondary now instead.

This global parameter, once set, will allow you to prevent automatic backups of
snapshots to secondary storage, unless they're actually needed.

Fixes #3096

* updates per review
2019-02-04 19:08:42 -02:00
dahn b363fd49f7 Vmware offline migration (#2848)
* - Offline VM and Volume migration on Vmware hypervisor hosts
- Also add VM disk consolidation call on successful VM migrations

* Fix indentation of marvin test file and reformat against PEP8

* * Fix few comment typos
* Refactor debug messages to use String.format() when debug log level is enabled.

* Send list of commands returned by hypervisor Guru instead of explicitly selecting the first one

* Fix unhandled NPE during VM migration

* Revert back to distinct event descriptions for VM to host or storage pool migration

* Reformat test_primary_storage file against PEP-8 and Remove unused imports

* Revert back the deprecation messages in the custom StringUtils class to favour the use of the ApacheUtils
2019-01-25 10:05:13 -02:00
Gabriel Beims Bräscher e45bed74a5 server: remove unused StrategyPriority.PLUGIN. (#3014)
Remove unused StrategyPriority.PLUGIN enum. The PLUGIN Strategy priority is not used, except by three JUnit test methods.
2018-11-14 15:07:37 +05:30
Rohit Yadav 233f46c94b Merge remote-tracking branch 'origin/4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-10-17 20:29:58 +05:30
Rohit Yadav 5ce14df31f
network: Allow ability to disable rolling restart feature (#2900)
This adds a global setting for admins who may not want the rolling
restart of routers or are seeing issues around it. In the future, this
setting may be removed.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-10-17 20:27:08 +05:30
Mike Tutkowski 3db33b7385 Support online migration of a virtual disk on XenServer from non-managed storage to managed storage 2018-08-12 00:23:36 -06:00
Rohit Yadav 7c6777b8d3 Merge branch '4.11': allow config drives on primary storage for KVM (#2651)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:50:55 +05:30
Rohit Yadav acc5fdcdbd
CLOUDSTACK-10290: allow config drives on primary storage for KVM (#2651)
This introduces a new global setting `vm.configdrive.primarypool.enabled` to toggle creation/hosting of config drive ISO files on primary storage; the default will be false, causing them to be hosted on secondary storage. The current support is limited on the hypervisor resource side and in the current implementation is limited to `KVM` only. The next big change is that the config drive is created at a temporary location by the management server and shipped to either the KVM or SSVM agent via the cmd-answer pattern, the data of which is not logged. This saves us from adding a genisoimage dependency on the cloudstack-agent pkg.

The APIs to reset ssh public key, password and user-data (via the update VM API) require that the VM be shut down. Therefore, in the refactoring I removed the case of updating an existing ISO. If there are objections I'll reinstate the strategy of detach+attach of a new config ISO as a way of updating. In the refactored implementation, the folder name is changed to lower-cased configdrive. During VM start, migration or shutdown/removal, if primary storage is enabled for use, the KVM agent will handle cleanup tasks; otherwise the SSVM agent will handle them.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 14:27:23 +05:30
Rafael Weingärtner 15eddf3dd6 Merge forward branch '4.11' PR #2629
Fix primary storage count when deleting volumes (#2629)
2018-05-16 16:59:17 -03:00
Rafael Weingärtner b9ed42bd29
Fix primary storage count when deleting volumes (#2629)
* Primary Storage count for an account does not decrease when a Data Disk is deleted

When a data disk is created and not attached to a running VM, "deleteVolume" will not decrement the used primary storage count in the account's resource accounting (a sketch of the fix follows the steps below). The property that is not being decremented is called "primarystoragetotal"; this information can be retrieved via the "listAccounts" API method.

Steps to reproduce this issue:
1 - Create an account, deploy a VM in it
2 - Check the primary storage count for the account with listAccounts API
3 - Create a data disk
4 - Check the primary storage count for the account with listAccounts API
5 - Delete the Data disk
6 - Check the primary storage count for the account with the listAccounts API - it is the same as before deleting the data disk (it should have decreased back to the value from step 2!)
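
The fix amounts to performing the missing decrement; a sketch where the service shape is an assumption modeled on CloudStack's resource limit service:

    interface ResourceLimits {
        void decrementResourceCount(long accountId, String resourceType, long delta);
    }

    final class VolumeCleanup {
        // deleteVolume of a detached data disk must also shrink the
        // account's "primarystoragetotal" by the volume size.
        static void onVolumeDeleted(ResourceLimits limits, long accountId, long volumeSizeBytes) {
            limits.decrementResourceCount(accountId, "primary_storage", volumeSizeBytes);
        }
    }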

* formatting and cleanups

* fix imports that were wrongly changed during rebase
2018-05-16 15:28:28 -03:00
Rohit Yadav 65511c4335 Merge branch '4.11': Reduce VR downtime during network restart (#2508)
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-11 13:09:03 +05:30
Rohit Yadav a77ed56b86
CLOUDSTACK-9114: Reduce VR downtime during network restart (#2508)
This introduces a rolling restart of VRs when networks are restarted
with the cleanup option, for isolated and VPC networks. A 'make redundant'
option is now shown for isolated networks in the UI.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-11 12:48:07 +05:30
Marc-Aurèle Brothier 46bd94c6a2 [CLOUDSTACK-10254] checkstyle: add package name declaration validation (#2422)
* checkstyle: verify package name matches directory structure

* fix new checkstyle findings on directory with package name mismatch
2018-04-26 10:32:08 -03:00
Marc-Aurèle Brothier 893a88d225 CLOUDSTACK-10105: Use maven standard project structure in all projects (#2283)
Remove the maven standard module (which only a few projects were using) and get rid of maven customization for the projects structure.

- moved all directories to src/main/java, src/main/resources, src/main/scripts, src/test/java, src/test/resources
- grep scan to search for src/com and src/org left over
- grep for <project>/scripts to fix pom.xml configuration
- remove custom <build> configuration in pom.xml

Signed-off-by: Marc-Aurèle Brothier <m@brothier.org>
2018-01-20 03:19:27 +05:30
Mike Tutkowski a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
2018-01-15 00:05:52 +05:30
Frank Maximus b176648f90 CLOUDSTACK-9813: Extending Config Drive support (#2097)
Extending Config Drive support

* Added support for VMware
* Build configdrive.iso on ssvm
* Added support for VPC and Isolated Networks
* Moved implementation to new Service Provider
* UI fix: add support for urlencoded userdata
* Add support for building systemvm behind a proxy

Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>
2018-01-12 15:14:40 +05:30
Abhinandan Prateek 64832fd70a CLOUDSTACK-4757: Support OVA files with multiple disks for templates (#2146)
CloudStack volumes and templates are one single virtual disk in the case of XenServer/XCP and KVM hypervisors, since the files used for templates and volumes are virtual disks (VHD, QCOW2). However, VMware volumes and templates are in OVA format, which are archives that can contain a complete VM including multiple VMDKs and other files such as ISOs. Currently, CloudStack only supports template creation based on OVA files containing a single disk. If a user creates a template from an OVA file containing more than one disk and launches an instance using this template, only the first disk is attached to the new instance and the other disks are ignored.
Similarly with uploaded volumes, attaching an uploaded volume that contains multiple disks to a VM will result in only one VMDK being attached to the VM.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Support+OVA+files+containing+multiple+disks

This behavior needs to be improved in VMware to support OVA files with multiple disks for both uploaded volumes and templates, i.e. if a user creates a template from an OVA file containing more than one disk and launches an instance using this template, the first disk should be attached to the new instance as the ROOT disk, and volumes should be created based on the other VMDK disks in the OVA file and attached to the instance.

Signed-off-by: Abhinandan Prateek <abhinandan.prateek@shapeblue.com>
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-10 22:10:41 +05:30
Nicolas Vazquez e86bb41e0e CLOUDSTACK-10146: Bypass Secondary Storage for KVM templates (#2379)
This feature allows using templates and ISOs on KVM while avoiding secondary storage as an intermediate cache. The virtual machine deployment process is enhanced to support bypassed registered templates and ISOs, delegating the work of downloading them to primary storage to the KVM agent instead of the SSVM agent.

Template and ISO registration:
- When hypervisor is KVM, a checkbox is displayed with 'Direct Download' label.
- API methods registerTemplate and registerISO are both extended with this new parameter directdownload.
- On template or ISO registration, no download job is sent to SSVM agent, CloudStack would only persist an entry on template_store_ref indicating that template or ISO has been marked as 'Direct Download' (bypassing Secondary Storage). These entries are persisted as:
template_id = Template or ISO id on vm_template table
store_id NULL
download_state = BYPASSED
state = Ready
(Note: these entries allow users to deploy virtual machine from registered templates or ISOs)
- A URL validation command is sent to a random KVM host to check if the template/ISO location can be reached. Metalinks are also supported by this feature. In case of a metalink, it is fetched and the URL check is performed on each of its URLs.
- Checksum should be provided as indicated on #2246: {ALGORITHM}CHKSUMHASH (a parsing sketch follows this list)
- After template or ISO is registered, it would be displayed in the UI
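
The checksum notation can be parsed with a few lines; a hedged sketch (regex and fallback are illustrative):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    final class ChecksumNotation {
        // Matches "{ALGORITHM}CHKSUMHASH", e.g. "{SHA-256}5d41402abc..."
        private static final Pattern FORM = Pattern.compile("^\\{(\\w[\\w-]*)\\}(\\w+)$");

        static String[] parse(String checksum) {
            Matcher m = FORM.matcher(checksum);
            if (!m.matches()) {
                return new String[] {"MD5", checksum}; // assumed default when no prefix is given
            }
            return new String[] {m.group(1), m.group(2)}; // {algorithm, hash}
        }
    }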

Virtual machine deployment:
When a 'Direct Download' template is selected for deployment, CloudStack would delegate template downloading to destination storage pool via destination host by a new pluggable download manager.
Download manager would handle template downloading depending on URL protocol. In case of HTTP, request headers can be set by the user via vm_template_details. Those details should be persisted as:
Key: HTTP_HEADER
Value: HEADERNAME:HEADERVALUE

In case of HTTPS, a new API method, uploadTemplateDirectDownloadCertificate, is added to allow the user to import a client certificate into all KVM hosts' keystores before deployment.
After template or ISO is downloaded to primary storage, usual entry would be persisted on template_spool_ref indicating the mapping between template/ISO and storage pool.
2018-01-09 12:22:18 +05:30
subhash yedugundla 8eca04e1f6 CLOUDSTACK-9572: Snapshot on primary storage not cleaned up after Storage migration (#1740)
Snapshot on primary storage not cleaned up after Storage migration. This happens in the following scenario:

Steps To Reproduce
Create an instance on the local storage on any host
Create a scheduled snapshot of the volume:
Wait until ACS has created the snapshot. ACS creates a snapshot on local storage and transfers it to secondary storage, but the latest snapshot will stay on local storage. This is as expected.
Migrate the instance to another XenServer host with the ACS UI and Storage Live Migration
The snapshot on the old host's local storage will not be cleaned up and stays there, so local storage will fill up with unneeded snapshots.
2018-01-05 11:19:56 +05:30
Sigert Goeminne 26759d1d13 CLOUDSTACK-10189: Adding nuage VSD managed network support to CloudStack (#2360)
Exposing the externalId and domainId fields in the UI to CS users.

Co-Authored-By: Sigert Goeminne sigert.goeminne@nuagenetworks.net
Co-Authored-By: Raf Smeets raf.smeets@nuagenetworks.net
2017-12-28 14:55:15 +05:30
Sigert Goeminne d49765619d CLOUDSTACK-10024: Network migration support
Co-Authored-By: Frank Maximus frank.maximus@nuagenetworks.net
Co-Authored-By: Raf Smeets raf.smeets@nuagenetworks.net

New APIs:

* migrateNetwork
* migrateVpc
2017-12-21 11:25:17 +01:00
Sigert Goeminne 77864992fe CLOUDSTACK-9776: extra DHCP options support for Nuage VSP
Co-Authored-By: Frank Maximus <frank.maximus@nuagenetworks.net>
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>
Co-Authored-By: Raf Smeets <raf.smeets@nuagenetworks.net>

Bug: https://issues.apache.org/jira/browse/CLOUDSTACK-9776

Design-Doc: https://cwiki.apache.org/confluence/display/CLOUDSTACK/CloudStack+extra+DHCP+option+support
2017-11-21 11:44:39 +01:00
Frank Maximus d077b3efc6
Merge pull request #2004 from nuagenetworks/feature/vr_without_public_ip
CLOUDSTACK-9832: Do not assign public IP NIC to the VPC VR when the VPC offering does not contain VpcVirtualRouter as a SourceNat provider
2017-11-02 11:56:05 +01:00
Rohit Yadav 41fdb88970 CLOUDSTACK-10047: DVSwitch fixes and improvements (#2293)
Allow security policies to apply on port groups:
- Accepts security policies while creating network offering
- Deployed network will have security policies from the network offering
  applied on the port group (in vmware environment)
- Global settings as fallback when security policies are not defined for a network
  offering
- Default promiscuous mode security policy set to REJECT as it's the default
  for standard/default vswitch

Portgroup vlan-trunking options for dvswitch: This allows admins to define
a network with comma-separated vlan ids and vlan ranges
such as vlan://200-400,21,30-50 and use the provided vlan range to
configure vlan-trunking for a portgroup in a dvswitch-based environment
(a parsing sketch follows the notes below).

VLAN overlap checks are performed for:
- isolated network against existing shared and isolated networks
- dedicated vlan ranges for the physical/public network for the zone
- shared network against existing isolated network

Allow shared networks to bypass vlan overlap checks: This allows admins
to create shared networks with a `bypassvlanoverlapcheck` API flag
which when set to 'true' will create a shared network without
performing vlan overlap checks against isolated network and against
the vlans allocated to the datacenter's physical network (vlan ranges).

Notes:
- No vlan-range overlap checks are performed when creating shared networks
- Multiple vlan id/ranges should include the vlan:// scheme prefix
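
A parsing sketch for the vlan:// trunk notation (parser is illustrative, not the shipped code):

    import java.util.ArrayList;
    import java.util.List;

    final class VlanUri {
        // Expand e.g. "vlan://200-400,21,30-50" into discrete VLAN ids.
        static List<Integer> parse(String uri) {
            String spec = uri.substring("vlan://".length()); // the scheme prefix is required
            List<Integer> vlans = new ArrayList<>();
            for (String part : spec.split(",")) {
                if (part.contains("-")) {
                    String[] range = part.split("-");
                    for (int v = Integer.parseInt(range[0]); v <= Integer.parseInt(range[1]); v++) {
                        vlans.add(v); // inclusive range such as 200-400
                    }
                } else {
                    vlans.add(Integer.parseInt(part)); // single id such as 21
                }
            }
            return vlans;
        }
    }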

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-10-25 15:14:42 +05:30
Frank Maximus 1d382e0cb4 CLOUDSTACK-9832: Remove public interface from VPC Virtual Router
Co-Authored-By: Prashanth Manthena <prashanth.manthena@nuagenetworks.net>
Co-Authored-By: Sigert Goeminne <sigert.goeminne@nuagenetworks.net>

Bug: https://issues.apache.org/jira/browse/CLOUDSTACK-9832

Detail:
When the VPC offering does not contain VpcVirtualRouter as a SourceNat provider,
then we will not add the interface in the public network to the VpcVR.

CLOUDSTACK-9832: Move isSrcNat check to VpcManager
2017-10-11 11:35:53 +02:00
Harika Punna 6bb0ca2f85 This feature separates snapshot creation on primary storage from its backup to secondary storage.
As part of this, a new optional parameter is added to CreateSnapshotCmd, which separates the creation and backup.

More details in the FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Separate+creation+and+backup+operations+for+a+volume+snapshot
2017-10-04 14:39:03 +05:30
Rohit Yadav 7ce54bf7a8 CLOUDSTACK-9993: Securing Agents Communications (#2239)
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. The
framework adds the assumption in `NioClient` that a keystore, if
available under the known name `cloud.jks`, will be used for SSL
negotiations and handshake.

This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificate to CloudStack agents such as
the KVM, CPVM and SSVM agents and also for the management server for
peer clustering.

Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
  global setting. Newly provisioned agents (KVM/CPVM/SSVM etc.) will get a
  randomized comma-separated list to which they will attempt connection
  or reconnection in the provided order. This removes the need for a TCP LB
  on port 8250 (default) of the management server(s).
- All fresh deployment will enforce two-way SSL authentication where
  connecting agents will be required to present certificates issued
  by the 'root' CA plugin.
- Existing environment on upgrade will continue to use one-way SSL
  authentication and connecting agents will not be required to present
  certificates.
- A script `keystore-setup` is responsible for the initial keystore setup
  and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing the provided
  certificate payload into the java keystore file.
- Agent security (keystore, certificates etc) are setup initially using
  SSH, and later provisioning is handled via an existing agent connection
  using command-answers. The supported clients and agents are limited to
  CPVM, SSVM, and KVM agents, and clustered management server (peering).
- Certificate revocation does not revoke an existing agent-mgmt server
  connection, however rejects a revoked certificate used during SSL
  handshake.
- The older `cloudstackmanagement.keystore` is deprecated and will no longer
  be used by mgmt server(s) for SSL negotiations and handshake. New
  keystores will be named `cloud.jks`; any additional SSL certificates
  should not be imported into it for use with tomcat etc. The `cloud.jks`
  keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on start-up only;
  their validity period is the same as that of the CA certificates.

New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial

Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates

Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed

UI changes:
- Button to download/save the CA certificates.

Misc changes:
- Upgrades the bouncycastle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-08-28 12:15:11 +02:00
Nitin Kumar Maharana e243a31e41 CLOUDSTACK-8672 : NCC Integration with CloudStack.
Improvements.
2017-07-20 12:42:43 +05:30
Rajani Karuturi 9fd0965087 Merge pull request #2126 from Accelerite/CLOUDSTACK-9740
CLOUDSTACK-9740 : Search for secondary IP of NIC that is attached to an instance is not working
2017-06-06 16:29:45 +05:30
Nitesh Sarda 5eed75120b CLOUDSTACK-9740 : Search for secondary IP of NIC that is attached to an instance is not working 2017-05-31 15:42:51 +05:30
Anshul Gangwar f52719a9cf CLOUDSTACK-9707: While using the hostid parameter, the VM gets deployed on another host if the given
host is running out of capacity. If a host id is specified, the deployment should happen
on the given host and fail if that host is out of capacity. We were retrying
deployment on the entire zone, without the given host id, after the first failure. The retry,
which tries other hosts, should only be attempted if a host id isn't given.

Also introduces the global setting
allow.deploy.vm.if.deploy.on.given.host.fails, with which the old behaviour
can be restored.
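
The corrected retry rule fits in a few lines (sketch; names are illustrative):

    final class DeployRetryPolicy {
        static boolean mayRetryOnOtherHosts(Long requestedHostId, boolean allowFallbackWhenHostFails) {
            if (requestedHostId == null) {
                return true; // no host pinned: the planner may try other hosts
            }
            // pinned host: fail fast unless
            // allow.deploy.vm.if.deploy.on.given.host.fails restores the old behaviour
            return allowFallbackWhenHostFails;
        }
    }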
2017-05-18 12:21:30 +05:30
Rajani Karuturi 83b93d2f60 Merge pull request #1971 from bvbharatk/CLOUDSTACK-9726
CLOUDSTACK-9726 Update state is not changed to UPDATE_FAILED in case …
2017-05-17 11:19:25 +05:30
Bharat Kumar 55067a8692 CLOUDSTACK-9726 Update state is not changed to UPDATE_FAILED in case when Host is put in Maintenance Mode. 2017-03-27 05:44:05 -07:00
Jayapal 7eea445703 CLOUDSTACK-9723: Enable unique mac address across the zones 2017-02-23 12:39:31 +05:30
nvazquez 6ce6cf67f0 CLOUDSTACK-9738: [Vmware] Optimize vm expunge process for instances with vm snapshots 2017-02-06 23:39:01 -03:00
Rohit Yadav e6cc78f531 CLOUDSTACK-9710: Switch to JRE1.8
- Switches Travis to use jdk1.8
- Changes java-version to 1.8
- Change jdk/maven version to 1.8
- Switch to F5/java8 compatible library release
- Switch packaging to use jdk 1.8, and jre 1.8 in init/systemd scripts
- Switch systemvm to openjdk-8-jre

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-01-11 14:04:03 +05:30