Commit Graph

171 Commits

Author SHA1 Message Date
Nicolas Vazquez 35357dc8f9
FR03: NSX Integration (#1)
* NSX integration - skeletal code

* Fix module not loading on startup

* add upgrade path and daos; add nsx controller command

* add support for adding and listing nsx provider to a zone

* add license

* add default VPC offering and update upgrade path

* add global setting to enable nsx plugin

* add delete nsx controller operation

* add nsxresource

* add NSX resource , api client, create tier1 gw

* update db

* update response and add license

* Add support to create and delete nsx tier-1 gateway

* add license

* cleanup and add skeletal code for network creation

* add create/delete segment and UI integration

* add license

* address code smells - part 1

* fix test / build failure

* add ui changes + update nsx_provider table transport zones + use NSX broadcast domain for adding nics to router

* ui: fix password field, and backend changes

* add route advertisement

* update offering

* update offering

* add sleep before deletion of vpc / tier g/w for ports to be removed

* move creation of segments to design phase

* change provider to VPC router for DHCP & DNS service in an NSX offering

* Add public nic for NSX

* reserve first IP (after g/w) of subnet for router nic - NSX

* revert reserving 1st IP in vpc segments

* [NSX] Create a DHCP relay and add it to a VPC tier segment (#107)

* Create DHCP relay command and execute request

* In progress integrate with networking

* Create DHCP relay config on the network VR allocation

* Revert domain router dao changes

* Create DHCP relay config on VR nic plug to NSX network

* Link DHCP relay config to segment after creation

* [NSX] Cleanup DHCP Relay config on segment deletion (#108)

* Cleanup DHCP Relay config on segment deletion

* update segment & relay name generators and call delete dhcprelay after deletion of segment

* address comment

* [NSX] Fix DHCP relay config deletion was missing zone name (#8068)

* [NSX] Refactor API wrapper operations (#8059)

* [NSX] Refactor API wrapper operations

* Big refactor

* Address review comment

* change network cidr to cidr to prevent NPE

* add domain and zone names to the various networks - vpc & tier

---------

Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>

* Nsx unit tests (#8090)

* Add tests

* add test for NsxGuestNetworkGuru

* add unit tests for NsxResource

* add unit tests for NsxElement

* cleanup

* [NSX] Refactor API wrapper operations

* update tests

* update tests - add nsxProviderServiceImpl test

* add unit test - NsxServiceImpl

* add license

* Big refactor

* Address review comment

* change network cidr to cidr to prevent NPE

* add domain and zone names to the various networks - vpc & tier

* fix tests

---------

Co-authored-by: nvazquez <nicovazquez90@gmail.com>

* modify NSX resource naming convention (#8095)

* modify NSX resource naming convention

* remove unused imports

* add a setup phase between design and implementation of a network for intermediary steps

* add method to all classes

* NSX: Refactor Network & VPC offering (#8110)

* [NSX] Refactor API wrapper operations

* Network offering changes for NSX

* fix services and provider combination

* address comments: rename param

* update nsx_mode parameter

---------

Co-authored-by: nvazquez <nicovazquez90@gmail.com>

* fix test

* [NSX] Allow NSX isolated networks (#8132)

* Add network offerings for NSX on isolated networks

* Fix offerings creation

* In progress NSX isolated network

* Fixes

* Fix NIC allocation to router

* NSX: Add Step for Adding Public traffic network for NSX During zone creation (#8126)

* NSX: Add Step for Adding Public traffic network for NSX

* address comments and cleanup

* address comment

* remove indent

* NSX: Create and Delete static NAT & Port forward rules (#8131)

* NSX: Create and delete NSX Static Nat rules

* fix issues with static nat

* add static nat

* Support to add and delete Port forward rules

* add license

* fix adding multiple pf rules

* cleanup

* fix lint check

* fix smoke tests

* fix smoke tests

* NSX: add LB rule (#8161)

* NSX: Create and delete NSX Static Nat rules

* fix issues with static nat

* add static nat

* Support to add and delete Port forward rules

* add license

* fix adding multiple pf rules

* cleanup

* NSX: Add support to create and delete Load balancer rules

* fix deletion of lb rules

* add header file and update protocol detail

* build failure fix

* [NSX] Add SNAT support (#8100)

* In progress add source NAT

* Fix after merge

* Fix tests

* Fix NPE on isolated network deletion

* Reserve source NAT IP when it's not passed for NSX VPC

* Create source NAT rule on VR NIC allocation

* Fix update VPC and remove VPC to update and remove SNAT rule

* Fix packaging

* Address review comment

* Fix build

* fix build - unused import

* Add defensive checks

* Add missing design to NSX public guru

---------

Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>

* NSX: Fix VR public NIC allocation (#8166)

* NSX: fix LB member addition and deletion and add defensive checks (#8167)

* Fix public NIC NPE on broadcast URI

* NSX: Router Public nic to get IP from systemVM Ip range (#8172)

* NSX: Router Public nic to get IP from systemVM Ip range

* Fix VR IP address and setSourceNatIp command

* NSX: hide systemVM reserved IP range SourceNAT

* fix test

---------

Co-authored-by: nvazquez <nicovazquez90@gmail.com>

* fix test failure

* test failure fix

* [NSX] Fix update source NAT IP (#8176)

* [NSX] Fix update source NAT IP

* Fix startup

* Fix API result

* NSX - add LB route Advertisement (#8192)

* [NSX] Add ACL types support (#8224)

* NSX: Create segment group on segment creation

* Add unit tests

* Remove group for segment before removing segment

* Create Distributed Firewall rules

* Remove distributed firewall policy on segment deletion

* Fix policy rule ID and add more unit tests

* Fix DROP action rules and transform tests

* Add new ACL rules

* Fixes

* associate security policies with groups and not to DFW and add deletion of rules

* Fix name convention

---------

Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>

* NSX: Fix creation of VPCs (#8320)

* Fix ACL rules creation (#8323)

* [NSX] Fix database views (#8325)

* NSX: Add CKS Support & Firewall rules for Isolated Networks (#8189)

* NSX: Add ALL LB IP to the list of route advertisements in tier1

* NSX: Support Source NAT on NSX Isolated networks

* NSX: Cks Support

* NSX: Create segment group on segment creation

* Add unit tests

* Remove group for segment before removing segment

* Create Distributed Firewall rules

* Remove distributed firewall policy on segment deletion

* Fix policy rule ID and add more unit tests

* Add support for routed NSX Isolated networks and non-RFC 1918 compliant IPs

* Add Firewall rules

* build failure - fix unit test

* fix NPEs

* Add support to delete firewall rules

* update nsx cks offering

* add license

* update order of ports in PF & FW rules

* fix filter for getting transport zones

* CKS support changed - MTU updated, etc

* add LB for CKS on VPC

* address comments

* adapt upstream cks logic for vpc

* revert mtu hack

* update UI changes as per upstream fix

* change display text for CKS n/w offerings for isolated and VPC tiers

* add extra line for linter

* address comment

* revert list change

---------

Co-authored-by: nvazquez <nicovazquez90@gmail.com>

* fix ui build failure

* [NSX] Address SonarCloud Bugs (#8341)

* [NSX] Address SonarCloud Bugs

* Fix NSX API connection issues

* NSX: Add unit tests to increase coverage (#8355)

* NSX: Add unit tests

* cleanup unused imports

* add more unit tests

* add tests for publicnsxnetworkguru

* add license

* fix build failures

* address sonar comment

* fix security hotspots

* NSX: Add more unit tests (#8381)

* NSX : Unit tests

* remove unused imports

* remove unused import causing build failure

* fix build failures due to unused imports

* fix build failure

* fix test assertion

* remove unused imports

* remove unused import

* NSX UI zone bug (#8398)

* NSX: Attempt to fix NSX Zone creation bug for public networks

* fix zone wizard public traffic issue

* add proper filtering of offerings based on VPC nsx mode

* clean up console logs

* NSX: Fix code smells and reported bugs (#8409)

* NSX: Fix code smells and reported bugs

* fix override issue

* remove unused imports

* fix test

* refactor code to reduce complexity

* add license

* cleanup

* fix build failure

* fix build failure

* address comments

* test - add config to ignore certain files from test coverage

* test exclusion of classes from test cov

* revert pom changes

* [NSX] Add more unit tests (#8431)

* [NSX] Add more unit tests

* More tests

* Fix build errors

* NSX: Prevent creation of L2 and Shared networks for NSX (#8463)

* NSX: Prevent creation of L2 and Shared networks for NSX

* add checks to backend to prevent creation of l2 and shared networks in nsx zones and filter only nsx offerings when creating isolated networks

* cleanup

* NSX: Fix code smells (#8436)

* NSX: Fix code smells

* Add changes to service creation logic

* CKS: Add action during firewall rule creation (#8498)

* NSX,UI: Deduplicate network list when creating kubernetes clusters (#8513)

* NSX: Make LB service selectable in network offering (#8512)

* NSX: Make LB service selectable in network offering

* fix label

* address comments

* address comments

* NSX: Add appropriate error message when icmp type is set to -1 for NSX (#8504)

* NSX: Add appropriate error message when icmp type is set to -1 for NSX

* address comments

* update text

* fix test

* fix test - build failure

* fix test - build failure

* NSX: Cleanup NSX resources during k8s cluster cleanup (#8528)

* fix test failure

* NSX: Improve segment deletion process (#8538)

* NSX: Add passive monitor for NSX LB to test whether a server is available (#8533)

* NSX: Add passive monitor for NSX LB to test whether a server is available

* Add active monitors too

* fix build failure

* NSX: Add check for ICMP code / type for NSX zones (#8542)

* NSX: Fix Routed Mode for Isolated and VPC networks (#8534)

* NSX: Fix Routed Mode for Isolated and VPC networks

* NSX: Fix Routed mode - add checks for ports added for FW rules

* clean up code

* fix build failure

* NSX: Add retry logic with sleep to delete segments (#8554)

* NSX: Add retry logic with sleep to delete segments

* add logs

* Update pom XML for the NSX project

---------

Co-authored-by: Pearl Dsilva <pearl1594@gmail.com>
2024-01-24 12:24:15 -03:00
Abhishek Kumar 43066e4020 Updating pom.xml version numbers for release 4.19.0.0
Signed-off-by: Abhishek Kumar <abhishek.mrt22@gmail.com>
2024-01-22 10:28:05 +05:30
Nicolas Vazquez 8d42ca8ccf
Use project version on pom dependencies (#8529)
This PR changes the POM dependencies from a hardcoded version to the project.version property.
2024-01-18 20:16:06 +05:30
Rene Glover 1031c31e6a
FiberChannel Multipath for KVM + Pure Flash Array and HPE-Primera Support (#7889)
This PR provides a new primary storage volume type called "FiberChannel" that allows access to volumes connected to hosts over fiber channel connections; it requires Multipath to provide path discovery and failover. Second, the PR adds an AdaptivePrimaryDatastoreProvider that abstracts how volumes are managed/orchestrated from the connector communicating with the primary storage provider, using a ProviderAdapter interface, so the code interacting with the primary storage provider APIs stays simple and has no direct dependencies on CloudStack code. Lastly, the PR provides an implementation of the ProviderAdapter classes for the HP Enterprise Primera and Pure Flash Array lines of storage solutions.
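For illustration only, a minimal Java sketch of the ProviderAdapter idea: a narrow, CloudStack-free interface that each storage connector (Primera, Pure) implements. The method names and signatures below are assumptions for the sake of the example, not the PR's actual interface.

```java
// Hypothetical sketch of a ProviderAdapter-style contract; the real PR defines
// its own signatures. Note there are no CloudStack types in the interface.
public interface ProviderAdapter {
    String createVolume(String name, long sizeInBytes);         // returns the provider-side volume id
    void deleteVolume(String providerVolumeId);
    void attachVolume(String providerVolumeId, String hostWwn); // hostWwn: the host's fiber channel WWN
    void detachVolume(String providerVolumeId, String hostWwn);
}
```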
2023-12-09 11:31:33 +05:30
kishankavala 5651eab49c
ObjectStore Framework with MinIO and Simulator plugins (#7752)
This PR adds an Object Storage feature to CloudStack.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/%5BDRAFT%5D+CloudStack+Object+Store
2023-12-01 17:51:00 +05:30
Harikrishna 235e4fe190
Oauth2 integration with CloudStack (#7996)
OAuth2, the industry-standard authorization framework, simplifies the process of
granting access to resources. CloudStack supports OAuth2 authentication, wherein users can log in to
CloudStack without using a username and password. Support for the Google and GitHub providers has been added.
Other OAuth2 providers can be easily integrated with CloudStack using its plugin framework.

The login page will show provider options when OAuth2 is enabled and the corresponding providers are configured.

"OAuth configuration" sub-section is present under "Configuration" where admins can register the corresponding
OAuth providers.
2023-10-31 13:25:28 +05:30
Vishesh ea90848429
Feature: Add support for DRS in a Cluster (#7723)
This pull request (PR) implements a Distributed Resource Scheduler (DRS) for a CloudStack cluster. The primary objective of this feature is to enable automatic resource optimization and workload balancing within the cluster by live migrating the VMs as per configuration.
Administrators can also execute DRS manually for a cluster, using the UI or the API.
Adds support for two algorithms - condensed & balanced. Algorithms are pluggable, allowing ACS administrators customized control over scheduling.

Implementation
There are three top level components:

    Scheduler
    A timer task which:

    Generates DRS plans for clusters
    Processes DRS plans
    Removes old DRS plan records

    DRS Execution
    We go through each VM in the cluster and use the specified algorithm to check whether DRS is required and to calculate the cost, benefit & improvement of migrating that VM to another host in the cluster. On the basis of cost, benefit & improvement, the best migration is selected for the current iteration and the VM is migrated. The maximum number of iterations (live migrations) possible on the cluster is defined by drs.iterations, a percentage (a value between 0 and 1) of the total number of workloads.

    Algorithm
    Every algorithm implements two methods:
        needsDrs - to check if drs is required for the cluster
        getMetrics - to calculate the cost, benefit & improvement of migrating a VM to another host.
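For illustration, a minimal Java sketch of this pluggable contract; needsDrs and getMetrics are named in the description above, while the parameter and return types here are assumptions, not the actual CloudStack signatures.

```java
// Hypothetical shape of a DRS algorithm plugin (e.g. "condensed", "balanced").
public interface ClusterDrsAlgorithm {
    // Whether the cluster's imbalance (per drs.imbalance.metric) warrants a DRS plan.
    boolean needsDrs(long clusterId);

    // Cost, benefit and improvement of migrating the given VM to the given host,
    // returned here as a fixed-size array {cost, benefit, improvement}.
    double[] getMetrics(long clusterId, long vmId, long destinationHostId);
}
```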

Algorithms

    Condensed - Packs all the VMs on minimum number of hosts in the cluster.
    Balanced - Distributes the VMs evenly across hosts in the cluster.
    Algorithms use drs.imbalance to decide the amount of imbalance to allow in the cluster.

APIs Added

listClusterDrsPlan

    id - ID of the DRS plan to list
    clusterid - to list plans for a cluster id

generateClusterDrsPlan

    id - cluster id
    iterations - The maximum number of iterations in a DRS job, defined as a percentage (a value between 0 and 1) of the total number of workloads. Defaults to the value of the cluster's drs.iterations setting.

executeClusterDrsPlan

    id - ID of the cluster for which DRS plan is to be executed.
    migrateto - This parameter specifies the mapping between a vm and a host to migrate that VM. Format of this parameter: migrateto[vm-index].vm=<uuid>&migrateto[vm-index].host=<uuid>.
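As a hedged example, a request to executeClusterDrsPlan using the migrateto format quoted above; the management server host and UUIDs are placeholders, and the API key/signature a real CloudStack API call requires is omitted for brevity.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ExecuteDrsPlanExample {
    public static void main(String[] args) throws Exception {
        // The [ and ] of the indexed parameter names must be percent-encoded in a URI.
        String vmKey = URLEncoder.encode("migrateto[0].vm", StandardCharsets.UTF_8);
        String hostKey = URLEncoder.encode("migrateto[0].host", StandardCharsets.UTF_8);
        String url = "https://cloudstack.example.com/client/api"
                + "?command=executeClusterDrsPlan&response=json"
                + "&id=CLUSTER-UUID"
                + "&" + vmKey + "=VM-UUID"
                + "&" + hostKey + "=HOST-UUID";
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```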

Config Keys Added

    ClusterDrsPlanExpireInterval
    Key drs.plan.expire.interval
    Scope Global
    Default Value 30 days
    Description The interval in days after which old DRS records will be cleaned up.

    ClusterDrsEnabled
    Key drs.automatic.enable
    Scope Cluster
    Default Value false
    Description Enable/disable automatic DRS on a cluster.

    ClusterDrsInterval
    Key drs.automatic.interval
    Scope Cluster
    Default Value 60 minutes
    Description The interval in minutes after which a periodic background thread will schedule DRS for a cluster.

    ClusterDrsIterations
    Key drs.max.migrations
    Scope Cluster
    Default Value 50
    Description Maximum number of live migrations in a DRS execution.

    ClusterDrsAlgorithm
    Key drs.algorithm
    Scope Cluster
    Default Value condensed
    Description DRS algorithm to execute on the cluster. This PR implements two algorithms - balanced & condensed.

    ClusterDrsLevel
    Key drs.imbalance
    Scope Cluster
    Default Value 0.5
    Description Percentage (as a value between 0.0 and 1.0) of imbalance allowed in the cluster. 1.0 means no imbalance
    is allowed and 0.0 means full imbalance is allowed.

    ClusterDrsMetric
    Key drs.imbalance.metric
    Scope Cluster
    Default Value memory
    Description The cluster imbalance metric to use when checking the drs.imbalance.threshold. Possible values are memory and cpu.
2023-10-26 11:48:18 +05:30
David Jumani 941cc83372
Feature: Safely shutdown cloudstack (#6755)
Co-authored-by: dahn <daan.hoogland@gmail.com>
2023-04-12 12:44:14 +02:00
Daan Hoogland fb4f6a334d Updating pom.xml version numbers for release 4.19.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:46:01 +01:00
Daan Hoogland 05cda2729f Updating pom.xml version numbers for release 4.18.1.0-SNAPSHOT
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-15 19:38:14 +01:00
Daan Hoogland 0574087284 Updating pom.xml version numbers for release 4.18.0.0
Signed-off-by: Daan Hoogland <daan@onecht.net>
2023-03-11 09:35:41 +01:00
Harikrishna a3feccf70c
User two factor authentication (#6924)
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2023-02-13 09:14:17 +01:00
David Jumani c774b865c9
Tungsten integration (#7065)
Co-authored-by: rtodirica <rtodirica@ena.com>
Co-authored-by: Huy Le <huylm@unitech.vn>
Co-authored-by: radu-todirica <Radu.Todirica@ness.com>
Co-authored-by: Huy Le <minh.le@ext.ewerk.com>
Co-authored-by: Simon Weller <siweller77@gmail.com>
Co-authored-by: dahn <daan@onecht.net>
2023-02-01 09:19:53 +01:00
dahn df96af3de4
delete F5 and SRX plugins (#7023) 2023-01-11 12:07:44 +01:00
fermosan 9009dd1db8
EMC Networker B&R (#6550)
Co-authored-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2023-01-09 15:46:25 +01:00
Wei Zhou 889045fba5
new plugins: Add non-strict affinity groups (#6845) 2022-12-20 15:09:52 +01:00
nvazquez 0bcc609f05
Updating pom.xml version numbers for release 4.18.0.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:25:35 -03:00
nvazquez 038a669d6b
Updating pom.xml version numbers for release 4.17.1.0-SNAPSHOT
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-06-06 12:19:44 -03:00
nvazquez c56220fcf2
Updating pom.xml version numbers for release 4.17.0.0
Signed-off-by: nvazquez <nicovazquez90@gmail.com>
2022-05-31 14:33:47 -03:00
slavkap 4004dfcfd8
StorPool storage plugin (#6007)
* StorPool storage plugin

Adds volume storage plugin for StorPool SDS

* Added support for alternative endpoint

Added option to switch to alternative endpoint for SP primary storage

* renamed all classes from Storpool to StorPool

* Address review

* removed unnecessary else

* Removed check about the storage provider

We don't need this check; we can tell whether the snapshot is on StorPool by
its name from the path

* Check that the current plugin supports all functionality before upgrading CloudStack

* Smoke tests for StorPool plug-in

* Fixed conflicts

* Fixed conflicts and added missed Apache license header

* Removed whitespaces in smoke tests

* Added StorPool plugin jar for Debian

the StorPool jar will be included in the cloudstack-agent package for
Debian/Ubuntu
2022-04-14 11:12:01 -03:00
dahn b014617416
no axis (#5993)
* no axis

* remove f5 from ui

Co-authored-by: Daan Hoogland <dahn@onecht.net>
2022-03-07 22:46:29 -03:00
nicolas 3f79436840
Updating pom.xml version numbers for release 4.17.0.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:55:52 -03:00
nicolas 93c3c3b9ac
Updating pom.xml version numbers for release 4.16.1.0-SNAPSHOT
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-09 22:50:22 -03:00
nicolas 44c08b5acc
Updating pom.xml version numbers for release 4.16.0.0
Signed-off-by: nicolas <nicovazquez90@gmail.com>
2021-11-04 14:14:57 -03:00
Peinthor Rene 66c39c1589
storage: Linstor volume plugin (#4994)
This adds a volume (primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but currently don't work for RAW volume types in CloudStack.

* plugin-storage-volume-linstor: notify libvirt guests about the resize
2021-09-16 10:50:58 +05:30
sureshanaparti eba186aa40
storage: New Dell EMC PowerFlex Plugin (formerly ScaleIO, VxFlexOS) (#4304)
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ

Documentation PR: apache/cloudstack-documentation#169

This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack

Other improvements addressed in addition to PowerFlex/ScaleIO support:

- Added support for config drives in host cache for KVM
	=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
	=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
	=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
	=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path and create config drives on the "/config" directory on the host cache path
	=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)

- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates

- Updated full deployment destination for preparing the network(s) on VM start

- Propagate the direct download certificates uploaded to the newly added KVM hosts

- Discover the template size for direct download templates using any available host from the zones specified on template registration
	=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones

- Release the VM resources when VM is sync-ed to Stopped state on PowerReportMissing (after graceful period)

- Retry VM deployment/start when the host cannot grant access to volume/template

- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
	=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist and cannot be deleted from the datastore

- Check whether the router filesystem is writable, before performing health checks
	=> Introduce a new test "filesystem.writable.test" to check whether the filesystem is writable
	=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
	=> Added a new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable

- Fixed NPE issue, template is null for DATA disks. Copy template to target storage for ROOT disk (with template id), skip DATA disk(s)

* Addressed some issues for a few operations on the PowerFlex storage pool.

- Updated migration volume operation to sync the status and wait for migration to complete.

- Updated VM Snapshot naming, for uniqueness in ScaleIO volume name when more than one volume exists in the VM.

- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).

- Updated resize volume error message string.

- Blocked the below operations on PowerFlex storage pool:
  -> Extract Volume
  -> Create Snapshot for VMSnapshot

* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.

- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
  Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
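A rough sketch (not the plugin's actual code) of the renew-on-expiry behavior described above, one client per storage pool; the type and method names here are assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongFunction;

// Hypothetical per-pool cache: reuse a gateway client while its session token
// is valid, and transparently re-login when the token has expired.
public class GatewayClientCache {
    public interface GatewayClient {
        boolean isSessionValid(); // false once the 8-hour token (or 10-minute idle window) lapses
    }

    private final Map<Long, GatewayClient> clientsByPool = new ConcurrentHashMap<>();
    private final LongFunction<GatewayClient> login; // authenticates and obtains a fresh token

    public GatewayClientCache(LongFunction<GatewayClient> login) {
        this.login = login;
    }

    public GatewayClient forPool(long storagePoolId) {
        return clientsByPool.compute(storagePoolId,
                (id, existing) -> (existing != null && existing.isSessionValid())
                        ? existing
                        : login.apply(id)); // renew on expiry
    }
}
```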

Other fixes included:

- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)

- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.

- Perform basic tests (for connectivity and file system) on router before updating the health check config data
	=> Validate the basic tests (connectivity and file system check) on router
	=> Cleanup the health check results when router is destroyed

* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0

* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool

and Minor improvements in PowerFlex/ScaleIO storage plugin code

* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.

- findStoragePoolsForMigration API returns PowerFlex pool(s) of different instance as suitable pool(s), for volume(s) on PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from PowerFlex pool to Non-PowerFlex pool, and vice versa are not supported.

* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py

* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)

* Added new response parameter “supportsStorageSnapshot” (true/false) to volume response, and Updated UI to hide the async backup option while taking snapshot for volume(s) with storage snapshot support.

* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration

* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure

* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
2021-02-24 14:58:33 +05:30
Daan Hoogland e26202f23e Updating pom.xml version numbers for release 4.16.0.0-SNAPSHOT
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2021-01-04 11:32:10 +00:00
Daan Hoogland 01b3e361c7 Updating pom.xml version numbers for release 4.15.0.0
Signed-off-by: Daan Hoogland <dahn@onecht.net>
2020-12-23 16:32:25 +00:00
Pearl Dsilva c578004fe5
projects: Role based users in Projects (#4128)
Enabling Role Based users in projects
Primate PR related to the FR: apache/cloudstack-primate#382
Doc PR: https://github.com/apache/cloudstack-documentation/pull/145

Co-authored-by: Pearl Dsilva <pearl.dsilva@shapeblue.com>
Co-authored-by: Suresh Kumar Anaparti <suresh.anaparti@shapeblue.com>
2020-08-13 15:45:39 +05:30
Gabriel Beims Bräscher ba6e2ac843
plugins: Redfish Client & Redfish OOBM Driver (#4175)
This PR adds support for the OOBM Redfish protocol, implementing a Java client to send HTTP requests to Redfish-supported systems.

Implementation overview:
- Redfish Java client: a Java client for Redfish that makes Redfish actions available to the HA workflow via an OOB driver.
- OOB Redfish driver: a new out-of-band driver was created for Redfish, allowing the Redfish client to be integrated with the CloudStack out-of-band management implementation.
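To give a flavor of the requests such a client issues, here is a minimal example of a Redfish power action; the ComputerSystem.Reset action and ResetType payload follow the DMTF Redfish specification, while the BMC host and system path are placeholders and authentication is omitted.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RedfishResetExample {
    public static void main(String[] args) throws Exception {
        // An OOBM power-off maps to a Redfish reset type such as ForceOff.
        HttpRequest reset = HttpRequest.newBuilder()
                .uri(URI.create("https://bmc.example.com/redfish/v1/Systems/1/Actions/ComputerSystem.Reset"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"ResetType\": \"ForceOff\"}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(reset, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```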

Fixes: #3624
2020-07-30 10:51:16 +05:30
andrijapanicsb 5f926c3353 Updating pom.xml version numbers for release 4.15.0.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 10:18:39 +01:00
andrijapanicsb 05e9b11694 Updating pom.xml version numbers for release 4.14.1.0-SNAPSHOT
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-23 09:59:32 +01:00
andrijapanicsb 6f96b3b2b3 Updating pom.xml version numbers for release 4.14.0.0
Signed-off-by: andrijapanicsb <andrija.panic@shapeblue.com>
2020-05-11 15:03:14 +01:00
Abhishek Kumar 8cc70c7d87
CloudStack Kubernetes Service (#3680) 2020-03-06 08:51:23 +01:00
Rohit Yadav 318924d801
CloudStack Backup & Recovery Framework (#3553) 2020-03-03 13:27:58 +01:00
Paul Angus 50fc045f36 Updating pom.xml version numbers for release 4.14.0.0-SNAPSHOT
Signed-off-by: Paul Angus <paul.angus@shapeblue.com>
2019-09-07 09:57:46 +01:00
manojkverma e3d70b7dcc storage: Datera storage plugin (#3470)
Features:

Zone-wide and cluster-wide primary storage support
VM templates are cached automatically on Datera; subsequent VMs can be created instantaneously by fast-cloning the root volume.
Rapid storage-native snapshot
Multiple managed primary storages can be created with a single Datera cluster to provide better management of
Total provisioned capacity
Default storage QoS values
Replica size ( 1 to 5 )
IP pool assignment for iSCSI target
Volume Placement ( hybrid, single_flash, all_flash )
Volume snapshot to VM template
Volume to VM template
Volume size increase using service policy
Volume QoS change using service policy
Enabled KVM support
New Datera app_instance name format to include ACS volume name
VM live migration
2019-07-25 14:13:04 +05:30
Frank Maximus e11f7ee1ba RIP Nuage Cloudstack Plugin (#3146)
may it rest in peaces
2019-05-14 10:58:24 +02:00
GabrielBrascher 8d3feb100a Updating pom.xml version numbers for release 4.13.0.0-SNAPSHOT
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-20 18:47:35 -03:00
GabrielBrascher a137398bf1 Updating pom.xml version numbers for release 4.12.0.0
Signed-off-by: GabrielBrascher <gabriel@pcextreme.nl>
2019-03-14 10:11:46 -03:00
Khosrow Moossavi 7c6630bca7 Cleanup POMs (#2613)
* Cleanup and code-format POM files

* Remove obsolete mycila license-maven-plugin

* Remove obsolete console-proxy/plugin project

* Move console-proxy-rdbconsole under console-proxy parent

* Use correct parent path for rdpconsole

* Order items alphabetically in setnextversion.sh

* Unify license header in POMs

* Alphabetic order of modules definition

* Extract all defined versions into parent pom

* Remove obsolete files: version-info.in, configure-info.in

* Remove redundant defaultGoal

* Remove useless checkstyle plugin from checkstyle project

* Order items alphabetically in pom.xml

* Add additional SPACEs to fix debian build

* Don't execute checkstyle on parent projects

* Use UTF-8 encoding in building checkstyle project

* Extract plugin versions into properties

* Execute PMD plugin on all the projects with -Penablefindbugs

* Upgrade maven plugins to latest version

* Make sure to always look for apache parent pom from repository

* Fix incorrect version grep in debian packaging

* Fix rebase conflicts

* Fix rebase conflicts

* Remove PMD for now to be fixed on another PR
2018-07-25 14:39:37 -03:00
Mike Tutkowski c7d6376964 Removing an old, unused NetApp plug-in 2018-06-08 12:55:39 -06:00
Rohit Yadav 93e374599a Merge branch '4.11'
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-05-21 13:23:13 +05:30
Nicolas Vazquez 06f7e495dc Host Affinity plugin (#2630)
This implements a new host-affinity plugin.
2018-05-21 12:49:08 +05:30
Rohit Yadav 0ece15f86e Updating pom.xml version numbers for release 4.11.1.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-26 16:57:48 +01:00
Rohit Yadav 6ffbce6159 Updating pom.xml version numbers for release 4.11.0.1-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-02-05 11:13:50 +01:00
Rohit Yadav 5dada1f7ed Updating pom.xml version numbers for release 4.11.0.0
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-26 13:13:37 +01:00
Rohit Yadav 072dbc0720 Updating pom.xml version numbers for master to 4.12.0.0-SNAPSHOT
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2018-01-15 17:43:45 +05:30
Mike Tutkowski a30a31c9b7 CLOUDSTACK-9620: Enhancements for managed storage (#2298)
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)

Added support for root disks on managed storage with KVM

Added support for volume snapshots with managed storage on KVM

Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage

Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.

Included support for Reinstall VM on KVM with managed storage

Enabled offline migration on KVM from non-managed storage to managed storage and vice versa

Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)

Added support to download (extract) a managed-storage volume to a QCOW2 file

When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.

Included support for the KVM auto-convergence feature

The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)

On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.

Added a new Global Setting: kvm.storage.live.migration.wait

For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)

Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.

Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot

Added an SIOC API plug-in to support VMware SIOC

When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.

Added the ability to revert a volume to a snapshot

Enabled cluster-scoped managed storage

Added support for VMware dynamic discovery
2018-01-15 00:05:52 +05:30
Rohit Yadav b6dc40faef CLOUDSTACK-10103: Cloudian Connector for CloudStack (#2284)
Several organizations use Cloudian as an S3 provider; this implements the
Cloudian Management Console connector for CloudStack, which can do the
following:

- Provide ease in connector configuration using CloudStack global
  settings
- Perform SSO from CloudStack UI into Cloudian Management Console (CMC)
  when the connector is enabled
- Automatic provisioning and de-provisioning of CloudStack accounts and
  domains as Cloudian users and groups respectively
- During CloudStack UI logout, logout user from CMC
- CloudStack account will be mapped to Cloudian Users, and CloudStack
  domain will be mapped to Cloudian Groups.
- The CloudStack admin account is mapped to Cloudian admin (user name
  configurable).
- The user/group provisioning will be from CloudStack to Cloudian only,
  i.e. user/group addition/removal/update/deactivation in the Cloudian
  portal (CMC) won't propagate the changes to CloudStack.

FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudian+Connector+for+CloudStack

New APIs:
- `cloudianIsEnabled`: API to check whether Cloudian Connector is enabled.
- `cloudianSsoLogin`: Performs SSO for the logged-in, requesting user
                      and returns the URL that can be used to perform
                      SSO and log into CMC.
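A hypothetical invocation of the new cloudianSsoLogin API; the management server host is a placeholder and the session/API-key authentication a real call needs is omitted for brevity.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CloudianSsoLoginExample {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(
                "https://cloudstack.example.com/client/api"
                        + "?command=cloudianSsoLogin&response=json")).build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // the response carries the CMC SSO URL
    }
}
```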

New Global Settings:
- cloudian.connector.enabled  (false)
If set to true, this enables the Cloudian Connector for CloudStack.
Restarting management server(s) is required.
- cloudian.admin.host (s3-admin.cloudian.com)
The host where Cloudian Admin services are accessible.
- cloudian.admin.port (19443)
The admin service port.
- cloudian.admin.protocol (https)
The admin service API scheme/protocol.
- cloudian.validate.ssl (true)
 When set to true, this validates the certificate of the https-enabled
admin API service.
- cloudian.admin.user (sysadmin)
The admin user's name when making (admin) API calls.
- cloudian.admin.password (public)
The admin password used when making (admin) API calls.
- cloudian.api.request.timeout (5)
The API request timeout in seconds used by the internal HTTP/s client.
- cloudian.cmc.admin.user (admin)
The CMC admin user's name.
- cloudian.cmc.host (cmc.cloudian.com)
The CMC host.
- cloudian.cmc.port (8443)
The CMC service port.
- cloudian.cmc.protocol (https)
 The CMC service scheme/protocol.
- cloudian.sso.key (ss0sh5r3dk3y)
The Single-Sign-On shared key.

Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
2017-10-25 10:49:45 +05:30