* StorPool storage plugin
Adds volume storage plugin for StorPool SDS
* Added support for alternative endpoint
Added option to switch to alternative endpoint for SP primary storage
* Renamed all classes from Storpool to StorPool
* Address review
* Removed unnecessary else
* Removed check about the storage provider
We don't need this check; we can tell whether the snapshot is on StorPool by
its name from the path
* Check that the current plugin supports all functionality before upgrading CloudStack
* Smoke tests for StorPool plug-in
* Fixed conflicts
* Fixed conflicts and added missed Apache license header
* Removed whitespaces in smoke tests
* Added StorPool plugin jar for Debian
the StorPool jar will be included in the cloudstack-agent package for
Debian/Ubuntu
This adds a volume (primary) storage plugin for the Linstor SDS.
Currently it can create/delete/migrate volumes; snapshots should be possible,
but they currently don't work for RAW volume types in CloudStack.
* plugin-storage-volume-linstor: notify libvirt guests about the resize
Added support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack (for KVM hypervisor) and enabled VM/Volume operations on that pool (using pool tag).
Please find more details in the FS here:
https://cwiki.apache.org/confluence/x/cDl4CQ
Documentation PR: apache/cloudstack-documentation#169
This enables support for PowerFlex/ScaleIO (v3.5 onwards) storage pool as a primary storage in CloudStack
Other improvements addressed in addition to PowerFlex/ScaleIO support:
- Added support for config drives in host cache for KVM
=> Changed configuration "vm.configdrive.primarypool.enabled" scope from Global to Zone level
=> Introduced new zone level configuration "vm.configdrive.force.host.cache.use" (default: false) to force host cache for config drives
=> Introduced new zone level configuration "vm.configdrive.use.host.cache.on.unsupported.pool" (default: true) to use host cache for config drives when storage pool doesn't support config drive
=> Added new parameter "host.cache.location" (default: /var/cache/cloud) in KVM agent.properties for specifying the host cache path; config drives are created in the "/config" directory under the host cache path
=> Maintain the config drive location and use it when required on any config drive operation (migrate, delete)
- Detect virtual size from the template URL while registering direct download qcow2 (of KVM hypervisor) templates
- Updated full deployment destination for preparing the network(s) on VM start
- Propagate the direct download certificates uploaded to the newly added KVM hosts
- Discover the template size for direct download templates using any available host from the zones specified on template registration
=> When zones are not specified while registering template, template size discovery is performed using any available host, which is picked up randomly from one of the available zones
- Release the VM resources when the VM is sync-ed to Stopped state on PowerReportMissing (after the grace period)
- Retry VM deployment/start when the host cannot grant access to volume/template
- Mark never-used or downloaded templates as Destroyed on deletion, without sending any DeleteCommand
=> Do not trigger any DeleteCommand for never-used or downloaded templates as these don't exist on the datastore and cannot be deleted from it
- Check whether the router filesystem is writable before performing health checks
=> Introduced a new test "filesystem.writable.test" to check whether the filesystem is writable
=> The router health checks keep the config info at "/var/cache/cloud" and update the monitor results at "/root"; these are different partitions, so test at both locations.
=> Added new script "filesystem_writable_check.py" at /opt/cloud/bin/ to check whether the filesystem is writable
- Fixed an NPE where the template is null for DATA disks: copy the template to target storage for the ROOT disk (with template id), skip DATA disk(s)
* Addressed some issues for a few operations on PowerFlex storage pool.
- Updated the volume migration operation to sync the status and wait for migration to complete.
- Updated VM snapshot naming so that ScaleIO volume names remain unique when more than one volume exists in the VM.
- Added sync lock while spooling managed storage template before volume creation from the template (non-direct download).
- Updated resize volume error message string.
- Blocked the below operations on PowerFlex storage pool:
-> Extract Volume
-> Create Snapshot for VMSnapshot
* Added the PowerFlex/ScaleIO client connection pool to manage the ScaleIO gateway clients, which uses a single gateway client per PowerFlex/ScaleIO storage pool and renews it when the session token expires.
- The token is valid for 8 hours from the time it was created, unless there has been no activity for 10 minutes.
Reference: https://cpsdocs.dellemc.com/bundle/PF_REST_API_RG/page/GUID-92430F19-9F44-42B6-B898-87D5307AE59B.html
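For illustration, the renew-on-expiry behaviour boils down to something like the following (a rough sketch only; the class and method names here are hypothetical and not the plugin's actual API):

import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of a per-pool gateway client that re-authenticates when the
// cached session token is older than 8 hours or has been idle for 10 minutes.
class ScaleIOGatewayClientSketch {
    private static final Duration TOKEN_LIFETIME = Duration.ofHours(8);
    private static final Duration IDLE_LIMIT = Duration.ofMinutes(10);

    private String token;
    private Instant issuedAt = Instant.EPOCH;
    private Instant lastUsed = Instant.EPOCH;

    synchronized String getToken() {
        Instant now = Instant.now();
        boolean expired = issuedAt.plus(TOKEN_LIFETIME).isBefore(now);
        boolean idle = lastUsed.plus(IDLE_LIMIT).isBefore(now);
        if (token == null || expired || idle) {
            token = login();        // re-authenticate against the gateway
            issuedAt = now;
        }
        lastUsed = now;
        return token;
    }

    private String login() {
        // POST credentials to the PowerFlex/ScaleIO gateway and return the session token
        return "session-token";     // placeholder
    }
}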
Other fixes included:
- Fail the VM deployment when the host specified in the deployVirtualMachine cmd is not in the right state (i.e. either Resource State is not Enabled or Status is not Up)
- Use the physical file size of the template to check the free space availability on the host, while downloading the direct download templates.
- Perform basic tests (for connectivity and file system) on router before updating the health check config data
=> Validate the basic tests (connectivity and file system check) on router
=> Cleanup the health check results when router is destroyed
* Updated PowerFlex/ScaleIO storage plugin version to 4.16.0.0
* UI Changes to support storage plugin for PowerFlex/ScaleIO storage pool.
- PowerFlex pool URL generated from the UI inputs (Gateway, Username, Password, Storage Pool) when adding "PowerFlex" Primary Storage
- Updated protocol to "custom" for PowerFlex provider
- Allow VM Snapshot for stopped VM on KVM hypervisor and PowerFlex/ScaleIO storage pool
- Minor improvements in PowerFlex/ScaleIO storage plugin code
* Added support for PowerFlex/ScaleIO volume migration across different PowerFlex storage instances.
- findStoragePoolsForMigration API returns PowerFlex pool(s) of a different instance as suitable pool(s), for volume(s) on a PowerFlex storage pool.
- Volume(s) with snapshots are not allowed to migrate to different PowerFlex instance.
- Volume(s) of running VM are not allowed to migrate to other PowerFlex storage pools.
- Volume migration from a PowerFlex pool to a non-PowerFlex pool, and vice versa, is not supported.
* Fixed change service offering smoke tests in test_service_offerings.py, test_vm_snapshots.py
* Added the PowerFlex/ScaleIO volume/snapshot name to the paths of respective CloudStack resources (Templates, Volumes, Snapshots and VM Snapshots)
* Added new response parameter "supportsStorageSnapshot" (true/false) to the volume response, and updated the UI to hide the async backup option while taking a snapshot of volume(s) with storage snapshot support.
* Fix to remove the duplicate zone wide pools listed while finding storage pools for migration
* Updated PowerFlex/ScaleIO volume migration checks and rollback migration on failure
* Fixed the PowerFlex/ScaleIO volume name inconsistency issue in the volume path after migration, due to rename failure
This PR adds support for the OOBM Redfish protocol, implementing a Java client to send HTTP requests to Redfish-supported systems.
Implementation overview:
- Redfish Java client: a Java Client for Redfish that makes Redfish actions available to the HA workflow via an OOB driver.
- OOB Redfish driver: a new out-of-band driver was created for Redfish, allowing the Redfish client to be integrated with the CloudStack out-of-band management implementation.
Fixes: #3624
Features:
Zone-wide and cluster-wide primary storage support
Automatic VM template caching on Datera; subsequent VMs can be created instantaneously by fast-cloning the root volume.
Rapid storage-native snapshot
Multiple managed primary storages can be created with a single Datera cluster to provide better management of:
  - Total provisioned capacity
  - Default storage QoS values
  - Replica size (1 to 5)
  - IP pool assignment for iSCSI target
  - Volume placement (hybrid, single_flash, all_flash)
Volume snapshot to VM template
Volume to VM template
Volume size increase using service policy
Volume QoS change using service policy
Enabled KVM support
New Datera app_instance name format to include ACS volume name
VM live migration
* Clean up and code-format POM files
* Remove obsolete mycila license-maven-plugin
* Remove obsolete console-proxy/plugin project
* Move console-proxy-rdbconsole under console-proxy parent
* Use correct parent path for rdpconsole
* Alphabetically order items in setnextversion.sh
* Unify license header in POMs
* Alphabetic order of modules definition
* Extract all defined versions into parent pom
* Remove obsolete files: version-info.in, configure-info.in
* Remove redundant defaultGoal
* Remove useless checkstyle plugin from checkstyle project
* Alphabetically order items in pom.xml
* Add additional SPACEs to fix Debian build
* Don't execute checkstyle on parent projects
* Use UTF-8 encoding in building checkstyle project
* Extract plugin versions into properties
* Execute PMD plugin on all the projects with -Penablefindbugs
* Upgrade maven plugins to latest version
* Make sure to always look for apache parent pom from repository
* Fix incorrect version grep in debian packaging
* Fix rebase conflicts
* Fix rebase conflicts
* Remove PMD for now to be fixed on another PR
Allowed zone-wide primary storage based on a custom plug-in to be added via the GUI in a KVM-only environment (previously this only worked for XenServer and VMware)
Added support for root disks on managed storage with KVM
Added support for volume snapshots with managed storage on KVM
Enable creating a template directly from a volume (i.e. without having to go through a volume snapshot) on KVM with managed storage
Only allow the resizing of a volume for managed storage on KVM if the volume in question is either not attached to a VM or is attached to a VM in the Stopped state.
Included support for Reinstall VM on KVM with managed storage
Enabled offline migration on KVM from non-managed storage to managed storage and vice versa
Included support for online storage migration on KVM with managed storage (NFS and Ceph to managed storage)
Added support to download (extract) a managed-storage volume to a QCOW2 file
When uploading a file from outside of CloudStack to CloudStack, set the min and max IOPS, if applicable.
Included support for the KVM auto-convergence feature
The compression flag was actually added in version 1.0.3 (1000003) as opposed to version 1.3.0 (1003000) (changed this to reflect the correct version)
On KVM when using iSCSI-based managed storage, if the user shuts a VM down from the guest OS (as opposed to doing so from CloudStack), we need to pass to the KVM agent a list of applicable iSCSI volumes that need to be disconnected.
Added a new Global Setting: kvm.storage.live.migration.wait
For XenServer, added a check to enforce that only volumes from zone-wide managed storage can be storage motioned from a host in one cluster to a host in another cluster (cannot do so at the time being with volumes from cluster-scoped managed storage)
Don’t allow Storage XenMotion on a VM that has any managed-storage volume with one or more snapshots.
Enabled for managed storage with VMware: Template caching, create snapshot, delete snapshot, create volume from snapshot, and create template from snapshot
Added an SIOC API plug-in to support VMware SIOC
When starting a VM that uses managed storage in a cluster other than the one it last was running in, we need to remove the reference to the iSCSI volume from the original cluster.
Added the ability to revert a volume to a snapshot
Enabled cluster-scoped managed storage
Added support for VMware dynamic discovery
Several organizations use Cloudian as an S3 provider; this implements the
Cloudian Management Console connector for CloudStack that can do the
following:
- Provide ease in connector configuration using CloudStack global
settings
- Perform SSO from CloudStack UI into Cloudian Management Console (CMC)
when the connector is enabled
- Automatic provisioning and de-provisioning of CloudStack accounts and
domains as Cloudian users and groups respectively
- During CloudStack UI logout, logout user from CMC
- CloudStack account will be mapped to Cloudian Users, and CloudStack
domain will be mapped to Cloudian Groups.
- The CloudStack admin account is mapped to Cloudian admin (user name
configurable).
- The user/group provisioning will be from CloudStack to Cloudian only,
i.e. user/group addition/removal/update/deactivation in the Cloudian
portal (CMC) won't propagate changes to CloudStack.
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Cloudian+Connector+for+CloudStack
New APIs:
- `cloudianIsEnabled`: API to check whether Cloudian Connector is enabled.
- `cloudianSsoLogin`: Performs SSO for the logged-in, requesting user
and returns the URL that can be used to perform
SSO and log into CMC.
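For illustration, a caller could invoke the SSO API like any other CloudStack API command and then redirect the browser to the returned URL (a minimal sketch; the management server host is a placeholder, and session/API-key signing of the request is assumed to be handled elsewhere):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: ask CloudStack for a CMC SSO URL for the logged-in user.
public class CloudianSsoExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical management server endpoint; a real call must carry a valid
        // session key or API key/signature pair, omitted here for brevity.
        String url = "https://mgmt.example.com:8080/client/api"
                + "?command=cloudianSsoLogin&response=json";
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // The JSON response contains the URL to redirect the user to for CMC login.
        System.out.println(resp.body());
    }
}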
New Global Settings:
- cloudian.connector.enabled (false)
If set to true, this enables the Cloudian Connector for CloudStack.
Restarting management server(s) is required.
- cloudian.admin.host (s3-admin.cloudian.com)
The host where Cloudian Admin services are accessible.
- cloudian.admin.port (19443)
The admin service port.
- cloudian.admin.protocol (https)
The admin service API scheme/protocol.
- cloudian.validate.ssl (true)
When set to true, this validates the certificate of the https-enabled
admin API service.
- cloudian.admin.user (sysadmin)
The admin user's name when making (admin) API calls.
- cloudian.admin.password (public)
The admin password used when making (admin) API calls.
- cloudian.api.request.timeout (5)
The API request timeout in seconds used by the internal HTTP/s client.
- cloudian.cmc.admin.user (admin)
The CMC admin user's name.
- cloudian.cmc.host (cmc.cloudian.com)
The CMC host.
- cloudian.cmc.port (8443)
The CMC service port.
- cloudian.cmc.protocol (https)
The CMC service scheme/protocol.
- cloudian.sso.key (ss0sh5r3dk3y)
The Single-Sign-On shared key.
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This implements a CloudStack Prometheus exporter as a plugin that serves
metrics on an HTTP port.
New global settings:
1. prometheus.exporter.enable - (default: false), Enable the prometheus
exporter plugin, management server restart needed.
2. prometheus.exporter.port - (default: 9595), The prometheus exporter
server port.
3. prometheus.exporter.allowed.ips - (default: 127.0.0.1), List of comma
separated prometheus server ips (with no spaces) that should be allowed to
access the URLs.
The following metrics are provided per pop (zone) by the exporter:
• Per host:
o CPU cores: used, total
o CPU usage: used, total (in MHz)
o Memory usage: used, total (in MiBs)
o Total VMs running on the host
• CPU cores: allocated (per zone)
• CPU usage: allocated (per zone, in MHz)
• Memory usage: allocated (per zone, in MiBs)
• Hosts: online, offline, total
• VMs: in all states -- starting, running, stopping, stopped, destroyed,
expunging, migrating, error, unknown
• Volumes: ready, destroyed, total
• Primary Storage Pool: (Disk size) used, allocated, unallocated, total (in GiBs)
• Secondary Storage Pool: (Disk size) used, allocated, unallocated, total (in GiBs)
• Private IPs: allocated, total
• Public IPs: allocated, total
• Shared Network IPs: allocated, total
• VLANs: allocated, total
Additional metrics for the environment:
• Summed domain (level=1) limit for CPU cores
• Summed domain (level=1) limit for memory/ram
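To illustrate the exposition format such an exporter serves, here is a standalone sketch using the JDK's built-in HTTP server on the default port 9595 (not the plugin's actual implementation; the metric names and values are made up):

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch: serve a couple of metrics in Prometheus text format on /metrics.
public class MiniExporter {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9595), 0);
        server.createContext("/metrics", exchange -> {
            String body = "cloudstack_hosts_online{zone=\"zone1\"} 5\n"
                        + "cloudstack_memory_allocated_mibs{zone=\"zone1\"} 81920\n";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}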
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This introduces a new certificate authority framework that allows
pluggable CA provider implementations to handle certificate operations
around issuance, revocation and propagation. The framework injects
itself into `NioServer` to handle agent connections securely. The
framework assumes in `NioClient` that a keystore, if available
with the known name `cloud.jks`, will be used for SSL negotiations and
handshake.
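For example, such a keystore can be loaded into an SSL context with standard JSSE calls roughly as follows (a sketch only; the path and password handling are placeholders, not what the framework actually does):

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Sketch: build an SSLContext backed by the agent's cloud.jks keystore.
public class KeystoreSketch {
    public static SSLContext load(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, password);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ks);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}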
This includes a default 'root' CA provider plugin which creates its own
self-signed root certificate authority on first run and uses it for
issuance and provisioning of certificates to CloudStack agents such as
the KVM, CPVM and SSVM agents and also for the management server for
peer clustering.
Additional changes and notes:
- A comma-separated list of management server IPs can be set in the 'host'
global setting. Newly provisioned agents (KVM/CPVM/SSVM etc.) will get a
randomized comma-separated list to which they will attempt connection
or reconnection in the provided order. This removes the need for a TCP LB
on port 8250 (default) of the management server(s).
- All fresh deployments will enforce two-way SSL authentication where
connecting agents will be required to present certificates issued
by the 'root' CA plugin.
- Existing environments, on upgrade, will continue to use one-way SSL
authentication and connecting agents will not be required to present
certificates.
- A script `keystore-setup` is responsible for initial keystore setup
and CSR generation on the agent/hosts.
- A script `keystore-cert-import` is responsible for importing the provided
certificate payload into the Java keystore file.
- Agent security (keystore, certificates etc.) is set up initially using
SSH, and later provisioning is handled via an existing agent connection
using command-answers. The supported clients and agents are limited to
CPVM, SSVM, and KVM agents, and clustered management server (peering).
- Certificate revocation does not revoke an existing agent-mgmt server
connection; however, a revoked certificate presented during the SSL
handshake is rejected.
- Older `cloudstackmanagement.keystore` is deprecated and will no longer
be used by mgmt server(s) for SSL negotiations and handshake. New
keystores will be named `cloud.jks`; any additional SSL certificates
should not be imported into it for use with Tomcat etc. The `cloud.jks`
keystore is strictly used for agent-server communications.
- Management server keystores are validated and renewed on startup only;
their validity is the same as that of the CA certificates.
New APIs:
- listCaProviders: lists all available CA provider plugins
- listCaCertificate: lists the CA certificate(s)
- issueCertificate: issues X509 client certificate with/without a CSR
- provisionCertificate: provisions certificate to a host
- revokeCertificate: revokes a client certificate using its serial
Global settings for the CA framework:
- ca.framework.provider.plugin: The configured CA provider plugin
- ca.framework.cert.keysize: The key size for certificate generation
- ca.framework.cert.signature.algorithm: The certificate signature algorithm
- ca.framework.cert.validity.period: Certificate validity in days
- ca.framework.cert.automatic.renewal: Certificate auto-renewal setting
- ca.framework.background.task.delay: CA background task delay/interval
- ca.framework.cert.expiry.alert.period: Days to check and alert expiring certificates
Global settings for the default 'root' CA provider:
- ca.plugin.root.private.key: (hidden/encrypted) CA private key
- ca.plugin.root.public.key: (hidden/encrypted) CA public key
- ca.plugin.root.ca.certificate: (hidden/encrypted) CA certificate
- ca.plugin.root.issuer.dn: The CA issuer distinguished name
- ca.plugin.root.auth.strictness: Are clients required to present certificates
- ca.plugin.root.allow.expired.cert: Are clients with expired certificates allowed
UI changes:
- Button to download/save the CA certificates.
Misc changes:
- Upgrades the Bouncy Castle version and uses newer classes
- Refactors SAMLUtil to use new CertUtils
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This improves the metrics view feature by improving the rendering performance
of metrics view tables, by reimplementing the logic at the backend and data
served via APIs. In large environments, the older implementation would
make several API calls, increasing both network and database load.
List of APIs introduced for improving the performance:
listClustersMetrics
listHostsMetrics
listInfrastructure
listStoragePoolsMetrics
listVMsMetrics
listVolumesMetrics
listZonesMetrics
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
Support access to a host’s out-of-band management interface (e.g. IPMI, iLO,
DRAC, etc.) to manage host power operations (on/off etc.) and query the current
power state in CloudStack.
Given the wide range of out-of-band management interfaces such as iLO and iDRAC,
the service implementation allows for development of separate drivers as plugins.
This feature comes with an ipmitool-based driver that uses
ipmitool (http://linux.die.net/man/1/ipmitool) to communicate with any
out-of-band management interface that supports IPMI 2.0.
This feature allows the following common use-cases:
- Restarting stalled/failed hosts
- Powering off under-utilised hosts
- Powering on hosts for provisioning or to increase capacity
- Allowing system administrators to see the current power state of the host
For testing this feature `ipmisim` can be used:
https://pypi.python.org/pypi/ipmisim
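For instance, a driver built around ipmitool ultimately shells out to commands along these lines (a sketch only; the host and credentials are placeholders, and the real driver's option handling differs):

import java.io.IOException;

// Sketch: query the chassis power state of a host over IPMI 2.0 (lanplus).
public class IpmiPowerStatus {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "ipmitool", "-I", "lanplus",
                "-H", "10.0.0.10",        // out-of-band management address (placeholder)
                "-U", "admin",            // placeholder credentials
                "-P", "password",
                "chassis", "power", "status")
            .inheritIO()
            .start();
        System.exit(p.waitFor());
    }
}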
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Out-of-band+Management+for+CloudStack
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This feature allows root administrators to define new roles and associate API
permissions to them.
A limited form of role-based access control for the CloudStack management server
API is provided through a properties file, commands.properties, embedded in the
WAR distribution. Therefore, customizing API permissions requires unpacking the
distribution and modifying this file consistently on all servers. The old system
also does not permit the specification of additional roles.
FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Dynamic+Role+Based+API+Access+Checker+for+CloudStack
DB-Backed Dynamic Role Based API Access Checker for CloudStack brings the following
changes, features and use-cases:
- Moves the API access definitions from commands.properties to the mgmt server DB
- Allows defining custom roles (such as a read-only ROOT admin) beyond the
current set of four (4) roles
- All roles resolve to one of the four known role types (Admin, Resource
Admin, Domain Admin and User); this association is maintained by requiring
all newly defined roles to specify a role type (see the sketch after this list).
- Allows changes to roles and API permissions per role at runtime, including
addition or removal of roles and/or modification of permissions, without
needing to restart the management server(s)
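The resolution of a custom role to a base role type can be pictured roughly like this (an illustrative sketch only; these are not the actual CloudStack classes):

import java.util.Set;

// Illustrative only: every custom role carries one of the four base role types,
// and API access is checked against the role's own permission set at runtime.
public class RoleSketch {
    enum RoleType { ADMIN, RESOURCE_ADMIN, DOMAIN_ADMIN, USER }

    record Role(String name, RoleType type, Set<String> allowedApis) {
        boolean isAllowed(String apiName) {
            return allowedApis.contains(apiName);
        }
    }

    public static void main(String[] args) {
        Role readOnlyAdmin = new Role("Read-Only Admin", RoleType.ADMIN,
                Set.of("listVirtualMachines", "listHosts", "listZones"));
        System.out.println(readOnlyAdmin.isAllowed("deployVirtualMachine")); // false
    }
}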
Upgrade/installation notes:
- The feature will be enabled by default for new installations; existing
deployments will continue to use the older static role-based API access
checker, with an option to enable this feature
- During fresh installation or upgrade, the upgrade paths will add four default
roles based on the four default role types
- For ease of migration, at the time of upgrade commands.properties will be used
to add the existing set of permissions to the default roles. cloud.account
will have a new role_id column which will be populated based on the default
roles as well
Dynamic-roles migration tool: scripts/util/migrate-dynamicroles.py
- Allows admins to migrate to the dynamic role based checker at a future date
- Performs a hard, one-way migration and update
- Migrates rules from the existing commands.properties file into the DB and deprecates it
- Enables an internal hidden switch to turn on the dynamic role-based checker feature
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
The Quota service, while allowing for scalability, makes sure that the cloud is
not exploited by attacks, careless use and program errors. To address this
problem, we propose to employ a quota-enforcement service that allows resource
usage within certain bounds as defined by policies and available quotas for
various entities. The Quota service extends the functionality of the usage server
to provide a measurement for the resources used by accounts and domains, using a
common unit referred to as cloud currency in this document. It can be configured
to ensure that usage won't exceed the budget allocated to accounts/domains
in cloud currency. It lets users know how much of the cloud resources they are
using, and it helps cloud admins, if they want, to ensure that a user does
not go beyond their allocated quota. If an account is found to be exceeding its
quota in a usage cycle, it is locked. Locking an account means that it will not
be able to initiate a new resource allocation request, whether for more
storage or an additional IP. Needless to say, the quota service, as well as any
action on the account, is configurable.
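Conceptually, per usage cycle each usage record is charged at usage multiplied by the configured tariff and debited from the account's balance; once the balance is exhausted the account can be locked. An illustrative sketch only (not the plugin's actual classes):

import java.math.BigDecimal;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the quota idea: debit usage * tariff from the balance
// and lock the account once the balance (in "cloud currency") is exhausted.
public class QuotaSketch {
    record UsageRecord(String type, BigDecimal usedUnits) {}

    static BigDecimal charge(List<UsageRecord> records, Map<String, BigDecimal> tariffs) {
        BigDecimal total = BigDecimal.ZERO;
        for (UsageRecord r : records) {
            BigDecimal tariff = tariffs.getOrDefault(r.type(), BigDecimal.ZERO);
            // only process entries with non-zero tariff and usage greater than 0
            if (tariff.signum() > 0 && r.usedUnits().signum() > 0) {
                total = total.add(r.usedUnits().multiply(tariff));
            }
        }
        return total;
    }

    public static void main(String[] args) {
        BigDecimal balance = new BigDecimal("100");
        BigDecimal owed = charge(
                List.of(new UsageRecord("RUNNING_VM", new BigDecimal("240"))),
                Map.of("RUNNING_VM", new BigDecimal("0.5")));
        balance = balance.subtract(owed);
        boolean lockAccount = balance.signum() <= 0;   // locked account: no new allocations
        System.out.println("balance=" + balance + " lock=" + lockAccount);
    }
}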
Changes from Github code review:
- Added marvin test for quota plugin API
- removed unused commented code
- debug messages in debug enabled check
- checks for nulls, fixed access to member variables and feature
- changes based on PR comments
- unit tests for UsageTypes
- unit tests for all Cmd classes
- unit tests for all service and manager impls
- try-catch-finally or try-with-resource in dao impls for failsafe db switching
- remove dead code
- add missing quota calculation case (regression fixed)
- replace tabs with spaces in pom.xmls
- quota: though the default value for quota_calculated is 0, the usage server
makes it null while entering usage entries. Flipping the condition so
as to account for that.
- quotatypes: fix NPE in quota type
- quota framework test fixes
- made statement period configurable
- changed default email templates to reflect the fact that exhausted quota may not result in a locked account
- added quotaUpdateCmd that refreshes quota balances and sends alerts and statements
- the quotaSummary command reports quota balance, quota usage and state for all accounts
- made UI framework changes to allow for text area input in edit views
- process usage entries that have greater than 0 usage
- process quota entries only if the tariff is non-zero
- if there are credit entries but no balance entry create a dummy balance entry
- remove any credit entries that are before the last balance entry
when displaying balance statement
- on a rerun the last balance is now getting added
FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Quota+Service+-+FS
PR: https://github.com/apache/cloudstack/pull/768
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
1. provide compatibility with the Big Cloud Fabric (BCF) controller
L2 Connectivity Service in both VPC and non-VPC modes
2. virtual network terminology updates: VNS --> BCF_SEGMENT
3. uses HTTPS with trust-always certificate handling
4. topology sync support with BCF controller
5. support multiple (two) BCF controllers with HA
6. support VM migration
7. support Firewall, Static NAT, and Source NAT with NAT enabled option
8. add VifDriver for Indigo Virtual Switch (IVS)
This closes #151
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This is a plugin that adds OVM3 support, ranging from 3.3.1 to 3.3.2. Basic
functionality is in here, advanced networking etc.
Snapshots currently only work when a VM is stopped, due to the semantics of OVM's
raw image implementation (so snapshots should work on a storage level underneath
the hypervisor).
This closes #113
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This commit publishes event bus messages to a "cloudstack" topic
in Apache Kafka. Configuration is expected to be found in
/etc/cloudstack/management/kafka.producer.properties and will
generally be of the form:
bootstrap.servers=kafka-host1:9092,kafka-host2:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
There is no way to parameterize the topic yet, and the consuming
code is just a placeholder. I think adding a consumer within CloudStack
is very debatable and likely not needed.
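As a rough sketch of what the producer side amounts to, using the standard Kafka client API and the properties file above (the event key/value shown here are made up for illustration):

import java.io.FileInputStream;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch: load the properties file above and publish one event to the "cloudstack" topic.
public class EventPublishSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(
                "/etc/cloudstack/management/kafka.producer.properties")) {
            props.load(in);
        }
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("cloudstack",
                    "VM.START",                       // example event key (made up)
                    "{\"event\":\"VM.START\",\"status\":\"Completed\"}"));
        }
    }
}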
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This is a feature to handle DNS entries by means of an external DNS Provider,
such as Bind. These entries include DNS domains and reverse domains, VM records
and reverse records.
For a complete description, please refer to the design document available at
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Bind+and+PowerDNS+integration+by+Globo+DNSAPI
For the discussion about this feature on the dev mailing list, please refer to
http://markmail.org/thread/fvwf36hpxotiibka
Summary:
- new Network Service Provider called GloboDNS
- new Network Element to manage network domains and VM records (entries) on an external API
- new Network Resource to communicate with GloboDNS (open source)
- new API command to add DNS server
- new global option to determine if this provider should override VM entries on external DNS server
- changes in UI to include GloboDNS in Providers list
Signed-off-by: Rohit Yadav <rohit.yadav@shapeblue.com>
This is a NexentaStor iSCSI volume driver.
Only the following functions are implemented so far:
* create volume
* delete volume
Delete volume is currently still in progress.
Signed-off-by: Edison Su <sudison@gmail.com>
This check-in adds support for a plug-in that provides an in-memory event
bus which can be used as an alternative to the RabbitMQ-based event bus. Both
publisher and subscriber should be running within the management server to use
the in-memory event bus.
Adding the missing file
During HA and maintenance, call different planners (if the original planners are not able to find capacity) which skip some heuristics
This patch adds a network plugin to support Palo Alto Networks firewall (their appliance and their VM series firewall).
More information in the FS: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Palo+Alto+Firewall+Integration
Features supported are:
- List/Add/Delete Palo Alto service provider
- List/Add/Delete Palo Alto network service offering
- List/Add/Delete Palo Alto network with above service offering
- Add instance to the new network (creates the public IP and private gateway/cidr on the PA as well as the source nat rule)
- List/Add/Delete Ingress Firewall rule
- List/Add/Delete Egress Firewall rule
- List/Add/Delete Port Forwarding rule
- List/Add/Delete Static Nat rule
- Supports Palo Alto Networks 'Log Forwarding' profile globally per device (additional docs to come)
- Supports Palo Alto Networks 'Security Profile Groups' functionality globally per device (additional docs to come)
Knowns limitations:
- Only supports one public IP range in CloudStack.
- Currently not verifying SSL certificates when creating a connection between CloudStack and the Palo Alto Networks firewall.
- Currently not tracking usage on Public IPs.
Signed-off-by: Sheng Yang <sheng.yang@citrix.com>
architecture allows additional functionality to be easily added. Incorporating the plugin in CloudStack will allow
the community to participate in improving the features available with Hyper-V. The plugin uses a Director Connect
Agent architecture described here: https://cwiki.apache.org/confluence/display/CLOUDSTACK/Progress
Add ability to pass kvp data via the key cloudstack-vm-userdata
Rearrange code to make it clearer what .NET objects are being used.
Test failures are easier to deal with if test key is not deleted.
Acquire management/pod ip for control ip when VR deploys in HyperV
Fixed deletion of VMs on a Hyper-V host when the mgmt server gets restarted due to HA
Implementation of the attach ISO command. Attaches an ISO to a given VM.
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh script that handles bridge/VXLAN interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resources still say 'VLAN' in logs even when VXLAN is used
- In the UI, "Network - GuestNetworks" doesn't display the VNI
-- VLAN ID field displays "N/A"
- Documentation!
Signed-off-by: Toshiaki Hatano <haeena@haeena.net>