Related to commit 5f8a2ee9be. The field image_data_store_id should not be used in templates.sql as it does not exist yet. It should be filled in during the upgrade from 4.1 to 4.2, but I'm leaving that to Edison.
Signed-off-by: Hugo Trippaers <htrippaers@schubergphilis.com>
CloudStack uses the Guest CIDR as the dhcp-range for Guest VMs. The entire
CIDR is used by CloudStack for assigning IPs to Guest VMs. IP Address
Reservation allows part of the address space to be used for non-CloudStack
hosts/physical servers as well, by restricting the address space of CloudStack
Guest VMs. A reservation can be configured using the updateNetwork API by specifying
guestvmCidr as an additional parameter. Reservation is applicable to
Isolated Guest Networks, including VPC. The reservediprange field in the response
returns the IP range that can be used for non-CloudStack hosts.
Manually tested the following scenarios:
Applying reservation when there are running VMs inside the
guest_vm_cidr.
Applying reservation when there are running VMs outside the
guest_vm_cidr (not allowed).
Applying reservation when an external device like NetScaler is configured
in the guest_cidr.
Applying reservation in VPC tiers.
Applying reservation outside the range of the guest_cidr (not allowed).
Supporting kickstart in CloudStack baremetal.
Able to start a VM.
Conflicts:
client/tomcatconf/componentContext.xml.in
server/src/com/cloud/baremetal/BareMetalTemplateAdapter.java
server/src/com/cloud/baremetal/BareMetalVmManagerImpl.java
server/src/com/cloud/vm/UserVmManagerImpl.java
In c63dbb8804 I removed this rule from create-schema:
- `queue_proc_time` datetime COMMENT 'when processing started for the item',
But the upgrade path schema-40to410.sql had a different rule, which caused the bug:
+ALTER TABLE `cloud`.`sync_queue_item` ADD `queue_proc_time` DATETIME NOT NULL
COMMENT 'when processing started for the item' AFTER `queue_proc_number`;
In this fix we just revert to whatever rule was defined in create-schema, as the
developer may have forgotten to fix the same rule in both create-schema and the upgrade path.
This commit can be reverted, or the code fixed, if we want queue_proc_time
to be non-nullable.
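For illustration, a minimal sketch of the reverted upgrade rule, assuming it keeps the nullable definition from create-schema (the AFTER clause is carried over from the original statement):
-- revert: drop NOT NULL so the column matches create-schema
ALTER TABLE `cloud`.`sync_queue_item` ADD `queue_proc_time` datetime
COMMENT 'when processing started for the item' AFTER `queue_proc_number`;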
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
- Remove create-schema-view.sql, views are created when mgmt server does rolling
upgrade from 4.0.0 to 4.1.0
- Fix reference and usage of the sql file in scripts
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
- Move changes since 4.0 to the schema upgrade path (schema40-410.sql)
- Comment out some table names where we try to copy uuid from id; they
don't exist (see the sketch after this list)
- We don't run the above step for tables which are newly created for 410 and don't
exist in 4.0, for example the autoscale-related ones; the code is commented out, not removed
- Drop the removed indexes before dropping the column
- Comment out the insertion for the default region, as we insert the same in
code, in ConfigurationServerImpl:createDefaultRegion(); fix the same in premium
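For illustration, the commented-out uuid-copy step has this shape (a sketch only; `some_table` is a hypothetical stand-in for the real 4.0 table names):
-- hypothetical table name; the real statements target existing 4.0 tables
UPDATE `cloud`.`some_table` SET uuid = id WHERE uuid IS NULL;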
Testing:
Deployed a fresh 4.0 database, compiled and ran the mgmt server. It did a smooth
rolling upgrade from 4.0 to 4.1.0 with no database exceptions or any other db errors.
TODO:
- 4.2.0-related changes like ipv6 should go into schema-410to420.sql
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Fixed Component annotation for usage parsers
Fixed mvn target to run usage
Removed UsageServerComponentConfig, which is not required
Added region_id to account table in cloud_usage db
Conflicts:
setup/db/db/schema-40to410.sql
Addition of two new resource types, CPU and Memory, to the existing pool of
resource types.
Added methods to set the limits on these resources using the updateResourceLimit
API command and to get a count using updateResourceCount. Also added calls in the
virtual machine life cycle to check these limits and to increment/decrement the new
resource types.
Resource name :: resource type number
CPU :: 8
Memory :: 9
Also added Unit Tests for the same.
Also add public_ipv6_address for ipv6 address management.
Extend nics and vlans for ipv6 addresses.
Add a dependency on com.googlecode.ipv6 (java-ipv6).
Modify dhcpcommand for ipv6.
- Fixed new join dao impls as spring components
- Fixed component context xml to load api rate limit checker
- Fixed root pom.xml for duplicate plugin
- Fixed list data centers method
- Fixed following conflicts:
api/src/org/apache/cloudstack/api/command/admin/network/CreateNetworkOfferingCmd.java
api/src/org/apache/cloudstack/api/command/user/offering/ListServiceOfferingsCmd.java
api/src/org/apache/cloudstack/api/command/user/template/DeleteTemplateCmd.java
api/src/org/apache/cloudstack/api/command/user/template/ExtractTemplateCmd.java
plugins/api/discovery/src/org/apache/cloudstack/discovery/ApiDiscoveryServiceImpl.java
server/src/com/cloud/api/ApiDBUtils.java
server/src/com/cloud/api/ApiServer.java
server/src/com/cloud/api/query/QueryManagerImpl.java
server/src/com/cloud/configuration/DefaultComponentLibrary.java
server/src/com/cloud/server/ManagementServerImpl.java
server/src/com/cloud/storage/swift/SwiftManagerImpl.java
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Applied some formatting
Changed an invalid comment
Added a workaround for the non-existing table vm_template_iso
Fixed non-prefixed table names in views
Removed the 'use cloud' statement
Detail: This merges the resizevolume feature branch, which provides the
ability to migrate a disk between disk offerings, thereby changing its
size, or to specify a new size if the current disk offering is custom.
BUG-ID: CLOUDSTACK-644
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1358358209 -0700
Although I still think the templates aren't well maintained, I just
added 12.04 since it is an LTS and people probably want it in the
list of templates.
I think this system should be more generic, though.
right approach to populate uuid column since it will impact upgrade as
well), and populate UUID column in seed data sql script.
Signed-off-by: Min Chen <min.chen@citrix.com>
The basic idea behind this is: deploy a fixed-size threadpool for updating RvR
status, using a producer/consumer model. There is a global configuration
router.check.poolsize (10 by default) to control the pool size.
Using a pool size of 100 for 1000 RvRs was tested with the simulator and works well.
We can also adjust the global configuration option router.check.interval, e.g. to
60s from the default 30s, to mitigate the issue.
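For reference, a sketch of tuning both knobs directly (assuming the standard name/value layout of the `cloud`.`configuration` table):
-- assumption: rows for both settings already exist in cloud.configuration
UPDATE `cloud`.`configuration` SET value = '100' WHERE name = 'router.check.poolsize';
UPDATE `cloud`.`configuration` SET value = '60' WHERE name = 'router.check.interval';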
For an LB device in inline mode, the ip deployer (the owner of the public ip) is the
firewall in front of it, not the device itself. So check whether it's inline; if it
is, return the firewall as the ip deployer.
These columns were not added by the upgrade from 4.0 to 4.1, causing SQL errors.
These columns are present in create-schema.sql, so adding them here makes the upgrade to 4.1 work as well.
Multiple fixes:
1. changes to the mvn configuration
a. include simulator to client.war
b. activate simulator by profile
2. templates for simulator
3. developer prefill for simulator
a. Use deploydb-simulator to set up the simulator db
4. Inherit components-simulator.xml from components.xml
5. ListVolumesCommand was missing from MockStorageManager
6. Include simulator properties into utils/db.properties
TODO:
Secondary storage VMs don't come up because ComponentLocator doesn't
retain a unique set of adapters by name. This will be fixed in a subsequent
checkin.
BootArgs carry the router priority for master and backup, and the simulator will
reuse the priority to decide RvR status, deprecating the odd/even logic.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
1. Modified create-schema.sql to add the version as 4.1.0 instead of 4.0.0
2. Removed schema-40to41.sql and moved the content to schema-40to410.sql
3. Added schema-40to410.sql to Upgrade40to41.java
Conflicts:
server/src/com/cloud/upgrade/dao/Upgrade40to41.java
setup/db/create-schema.sql
setup/db/db/schema-40to41.sql
setup/db/db/schema-40to410.sql
Signed-off-by: Min Chen <min.chen@citrix.com>
This is an improvement of:
commit 1ca493e4fa
Author: Sheng Yang <sheng.yang@cloud.com>
Date: Wed Feb 29 17:43:50 2012 -0800
bug 14042: Don't set dhcp:router option on DHCP server for non-default
network on CentOS/RHEL
The old solution only works on CentOS/RHEL; this one extends the ability to more
guest OSes and enables the user to choose what the policy should be for each guest
OS type.
IdentityProxy is only referenced in CreateCmdResponse, which involves
some async job logic change. Since it is not impacting list performance,
we will leave it there for now.
Signed-off-by: Min Chen <min.chen@citrix.com>
- introduces a Capability in the network offering which decides, when the
EIP service is enabled, whether by default a public IP should be
assigned to the VM or not
- the default network offering with EIP/ELB service will still work with the old EIP
semantics, i.e. assign a public IP to each VM on start
This is part 1 of the list API refactoring. Commands covered:
listVmsCmd, listRoutersCmd. Responses covered:
UserVmResponse, DomainRouterResponse. DB views created:
user_vm_view, domain_router_view.
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
This commit merges the nicira-l3support branch with master. This
effectively adds nicira nvp l3 support to master. The NiciraNVP Provider
can support the following services with this modification: Connectivity,
SourceNat, StaticNat and PortForwarding
Testing done:
Create, Delete network offerings with Nicira Element
Use the GUI to add, modify, remove the Nicira Element and Provider
Provision, deprovision SourceNat networks
Provision, deprovision Portforwarding and StaticNat rules
Tested with Nicira NVP release 2.1.0, 2.2.0 and 2.2.1 (2.2.x recommended)
Support for local data disks. Currently the enable/disable config is at zone level; in subsequent checkins it can be made more granular.
The following changes are made:
- The create disk offering API now takes an extra parameter to denote the storage type (local or shared). This is similar to the storage type in the service offering.
- Create/delete of data volumes on local storage
- Attach/detach for local data volumes. Re-attach is allowed as long as the vm host and the data volume's storage pool host are the same.
- Migration of a VM instance is not supported if it uses local root or data volumes.
- Migrate is not supported for local volumes.
- Zone level config to enable/disable local storage usage for service and disk offerings.
- Local storage gets discovered when a host is added/reconnected if the zone level config is enabled. When disabled, existing local storages are not removed, but no new local storage is added.
- Deploy VM command validates service and disk offerings based on the local storage config.
- Upgrade uses the global config 'use.local.storage' to set the zone level config for local storage.
(cherry picked from commit 62710aed37606168012a0ed255a876c8e7954010)
Support for up to 16 VDIs per VM on XS 6.0 and above (16 VDIs => root + cd + 14 data volumes). Currently in CS the number of data disks that can be attached to a VM is hard-coded to 6. Made this setting configurable by moving it to hypervisor capabilities. Although XS 6.0 and above supports up to 16 VDIs, while testing on XS 6.0.2 we found that only 13 data volumes can be attached to a VM, so for XS 6.0 and 6.0.2 max_data_volumes_limit is currently set to 13.
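A sketch of adjusting the limit per hypervisor version (table and column names come from this commit; the exact key columns are assumptions):
-- assumption: rows are keyed by hypervisor_type and hypervisor_version
UPDATE `cloud`.`hypervisor_capabilities` SET max_data_volumes_limit = 13
WHERE hypervisor_type = 'XenServer' AND hypervisor_version IN ('6.0', '6.0.2');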
Signed-off-by: Koushik Das <Koushik.Das@citrix.com>
This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>0.9.13)
- Qemu with RBD support (>0.14)
The primary storage does not support all the functions of CloudStack yet; for example, snapshotting is disabled
because backing up an RBD snapshot is not possible in the way CloudStack wants to do it.
Creating templates from RBD volumes goes well; creating a VM from a template, however, is still hit-and-miss.
NFS primary storage is also still required: you are not able to run your System VMs from RBD, so they will need
to run on NFS.
Other than these points, you can run instances with RBD-backed disks.
2) Added service API support for plugging/unplugging the nics for the VpcElement
Conflicts:
api/src/com/cloud/network/NetworkService.java
core/src/com/cloud/vm/VMInstanceVO.java
server/src/com/cloud/network/NetworkManagerImpl.java
server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
server/test/com/cloud/network/MockNetworkManagerImpl.java
1) Added API frameworks for the feature. New commands:
* CreateVPCCmd
* ListVPCsCmd
* DeleteVPCCmd
* UpdateVPCCmd
* CreateVPCOfferingCmd
* UpdateVPCOfferingCmd
* DeleteVPCOfferingCmd
* ListVPCOfferingsCmd
2) New db tables:
* `cloud`.`vpc`
* `cloud`.`vpc_offerings`
* `cloud`.`vpc_offering_service_map`
and corresponding VO/Dao objects.
Added a vpc_id field to the `cloud`.`networks` table - not null when the network belongs to a VPC (see the sketch after this list)
3) New Manager and Service interfaces - VpcManager/VpcService
4) Automatically create a new VpcOffering (if one doesn't exist) on system start
5) New Action events:
* VPC.CREATE
* VPC.UPDATE
* VPC.DELETE
* VPC.OFFERING.CREATE
* VPC.OFFERING.UPDATE
* VPC.OFFERING.DELETE
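A sketch of the networks-table change from 2) (the column type and constraint are assumptions; the authoritative DDL is in the schema files):
-- assumption: nullable bigint referencing cloud.vpc; NULL for non-VPC networks
ALTER TABLE `cloud`.`networks` ADD COLUMN `vpc_id` bigint unsigned DEFAULT NULL;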
Conflicts:
api/src/com/cloud/api/ApiConstants.java
client/tomcatconf/commands.properties.in
server/src/com/cloud/api/ApiDBUtils.java
server/src/com/cloud/network/NetworkManagerImpl.java
setup/db/create-schema.sql
Description:
Removed the vcenter_dc_name and vcenter_ipaddr
fields from the virtual_supervisor_module
table, the CiscoNexusVSMDeviceVO, addClusterCmd,
and all other references to these two fields.
Fixing null pointer exceptions when checking
for nexus related global parameter values in
addClusterCmd.
Conflicts:
api/src/com/cloud/api/commands/AddClusterCmd.java
Description:
Putting in table creation DDLs in the upgrade script
when upgrading from Acton (3.0.2) to Bonita (3.0.3).
Please note that the field names in the tables need
to be changed as per naming conventions. Those changes
will be checked in in an upcoming commit.
Conflicts:
setup/db/db/schema-302to303.sql
Description:
1. Added the PortProfile infrastructure:
a. PortProfileVO : The VO class to represent a db
record of the table port_profile. Each db record
represents one port profile.
b. PortProfileDao: The interface that declares search
functions on the port_profile table.
c. PortProfileDaoImpl: The class that implements the
interface declared in PortProfileDao.
d. PortProfileManagerImpl: The class that contains
routines that will add or delete db records from
the port_profile table. If you want to create/delete
a portprofile, call functions from this class.
e. Changes to create-schema.sql to create the port_profile
table.
2. Cleaned up code:
a. Removed a number of unused Dao and Manager objects in
CiscoNexusVSMDeviceManagerImpl.
b. Removed the ListCiscoNexusVSMNetworksCmd command.
c. Removed a bunch of import statements in a few files.
Description:
1. Modify addCiscoNexusVSMCmd to enable a VSM
by default, when it is added to a cluster.
2. Put in two new APIs exposed to the user -
a. EnableCiscoNexusVSMCmd
b. DisableCiscoNexusVSMCmd
Disabling a VSM does not delete it. It only
prevents the Management Server from using that
VSM. This is useful if the VSM is in
maintenance mode.
Description:
1. Changed AddCiscoNexusVSMCmd to:
a. Extend BaseCmd instead of BaseAsyncCmd.
b. Take in more required parameters (viz
vCenterDCName and vCenterIpAddress)
1a. Changed DeleteCiscoNexusVSMCmd to also
extend BaseCmd.
2. Put in changes that ensure that when a VSM
is added, it is disabled by default.
3. Fixed code that was leading to exceptions
related to DB reads/writes to VSM related tables.
4. Added new API Constants in ApiConstants.java.
NOTE - Always initialize new attributes in
ApiConstants.java to lower-case values.
Never use upper case there. Also, regardless
of what names you give attributes in the
*Cmd.java classes, you pass in parameters via
API calls by specifying <key>=<value>, where
<key> is the value you specified in
ApiConstants.java.
5. Modified the addCiscoNexusVSM() function in
CiscoNexusVSMDeviceManagerImpl.java to write VSM
records to the db.
Description:
Update create-schema.sql to create tables for
VSM and VSM-Cluster mapping.
Fixed an incorrect exception path in
CSExceptionErrorCode.
Conflicts:
utils/src/com/cloud/utils/exception/CSExceptionErrorCode.java
This fix will enable support for multiple NetScaler devices providing EIP service in the same zone.
- Introduced the global setting "eip.use.multiple.netscalers" to turn on multiple NetScaler support
- Enhanced the configureNetscalerLoadBalancer API to take in the PBR setup between the pod's subnet
and the NetScaler device
- Added logic to pick a NetScaler (based on the guest IP and corresponding pod) while configuring the INAT rule
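A sketch of flipping the new global setting on (assuming it is stored as a row in the standard configuration table):
-- assumption: a row for the setting exists with a default of 'false'
UPDATE `cloud`.`configuration` SET value = 'true' WHERE name = 'eip.use.multiple.netscalers';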
2) Added new api - changeServiceForSystemVm - to support service offering upgrade for system vms
3) Removed global config parameters that are not in use anymore: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
Reviewed-By: Sheng Yang
Changes:
Added a 'removed' column to physical_network_service_providers to avoid the Foreign Key constraint error.
Conflicts:
setup/db/db/schema-30to301.sql
status 14500: resolved fixed
reviewed-by: Frank Zhang
Conflicts:
server/test/com/cloud/network/MockNetworkManagerImpl.java
setup/db/db/schema-30to301.sql
Bug 13641 - OVM add host to OVM cluster results in host remaining in state: Alert
Bug 13652 - OVM add primary storage to OVM cluster FAIL
making Ovm work on Acton
status 13662: resolved fixed
status 13641: resolved fixed
status 13652: resolved fixed
reviewed-by: edison
Changes:
To migrate systems using 'use.user.concentrated.pod.allocation' as true and 'vm.allocation.algorithm' as true, we need to
add the following changes:
- There will be 5 values for 'vm.allocation.algorithm': 'random', 'firstfit', 'userdispersing', 'userconcentratedpod_random', 'userconcentratedpod_firstfit'
- 'userconcentratedpod_random' means we apply user concentration to pods and clusters; for hosts and pools we use random ordering.
- 'userconcentratedpod_firstfit' means we apply user concentration to pods and clusters; for hosts and pools we use firstfit ordering.
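A sketch of the corresponding migration for such systems (hedged; the real upgrade may do this in Java rather than raw SQL):
-- map the old boolean pair onto the new combined value
UPDATE `cloud`.`configuration` SET value = 'userconcentratedpod_random'
WHERE name = 'vm.allocation.algorithm';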
Changes:
- Throw an error if anyone tries to update the resource limits for the ROOT domain using the updateResourceLimit API
- For ROOT domain always return -1 (infinite) limit
- DB upgrade: remove any limits set for ROOT domain
2) Added elasticIp and elasticLb network capabilities. Provided support to create network offering with these capabilities.
3) Added one more default network offering having elasticip and elasticlb
4) Public network support to Basic zone. You can associate/disassociate IP addresses now
Conserve mode means we can use the same IP for different purposes, in order to
"conserve" ip resources. But in this offering, all the service providers should
be the same, and the network created from this offering may be prohibited from
updating to a different network offering whose services are provided by different
service providers - because different service providers would need different IPs
for different services.
If the user wants to update the "conserve mode" network with a network offering
that has different service providers, each public IP should have only one usage;
only then is the update allowed.
* SSVM to act as a direct connect agent
* Storage Resources handle SSVM commands
* create-schema.sql already has simulator_network_label. Removing the label from create-schema-simulator.sql
Commit 4f904d5fd9dbe5252b7a6075f712e9254059e2c0 "Changes to
PhysicalNetworkTrafficType to accomodate the simulator hypervisor type" broke
master. This patch fixes it.
- Changes to schema file schema-2214to30.sql: moved out cleanup to separate file, added some NAAS changes
- Added physicalNetwork setup to Upgrade2214to30.java data migration
- Unit test and sample file
bug 12318: NaaS: Dynamic CIDR for virtual router
This patch in fact uses ExternalGuestNetworkGuru to replace GuestNetworkGuru. The
problem is the virtual router would normally use 10.1.1.0/8 as the CIDR, but when we
want to upgrade to an external firewall, e.g. Netscaler, the CIDR would need to be
changed to a different value, e.g. 10.x.x.0/24 based on the VLAN, because the external
firewall cannot support one CIDR for multiple VLANs right now. So we have to use
the same policy for the virtual router.
This patch also adds one field, "specified_cidr", to the networks table. If this
field is true, then the user specified the CIDR of this network; thus we cannot
guarantee the CIDR after the upgrade is valid, so we would like to prohibit the
upgrade of the network offering.
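A sketch of the new field (only the name and semantics come from this commit; the type and default are assumptions):
-- assumption: boolean-style flag, 0 unless the user specified the CIDR
ALTER TABLE `cloud`.`networks` ADD COLUMN `specified_cidr` tinyint(1) NOT NULL DEFAULT 0;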
This should also fix bug 12318. The reason for bug 12318 is that the pre-set gateway
address of domR is overridden by ExternalGuestNetworkGuru. After this patch,
ExternalGuestNetworkGuru would respect the existing value in the Nic, rather than
simply wiping it out. It would do a calculation to get the relevant address after
the VLAN changed.
More clean up can be done in the future, once we've proved that this policy change
doesn't break...
status 12234: resolved fixed
status 12318: resolved fixed
As per the new design, the following would be done.
(a) any ISO-derived disk can be extracted
(b) there will be a global config to disable extraction of ISO based volumes.
That way people concerned about (a) can just use (b) to fix it.
Reviewed-by: Kishan.
status 11811: resolved fixed
Changes:
- Added two new deployment planners, 'UserDispersingPlanner' and 'UserConcentratedPodPlanner', to the DeploymentPlanners
- Planner can be chosen by setting the global config variable 'vm.allocation.algorithm' to either of the following values:
('random', 'firstfit', 'userdispersing', 'userconcentratedpod')
- By default, the value is 'random'. When the value is 'random', FirstFitPlanner is invoked as before that shuffles the resource lists.
- Now Admin can choose whether the deployment heuristic should be applied starting at cluster or pod level. This can be done by using the
global config variable 'apply.allocation.algorithm.to.pods' which is false by default. Thus by default as earlier, planner starts at clusters directly.
'UserConcentratedPodPlanner' changes:
- Prior to 3.0, FirstFitPlanner used to reorder the clusters when this heuristic was chosen.
- Now this is done by a separate planner and is applied only when 'vm.allocation.algorithm' is set to this planner
- It reorders the capacity-ordered clusters/pods such that pods having more Running VMs for the given account are tried first.
- Note that this userconcentration is applied only to pods and clusters. Not to hosts or storagepools within a cluster.
'UserDispersingPlanner' changes:
- 'UserDispersingPlanner' reorders the capacity-ordered pods and clusters based on the number of 'Running' VMs for the given account, in ascending order. The aim is to choose first those pods/clusters which have fewer Running VMs for the given account.
- Admin can provide weights to capacity and user dispersion so that both parameters get considered in reordering the pods/clusters. This can be done by setting
the global config parameter 'vm.user.dispersion.weight'. Default value is 1. Thus if this planner is chosen, by default, ordering will be done only by number of Running Vms, unless the weight is changed.
- HostAllocators and StoragePoolAllocators also reorder the hosts and pools in ascending order of the number of Running VMs / Ready Volumes respectively for the given account. Thus they try to choose the host or pool within a cluster with fewer VMs for the account.
- Made NetScaler, SRX, F5 network elements pluggable services
- Added an abstract load balancer device manager, ExternalLoadBalancerDeviceManager
- Made both the F5 and NetScaler pluggable services extend ExternalLoadBalancerDeviceManager
- Added an abstract firewall device manager, ExternalFirewallDeviceManager
- Made the SRX pluggable service extend ExternalFirewallDeviceManager
- Added APIs to configure and manage NetScaler devices
status 11938: resolved fixed
reviewed-by: Frank Zhang
This fix covers the following scenario:
* The customer is upgrading from 2.2.11 to 2.2.13.
* The incorrect indexes are dropped as a part of the 2.2.12 to 2.2.13 upgrade, but we still insert them as a part of 2.2.11 to 2.2.12, which might lead to db upgrade failure. The only way to handle this case is to remove them from the 2.2.11 to 2.2.12 upgrade path.
only the owner of the network can access it; if it's domain - all accounts in the domain and the domain's children can have access.
* aclType replaces 2 old fields: isShared and isDomainSpecific.
* All 2.2.x account-specific networks will have aclType=Account; 2.2.x domain-specific networks - aclType=Domain; 2.2.x zone-level networks - aclType=Domain with domainId = Root domain id
- Create Default physicalnetwork and add traffic types while creating a zone
- DeleteProvider should error out if there are networks using the provider.
- Other validations
- ListSupportedNetworkServiceProvidersCmd will now return Providers along with their element's services and a boolean 'canEnableIndividualServices' that indicates whether services can be enabled/disabled individually for this Provider
- add & update NetworkServiceProvider changed to take in the list of services to enable. While adding a provider, if list is null then all services supported by the element are enabled by default.
- ListNetworkServices enhanced to take in a provider name and returns services of that specific provider.
As DhcpElement/VirtualRouterElement/RedundantVirtualRouterElement is decided to
be the service provider of the physical network, this API should be called to
add a new element, with correlated network service provider ID.
Then e.g. ConfigureVirtualRouterElementCmd should be called to configure and
enable the element.
DHCP range, domain name, etc. are properties of the network, not virtual router
specific.
The focus of virtual router configuration would be on separately enabling/disabling
each service it provides.
- Make all API commands Async and add events
- Make BroadcastDomainRange case insensitive
- Process all _networkElements to build the Service -> Provider map during NetworkMgr::configure()
It would cover the configuration of DHCPElement, VirtualRouterElement and
RedundantVirtualRouterElement.
Also add a foreign key in the domain_router table to reflect which element the
domain_router was created from and what configuration it uses.
- Create Zone changes and changes to data_center table to remove vlan, securityGroup fields
- Physical Network lifecycle APIs
- Physical Network Service Provider APIs
- DB schema changes
* moved all services to a separate table, mapping them to the network_offering+provider.
* added state/securityGroupEnabled properties for the networkOffering
* added ability to list by state/securityGroupEnabled in listNetworkOfferings api command
2) New service: SourceNat
Since we would introduce a way to specify each service provider in the network
offering, it's better to have the redundant virtual router as a separate service
provider.
Also, the isRedundant() flag in the network offering would be removed. Redundant
virtual router temporarily won't work from now, until we're able to add
different network elements/service providers in network_offering.
1) Introduce new managers - ProjectManager and DomainManager. Moved all domain related code from AccountManager to DomainManager.
2) Moved some code from ManagementServerImpl to the correct managers.
3) New resource limit for Domain - Project
We've added "Strategy.Managed" for the source nat ip address, to prevent it from
being released when we try to execute the restartNetwork command. But we didn't update
the existing nics when mgmt was upgraded. This would result in restartNetwork command
failure (NPE) when trying to restart an existing network.
status 11504: resolved fixed
Reviewed-by: Alena
Added two new values, "all" and "default", to the global config "network.loadbalancer.haproxy.stats.visibility". With this change, it can take six possible values:
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link local network (for xen and kvm).
disabled - stats disabled.
all - stats available on public, guest and link-local. (Newly added)
default - stats available on the serving http port; this does not need any specific http port. (Newly added)
Except for "default" and "disabled", the remaining four values require the stats port to be configured.
Force-stopping the router would release all the resources it used, but the router may
still be running. Add a column "stop_pending" in the database, and stop the router
when it comes back.
The admin would be able to choose to force destroy such a router, then recover the
network using the restartNetwork command with cleanup=false.
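A sketch of the schema change (only the column name and table come from this commit; type and default are assumptions):
-- assumption: boolean-style flag, set when the router must be stopped on return
ALTER TABLE `cloud`.`domain_router` ADD COLUMN `stop_pending` tinyint(1) NOT NULL DEFAULT 0;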
status 11036: resolved fixed
1) Use row locks instead of a global lock when updating the resource_count table. When updating resource_count for an account, make sure that we lock the account plus all related domains (see the sketch after this list).
2) Insert resource_count records for the account/domain at the moment the account/domain is created.
3) As a part of the DB upgrade, insert missing resource_count records for all non-removed accounts/domains
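A sketch of the row-lock pattern from 1) (hedged; the real code goes through the DAO layer, and the resource_count column names and type value are assumptions):
-- lock the account row plus all related domain rows, then update in the same transaction
SELECT id FROM `cloud`.`resource_count`
WHERE account_id = 42 OR domain_id IN (1, 10) FOR UPDATE;
UPDATE `cloud`.`resource_count` SET count = count + 1
WHERE account_id = 42 AND type = 'user_vm';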
Conflicts:
core/src/com/cloud/alert/AlertManager.java
server/test/com/cloud/agent/MockAgentManagerImpl.java
Changes:
- Fixing API doc + response name + errorMessage
- Adding separate events for Egress rules
- Egress rules use the same database table as ingress, with a new 'type' column.
Pending tasks:
- db upgrade
- database table rename from security_ingress_rule to a generic name, renaming some of the Java classes from ingress to a generic name
- Retesting on kvm
Changes:
- Adding a new table 'hypervisor_capabilities' that will record capabilities for each hypervisor version (sketched after this list). Added db schema changes for this.
- Currently a few capabilities have been added, namely, 'max_guests_limit' and 'security_group_enabled'
- Added a new column 'hypervisor_version' to host table. StartupRouting command now takes in this parameter. It should be set when a host connects.
- If a host's hypervisor version is not present, we find all the capability rows for that hypervisor type and use the first record.
- 'max_guests_limit' is the maximum number of running guest Vms that a host can have for the given hypervisor.
- Host Allocators use this limit and skip a host if the number of running VMs on that host exceeds this limit.
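A sketch of the new table reconstructed from the columns named above (types, defaults and keys are assumptions; the authoritative DDL is in the schema change):
-- assumption: minimal shape only; the real table carries more columns and keys
CREATE TABLE `cloud`.`hypervisor_capabilities` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `hypervisor_type` varchar(32) NOT NULL,
  `hypervisor_version` varchar(32) DEFAULT NULL,
  `max_guests_limit` bigint unsigned DEFAULT NULL,
  `security_group_enabled` int(1) unsigned DEFAULT 1,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;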
Added the new value "link-local" to the global config network.loadbalancer.haproxy.stats.visibility. With this change it can take the new "link-local" value apart from the existing 3 values: global, guest-network, disabled.
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link local network.
disabled - stats disabled.
Changes:
- CreateTemplate and RegisterTemplate now support adding a template tag. It is a string value. This is a root-admin-only action - only the admin can add template tags.
- ListTemplates will return the template tag in response.
- HostAllocator changed to use template tag along with the existing tag on service offering. If both tags are present, allocator now finds hosts satisfying both tags. If no hosts have both tags, allocation will fail.
- DB changes to add new column to vm_template table.
- DB upgrade changes for upgrade from 2.2.10 to 2.2.11
status 11029: resolved fixed
This commit also includes the following:
* map the firewall rule to pf/lb/staticNat/vpn when the firewall rule is created as a part of pf/lb/staticNat/vpn rule creation
* when deleting those rules, also delete the related firewall rule
1) Added new apis: createFirewallRule, deleteFirewallRule, listFirewallRules
2) Modified existing apis - added boolean openFirewall parameter to createPortForwardingRule/createIpForwardingRule/createRemoteAccessVpn. If parameter is set to true, open firewall on the domR before creating an actual PF rule there
Modified backend calls appropriately.
3) Schema changes for the firewall_rules table (see the sketch after this list):
* startPort/endPort can be null now
* added icmp_type, icmp_code fields (can be non-null only when the protocol is icmp)
4) Added new manager - FirewallManagerImpl
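A sketch of the firewall_rules changes from 3) (hedged; the db column names start_port/end_port are assumptions based on the API names above):
-- make the port range nullable and add the icmp fields
ALTER TABLE `cloud`.`firewall_rules`
  MODIFY `start_port` int unsigned DEFAULT NULL,
  MODIFY `end_port` int unsigned DEFAULT NULL,
  ADD COLUMN `icmp_type` int DEFAULT NULL,
  ADD COLUMN `icmp_code` int DEFAULT NULL;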
Conflicts:
api/src/com/cloud/api/BaseCmd.java
client/tomcatconf/commands.properties.in
server/src/com/cloud/api/ApiResponseHelper.java
server/src/com/cloud/configuration/DefaultComponentLibrary.java
server/src/com/cloud/network/lb/LoadBalancingRulesManagerImpl.java
server/src/com/cloud/network/rules/RulesManagerImpl.java
* when a management server dies and notifies other management servers about this, the running management server has to clean up host_transfer records belonging to the dead management server
* issue the agent load balancing task only when the agent load (number of connected agents in the system) exceeds "agent.load.threshold" - 70% by default
Conflicts:
server/src/com/cloud/configuration/Config.java
setup/db/db/schema-228to229.sql
The steps to upgrade xenserver:
1. Put the cluster in the Unmanaged state through the UI; the MS will then not talk to hosts in the cluster
2. Upgrade xenserver according to the XenServer upgrade guide
3. Put the cluster in the Managed state through the UI; the MS will then reconnect the hosts
TODO:
1. UI
2. vm pool sync, leveraged from Kelven's work
Part 2
commit 797839360c65cd348d2eb20630521177ab0919de
bug 9154: redundant virtual router
commit 8ff7f230204d4d3a7a4adee75523a9a84f4276fe
bug 9154: Replace domain_router.is_master with domain_router.redundant_state in DB
commit 230b99e9e0b152648f1dd2a5eab6f22315b8e7b4
bug 9154: Add redundant state to DomainRouterResponse
commit ccefb5ff5e83d713798a347c99bce1a0d04b4317
bug 9154: Add router fault state report
commit 7a3090378f9785caecf741b70554f6ea17c41764
bug 9154: Send alert if found two virtual routers in master state
commit 66831056e4bf27665871bccd24e6159071564847
bug 9154: Code clean up
commit bf3f58a85741fa7118bd848a42d8b21baa4478d4
bug 9154: Add isRedundantRouter to DomainRouterResponse
Part 1
This backport contained:
commit 52317c718c25111c2535657139b541db0c9d1e1f
bug 9154: Initial check in for enabling redundant virtual router
commit 54199112055d754371bfb141168fb5538bf6d6ea
Add host verification for CheckRouterCommand
commit cef978a228c90056ead9be10cbc4de74c2b8de76
Fix CheckRouterAnswer's isMaster report
commit 4072f0a6991ac3b63601a1764fbe14188965f62f
Some build fixes and code refactoring for redundant router
commit 4d3350b7cd8ee2706a9bace4437fc194e36c8dd5
Redundant Router: Fix OVS
commit 6a228830e7c46d819fa0c3317e159e041337e887
Fix findByNetwork()/findByNetworkAndPod()'s return
commit c627777b3d5bdbcd60db4032cebd349a5b1ecd83
Redundant Router: Fix isVmAlive()
commit e1275d2514adc41f8744f5107d4069c38be195f1
Only issue CheckRouterCommand to redundant routers
And all modifications to the scripts up to
commit 4e3942462ed3fde3a3d7011e95839e2128fba514
logging changes
in the master branch.
This reverts commit 5537e65129f669da7ca9a22e53cc3daa9e69ac97.
Reverting the fix because only the unique key was missing; the column was inserted as a part of 225 to 226 upgrade
* remove lb to vm mappings for Removed vms (result of the bug in 2.1.x when mappings weren't removed during the vm expunge)
* encode.api.response is false by default
status 10305: resolved fixed
While creating a system vm offering, specify the type. If no type is specified, the default is domainrouter.
While requesting a set of system offerings, specify the parameter systemvmtype.
status 9697: resolved fixed
Do encoding for ASCII chars only (done to eliminate problems with multiple language support)
To disable encoding, set "encode.api.response" to false
This reverts commit 97f2b9936a8b9e3a057116d327b058253458b4ef.
Use the following solution instead:
* add a unique_name field to the network_offerings table; use this field as a unique offering identifier in the code (see the sketch below)
* added db upgrade steps to the 225to226 sql script
Conflicts:
server/src/com/cloud/offerings/NetworkOfferingVO.java
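A sketch of the unique_name addition (only the column name and table come from this commit; type and length are assumptions):
-- assumption: varchar identifier used by the code to look up offerings
ALTER TABLE `cloud`.`network_offerings` ADD COLUMN `unique_name` varchar(64) DEFAULT NULL;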
* remove records from load_balancer_vm_map if the parent record is missing in the load_balancer table
* remove user vm records from the vm_instance table when the corresponding record is missing in the user_vm table
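A sketch of the two cleanup steps above (hedged; the join columns and the vm type value are assumptions):
-- drop mappings whose parent load_balancer row is gone
DELETE m FROM `cloud`.`load_balancer_vm_map` m
LEFT JOIN `cloud`.`load_balancer` lb ON m.load_balancer_id = lb.id
WHERE lb.id IS NULL;
-- drop user vm rows with no matching user_vm record
DELETE vi FROM `cloud`.`vm_instance` vi
LEFT JOIN `cloud`.`user_vm` uv ON vi.id = uv.id
WHERE vi.type = 'User' AND uv.id IS NULL;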
2) Added new config parameter 'allow.subdomain.network.access' - default value is true. If it's set to false, the child domain can't use the network of the parent domain
Conflicts:
server/src/com/cloud/network/NetworkManagerImpl.java
This patch enables redundant virtual routers.
1. To enable this feature, the db needs to be updated using the following SQL for
now (we would get a UI way later):
UPDATE network_offerings SET redundant_router=1 WHERE guest_type="Virtual" AND
system_only=0;
2. The system would try to start up the two routers on different hosts. But if there
is only one host in the zone, the system would start up both routers on it.
3. The failover part uses keepalived, and the connection tracking part uses
conntrackd. There would be one master router and one backup router. The status
of a router (master or backup) can now be queried from the database table
domain_router. The management server updates the status every 30s by default.
4. The routers for the same zone would use the same external NIC (same ip and mac).
The script used for fail-over ensures only one external NIC is present in the
network at any time.
5. Currently the management server doesn't have the ability to stop one of the
routers if both of them report as master. That feature is on the todo list.
After the two routers start up, disconnecting either of them shouldn't affect the
guest network, and established connections (http, ssh, etc.) should still work.
The fail-over on the gateway part should take 3~4 seconds.
Currently the patch works with KVM. We would deal with vmware and XenServer soon.