Description:
1. Added the PortProfile infrastructure:
a. PortProfileVO: The VO class that represents a db
   record of the port_profile table. Each db record
   represents one port profile. (A minimal sketch
   follows this description.)
b. PortProfileDao: The interface that declares search
functions on the port_profile table.
c. PortProfileDaoImpl: The class that implements the
   search functions declared in PortProfileDao.
d. PortProfileManagerImpl: The class that contains
   routines to add or delete db records in the
   port_profile table. To create or delete a port
   profile, call functions from this class.
e. Changes to create-schema.sql to create the port_profile
table.
2. Cleaned up code:
a. Removed a number of unused Dao and Manager objects in
CiscoNexusVSMDeviceManagerImpl.
b. Removed the ListCiscoNexusVSMNetworksCmd command.
c. Removed unused import statements in a few files.
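For illustration, a minimal sketch of PortProfileVO following the usual
CloudStack VO pattern; the column set shown (id, uuid, port_profile_name)
is an assumption, not the actual schema:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Illustrative sketch only: the real port_profile columns may differ.
    @Entity
    @Table(name = "port_profile")
    public class PortProfileVO {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "id")
        private long id;

        @Column(name = "uuid")
        private String uuid;

        @Column(name = "port_profile_name")
        private String portProfileName;

        public long getId() { return id; }
        public String getUuid() { return uuid; }
        public String getPortProfileName() { return portProfileName; }
    }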
Description:
1. Missed replacing the older table name for VSMs in a few
files (changed the name from
external_virtual_switch_management_devices to
virtual_supervisor_module). Fixed that in this commit.
2. Missed adding the new Dao ClusterVSMMapDao to the Dao
   loading in DefaultComponentLibrary. Fixed.
3. Fixed the wrong SearchBuilder options passed to ipaddrSearch
   in CiscoNexusVSMDeviceDaoImpl (see the sketch below).
At this point, the mgmt server comes up and loads the
Nexus-related modules without dying.
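For reference, the corrected ipaddrSearch wiring in CiscoNexusVSMDeviceDaoImpl
would look roughly like the sketch below; the VO getter name getIpaddr() is an
assumption:

    // Hedged sketch of the SearchBuilder setup (DAO constructor):
    ipaddrSearch = createSearchBuilder();
    ipaddrSearch.and("ipaddr", ipaddrSearch.entity().getIpaddr(), // getter name assumed
            SearchCriteria.Op.EQ);
    ipaddrSearch.done();

    // ...and its use in a finder method:
    SearchCriteria<CiscoNexusVSMDeviceVO> sc = ipaddrSearch.create();
    sc.setParameters("ipaddr", ipaddress);
    return findOneBy(sc);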
Description:
1) Added a new properties file for Cisco N1kv VSM commands:
cisconexusvsm_commands.properties.in
2) Added the CiscoNexusVSMElement to the components.xml file.
3) Modified CiscoNexusVSMElement to implement NetworkElement.
   The NetworkElement interface functions are not
   relevant to the N1KV VSM, so we override them
   with noops (see the sketch after this list).
4) Added an addDao() of CiscoNexusVSMDeviceDaoImpl in populateDaos();
   otherwise we'd run into a failure to look up the VSM's DAO when the
   mgmt server is starting up:
com.cloud.utils.exception.CloudRuntimeException: Unable to find DAO com.cloud.network.dao.CiscoNexusVSMDeviceDao
5) Also added the CiscoNexusVSMElementService in populateServices(),
and modified CiscoNexusVSMElement to implement Manager as well.
6) populateServices() was running into an exception that indicated
that it was unable to find a commands.properties file for the
Cisco N1KV VSM service. Fixed it by changing getProperties() in
CiscoNexusVSMElement to return the correct string
"cisconexusvsm_commands.properties", and putting in an @Override
for getProperties() in CiscoNexusVSMElement. Also fixed up all
the other functions in CiscoNexusVSMElement that needed to have
@Override. Also updated build/developers.xml with this file
location. And did other small cleanup.
7) More clean up in CiscoNexusVSMDeviceManagerImpl.
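Items 3) and 6) boil down to the pattern sketched below; the NetworkElement
method signature is abbreviated, since the exact interface varies by version:

    // From item 6): the returned string must match the new properties file,
    // otherwise populateServices() cannot find the command mappings.
    @Override
    public String getProperties() {
        return "cisconexusvsm_commands.properties";
    }

    // From item 3): NetworkElement callbacks that do not apply to the N1KV
    // VSM are overridden as noops (signature illustrative).
    @Override
    public boolean implement(Network network, NetworkOffering offering,
            DeployDestination dest, ReservationContext context) {
        return true; // noop: the VSM takes no part in network implement
    }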
Conflicts:
server/src/com/cloud/configuration/DefaultComponentLibrary.java
This fix will enable support for multiple NetScaler devices providing EIP service in the same zone.
- Introduced global setting "eip.use.multiple.netscalers" to turn on multiple NetScaler support
- Enhanced the configureNetscalerLoadBalancer API to take the PBR setup between the pod's subnet
  and the NetScaler device
- Added logic to pick a NetScaler (based on the guest IP and corresponding pod) while configuring
  the INAT rule (see the sketch below)
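A hedged sketch of that selection step; the helper names are hypothetical and
only illustrate the guest-IP-to-pod-to-device lookup described above:

    // Hypothetical sketch: with multiple NetScalers in a zone, pick the device
    // whose PBR setup serves the pod that owns the guest IP, then configure
    // the INAT rule on it.
    private ExternalLoadBalancerDeviceVO pickNetscalerForInatRule(long zoneId, String guestIp) {
        HostPodVO pod = findPodForGuestIp(zoneId, guestIp);   // hypothetical helper
        return findNetscalerMappedToPod(pod.getId());         // hypothetical helper
    }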
2) Added a new API - changeServiceForSystemVm - to support service offering upgrades for system VMs (example below)
3) Removed global config parameters that are no longer in use: consoleproxy.ram.size, consoleproxy.cpu.mhz, secstorage.vm.ram.size, secstorage.vm.cpu.mhz
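For item 2), an invocation would look roughly like this (placeholder IDs; the
serviceofferingid parameter name is assumed to follow the existing
changeServiceForVirtualMachine convention):

    http://<management-server>:8080/client/api?command=changeServiceForSystemVm
        &id=<system-vm-id>&serviceofferingid=<new-offering-id>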
Reviewed-by: Kishan
Changes:
- Separated the External Network Usage task out of the ExternalLBDeviceMgr, because ExternalLbDeviceMgrImpl::start() was getting called multiple times during management server startup. The reason is that this class is the base class for both the F5 and NetScaler elements.
- This caused us to schedule the ExternalNetworkUsageTask multiple times.
- We also had LBRulesMgr calling into ExternalLbDeviceMgrImpl by creating an instance of this class, even though it is declared abstract.
- Hence, a separate implementation that manages the network usage stats solves this (see the sketch below).
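A sketch of the separation under these assumptions (class name and interval are
illustrative; ExternalNetworkUsageTask is assumed to be a Runnable):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative sketch: one concrete component owns the timer, so the task
    // is scheduled exactly once, no matter how many elements subclass the
    // device manager.
    public class ExternalNetworkUsageManagerImpl {
        private final ScheduledExecutorService _executor =
                Executors.newSingleThreadScheduledExecutor();

        public boolean start() {
            _executor.scheduleAtFixedRate(new ExternalNetworkUsageTask(),
                    300, 300, TimeUnit.SECONDS); // interval illustrative
            return true;
        }
    }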
Changes:
To migrate systems using 'use.user.concentrated.pod.allocation' as true and 'vm.allocation.algorithm' as true, we need
to make the following changes:
- There will be 5 values for 'vm.allocation.algorithm': 'random', 'firstfit', 'userdispersing', 'userconcentratedpod_random', 'userconcentratedpod_firstfit' (see the sketch after this list)
- 'userconcentratedpod_random' means we apply user concentration to pods and clusters. To hosts and pools we use random ordering.
- 'userconcentratedpod_firstfit' means we apply user concentration to pods and clusters. To hosts and pools we use firstfit ordering.
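A sketch of how the five values might map to ordering heuristics; only the five
configuration strings come from this change, the enum and helper names are
hypothetical:

    // Illustrative dispatch on the vm.allocation.algorithm value.
    class AllocatorSketch {
        enum AllocationAlgorithm {
            RANDOM, FIRSTFIT, USERDISPERSING,
            USERCONCENTRATEDPOD_RANDOM, USERCONCENTRATEDPOD_FIRSTFIT
        }

        void orderCandidates(AllocationAlgorithm algo) {
            switch (algo) {
            case USERCONCENTRATEDPOD_RANDOM:
                orderPodsAndClustersByUserConcentration();
                orderHostsAndPoolsRandomly();
                break;
            case USERCONCENTRATEDPOD_FIRSTFIT:
                orderPodsAndClustersByUserConcentration();
                orderHostsAndPoolsByFirstFit();
                break;
            default:
                // 'random', 'firstfit' and 'userdispersing' apply one
                // heuristic uniformly across pods, clusters, hosts and pools.
                break;
            }
        }

        // Hypothetical helpers, stubbed so the sketch compiles.
        void orderPodsAndClustersByUserConcentration() { }
        void orderHostsAndPoolsRandomly() { }
        void orderHostsAndPoolsByFirstFit() { }
    }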
Changes:
- We do not need these global settings anymore. They will be hidden as of 3.0.
- The default traffic label will be picked from the global setting, which is null by default. A null
  traffic label means the resource uses the tag on the default gateway.
- Changes to invoke the discoverer to reload the resource object on host connection.
- Since a zone can have many physical networks, there can be multiple guest and public networks.
  Only the zone-wide storage and management traffic labels will be stored in host_details henceforth.
- If traffic labels are updated, the discoverer should update host_details.
Reviewed-by: Kishan
Changes:
- When an LB rule is deleted, or the IP address with an LB rule configured is released, an
  ExternalNetworkUsageCommand is fired to gather the usage accumulated on that IP since the last
  run of the ExternalNetworkUsage job (see the sketch below).
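A hedged sketch of that flow; AgentManager.easySend() is the usual way to push
a command to a resource, while the method and field names here are illustrative:

    // Illustrative sketch: before the LB rule (or its public IP) goes away,
    // collect the usage accrued since the last ExternalNetworkUsageTask run.
    private void collectFinalUsage(long lbDeviceHostId) {
        ExternalNetworkUsageCommand cmd = new ExternalNetworkUsageCommand(); // ctor args elided
        Answer answer = _agentMgr.easySend(lbDeviceHostId, cmd);
        if (answer != null && answer.getResult()) {
            // persist the final byte-counter delta for this IP (illustrative)
        }
    }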
If the admin didn't specify a storage traffic type before enabling the zone, create a default
storage traffic type with the same configuration as the management traffic type (see the sketch
below). The UI needs a warning to the admin; will file another bug for this.
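A sketch of that fallback, assuming a physical-network traffic-type DAO along
these (illustrative) lines; the constructor and finder signatures are assumptions:

    // Illustrative sketch: on zone enable, if no storage traffic type exists,
    // clone the management traffic type's labels for storage.
    PhysicalNetworkTrafficTypeVO mgmt =
            _trafficTypeDao.findBy(physicalNetworkId, TrafficType.Management);
    if (mgmt != null
            && _trafficTypeDao.findBy(physicalNetworkId, TrafficType.Storage) == null) {
        _trafficTypeDao.persist(new PhysicalNetworkTrafficTypeVO(
                physicalNetworkId, TrafficType.Storage,
                mgmt.getXenNetworkLabel(), mgmt.getKvmNetworkLabel(),
                mgmt.getVmwareNetworkLabel()));
    }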
status 13250: resolved fixed
2) Added elasticIp and elasticLb network capabilities. Provided support to create a network
   offering with these capabilities (example below).
3) Added one more default network offering having elasticIp and elasticLb.
4) Added public network support to the Basic zone. You can associate/disassociate IP addresses now.
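For item 2), creating such an offering through the API would look roughly like
this (the capability names come from this change; the servicecapabilitylist
map format and placeholder values are assumptions):

    http://<management-server>:8080/client/api?command=createNetworkOffering
        &name=EipElbOffering&supportedservices=...
        &servicecapabilitylist[0].service=Lb
        &servicecapabilitylist[0].capabilitytype=ElasticLb
        &servicecapabilitylist[0].capabilityvalue=true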
Changed the default mount path on the OS where the mgmt server is running to
/var/lib/cloud/management/mnt. Since /var/lib/cloud/management is the home directory of user
'cloud', the mgmt server has full permission to manipulate it.
status 12956: resolved fixed
Admin can either configure system.vm.default.hypervisor, which is a global configuration for all zones, or call updateZone with defaultSystemVMHypervisorType.
status 12139: resolved fixed