2) Re-apply all existing firewall rules as part of the implement call. TODO: Clean up all existing rules from the backend (leave them in the DB) as part of the shutdown call
1. Able to restore a VM from its original template
2. Only allow restore when the VM is running or stopped
3. After restoring, the VM state does not change, e.g. a running VM stays running
status 9949: resolved fixed
TODO:
- Still leaving the provider columns in the data_center schema as-is for CloudKit and BareMetal
- ExternalNetworkDeviceMgrImpl still needs fixes to the dataCenter.setProviders calls and to the externalNetworkApplicance usage checks that determine whether a zone has external networking.
Limitations:
* can't upgrade to a network offering with a smaller number of services
* can upgrade only from an offering whose service provider is internal (domR, dhcp, elb) to an offering with an external provider type
- Bringing add/delete/list of all external network devices under one unified set of APIs (addNetworkDevice, deleteNetworkDevice, listNetworkDevice)
- Refactoring the external network manager to work from both sets of APIs: add/delete/list NetworkDevice and add/delete/list External Firewall/LoadBalancer
- Make all API commands async and add events
- Make BroadcastDomainRange case insensitive
- Process all _networkElements to build the Service -> Provider map during NetworkMgr::configure() (see the sketch below)
It would cover the configuration of DHCPElement, VirtualRouterElement and
RedundantVirtualRouterElement.
Also add a foreign key in the domain_router table to record which element the
domain_router was created from and which configuration it uses.
Add a configure command for these virtual-router-based elements. The commands
should be different for different elements.
The configuration context will be added later.
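A minimal sketch of how such a Service -> Provider map could be assembled while iterating the registered elements; the enum and interface names below are illustrative stand-ins, not CloudStack's actual classes:

    import java.util.*;

    class ServiceProviderMapSketch {

        enum Service { Dhcp, Dns, Firewall, SourceNat, Lb }
        enum Provider { VirtualRouter, RedundantVirtualRouter, ExternalFirewall }

        interface NetworkElement {
            Provider getProvider();
            Set<Service> getCapabilities();
        }

        // Built once during configure(); afterwards "which providers offer service X?"
        // is a simple map lookup instead of a poll of every element.
        static Map<Service, Set<Provider>> buildServiceProviderMap(List<NetworkElement> elements) {
            Map<Service, Set<Provider>> map = new EnumMap<>(Service.class);
            for (NetworkElement element : elements) {
                for (Service svc : element.getCapabilities()) {
                    map.computeIfAbsent(svc, s -> EnumSet.noneOf(Provider.class))
                       .add(element.getProvider());
                }
            }
            return map;
        }
    }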
- Create Zone changes and changes to the data_center table to remove the vlan and securityGroup fields
- Physical Network lifecycle APIs
- Physical Network Service Provider APIs
- DB schema changes
1. Load hosts that are in maintenance mode, because maintenance is no longer an agent status
2. Don't disconnect the agent when entering maintenance mode; again, maintenance is no longer an agent status
* Moved all services to a separate table and mapped them to network_offering + provider
* Added state/securityGroupEnabled properties to the networkOffering
* Added the ability to list by state/securityGroupEnabled in the listNetworkOfferings API command
2) New service: SourceNat
Changes:
- Added a new interface 'PluggableService'
- Any component that can be packaged separately from CloudStack can implement this interface and provide its own property file listing the API commands the component supports
- As an example, VirtualNetworkApplianceService has been made pluggable and a new configureRouter command is added
- ComponentLocator reads all the pluggable services from the componentLibrary or from components.xml and instantiates the services.
- As an example, DefaultComponentLibrary adds the pluggable service 'VirtualNetworkApplianceService'
- Also, components.xml.in has an entry to show how a pluggable service can be added, but it is commented out.
- APIServer now reads the commands for each pluggable service and, when a command for such a service is called, APIServer sets the required instance of the pluggable service in the command.
- To do this, a new annotation '@PlugService' is added that is processed by APIServer. This eliminates the dependency on BaseCmd to instantiate the service instances (see the sketch below).
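A rough sketch of the shape of this mechanism; the annotation/interface bodies and the injection loop below are illustrative, the real definitions live in the CloudStack source and may differ:

    import java.lang.annotation.*;
    import java.lang.reflect.Field;
    import java.util.List;
    import java.util.Map;

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface PlugService { }

    interface PluggableService {
        // The component ships a property file listing the API commands it supports.
        List<String> getApiCommands();
    }

    interface VirtualNetworkApplianceService extends PluggableService {
        boolean configureRouter(long routerId);
    }

    class ConfigureRouterCmdSketch {
        // Injected by the API server; the command no longer relies on BaseCmd
        // to hand it service instances.
        @PlugService
        VirtualNetworkApplianceService routerService;
    }

    class ApiServerSketch {
        // Walk the command's fields and set the matching pluggable service instance.
        static void injectPlugServices(Object cmd, Map<Class<?>, PluggableService> services)
                throws IllegalAccessException {
            for (Field f : cmd.getClass().getDeclaredFields()) {
                if (f.isAnnotationPresent(PlugService.class)) {
                    f.setAccessible(true);
                    f.set(cmd, services.get(f.getType()));
                }
            }
        }
    }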
Since we will introduce a way to specify each service provider in the network
offering, it is better to treat the redundant virtual router as a separate service
provider.
Also, the isRedundant() flag in the network offering will be removed. The redundant
virtual router temporarily won't work from now on, until we're able to add
different network elements/service providers to network_offering.
In the past, NetworkElement had to cover almost all the functionality that
e.g. a virtual router can provide: firewall, source NAT, static NAT, password,
VPN... So anyone wanting to implement a NetworkElement had to implement
these service-specific methods, even if the element didn't support them. Also, if we
want to find e.g. a FirewallServiceProvider, we have to go through all the current
network service providers and call a method on each to learn whether it supports the service.
That is neither an elegant nor a scalable way to do it.
As a first step, this patch separates each ServiceProvider interface from NetworkElement
(some interfaces are already outside NetworkElement, so this patch slightly
modifies them too); only the classes that implement the correlated interface
have the ability to provide these services.
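A small sketch of the idea, with simplified stand-in types: capability-specific interfaces are split out of NetworkElement, so callers select providers by interface rather than asking every element whether it supports a service:

    import java.util.ArrayList;
    import java.util.List;

    interface NetworkElement {
        String getName();
    }

    // Only elements that really provide the service implement the capability interface.
    interface FirewallServiceProvider extends NetworkElement {
        boolean applyFirewallRules(List<String> rules);
    }

    interface SourceNatServiceProvider extends NetworkElement {
        boolean applySourceNat(String publicIp);
    }

    class ProviderLookupSketch {
        static List<FirewallServiceProvider> firewallProviders(List<NetworkElement> elements) {
            List<FirewallServiceProvider> result = new ArrayList<>();
            for (NetworkElement e : elements) {
                if (e instanceof FirewallServiceProvider) {
                    result.add((FirewallServiceProvider) e);
                }
            }
            return result;
        }
    }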
Changes:
- VirtualMachineMgr puts the constraint that if the root volume is already READY, we provide the clusterId in the plan to the deployment planner. The planner then searches for resources only under that cluster.
- If no deployment can be found, deploying the VM fails.
- Fixed this so that, in case the root volume is recreatable, we call the planner again after removing the cluster constraint. The planner will then search for resources in other clusters (see the sketch below).
- Works for system VMs (SSVM, console proxy, virtual routers).
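A sketch of that retry, using hypothetical types and method names: plan inside the root volume's cluster first, and only if the volume is recreatable and nothing fits, plan again without the cluster constraint:

    class PlannerRetrySketch {

        static class DeploymentPlan {
            final Long clusterId; // null means no cluster constraint
            DeploymentPlan(Long clusterId) { this.clusterId = clusterId; }
        }

        interface Planner {
            // Returns a destination host id, or null if no suitable resources were found.
            Long plan(DeploymentPlan plan);
        }

        static Long planDeployment(Planner planner, Long rootVolumeClusterId, boolean rootVolumeRecreatable) {
            Long dest = planner.plan(new DeploymentPlan(rootVolumeClusterId));
            if (dest == null && rootVolumeRecreatable) {
                // The root disk can be recreated elsewhere, so drop the cluster
                // constraint and search the other clusters.
                dest = planner.plan(new DeploymentPlan(null));
            }
            return dest; // still null => the deployment fails
        }
    }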
This reverts commit 42ab3c94c210d5a29289a5dfd0e44ae99c427f8b.
The commit may not fit our new network-as-a-service framework: because we
will make the single router and the redundant router two different service providers,
a change of network offering should clean up the old network and then set up
the new one. Making a single router work as a redundant router afterwards makes
no sense in that setup.
Then every router would have one guest IP. The gateway IP would be used if the
router is not redundant; otherwise the guest IP would be used for the guest network.
Move all listXxx interfaces from HostDao to the managers (ResourceManager, SecondaryStorageVmManager, etc.) with decent names using SearchCriteria2,
or call SearchCriteria2 directly on demand.
Changes:
- We were ordering clusters based on the capacity of the first-fit host found in each cluster. Due to this, there were cases where we deployed VMs to one cluster instead of balancing across clusters.
- Now we order the list of clusters by aggregate capacity and choose the ones that have enough capacity for the required VM, in that order (see the sketch below).
- This should balance the load across clusters instead of bombarding one.
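A sketch of the ordering, with illustrative types: aggregate each cluster's free capacity, keep only clusters that can fit the requested VM, and try them from most to least free capacity:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class ClusterOrderingSketch {

        static class ClusterCapacity {
            final long clusterId;
            final long freeCapacity; // aggregated over all hosts in the cluster
            ClusterCapacity(long clusterId, long freeCapacity) {
                this.clusterId = clusterId;
                this.freeCapacity = freeCapacity;
            }
        }

        static List<Long> orderClusters(List<ClusterCapacity> clusters, long requiredCapacity) {
            List<ClusterCapacity> fitting = new ArrayList<>();
            for (ClusterCapacity c : clusters) {
                if (c.freeCapacity >= requiredCapacity) {
                    fitting.add(c);
                }
            }
            // Most free aggregate capacity first.
            fitting.sort(Comparator.comparingLong((ClusterCapacity c) -> c.freeCapacity).reversed());
            List<Long> ordered = new ArrayList<>();
            for (ClusterCapacity c : fitting) {
                ordered.add(c.clusterId);
            }
            return ordered;
        }
    }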
Conflicts:
server/src/com/cloud/capacity/dao/CapacityDao.java
server/src/com/cloud/capacity/dao/CapacityDaoImpl.java
Changes:
- Added a new API 'migrateSystemVm', backed by MigrateSystemVMCmd.java, to migrate system VMs (SSVM, console proxy, domain routers (router, LB, DHCP))
- This is an admin-only action
- The existing API 'migratevirtualmachine' is only for user VMs
1) Introduce new managers - ProjectManager and DomainManager. Moved all domain-related code from AccountManager to DomainManager.
2) Moved some code from ManagementServerImpl to the correct managers.
3) New resource limit for Domain - Project
Reviewed-by: Alex
Changes:
- When the management server starts, it goes through all the pending work items from the op_it_work table and schedules HA work for each. It used to mark each item as done. Instead we should keep the item pending and let it get marked as done after the HA work is done.
- Changes in VirtualMachineMgr::advanceStop() (see the sketch below):
a) if we find a VM with a null hostId, we stop the VM only if it is force-stopped.
b) if the VM state transition to Stopping fails, for the states Starting and Migrating we try to find the pending work item and then clean up the VM. If the state is Stopping we can clean up directly.
c) We proceed to release all resources only if the state transitioned to 'Stopping'.
- Changes in HA:
a) Depend on VirtualMachineMgr::advanceStop() to do the VM cleanup in case the host is not found
- When the VM state between the mgmt server and the agent syncs from Starting -> Running, mark any pending work item as done.
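A condensed sketch of those decision points in advanceStop(), using hypothetical names and leaving out the real state machine:

    class AdvanceStopSketch {

        enum State { Starting, Migrating, Stopping, Running }

        interface Cleanup {
            void cleanupWithPendingWorkItem(long vmId); // Starting/Migrating: locate the work item first
            void cleanupDirectly(long vmId);            // Stopping: clean up right away
            void releaseResources(long vmId);           // nics, volumes, ...
        }

        static boolean advanceStop(long vmId, Long hostId, boolean forced, State state,
                                   boolean transitionedToStopping, Cleanup cleanup) {
            if (hostId == null && !forced) {
                return false; // a VM with a null hostId is stopped only when force-stopped
            }
            if (!transitionedToStopping) {
                if (state == State.Starting || state == State.Migrating) {
                    cleanup.cleanupWithPendingWorkItem(vmId);
                } else if (state == State.Stopping) {
                    cleanup.cleanupDirectly(vmId);
                }
                return false;
            }
            // Resources are released only after the transition to Stopping succeeded.
            cleanup.releaseResources(vmId);
            return true;
        }
    }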
Conflicts:
server/src/com/cloud/vm/VirtualMachineManagerImpl.java
reviewed-by: Alex/Kelven
Changes:
1. UserVmManagerImpl :: finalizeStart()
Added a null check for the cmds.getAnswers() object. Return 'true' if null.
2. VirtualMachineManagerImpl :: advanceStart()
Move the line that sets the podId on the VM being started above the state transition where the hostId gets set, so that podId is not null if the management server goes down while the VM starts on the agent. On restart, podId is not updated during full sync, so this prevents podId from remaining null.
vm.setPodId(dest.getPod().getId());
Added two new values, "all" and "default", to the global config "network.loadbalancer.haproxy.stats.visibility". With this change, it can take six possible values:
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network (for Xen and KVM).
disabled - stats disabled.
all - stats available on the public, guest and link-local networks. (newly added)
default - stats available on the serving HTTP port; this does not need any specific HTTP port. (newly added)
Except for "default" and "disabled", the remaining four values require the stats port to be configured.
Force-stopping the router releases all the resources it used, but the router may
still be running. Add a column "stop_pending" in the database, and stop the router
when it comes back.
The admin can choose to force-destroy such a router, then recover the
network using the restartNetwork command with cleanup=false.
Changes:
A KVM agent always connects to the management server itself; we don't have to do direct connect. This part of the code was missing the update of the DB host entry with host tags.
Corrected the code to save the host tags while adding a KVM host.
Remove the heartbeat entry for this primary storage when putting it into maintenance mode.
Create the heartbeat entry for this primary storage when cancelling maintenance for it.
status 11275: resolved fixed
status 11036: resolved fixed
1) Use row locks instead of a global lock when updating the resource_count table. When updating resource_count for an account, make sure that we lock the account + all related domains (see the sketch below).
2) Insert resource_count records for the account/domain at the moment the account/domain is created.
3) As part of the DB upgrade, insert missing resource_count records for all non-removed accounts/domains.
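A minimal JDBC sketch of the row-locking idea; the column names follow the commit text, but the real resource_count schema and DAO plumbing may differ:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class ResourceCountLockSketch {

        static void incrementAccountCount(Connection conn, long accountId, long[] relatedDomainIds,
                                          String type, long delta) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement lockAccount = conn.prepareStatement(
                     "SELECT id FROM resource_count WHERE account_id = ? AND type = ? FOR UPDATE");
                 PreparedStatement lockDomain = conn.prepareStatement(
                     "SELECT id FROM resource_count WHERE domain_id = ? AND type = ? FOR UPDATE");
                 PreparedStatement update = conn.prepareStatement(
                     "UPDATE resource_count SET count = count + ? WHERE account_id = ? AND type = ?")) {
                lockAccount.setLong(1, accountId);
                lockAccount.setString(2, type);
                lockAccount.executeQuery();              // row lock on the account's counter only
                for (long domainId : relatedDomainIds) { // plus every related domain's counter
                    lockDomain.setLong(1, domainId);
                    lockDomain.setString(2, type);
                    lockDomain.executeQuery();
                }
                update.setLong(1, delta);
                update.setLong(2, accountId);
                update.setString(3, type);
                update.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }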
Conflicts:
core/src/com/cloud/alert/AlertManager.java
server/test/com/cloud/agent/MockAgentManagerImpl.java
clean up tests for security group manager v2
move interval to listener -- allows it to be configurable if needed
fix mocks
Enhanced logging for security group manager (from zucchini)
fix merge issues
merge issues
Changes :
- Fixing API doc + response name + error message
- Adding separate events for egress rules
- Egress rules use the same database table as ingress, with a new 'type' column.
Pending Tasks:
- db upgrade
- Database table rename from security_ingress_rule to a generic name; renaming some of the Java classes from ingress to a generic name.
- Retesting on kvm
Changes:
To make sure migration does not attempt to pick a host that has more running VMs than the max guest VM limit:
- Changed manual migration to call the host allocators to return a list of hosts suitable for migration. Host allocators check for the max guest VM limit.
- Earlier we returned hosts with enough capacity, but now the host allocators make other checks along with capacity. So the hosts returned have enough capacity AND satisfy all other conditions like host tags, max guests limit, etc. In other words, allocators don't return hosts that don't satisfy all conditions, even if they have capacity.
- Therefore, we now mark the list of hosts returned for manual migration as 'suitable' hosts instead of 'hasenoughCapacity' in the HostResponse.
- HA migration already calls the allocators, so no change is needed there.
Changes:
- Adding a new table 'hypervisor_capabilities' that will record capabilities for each hypervisor version. Added DB schema changes for this.
- Currently a few capabilities have been added, namely 'max_guests_limit' and 'security_group_enabled'.
- Added a new column 'hypervisor_version' to the host table. The StartupRouting command now takes in this parameter. It should be set when a host connects.
- If a host's hypervisor version is not present, we find all the capabilities rows for that hypervisor type and use the first record.
- 'max_guests_limit' is the maximum number of running guest VMs that a host can have for the given hypervisor.
- Host allocators use this limit and skip a host if the number of running VMs on that host exceeds this limit (see the sketch below).
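A sketch of that allocator check, with hypothetical DAO names:

    class MaxGuestsLimitSketch {

        interface HypervisorCapabilitiesDao {
            // Per the commit: fall back to the first row for the hypervisor type
            // when the exact version is not recorded.
            long getMaxGuestsLimit(String hypervisorType, String hypervisorVersion);
        }

        interface VmInstanceDao {
            long countRunningVmsOnHost(long hostId);
        }

        static boolean hostWithinGuestLimit(long hostId, String hypervisorType, String hypervisorVersion,
                                            HypervisorCapabilitiesDao capabilities, VmInstanceDao vms) {
            long limit = capabilities.getMaxGuestsLimit(hypervisorType, hypervisorVersion);
            // Skip the host if it already runs as many guests as the limit allows.
            return vms.countRunningVmsOnHost(hostId) < limit;
        }
    }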
status 11326: resolved fixed
Also added more logging to the agent rebalance code.
Conflicts:
server/src/com/cloud/agent/manager/ClusteredAgentManagerImpl.java
When we apply rules or start a new VM, we may encounter some running routers that
we can't program. That can be due to a network issue, the host being down, vCenter
being disconnected, etc. To keep them in sync, we would stop them, but only
when there is another router we've successfully updated. If we are unable to
communicate with both routers, we simply give up and report it to the user.
Conflicts:
server/src/com/cloud/network/router/VirtualNetworkApplianceManagerImpl.java
Now the logic is: if we can connect to only one of the two redundant routers, we
stop the one that can't be connected. If we fail to program both routers,
just let it go (see the sketch below).
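A compact sketch of that rule with illustrative names: stop the unreachable router only when its peer was programmed successfully; if neither can be programmed, give up and report it:

    class RedundantRouterSyncSketch {

        interface RouterOps {
            boolean program(long routerId); // false when the router can't be reached/programmed
            void stop(long routerId);
            void reportFailure(String message);
        }

        static void applyToRedundantPair(long routerA, long routerB, RouterOps ops) {
            boolean okA = ops.program(routerA);
            boolean okB = ops.program(routerB);
            if (okA && !okB) {
                ops.stop(routerB);   // keep the pair consistent with the applied rules
            } else if (okB && !okA) {
                ops.stop(routerA);
            } else if (!okA && !okB) {
                ops.reportFailure("Could not program either redundant router");
            }
        }
    }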
Description:
APIs:
- Two new APIs, authorizeSecurityGroupEgress and revokeSecurityGroupEgress, are added. These two APIs are similar to the ingress rule APIs.
- authorizeSecurityGroupEgress: authorizes a particular egress rule for this security group. Usage of this API is very similar to that of authorizeSecurityGroupIngress, except that instead of a source CIDR there is a destination CIDR. By default, as with ingress, all the outgoing flows are blocked.
- revokeSecurityGroupEgress: similar to the revokeSecurityGroupIngress API; it removes the egress rule.
- The listSecurityGroup API's response changed. It includes the egress rules list apart from the existing ingress rules in the output of the API.
Hypervisors :
- It is implemented for Xen and KVM.
Pending Tasks: Blocking using destination security groups.
Previous commits: c9fda641673df7701f44963ef27e1d488f121219 , 24e4e44b8f0712a37147a3777833de3f9e24829e
- Adding support for Netscaler VPX & MPX load balancers
- Implemented for virtual networking
- Works only with newly fetched public IPs; inline support is not added yet
more details will be added in the bug
Added a new value "link-local" to the global config network.loadbalancer.haproxy.stats.visibility. With this change it can take the new value "link-local" apart from the existing 3 values: global, guest-network, disabled.
global - stats visible from the public network.
guest-network - stats visible only to the guest network.
link-local - stats visible only to the link-local network.
disabled - stats disabled.
Changes:
- Changes to updateHostCmd to accept a hosttags parameter
- Changes to wipe out the existing tags and save the new ones in the host_tags DB table
- UpdateHost is an admin-only operation, so only the root admin can update host tags
Changes:
- CreateTemplate and RegisterTemplate now support adding a template tag. It is a string value. This is a root-admin-only action: only the admin can add template tags.
- ListTemplates will return the template tag in the response.
- The HostAllocator changed to use the template tag along with the existing tag on the service offering. If both tags are present, the allocator now finds hosts satisfying both tags. If no hosts have both tags, allocation fails (see the sketch below).
- DB changes to add a new column to the vm_template table.
- DB upgrade changes for the upgrade from 2.2.10 to 2.2.11
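A sketch of the combined tag check with simplified stand-in types: when both a service offering tag and a template tag are present, only hosts carrying both remain candidates:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    class TagMatchingSketch {

        static List<Long> filterHostsByTags(Map<Long, Set<String>> hostIdToTags,
                                            String serviceOfferingTag, String templateTag) {
            List<Long> suitable = new ArrayList<>();
            for (Map.Entry<Long, Set<String>> entry : hostIdToTags.entrySet()) {
                Set<String> tags = entry.getValue();
                boolean offeringOk = serviceOfferingTag == null || tags.contains(serviceOfferingTag);
                boolean templateOk = templateTag == null || tags.contains(templateTag);
                if (offeringOk && templateOk) {
                    suitable.add(entry.getKey());
                }
            }
            return suitable; // empty => allocation fails when tags are required
        }
    }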
previous commit: c9fda641673df7701f44963ef27e1d488f121219 (this is under bug 1067, typing error)
changes: 1) partially implemented listing of egress rules along with ingress rules.
2) partially implemented egress rules for KVM