2) Re-apply all existing firewall rules as part of the implement call. TODO: Clean up all existing rules from the backend (leave them in the DB) as part of the shutdown call
Limitations:
* can't upgrade to a network offering with a smaller number of services
* can upgrade only from an offering whose service provider is not external (domR, dhcp, elb) to an offering whose provider is of external type
- CreateZone changes and data_center table changes to remove the vlan and securityGroup fields
- Physical Network lifecycle APIs
- Physical Network Service Provider APIs
- DB schema changes
* moved all services to a separate table that maps them to network_offering + provider (see the sketch below)
* added state/securityGroupEnabled properties to the networkOffering
* added the ability to list by state/securityGroupEnabled in the listNetworkOfferings API command
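A hedged sketch of the schema direction described above; the table name and column definitions are assumptions for illustration, not the actual DDL:
-- Hypothetical mapping table: one row per (network offering, service, provider).
CREATE TABLE ntwk_offering_service_map (
  id bigint unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  network_offering_id bigint unsigned NOT NULL,
  service varchar(255) NOT NULL,
  provider varchar(255) NOT NULL
);
-- Hypothetical new columns on the offering itself:
ALTER TABLE network_offerings ADD COLUMN state varchar(32);
ALTER TABLE network_offerings ADD COLUMN security_group_enabled int(1) NOT NULL DEFAULT 0;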
2) New service: SourceNat
Since we will introduce a way to specify each service provider in the network
offering, it's better to model the redundant virtual router as a separate
service provider.
The isRedundant() flag in the network offering will also be removed. The
redundant virtual router temporarily won't work from now on, until we're able
to add different network elements/service providers to the network_offering.
1) Introduced new managers - ProjectManager and DomainManager. Moved all domain-related code from AccountManager to DomainManager.
2) Moved some code from ManagementServerImpl to the correct managers.
3) New resource limit for Domain - Project
status 11036: resolved fixed
1) Use row locks instead of a global lock when updating the resource_count table. When updating resource_count for an account, make sure we lock the account plus all related domains (see the sketch after this list)
2) Insert resource_count records for an account/domain at the moment the account/domain is created.
3) As part of the DB upgrade, insert missing resource_count records for all non-removed accounts/domains
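A hedged sketch of the row-locking pattern described in 1); the resource_count column names (account_id, domain_id, type, count) and the id values are assumptions for illustration:
-- Hypothetical: inside one transaction, lock the account row plus the rows of all
-- related domains before updating the count, instead of taking a global lock.
SELECT * FROM resource_count
WHERE (account_id = 10 OR domain_id IN (2, 1)) AND type = 'user_vm'
FOR UPDATE;
UPDATE resource_count SET count = count + 1
WHERE account_id = 10 AND type = 'user_vm';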
Conflicts:
core/src/com/cloud/alert/AlertManager.java
server/test/com/cloud/agent/MockAgentManagerImpl.java
status 10305: resolved fixed
While creating a system VM offering, specify the type. If no type is specified, it defaults to domainrouter.
While requesting a set of system offerings, specify the parameter systemvmtype.
This reverts commit 97f2b9936a8b9e3a057116d327b058253458b4ef.
Use the following solution instead:
* add a unique_name field to the network_offerings table. Use this field as the unique offering identifier in the code (see the sketch below)
* Added DB upgrade steps to the 225to226 SQL script
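A hedged sketch of what the 225to226 upgrade step might look like; the exact column definition and backfill are assumptions for illustration:
-- Hypothetical upgrade step: add unique_name and backfill it from the existing name.
ALTER TABLE network_offerings ADD COLUMN unique_name varchar(64) UNIQUE;
UPDATE network_offerings SET unique_name = name WHERE unique_name IS NULL;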
Conflicts:
server/src/com/cloud/offerings/NetworkOfferingVO.java
This patch enables redundant virtual routers.
1. To enable this feature, the DB needs to be updated with the following SQL for
now (a UI way will come later):
UPDATE network_offerings SET redundant_router=1 WHERE guest_type="Virtual" AND
system_only=0;
2. The system tries to start the two routers on different hosts, but if there is
only one host in the zone, both routers are started on it.
3. The failover part uses keepalived, and the connection tracking part uses
conntrackd. There is one master router and one backup router. The status of a
router (master or backup) can now be queried from the database table domain_router
(see the query sketch after this list). The management server updates the status
every 30s by default.
4. The routers for the same zone use the same external NIC (same IP and MAC).
The script used for fail-over ensures only one external NIC is present in the
network at any time.
5. Currently the management server cannot stop one of the routers if both of
them report as master. This feature is on the todo list.
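A minimal sketch of the status query mentioned in item 3; the column name redundant_state is an assumption, so check the actual schema first:
-- Hypothetical query: list routers together with their redundant state (master/backup).
SELECT id, name, redundant_state FROM domain_router WHERE removed IS NULL;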
After the two routers start up, disconnecting either of them shouldn't affect
the guest network, and established connections (HTTP, SSH, etc.) should still
work. The gateway fail-over should take 3~4 seconds.
Currently the patch works with KVM. VMware and XenServer support will follow soon.
- CreateZone API creates a zoneToken, inserts it in the DB and returns it in the
response
- UpdateZone API takes in a 'details' map that is loaded into data_center_details (see the sketch below)
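A rough sketch of how the 'details' map might land in data_center_details; the column names (dc_id, name, value) and the sample key are assumptions for illustration:
-- Hypothetical: one row per key/value pair of the UpdateZone 'details' map.
INSERT INTO data_center_details (dc_id, name, value) VALUES (1, 'some.detail.key', 'some-value');
SELECT name, value FROM data_center_details WHERE dc_id = 1;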
Changes:
- The cluster entry is not removed from the table when a cluster is deleted because some foreign key constraints fail if the row delete is attempted. Instead the cluster is marked as 'removed'
- While deleting a pod, changed the check to see if the pod has any clusters - we now check that there are no clusters with a null 'removed' column.
- Also, the pod entry cannot be deleted from the DB due to foreign key constraints, so a 'removed' column was added to the pod table host_pod_ref
- Now on deleting a pod, the pod is marked as removed and the pod name is set to null (see the sketch after this list).
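A minimal sketch of the soft delete described above, assuming a datetime 'removed' column on host_pod_ref; the column type and the id value are illustrative:
-- Hypothetical soft delete: mark the pod as removed instead of deleting the row.
ALTER TABLE host_pod_ref ADD COLUMN removed datetime DEFAULT NULL;
UPDATE host_pod_ref SET removed = now(), name = NULL WHERE id = 42;
-- Pod listings would then filter with: WHERE removed IS NULL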
status 7704: resolved fixed
For a user VM:
* for the default network, take the limit from the corresponding service offering
* for all additional networks, take the limit from the network offerings
For domainRouter/SSVM/CPVM:
* get the info from the network offering
Added a new config parameter: "vm.network.throttling.rate". If nw_rate is NULL for the serviceOffering, this parameter is used for the VM's default network (see the sketch below)
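A rough example of changing the new parameter directly in the global configuration table, assuming the usual configuration(name, value) layout; the value shown is arbitrary and a management server restart may be needed for it to take effect:
-- Hypothetical: set the fallback network rate used when the service offering has no nw_rate.
UPDATE configuration SET value = '200' WHERE name = 'vm.network.throttling.rate';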
- Added a new flag 'allocation_state' to zone, pod, cluster and host
- The possible values for this flag are 'Enabled' or 'Disabled'
- When a new zone, pod, cluster or host is added, allocation_state is 'Disabled' by default.
- For an existing zone, pod, cluster or host, the state is 'Enabled'.
- All Add/Update/List commands for each of zone, pod, cluster or host can now take a new parameter 'allocationstate'
- If 'allocation_state' is 'Disabled', allocators skip that zone, pod, cluster or host (see the sketch after this list).
- For a root admin, ListZones lists all zones including the 'Disabled' zones. But for any other user, the 'Disabled' zones are not included in the response.
- For any use case that creates/deploys/adds/registers a resource and takes a zone as a parameter, we now check whether the zone is 'Disabled'. If so, the operation can only be performed by the root admin. Adding volumes, snapshots and templates are examples of this use case.
- To enable the root admin to test a particular pod/cluster/host, the deployVM command takes a 'host_id' parameter that can be passed in only by the root admin.
If this parameter is passed in by the admin, allocators do not search for hosts and use that host only. Storage pools are searched in the cluster of that host.
If the VM cannot be deployed on that host, the allocators and deployVM fail without retrying
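A hedged sketch of how the flag might be used at the DB level; the table/column names (data_center.allocation_state) are assumptions based on the description above:
-- Hypothetical: disable a zone so allocators skip it, then list all disabled zones.
UPDATE data_center SET allocation_state = 'Disabled' WHERE id = 1;
SELECT id, name FROM data_center WHERE allocation_state = 'Disabled';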
The following was done as part of this check-in:
1) NetworkOffering/Network:
* add PF service support for the default Guest network offering.
* Add one more additional network - Public.
* Allow enabling an external firewall in a Basic zone.
2) Don't allow deployVm in a Public network.
3) Allow adding VLAN IP ranges to Public networks in a Basic zone.
4) Associate IP - allow associating with Direct VMs.
5) Allow creating PF/static NAT rules. Rules are sent to the External Firewall only.
6) Add PF support to External Firewall element.
Bug 7723 - merge or re-write host tagging into master / 2.2
Bug 7627 - Need more logging for Allocators
Bug 8317 - Add better resource allocation failure messages
Changes for the Deployment Planner to use host and storagePool allocators to find the deployment destination.
Also includes the changes for the host tag feature.
Improved the logging for allocators.
2) Set the traffic type to Guest for Direct/Virtual non-system default network offerings. Use this guestIpType during network creation/implementation
status 8412: resolved fixed
1) Don't count domR/Dhcp nic in active nics.
2) Removed the domR cleanup thread; the network shutdown thread will shut down domR/dhcp when the network has no active VMs
status 7811: resolved fixed
Other fixes:
* vmExpunge: clean up LB/PF rules after the VM is marked as Expunging in the DB to avoid the situation where a user recovers the VM in the middle of an expunge job.
status 7803: resolved fixed
Fix overview:
1) The "isDefault" parameter should be defined as part of createNetwork:
* a Virtual network is always default
* the parameter can be specified only for a Direct network
* once the parameter is set, there is no way to change it, as we don't provide an updateNetwork command.
2) Added the isDefault parameter to the listNetworks command so you can sort by it.
3) DeployVmCmd:
* at least one default network should be set
* if more than one default network is set, throw an error (see the consistency-check sketch after this list)
4) Return isDefault information as part of the Nic object in the VM response for deploy/stop/start/listVm
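A hedged consistency-check sketch for the 'exactly one default network per VM' rule; the nics table/column names (instance_id, default_nic, removed) are assumptions for illustration:
-- Hypothetical: find VMs whose nics violate the single-default-network rule.
SELECT instance_id, SUM(default_nic) AS default_count
FROM nics
WHERE removed IS NULL
GROUP BY instance_id
HAVING default_count <> 1;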
status 7863: resolved fixed
The router cleanup thread is fixed; here is a description of the functionality:
* Runs every "router.cleanup.interval" period of time (1 day by default)
* Stops only domRs running in an Advanced zone
* Thread Flow:
- gets all Running domRs/dhcps, gets their networks, and selects the networks
that have to be checked (see the criteria below)
- checks that there is only one nic in the op_networks table for the
network, and that this nic belongs to the domR/dhcp (see the sketch after this list)
- stops the domR/dhcp
* Criteria to choose the network:
- Network has to be non-system.
- Network should be one of the following: Guest Virtual (TrafficType=Guest; GuestType=Virtual); Direct Tagged (TrafficType=Public; GuestType=Direct)
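A rough illustration of the nic check from the thread flow above; op_networks keeping a per-network active-nic count is an assumption, and the column name nics_count is hypothetical:
-- Hypothetical check: networks that have exactly one active nic left.
SELECT id, nics_count FROM op_networks WHERE nics_count = 1;
-- The remaining nic would then be verified to belong to the domR/dhcp before stopping it.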
A couple of other fixes:
* Added the isShared parameter to the listNetworks command
* Moved guestType from NetworkOffering to Network