This was discussed on the mailing list as a useful debugging tool. Currently
the log prints the DB id of the agent, which forces admins to look
it up to know where the Command was run.
When running DatabaseUpgradeChecker as a standalone program, _dao will not
be injected. Still create an instance of VersionDaoImpl in the constructor;
when DatabaseUpgradeChecker is run in the mgmt server, it will be
overwritten by the injected value.
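A minimal sketch of that fallback, with stand-in DAO types so it compiles on its own (the real classes live elsewhere in CloudStack):

```java
import javax.inject.Inject;

// Minimal stand-ins for the CloudStack types; only here so the sketch compiles.
interface VersionDao { String getCurrentVersion(); }
class VersionDaoImpl implements VersionDao {
    @Override public String getCurrentVersion() { return "unknown"; }
}

public class DatabaseUpgradeChecker {
    @Inject
    protected VersionDao _dao;

    public DatabaseUpgradeChecker() {
        // Standalone runs get no injection, so create a usable default here.
        // Inside the mgmt server, Spring overwrites this with the injected bean.
        _dao = new VersionDaoImpl();
    }
}
```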
These changes are a joint effort between Edison and me to refactor some
of the code around snapshotting VM volumes and creating
templates/volumes from VM volume snapshots. In general, we were working
towards allowing PrimaryDataStoreDrivers to create snapshots on primary
storage and not requiring the snapshots to be transferred to secondary
storage.
High level changes:
-Added uuid to NfsTO, SwiftTO & S3TO to cut down on the requirement of
PrimaryDataStoreTO and ImageStoreTO which don't really serve much of a
purpose
-Initial work towards enabling reverting of VM volumes from snapshots
-Added hypervisor commands for introducing and forgetting new hypervisor
objects (snapshots, templates & volumes)
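As a rough illustration of the introduce/forget idea, these commands boil down to telling a hypervisor resource about an object that already exists on a store, or asking it to drop that knowledge; the class and field names below are hypothetical, not the exact ones added by the patch:

```java
// Hypothetical sketch of an "introduce/forget hypervisor object" command pair;
// names and fields are illustrative, not the exact classes from the patch.
public class IntroduceForgetSketch {

    enum HypervisorObjectType { SNAPSHOT, TEMPLATE, VOLUME }

    // Tells the hypervisor resource about an object that already exists on storage.
    static class IntroduceObjectCommand {
        final HypervisorObjectType type;
        final String storeUuid;  // the uuid now carried by NfsTO/SwiftTO/S3TO
        final String path;       // location of the object on the store

        IntroduceObjectCommand(HypervisorObjectType type, String storeUuid, String path) {
            this.type = type;
            this.storeUuid = storeUuid;
            this.path = path;
        }
    }

    // Asks the hypervisor resource to drop its knowledge of the object.
    static class ForgetObjectCommand {
        final HypervisorObjectType type;
        final String storeUuid;
        final String path;

        ForgetObjectCommand(HypervisorObjectType type, String storeUuid, String path) {
            this.type = type;
            this.storeUuid = storeUuid;
            this.path = path;
        }
    }
}
```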
Signed-off-by: Edison Su <sudison@gmail.com>
ACS now comprises a hierarchy of Spring application contexts.
Each plugin can contribute configuration files to add to an existing
module or create its own module.
Additionally, for the mgmt server, ACS custom AOP is no longer used
and instead we use Spring AOP to manage interceptors.
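For illustration, an interceptor under Spring AOP is just an aopalliance MethodInterceptor wired up from a module's context; the class below is a made-up example, not one of the actual ACS interceptors:

```java
import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

// Made-up example of a Spring AOP interceptor; the real ACS interceptors differ,
// this only shows the shape of what replaces the custom AOP layer.
public class TimingInterceptor implements MethodInterceptor {
    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return invocation.proceed();   // run the intercepted method
        } finally {
            System.out.println(invocation.getMethod().getName()
                    + " took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}
```

Such a bean would then be referenced from the module's Spring context (for example through a proxy creator) rather than through the old ACS AOP mechanism.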
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context. This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
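A hypothetical reduction of that listener idea, with a made-up interface that only mirrors the onEntry/onLeave hooks described above:

```java
// Hypothetical reduction of the managed context listener idea:
// listeners are called when a managed thread enters and leaves the context.
interface ManagedContextListenerSketch {
    void onEntry();
    void onLeave();
}

// Example listener that sets up and tears down per-thread state,
// e.g. the CallContext or DB transaction checking mentioned above.
class CallContextListenerSketch implements ManagedContextListenerSketch {
    private static final ThreadLocal<String> CALL_CONTEXT = new ThreadLocal<>();

    @Override
    public void onEntry() {
        CALL_CONTEXT.set("system");   // establish context when the thread starts work
    }

    @Override
    public void onLeave() {
        CALL_CONTEXT.remove();        // always clean up, even on error paths
    }
}
```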
Various classes are using member injection to inject extensible objects.
Really those objects should come from an AdapterList that is injected in.
This patch switches the code to use setter injection that will later allow
Spring to inject an AdapterList or something similar to allow
extensibility.
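A made-up example of the setter pattern being introduced:

```java
import java.util.ArrayList;
import java.util.List;
import javax.inject.Inject;

// Made-up example of the pattern: expose the extensible adapters through a setter
// so Spring can later hand in an AdapterList (or something similar) instead of the
// class relying on member injection of each adapter.
public class NetworkManagerExample {

    private List<NetworkGuruExample> _networkGurus = new ArrayList<>();

    @Inject
    public void setNetworkGurus(List<NetworkGuruExample> gurus) {
        _networkGurus = gurus;   // Spring injects the full, pluggable list
    }

    public List<NetworkGuruExample> getNetworkGurus() {
        return _networkGurus;
    }
}

interface NetworkGuruExample { }
```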
Also fixed existing bugs for the API:
* corrected action event to be VOLUME.UPDATE (was VOLUME.ATTACH)
* all parameters to update should be optional - fixed that. If nothing is specified, the DB object will remain with its original fields
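The optional-parameters fix amounts to only touching fields that were actually passed; roughly (the classes here are illustrative, not the real API objects):

```java
// Illustrative only: apply just the parameters that were supplied, leaving the
// rest of the DB row untouched. These are not the real API classes.
public class UpdateVolumeSketch {
    static class VolumeRecord { String displayName; String state; }
    static class UpdateVolumeParams { String displayName; String state; }

    static void applyUpdate(VolumeRecord volume, UpdateVolumeParams params) {
        if (params.displayName != null) {
            volume.displayName = params.displayName;
        }
        if (params.state != null) {
            volume.state = params.state;
        }
        // nothing specified -> the db object keeps its original fields
    }
}
```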
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh, a script that handles bridge/VXLAN interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation (a sketch of this selection follows at the end of this note)
Database changes:
- No change in database structure.
- VXLAN isolation uses the same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resources still say 'VLAN' in the log even if VXLAN is used
- in UI, "Network - GuestNetworks" doesn't display VNI
-- VLAN ID field displays "N/A"
- Documentation!
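The BridgeVifDriver change mentioned in the patch note reduces to picking the right script based on the isolation method; a simplified, hypothetical rendering (the real driver builds and runs the command differently):

```java
// Simplified, hypothetical rendering of the script selection in BridgeVifDriver.
public class VifScriptChooser {

    static String scriptFor(String isolationMethod) {
        if ("VXLAN".equalsIgnoreCase(isolationMethod)) {
            return "modifyvxlan.sh";   // usage mirrors modifyvlan.sh
        }
        return "modifyvlan.sh";
    }

    public static void main(String[] args) {
        System.out.println(scriptFor("VXLAN"));  // -> modifyvxlan.sh
        System.out.println(scriptFor("VLAN"));   // -> modifyvlan.sh
    }
}
```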
Signed-off-by: Toshiaki Hatano <haeena@haeena.net>
CLOUDSTACK-4457:
CLOUDSTACK-4459:
Harden KVM getVolume. It's possible that a volume created on one KVM host won't show up on another host; retry refreshing the storage pool a few more times if the volume doesn't show up.
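In spirit, the hardening is a bounded retry around the volume lookup with a pool refresh in between; a hedged sketch with placeholder types, not the real KVMStorageProcessor/libvirt API:

```java
// Hedged sketch of the retry idea only; StoragePoolHandle and its methods are
// placeholders, not the real KVM storage pool API.
public class GetVolumeRetrySketch {

    interface StoragePoolHandle {
        void refresh();                       // re-scan the pool contents
        Object findVolume(String volumeUuid); // null if not (yet) visible on this host
    }

    static Object getVolumeWithRetry(StoragePoolHandle pool, String volumeUuid,
                                     int attempts, long waitMillis) throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            Object vol = pool.findVolume(volumeUuid);
            if (vol != null) {
                return vol;
            }
            // The volume may have been created on another KVM host and not be
            // visible here yet: refresh the pool and try again.
            pool.refresh();
            Thread.sleep(waitMillis);
        }
        throw new IllegalStateException("Volume " + volumeUuid + " not visible after " + attempts + " attempts");
    }
}
```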
Conflicts:
engine/storage/integration-test/test/org/apache/cloudstack/storage/test/FakeDriverTestConfiguration.java
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
It was implemented by extending the NFS provider. Its validation was updated so that you can pass it a URL containing the
details of a CIFS share. The code that mounts NFS shares was extended to allow it to do the same for CIFS shares. Otherwise,
the secondary storage code is left unchanged.
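One way to picture the mount-side extension: branch on the URL scheme and build the matching mount invocation. This is a rough sketch, not the actual secondary storage code:

```java
import java.net.URI;

// Rough sketch: choose mount arguments based on the store URL scheme.
// Not the actual secondary storage mounting code.
public class MountCommandSketch {

    static String[] mountCommand(URI storeUri, String localPath) {
        if ("cifs".equalsIgnoreCase(storeUri.getScheme())) {
            // e.g. cifs://server/share?user=foo&password=bar
            String share = "//" + storeUri.getHost() + storeUri.getPath();
            String options = storeUri.getQuery() == null
                    ? "guest"
                    : storeUri.getQuery().replace('&', ',');
            return new String[] { "mount", "-t", "cifs", share, localPath, "-o", options };
        }
        // default: NFS, as before
        String export = storeUri.getHost() + ":" + storeUri.getPath();
        return new String[] { "mount", "-t", "nfs", export, localPath };
    }
}
```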
The issue happens when more than one thread processes connect for a host simultaneously. VM full sync is not designed to work in this scenario, and as a result user VMs may get stopped incorrectly.
The direct agent scan task runs at regular intervals (direct.agent.scan.interval, defaulting to 90 secs) and identifies hosts that need to be processed for connect. In a normal scenario hosts mostly get connected within that interval and there are no issues. But the connect process may for some reason take more time and not complete by the time the next agent scan runs. In that case, based on the DB state, the same hosts may get picked up again, and more than one thread ends up processing connect for the same host.
The fix is to check whether a thread is already processing connect for a host; in that case all subsequent threads for that host simply bail out, as sketched below. There is also the scenario where one thread has already completed the connect processing but another thread was already scheduled before it finished and would repeat the same work; this is prevented by appropriate checks as well.
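The guard can be pictured as a per-host "connect in progress" set that later threads consult before doing any work; a simplified sketch (the real fix lives inside the agent manager, not a standalone class):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of the "only one thread processes connect per host" guard.
public class HostConnectGuard {

    private final Set<Long> connectingHosts = ConcurrentHashMap.newKeySet();

    public void processConnect(long hostId) {
        // add() returns false if another thread already claimed this host,
        // in which case we simply bail out instead of repeating the full sync.
        if (!connectingHosts.add(hostId)) {
            return;
        }
        try {
            doConnect(hostId);
        } finally {
            connectingHosts.remove(hostId);   // let future scans reconnect if needed
        }
    }

    private void doConnect(long hostId) {
        // host connect processing and VM full sync would happen here
    }
}
```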
Check for all the transition states for Maintenance. Also corrected the isMaintenance function for StoragePoolVO
Signed-off-by: nitin mehta <nitin.mehta@citrix.com>
Description:
Set the criterion for overriding/preserving the vmware.create.full.clone
flag so that if the pre-upgrade deployment has any data centers,
the flag will be set to false. Otherwise, it will be set to true.
The earlier criterion for setting this flag was based on the CS version numbers,
but version numbers are not good business logic on which to base the flag.
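In code terms the decision reduces to checking whether any data centers already exist at upgrade time; a hedged sketch (the real upgrade path reads and writes the configuration table directly):

```java
// Hedged sketch of the decision only, not the real upgrade code.
public class FullCloneFlagSketch {

    static String decideFullCloneDefault(long existingDataCenterCount) {
        // Pre-upgrade deployments with data centers keep the old behaviour (false);
        // fresh installs get full clone enabled (true).
        return existingDataCenterCount > 0 ? "false" : "true";
    }

    public static void main(String[] args) {
        System.out.println(decideFullCloneDefault(3));  // -> false
        System.out.println(decideFullCloneDefault(0));  // -> true
    }
}
```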
Changes:
- In the upgrade path, for a private zone, an entry needs to be added to the affinity_group_domain_map to provide access to the private zone for the domains it belongs to.
Changes:
- Implicit creation of the 'ExplicitDedication' affinity group during resource dedication
- Only one group per account or per domain will be present
- ListDedicatedResources by affinityGroup
- Deployment should consider only dedicated resources associated with the group
- Deleting an affinity group should release the dedicated resources
- Releasing the dedicated resources should remove the associated group if there are no more resources.
Conflicts:
plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
plugins/dedicated-resources/test/org/apache/cloudstack/dedicated/manager/DedicatedApiUnitTest.java
server/src/com/cloud/configuration/ConfigurationManagerImpl.java
Changes:
- The 'ExplicitDedication' type of group can be created/deleted by the Root admin only
- Users can no longer create this type of affinity group
- The Root admin can create this type of affinity group at the domain level. Such a domain-level group is available to all accounts in that domain for listing and for use during deployVM.
- The domain-level affinity group should be visible to the users in that domain, domain admins and the Root admin.
Conflicts:
server/src/com/cloud/api/query/QueryManagerImpl.java
server/src/org/apache/cloudstack/affinity/AffinityGroupServiceImpl.java
server/test/org/apache/cloudstack/affinity/AffinityApiUnitTest.java
On a fresh environment, some values in the cloud.configuration table are persisted in com.cloud.server.ConfigurationServerImpl.persistDefaultValues().
A default value needs to be set before com.cloud.upgrade.DatabaseUpgradeChecker runs.
UI support for baremetal PXE server
CloudStack CLOUDSTACK-1364
UI support for baremetal DHCP server
Conflicts:
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BareMetalPingServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalKickStartServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalPxeManagerImpl.java
While upgrading to 4.2, the type of vswitch being used by each cluster is persisted to the cluster_details table. This helps if the user later wants to change the type of vswitch used in a zone or the entire cloud while leaving existing clusters on their old vswitches. Hence, modifying the type of vswitch at the cloud level (by modifying global configuration parameters) or at the zone level (by modifying the traffic label) will not disturb the operation of existing clusters.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Builtin template Ready Status is No even after the status is Download Complete. The reason was that template sync updates only the template download state but not the template state. Fixing that. Ideally we should change the state through the state machine only.
Signed-off-by: nitin mehta <nitin.mehta@citrix.com>
Portable IP service is enabled only when a portable IP range is added at the region level.
The region response will now have details on whether the portable IP service is enabled
or not. Portable IP service for a region is turned off by default; when the
admin adds a portable IP range, the portable IP service is enabled for the
region.
Updating the new system template URLs for the existing templates during upgrade to 4.2.
If the new 4.2 system template is registered before the upgrade, then the old templates are marked as removed during the upgrade.
The time increased due to the newly added dedicated resources feature. During regular VM deployment, all dedicated resources are put in the avoid list so that they are not considered for deployment.
Currently the way the list of dedicated resources is computed is not optimal, and performance deteriorates in an environment with a lot of pods, clusters and hosts because the logic queries the DB for each such resource.
The fix is to optimize the logic not to loop through all resources but to get the list of each resource type in a single query, as sketched below.
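Schematically, the fix replaces a per-resource query loop with one query per resource type; the DAO shown below is made up for illustration:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Schematic only; the DAO is made up. The point is one query per resource type
// instead of one query per pod/cluster/host.
public class DedicatedAvoidListSketch {

    interface DedicatedResourceDaoSketch {
        List<Long> listAllDedicatedPodIds();      // single query
        List<Long> listAllDedicatedClusterIds();  // single query
        List<Long> listAllDedicatedHostIds();     // single query
    }

    static class AvoidListSketch {
        final Set<Long> pods = new HashSet<>();
        final Set<Long> clusters = new HashSet<>();
        final Set<Long> hosts = new HashSet<>();
    }

    static void fillAvoidList(DedicatedResourceDaoSketch dao, AvoidListSketch avoid) {
        avoid.pods.addAll(dao.listAllDedicatedPodIds());
        avoid.clusters.addAll(dao.listAllDedicatedClusterIds());
        avoid.hosts.addAll(dao.listAllDedicatedHostIds());
    }
}
```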
Conflicts:
server/src/com/cloud/deploy/DeploymentPlanningManagerImpl.java
Introduce a global lock on the template and pool id so that concurrent threads won't insert the same row in the DB and hit MySQLIntegrityConstraintViolationException
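The idea is that the lock key is derived from the template id and pool id, so only one thread at a time performs the check-then-insert for that row. The sketch below uses a generic in-JVM lock; CloudStack's actual GlobalLock is DB-backed so it also covers multiple management servers:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch only: a generic in-JVM lock keyed by "templateId_poolId".
public class TemplatePoolLockSketch {

    private static final ConcurrentHashMap<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    static void insertTemplatePoolRef(long templateId, long poolId, Runnable insert) {
        ReentrantLock lock = LOCKS.computeIfAbsent(templateId + "_" + poolId, k -> new ReentrantLock());
        lock.lock();
        try {
            // The check-then-insert runs under the lock, so concurrent threads no longer
            // race to insert the same row and hit MySQLIntegrityConstraintViolationException.
            insert.run();
        } finally {
            lock.unlock();
        }
    }
}
```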
Signed-off-by: nitin mehta <nitin.mehta@citrix.com>