It was implemented by extending the NFS provider. Its validation was updated so that you can pass it a URL containing the
details of a CIFS share, and the code that mounts NFS shares was extended to allow it to do the same for CIFS shares. Otherwise,
the secondary storage code is left unchanged.
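A minimal sketch of the kind of URL validation this implies, assuming a hypothetical validateUrl helper (the real provider code differs):

    // Sketch only: accept both nfs:// and cifs:// URLs when validating a
    // secondary storage URL. All names here are illustrative.
    import java.net.URI;
    import java.net.URISyntaxException;

    public class ImageStoreUrlValidator {
        public static URI validateUrl(String url) {
            try {
                URI uri = new URI(url);
                String scheme = uri.getScheme();
                if (!"nfs".equalsIgnoreCase(scheme) && !"cifs".equalsIgnoreCase(scheme)) {
                    throw new IllegalArgumentException("Unsupported scheme: " + scheme);
                }
                if (uri.getHost() == null || uri.getPath() == null || uri.getPath().isEmpty()) {
                    throw new IllegalArgumentException("URL must contain a host and a share path");
                }
                // A CIFS URL may carry credentials as query parameters,
                // e.g. cifs://server/share?user=u&password=p
                return uri;
            } catch (URISyntaxException e) {
                throw new IllegalArgumentException("Malformed storage URL: " + url, e);
            }
        }
    }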
A recent code change in NetworkManager causes NullPointerExceptions when the DHCP
capability list is null.
The commit which made the NetworkManager change also changed the VirtualRouter
to not use null for the capability list, but didn't make the same change for other
network devices, causing DHCP to fail on MidoNet.
This change also updates the MidoNet plugin to use the most recent MidoNet API.
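To illustrate the failure mode, a hedged sketch of the null-safe pattern the fix implies; the names are illustrative, not the actual NetworkManager code:

    import java.util.Collections;
    import java.util.Map;

    public class CapabilityLookup {
        // Returning an empty map instead of null avoids the NPE when a
        // caller iterates or queries the DHCP capability list.
        public static Map<String, String> dhcpCapabilities(Map<String, String> capabilities) {
            if (capabilities == null) {
                return Collections.emptyMap();
            }
            return capabilities;
        }
    }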
Changes:
createZone API:
- This API takes a domainid and sets it on the zone record in the data_center table
updateZone API:
- This API uses the 'isPublic' flag to make a private zone public - if this flag is set and the zone is dedicated, release the dedication and remove the domainid from the data_center table
listZone API:
- This API already has a 'domainid' parameter. Allow the Root admin to list zones by domain.
DedicateZone API:
- set the domainid in the data_center table
ReleaseDedicatedZone API:
- remove the domainid from the data_center table (see the sketch below)
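A rough sketch of the data_center update these APIs imply. The VO/DAO below are minimal stand-ins for CloudStack's DataCenterVO/DataCenterDao; the real interfaces differ:

    class DataCenterRecord {
        Long domainId; // null means the zone is public
    }

    interface DataCenterStore {
        DataCenterRecord findById(long zoneId);
        void update(long zoneId, DataCenterRecord record);
    }

    public class ZoneDedication {
        private final DataCenterStore store;

        public ZoneDedication(DataCenterStore store) { this.store = store; }

        public void dedicateZone(long zoneId, long domainId) {
            DataCenterRecord zone = store.findById(zoneId);
            zone.domainId = domainId;   // zone becomes private to the domain
            store.update(zoneId, zone);
        }

        public void releaseDedicatedZone(long zoneId) {
            DataCenterRecord zone = store.findById(zoneId);
            zone.domainId = null;       // zone becomes public again
            store.update(zoneId, zone);
        }
    }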
Changes:
- Adding mocks in unit tests for new injected components
Conflicts:
server/test/org/apache/cloudstack/networkoffering/ChildTestConfiguration.java
Changes:
- Implicit creation of the 'ExplicitDedication' affinity group during resource dedication
- Only one group per account or per domain will be present
- ListDedicatedResources by affinityGroup
- Deployment should consider only the dedicated resources associated with the group
- Deleting the affinity group should release the dedicated resources
- Releasing the dedicated resources should remove the associated group if there are no more resources.
Conflicts:
plugins/dedicated-resources/src/org/apache/cloudstack/dedicated/DedicatedResourceManagerImpl.java
plugins/dedicated-resources/test/org/apache/cloudstack/dedicated/manager/DedicatedApiUnitTest.java
server/src/com/cloud/configuration/ConfigurationManagerImpl.java
Changes:
- The 'ExplicitDedication' type of group can be created/deleted by the Root admin only (see the sketch below)
- Users can no longer create this type of affinity group
- The Root admin can create this type of affinity group at the domain level. Such a domain-level group is available to all accounts in that domain for listing and for use during deployVM.
- The domain-level affinity group should be visible to the users in that domain, domain admins and the Root admin.
Conflicts:
server/src/com/cloud/api/query/QueryManagerImpl.java
server/src/org/apache/cloudstack/affinity/AffinityGroupServiceImpl.java
server/test/org/apache/cloudstack/affinity/AffinityApiUnitTest.java
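A hedged sketch of the access check described above; the account-type constant is illustrative and this is not the actual AffinityGroupServiceImpl code:

    public class ExplicitDedicationAccessCheck {
        static final short ACCOUNT_TYPE_ROOT_ADMIN = 1; // illustrative constant

        // Only a Root admin may create or delete an 'ExplicitDedication'
        // affinity group; everyone else is rejected.
        public static void checkCreateAllowed(short callerAccountType, String groupType) {
            if ("ExplicitDedication".equals(groupType)
                    && callerAccountType != ACCOUNT_TYPE_ROOT_ADMIN) {
                throw new SecurityException(
                    "Only the Root admin can create ExplicitDedication affinity groups");
            }
        }
    }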
When secondary storage is mounted read-only, changing the permissions of files on it will fail. But we should still stick to the current mount point instead of
returning a wrong mount point, /mnt/sec.
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as a plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle the extended vNet range for VXLAN isolation
- Add a VXLAN isolation option in the zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh, a script that handles bridge/VXLAN interface manipulation
-- Usage is exactly the same as for modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation (see the sketch below)
Database changes:
- No change in database structure.
- VXLAN isolation uses the same tables that VLAN uses to store vNet allocation status.
Known Issue:
- Some resources still say 'VLAN' in the logs even when VXLAN is used
- In the UI, "Network - GuestNetworks" doesn't display the VNI
-- The VLAN ID field displays "N/A"
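A sketch of the script selection the BridgeVifDriver change implies; the paths and the isolation check are simplified assumptions:

    // Sketch: pick the bridge-manipulation script by isolation method.
    // modifyvxlan.sh takes exactly the same arguments as modifyvlan.sh.
    public class VifScriptSelector {
        public static String scriptFor(String isolationMethod) {
            if ("vxlan".equalsIgnoreCase(isolationMethod)) {
                return "scripts/vm/network/vnet/modifyvxlan.sh";
            }
            return "scripts/vm/network/vnet/modifyvlan.sh";
        }
    }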
This failed due to a RAW -> QCOW2 conversion (again).
The current code still makes too many assumptions about everything always
being QCOW2, while that is not always true.
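A minimal sketch of the guard such code needs, with an illustrative format enum rather than the real volume classes:

    // Sketch: do not assume QCOW2; check the actual source format before
    // deciding whether a qemu-img conversion is needed.
    enum ImageFormat { RAW, QCOW2 }

    public class FormatCheck {
        public static boolean needsConversion(ImageFormat src, ImageFormat dst) {
            return src != dst; // e.g. RAW -> QCOW2 requires a convert step
        }
    }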
UI support for baremetal PXE server
CLOUDSTACK-1364
UI support for baremetal DHCP server
Conflicts:
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BareMetalPingServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalKickStartServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalPxeManagerImpl.java
KVM - Create template from volume
Vmware - Create template from volume / Create template from snapshot
Send the physical size in the CopyCommand, which will accordingly populate the template_store_ref and usage_event tables with the right physical size.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
XS - Creating templates from volume: send the physical size in the CopyCommand, which will accordingly populate the template_store_ref and usage_event tables with the right physical size (see the sketch below).
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
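An illustrative sketch of carrying both sizes on the copy answer; the TO class is a stand-in, not the real TemplateObjectTO:

    // Sketch: report virtual and physical size separately so that
    // template_store_ref and usage_event get the right physical size.
    class TemplateTO {
        long size;         // virtual size
        long physicalSize; // actual bytes on disk
    }

    public class CopyAnswerSketch {
        public static TemplateTO buildAnswer(long virtualSize, long physicalSize) {
            TemplateTO to = new TemplateTO();
            to.size = virtualSize;
            to.physicalSize = physicalSize;
            return to;
        }
    }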
If all the VM's volumes are on a zone-wide primary storage pool, then live migration of the VM does not involve storage migration; hence the MigrateVM API is called instead of MigrateVMWithVolume. So far PrepareForMigrationCommand handled scenarios of a VM moving across hosts within a cluster, but with zone-wide primary storage in the picture this command needs to handle a VM moving across clusters. Try to find the VM in the datacenter if it is not found within the cluster (see the sketch below).
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
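A hedged sketch of the widened lookup: search the cluster first, then fall back to the whole datacenter. The index interface is hypothetical:

    import java.util.Optional;

    public class VmLookup {
        interface VmIndex {
            Optional<String> findInCluster(long clusterId, String vmName);
            Optional<String> findInDatacenter(long dcId, String vmName);
        }

        // Zone-wide primary storage means a VM may move across clusters,
        // so fall back to a datacenter-wide search when needed.
        public static Optional<String> locate(VmIndex index, long clusterId,
                long dcId, String vmName) {
            Optional<String> vm = index.findInCluster(clusterId, vmName);
            return vm.isPresent() ? vm : index.findInDatacenter(dcId, vmName);
        }
    }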
Simulator should revert to CLOUD_DB after its operations on
SIMULATOR_DB, or else CloudStack connections go to the simulator instead
of the cloud database.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 3d39716c8f)
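A sketch of the revert pattern, using a hypothetical context holder; CloudStack's actual Transaction API differs in detail:

    // Sketch: always restore CLOUD_DB, even if the simulator work throws,
    // so later connections don't hit the simulator database.
    public class DbContext {
        static final String CLOUD_DB = "cloud";
        static final String SIMULATOR_DB = "simulator";
        private static String current = CLOUD_DB;

        static void withSimulatorDb(Runnable work) {
            current = SIMULATOR_DB;
            try {
                work.run();
            } finally {
                current = CLOUD_DB; // revert back to CLOUD_DB
            }
        }
    }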
Although libvirt supports resizing RBD volumes (and other formats), the
Java bindings (libvirt-java) don't.
Right now we use the Java bindings for librbd to handle the resizing for us,
but in the future this should be done by libvirt rather than these
Java bindings.
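Roughly, the resize goes through the rados-java bindings instead of libvirt; a hedged sketch (usage simplified, error handling omitted):

    import com.ceph.rados.IoCTX;
    import com.ceph.rados.Rados;
    import com.ceph.rbd.Rbd;
    import com.ceph.rbd.RbdImage;

    public class RbdResize {
        // Sketch: resize an RBD image via the librbd Java bindings because
        // libvirt-java exposes no storage volume resize call.
        public static void resize(Rados rados, String pool, String image,
                long newSizeBytes) throws Exception {
            IoCTX io = rados.ioCtxCreate(pool);
            try {
                Rbd rbd = new Rbd(io);
                RbdImage img = rbd.open(image);
                try {
                    img.resize(newSizeBytes);
                } finally {
                    rbd.close(img);
                }
            } finally {
                rados.ioCtxDestroy(io);
            }
        }
    }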
- ManagementServerSimulatorImpl is not injected by the default context.
The configureSimulatorCmd API was loaded as part of it. Use
SimulatorManagerImpl as a PluggableService to inject the configureSimulator
API.
- Remove unused ManagementServerSimulatorImpl.
- Rename ConfigureSimulator to ConfigureSimulatorCmd for uniformity with
all API Cmds
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 0c294a50a8)
Make VSM-specific input parameters optional while adding a VMware cluster in which no traffic is chosen to use the Nexus 1000v dvSwitch, even though the cloud-level vSwitch is Nexus 1000v.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
CS used to access the VNC server in XenServer dom0 to get the VM console; now CS uses the XenServer console API, so the getvncport plugin is not needed any more.
Remove the code related to getvncport in XenServer.
The existing Maven repo in the POM will be unavailable soon, so I have
changed it to cs-maven.midokura.com.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
While finding an endpoint to send the migrate volume command to, in the case of a zone-wide storage pool, the storage engine picks a host randomly among all connected hosts. Hence the host may be from a different cluster.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
Detail: CloudStack tries to stop VMs all the time, for all sorts of reasons,
but usually just to get into a known state. Libvirt throws a 'Domain not found'
exception when attempting to stop a VM that doesn't exist. This causes
problems for troubleshooting real issues. Domain not found should equate
to success when trying to stop (see the sketch below).
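A sketch of that behavior with libvirt-java; matching on the exception message is the illustrative part:

    import org.libvirt.Connect;
    import org.libvirt.Domain;
    import org.libvirt.LibvirtException;

    public class StopVm {
        // Sketch: a VM that libvirt no longer knows about is already
        // stopped, so treat 'Domain not found' as success.
        public static boolean stop(Connect conn, String vmName) {
            try {
                Domain dm = conn.domainLookupByName(vmName);
                dm.destroy();
                return true;
            } catch (LibvirtException e) {
                if (e.getMessage() != null && e.getMessage().contains("Domain not found")) {
                    return true; // nothing to stop
                }
                return false;
            }
        }
    }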
BUG-ID: CLOUDSTACK-4011
Bugfix-for: 4.2
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1375825281 -0600
created with gslbmethod=leastconn
The NetScaler Nitro API to add GSLB virtual servers fails for some reason if
both netmask and round robin methods are specified, so work around it by
setting the netmask to null while updating the vserver.
java.lang.NumberFormatException
While deleting the LB monitor and GSLB service binding, the Nitro API fails
with a weird NumberFormatException. Adding a workaround to delete the LB
monitor after the GSLB service is deleted (which ensures that internally the LB
monitor is deleted).
Issue:
======
Unable to create the internal LB VM. This happens while determining the maxconn
value from the network offering:
to find the network offering, the network id is passed instead of the
network offering id, which causes the issue.
Fixed:
======
Fixed the issue by passing the network offering id instead of the network
id (see the sketch below).
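The fix amounts to passing the right id to the lookup; a sketch with illustrative stand-in types:

    interface OfferingDao { NetworkOffering findById(long offeringId); }
    class NetworkOffering { int maxConnections; }
    class Network { long id; long networkOfferingId; }

    public class MaxConnLookup {
        // before (buggy): offeringDao.findById(network.id)
        // after  (fixed): offeringDao.findById(network.networkOfferingId)
        public static int maxConn(OfferingDao offeringDao, Network network) {
            return offeringDao.findById(network.networkOfferingId).maxConnections;
        }
    }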
Conflicts:
plugins/network-elements/internal-loadbalancer/src/org/apache/cloudstack/network/lb/InternalLoadBalancerVMManagerImpl.java
Detail: KVM recently got a patch that did away with a few dozen ssh calls
when programming the virtual router (CLOUDSTACK-3163), saving several seconds
for each vm served by the virtual router when the router is rebooted. This
patch updates Xen to use the same method, and cleans up the old script refs.
Reviewed-by: Sheng Yang, Prasanna Santhanam
The method getCommandHostDelegation(long hostId, Command cmd) was overridden in VmwareGuru.java as part of
commit bfe30cd2e3. Earlier there was no hypervisor-specific implementation, and copying a
volume from secondary to primary storage worked fine. With the VMware-specific change, the code was getting hit even
in the case of XS and other hypervisors and failed with an NPE.
Now there is a check in the VMware implementation to verify that the hypervisor is of type VMware (see the sketch below).
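The guard is essentially a hypervisor-type check before delegating; a hedged sketch with simplified types:

    // Sketch: only delegate commands for VMware hosts; other hypervisors
    // fall through to the default behavior, avoiding the NPE seen on XS.
    enum HypervisorType { VMware, XenServer, KVM }

    public class HostDelegationCheck {
        public static boolean shouldDelegate(HypervisorType hostType) {
            return hostType == HypervisorType.VMware;
        }
    }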
The storage pool entry was removed from the db but the pool wasn't unmounted from
the host. There was a check that if the pool is not in the UP state then the entry
can just be removed. That is wrong: a pool can only be removed if it is in
maintenance state. So after putting the pool in maintenance state, if the admin
tried to delete the pool we would just remove the db entry without unmounting the
storage pool from the host. Removed the incorrect check (see the sketch below).
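A sketch of the corrected precondition (the status enum is simplified):

    // Sketch: a primary storage pool may only be deleted once it is in
    // maintenance state; merely being "not UP" is not sufficient.
    enum StoragePoolStatus { UP, MAINTENANCE, ERROR }

    public class DeletePoolCheck {
        public static void checkDeletable(StoragePoolStatus status) {
            if (status != StoragePoolStatus.MAINTENANCE) {
                throw new IllegalStateException(
                    "Pool must be in maintenance state before it can be deleted");
            }
        }
    }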
Marked the new system template as dynamicallyScalable
- handled the upgrade case
- moved the "dynamicallyScalable" flag from user_vm_details to the vm_instance table to support dynamic scaling of system VMs
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
If the details have information on cores per socket, create an instance accordingly. The vm record
is populated with information on how many cores should be allowed in a socket.
When adding a host to the pool, it may take a little longer to join the pool. Increased the time the
resource waits and checks to make sure the host has joined the pool as a slave.