Detail: getPhysicalDisk() was not matching volumes with a .raw extension, and
so was falling back to setting the disk format to QCOW2.
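A minimal sketch of the intended fix (type and method names here are
illustrative, loosely modeled on the KVM storage code, not copied from the
patch):

    // Derive the disk format from the volume path instead of
    // unconditionally defaulting to QCOW2.
    private PhysicalDiskFormat detectFormat(String volumePath) {
        if (volumePath != null && volumePath.endsWith(".raw")) {
            return PhysicalDiskFormat.RAW;
        }
        return PhysicalDiskFormat.QCOW2;
    }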
BUG-ID: CLOUDSTACK-5018
Bugfix-for:
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1383287538 -0700
A VPN connection can now be created as "passive", which enables the remote
peer to initiate the connection. So it's possible for a VPC VR to
establish the connection to another VPC VR of CloudStack.
A test case is also included.
The test case creates 2 VPCs and connects them using VPN.
1) vxlan will use the bridge naming scheme 'brvx-<vni>'. Multiple physical networks can host the guest
traffic type with vxlan isolation, so long as they don't use the same VNI range.
2) A guest traffic label can be a physical interface if no bridge by the given name is found.
Normally we take the traffic label name, find the matching bridge, then resolve that to a
physical interface, and create the guest bridges on that interface. Now we can just
specify the interface, as sketched below.
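A sketch of the fallback described in 2) (helper names are illustrative,
not the actual BridgeVifDriver code):

    // Resolve the traffic label to a bridge first; if no such bridge
    // exists, treat the label as the name of a physical interface and
    // build the guest bridges on it directly.
    String device;
    if (bridgeExists(trafficLabel)) {
        device = getPhysicalInterfaceOfBridge(trafficLabel);
    } else if (interfaceExists(trafficLabel)) {
        device = trafficLabel;  // the label names the interface itself
    } else {
        throw new ConfigurationException(
                "no bridge or interface named " + trafficLabel);
    }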
When a ROOT volume is created from a base template and a folder already exists for the ROOT volume's VM, replace the old ROOT disk files with the new ones.
The simulator uses the default planners of CloudStack and does not
require a separate planner context (as of now). This was just copied and
pasted from the baremetal planners.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
XS 6.1/6.2 introduce a new virtual platform, so there are now two virtual platforms. The Windows PV driver version must match the virtual platform, so
this patch tracks PV driver versions in VM details and template details.
Anthony
Detail: Checks for other Ethernet interface names use startsWith(),
whereas the p1p1-style interface uses a regex that doesn't allow for
trailing characters, and so blocks VLAN IDs. Fixed; see the sketch below.
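An illustrative sketch of the fix; the exact pattern in the agent code may
differ, but the idea is to permit an optional ".<vlanid>" tail:

    import java.util.regex.Pattern;

    public class IfaceMatch {
        // A hard end-anchor like "^p\\d+p\\d+$" rejects "p1p1.100";
        // the optional suffix group accepts it.
        private static final Pattern P1P1_STYLE =
                Pattern.compile("^p\\d+p\\d+(\\.\\d+)?$");

        public static boolean isP1p1Style(String name) {
            return P1P1_STYLE.matcher(name).matches();
        }

        public static void main(String[] args) {
            System.out.println(isP1p1Style("p1p1"));     // true
            System.out.println(isP1p1Style("p1p1.100")); // true (the fix)
        }
    }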
BUG-ID: CLOUDSTACK-4884
Bugfix-for: 4.2.1
Reviewed-by:
Reported-by:
Signed-off-by: John Kinsella <jlk@stratosec.co> 1381965250 -0700
Introduction of a new Transaction API that is more consistent with the style
of Spring's transaction management. The existing Transaction class was renamed
to TransactionLegacy. All of the non-DAO code in the management server has been
updated to use the new Transaction API.
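Usage in the new style looks roughly like this (a hedged sketch following
the Spring-like callback pattern described above; consult com.cloud.utils.db
for the exact signatures):

    // Run a unit of DB work in one transaction: committed on return,
    // rolled back if doInTransaction throws.
    AccountVO account = Transaction.execute(new TransactionCallback<AccountVO>() {
        @Override
        public AccountVO doInTransaction(TransactionStatus status) {
            return _accountDao.persist(newAccount);
        }
    });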
I don't think the host kernel version has any bearing on it. The original code
was tested with CentOS 6.3 and 6.4, but it seems to succeed or fail per host,
e.g. a fast host might work and a slow host might not. I was getting intermittent
failures with Ubuntu 12.04.3 prior to this patch.
These changes are a joint effort between Edison and me to refactor some
of the code around snapshotting VM volumes and creating
templates/volumes from VM volume snapshots. In general, we were working
towards allowing PrimaryDataStoreDrivers to create snapshots on primary
storage and not requiring the snapshots to be transferred to secondary
storage.
High-level changes:
- Added a uuid to NfsTO, SwiftTO & S3TO to cut down on the need for
PrimaryDataStoreTO and ImageStoreTO, which don't really serve much of a
purpose
- Initial work towards enabling reverting a VM volume from snapshots
- Added hypervisor commands for introducing and forgetting new hypervisor
objects (snapshots, templates & volumes); see the sketch below
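A hedged sketch of the introduce/forget pair mentioned above (class and
field names are illustrative, not necessarily the committed API):

    // "Introduce" registers an object that already exists on primary
    // storage with a hypervisor; "forget" drops the hypervisor's record
    // without deleting the backing data.
    public class IntroduceObjectCmd extends Command {
        private final DataTO dataTO; // snapshot, template or volume

        public IntroduceObjectCmd(DataTO dataTO) {
            this.dataTO = dataTO;
        }

        public DataTO getDataTO() {
            return dataTO;
        }

        @Override
        public boolean executeInSequence() {
            return false;
        }
    }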
Signed-off-by: Edison Su <sudison@gmail.com>
ACS is now comprised of a hierarchy of Spring application contexts.
Each plugin can contribute configuration files to add to an existing
module or create its own module.
Additionally, for the mgmt server, ACS custom AOP is no longer used
and instead we use Spring AOP to manage interceptors.
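As a hedged illustration, a plugin might contribute a module to the
hierarchy with a small descriptor like the following (file location and
keys are assumptions based on the description above, not verified against
the committed framework):

    # resources/META-INF/cloudstack/mymodule/module.properties
    name=mymodule
    parent=compute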
The managed context framework provides a simple way to add logic
to ACS at the various entry points of the system. As threads are
launched and run, listeners can be registered for onEntry or onLeave
of the managed context. This framework will be used specifically
to handle DB transaction checking and setting up the CallContext.
This framework is needed to transition away from ACS custom AOP to
Spring AOP.
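A hedged sketch of such a listener (the interface and method names follow
the onEntry/onLeave description above and may not match the committed
signatures exactly):

    public class DbTransactionCheckListener implements ManagedContextListener<Object> {
        @Override
        public Object onEnterContext(boolean reentry) {
            // e.g. snapshot the thread's count of open DB transactions
            return null;
        }

        @Override
        public void onLeaveContext(Object data, boolean reentry) {
            // e.g. warn if the thread is leaving with a transaction open
        }
    }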
Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.
Patch Note:
in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI
in cloudstack-agent (kvm)
- Add modifyvxlan.sh, a script that handles bridge/VXLAN interface manipulation
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN is used for isolation
Database changes:
- No change in database structure.
- VXLAN isolation uses same tables that VLAN uses to store vNet allocation status.
Known Issue and/or TODO:
- Some resources still say 'VLAN' in the logs even if VXLAN is used
- in UI, "Network - GuestNetworks" doesn't display the VNI
-- VLAN ID field displays "N/A"
- Documentation!
Signed-off-by: Toshiaki Hatano <haeena@haeena.net>
Libvirt reports:
org.libvirt.LibvirtException: Storage volume not found: no storage vol
with matching name
in some cases, if the volume is created on one KVM host while accessed
from another host.
It's possibly due to concurrent (read/write) access to the storage.
The current fix is to try several times, waiting 30 seconds between
retries, as sketched below.
If the issue is still there, the storage pool access needs to be synchronized.
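A sketch of the retry approach (method names are illustrative):

    // The volume may have been created from another host and not be
    // visible here yet, so refresh the pool and retry before giving up.
    KVMPhysicalDisk getVolumeWithRetry(KVMStoragePool pool, String uuid)
            throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            try {
                return pool.getPhysicalDisk(uuid);
            } catch (CloudRuntimeException e) {
                pool.refresh();           // re-scan the storage pool
                Thread.sleep(30 * 1000L); // wait 30 seconds per retry
            }
        }
        throw new CloudRuntimeException("volume " + uuid + " never appeared");
    }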
CLOUDSTACK-4457, CLOUDSTACK-4459:
Harden KVM getVolume. It's possible that a volume created on one KVM host won't show up on another host; try several more times, refreshing the storage pool, if the volume doesn't show up.
Conflicts:
engine/storage/integration-test/test/org/apache/cloudstack/storage/test/FakeDriverTestConfiguration.java
plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/storage/KVMStorageProcessor.java
There still exist two issues after Edison's commits.
(1) Migration from new hosts to old hosts fails.
The bridge name on an old host is set to cloudVirBr* if network.bridge.name.schema is set to 3.0 in /etc/cloudstack/agent/agent.properties, but the actual bridge name is breth*-* after running cloudstack-agent-upgrade.
(2) All ports of VMs (Basic zone, or Advanced zone with security groups) on old hosts are open, because the iptables rules are bound to the device (bridge) name, which is changed by cloudstack-agent-upgrade.
After this, the KVM upgrade steps are:
a. Install the 4.2 cloudstack agent on each KVM host
b. Run "cloudstack-agent-upgrade". This script will upgrade all the existing bridge names to the new bridge names and update the related firewall rules.
c. Install a libvirt hook:
c1. mkdir /etc/libvirt/hooks
c2. cp /usr/share/cloudstack-agent/lib/libvirtqemuhook /etc/libvirt/hooks/qemu
c3. chmod +x /etc/libvirt/hooks/qemu
c4. service libvirtd restart
c5. service cloudstack-agent restart
Signed-off-by: Wei Zhou <w.zhou@leaseweb.com>
The migrate method from libvirt supports passing down a different XML for running
the instance on the target hypervisor.
This enables VNC to bind to the private IP address of the hypervisor; during
migration the listen address is changed to the private IP address of the target host.
This way VNC doesn't listen world-wide and is much safer. A sketch follows.
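A hedged sketch (check the libvirt-java Domain.migrate overloads for the
exact signature):

    // Rewrite the VNC listen address in the domain XML to the target
    // host's private IP, then hand the modified XML to migrate so the
    // instance never listens on a world-reachable address.
    String destXml = xmlDesc.replace(
            "listen='" + srcPrivateIp + "'",
            "listen='" + dstPrivateIp + "'");
    final long VIR_MIGRATE_LIVE = 1; // live-migration flag
    Domain migrated = dm.migrate(dconn, VIR_MIGRATE_LIVE, destXml,
            vmName, "tcp:" + dstPrivateIp, migrateSpeed);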
When secondary storage is mounted read-only, changing the permission of files on it will fail. But we should still stick to the current mount point instead of
returning a wrong mount point (/mnt/sec).
This failed due to a RAW -> QCOW2 conversion (again).
The current code still makes too many assumptions about everything always
being QCOW2, while that is not always true.
UI support for baremetal PXE server
CLOUDSTACK-1364: UI support for baremetal DHCP server
Conflicts:
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BareMetalPingServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalKickStartServiceImpl.java
plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalPxeManagerImpl.java
KVM - Create template from volume
Vmware - Create template from volume / Create template from snapshot
Send the physical size in the CopyCommand, which accordingly will populate the template_store_ref and usage_event tables with the right physical size.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
XS - Creating templates from volume: send the physical size in the CopyCommand, which accordingly will populate the template_store_ref and usage_event tables with the right physical size.
Signed-off-by: Nitin Mehta <nitin.mehta@citrix.com>
If all of a VM's volumes are on a zone-wide primary storage pool, then live migration of the VM does not involve storage migration; hence the MigrateVM API would be called instead of MigrateVMWithVolume. So far PrepareForMigrationCommand handled scenarios of a VM moving across hosts within a cluster, but with zone-wide primary storage in the picture this command needs to handle scenarios of a VM moving across clusters. Try to find the VM in the datacenter if it is not found within the cluster, as sketched below.
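A hedged sketch of the widened lookup (helper names are illustrative, not
the committed code):

    VirtualMachine vm = findVmInCluster(clusterId, vmName);
    if (vm == null) {
        // With zone-wide primary storage the VM may be registered in
        // another cluster of the same datacenter, so widen the search.
        vm = findVmInDatacenter(datacenterId, vmName);
    }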
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>
The simulator should revert back to the CLOUD_DB after its operations on
the SIMULATOR_DB, or the CloudStack connections go to the simulator database instead
of the cloud database.
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 3d39716c8f)
Although libvirt supports resizing RBD volumes (and other formats) the
Java bindings (libvirt-java) don't.
Right now we use the Java bindings for librbd to handle the resizing for us,
but in the future this should be done by libvirt rather than these
Java bindings.
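For reference, resizing through the librbd Java bindings looks roughly
like this (a hedged sketch; verify the class and method names against the
rados-java version you build with):

    import com.ceph.rados.IoCTX;
    import com.ceph.rados.Rados;
    import com.ceph.rbd.Rbd;
    import com.ceph.rbd.RbdImage;

    public static void resizeRbdVolume(String monHost, String key,
            String pool, String name, long newSize) throws Exception {
        Rados rados = new Rados("admin");
        rados.confSet("mon_host", monHost);
        rados.confSet("key", key);
        rados.connect();
        IoCTX io = rados.ioCtxCreate(pool);
        try {
            Rbd rbd = new Rbd(io);
            RbdImage image = rbd.open(name);
            image.resize(newSize); // grow the image in place
            rbd.close(image);
        } finally {
            rados.ioCtxDestroy(io);
        }
    }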
- ManagementServerSimulatorImpl is not injected by the default context.
The configureSimulatorCmd API was loaded as part of it. Use
SimulatorManagerImpl as a PluggableService to inject the configureSimulator
API.
- Remove unused ManagementServerSimulatorImpl.
- Rename ConfigureSimulator to ConfigureSimulatorCmd for uniformity with
all API Cmds
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
(cherry picked from commit 0c294a50a8)
Make VSM-specific input parameters optional while adding a VMware cluster where no traffic is chosen to use the Nexus 1000v dvSwitch, even though the cloud-level vSwitch is Nexus 1000v.
Signed-off-by: Sateesh Chodapuneedi <sateesh@apache.org>