Detail: This merges the resizevolume feature branch, which provides the
ability to migrate a disk between disk offerings, thereby changing its
size, or to specify a new size if the current disk offering is custom.
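For illustration, a minimal sketch of the size-resolution rule described
above (type and method names are made up, not the actual resize code):

    // Sketch only: a fixed offering dictates the new size; a custom
    // offering requires the caller to supply one.
    final class ResizeRule {
        static long newSizeGb(boolean offeringIsCustom, long offeringSizeGb, Long requestedGb) {
            if (!offeringIsCustom) {
                return offeringSizeGb;  // size comes from the target disk offering
            }
            if (requestedGb == null) {
                throw new IllegalArgumentException("size is required for a custom disk offering");
            }
            return requestedGb;         // caller supplies the new size
        }
    }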
BUG-ID: CLOUDSTACK-644
Signed-off-by: Marcus Sorensen <marcus@betterservers.com> 1358358209 -0700
Although I still think the templates aren't well maintained, I just
added 12.04 since it is an LTS release and people probably want it in the
list of templates.
I think this system should be more generic, though.
right approach to populate the uuid column since it will impact upgrade as
well), and populate the UUID column in the seed data SQL script.
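For illustration, a hedged sketch of backfilling a uuid column during
upgrade via JDBC; the table name is a parameter and none of this is the
actual upgrade code:

    // Sketch only: give pre-existing rows a uuid during the upgrade.
    import java.sql.Connection;
    import java.sql.PreparedStatement;

    final class UuidBackfill {
        static void backfill(Connection conn, String table) throws Exception {
            // MySQL's UUID() generates a fresh value for every row it touches.
            String sql = "UPDATE " + table + " SET uuid = UUID() WHERE uuid IS NULL";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.executeUpdate();
            }
        }
    }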
Signed-off-by: Min Chen <min.chen@citrix.com>
The basic idea behind this: deploy a fixed-size thread pool for updating RvR
status, using a producer/consumer model. A global configuration,
router.check.poolsize (10 by default), controls the pool size.
A pool size of 100 for 1000 RvRs was tested with the simulator and works well.
We can also adjust the global configuration option router.check.interval, e.g.
from the default 30s to 60s, to mitigate the issue.
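For illustration, a minimal sketch of the producer/consumer model described
above; router.check.poolsize is from this commit, everything else is made up:

    // Sketch only: a fixed-size pool of consumers drains a queue of
    // router ids that a periodic producer fills.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;

    final class RvrStatusChecker {
        private final ExecutorService pool;
        private final BlockingQueue<Long> routerIds = new LinkedBlockingQueue<>();

        RvrStatusChecker(int poolSize) {   // router.check.poolsize, default 10
            pool = Executors.newFixedThreadPool(poolSize);
            for (int i = 0; i < poolSize; i++) {
                pool.submit(this::consume);
            }
        }

        void enqueue(long routerId) {      // producer: the periodic check task
            routerIds.offer(routerId);
        }

        private void consume() {
            try {
                while (true) {
                    checkStatus(routerIds.take());  // one status probe at a time
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        private void checkStatus(long routerId) { /* query the router's state */ }
    }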
For an LB device in inline mode, the IP deployer (the owner of the public IP) is the
firewall in front of it, not the LB itself. So check whether it's in inline mode;
if it is, return the firewall as the IP deployer.
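A minimal sketch of that check, with made-up names (not the actual element code):

    // Sketch only: in inline mode the fronting firewall owns the public IP,
    // so it is returned as the IP deployer instead of the LB itself.
    interface IpDeployer { /* deploys public IPs */ }

    final class LbElement implements IpDeployer {
        private final boolean inline;
        private final IpDeployer frontingFirewall;

        LbElement(boolean inline, IpDeployer frontingFirewall) {
            this.inline = inline;
            this.frontingFirewall = frontingFirewall;
        }

        IpDeployer getIpDeployer() {
            return inline ? frontingFirewall : this;
        }
    }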
These columns were not added by the upgrade from 4.0 to 4.1, causing SQL errors.
These columns are present in create-schema.sql, so by adding them here the upgrade to 4.1 works as well.
Multiple fixes:
1. changes to the mvn configuration
   a. include the simulator in client.war
   b. activate the simulator by profile
2. templates for the simulator
3. developer prefill for the simulator
   a. use deploydb-simulator to set up the simulator db
4. Inherit components-simulator.xml from components.xml
5. ListVolumesCommand was missing from MockStorageManager
6. Include the simulator properties in utils/db.properties
TODO:
Secondary storage VMs don't come up because ComponentLocator doesn't
retain a unique set of adapters by name. Fix this in a subsequent
checkin.
BootArgs carry the router priority for master and backup, and the simulator will
reuse the priority to decide RvR status. This deprecates the odd/even logic.
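For illustration, a hedged sketch of pulling the priority out of the boot
args; the "router_pr" key name is an assumption, not confirmed by this commit:

    // Sketch only: parse a priority token such as "router_pr=100" from the
    // kernel boot args, falling back when the token is absent.
    final class BootArgsPriority {
        static int routerPriority(String bootArgs, int fallback) {
            for (String token : bootArgs.trim().split("\\s+")) {
                if (token.startsWith("router_pr=")) {
                    return Integer.parseInt(token.substring("router_pr=".length()));
                }
            }
            return fallback;
        }
    }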
Signed-off-by: Prasanna Santhanam <tsp@apache.org>
1. Modified create-schema.sql to add the version as 4.1.0 instead of 4.0.0
2. Removed schema-40to41.sql and moved the content to schema-40to410.sql
3. Updated Upgrade40to41.java to reference schema-40to410.sql
Conflicts:
server/src/com/cloud/upgrade/dao/Upgrade40to41.java
setup/db/create-schema.sql
setup/db/db/schema-40to41.sql
setup/db/db/schema-40to410.sql
Signed-off-by: Min Chen <min.chen@citrix.com>
This is an improvement of:
commit 1ca493e4fa
Author: Sheng Yang <sheng.yang@cloud.com>
Date: Wed Feb 29 17:43:50 2012 -0800
bug 14042: Don't set dhcp:router option on DHCP server for non-default
network on CentOS/RHEL
The old solution only works on CentOS/RHEL; this one extends the ability to more
guest OSes and lets the user choose what the policy should be for each guest OS
type.
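For illustration, a hedged sketch of such a per-guest-OS policy; the map
contents are illustrative, not the shipped defaults:

    // Sketch only: decide per guest OS type whether the DHCP server should
    // send the dhcp:router option on a non-default network.
    import java.util.Map;

    final class DhcpRouterPolicy {
        private static final Map<String, Boolean> SEND_ROUTER_OPTION = Map.of(
                "CentOS", false,   // per the original fix, skip on CentOS/RHEL
                "RHEL", false,
                "Ubuntu", true);

        static boolean sendRouterOption(String guestOsType) {
            return SEND_ROUTER_OPTION.getOrDefault(guestOsType, true);
        }
    }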
IdentityProxy is only referenced in CreateCmdResponse, which involves
some async job logic changes. Since it is not impacting list performance,
it will be left there for now.
Signed-off-by: Min Chen <min.chen@citrix.com>
- introduces a Capability in the network offering that decides, when the EIP
service is enabled, whether a public IP should be assigned to the VM by default
- the default network offering with EIP/ELB service will still work with the old EIP
semantics, i.e. assign a public IP to each VM on start
This is part 1 of the list API refactoring. Commands covered:
listVmsCmd, listRoutersCmd. Responses covered:
UserVmResponse, DomainRouterResponse. DB views created:
user_vm_view, domain_router_view.
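For illustration, a hedged sketch of a list query against one of the new
views; only the view name user_vm_view is from this commit, the column
names are assumptions:

    // Sketch only: list APIs read denormalized rows from the view instead
    // of joining many tables per VM.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    final class ListVms {
        static void list(Connection conn, long accountId) throws SQLException {
            String sql = "SELECT id, name, state FROM user_vm_view WHERE account_id = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, accountId);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                    }
                }
            }
        }
    }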
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
Make cloud-setup-databases compatible with Python 2.4 and earlier.
Code added by Prasanna Santhanam <tsp@apache.org>
Partially revert a6dcd7af49, which removed
the fix for CLOUDSTACK-199: Fix how cloud-setup-databases parses its arguments.
The patch splits on the right-most @ in the supplied argument to get the
user:password and host substrings.
In Python 2.4 and earlier the following combined form is unsupported and produces a
SyntaxError.
    try:
        ... code ...
    except ValueError:
        ... code ...
    finally:
        ... code ...

The workaround is the following:

    try:
        try:
            ... code ...
        except ValueError:
            ... code ...
    finally:
        ... code ...
Credits to Prasanna Santhanam <tsp@apache.org>
Signed-off-by: Rohit Yadav <bhaisaab@apache.org>
This commit merges the nicira-l3support branch with master. This
effectively adds nicira nvp l3 support to master. The NiciraNVP Provider
can support the following services with this modification: Connectivity,
SourceNat, StaticNat and PortForwarding
Testing done:
Create, Delete network offerings with Nicira Element
Use the GUI to add, modify, and remove the Nicira Element and Provider
Provision, deprovision SourceNat networks
Provision, deprovision Portforwarding and StaticNat rules
Tested with Nicira NVP release 2.1.0, 2.2.0 and 2.2.1 (2.2.x recommended)
Support for local data disks. Currently the enable/disable config is at the zone level; in subsequent checkins it can be made more granular.
The following changes are made:
- The create disk offering API now takes an extra parameter to denote the storage type (local or shared). This is similar to the storage type in the service offering.
- Create/delete of data volumes on local storage.
- Attach/detach for local data volumes. Re-attach is allowed as long as the VM host and the data volume's storage pool host are the same.
- Migration of a VM instance is not supported if it uses local root or data volumes.
- Migrate volume is not supported for local volumes.
- Zone-level config to enable/disable local storage usage for service and disk offerings.
- Local storage gets discovered when a host is added/reconnected if the zone-level config is enabled. When disabled, existing local storage is not removed, but no new local storage is added.
- The deploy VM command validates service and disk offerings against the local storage config (see the sketch after this list).
- Upgrade uses the global config 'use.local.storage' to set the zone-level config for local storage.
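A minimal sketch of that deploy-time validation, with made-up names (not the
actual CloudStack code):

    // Sketch only: reject offerings that need local storage in a zone
    // where the zone-level local storage config is disabled.
    final class LocalStorageValidator {
        static void validate(boolean zoneLocalStorageEnabled, boolean offeringUsesLocalStorage) {
            if (offeringUsesLocalStorage && !zoneLocalStorageEnabled) {
                throw new IllegalArgumentException(
                    "Local storage is disabled in this zone; the service/disk offering cannot be used");
            }
        }
    }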
(cherry picked from commit 62710aed37606168012a0ed255a876c8e7954010)
Review: https://reviews.apache.org/r/6733/
Testing: Applied patch, and ran 'ant clean-all build-apidocs'
on both OSX and Ubuntu 12. Initial testing confirmed the issue
that Greg discovered on OSX. After the patch, the docs built
correctly.
Support for up to 16 VDIs per VM on XS 6.0 and above (16 VDIs => root + CD + 14 data volumes).
Currently in CS the number of data disks that can be attached to a VM is hard-coded to 6. Made this setting configurable by moving it to hypervisor capabilities.
Although XS 6.0 and above supports up to 16 VDIs, testing on XS 6.0.2 found that only 13 data volumes can be attached to a VM, so for XS 6.0 and 6.0.2 max_data_volumes_limit is currently set to 13.
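For illustration, a minimal sketch of the attach-time check this enables,
with made-up names; the limit is read from hypervisor capabilities
(max_data_volumes_limit) instead of the old hard-coded 6:

    // Sketch only: enforce the per-hypervisor data volume limit on attach.
    final class AttachVolumeCheck {
        static void check(int attachedDataVolumes, int maxDataVolumesLimit) {
            if (attachedDataVolumes >= maxDataVolumesLimit) {
                throw new IllegalStateException("VM already has the maximum of "
                        + maxDataVolumesLimit + " data volumes for this hypervisor version");
            }
        }
    }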
Signed-off-by: Koushik Das <Koushik.Das@citrix.com>
This patch adds RBD (RADOS Block Device) support for primary storage in combination with KVM.
To get this patch working you need:
- libvirt-java 0.4.8
- libvirt with RBD storage pool support (>0.9.13)
- Qemu with RBD support (>0.14)
The primary storage does not support all CloudStack functions yet; for example, snapshotting is disabled
because backing up an RBD snapshot is not possible in the way CloudStack wants to do it.
Creating templates from RBD volumes works well; creating a VM from a template, however, is still hit-and-miss.
NFS primary storage is still required as well; you are not able to run your system VMs from RBD, they will need
to run on NFS.
Other than these points, you can run instances with RBD-backed disks.
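For illustration, a hedged sketch of defining an RBD storage pool through
libvirt-java, which this patch relies on; the monitor host, pool name, and
secret UUID are placeholders, not values from this patch:

    // Sketch only: create an RBD storage pool via libvirt-java (0.4.8+).
    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public final class RbdPoolExample {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            String xml =
                "<pool type='rbd'>" +
                "  <name>cloudstack-rbd</name>" +
                "  <source>" +
                "    <host name='ceph-mon.example.com' port='6789'/>" +
                "    <name>cloudstack</name>" +  // the Ceph pool backing the volumes
                "    <auth username='admin' type='ceph'>" +
                "      <secret uuid='00000000-0000-0000-0000-000000000000'/>" +
                "    </auth>" +
                "  </source>" +
                "</pool>";
            StoragePool pool = conn.storagePoolCreateXML(xml, 0);
            System.out.println("Created RBD pool: " + pool.getName());
        }
    }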