Merging master into 2.1.refactor, resolve the merge conflicts as best I can. New commands related to extracting template/iso/volume and related to instance groups were refactored to the new API framework.

This commit is contained in:
Kris McQueen 2010-09-28 15:47:14 -07:00
commit 848ce60097
807 changed files with 41259 additions and 11550 deletions

21
.gitignore vendored

@ -1,4 +1,5 @@
build/build.number
<<<<<<< HEAD
build.number
bin
cloudstack-proprietary
@ -15,3 +16,23 @@ dist
cloud-*.tar.bz2
*.pyc
build.number
=======
bin/
cloudstack-proprietary/
premium/
.lock-wscript
artifacts/
.waf-*
waf-*
target/
override/
.metadata
dist/
*~
*.bak
cloud-*.tar.bz2
*.log
*.pyc
build.number
cloud.log.*.*
>>>>>>> master

652
HACKING

@ -1,652 +0,0 @@
---------------------------------------------------------------------
THE QUICK GUIDE TO CLOUDSTACK DEVELOPMENT
---------------------------------------------------------------------
=== Overview of the development lifecycle ===
To hack on a CloudStack component, you will generally:
1. Configure the source code:
./waf configure --prefix=/home/youruser/cloudstack
(see below, "./waf configure")
2. Build and install the CloudStack
./waf install
(see below, "./waf install")
3. Set the CloudStack component up
(see below, "Running the CloudStack components from source")
4. Run the CloudStack component
(see below, "Running the CloudStack components from source")
5. Modify the source code
6. Build and install the CloudStack again
./waf install --preserve-config
(see below, "./waf install")
7. GOTO 4
=== What is this waf thing in my development lifecycle? ===
waf is a self-contained, advanced build system written by Thomas Nagy,
in the spirit of SCons or the GNU autotools suite.
* To run waf on Linux / Mac: ./waf [...commands...]
* To run waf on Windows: waf.bat [...commands...]
./waf --help should be your first discovery point to find out both the
configure-time options and the different processes that you can run
using waf.
=== What do the different waf commands above do? ===
1. ./waf configure --prefix=/some/path
You run this command *once*, in preparation to building, or every
time you need to change a configure-time variable.
This runs configure() in wscript, which takes care of setting the
variables and options that waf will use for compilation and
installation, including the installation directory (PREFIX).
For convenience reasons, if you forget to run configure, waf
will proceed with some default configuration options. By
default, PREFIX is /usr/local, but you can set it e.g. to
/home/youruser/cloudstack if you plan to do a non-root
install. Beware that while you can later install the stack as a
regular user, most components need to *run* as root.
./waf showconfig displays the values of the configure-time options
2. ./waf
You run this command to trigger compilation of the modified files.
This runs the contents of wscript_build, which takes care of
discovering and describing what needs to be built, which
build products / sources need to be installed, and where.
3. ./waf install
You run this command when you want to install the CloudStack.
If you are going to install for production, you should run this
process as root. If, conversely, you only want to install the
stack as your own user, in a directory to which you have write
permission, it's fine to run waf install as your own user.
This runs the contents of wscript_build, with an option variable
Options.is_install = True. When this variable is set, waf will
install the files described in wscript_build. For convenience
reasons, when you run install, any files that need to be recompiled
will also be recompiled prior to installation.
--------------------
WARNING: each time you do ./waf install, the configuration files
in the installation directory are *overwritten*.
There are, however, two ways to get around this:
a) ./waf install has an option --preserve-config. If you pass
this option when installing, configuration files are never
overwritten.
This option is useful when you have modified source files and
you need to deploy them on a system that already has the
CloudStack installed and configured, but you do *not* want to
overwrite the existing configuration of the CloudStack.
If, however, you have reconfigured and rebuilt the source
since the last time you did ./waf install, then you are
advised to replace the configuration files and set the
components up again, because some configuration files
in the source use identifiers that may have changed during
the last ./waf configure. So, if this is your case, check
out the next way:
b) Every configuration file can be overridden in the source
without touching the original.
- Look for said config file X (or X.in) in the source, then
- create an override/ folder in the folder that contains X, then
- place a file named X (or X.in) inside override/, then
- put the desired contents inside X (or X.in)
Now, every time you run ./waf install, the file that will be
installed is path/to/override/X.in instead of path/to/X.in.
This option is useful if you are developing the CloudStack
and constantly reinstalling it. It guarantees that every
time you install the CloudStack, the installation will have
the correct configuration and will be ready to run.
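The override lookup described above can be sketched in Python. This is a hypothetical helper for illustration only (the real lookup is performed inside the waf build scripts at install time):

```python
import os

def resolve_source(path):
    """Return path/to/override/X if it exists, else path/to/X.

    Sketch of the override mechanism described above; the actual
    waf install logic may differ in detail.
    """
    directory, name = os.path.split(path)
    override = os.path.join(directory, "override", name)
    return override if os.path.exists(override) else path
```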
=== Running the CloudStack components from source (for debugging / coding) ===
It is not technically possible to run the CloudStack components
directly from the source tree. That, however, is fine -- each component
can be run independently from the install directory:
- Management Server
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set up the management server database:
- either run ./waf deploydb_kvm, or
- run $BINDIR/cloud-setup-databases
3) Execute ./waf run as your current user (or as root if the
installation path is only writable by root). Alternatively,
you can use ./waf debug and this will run with debugging enabled.
- Agent (Linux-only):
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Agent up:
- run $BINDIR/cloud-setup-agent
3) Execute ./waf run_agent as root
this will launch sudo and require your root password unless you have
set sudo up not to ask for it
- Console Proxy (Linux-only):
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Console Proxy up:
- run $BINDIR/cloud-setup-console-proxy
3) Execute ./waf run_console_proxy
this will launch sudo and require your root password unless you have
set sudo up not to ask for it
---------------------------------------------------------------------
BUILD SYSTEM TIPS
---------------------------------------------------------------------
=== Integrating compilation and execution of each component into Eclipse ===
To run the Management Server from Eclipse, set up an External Tool of the
Program variety. Put the path to the waf binary in the Location of the
window, and the source directory as Working Directory. Then specify
"install --preserve-config run" as arguments (without the quotes). You can
now use the Run button in Eclipse to execute the Management Server directly
from Eclipse. You can replace run with debug if you want to run the
Management Server with the Debugging Proxy turned on.
To run the Agent or Console Proxy from Eclipse, set up an External Tool of
the Program variety just like in the Management Server case. In there,
however, specify "install --preserve-config run_agent" or
"install --preserve-config run_console_proxy" as arguments instead.
Remember that you need to set sudo up to not ask you for a password and not
require a TTY, otherwise sudo -- implicitly called by waf run_agent or
waf run_console_proxy -- will refuse to work.
=== Building targets selectively ===
You can find out the targets of the build system:
./waf list_targets
If you want to run a specific task generator,
./waf build --targets=patchsubst
should run just that one (and whatever targets are required to build that
one, of course).
=== Common targets ===
* ./waf configure: you must always run configure once, and provide it with
the target installation paths for when you run install later
o --help: will show you all the configure options
o --no-dep-check: will skip dependency checks for java packages
needed to compile (saves 20 seconds when redoing the configure)
o --with-db-user, --with-db-pw, --with-db-host: informs the build
system of the MySQL configuration needed to set up the management
server upon install, and to do deploydb
* ./waf build: will compile any source files (and, on some projects, will
also perform any variable substitutions on any .in files such as the
MANIFEST files). Build outputs will be in <projectdir>/artifacts/default.
* ./waf install: will compile if not compiled yet, then execute an install
  of the built targets. (Making install work correctly required a couple
  dozen lines of custom code.)
* ./waf run: will run the management server in the foreground
* ./waf debug: will run the management server in the foreground, and open
port 8787 to connect with the debugger (see the Run / debug options of
waf --help to change that port)
* ./waf deploydb: deploys the database using the MySQL configuration supplied
with the configuration options when you did ./waf configure. RUN WAF BUILD
FIRST AT LEAST ONCE.
* ./waf dist: create a source tarball. These tarballs will be distributed
independently on our Web site, and will form the source release of the
Cloud Stack. It is a self-contained release that can be ./waf built and
./waf installed everywhere.
* ./waf clean: remove known build products
* ./waf distclean: remove the artifacts/ directory altogether
* ./waf uninstall: uninstall all installed files
* ./waf rpm: build RPM packages
o if the build fails because the system lacks dependencies from our
other modules, waf will attempt to install RPMs from the repos,
then try the build
o it will place the built packages in artifacts/rpmbuild/
* ./waf deb: build Debian packages
o if the build fails because the system lacks dependencies from our
other modules, waf will attempt to install DEBs from the repos,
then try the build
o it will place the built packages in artifacts/debbuild/
* ./waf uninstallrpms: removes all Cloud.com RPMs from a system (but not
logfiles or modified config files)
* ./waf viewrpmdeps: displays RPM dependencies declared in the RPM specfile
* ./waf installrpmdeps: runs Yum to install the packages required to build
the CloudStack
* ./waf uninstalldebs: removes all Cloud.com DEBs from a system (AND logfiles
AND modified config files)
* ./waf viewdebdeps: displays DEB dependencies declared in the project
debian/control file
* ./waf installdebdeps: runs aptitude to install the packages required to
build our software
=== Overriding certain source files ===
Earlier in this document we explored overriding configuration files.
Overrides are not limited to configuration files.
If you want to provide your own server-setup.xml or SQL files in client/setup:
* create a directory override inside the client/setup folder
* place your file that should override a file in client/setup there
There's also override support in client/tomcatconf and agent/conf.
=== Environment substitutions ===
Any file named "something.in" has its tokens (@SOMETOKEN@) automatically
replaced with the corresponding build environment variable. The build
environment variables are generally constructed at configure time and
controllable by the --command-line-parameters to waf configure, and should
be available as a list of variables inside the file
artifacts/c4che/build.default.py.
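The .in substitution amounts to a simple token replacement; a minimal sketch follows (illustrative only -- the real implementation lives inside waf's substitution support):

```python
import re

def subst_tokens(text, env):
    """Replace each @TOKEN@ with env['TOKEN'], leaving unknown tokens
    untouched (sketch of the *.in file substitution described above)."""
    return re.sub(r"@(\w+)@",
                  lambda m: str(env.get(m.group(1), m.group(0))),
                  text)
```

For example, subst_tokens("prefix=@PREFIX@", {"PREFIX": "/usr/local"}) yields "prefix=/usr/local".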
=== The prerelease mechanism ===
The prerelease mechanism (--prerelease=BRANCHNAME) allows developers and
builders to build packages with pre-release Release tags. The Release tags
are constructed in such a way that both the build number and the branch name
is included, so developers can push these packages to repositories and upgrade
them using yum or aptitude without having to delete packages manually and
install packages manually every time a new build is done. Any package built
with the prerelease mechanism gets a standard X.Y.Z version number -- and,
due to the way that the prerelease Release tags are concocted, always upgrades
any older prerelease package already present on any system. The prerelease
mechanism must never be used to create packages that are intended to be
released as stable software to the general public.
Relevant documentation:
http://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
http://fedoraproject.org/wiki/PackageNamingGuidelines#Pre-Release_packages
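Following the Fedora pre-release convention linked above, such a Release tag might be constructed like this. The format shown is an assumption for illustration; the real scheme is defined by the build scripts:

```python
def prerelease_release_tag(build_number, branch):
    """Build a pre-release Release tag embedding the build number and
    branch name. The leading "0." makes it sort before any final
    release tag (which starts at 1), per the Fedora guidelines.
    Hypothetical format, for illustration only.
    """
    return "0.%d.%s" % (build_number, branch)
```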
=== SCCS info ===
When building a source distribution (waf dist), or RPM/DEB distributions
(waf deb / waf rpm), waf will automatically detect the relevant source code
control information if the git command is present on the machine where waf
is run, and it will write the information to a file called sccs-info inside
the source tarball / install it into /usr/share/doc/cloud*/sccs-info when
installing the packages.
If this source code control information cannot be calculated, then the
old sccs-info file is preserved across dist runs if it exists; if it did
not exist before, the file notes that the source could not be properly
tracked down to a repository.
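A sketch of how such an sccs-info file might be produced (assumed logic for illustration; the actual code is in the waf build scripts):

```python
import subprocess

def sccs_info():
    """Return the contents of an sccs-info file: the git revision when
    git is available, otherwise a note that the source could not be
    tracked (sketch of the fallback behavior described above)."""
    try:
        rev = subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            stderr=subprocess.DEVNULL).decode().strip()
        return "git revision: %s\n" % rev
    except Exception:
        return "Source could not be tracked down to a repository.\n"
```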
=== Debugging the build system ===
Almost all targets have names. waf build -vvvvv --zones=task will give you
the task names that you can use in --targets.
---------------------------------------------------------------------
UNDERSTANDING THE BUILD SYSTEM
---------------------------------------------------------------------
=== Documentation for the build system ===
The first and foremost reference material:
- http://freehackers.org/~tnagy/wafbook/index.html
Examples
- http://code.google.com/p/waf/wiki/CodeSnippets
- http://code.google.com/p/waf/w/list
FAQ
- http://code.google.com/p/waf/wiki/FAQ
=== Why waf ===
The CloudStack uses waf to build itself. waf is a relative newcomer
to the build system world; it borrows concepts from SCons and
other later-generation build systems:
- waf is very flexible and rich; unlike other build systems, it covers
the entire life cycle, from compilation to installation to
uninstallation. it also supports dist (create source tarball),
distcheck (check that the source tarball compiles and installs),
autoconf-like checks for dependencies at compilation time,
and more.
- waf is self-contained. A single file, distributed with the project,
enables everything to be built, with only a dependency on Python,
which is freely available and ships with every major Linux distribution.
- waf also supports building projects written in multiple languages
(in the case of the CloudStack, we build from C, Java and Python).
- since waf is written in Python, the entire library of the Python
language is available to use in the build process.
=== Hacking on the build system: what are these wscript files? ===
1. wscript: contains most commands you can run from within waf
2. wscript_configure: contains the process that discovers the software
on the system and configures the build to fit that
3. wscript_build: contains a manifest of *what* is built and installed
Refer to the waf book for general information on waf:
http://freehackers.org/~tnagy/wafbook/index.html
=== What happens when waf runs ===
When you run waf, this happens behind the scenes:
- When you run waf for the first time, it unpacks itself to a hidden
directory .waf-1.X.Y.MD5SUM, including the main program and all
the Python libraries it provides and needs.
- Immediately after unpacking itself, waf reads the wscript file
at the root of the source directory. After parsing this file and
loading the functions defined here, it reads wscript_build and
generates a function build() based on it.
- After loading the build scripts as explained above, waf calls
the functions you specified in the command line.
So, for example, ./waf configure build install will:
* call configure() from wscript,
* call build() loaded from the contents of wscript_build,
* call build() once more but with Options.is_install = True.
As part of build(), waf invokes ant to build the Java portion of our
stack.
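The sequence above can be sketched as a toy model of the dispatch. Nothing here is real waf code; it only illustrates the load-then-call flow described above:

```python
import types

def run_waf(wscript_src, wscript_build_src, commands):
    """Toy model of waf's dispatch: load wscript, synthesize build()
    from wscript_build, then call each command named on the command
    line (install is build() with the install flag set)."""
    mod = types.ModuleType("wscript")
    exec(wscript_src, mod.__dict__)

    def build(is_install=False):
        # wscript_build is a script body, not a function; wrap it
        exec(wscript_build_src, {"is_install": is_install})

    results = []
    for cmd in commands:
        if cmd == "install":
            build(is_install=True)
        elif cmd == "build":
            build()
        else:
            getattr(mod, cmd)()   # e.g. configure()
        results.append(cmd)
    return results
```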
=== How and why we use ant within waf ===
By now, you have probably noticed that we do, indeed, ship ant
build files in the CloudStack. During the build process, waf calls
ant directly to build the Java portions of our stack, and it uses
the resulting JAR files to perform the installation.
The reason we do this rather than use the native waf capabilities
for building Java projects is simple: by using ant, we can leverage
the support built-in for ant in Eclipse and many other IDEs. Another
reason to do this is because Java developers are familiar with ant,
so adding a new JAR file or modifying what gets built into the
existing JAR files is facilitated for Java developers.
If you add to the ant build files a new ant target that uses the
compile-java macro, waf will automatically pick it up, along with its
depends= and JAR name attributes. In general, all you need to do is
add the produced JAR name to the packaging manifests (cloud.spec and
debian/{name-of-package}.install).
---------------------------------------------------------------------
FOR ANT USERS
---------------------------------------------------------------------
If you are using Ant directly instead of waf, these instructions apply to you.
In this section, the example instructions assume a local source repository rooted at c:\cloud. You are free to locate it anywhere you like.
3.1 Setup developer build type
1) Go to c:\cloud\java\build directory
2) Copy the file build-cloud.properties.template to build-cloud.properties, then modify the parameters to match your local setup. The template properties file has content like the following:
debug=true
debuglevel=lines,vars,source
tomcat.home=$TOMCAT_HOME --> change to your local Tomcat root directory such as c:/apache-tomcat-6.0.18
debug.jvmarg=-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
deprecation=off
build.type=developer
target.compat.version=1.5
source.compat.version=1.5
branding.name=default
3) Make sure the following Environment variables and Path are set:
set environment variables:
CATALINA_HOME:
JAVA_HOME:
CLOUD_HOME:
MYSQL_HOME:
update the path to include
MYSQL_HOME\bin
4) Clone a full directory tree of C:\cloud\java\build\deploy\production to C:\cloud\java\build\deploy\developer
You can use Windows Explorer to copy the directory tree over. Please note: during your daily development process, whenever you see updates in C:\cloud\java\build\deploy\production, be sure to sync them into C:\cloud\java\build\deploy\developer.
3.2 Common build instructions
After you have set up the build type, you are ready to build and run the Management Server locally.
cd java
python waf configure build install
More at Build system.
This will install the management server and its prerequisites in the appropriate place (your Tomcat instance on Windows, /usr/local on Linux). It will also install the agent to /usr/local/cloud/agent (this will change in the future).
4. Database and Server deployment
After a successful management server build (the database deployment scripts use some of the build artifacts), you can use the database deployment script to deploy and initialize the database. You can find the deployment scripts in C:/cloud/java/build/deploy/db. deploy-db.sh creates and populates your DB instance; please take a look at the contents of deploy-db.sh for more details.
Before you run the scripts, you should edit C:/cloud/java/build/deploy/developer/db/server-setup-dev.xml to allocate Public and Private IP ranges for your development setup. Ensure that the ranges you pick are unallocated to others.
Customized VM templates to be populated are in C:/cloud/java/build/deploy/developer/db/templates-dev.sql. Edit this file to customize the templates to your needs.
Deploy the DB by running
./deploy-db.sh ../developer/db/server-setup-dev.xml ../developer/db/templates-dev.sql
4.1. Management Server Deployment
ant build-server
Build Management Server
ant deploy-server
Deploy Management Server software to Tomcat environment
ant debug
Start the Management Server in debug mode. The JVM debug options can be found in build-cloud.properties.
ant run
Start Management Server in normal mode.
5. Agent deployment
After a successful build, you can find the build artifacts in the distribution directory; for the developer build type in this example, that is c:\cloud\java\dist\developer. In particular, if you have run the
ant package-agent
build command, you will find the agent software packaged in a single file named agent.zip under c:\cloud\java\dist\developer, together with the agent deployment script deploy-agent.sh.
5.1 Agent Type
Agent software can be deployed and configured to serve different roles at run time. In the current implementation there are three types of agent configuration, called Computing Server, Routing Server and Storage Server.
* When the agent software is configured as a Computing Server, it is responsible for hosting user VMs. The agent software should run in the Xen Dom0 system on the computing server machine.
* When the agent software is configured as a Routing Server, it is responsible for hosting the routing VMs for user virtual networks and the console proxy system VMs. The routing server serves as the bridge to the outside network, so the machine running the agent software should have at least two network interfaces: one toward the outside network and one participating in the internal VMOps management network. Like the computing server, the agent software on a routing server should also run in the Xen Dom0 system.
* When the agent software is configured as a Storage Server, it is responsible for providing storage service to all VMs. The storage service is based on ZFS running on a Solaris system, so the agent software on a storage server runs under Solaris (actually a Solaris VM). Dom0 systems on computing and routing servers access the storage service through an iSCSI initiator; the storage volume is eventually mounted on the Dom0 system and made available to DomU VMs through our agent software.
5.2 Resource sharing
All developers can share the same set of agent server machines for development. To make this possible, the concept of an instance appears in various places:
* VM names. VM names are structured names; they contain an instance section that distinguishes VMs belonging to different VMOps cloud instances. The VMOps cloud instance name is configured in the server configuration parameter AgentManager/instance.name.
* iSCSI initiator mount point. For Computing and Routing servers, the mount point distinguishes the mounted DomU VM images of different agent deployments. The mount location can be specified in the agent.properties file with a name-value pair named mount.parent.
* iSCSI target allocation point. For Storage servers, the allocation point distinguishes the storage allocations of different storage agent deployments. The allocation point can be specified in the agent.properties file with a name-value pair named parent.
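As an illustration, a structured VM name carrying an instance section might be composed and parsed like this. The exact format here is hypothetical; the real naming scheme is defined by the management server:

```python
def make_vm_name(prefix, vm_id, instance):
    """Compose a structured VM name whose last section identifies the
    VMOps cloud instance (hypothetical format, for illustration)."""
    return "%s-%d-%s" % (prefix, vm_id, instance)

def instance_of(vm_name):
    """Extract the instance section from such a structured VM name."""
    return vm_name.rsplit("-", 1)[1]
```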
5.4 Deploy agent software
Before running the deployment scripts, first copy the build artifacts agent.zip and deploy-agent.sh to your personal development directory on the agent server machines. By our current convention, your personal development directory is usually located at /root/<your name>. In the following example, the agent package and deployment script are copied to test0.lab.vmops.com and the deployment script has been marked as executable.
On build machine,
scp agent.zip root@test0:/root/<your name>
scp deploy-agent.sh root@test0:/root/<your name>
On agent server machine
chmod +x deploy-agent.sh
5.4.1 Deploy agent on computing server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t computing -m expert
5.4.2 Deploy agent on routing server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t routing -m expert
5.4.3 Deploy agent on storage server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t storage -m expert
5.5 Configure agent
After you have deployed the agent software, configure the agent by editing the agent.properties file under the /root/<your name>/agent/conf directory on each of the Routing, Computing and Storage servers. Add or edit the following properties; the rest are defaults that the agent populates at runtime.
workers=3
host=<replace with your management server IP>
port=8250
pod=<replace with your pod id>
zone=<replace with your zone id>
instance=<your unique instance name>
developer=true
The following is a sample agent.properties file for a Routing server:
workers=3
id=1
port=8250
pod=RC
storage=comstar
zone=RC
type=routing
private.network.nic=xenbr0
instance=RC
public.network.nic=xenbr1
developer=true
host=192.168.1.138
5.6 Running the agent
Edit /root/<your name>/agent/conf/log4j-cloud.xml to point the log locations somewhere under /root/<your name>.
Once you have deployed and configured the agent software, you are ready to launch it. Under the agent root directory (in our example, /root/<your name>/agent) there is a script file named run.sh that you can use to launch the agent.
Launch the agent as a detached background process:
nohup ./run.sh &
Launch the agent in interactive mode:
./run.sh
Launch the agent in debug mode; for example, the following command makes the JVM listen on TCP port 8787:
./run.sh -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
If the agent is launched in debug mode, you may use the Eclipse IDE to debug it remotely. Please note: when you are sharing an agent server machine with others, choose a TCP port that is not in use by someone else.
Please also note that run.sh also searches the /etc/cloud directory for agent.properties; make sure it uses the correct agent.properties file!
5.7 Stopping the agent
The PID of the agent process is stored in /var/run/agent.<Instance>.pid.
To stop the agent:
kill <pid of agent>
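The stop procedure above can be scripted as follows. This is a sketch: agent.<Instance>.pid is the pid file named above, and the sig parameter is an addition of this sketch (defaulting to SIGTERM) so the logic can be exercised with signal 0, which only checks that the process exists:

```python
import os
import signal

def stop_agent(instance, pid_dir="/var/run", sig=signal.SIGTERM):
    """Read <pid_dir>/agent.<instance>.pid and signal the agent process
    (sketch of the manual kill described above)."""
    pidfile = os.path.join(pid_dir, "agent.%s.pid" % instance)
    with open(pidfile) as f:
        pid = int(f.read().strip())
    os.kill(pid, sig)
    return pid
```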

155
INSTALL

@ -1,155 +0,0 @@
---------------------------------------------------------------------
TABLE OF CONTENTS
---------------------------------------------------------------------
1. Really quick start: building and installing a production stack
2. Post-install: setting the CloudStack components up
3. Installation paths: where the stack is installed on your system
4. Uninstalling the CloudStack from your system
---------------------------------------------------------------------
REALLY QUICK START: BUILDING AND INSTALLING A PRODUCTION STACK
---------------------------------------------------------------------
You have two options. Choose one:
a) Building distribution packages from the source and installing them
b) Building from the source and installing directly from there
=== I want to build and install distribution packages ===
This is the recommended way to run your CloudStack cloud. The
advantages are that dependencies are taken care of automatically
for you, and you can verify the integrity of the installed files
using your system's package manager.
1. As root, install the build dependencies.
a) Fedora / CentOS: ./waf installrpmdeps
b) Ubuntu: ./waf installdebdeps
2. As a non-root user, build the CloudStack packages.
a) Fedora / CentOS: ./waf rpm
b) Ubuntu: ./waf deb
3. As root, install the CloudStack packages.
You can choose which components to install on your system.
a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild
install as root: rpm -ivh artifacts/rpmbuild/RPMS/{x86_64,noarch,i386}/*.rpm
b) Ubuntu: the installable DEBs are in artifacts/debbuild
install as root: dpkg -i artifacts/debbuild/*.deb
4. Configure and start the components you intend to run.
Consult the Installation Guide to find out how to
configure each component, and "Installation paths" for information
on where programs, initscripts and config files are installed.
=== I want to build and install directly from the source ===
This is the recommended way to run your CloudStack cloud if you
intend to modify the source, if you intend to port the CloudStack to
another distribution, or if you intend to run the CloudStack on a
distribution for which packages are not built.
1. As root, install the build dependencies.
See below for a list.
2. As non-root, configure the build.
See below to discover configuration options.
./waf configure
3. As non-root, build the CloudStack.
To learn more, see "Quick guide to developing, building and
installing from source" below.
./waf build
4. As root, install the runtime dependencies.
See below for a list.
5. As root, install the CloudStack
./waf install
6. Configure and start the components you intend to run.
Consult the Installation Guide to find out how to
configure each component, and "Installation paths" for information
on where to find programs, initscripts and config files mentioned
in the Installation Guide (paths may vary).
=== Dependencies of the CloudStack ===
- Build dependencies:
1. FIXME DEPENDENCIES LIST THEM HERE
- Runtime dependencies:
2. FIXME DEPENDENCIES LIST THEM HERE
---------------------------------------------------------------------
INSTALLATION PATHS: WHERE THE STACK IS INSTALLED ON YOUR SYSTEM
---------------------------------------------------------------------
The CloudStack build system installs files on a variety of paths, each
one of which is selectable when building from source.
- $PREFIX:
the default prefix where the entire stack is installed
defaults to /usr/local on source builds
defaults to /usr on package builds
- $SYSCONFDIR/cloud:
the prefix for CloudStack configuration files
defaults to $PREFIX/etc/cloud on source builds
defaults to /etc/cloud on package builds
- $SYSCONFDIR/init.d:
the prefix for CloudStack initscripts
defaults to $PREFIX/etc/init.d on source builds
defaults to /etc/init.d on package builds
- $BINDIR:
the CloudStack installs programs there
defaults to $PREFIX/bin on source builds
defaults to /usr/bin on package builds
- $LIBEXECDIR:
the CloudStack installs service runners there
defaults to $PREFIX/libexec on source builds
defaults to /usr/libexec on package builds (/usr/bin on Ubuntu)
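The defaults above can be summarized in a small helper. This is a sketch only: the actual values are chosen at ./waf configure time and by the packaging, and the Ubuntu libexec exception noted above is omitted here:

```python
def install_paths(prefix="/usr/local", packaged=False):
    """Default CloudStack installation paths as described above
    (sketch; real values are set at ./waf configure time)."""
    if packaged:
        # package builds use system-wide paths
        return {"PREFIX": "/usr", "SYSCONFDIR": "/etc",
                "BINDIR": "/usr/bin", "LIBEXECDIR": "/usr/libexec"}
    # source builds hang everything off $PREFIX
    return {"PREFIX": prefix, "SYSCONFDIR": prefix + "/etc",
            "BINDIR": prefix + "/bin", "LIBEXECDIR": prefix + "/libexec"}
```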
---------------------------------------------------------------------
UNINSTALLING THE CLOUDSTACK FROM YOUR SYSTEM
---------------------------------------------------------------------
- If you installed the CloudStack using packages, use your operating
system package manager to remove the CloudStack packages.
a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild
as root: rpm -qa | grep ^cloud- | xargs rpm -e
b) Ubuntu: the installable DEBs are in artifacts/debbuild
aptitude purge '~ncloud'
- If you installed from a source tree:
./waf uninstall

52
README

@ -1,52 +0,0 @@
Hello, and thanks for downloading the Cloud.com CloudStack™! The
Cloud.com CloudStack™ is Open Source Software that allows
organizations to build Infrastructure as a Service (IaaS) clouds.
Working with server, storage, and networking equipment of your
choice, the CloudStack provides a turn-key software stack that
dramatically simplifies the process of deploying and managing a
cloud.
---------------------------------------------------------------------
HOW TO INSTALL THE CLOUDSTACK
---------------------------------------------------------------------
Please refer to the document INSTALL distributed with the source.
---------------------------------------------------------------------
HOW TO HACK ON THE CLOUDSTACK
---------------------------------------------------------------------
Please refer to the document HACKING distributed with the source.
---------------------------------------------------------------------
BE PART OF THE CLOUD.COM COMMUNITY!
---------------------------------------------------------------------
We are more than happy to answer your questions, see you hack on our
source code, and receive your contributions.
* Our forums are available at http://cloud.com/community .
* If you would like to modify / extend / hack on the CloudStack source,
refer to the file HACKING for more information.
* If you find bugs, please log on to http://bugs.cloud.com/ and file
a report.
* If you have patches to send us get in touch with us at info@cloud.com
or file them as attachments in our bug tracker above.
---------------------------------------------------------------------
Cloud.com's contact information is:
20400 Stevens Creek Blvd
Suite 390
Cupertino, CA 95014
Tel: +1 (888) 384-0962
This software is OSI certified Open Source Software. OSI Certified is a
certification mark of the Open Source Initiative.

View File

@ -512,6 +512,13 @@ Also see [[AdvancedOptions]]</pre>
</div>
<!--POST-SHADOWAREA-->
<div id="storeArea">
<div title="(default) on http://tiddlyvault.tiddlyspot.com/#%5B%5BDisableWikiLinksPlugin%20(TiddlyTools)%5D%5D" modifier="(System)" created="201009040211" tags="systemServer" changecount="1">
<pre>|''Type:''|file|
|''URL:''|http://tiddlyvault.tiddlyspot.com/#%5B%5BDisableWikiLinksPlugin%20(TiddlyTools)%5D%5D|
|''Workspace:''|(default)|
This tiddler was automatically created to record the details of this server</pre>
</div>
<div title="AntInformation" creator="RuddO" modifier="RuddO" created="201008072228" changecount="1">
<pre>---------------------------------------------------------------------
FOR ANT USERS
@ -702,21 +709,18 @@ Once this command is done, the packages will be built in the directory {{{artifa
# As a non-root user, run the command {{{./waf deb}}} in the source directory.
Once this command is done, the packages will be built in the directory {{{artifacts/debbuild}}}.</pre>
</div>
<div title="Building from the source and installing directly from there" creator="RuddO" modifier="RuddO" created="201008080022" modified="201008081327" changecount="14">
<pre>!Obtain the source for the CloudStack
<div title="Building from the source and installing directly from there" creator="RuddO" modifier="RuddO" created="201008080022" modified="201009040235" changecount="20">
<pre>You need to do the following steps on each machine that will run a CloudStack component.
!Obtain the source for the CloudStack
If you aren't reading this from a local copy of the source code, see [[Obtaining the source]].
!Prepare your development environment
See [[Preparing your development environment]].
!Configure the build on the builder machine
!Prepare your environment
See [[Preparing your environment]].
!Configure the build
As non-root, run the command {{{./waf configure}}}. See [[waf configure]] to discover configuration options for that command.
!Build the CloudStack on the builder machine
!Build the CloudStack
As non-root, run the command {{{./waf build}}}. See [[waf build]] for an explanation.
!Install the CloudStack on the target systems
On each machine where you intend to run a CloudStack component:
# upload the entire source code tree after compilation, //ensuring that the source ends up in the same path as the machine in which you compiled it//,
## {{{rsync}}} is [[usually very handy|Using rsync to quickly transport the source tree to another machine]] for this
# in that newly uploaded directory of the target machine, run the command {{{./waf install}}} //as root//.
Consult [[waf install]] for information on installation.</pre>
!Install the CloudStack
Run the command {{{./waf install}}} //as root//. Consult [[waf install]] for information on installation.</pre>
</div>
<div title="Changing the build, install and packaging processes" creator="RuddO" modifier="RuddO" created="201008081215" modified="201008081309" tags="fixme" changecount="15">
<pre>!Changing the [[configuration|waf configure]] process
@ -737,11 +741,91 @@ See the files in the {{{debian/}}} folder.</pre>
<div title="CloudStack" creator="RuddO" modifier="RuddO" created="201008072205" changecount="1">
<pre>The Cloud.com CloudStack is an open source software product that enables the deployment, management, and configuration of multi-tier and multi-tenant infrastructure cloud services by enterprises and service providers.</pre>
</div>
<div title="CloudStack build dependencies" creator="RuddO" modifier="RuddO" created="201008081310" tags="fixme" changecount="1">
<pre>Not done yet!</pre>
<div title="CloudStack build dependencies" creator="RuddO" modifier="RuddO" created="201008081310" modified="201009040226" changecount="20">
<pre>Prior to building the CloudStack, you need to install the following software packages in your system.
# Sun Java 1.6
## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
# Apache Tomcat
## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
# MySQL
## At the very minimum, you need to have the client and libraries installed
## If your development machine is also going to be the database server, you need to have the server installed and running as well
# Python 2.6
## Ensure that the {{{python}}} command is in your {{{PATH}}}
## Do ''not'' install Cygwin Python!
# The MySQLdb module for Python 2.6
## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]]
# The Bourne-again shell (also known as bash)
# GNU coreutils
''Note for Windows users'': Some of the packages in the above list are only available on Windows through Cygwin. If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your {{{PATH}}}. Under no circumstances install Cygwin Python! Use the Python for Windows official installer instead.
!Additional dependencies for Linux development environments
# GCC (only needed on Linux)
# glibc-devel / glibc-dev
# The Java packages (usually available in your distribution):
## commons-collections
## commons-dbcp
## commons-logging
## commons-logging-api
## commons-pool
## commons-httpclient
## ws-commons-util
# useradd
# userdel</pre>
</div>
<div title="CloudStack run-time dependencies" creator="RuddO" modifier="RuddO" created="201008081310" tags="fixme" changecount="1">
<pre>Not done yet!</pre>
<div title="CloudStack run-time dependencies" creator="RuddO" modifier="RuddO" created="201008081310" modified="201009040225" tags="fixme" changecount="16">
<pre>The following software / programs must be correctly installed on the machines where you will run a CloudStack component. This list is by no means complete yet, but it will be soon.
''Note for Windows users'': Some of the packages in the lists below are only available on Windows through Cygwin. If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your {{{PATH}}}. Under no circumstances install Cygwin Python! Use the Python for Windows official installer instead.
!Run-time dependencies common to all components of the CloudStack
# bash
# coreutils
# Sun Java 1.6
## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
# Python 2.6
## Ensure that the {{{python}}} command is in your {{{PATH}}}
## Do ''not'' install Cygwin Python!
# The Java packages (usually available in your distribution):
## commons-collections
## commons-dbcp
## commons-logging
## commons-logging-api
## commons-pool
## commons-httpclient
## ws-commons-util
!Management Server-specific dependencies
# Apache Tomcat
## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
# MySQL
## At the very minimum, you need to have the client and libraries installed
## If you will be running the Management Server in the same machine that will run the database server, you need to have the server installed and running as well
# The MySQLdb module for Python 2.6
## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]]
# openssh-clients (provides the ssh-keygen command)
# mkisofs (provides the genisoimage command)</pre>
</div>
<div title="Database migration infrastructure" creator="RuddO" modifier="RuddO" created="201009011837" modified="201009011852" changecount="14">
<pre>To support incremental migration from one version to another without having to redeploy the database, the CloudStack supports an incremental schema migration mechanism for the database.
!!!How does it work?
When the database is deployed for the first time with [[waf deploydb]] or the command {{{cloud-setup-databases}}}, a row is written to the {{{configuration}}} table, named {{{schema.level}}} and containing the current schema level. This schema level row comes from the file {{{setup/db/schema-level.sql}}} in the source (refer to the [[Installation paths]] topic to find out where this file is installed in a running system).
This value is used by the database migrator {{{cloud-migrate-databases}}} (source {{{setup/bindir/cloud-migrate-databases.in}}}) to determine the starting schema level. The database migrator has a series of classes -- each class represents a step in the migration process and is usually tied to the execution of a SQL file stored in {{{setup/db}}}. To migrate the database, the database migrator:
# walks the list of steps it knows about,
# generates a list of steps sorted by the order they should be executed in,
# executes each step in order
# at the end of each step, records the new schema level to the database table {{{configuration}}}
For more information, refer to the database migrator source -- it is documented.
!!!What impact does this have on me as a developer?
Whenever you need to evolve the schema of the database:
# write a migration SQL script and store it in {{{setup/db}}},
# include your schema changes in the appropriate SQL file {{{create-*.sql}}} too (as the database is expected to be at its latest evolved schema level right after deploying a fresh database)
# write a class in {{{setup/bindir/cloud-migrate-databases.in}}}, describing the migration step; in detail:
## the schema level your migration step expects the database to be in,
## the schema level your migration step will leave your database in (presumably the latest schema level, which you will have to choose!),
## and the name / description of the step
# bump the schema level in {{{setup/db/schema-level.sql}}} to the latest schema level
Otherwise, ''end-user migration will fail catastrophically''.</pre>
</div>
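The step-chaining described above (each migration step declaring the schema level it expects and the level it leaves the database in) can be sketched outside the real migrator; the class, field names, and level strings below are illustrative, not the actual {{{cloud-migrate-databases}}} code:

```python
# Hypothetical sketch of level-chained schema migration steps.
# Names and version strings are invented for illustration only.
class MigrationStep:
    def __init__(self, from_level, to_level, name, sql_file):
        self.from_level = from_level  # schema level this step expects
        self.to_level = to_level      # schema level this step produces
        self.name = name
        self.sql_file = sql_file      # SQL script stored in setup/db

def plan(steps, current_level):
    """Sort the known steps into execution order, starting from the
    schema level currently recorded in the `configuration` table."""
    by_from = {s.from_level: s for s in steps}
    ordered = []
    while current_level in by_from:
        step = by_from[current_level]
        ordered.append(step)
        # after executing each step, the migrator would record
        # step.to_level back into the configuration table
        current_level = step.to_level
    return ordered
```

A migrator built this way executes zero steps when the database is already at the latest level, which matches the behavior described above.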
<div title="DefaultTiddlers" creator="RuddO" modifier="RuddO" created="201008072205" modified="201008072257" changecount="4">
<pre>[[Welcome]]</pre>
@ -749,13 +833,115 @@ See the files in the {{{debian/}}} folder.</pre>
<div title="Development conventions" creator="RuddO" modifier="RuddO" created="201008081334" modified="201008081336" changecount="4">
<pre>#[[Source layout guide]]</pre>
</div>
<div title="DisableWikiLinksPlugin" modifier="ELSDesignStudios" created="200512092239" modified="200807230133" tags="systemConfig" server.type="file" server.host="www.tiddlytools.com" server.page.revision="200807230133">
<pre>/***
|Name|DisableWikiLinksPlugin|
|Source|http://www.TiddlyTools.com/#DisableWikiLinksPlugin|
|Version|1.6.0|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|selectively disable TiddlyWiki's automatic ~WikiWord linking behavior|
This plugin allows you to disable TiddlyWiki's automatic ~WikiWord linking behavior, so that WikiWords embedded in tiddler content will be rendered as regular text, instead of being automatically converted to tiddler links. To create a tiddler link when automatic linking is disabled, you must enclose the link text within {{{[[...]]}}}.
!!!!!Usage
&lt;&lt;&lt;
You can block automatic WikiWord linking behavior for any specific tiddler by ''tagging it with &lt;&lt;tag excludeWikiWords&gt;&gt;'' (see configuration below) or check a plugin option to disable automatic WikiWord links to non-existing tiddler titles, while still linking WikiWords that correspond to existing tiddler titles or shadow tiddler titles. You can also block specific selected WikiWords from being automatically linked by listing them in [[DisableWikiLinksList]] (see configuration below), separated by whitespace. This tiddler is optional and, when present, causes the listed words to always be excluded, even if automatic linking of other WikiWords is being permitted.
Note: WikiWords contained in default ''shadow'' tiddlers will be automatically linked unless you select the additional checkbox option that lets you disable these automatic links as well, though this is not recommended, since it can make it more difficult to access some TiddlyWiki standard default content (such as AdvancedOptions or SideBarTabs).
&lt;&lt;&lt;
!!!!!Configuration
&lt;&lt;&lt;
&lt;&lt;option chkDisableWikiLinks&gt;&gt; Disable ALL automatic WikiWord tiddler links
&lt;&lt;option chkAllowLinksFromShadowTiddlers&gt;&gt; ... except for WikiWords //contained in// shadow tiddlers
&lt;&lt;option chkDisableNonExistingWikiLinks&gt;&gt; Disable automatic WikiWord links for non-existing tiddlers
Disable automatic WikiWord links for words listed in: &lt;&lt;option txtDisableWikiLinksList&gt;&gt;
Disable automatic WikiWord links for tiddlers tagged with: &lt;&lt;option txtDisableWikiLinksTag&gt;&gt;
&lt;&lt;&lt;
!!!!!Revisions
&lt;&lt;&lt;
2008.07.22 [1.6.0] hijack tiddler changed() method to filter disabled wiki words from internal links[] array (so they won't appear in the missing tiddlers list)
2007.06.09 [1.5.0] added configurable txtDisableWikiLinksTag (default value: &quot;excludeWikiWords&quot;) to allows selective disabling of automatic WikiWord links for any tiddler tagged with that value.
2006.12.31 [1.4.0] in formatter, test for chkDisableNonExistingWikiLinks
2006.12.09 [1.3.0] in formatter, test for excluded wiki words specified in DisableWikiLinksList
2006.12.09 [1.2.2] fix logic in autoLinkWikiWords() (was allowing links TO shadow tiddlers, even when chkDisableWikiLinks is TRUE).
2006.12.09 [1.2.1] revised logic for handling links in shadow content
2006.12.08 [1.2.0] added hijack of Tiddler.prototype.autoLinkWikiWords so regular (non-bracketed) WikiWords won't be added to the missing list
2006.05.24 [1.1.0] added option to NOT bypass automatic wikiword links when displaying default shadow content (default is to auto-link shadow content)
2006.02.05 [1.0.1] wrapped wikifier hijack in init function to eliminate globals and avoid FireFox 1.5.0.1 crash bug when referencing globals
2005.12.09 [1.0.0] initial release
&lt;&lt;&lt;
!!!!!Code
***/
//{{{
version.extensions.DisableWikiLinksPlugin= {major: 1, minor: 6, revision: 0, date: new Date(2008,7,22)};
if (config.options.chkDisableNonExistingWikiLinks==undefined) config.options.chkDisableNonExistingWikiLinks= false;
if (config.options.chkDisableWikiLinks==undefined) config.options.chkDisableWikiLinks=false;
if (config.options.txtDisableWikiLinksList==undefined) config.options.txtDisableWikiLinksList=&quot;DisableWikiLinksList&quot;;
if (config.options.chkAllowLinksFromShadowTiddlers==undefined) config.options.chkAllowLinksFromShadowTiddlers=true;
if (config.options.txtDisableWikiLinksTag==undefined) config.options.txtDisableWikiLinksTag=&quot;excludeWikiWords&quot;;
// find the formatter for wikiLink and replace handler with 'pass-thru' rendering
initDisableWikiLinksFormatter();
function initDisableWikiLinksFormatter() {
for (var i=0; i&lt;config.formatters.length &amp;&amp; config.formatters[i].name!=&quot;wikiLink&quot;; i++);
config.formatters[i].coreHandler=config.formatters[i].handler;
config.formatters[i].handler=function(w) {
// suppress any leading &quot;~&quot; (if present)
var skip=(w.matchText.substr(0,1)==config.textPrimitives.unWikiLink)?1:0;
var title=w.matchText.substr(skip);
var exists=store.tiddlerExists(title);
var inShadow=w.tiddler &amp;&amp; store.isShadowTiddler(w.tiddler.title);
// check for excluded Tiddler
if (w.tiddler &amp;&amp; w.tiddler.isTagged(config.options.txtDisableWikiLinksTag))
{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
// check for specific excluded wiki words
var t=store.getTiddlerText(config.options.txtDisableWikiLinksList);
if (t &amp;&amp; t.length &amp;&amp; t.indexOf(w.matchText)!=-1)
{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
// if not disabling links from shadows (default setting)
if (config.options.chkAllowLinksFromShadowTiddlers &amp;&amp; inShadow)
return this.coreHandler(w);
// check for non-existing non-shadow tiddler
if (config.options.chkDisableNonExistingWikiLinks &amp;&amp; !exists)
{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
// if not enabled, just do standard WikiWord link formatting
if (!config.options.chkDisableWikiLinks)
return this.coreHandler(w);
// just return text without linking
w.outputText(w.output,w.matchStart+skip,w.nextMatch)
}
}
Tiddler.prototype.coreAutoLinkWikiWords = Tiddler.prototype.autoLinkWikiWords;
Tiddler.prototype.autoLinkWikiWords = function()
{
// if all automatic links are not disabled, just return results from core function
if (!config.options.chkDisableWikiLinks)
return this.coreAutoLinkWikiWords.apply(this,arguments);
return false;
}
Tiddler.prototype.disableWikiLinks_changed = Tiddler.prototype.changed;
Tiddler.prototype.changed = function()
{
this.disableWikiLinks_changed.apply(this,arguments);
// remove excluded wiki words from links array
var t=store.getTiddlerText(config.options.txtDisableWikiLinksList,&quot;&quot;).readBracketedList();
if (t.length) for (var i=0; i&lt;t.length; i++)
if (this.links.contains(t[i]))
this.links.splice(this.links.indexOf(t[i]),1);
};
//}}}</pre>
</div>
<div title="Git" creator="RuddO" modifier="RuddO" created="201008081330" tags="fixme" changecount="1">
<pre>Not done yet!</pre>
</div>
<div title="Hacking on the CloudStack" creator="RuddO" modifier="RuddO" created="201008072228" modified="201008081354" changecount="47">
<div title="Hacking on the CloudStack" creator="RuddO" modifier="RuddO" created="201008072228" modified="201009040156" changecount="52">
<pre>Start here if you want to learn the essentials to extend, modify and enhance the CloudStack. This assumes that you've already familiarized yourself with CloudStack concepts, installation and configuration using the [[Getting started|Welcome]] instructions.
* [[Obtain the source|Obtaining the source]]
* [[Prepare your environment|Preparing your development environment]]
* [[Prepare your environment|Preparing your environment]]
* [[Get acquainted with the development lifecycle|Your development lifecycle]]
* [[Familiarize yourself with our development conventions|Development conventions]]
Extra developer information:
@ -764,6 +950,7 @@ Extra developer information:
* [[How to integrate with Eclipse]]
* [[Starting over]]
* [[Making a source release|waf dist]]
* [[How to write database migration scripts|Database migration infrastructure]]
</pre>
</div>
<div title="How to integrate with Eclipse" creator="RuddO" modifier="RuddO" created="201008081029" modified="201008081346" changecount="3">
@ -785,13 +972,13 @@ Any ant target added to the ant project files will automatically be detected --
The reason we do this rather than use the native waf capabilities for building Java projects is simple: by using ant, we can leverage the support built-in for ant in [[Eclipse|How to integrate with Eclipse]] and many other &quot;&quot;&quot;IDEs&quot;&quot;&quot;. Another reason to do this is because Java developers are familiar with ant, so adding a new JAR file or modifying what gets built into the existing JAR files is facilitated for Java developers.</pre>
</div>
<div title="Installation paths" creator="RuddO" modifier="RuddO" created="201008080025" modified="201008080028" changecount="6">
<div title="Installation paths" creator="RuddO" modifier="RuddO" created="201008080025" modified="201009012342" changecount="8">
<pre>The CloudStack build system installs files on a variety of paths, each
one of which is selectable when building from source.
* {{{$PREFIX}}}:
** the default prefix where the entire stack is installed
** defaults to /usr/local on source builds
** defaults to /usr on package builds
** defaults to {{{/usr/local}}} on source builds as root, {{{$HOME/cloudstack}}} on source builds as a regular user, {{{C:\CloudStack}}} on Windows builds
** defaults to {{{/usr}}} on package builds
* {{{$SYSCONFDIR/cloud}}}:
** the prefix for CloudStack configuration files
** defaults to $PREFIX/etc/cloud on source builds
@ -901,16 +1088,17 @@ This will create a folder called {{{cloudstack-oss}}} in your current folder.
!Browsing the source code online
You can browse the CloudStack source code through [[our CGit Web interface|http://git.cloud.com/cloudstack-oss]].</pre>
</div>
<div title="Preparing your development environment" creator="RuddO" modifier="RuddO" created="201008081133" modified="201008081159" changecount="7">
<pre>!Install the build dependencies on the machine where you will compile the CloudStack
!!Fedora / CentOS
The command [[waf installrpmdeps]] issued from the source tree gets it done.
!!Ubuntu
The command [[waf installdebdeps]] issued from the source tree gets it done.
!!Other distributions
See [[CloudStack build dependencies]]
!Install the run-time dependencies on the machines where you will run the CloudStack
See [[CloudStack run-time dependencies]].</pre>
<div title="Preparing your environment" creator="RuddO" modifier="RuddO" created="201008081133" modified="201009040238" changecount="17">
<pre>!Install the build dependencies
* If you want to compile the CloudStack on Linux:
** Fedora / CentOS: The command [[waf installrpmdeps]] issued from the source tree gets it done.
** Ubuntu: The command [[waf installdebdeps]] issued from the source tree gets it done.
** Other distributions: Manually install the packages listed in [[CloudStack build dependencies]].
* If you want to compile the CloudStack on Windows or Mac:
** Manually install the packages listed in [[CloudStack build dependencies]].
** Note that you won't be able to deploy this compiled CloudStack onto Linux machines -- you will be limited to running the Management Server.
!Install the run-time dependencies
In addition to the build dependencies, a number of software packages need to be installed on the machine to be able to run certain components of the CloudStack. These packages are not strictly required to //build// the stack, but they are required to run at least one part of it. See the topic [[CloudStack run-time dependencies]] for the list of packages.</pre>
</div>
<div title="Preserving the CloudStack configuration across source reinstalls" creator="RuddO" modifier="RuddO" created="201008080958" modified="201008080959" changecount="2">
<pre>Every time you run {{{./waf install}}} to deploy changed code, waf will install configuration files once again. This can be a nuisance if you are developing the stack.
@ -1149,9 +1337,9 @@ Cloud.com's contact information is:
!Legal information
//Unless otherwise specified// by Cloud.com, Inc., or in the sources themselves, [[this software is OSI certified Open Source Software distributed under the GNU General Public License, version 3|License statement]]. OSI Certified is a certification mark of the Open Source Initiative. The software powering this documentation is &quot;&quot;&quot;BSD-licensed&quot;&quot;&quot; and obtained from [[TiddlyWiki.com|http://tiddlywiki.com/]].</pre>
</div>
<div title="Your development lifecycle" creator="RuddO" modifier="RuddO" created="201008080933" modified="201008081349" changecount="16">
<pre>This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your development environment]]:
# [[Configure|waf configure]] the source code&lt;br&gt;{{{./waf configure --prefix=/home/youruser/cloudstack}}}
<div title="Your development lifecycle" creator="RuddO" modifier="RuddO" created="201008080933" modified="201009040158" changecount="18">
<pre>This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your environment]]:
# [[Configure|waf configure]] the source code&lt;br&gt;{{{./waf configure}}}
# [[Build|waf build]] and [[install|waf install]] the CloudStack
## {{{./waf install}}}
## [[How to perform these tasks from Eclipse|How to integrate with Eclipse]]
@ -1229,7 +1417,7 @@ Makes an inventory of all build products in {{{artifacts/default}}}, and removes
Contrast to [[waf distclean]].</pre>
</div>
<div title="waf configure" creator="RuddO" modifier="RuddO" created="201008080940" modified="201008081146" changecount="14">
<div title="waf configure" creator="RuddO" modifier="RuddO" created="201008080940" modified="201009012344" changecount="15">
<pre>{{{
./waf configure --prefix=/directory/that/you/have/write/permission/to
}}}
@ -1238,7 +1426,7 @@ This runs the file {{{wscript_configure}}}, which takes care of setting the var
!When / why should I run this?
You run this command //once//, in preparation to building the stack, or every time you need to change a configure-time variable. Once you find an acceptable set of configure-time variables, you should not need to run {{{configure}}} again.
!What happens if I don't run it?
For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options. By default, {{{PREFIX}}} is {{{/usr/local}}}, but you can set it e.g. to {{{/home/youruser/cloudstack}}} if you plan to do a non-root install. Beware that you can later install the stack as a regular user, but most components need to //run// as root.
For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options. By default, {{{PREFIX}}} is {{{/usr/local}}} if you configure as root, or {{{/home/youruser/cloudstack}}} if you configure as your regular user name (do this if you plan to do a non-root install). Beware that you can later install the stack as a regular user, but most components need to //run// as root.
!What variables / options exist for configure?
In general: refer to the output of {{{./waf configure --help}}}.

View File

@ -1,7 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys, os, subprocess, errno, re, traceback
import sys, os, subprocess, errno, re, traceback, getopt
# ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
# ---- We do this so cloud_utils can be looked up in the following order:
@ -37,13 +37,44 @@ backupdir = "@SHAREDSTATEDIR@/@AGENTPATH@/etcbackup"
#=================== the magic happens here ====================
stderr("Welcome to the Cloud Agent setup")
stderr("")
try:
# parse cmd line
opts, args = getopt.getopt(sys.argv[1:], "a", ["host=", "zone=", "pod=", "cluster=", "no-kvm", "guid="])
host=None
zone=None
pod=None
cluster=None
guid=None
autoMode=False
do_check_kvm = True
for opt, arg in opts:
if opt == "--host":
if arg != "":
host = arg
elif opt == "--zone":
if arg != "":
zone = arg
elif opt == "--pod":
if arg != "":
pod = arg
elif opt == "--cluster":
if arg != "":
cluster = arg
elif opt == "--guid":
if arg != "":
guid = arg
elif opt == "--no-kvm":
do_check_kvm = False
elif opt == "-a":
autoMode=True
if autoMode:
cloud_utils.setLogFile("/var/log/cloud/setupAgent.log")
stderr("Welcome to the Cloud Agent setup")
stderr("")
# pre-flight checks for things that the administrator must fix
do_check_kvm = not ( "--no-kvm" in sys.argv[1:] )
try:
for f,n in cloud_utils.preflight_checks(
do_check_kvm=do_check_kvm
@ -59,6 +90,8 @@ try:
try:
tasks = cloud_utils.config_tasks(brname)
for t in tasks:
t.setAutoMode(autoMode)
if all( [ t.done() for t in tasks ] ):
stderr("All configuration tasks have been performed already")
@ -83,7 +116,7 @@ try:
stderr(str(e))
bail(cloud_utils.E_SETUPFAILED,"Cloud Agent setup failed")
setup_agent_config(configfile)
setup_agent_config(configfile, host, zone, pod, cluster, guid)
stderr("Enabling and starting the Cloud Agent")
stop_service(servicename)
enable_service(servicename)
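The option table accepted by this setup script can be exercised on its own with Python's getopt module; the host and zone values below are invented for illustration:

```python
# Demonstration of the getopt option specification used by the agent
# setup script above; the argument values are made-up examples.
import getopt

argv = ["--host", "mgmt.example.com", "--zone", "ZONE1", "--no-kvm", "-a"]
opts, args = getopt.getopt(
    argv, "a", ["host=", "zone=", "pod=", "cluster=", "no-kvm", "guid="])

# Flags without a value ("--no-kvm", "-a") come back paired with "".
parsed = dict(opts)
```

This shows why the script only assigns {{{host}}}, {{{zone}}}, etc. when {{{arg != ""}}}: value-taking options always yield a string, while bare flags yield the empty string.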

View File

@ -32,6 +32,7 @@ import com.cloud.resource.ServerResource;
@Local(value={ServerResource.class})
public class DummyResource implements ServerResource {
private boolean _isRemoteAgent = false;
String _name;
Host.Type _type;
boolean _negative;
@ -101,4 +102,12 @@ public class DummyResource implements ServerResource {
public void setAgentControl(IAgentControl agentControl) {
_agentControl = agentControl;
}
public boolean IsRemoteAgent() {
return _isRemoteAgent;
}
public void setRemoteAgent(boolean remote) {
_isRemoteAgent = remote;
}
}

View File

@ -134,9 +134,9 @@ import com.cloud.agent.api.storage.CreateAnswer;
import com.cloud.agent.api.storage.CreateCommand;
import com.cloud.agent.api.storage.CreatePrivateTemplateAnswer;
import com.cloud.agent.api.storage.CreatePrivateTemplateCommand;
import com.cloud.agent.api.storage.DestroyCommand;
import com.cloud.agent.api.storage.DownloadAnswer;
import com.cloud.agent.api.storage.PrimaryStorageDownloadCommand;
import com.cloud.agent.api.to.DiskCharacteristicsTO;
import com.cloud.agent.api.to.StoragePoolTO;
import com.cloud.agent.api.to.VolumeTO;
import com.cloud.agent.resource.computing.LibvirtStoragePoolDef.poolType;
@ -160,13 +160,14 @@ import com.cloud.hypervisor.Hypervisor;
import com.cloud.network.NetworkEnums.RouterPrivateIpStrategy;
import com.cloud.resource.ServerResource;
import com.cloud.resource.ServerResourceBase;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.StorageResourceType;
import com.cloud.storage.StorageLayer;
import com.cloud.storage.StoragePoolVO;
import com.cloud.storage.Volume;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.Storage.ImageFormat;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.Volume.StorageResourceType;
import com.cloud.storage.Volume.VolumeType;
import com.cloud.storage.template.Processor;
import com.cloud.storage.template.QCOW2Processor;
@ -182,6 +183,7 @@ import com.cloud.utils.net.NetUtils;
import com.cloud.utils.script.OutputInterpreter;
import com.cloud.utils.script.Script;
import com.cloud.vm.ConsoleProxyVO;
import com.cloud.vm.DiskProfile;
import com.cloud.vm.DomainRouter;
import com.cloud.vm.State;
import com.cloud.vm.VirtualMachineName;
@ -220,6 +222,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
private String _host;
private String _dcId;
private String _pod;
private String _clusterId;
private long _hvVersion;
private final String _SSHKEYSPATH = "/root/.ssh";
private final String _SSHPRVKEYPATH = _SSHKEYSPATH + File.separator + "id_rsa.cloud";
@@ -559,7 +562,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (_pod == null) {
_pod = "default";
}
_clusterId = (String) params.get("cluster");
_createvnetPath = Script.findScript(networkScriptsDir, "createvnet.sh");
if(_createvnetPath == null) {
throw new ConfigurationException("Unable to find createvnet.sh");
@@ -1114,6 +1119,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return execute((MaintainCommand) cmd);
} else if (cmd instanceof CreateCommand) {
return execute((CreateCommand) cmd);
} else if (cmd instanceof DestroyCommand) {
return execute((DestroyCommand) cmd);
} else if (cmd instanceof PrimaryStorageDownloadCommand) {
return execute((PrimaryStorageDownloadCommand) cmd);
} else if (cmd instanceof CreatePrivateTemplateCommand) {
@@ -1161,13 +1168,13 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
protected StorageResourceType getStorageResourceType() {
return StorageResourceType.STORAGE_POOL;
protected Storage.StorageResourceType getStorageResourceType() {
return Storage.StorageResourceType.STORAGE_POOL;
}
protected Answer execute(CreateCommand cmd) {
StoragePoolTO pool = cmd.getPool();
DiskCharacteristicsTO dskch = cmd.getDiskCharacteristics();
DiskProfile dskch = cmd.getDiskCharacteristics();
StorageVol tmplVol = null;
StoragePool primaryPool = null;
StorageVol vol = null;
@@ -1187,10 +1194,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
s_logger.debug(result);
return new CreateAnswer(cmd, result);
}
vol = createVolume(primaryPool, tmplVol);
LibvirtStorageVolumeDef volDef = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), tmplVol.getInfo().capacity, volFormat.QCOW2, tmplVol.getPath(), volFormat.QCOW2);
s_logger.debug(volDef.toString());
vol = primaryPool.storageVolCreateXML(volDef.toString(), 0);
if (vol == null) {
return new Answer(cmd, false, " Can't create storage volume on storage pool");
}
@@ -1224,24 +1230,68 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
}
public Answer execute(DestroyCommand cmd) {
VolumeTO vol = cmd.getVolume();
try {
StorageVol volume = getVolume(vol.getPath());
if (volume == null) {
s_logger.debug("Failed to find the volume: " + vol.getPath());
return new Answer(cmd, true, "Success");
}
volume.delete(0);
volume.free();
} catch (LibvirtException e) {
s_logger.debug("Failed to delete volume: " + e.toString());
return new Answer(cmd, false, e.toString());
}
return new Answer(cmd, true, "Success");
}
protected ManageSnapshotAnswer execute(final ManageSnapshotCommand cmd) {
String snapshotName = cmd.getSnapshotName();
String VolPath = cmd.getVolumePath();
String snapshotPath = cmd.getSnapshotPath();
String vmName = cmd.getVmName();
try {
StorageVol vol = getVolume(VolPath);
if (vol == null) {
return new ManageSnapshotAnswer(cmd, false, null);
DomainInfo.DomainState state = null;
Domain vm = null;
if (vmName != null) {
try {
vm = getDomain(cmd.getVmName());
state = vm.getInfo().state;
} catch (LibvirtException e) {
}
}
Domain vm = getDomain(cmd.getVmName());
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
if (cmd.getCommandSwitch().equalsIgnoreCase(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
vm.snapshotCreateXML(snapshot);
if (state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING) {
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
if (cmd.getCommandSwitch().equalsIgnoreCase(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
vm.snapshotCreateXML(snapshot);
} else {
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
}
} else {
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
/* VM is not running; create the snapshot ourselves */
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
if (cmd.getCommandSwitch().equalsIgnoreCase(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
command.add("-c", VolPath);
} else {
command.add("-d", snapshotPath);
}
command.add("-n", snapshotName);
String result = command.execute();
if (result != null) {
s_logger.debug("Failed to manage snapshot: " + result);
return new ManageSnapshotAnswer(cmd, false, "Failed to manage snapshot: " + result);
}
}
} catch (LibvirtException e) {
s_logger.debug("Failed to manage snapshot: " + e.toString());
@@ -1258,28 +1308,52 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String snapshotName = cmd.getSnapshotName();
String snapshotPath = cmd.getSnapshotUuid();
String snapshotDestPath = null;
String vmName = cmd.getVmName();
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(secondaryStoragePoolURL));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-b", snapshotPath);
command.add("-n", snapshotName);
command.add("-p", snapshotDestPath);
command.add("-t", snapshotName);
String result = command.execute();
if (result != null) {
s_logger.debug("Failed to backup snapshot: " + result);
return new BackupSnapshotAnswer(cmd, false, result, null);
}
/*Delete the snapshot on primary*/
Domain vm = getDomain(cmd.getVmName());
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
DomainInfo.DomainState state = null;
Domain vm = null;
if (vmName != null) {
try {
vm = getDomain(cmd.getVmName());
state = vm.getInfo().state;
} catch (LibvirtException e) {
}
}
if (state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING) {
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
} else {
command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotPath);
command.add("-n", snapshotName);
result = command.execute();
if (result != null) {
s_logger.debug("Failed to backup snapshot: " + result);
return new BackupSnapshotAnswer(cmd, false, "Failed to backup snapshot: " + result, null);
}
}
} catch (LibvirtException e) {
return new BackupSnapshotAnswer(cmd, false, e.toString(), null);
} catch (URISyntaxException e) {
@@ -1295,7 +1369,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
String snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotDestPath);
@@ -1317,11 +1391,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
String snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotDestPath);
command.add("-n", cmd.getSnapshotName());
command.add("-f");
command.execute();
} catch (LibvirtException e) {
return new Answer(cmd, false, e.toString());
@@ -1355,7 +1430,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
secondaryPool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
/*TODO: assuming all the storage pools are mounted under _mountPoint; the mount point should be obtained from pool.dumpxml*/
String templatePath = _mountPoint + File.separator + secondaryPool.getUUIDString() + File.separator + templateInstallFolder;
String templatePath = _mountPoint + File.separator + secondaryPool.getUUIDString() + File.separator + templateInstallFolder;
_storage.mkdirs(templatePath);
String tmplPath = templateInstallFolder + File.separator + tmplFileName;
Script command = new Script(_createTmplPath, _timeout, s_logger);
command.add("-t", templatePath);
@@ -1402,38 +1479,55 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
protected CreatePrivateTemplateAnswer execute(CreatePrivateTemplateCommand cmd) {
String secondaryStorageURL = cmd.getSecondaryStorageURL();
String snapshotUUID = cmd.getSnapshotPath();
StoragePool secondaryStorage = null;
StoragePool privateTemplStorage = null;
StorageVol privateTemplateVol = null;
StorageVol snapshotVol = null;
try {
String templateFolder = cmd.getAccountId() + File.separator + cmd.getTemplateId() + File.separator;
String templateInstallFolder = "/template/tmpl/" + templateFolder;
secondaryStorage = getNfsSPbyURI(_conn, new URI(secondaryStorageURL));
/*TODO: assuming all the storage pools are mounted under _mountPoint; the mount point should be obtained from pool.dumpxml*/
String mountPath = _mountPoint + File.separator + secondaryStorage.getUUIDString() + templateInstallFolder;
File mpfile = new File(mountPath);
if (!mpfile.exists()) {
mpfile.mkdir();
String tmpltPath = _mountPoint + File.separator + secondaryStorage.getUUIDString() + templateInstallFolder;
_storage.mkdirs(tmpltPath);
Script command = new Script(_createTmplPath, _timeout, s_logger);
command.add("-f", cmd.getSnapshotPath());
command.add("-c", cmd.getSnapshotName());
command.add("-t", tmpltPath);
command.add("-n", cmd.getUniqueName() + ".qcow2");
command.add("-s");
String result = command.execute();
if (result != null) {
s_logger.debug("failed to create template: " + result);
return new CreatePrivateTemplateAnswer(cmd,
false,
result,
null,
0,
null,
null);
}
// Create a SR for the secondary storage installation folder
privateTemplStorage = getNfsSPbyURI(_conn, new URI(secondaryStorageURL + templateInstallFolder));
snapshotVol = getVolume(snapshotUUID);
LibvirtStorageVolumeDef vol = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), snapshotVol.getInfo().capacity, volFormat.QCOW2, null, null);
s_logger.debug(vol.toString());
privateTemplateVol = copyVolume(privateTemplStorage, vol, snapshotVol);
Map<String, Object> params = new HashMap<String, Object>();
params.put(StorageLayer.InstanceConfigKey, _storage);
Processor qcow2Processor = new QCOW2Processor();
qcow2Processor.configure("QCOW2 Processor", params);
FormatInfo info = qcow2Processor.process(tmpltPath, null, cmd.getUniqueName());
TemplateLocation loc = new TemplateLocation(_storage, tmpltPath);
loc.create(1, true, cmd.getUniqueName());
loc.addFormat(info);
loc.save();
return new CreatePrivateTemplateAnswer(cmd,
true,
null,
templateInstallFolder + privateTemplateVol.getName(),
privateTemplateVol.getInfo().capacity/1024*1024, /*in Mega unit*/
privateTemplateVol.getName(),
templateInstallFolder + cmd.getUniqueName() + ".qcow2",
info.virtualSize,
cmd.getUniqueName(),
ImageFormat.QCOW2);
} catch (URISyntaxException e) {
return new CreatePrivateTemplateAnswer(cmd,
@@ -1452,7 +1546,31 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
0,
null,
null);
}
} catch (InternalErrorException e) {
return new CreatePrivateTemplateAnswer(cmd,
false,
e.toString(),
null,
0,
null,
null);
} catch (IOException e) {
return new CreatePrivateTemplateAnswer(cmd,
false,
e.toString(),
null,
0,
null,
null);
} catch (ConfigurationException e) {
return new CreatePrivateTemplateAnswer(cmd,
false,
e.toString(),
null,
0,
null,
null);
}
}
private StoragePool getNfsSPbyURI(Connect conn, URI uri) throws LibvirtException {
@@ -1469,10 +1587,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (sp == null) {
try {
File tpFile = new File(targetPath);
if (!tpFile.exists()) {
tpFile.mkdir();
}
_storage.mkdir(targetPath);
LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, uuid, uuid,
sourceHost, sourcePath, targetPath);
s_logger.debug(spd.toString());
@@ -1582,10 +1697,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String targetPath = _mountPoint + File.separator + pool.getUuid();
LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, pool.getUuid(), pool.getUuid(),
pool.getHostAddress(), pool.getPath(), targetPath);
File tpFile = new File(targetPath);
if (!tpFile.exists()) {
tpFile.mkdir();
}
_storage.mkdir(targetPath);
StoragePool sp = null;
try {
s_logger.debug(spd.toString());
@@ -2359,7 +2471,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
Iterator<Map.Entry<String, String>> itr = entrySet.iterator();
while (itr.hasNext()) {
Map.Entry<String, String> entry = itr.next();
if (entry.getValue().equalsIgnoreCase(sourceFile)) {
if ((entry.getValue() != null) && (entry.getValue().equalsIgnoreCase(sourceFile))) {
diskDev = entry.getKey();
break;
}
@@ -2449,6 +2561,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
fillNetworkInformation(cmd);
cmd.getHostDetails().putAll(getVersionStrings());
cmd.setPool(_pool);
cmd.setCluster(_clusterId);
return new StartupCommand[]{cmd};
}
@@ -2940,9 +3053,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
private String getHypervisorPath() {
File f =new File("/usr/bin/cloud-qemu-kvm");
File f =new File("/usr/bin/cloud-qemu-system-x86_64");
if (f.exists()) {
return "/usr/bin/cloud-qemu-kvm";
return "/usr/bin/cloud-qemu-system-x86_64";
} else {
if (_conn == null)
return null;
@@ -3096,7 +3209,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
brName = setVnetBrName(vnetId);
String vnetDev = "vtap" + vnetId;
createVnet(vnetId, _pifs.first());
vnetNic.defBridgeNet(brName, vnetDev, guestMac, interfaceDef.nicModel.VIRTIO);
vnetNic.defBridgeNet(brName, null, guestMac, interfaceDef.nicModel.VIRTIO);
}
nics.add(vnetNic);
@@ -3112,7 +3225,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
brName = setVnetBrName(vnetId);
String vnetDev = "vtap" + vnetId;
createVnet(vnetId, _pifs.second());
pubNic.defBridgeNet(brName, vnetDev, pubMac, interfaceDef.nicModel.VIRTIO);
pubNic.defBridgeNet(brName, null, pubMac, interfaceDef.nicModel.VIRTIO);
}
nics.add(pubNic);
return nics;
@@ -3164,7 +3277,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String datadiskPath = tmplVol.getKey();
diskDef hda = new diskDef();
hda.defFileBasedDisk(rootkPath, "vda", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
hda.defFileBasedDisk(rootkPath, "hda", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
disks.add(hda);
diskDef hdb = new diskDef();
@@ -3246,7 +3359,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
File logPath = new File("/var/run/cloud");
if (!logPath.exists()) {
logPath.mkdir();
logPath.mkdirs();
}
cleanup_rules_for_dead_vms();
@@ -3500,6 +3613,22 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
private StorageVol createVolume(StoragePool destPool, StorageVol tmplVol) throws LibvirtException {
if (isCentosHost()) {
LibvirtStorageVolumeDef volDef = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), tmplVol.getInfo().capacity, volFormat.QCOW2, null, null);
s_logger.debug(volDef.toString());
StorageVol vol = destPool.storageVolCreateXML(volDef.toString(), 0);
/*create qcow2 image based on the name*/
Script.runSimpleBashScript("qemu-img create -f qcow2 -b " + tmplVol.getPath() + " " + vol.getPath() );
return vol;
} else {
LibvirtStorageVolumeDef volDef = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), tmplVol.getInfo().capacity, volFormat.QCOW2, tmplVol.getPath(), volFormat.QCOW2);
s_logger.debug(volDef.toString());
return destPool.storageVolCreateXML(volDef.toString(), 0);
}
}
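The createVolume change above special-cases older (CentOS) hosts: instead of declaring a QCOW2 backing store in the libvirt volume XML, it creates an empty volume and then rebuilds it with qemu-img. A minimal sketch of how that fallback command line is assembled (the paths below are hypothetical examples, not CloudStack values):

```java
// Sketch: assembling the qemu-img fallback command used when libvirt
// cannot express a QCOW2 backing store in the volume XML itself.
public class QemuImgCommandSketch {
    /** Builds "qemu-img create -f qcow2 -b <template> <volume>". */
    static String backingCopyCommand(String templatePath, String volumePath) {
        return "qemu-img create -f qcow2 -b " + templatePath + " " + volumePath;
    }

    public static void main(String[] args) {
        // Hypothetical paths for illustration only.
        System.out.println(backingCopyCommand(
                "/mnt/primary/tmpl.qcow2", "/mnt/primary/vol.qcow2"));
    }
}
```

The resulting volume shares the template as a copy-on-write backing file, which is what the libvirt-based branch achieves through the volume definition instead.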
private StorageVol getVolume(StoragePool pool, String volKey) {
StorageVol vol = null;
try {

@@ -94,6 +94,8 @@ public class LibvirtDomainXMLParser extends LibvirtXMLParser {
} else if (qName.equalsIgnoreCase("disk")) {
diskMaps.put(diskDev, diskFile);
_disk = false;
diskFile = null;
diskDev = null;
} else if (qName.equalsIgnoreCase("description")) {
_desc = false;
}

@@ -28,8 +28,8 @@ import org.apache.log4j.Logger;
import com.cloud.resource.DiskPreparer;
import com.cloud.storage.Volume;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.VirtualMachineTemplate.BootloaderType;
import com.cloud.storage.Volume.VolumeType;
import com.cloud.template.VirtualMachineTemplate.BootloaderType;
import com.cloud.utils.NumbersUtil;
import com.cloud.utils.script.Script;

agent/wscript_build (new file)
@@ -0,0 +1,7 @@
import Options
bld.install_files("${AGENTLIBDIR}",
bld.path.ant_glob("storagepatch/**",src=True,bld=False,dir=False,flat=True),
cwd=bld.path,relative_trick=True)
if not Options.options.PRESERVECONFIG:
bld.install_files_filtered("${AGENTSYSCONFDIR}","conf/*")

@@ -0,0 +1,66 @@
/**
*
*/
package com.cloud.acl;
import java.security.acl.NotOwnerException;
import com.cloud.domain.PartOf;
import com.cloud.exception.PermissionDeniedException;
import com.cloud.user.Account;
import com.cloud.user.OwnedBy;
import com.cloud.user.User;
import com.cloud.utils.component.Adapter;
/**
* SecurityChecker checks the ownership and access control to objects within
* the management stack for users and accounts.
*/
public interface SecurityChecker extends Adapter {
/**
* Checks if the account owns the object.
*
* @param account account to check against.
* @param object object that the account is trying to access.
* @return true if access allowed. false if this adapter cannot authenticate ownership.
* @throws NotOwnerException if this adapter is supposed to authenticate ownership and the check fails.
*/
boolean checkOwnership(Account account, OwnedBy object) throws NotOwnerException;
/**
* Checks if the user belongs to an account that owns the object.
*
* @param user user to check against.
* @param object object that the account is trying to access.
* @return true if access allowed. false if this adapter cannot authenticate ownership.
* @throws NotOwnerException if this adapter is supposed to authenticate ownership and the check fails.
*/
boolean checkOwnership(User user, OwnedBy object) throws NotOwnerException;
/**
* Checks if the account can access the object.
*
* @param account account to check against.
* @param object object that the account is trying to access.
* @return true if access allowed. false if this adapter cannot provide permission.
* @throws PermissionDeniedException if this adapter is supposed to check access and the check fails.
*/
boolean checkAccess(Account account, PartOf object) throws PermissionDeniedException;
/**
* Checks if the user belongs to an account that can access the object.
*
* @param user user to check against.
* @param object object that the account is trying to access.
* @return true if access allowed. false if this adapter cannot authenticate ownership.
* @throws PermissionDeniedException if this adapter is supposed to check access and the check fails.
*/
boolean checkAccess(User user, PartOf object) throws PermissionDeniedException;
// We should be able to use this method to check against commands. For example, we can
// annotate the command with access annotations and this method can use it to extract
// OwnedBy and PartOf interfaces on the object and use it to verify against a user.
// I leave this empty for now so Kris and the API team can see if it is useful.
// boolean checkAuthorization(User user, Command cmd) throws PermissionDeniedException;
}
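The adapter contract above can be illustrated with a tiny in-memory ownership check. The types here are simplified stand-ins for illustration, not the real classes from com.cloud.acl and com.cloud.user:

```java
// Minimal sketch of the SecurityChecker ownership idea using a
// stand-in OwnedBy interface (the real one carries more context).
interface OwnedBy {
    long getAccountId();
}

class SimpleOwnershipChecker {
    /** Returns true when the account id matches the object's owner. */
    boolean checkOwnership(long accountId, OwnedBy object) {
        return accountId == object.getAccountId();
    }
}

public class SecurityCheckerSketch {
    public static void main(String[] args) {
        OwnedBy volume = () -> 42L; // an object owned by account 42
        SimpleOwnershipChecker checker = new SimpleOwnershipChecker();
        System.out.println(checker.checkOwnership(42L, volume)); // true
        System.out.println(checker.checkOwnership(7L, volume));  // false
    }
}
```

In the real interface, a chain of such adapters is consulted: each may either authenticate ownership, decline (return false), or veto with an exception.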

@@ -0,0 +1,13 @@
/**
*
*/
package com.cloud.dc;
import com.cloud.org.Grouping;
/**
*
*/
public interface DataCenter extends Grouping {
long getId();
}

@@ -0,0 +1,27 @@
/**
*
*/
package com.cloud.dc;
import com.cloud.org.Grouping;
/**
* Represents one pod in the cloud stack.
*
*/
public interface Pod extends Grouping {
/**
* @return unique id mapped to the pod.
*/
long getId();
String getCidrAddress();
int getCidrSize();
String getGateway();
long getDataCenterId();
//String getUniqueName();
}

@@ -17,13 +17,21 @@
*/
package com.cloud.deploy;
public class DataCenterDeployment implements DeploymentStrategy {
public class DataCenterDeployment implements DeploymentPlan {
long _dcId;
public DataCenterDeployment(long dataCenterId) {
int _count;
public DataCenterDeployment(long dataCenterId, int count) {
_dcId = dataCenterId;
_count = count;
}
@Override
public long getDataCenterId() {
return _dcId;
}
@Override
public int getCount() {
return _count;
}
}

@@ -0,0 +1,84 @@
/**
*
*/
package com.cloud.deploy;
import java.util.Map;
import com.cloud.dc.DataCenter;
import com.cloud.dc.Pod;
import com.cloud.host.Host;
import com.cloud.org.Cluster;
import com.cloud.storage.StoragePool;
import com.cloud.storage.Volume;
import com.cloud.utils.NumbersUtil;
public class DeployDestination {
DataCenter _dc;
Pod _pod;
Cluster _cluster;
Host _host;
Map<Volume, StoragePool> _storage;
public DataCenter getDataCenter() {
return _dc;
}
public Pod getPod() {
return _pod;
}
public Cluster getCluster() {
return _cluster;
}
public Host getHost() {
return _host;
}
public Map<Volume, StoragePool> getStorageForDisks() {
return _storage;
}
public DeployDestination(DataCenter dc, Pod pod, Cluster cluster, Host host) {
_dc = dc;
_pod = pod;
_cluster = cluster;
_host = host;
}
public DeployDestination() {
}
@Override
public int hashCode() {
return NumbersUtil.hash(_host.getId());
}
@Override
public boolean equals(Object obj) {
DeployDestination that = (DeployDestination)obj;
if (this._dc == null || that._dc == null) {
return false;
}
if (this._dc.getId() != that._dc.getId()) {
return false;
}
if (this._pod == null || that._pod == null) {
return false;
}
if (this._pod.getId() != that._pod.getId()) {
return false;
}
if (this._cluster == null || that._cluster == null) {
return false;
}
if (this._cluster.getId() != that._cluster.getId()) {
return false;
}
if (this._host == null || that._host == null) {
return false;
}
return this._host.getId() == that._host.getId();
}
}

@@ -21,6 +21,7 @@ package com.cloud.deploy;
* Describes how a VM should be deployed.
*
*/
public interface DeploymentStrategy {
public interface DeploymentPlan {
public long getDataCenterId();
public int getCount();
}

@@ -0,0 +1,13 @@
/**
*
*/
package com.cloud.deploy;
import java.util.Set;
import com.cloud.utils.component.Adapter;
import com.cloud.vm.VirtualMachineProfile;
public interface DeploymentPlanner extends Adapter {
DeployDestination plan(VirtualMachineProfile vm, DeploymentPlan plan, Set<DeployDestination> avoid);
}

@@ -0,0 +1,37 @@
/**
*
*/
package com.cloud.domain;
import java.util.Date;
import com.cloud.user.OwnedBy;
/**
* Domain defines the Domain object.
*/
public interface Domain extends OwnedBy {
public static final long ROOT_DOMAIN = 1L;
long getId();
Long getParent();
void setParent(Long parent);
String getName();
void setName(String name);
Date getRemoved();
String getPath();
void setPath(String path);
int getLevel();
int getChildCount();
long getNextChildSeq();
}

@@ -0,0 +1,15 @@
/**
*
*/
package com.cloud.domain;
/**
* PartOf must be implemented by all objects that belong
* in a domain.
*/
public interface PartOf {
/**
* @return domain id that the object belongs to.
*/
long getDomainId();
}

@@ -0,0 +1,15 @@
/**
*
*/
package com.cloud.exception;
import com.cloud.utils.SerialVersionUID;
public class ConflictingNetworkSettingsException extends Exception {
private static final long serialVersionUID = SerialVersionUID.ConflictingNetworkSettingException;
public ConflictingNetworkSettingsException() {
super();
}
}

@@ -19,7 +19,9 @@ package com.cloud.host;
import java.util.Date;
import com.cloud.host.Status;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.hypervisor.Hypervisor.Type;
/**

@@ -23,7 +23,9 @@ public class Hypervisor {
None, //for storage hosts
Xen,
XenServer,
KVM;
KVM,
VmWare,
VirtualBox,
Parralels;
}
}

@@ -17,6 +17,11 @@
*/
package com.cloud.network;
import java.net.URI;
import java.net.URISyntaxException;
import com.cloud.utils.exception.CloudRuntimeException;
/**
* Network includes all of the enums used within networking.
*
@@ -34,17 +39,50 @@ public class Network {
public enum AddressFormat {
Ip4,
Ip6
Ip6,
Mixed
}
/**
* Different types of broadcast domains.
*/
public enum BroadcastDomainType {
Native,
Vlan,
Vswitch,
Vnet;
Native(null, null),
Vlan("vlan", Integer.class),
Vswitch("vs", String.class),
LinkLocal(null, null),
Vnet("vnet", Long.class),
UnDecided(null, null);
private String scheme;
private Class<?> type;
private BroadcastDomainType(String scheme, Class<?> type) {
this.scheme = scheme;
this.type = type;
}
/**
* @return scheme to be used in broadcast uri. Null indicates that this type does not have broadcast tags.
*/
public String scheme() {
return scheme;
}
/**
* @return type of the value in the broadcast uri. Null indicates that this type does not have broadcast tags.
*/
public Class<?> type() {
return type;
}
public <T> URI toUri(T value) {
try {
return new URI(scheme + "://" + value);
} catch (URISyntaxException e) {
throw new CloudRuntimeException("Unable to convert to broadcast URI: " + value);
}
}
};
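The scheme-tagged enum above lets each broadcast domain type encode its value as a URI, e.g. VLAN 101 becomes "vlan://101". A self-contained re-creation of that pattern (simplified, not the real enum):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Sketch of the scheme-per-type URI pattern used by BroadcastDomainType.
enum BroadcastType {
    Vlan("vlan"),
    Vnet("vnet");

    private final String scheme;

    BroadcastType(String scheme) {
        this.scheme = scheme;
    }

    /** Encodes a broadcast domain value as "<scheme>://<value>". */
    <T> URI toUri(T value) {
        try {
            return new URI(scheme + "://" + value);
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException("Bad broadcast value: " + value, e);
        }
    }
}

public class BroadcastUriSketch {
    public static void main(String[] args) {
        System.out.println(BroadcastType.Vlan.toUri(101));   // vlan://101
        System.out.println(BroadcastType.Vnet.toUri(2048L)); // vnet://2048
    }
}
```

Keeping the scheme on the enum means the URI format stays in one place; types with a null scheme (Native, LinkLocal, UnDecided) simply have no broadcast tag.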
/**
@@ -54,8 +92,57 @@ public class Network {
Public,
Guest,
Storage,
LinkLocal,
Control,
Vpn,
Management
};
public enum IsolationType {
None(null, null),
Ec2("ec2", String.class),
Vlan("vlan", Integer.class),
Vswitch("vs", String.class),
Undecided(null, null),
Vnet("vnet", Long.class);
private final String scheme;
private final Class<?> type;
private IsolationType(String scheme, Class<?> type) {
this.scheme = scheme;
this.type = type;
}
public String scheme() {
return scheme;
}
public Class<?> type() {
return type;
}
public <T> URI toUri(T value) {
try {
return new URI(scheme + "://" + value.toString());
} catch (URISyntaxException e) {
throw new CloudRuntimeException("Unable to convert to isolation type URI: " + value);
}
}
}
public enum BroadcastScheme {
Vlan("vlan"),
VSwitch("vswitch");
private String scheme;
private BroadcastScheme(String scheme) {
this.scheme = scheme;
}
@Override
public String toString() {
return scheme;
}
}
}

@@ -0,0 +1,75 @@
/**
*
*/
package com.cloud.network;
import java.util.List;
import java.util.Set;
import com.cloud.network.Network.BroadcastDomainType;
import com.cloud.network.Network.Mode;
import com.cloud.network.Network.TrafficType;
import com.cloud.utils.fsm.FiniteState;
import com.cloud.utils.fsm.StateMachine;
/**
* A NetworkConfiguration defines the specifics of a network
* owned by an account.
*/
public interface NetworkConfiguration {
enum Event {
ImplementNetwork,
DestroyNetwork;
}
enum State implements FiniteState<State, Event> {
Allocated, // Indicates the network configuration is allocated but not yet set up.
Setup, // Indicates the network configuration is setup.
Implemented, // Indicates the network configuration is in use.
Destroying;
@Override
public StateMachine<State, Event> getStateMachine() {
return s_fsm;
}
@Override
public State getNextState(Event event) {
return s_fsm.getNextState(this, event);
}
@Override
public List<State> getFromStates(Event event) {
return s_fsm.getFromStates(this, event);
}
@Override
public Set<Event> getPossibleEvents() {
return s_fsm.getPossibleEvents(this);
}
private static StateMachine<State, Event> s_fsm = new StateMachine<State, Event>();
}
/**
* @return id of the network profile. Null means the network profile is not from the database.
*/
Long getId();
Mode getMode();
BroadcastDomainType getBroadcastDomainType();
TrafficType getTrafficType();
String getGateway();
String getCidr();
long getDataCenterId();
long getNetworkOfferingId();
State getState();
}
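The State enum above delegates transitions to a shared StateMachine. That utility is not reproduced here, but the idea reduces to a transition table keyed by (state, event); the transitions below are assumed for illustration and only cover the two events declared in the interface:

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch of the state/event transition idea behind NetworkConfiguration.State.
public class NetworkStateSketch {
    enum Event { ImplementNetwork, DestroyNetwork }
    enum State { Allocated, Setup, Implemented, Destroying }

    // Transition table: current state -> (event -> next state).
    static final Map<State, Map<Event, State>> TRANSITIONS = new EnumMap<>(State.class);
    static {
        TRANSITIONS.put(State.Setup, new EnumMap<>(Event.class));
        TRANSITIONS.get(State.Setup).put(Event.ImplementNetwork, State.Implemented);
        TRANSITIONS.put(State.Implemented, new EnumMap<>(Event.class));
        TRANSITIONS.get(State.Implemented).put(Event.DestroyNetwork, State.Destroying);
    }

    /** Returns the next state, or throws if the event is invalid here. */
    static State next(State s, Event e) {
        State n = TRANSITIONS.getOrDefault(s, Map.of()).get(e);
        if (n == null) {
            throw new IllegalStateException(e + " is not valid in state " + s);
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(next(State.Setup, Event.ImplementNetwork));     // Implemented
        System.out.println(next(State.Implemented, Event.DestroyNetwork)); // Destroying
    }
}
```

Centralizing transitions this way is what lets FiniteState expose getNextState and getPossibleEvents generically instead of hand-coding checks at every call site.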

@@ -0,0 +1,37 @@
/**
*
*/
package com.cloud.network.configuration;
import com.cloud.deploy.DeployDestination;
import com.cloud.deploy.DeploymentPlan;
import com.cloud.exception.InsufficientAddressCapacityException;
import com.cloud.exception.InsufficientVirtualNetworkCapcityException;
import com.cloud.network.NetworkConfiguration;
import com.cloud.offering.NetworkOffering;
import com.cloud.user.Account;
import com.cloud.utils.component.Adapter;
import com.cloud.vm.NicProfile;
import com.cloud.vm.VirtualMachineProfile;
/**
* NetworkGuru takes a requested network offering and figures out
* the correct network configuration the account needs in order
* to support this network.
*
*/
public interface NetworkGuru extends Adapter {
NetworkConfiguration design(NetworkOffering offering, DeploymentPlan plan, NetworkConfiguration userSpecified, Account owner);
NetworkConfiguration implement(NetworkConfiguration config, NetworkOffering offering, DeployDestination destination);
NicProfile allocate(NetworkConfiguration config, NicProfile nic, VirtualMachineProfile vm) throws InsufficientVirtualNetworkCapcityException, InsufficientAddressCapacityException;
// NicProfile create(NicProfile nic, NetworkConfiguration config, VirtualMachineProfile vm, DeployDestination dest) throws InsufficientVirtualNetworkCapcityException, InsufficientAddressCapacityException;
String reserve(NicProfile nic, NetworkConfiguration config, VirtualMachineProfile vm, DeployDestination dest) throws InsufficientVirtualNetworkCapcityException, InsufficientAddressCapacityException;
boolean release(String uniqueId);
void destroy(NetworkConfiguration config, NetworkOffering offering);
}

@@ -0,0 +1,36 @@
/**
*
*/
package com.cloud.network.element;
import com.cloud.network.NetworkConfiguration;
import com.cloud.offering.NetworkOffering;
import com.cloud.utils.component.Adapter;
import com.cloud.vm.NicProfile;
import com.cloud.vm.VirtualMachineProfile;
/**
* Represents one network element that exists in a network.
*/
public interface NetworkElement extends Adapter {
/**
* Implement the network configuration as specified.
* @param config fully specified network configuration.
* @param offering network offering that originated the network configuration.
* @return true if network configuration is now usable; false if not.
*/
boolean implement(NetworkConfiguration config, NetworkOffering offering);
/**
* Prepare the nic profile to be used within the network.
* @param config
* @param nic
* @param offering
* @return
*/
boolean prepare(NetworkConfiguration config, NicProfile nic, VirtualMachineProfile vm, NetworkOffering offering);
boolean release(NetworkConfiguration config, NicProfile nic, VirtualMachineProfile vm, NetworkOffering offering);
boolean shutdown(NetworkConfiguration config, NetworkOffering offering);
}


@ -17,18 +17,20 @@
*/
package com.cloud.offering;
import com.cloud.network.Network.TrafficType;
/**
* Describes network offering
*
*/
public interface NetworkOffering {
public enum GuestIpType {
Virtualized,
DirectSingle,
DirectDual
}
public enum GuestIpType {
Virtualized,
DirectSingle,
DirectDual
}
long getId();
/**
@ -60,4 +62,6 @@ public interface NetworkOffering {
* @return concurrent connections to be supported.
*/
Integer getConcurrentConnections();
TrafficType getTrafficType();
}


@ -22,13 +22,7 @@ package com.cloud.offering;
* offered.
*/
public interface ServiceOffering {
public enum GuestIpType {
Virtualized,
DirectSingle,
DirectDual
}
/**
/**
* @return user readable description
*/
String getName();
@ -66,7 +60,7 @@ public interface ServiceOffering {
/**
* @return the type of IP address to allocate as the primary ip address to a guest
*/
GuestIpType getGuestIpType();
NetworkOffering.GuestIpType getGuestIpType();
/**
* @return whether or not the service offering requires local storage


@ -0,0 +1,9 @@
/**
*
*/
package com.cloud.org;
public interface Cluster extends Grouping {
long getId();
}


@ -0,0 +1,8 @@
/**
*
*/
package com.cloud.org;
public interface Grouping {
}


@ -0,0 +1,11 @@
/**
*
*/
package com.cloud.org;
public interface RunningIn {
long getDataCenterId();
long getPodId();
Long getClusterId();
Long getHostId();
}


@ -0,0 +1,10 @@
/**
*
*/
package com.cloud.resource;
import com.cloud.utils.component.Adapter;
public interface Concierge<T extends Resource> extends Adapter {
}


@ -0,0 +1,111 @@
/**
*
*/
package com.cloud.resource;
import java.util.Date;
import java.util.List;
import java.util.Set;
import com.cloud.utils.fsm.FiniteState;
import com.cloud.utils.fsm.StateMachine;
/**
* Indicates a resource in CloudStack.
* Any resource that requires a reservation and release system
* must implement this interface.
*
*/
public interface Resource {
enum Event {
ReservationRequested,
ReleaseRequested,
CancelRequested,
OperationCompleted,
OperationFailed,
}
enum State implements FiniteState<State, Event> {
Allocated, // Resource is allocated
Reserving, // Resource is being reserved right now.
Reserved, // Resource is reserved
Releasing, // Resource is being released.
Ready; // Resource is ready which means it does not need to go through reservation.
@Override
public StateMachine<State, Event> getStateMachine() {
// TODO Auto-generated method stub
return null;
}
@Override
public State getNextState(Event event) {
return s_fsm.getNextState(this, event);
}
@Override
public List<State> getFromStates(Event event) {
return s_fsm.getFromStates(this, event);
}
@Override
public Set<Event> getPossibleEvents() {
return s_fsm.getPossibleEvents(this);
}
final static private StateMachine<State, Event> s_fsm = new StateMachine<State, Event>();
static {
s_fsm.addTransition(State.Allocated, Event.ReservationRequested, State.Reserving);
s_fsm.addTransition(State.Reserving, Event.CancelRequested, State.Allocated);
s_fsm.addTransition(State.Reserving, Event.OperationCompleted, State.Reserved);
s_fsm.addTransition(State.Reserving, Event.OperationFailed, State.Allocated);
}
}
enum ReservationStrategy {
UserSpecified,
Create,
Start
}
/**
* @return id in the CloudStack database
*/
long getId();
/**
* @return reservation id returned by the allocation source. This can be the
* String version of the database id if the allocation source does not need its
* own implementation of the reservation id. This is passed back to the
* allocation source to release the resource.
*/
String getReservationId();
/**
* @return unique name for the allocation source.
*/
String getReserver();
/**
* @return the time a reservation request was made to the allocation source.
*/
Date getUpdateTime();
/**
* @return the expected reservation interval. -1 indicates no expected interval.
*/
int getExpectedReservationInterval();
/**
* @return the expected release interval.
*/
int getExpectedReleaseInterval();
/**
* @return the reservation state of the resource.
*/
State getState();
ReservationStrategy getReservationStrategy();
}
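The static initializer in the Resource interface above registers four reservation transitions (Allocated → Reserving on a reservation request, then back to Allocated on cancel or failure, or forward to Reserved on completion). As a rough illustration, the following self-contained sketch replays those transitions with a hypothetical minimal stand-in for `com.cloud.utils.fsm.StateMachine` (the real API is not reproduced here):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for com.cloud.utils.fsm.StateMachine, just enough to
// replay the reservation transitions registered in the diff above.
public class ReservationFsmSketch {
    public enum Event { ReservationRequested, ReleaseRequested, CancelRequested, OperationCompleted, OperationFailed }
    public enum State { Allocated, Reserving, Reserved, Releasing, Ready }

    public static class StateMachine {
        private final Map<State, Map<Event, State>> transitions = new HashMap<>();
        public void addTransition(State from, Event on, State to) {
            transitions.computeIfAbsent(from, k -> new HashMap<>()).put(on, to);
        }
        public State getNextState(State current, Event event) {
            Map<Event, State> row = transitions.get(current);
            return row == null ? null : row.get(event); // null = no legal transition
        }
    }

    public static final StateMachine FSM = new StateMachine();
    static {
        // Same four transitions as the Resource.State static block above.
        FSM.addTransition(State.Allocated, Event.ReservationRequested, State.Reserving);
        FSM.addTransition(State.Reserving, Event.CancelRequested, State.Allocated);
        FSM.addTransition(State.Reserving, Event.OperationCompleted, State.Reserved);
        FSM.addTransition(State.Reserving, Event.OperationFailed, State.Allocated);
    }

    public static void main(String[] args) {
        State s = FSM.getNextState(State.Allocated, Event.ReservationRequested);
        s = FSM.getNextState(s, Event.OperationCompleted);
        System.out.println(s); // Reserved
    }
}
```

Note that Ready never appears as a transition target: per the comment in the diff, a Ready resource skips reservation entirely.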


@ -18,20 +18,30 @@
package com.cloud.storage;
public class Storage {
public enum ImageFormat {
public static enum ImageFormat {
QCOW2(true, true, false),
RAW(false, false, false),
VHD(true, true, true),
ISO(false, false, false);
ISO(false, false, false),
VMDK(true, true, true, "vmw.tar");
private final boolean thinProvisioned;
private final boolean supportSparse;
private final boolean supportSnapshot;
private final String fileExtension;
private ImageFormat(boolean thinProvisioned, boolean supportSparse, boolean supportSnapshot) {
this.thinProvisioned = thinProvisioned;
this.supportSparse = supportSparse;
this.supportSnapshot = supportSnapshot;
fileExtension = null;
}
private ImageFormat(boolean thinProvisioned, boolean supportSparse, boolean supportSnapshot, String fileExtension) {
this.thinProvisioned = thinProvisioned;
this.supportSparse = supportSparse;
this.supportSnapshot = supportSnapshot;
this.fileExtension = fileExtension;
}
public boolean isThinProvisioned() {
@ -47,11 +57,14 @@ public class Storage {
}
public String getFileExtension() {
return toString().toLowerCase();
if(fileExtension == null)
return toString().toLowerCase();
return fileExtension;
}
}
public enum FileSystem {
public static enum FileSystem {
Unknown,
ext3,
ntfs,
@ -66,7 +79,7 @@ public class Storage {
hfsp
}
public enum StoragePoolType {
public static enum StoragePoolType {
Filesystem(false), //local directory
NetworkFilesystem(true), //NFS or CIFS
IscsiLUN(true), //shared LUN, with a clusterfs overlay
@ -84,4 +97,6 @@ public class Storage {
return shared;
}
}
public static enum StorageResourceType {STORAGE_POOL, STORAGE_HOST, SECONDARY_STORAGE}
}
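The ImageFormat hunk above adds a per-format file-extension override: VMDK carries an explicit "vmw.tar" extension while every other format falls back to the lowercased enum name. A minimal standalone sketch of just that logic (the provisioning flags are omitted; ImageFormatSketch is a hypothetical wrapper, not CloudStack code):

```java
// Sketch of the fileExtension fallback added to Storage.ImageFormat above.
// Only the extension handling is reproduced; thinProvisioned, supportSparse
// and supportSnapshot are left out for brevity.
public class ImageFormatSketch {
    public enum ImageFormat {
        QCOW2, RAW, VHD, ISO,
        VMDK("vmw.tar");   // VMDK overrides the default extension

        private final String fileExtension;

        ImageFormat() { this.fileExtension = null; }
        ImageFormat(String fileExtension) { this.fileExtension = fileExtension; }

        public String getFileExtension() {
            // Fall back to the lowercased enum name when no override is set.
            return fileExtension == null ? toString().toLowerCase() : fileExtension;
        }
    }

    public static void main(String[] args) {
        System.out.println(ImageFormat.QCOW2.getFileExtension()); // qcow2
        System.out.println(ImageFormat.VMDK.getFileExtension());  // vmw.tar
    }
}
```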


@ -17,30 +17,38 @@
*/
package com.cloud.storage;
import com.cloud.async.AsyncInstanceCreateStatus;
import java.util.Date;
public interface Volume {
enum VolumeType {UNKNOWN, ROOT, SWAP, DATADISK};
import com.cloud.domain.PartOf;
import com.cloud.template.BasedOn;
import com.cloud.user.OwnedBy;
public interface Volume extends PartOf, OwnedBy, BasedOn {
enum VolumeType {UNKNOWN, ROOT, SWAP, DATADISK, ISO};
enum MirrorState {NOT_MIRRORED, ACTIVE, DEFUNCT};
enum StorageResourceType {STORAGE_POOL, STORAGE_HOST, SECONDARY_STORAGE};
/**
enum State {
Allocated,
Creating,
Created,
Corrupted,
ToBeDestroyed,
Expunging,
Destroyed
}
enum SourceType {
Snapshot,DiskOffering,Template,Blank
}
long getId();
/**
* @return the volume name
*/
String getName();
/**
* @return owner's account id
*/
long getAccountId();
/**
* @return id of the owning account's domain
*/
long getDomainId();
/**
* @return total size of the partition
*/
@ -69,12 +77,21 @@ public interface Volume {
VolumeType getVolumeType();
StorageResourceType getStorageResourceType();
Storage.StorageResourceType getStorageResourceType();
Long getPoolId();
public AsyncInstanceCreateStatus getStatus();
State getState();
public void setStatus(AsyncInstanceCreateStatus status);
SourceType getSourceType();
void setSourceType(SourceType sourceType);
void setSourceId(Long sourceId);
Long getSourceId();
Date getAttached();
void setAttached(Date attached);
}


@ -0,0 +1,16 @@
/**
*
*/
package com.cloud.template;
/**
* BasedOn is implemented by all objects that are based on a certain template.
*/
public interface BasedOn {
/**
* @return the template id that the volume is based on.
*/
Long getTemplateId();
}


@ -15,14 +15,13 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.storage;
package com.cloud.template;
import com.cloud.async.AsyncInstanceCreateStatus;
import com.cloud.storage.Storage.FileSystem;
public interface VirtualMachineTemplate {
public interface VirtualMachineTemplate {
public static enum BootloaderType { PyGrub, HVM, External };
public static enum BootloaderType { PyGrub, HVM, External, CD };
/**
* @return id.


@ -20,7 +20,22 @@ package com.cloud.user;
import java.util.Date;
public interface Account {
import com.cloud.domain.PartOf;
public interface Account extends PartOf {
public enum Type {
Normal,
Admin,
DomainAdmin,
CustomerCare
}
public enum State {
Disabled,
Enabled,
Locked
}
public static final short ACCOUNT_TYPE_NORMAL = 0;
public static final short ACCOUNT_TYPE_ADMIN = 1;
public static final short ACCOUNT_TYPE_DOMAIN_ADMIN = 2;
@ -32,14 +47,12 @@ public interface Account {
public static final long ACCOUNT_ID_SYSTEM = 1;
public Long getId();
public long getId();
public String getAccountName();
public void setAccountName(String accountId);
public short getType();
public void setType(short type);
public String getState();
public void setState(String state);
public Long getDomainId();
public void setDomainId(Long domainId);
public long getDomainId();
public Date getRemoved();
}


@ -0,0 +1,14 @@
/**
*
*/
package com.cloud.user;
/**
* OwnedBy must be inherited by all objects that can be owned by an account.
*/
public interface OwnedBy {
/**
* @return account id that owns this object.
*/
long getAccountId();
}


@ -18,9 +18,9 @@
package com.cloud.user;
import java.util.Date;
import java.util.Date;
public interface User {
public interface User extends OwnedBy {
public static final long UID_SYSTEM = 1;
public Long getId();
@ -45,8 +45,6 @@ public interface User {
public void setLastname(String lastname);
public long getAccountId();
public void setAccountId(long accountId);
public String getEmail();


@ -17,12 +17,14 @@
*/
package com.cloud.uservm;
import com.cloud.domain.PartOf;
import com.cloud.user.OwnedBy;
import com.cloud.vm.VirtualMachine;
/**
* This represents one running virtual machine instance.
*/
public interface UserVm extends VirtualMachine {
public interface UserVm extends VirtualMachine, OwnedBy, PartOf {
/**
* @return service offering id
@ -39,11 +41,6 @@ public interface UserVm extends VirtualMachine {
*/
String getVnet();
/**
* @return the account this vm instance belongs to.
*/
long getAccountId();
/**
* @return the domain this vm instance belongs to.
*/
@ -63,8 +60,6 @@ public interface UserVm extends VirtualMachine {
String getDisplayName();
String getGroup();
String getUserData();
void setUserData(String userData);


@ -0,0 +1,131 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.vm;
import com.cloud.offering.DiskOffering;
import com.cloud.storage.Volume;
/**
* DiskProfile describes a disk and what functionality is required from it.
* This object is generated by the management server and passed to the allocators
* and resources to allocate and create disks. The object is immutable once
* it has been created.
*/
public class DiskProfile {
private long size;
private String[] tags;
private Volume.VolumeType type;
private String name;
private boolean useLocalStorage;
private boolean recreatable;
private long diskOfferingId;
private Long templateId;
private long volumeId;
private Volume vol;
private DiskOffering offering;
protected DiskProfile() {
}
public DiskProfile(long volumeId, Volume.VolumeType type, String name, long diskOfferingId, long size, String[] tags, boolean useLocalStorage, boolean recreatable, Long templateId) {
this.type = type;
this.name = name;
this.size = size;
this.tags = tags;
this.useLocalStorage = useLocalStorage;
this.recreatable = recreatable;
this.diskOfferingId = diskOfferingId;
this.templateId = templateId;
this.volumeId = volumeId;
}
public DiskProfile(Volume vol, DiskOffering offering) {
this(vol.getId(), vol.getVolumeType(), vol.getName(), offering.getId(), vol.getSize(), offering.getTagsArray(), offering.getUseLocalStorage(), offering.getUseLocalStorage(), vol.getTemplateId());
this.vol = vol;
this.offering = offering;
}
/**
* @return size of the disk requested in bytes.
*/
public long getSize() {
return size;
}
/**
* @return id of the volume backing up this disk characteristics
*/
public long getVolumeId() {
return volumeId;
}
/**
* @return Unique name for the disk.
*/
public String getName() {
return name;
}
/**
* @return tags for the disk. This can be used to match it to different storage pools.
*/
public String[] getTags() {
return tags;
}
/**
* @return type of volume.
*/
public Volume.VolumeType getType() {
return type;
}
/**
* @return Does this volume require local storage?
*/
public boolean useLocalStorage() {
return useLocalStorage;
}
/**
* @return Is this volume recreatable? A volume is recreatable if the disk's content can be
* reconstructed from the template.
*/
public boolean isRecreatable() {
return recreatable;
}
/**
* @return template id the disk is based on. Can be null if it is not based on any templates.
*/
public Long getTemplateId() {
return templateId;
}
/**
* @return disk offering id that the disk is based on.
*/
public long getDiskOfferingId() {
return diskOfferingId;
}
@Override
public String toString() {
return new StringBuilder("DskChr[").append(type).append("|").append(size).append("|").append("]").toString();
}
}


@ -17,17 +17,14 @@
*/
package com.cloud.vm;
import com.cloud.network.Network.Mode;
import com.cloud.resource.Resource;
/**
* Nic represents one nic on the VM.
*/
public interface Nic {
enum State {
AcquireIp,
IpAcquired,
}
State getState();
public interface Nic extends Resource {
String getIp4Address();
@ -36,15 +33,14 @@ public interface Nic {
/**
* @return network profile id that this nic belongs to.
*/
long getNetworkProfileId();
/**
* @return the unique id to reference this nic.
*/
long getId();
long getNetworkConfigurationId();
/**
* @return the vm instance id that this nic belongs to.
*/
long getInstanceId();
int getDeviceId();
Mode getMode();
}


@ -0,0 +1,186 @@
/**
*
*/
package com.cloud.vm;
import java.net.URI;
import com.cloud.network.Network.AddressFormat;
import com.cloud.network.Network.BroadcastDomainType;
import com.cloud.network.Network.Mode;
import com.cloud.network.Network.TrafficType;
import com.cloud.network.NetworkConfiguration;
import com.cloud.resource.Resource;
import com.cloud.resource.Resource.ReservationStrategy;
public class NicProfile {
long id;
BroadcastDomainType broadcastType;
Mode mode;
long vmId;
String gateway;
AddressFormat format;
TrafficType trafficType;
String ip4Address;
String ip6Address;
String macAddress;
URI isolationUri;
String netmask;
URI broadcastUri;
ReservationStrategy strategy;
String reservationId;
public String getNetmask() {
return netmask;
}
public void setNetmask(String netmask) {
this.netmask = netmask;
}
public void setBroadcastUri(URI broadcastUri) {
this.broadcastUri = broadcastUri;
}
public URI getBroadCastUri() {
return broadcastUri;
}
public void setIsolationUri(URI isolationUri) {
this.isolationUri = isolationUri;
}
public URI getIsolationUri() {
return isolationUri;
}
public BroadcastDomainType getType() {
return broadcastType;
}
public void setBroadcastType(BroadcastDomainType broadcastType) {
this.broadcastType = broadcastType;
}
public void setMode(Mode mode) {
this.mode = mode;
}
public void setVmId(long vmId) {
this.vmId = vmId;
}
public void setGateway(String gateway) {
this.gateway = gateway;
}
public void setFormat(AddressFormat format) {
this.format = format;
}
public void setTrafficType(TrafficType trafficType) {
this.trafficType = trafficType;
}
public void setIp6Address(String ip6Address) {
this.ip6Address = ip6Address;
}
public Mode getMode() {
return mode;
}
public long getNetworkId() {
return id;
}
public long getVirtualMachineId() {
return vmId;
}
public long getId() {
return id;
}
public BroadcastDomainType getBroadcastType() {
return broadcastType;
}
public void setMacAddress(String macAddress) {
this.macAddress = macAddress;
}
public long getVmId() {
return vmId;
}
public String getGateway() {
return gateway;
}
public AddressFormat getFormat() {
return format;
}
public TrafficType getTrafficType() {
return trafficType;
}
public String getIp4Address() {
return ip4Address;
}
public String getIp6Address() {
return ip6Address;
}
public String getMacAddress() {
return macAddress;
}
public void setIp4Address(String ip4Address) {
this.ip4Address = ip4Address;
}
public NicProfile(Nic nic, NetworkConfiguration network) {
this.id = nic.getId();
this.gateway = network.getGateway();
this.mode = network.getMode();
this.format = null;
this.broadcastType = network.getBroadcastDomainType();
this.trafficType = network.getTrafficType();
this.ip4Address = nic.getIp4Address();
this.ip6Address = null;
this.macAddress = nic.getMacAddress();
this.reservationId = nic.getReservationId();
this.strategy = nic.getReservationStrategy();
}
public NicProfile(long id, BroadcastDomainType type, Mode mode, long vmId) {
this.id = id;
this.broadcastType = type;
this.mode = mode;
this.vmId = vmId;
}
public NicProfile(Resource.ReservationStrategy strategy, String ip4Address, String macAddress, String gateway, String netmask) {
this.format = AddressFormat.Ip4;
this.ip4Address = ip4Address;
this.macAddress = macAddress;
this.gateway = gateway;
this.netmask = netmask;
this.strategy = strategy;
}
public ReservationStrategy getReservationStrategy() {
return strategy;
}
public String getReservationId() {
return reservationId;
}
public void setReservationId(String reservationId) {
this.reservationId = reservationId;
}
}

View File

@ -0,0 +1,14 @@
/**
*
*/
package com.cloud.vm;
/**
* RunningOn must be implemented by objects that run on hosts.
*
*/
public interface RunningOn {
Long getHostId();
}


@ -19,10 +19,13 @@
package com.cloud.vm;
import java.util.List;
import java.util.Set;
import com.cloud.utils.fsm.FiniteState;
import com.cloud.utils.fsm.StateMachine;
import com.cloud.vm.VirtualMachine.Event;
public enum State {
public enum State implements FiniteState<State, Event> {
Creating(true),
Starting(true),
Running(false),
@ -44,22 +47,24 @@ public enum State {
return _transitional;
}
public static String[] toStrings(State... states) {
String[] strs = new String[states.length];
for (int i = 0; i < states.length; i++) {
strs[i] = states[i].toString();
}
return strs;
}
@Override
public State getNextState(VirtualMachine.Event e) {
return s_fsm.getNextState(this, e);
}
public State[] getFromStates(VirtualMachine.Event e) {
List<State> from = s_fsm.getFromStates(this, e);
return from.toArray(new State[from.size()]);
@Override
public List<State> getFromStates(VirtualMachine.Event e) {
return s_fsm.getFromStates(this, e);
}
@Override
public Set<Event> getPossibleEvents() {
return s_fsm.getPossibleEvents(this);
}
@Override
public StateMachine<State, Event> getStateMachine() {
return s_fsm;
}
protected static final StateMachine<State, VirtualMachine.Event> s_fsm = new StateMachine<State, VirtualMachine.Event>();


@ -19,11 +19,14 @@ package com.cloud.vm;
import java.util.Date;
import com.cloud.domain.PartOf;
import com.cloud.user.OwnedBy;
/**
* VirtualMachine describes the properties held by a virtual machine
*
*/
public interface VirtualMachine {
public interface VirtualMachine extends RunningOn, OwnedBy, PartOf {
public enum Event {
CreateRequested,
StartRequested,
@ -100,11 +103,6 @@ public interface VirtualMachine {
*/
public long getDataCenterId();
/**
* @return id of the host it is running on. If not running, returns null.
*/
public Long getHostId();
/**
* @return id of the host it was assigned last time.
*/


@ -0,0 +1,115 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.vm;
import java.util.List;
import java.util.Map;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.offering.ServiceOffering;
public class VirtualMachineProfile {
VirtualMachine _vm;
int _cpus;
int _speed; // in mhz
long _ram; // in bytes
Hypervisor.Type _hypervisorType;
VirtualMachine.Type _type;
Map<String, String> _params;
Long _templateId;
List<DiskProfile> _disks;
List<NicProfile> _nics;
public VirtualMachineProfile(VirtualMachine.Type type) {
this._type = type;
}
public long getId() {
return _vm.getId();
}
public VirtualMachine.Type getType() {
return _type;
}
public Long getTemplateId() {
return _templateId;
}
public int getCpus() {
return _cpus;
}
public int getSpeed() {
return _speed;
}
public long getRam() {
return _ram;
}
public void setNics(List<NicProfile> profiles) {
this._nics = profiles;
}
public List<NicProfile> getNics() {
return _nics;
}
public void setDisks(List<DiskProfile> profiles) {
this._disks = profiles;
}
public List<DiskProfile> getDisks() {
return _disks;
}
public Hypervisor.Type getHypervisorType() {
return _hypervisorType;
}
public VirtualMachine getVm() {
return _vm;
}
public VirtualMachineProfile(long id, int core, int speed, long ram, Long templateId, Hypervisor.Type type, Map<String, String> params) {
this._cpus = core;
this._speed = speed;
this._ram = ram;
this._hypervisorType = type;
this._params = params;
this._templateId = templateId;
}
public VirtualMachineProfile(VirtualMachine vm, ServiceOffering offering) {
this._cpus = offering.getCpu();
this._speed = offering.getSpeed();
this._ram = offering.getRamSize();
this._templateId = vm.getTemplateId();
this._type = vm.getType();
this._vm = vm;
}
protected VirtualMachineProfile() {
}
@Override
public String toString() {
return "VM-" + _type + "-" + _vm.getId();
}
}


@ -0,0 +1,13 @@
/**
*
*/
package com.cloud.vm;
import com.cloud.offering.ServiceOffering;
import com.cloud.template.VirtualMachineTemplate;
import com.cloud.utils.component.Adapter;
public interface VirtualMachineProfiler extends Adapter {
VirtualMachineProfile convert(ServiceOffering offering, VirtualMachineTemplate template);
}


@ -1,68 +0,0 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.vm;
import java.util.Map;
import com.cloud.hypervisor.Hypervisor;
public class VmCharacteristics {
int core;
int speed; // in mhz
long ram; // in bytes
Hypervisor.Type hypervisorType;
VirtualMachine.Type type;
Map<String, String> params;
public VmCharacteristics(VirtualMachine.Type type) {
this.type = type;
}
public VirtualMachine.Type getType() {
return type;
}
public VmCharacteristics() {
}
public int getCores() {
return core;
}
public int getSpeed() {
return speed;
}
public long getRam() {
return ram;
}
public Hypervisor.Type getHypervisorType() {
return hypervisorType;
}
public VmCharacteristics(int core, int speed, long ram, Hypervisor.Type type, Map<String, String> params) {
this.core = core;
this.speed = speed;
this.ram = ram;
this.hypervisorType = type;
this.params = params;
}
}

build/.gitignore vendored Normal file

@ -0,0 +1 @@
/override


@ -140,7 +140,7 @@
</path>
<path id="thirdparty.classpath">
<filelist files="${thirdparty.classpath}" />
<!--filelist files="${thirdparty.classpath}" /-->
<fileset dir="${thirdparty.dir}" erroronmissingdir="false">
<include name="*.jar" />
</fileset>


@ -48,7 +48,7 @@
<javac srcdir="@{top.dir}/src" debug="${debug}" debuglevel="${debuglevel}" deprecation="${deprecation}" destdir="${classes.dir}/@{jar.name}" source="${source.compat.version}" target="${target.compat.version}" includeantruntime="false" compiler="javac1.6">
<!-- compilerarg line="-processor com.cloud.annotation.LocalProcessor -processorpath ${base.dir}/tools/src -Xlint:all"/ -->
<!-- compilerarg line="-processor com.cloud.utils.LocalProcessor -processorpath ${base.dir}/utils/src -Xlint:all"/ -->
<compilerarg line="-Xlint:all"/>
<compilerarg line="-Xlint:-path"/>
<classpath refid="@{classpath}" />
<exclude-files/>
</javac>


@ -27,6 +27,18 @@
<target name="run" depends="start-tomcat"/>
<target name="stop" depends="stop-tomcat"/>
<target name="debug" depends="debug-tomcat"/>
<target name="setup">
<mkdir dir="${build.dir}/override"/>
<copy todir="${build.dir}/override">
<fileset dir="${build.dir}">
<include name="build-cloud.properties"/>
<include name="replace.properties"/>
</fileset>
</copy>
<loadproperties srcfile="${build.dir}/override/replace.properties" resource="propertyresource"/>
<!-- propertyfile file="${build.dir}/override/replace.properties"/ -->
</target>
<target name="debug-suspend">
<java jar="${tomcat.home}/bin/bootstrap.jar" fork="true">
@ -86,23 +98,11 @@
<target name="unzip-usage" if="usagezip.uptodate">
<unzip src="${deploy.work.dir}/usage.zip" dest="${deploy.work.dir}/usage"/>
</target>
<!--
<target name="deploy-db">
<property file="
<sql
driver="com.mysql.jdbc.Driver"
url="jdbc:database-url"
userid="cloud"
password="cloud"
src="data.sql"
/>
</target>
-->
<target name="deploy-server" depends="deploy-common" >
<copy todir="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/vms" file="${dist.dir}/systemvm.zip" />
<copy todir="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/vms" file="${dist.dir}/systemvm.iso" />
</target>
<target name="deploy-common" >
<condition property="zip.uptodate">
<available file="${deploy.work.dir}/client.zip" type="file"/>
@ -114,7 +114,6 @@
<include name="*.jar"/>
</fileset>
</copy>
<copy todir="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/scripts/vm/hypervisor/xenserver" file="${dist.dir}/patch.tgz" />
<touch file="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/scripts/vm/hypervisor/xenserver/version"/>
<echo file="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/scripts/vm/hypervisor/xenserver/version" append="false" message="${version}.${build.number}"/>
<copy overwrite="true" todir="${server.deploy.to.dir}/conf">
@ -169,11 +168,18 @@
<available file="${setup.db.dir}/override/templates.xenserver.sql" />
</condition>
<condition property="vmware.templates.file" value="override/templates.vmware.sql" else="templates.vmware.sql">
<available file="${setup.db.dir}/override/templates.vmware.sql" />
</condition>
<condition property="templates.file" value="${kvm.templates.file}" else="${xenserver.templates.file}" >
<condition property="templates.file.intermediate" value="${kvm.templates.file}" else="${xenserver.templates.file}" >
<isset property="KVM"/>
</condition>
<condition property="templates.file" value="${vmware.templates.file}" else="${templates.file.intermediate}" >
<isset property="vmware"/>
</condition>
<echo message="deploydb ${server-setup.file} ${templates.file} ${DBROOTPW}" />
<exec dir="${setup.db.dir}" executable="bash">
<arg value="deploy-db-dev.sh" />

client/tomcatconf/commands.properties.in Normal file → Executable file

@ -61,6 +61,7 @@ deleteTemplate=com.cloud.api.commands.DeleteTemplateCmd;15
listTemplates=com.cloud.api.commands.ListTemplatesCmd;15
updateTemplatePermissions=com.cloud.api.commands.UpdateTemplatePermissionsCmd;15
listTemplatePermissions=com.cloud.api.commands.ListTemplatePermissionsCmd;15
extractTemplate=com.cloud.api.commands.ExtractTemplateCmd;15
#### iso commands
attachIso=com.cloud.api.commands.AttachIsoCmd;15
@ -72,6 +73,7 @@ deleteIso=com.cloud.api.commands.DeleteIsoCmd;15
copyIso=com.cloud.api.commands.CopyIsoCmd;15
updateIsoPermissions=com.cloud.api.commands.UpdateIsoPermissionsCmd;15
listIsoPermissions=com.cloud.api.commands.ListIsoPermissionsCmd;15
extractIso=com.cloud.api.commands.ExtractIsoCmd;15
#### guest OS commands
listOsTypes=com.cloud.api.commands.ListGuestOsCmd;15
@ -139,6 +141,7 @@ listSystemVms=com.cloud.api.commands.ListSystemVMsCmd;1
updateConfiguration=com.cloud.api.commands.UpdateCfgCmd;1
listConfigurations=com.cloud.api.commands.ListCfgsByCmd;1
addConfig=com.cloud.api.commands.AddConfigCmd;15
listCapabilities=com.cloud.api.commands.ListCapabilitiesCmd;15
#### pod commands
createPod=com.cloud.api.commands.CreatePodCmd;1
@ -177,6 +180,7 @@ detachVolume=com.cloud.api.commands.DetachVolumeCmd;15
createVolume=com.cloud.api.commands.CreateVolumeCmd;15
deleteVolume=com.cloud.api.commands.DeleteVolumeCmd;15
listVolumes=com.cloud.api.commands.ListVolumesCmd;15
extractVolume=com.cloud.api.commands.ExtractVolumeCmd;15
#### registration command: FIXME -- this really should be something in management server that
#### generates a new key for the user and they just have to
@ -207,4 +211,11 @@ listNetworkGroups=com.cloud.api.commands.ListNetworkGroupsCmd;11
registerPreallocatedLun=com.cloud.server.api.commands.RegisterPreallocatedLunCmd;1
deletePreallocatedLun=com.cloud.server.api.commands.DeletePreallocatedLunCmd;1
listPreallocatedLuns=com.cloud.api.commands.ListPreallocatedLunsCmd;1
listPreallocatedLuns=com.cloud.api.commands.ListPreallocatedLunsCmd;1
#### vm group commands
createInstanceGroup=com.cloud.api.commands.CreateVMGroupCmd;15
deleteInstanceGroup=com.cloud.api.commands.DeleteVMGroupCmd;15
updateInstanceGroup=com.cloud.api.commands.UpdateVMGroupCmd;15
listInstanceGroups=com.cloud.api.commands.ListVMGroupsCmd;15
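Each line added to commands.properties above follows the pattern name=fully.qualified.CommandClass;number, where the trailing integer appears to act as a permission/role mask for the API command. A hypothetical parser for one such entry (CommandEntrySketch and the mask interpretation are illustrative assumptions, not CloudStack code):

```java
// Hypothetical parser for one commands.properties entry of the form
// "extractIso=com.cloud.api.commands.ExtractIsoCmd;15".
// The trailing integer is treated here as an opaque role bitmask; the
// exact bit assignments are an assumption, not taken from the diff.
public class CommandEntrySketch {
    public final String name;
    public final String className;
    public final int roleMask;

    public CommandEntrySketch(String line) {
        String[] kv = line.split("=", 2);       // command name vs. value
        String[] parts = kv[1].split(";");      // implementing class vs. mask
        this.name = kv[0].trim();
        this.className = parts[0].trim();
        this.roleMask = Integer.parseInt(parts[1].trim());
    }

    public static void main(String[] args) {
        CommandEntrySketch e = new CommandEntrySketch("extractIso=com.cloud.api.commands.ExtractIsoCmd;15");
        System.out.println(e.name + " -> " + e.className + " (mask " + e.roleMask + ")");
    }
}
```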


@ -88,6 +88,7 @@
<dao name="ResourceCount" class="com.cloud.configuration.dao.ResourceCountDaoImpl"/>
<dao name="UserAccount" class="com.cloud.user.dao.UserAccountDaoImpl"/>
<dao name="VM Template Host" class="com.cloud.storage.dao.VMTemplateHostDaoImpl"/>
<dao name="Upload" class="com.cloud.storage.dao.UploadDaoImpl"/>
<dao name="VM Template Pool" class="com.cloud.storage.dao.VMTemplatePoolDaoImpl"/>
<dao name="VM Template Zone" class="com.cloud.storage.dao.VMTemplateZoneDaoImpl"/>
<dao name="Launch Permission" class="com.cloud.storage.dao.LaunchPermissionDaoImpl"/>
@ -110,7 +111,12 @@
<dao name="GuestOSDao" class="com.cloud.storage.dao.GuestOSDaoImpl"/>
<dao name="GuestOSCategoryDao" class="com.cloud.storage.dao.GuestOSCategoryDaoImpl"/>
<dao name="ClusterDao" class="com.cloud.dc.dao.ClusterDaoImpl"/>
<dao name="NetworkConfigurationDao" class="com.cloud.network.dao.NetworkConfigurationDaoImpl"/>
<dao name="NetworkOfferingDao" class="com.cloud.offerings.dao.NetworkOfferingDaoImpl"/>
<dao name="NicDao" class="com.cloud.vm.dao.NicDaoImpl"/>
<dao name="InstanceGroupDao" class="com.cloud.vm.dao.InstanceGroupDaoImpl"/>
<dao name="Instance Group to VM Mapping" class="com.cloud.vm.dao.InstanceGroupVMMapDaoImpl"/>
<adapters key="com.cloud.agent.manager.allocator.HostAllocator">
<adapter name="FirstFitRouting" class="com.cloud.agent.manager.allocator.impl.FirstFitRoutingAllocator"/>
<adapter name="FirstFit" class="com.cloud.agent.manager.allocator.impl.FirstFitAllocator"/>
@@ -146,7 +152,8 @@
<adapters key="com.cloud.resource.Discoverer">
<adapter name="SecondaryStorage" class="com.cloud.storage.secondary.SecondaryStorageDiscoverer"/>
<adapter name="XenServer" class="com.cloud.hypervisor.xen.discoverer.XcpServerDiscoverer"/>
<adapter name="XCP Agent" class="com.cloud.hypervisor.xen.discoverer.XcpServerDiscoverer"/>
<adapter name="KVM Agent" class="com.cloud.hypervisor.kvm.discoverer.KvmServerDiscoverer"/>
</adapters>
<manager name="Cluster Manager" class="com.cloud.cluster.DummyClusterManagerImpl">
@@ -166,6 +173,8 @@
</manager>
<manager name="download manager" class="com.cloud.storage.download.DownloadMonitorImpl">
</manager>
<manager name="upload manager" class="com.cloud.storage.upload.UploadMonitorImpl">
</manager>
<manager name="console proxy manager" class="com.cloud.consoleproxy.AgentBasedStandaloneConsoleProxyManager">
</manager>
<manager name="vm manager" class="com.cloud.vm.UserVmManagerImpl"/>
@@ -209,8 +218,9 @@
<dao name="IP Addresses configuration server" class="com.cloud.network.dao.IPAddressDaoImpl"/>
<dao name="Datacenter IP Addresses configuration server" class="com.cloud.dc.dao.DataCenterIpAddressDaoImpl"/>
<dao name="domain router" class="com.cloud.vm.dao.DomainRouterDaoImpl"/>
<dao name="host zone configuration server" class="com.cloud.dc.dao.DataCenterDaoImpl">
</dao>
<dao name="host zone configuration server" class="com.cloud.dc.dao.DataCenterDaoImpl"/>
<dao name="Console Proxy" class="com.cloud.vm.dao.ConsoleProxyDaoImpl"/>
<dao name="Secondary Storage VM" class="com.cloud.vm.dao.SecondaryStorageVmDaoImpl"/>
<dao name="host pod configuration server" class="com.cloud.dc.dao.HostPodDaoImpl">
</dao>
<dao name="PodVlanMap configuration server" class="com.cloud.dc.dao.PodVlanMapDaoImpl"/>

client/wscript_build Normal file

@@ -0,0 +1,11 @@
import Options
start_path = bld.path.find_dir("WEB-INF")
bld.install_files('${MSENVIRON}/webapps/client/WEB-INF',
start_path.ant_glob("**",src=True,bld=False,dir=False,flat=True),
cwd=start_path,relative_trick=True)
if not Options.options.PRESERVECONFIG:
bld.install_files_filtered("${MSCONF}","tomcatconf/*")
bld.install_files("${MSCONF}",'tomcatconf/db.properties',chmod=0640)
bld.setownership("${MSCONF}/db.properties","root",bld.env.MSUSER)


@@ -34,6 +34,8 @@ BuildRequires: commons-httpclient
BuildRequires: jpackage-utils
BuildRequires: gcc
BuildRequires: glibc-devel
BuildRequires: /usr/bin/mkisofs
BuildRequires: MySQL-python
%global _premium %(tar jtvmf %{SOURCE0} '*/cloudstack-proprietary/' --occurrence=1 2>/dev/null | wc -l)
@@ -44,6 +46,7 @@ intelligent cloud implementation.
%package utils
Summary: Cloud.com utility library
Requires: java >= 1.6.0
Requires: python
Group: System Environment/Libraries
Obsoletes: vmops-utils < %{version}-%{release}
%description utils
@@ -180,12 +183,11 @@ Summary: Cloud.com setup tools
Obsoletes: vmops-setup < %{version}-%{release}
Requires: java >= 1.6.0
Requires: python
Requires: mysql
Requires: MySQL-python
Requires: %{name}-utils = %{version}-%{release}
Requires: %{name}-server = %{version}-%{release}
Requires: %{name}-deps = %{version}-%{release}
Requires: %{name}-python = %{version}-%{release}
Requires: MySQL-python
Group: System Environment/Libraries
%description setup
The Cloud.com setup tools let you set up your Management Server and Usage Server.
@@ -231,6 +233,9 @@ Requires: %{name}-daemonize
Requires: /sbin/service
Requires: /sbin/chkconfig
Requires: kvm
%if 0%{?fedora} >= 12
Requires: cloud-qemu-system-x86
%endif
Requires: libcgroup
Requires: /usr/bin/uuidgen
Requires: augeas >= 0.7.1
@@ -368,7 +373,6 @@ if [ "$1" == "1" ] ; then
/sbin/chkconfig --add %{name}-management > /dev/null 2>&1 || true
/sbin/chkconfig --level 345 %{name}-management on > /dev/null 2>&1 || true
fi
test -f %{_sharedstatedir}/%{name}/management/.ssh/id_rsa || su - %{name} -c 'yes "" 2>/dev/null | ssh-keygen -t rsa -q -N ""' < /dev/null
@@ -447,82 +451,43 @@ fi
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-utils.jar
%{_javadir}/%{name}-api.jar
%attr(755,root,root) %{_bindir}/cloud-sccs
%attr(755,root,root) %{_bindir}/cloud-gitrevs
%doc %{_docdir}/%{name}-%{version}/sccs-info
%doc %{_docdir}/%{name}-%{version}/version-info
%doc %{_docdir}/%{name}-%{version}/configure-info
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files client-ui
%defattr(0644,root,root,0755)
%{_datadir}/%{name}/management/webapps/client/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files server
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-server.jar
%{_sysconfdir}/%{name}/server/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%if %{_premium}
%files agent-scripts
%defattr(-,root,root,-)
%{_libdir}/%{name}/agent/scripts/*
%{_libdir}/%{name}/agent/vms/systemvm.zip
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%else
%files agent-scripts
%defattr(-,root,root,-)
%{_libdir}/%{name}/agent/scripts/installer/*
%{_libdir}/%{name}/agent/scripts/network/domr/*.sh
%{_libdir}/%{name}/agent/scripts/storage/*.sh
%{_libdir}/%{name}/agent/scripts/storage/zfs/*
%{_libdir}/%{name}/agent/scripts/storage/qcow2/*
%{_libdir}/%{name}/agent/scripts/storage/secondary/*
%{_libdir}/%{name}/agent/scripts/util/*
%{_libdir}/%{name}/agent/scripts/vm/*.sh
%{_libdir}/%{name}/agent/scripts/vm/storage/nfs/*
%{_libdir}/%{name}/agent/scripts/vm/storage/iscsi/*
%{_libdir}/%{name}/agent/scripts/vm/network/*
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/*.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/kvm/*
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xen/*
%{_libdir}/%{name}/agent/vms/systemvm.zip
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
# maintain the following list in sync with files agent-scripts
%if %{_premium}
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/check_heartbeat.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/find_bond.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/launch_hb.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/setup_heartbeat_sr.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/vmopspremium
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenheartbeat.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch-premium
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xs_cleanup.sh
%endif
%{_libdir}/%{name}/agent/vms/systemvm.zip
%{_libdir}/%{name}/agent/vms/systemvm.iso
%files daemonize
%defattr(-,root,root,-)
%attr(755,root,root) %{_bindir}/%{name}-daemonize
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files deps
%defattr(0644,root,root,0755)
@@ -543,39 +508,20 @@ fi
%{_javadir}/%{name}-xenserver-5.5.0-1.jar
%{_javadir}/%{name}-xmlrpc-common-3.*.jar
%{_javadir}/%{name}-xmlrpc-client-3.*.jar
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files core
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-core.jar
%doc README
%doc INSTALL
%doc HACKING
%doc debian/copyright
%files vnet
%defattr(0644,root,root,0755)
%attr(0755,root,root) %{_sbindir}/%{name}-vnetd
%attr(0755,root,root) %{_sbindir}/%{name}-vn
%attr(0755,root,root) %{_initrddir}/%{name}-vnetd
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files python
%defattr(0644,root,root,0755)
%{_prefix}/lib*/python*/site-packages/%{name}*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files setup
%attr(0755,root,root) %{_bindir}/%{name}-setup-databases
@@ -585,19 +531,18 @@ fi
%{_datadir}/%{name}/setup/create-index-fk.sql
%{_datadir}/%{name}/setup/create-schema.sql
%{_datadir}/%{name}/setup/server-setup.sql
%{_datadir}/%{name}/setup/templates.kvm.sql
%{_datadir}/%{name}/setup/templates.xenserver.sql
%{_datadir}/%{name}/setup/templates.*.sql
%{_datadir}/%{name}/setup/deploy-db-dev.sh
%{_datadir}/%{name}/setup/server-setup.xml
%{_datadir}/%{name}/setup/data-20to21.sql
%{_datadir}/%{name}/setup/index-20to21.sql
%{_datadir}/%{name}/setup/index-212to213.sql
%{_datadir}/%{name}/setup/postprocess-20to21.sql
%{_datadir}/%{name}/setup/schema-20to21.sql
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%{_datadir}/%{name}/setup/schema-level.sql
%{_datadir}/%{name}/setup/schema-21to22.sql
%{_datadir}/%{name}/setup/data-21to22.sql
%{_datadir}/%{name}/setup/index-21to22.sql
%files client
%defattr(0644,root,root,0755)
@@ -637,19 +582,10 @@ fi
%dir %attr(770,root,%{name}) %{_localstatedir}/cache/%{name}/management/temp
%dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/management
%dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/agent
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files agent-libs
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-agent.jar
%doc README
%doc INSTALL
%doc HACKING
%doc debian/copyright
%files agent
%defattr(0644,root,root,0755)
@@ -665,11 +601,6 @@ fi
%{_libdir}/%{name}/agent/images
%attr(0755,root,root) %{_bindir}/%{name}-setup-agent
%dir %attr(770,root,root) %{_localstatedir}/log/%{name}/agent
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files console-proxy
%defattr(0644,root,root,0755)
@@ -682,11 +613,6 @@ fi
%{_libdir}/%{name}/console-proxy/*
%attr(0755,root,root) %{_bindir}/%{name}-setup-console-proxy
%dir %attr(770,root,root) %{_localstatedir}/log/%{name}/console-proxy
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%if %{_premium}
@@ -697,20 +623,10 @@ fi
%{_sharedstatedir}/%{name}/test/*
%{_libdir}/%{name}/test/*
%{_sysconfdir}/%{name}/test/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files premium-deps
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-premium/*.jar
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files premium
%defattr(0644,root,root,0755)
@@ -718,15 +634,18 @@ fi
%{_javadir}/%{name}-server-extras.jar
%{_sysconfdir}/%{name}/management/commands-ext.properties
%{_sysconfdir}/%{name}/management/components-premium.xml
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/*
%{_libdir}/%{name}/agent/vms/systemvm-premium.zip
%{_libdir}/%{name}/agent/vms/systemvm-premium.iso
%{_datadir}/%{name}/setup/create-database-premium.sql
%{_datadir}/%{name}/setup/create-schema-premium.sql
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
# maintain the following list in sync with files agent-scripts
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/check_heartbeat.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/find_bond.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/launch_hb.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/setup_heartbeat_sr.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/vmopspremium
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenheartbeat.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch-premium
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xs_cleanup.sh
%files usage
%defattr(0644,root,root,0755)
@@ -737,11 +656,6 @@ fi
%{_sysconfdir}/%{name}/usage/usage-components.xml
%config(noreplace) %{_sysconfdir}/%{name}/usage/log4j-%{name}_usage.xml
%config(noreplace) %attr(640,root,%{name}) %{_sysconfdir}/%{name}/usage/db.properties
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%endif


@@ -1,6 +1,6 @@
#!/usr/bin/env python
import sys, os, subprocess, errno, re
import sys, os, subprocess, errno, re, getopt
# ---- This snippet of code adds the sources path and the waf configured PYTHONDIR to the Python path ----
# ---- We do this so cloud_utils can be looked up in the following order:
@@ -14,6 +14,7 @@ for pythonpath in (
if os.path.isdir(pythonpath): sys.path.insert(0,pythonpath)
# ---- End snippet of code ----
import cloud_utils
from cloud_utils import stderr
E_GENERIC= 1
E_NOKVM = 2
@@ -27,13 +28,6 @@ E_CPRECONFIGFAILED = 9
E_CPFAILEDTOSTART = 10
E_NOFQDN = 11
def stderr(msgfmt,*args):
msgfmt += "\n"
if args: sys.stderr.write(msgfmt%args)
else: sys.stderr.write(msgfmt)
sys.stderr.flush()
def bail(errno=E_GENERIC,message=None,*args):
if message: stderr(message,*args)
stderr("Cloud Console Proxy setup aborted")
@@ -132,9 +126,30 @@ CentOS = os.path.exists("/etc/centos-release") or ( os.path.exists("/etc/redhat-
#--------------- procedure starts here ------------
def main():
# parse cmd line
opts, args = getopt.getopt(sys.argv[1:], "a", ["host=", "zone=", "pod="])
host=None
zone=None
pod=None
autoMode=False
do_check_kvm = True
for opt, arg in opts:
if opt == "--host":
if arg != "":
host = arg
elif opt == "--zone":
if arg != "":
zone = arg
elif opt == "--pod":
if arg != "":
pod = arg
elif opt == "-a":
autoMode=True
servicename = "@PACKAGE@-console-proxy"
if autoMode:
cloud_utils.setLogFile("/var/log/cloud/setupConsoleProxy.log")
stderr("Welcome to the Cloud Console Proxy setup")
stderr("")
@@ -176,7 +191,7 @@ def main():
print e.stdout+e.stderr
bail(E_FWRECONFIGFAILED,"Firewall could not be enabled")
cloud_utils.setup_consoleproxy_config("@CPSYSCONFDIR@/agent.properties")
cloud_utils.setup_consoleproxy_config("@CPSYSCONFDIR@/agent.properties", host, zone, pod)
stderr("Enabling and starting the Cloud Console Proxy")
cloud_utils.enable_service(servicename)
stderr("Cloud Console Proxy restarted")
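The hunk above adds `getopt`-based option handling (`--host`, `--zone`, `--pod`, and `-a` for auto mode) to the console-proxy setup script. A minimal standalone sketch of that parsing pattern, using only the option names visible in the diff:

```python
import getopt

def parse_args(argv):
    """Parse --host/--zone/--pod and the -a (auto mode) flag,
    mirroring the option loop added in the diff above."""
    opts, _args = getopt.getopt(argv, "a", ["host=", "zone=", "pod="])
    host = zone = pod = None
    auto_mode = False
    for opt, arg in opts:
        # empty values are ignored, as in the original loop
        if opt == "--host" and arg:
            host = arg
        elif opt == "--zone" and arg:
            zone = arg
        elif opt == "--pod" and arg:
            pod = arg
        elif opt == "-a":
            auto_mode = True
    return host, zone, pod, auto_mode
```

The parsed `host`/`zone`/`pod` values are then threaded through to `setup_consoleproxy_config`, which is why its signature gains three extra parameters later in this diff.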

console-proxy/scripts/_run.sh Executable file

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
#run.sh runs the console proxy.
# make sure we delete the old files from the original template
rm console-proxy.jar
rm console-common.jar
rm conf/cloud.properties
set -x
CP=./:./conf
for file in *.jar
do
CP=${CP}:$file
done
keyvalues=
if [ -f /mnt/cmdline ]
then
CMDLINE=$(cat /mnt/cmdline)
else
CMDLINE=$(cat /proc/cmdline)
fi
#CMDLINE="graphical utf8 eth0ip=0.0.0.0 eth0mask=255.255.255.0 eth1ip=192.168.140.40 eth1mask=255.255.255.0 eth2ip=172.24.0.50 eth2mask=255.255.0.0 gateway=172.24.0.1 dns1=72.52.126.11 template=domP dns2=72.52.126.12 host=192.168.1.142 port=8250 mgmtcidr=192.168.1.0/24 localgw=192.168.140.1 zone=5 pod=5"
for i in $CMDLINE
do
KEY=$(echo $i | cut -s -d= -f1)
VALUE=$(echo $i | cut -s -d= -f2)
[ "$KEY" == "" ] && continue
case $KEY in
*)
keyvalues="${keyvalues} $KEY=$VALUE"
esac
done
tot_mem_k=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
let "tot_mem_m=tot_mem_k>>10"
let "eightypcnt=$tot_mem_m*8/10"
let "maxmem=$tot_mem_m-80"
if [ $maxmem -gt $eightypcnt ]
then
maxmem=$eightypcnt
fi
EXTRA=
if [ -f certs/realhostip.keystore ]
then
EXTRA="-Djavax.net.ssl.trustStore=$(dirname $0)/certs/realhostip.keystore -Djavax.net.ssl.trustStorePassword=vmops.com"
fi
java -mx${maxmem}m ${EXTRA} -cp $CP com.cloud.agent.AgentShell $keyvalues $@
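The heap sizing in `_run.sh` above caps the JVM at whichever is smaller: total RAM minus 80 MB, or 80% of total RAM. A Python transcription of that arithmetic (the shell uses `>>10` to convert KiB to MiB):

```python
def max_heap_mb(total_mem_kb):
    """Mirror the _run.sh heap computation:
    min(total_mb - 80, 80% of total_mb), with total read from /proc/meminfo in KiB."""
    total_mb = total_mem_kb >> 10          # tot_mem_m=tot_mem_k>>10
    eighty_pct = total_mb * 8 // 10        # eightypcnt
    maxmem = total_mb - 80                 # leave 80 MB for the rest of the system VM
    return min(maxmem, eighty_pct)
```

On small system VMs the 80% cap dominates; only below roughly 400 MB of RAM does the "total minus 80 MB" term become the binding limit.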


@@ -2,7 +2,12 @@
BASE_DIR="/var/www/html/copy/template/"
HTACCESS="$BASE_DIR/.htaccess"
PASSWDFILE="/etc/httpd/.htpasswd"
if [ -d /etc/apache2 ]
then
PASSWDFILE="/etc/apache2/.htpasswd"
fi
config_htaccess() {
mkdir -p $BASE_DIR


@@ -15,6 +15,17 @@ config_httpd_conf() {
echo "</VirtualHost>" >> /etc/httpd/conf/httpd.conf
}
config_apache2_conf() {
local ip=$1
local srvr=$2
cp -f /etc/apache2/sites-available/default.orig /etc/apache2/sites-available/default
cp -f /etc/apache2/sites-available/default-ssl.orig /etc/apache2/sites-available/default-ssl
sed -i -e "s/VirtualHost.*:80$/VirtualHost $ip:80/" /etc/httpd/conf/httpd.conf
sed -i 's/_default_/$ip/' /etc/apache2/sites-available/default-ssl
sed -i 's/ssl-cert-snakeoil.key/realhostip.key/' /etc/apache2/sites-available/default-ssl
sed -i 's/ssl-cert-snakeoil.pem/realhostip.crt/' /etc/apache2/sites-available/default-ssl
}
copy_certs() {
local certdir=$(dirname $0)/certs
local mydir=$(dirname $0)
@@ -25,16 +36,37 @@ copy_certs() {
return 1
}
copy_certs_apache2() {
local certdir=$(dirname $0)/certs
local mydir=$(dirname $0)
if [ -d $certdir ] && [ -f $certdir/realhostip.key ] && [ -f $certdir/realhostip.crt ] ; then
cp $certdir/realhostip.key /etc/ssl/private/ && cp $certdir/realhostip.crt /etc/ssl/certs/
return $?
fi
return 1
}
if [ $# -ne 2 ] ; then
echo $"Usage: `basename $0` ipaddr servername "
exit 0
fi
copy_certs
if [ -d /etc/apache2 ]
then
copy_certs_apache2
else
copy_certs
fi
if [ $? -ne 0 ]
then
echo "Failed to copy certificates"
exit 2
fi
config_httpd_conf $1 $2
if [ -d /etc/apache2 ]
then
config_apache2_conf $1 $2
else
config_httpd_conf $1 $2
fi


@@ -1,51 +1,14 @@
#!/usr/bin/env bash
#run.sh runs the console proxy.
#!/bin/bash
#_run.sh runs the agent client.
# make sure we delete the old files from the original template
rm console-proxy.jar
rm console-common.jar
rm conf/cloud.properties
set -x
CP=./:./conf
for file in *.jar
# set -x
while true
do
CP=${CP}:$file
./_run.sh "$@"
ex=$?
if [ $ex -eq 0 ] || [ $ex -eq 1 ] || [ $ex -eq 66 ] || [ $ex -gt 128 ]; then
exit $ex
fi
sleep 20
done
keyvalues=
if [ -f /mnt/cmdline ]
then
CMDLINE=$(cat /mnt/cmdline)
else
CMDLINE=$(cat /proc/cmdline)
fi
#CMDLINE="graphical utf8 eth0ip=0.0.0.0 eth0mask=255.255.255.0 eth1ip=192.168.140.40 eth1mask=255.255.255.0 eth2ip=172.24.0.50 eth2mask=255.255.0.0 gateway=172.24.0.1 dns1=72.52.126.11 template=domP dns2=72.52.126.12 host=192.168.1.142 port=8250 mgmtcidr=192.168.1.0/24 localgw=192.168.140.1 zone=5 pod=5"
for i in $CMDLINE
do
KEY=$(echo $i | cut -s -d= -f1)
VALUE=$(echo $i | cut -s -d= -f2)
[ "$KEY" == "" ] && continue
case $KEY in
*)
keyvalues="${keyvalues} $KEY=$VALUE"
esac
done
tot_mem_k=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
let "tot_mem_m=tot_mem_k>>10"
let "eightypcnt=$tot_mem_m*8/10"
let "maxmem=$tot_mem_m-80"
if [ $maxmem -gt $eightypcnt ]
then
maxmem=$eightypcnt
fi
EXTRA=
if [ -f certs/realhostip.keystore ]
then
EXTRA="-Djavax.net.ssl.trustStore=$(dirname $0)/certs/realhostip.keystore -Djavax.net.ssl.trustStorePassword=vmops.com"
fi
java -mx${maxmem}m ${EXTRA} -cp $CP com.cloud.agent.AgentShell $keyvalues $@
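The rewritten `run.sh` above is now just a restart wrapper: it relaunches `_run.sh` unless the exit code signals a deliberate stop (0, 1, 66) or death by signal (which a shell reports as a code above 128), sleeping 20 seconds between attempts. A rough Python equivalent of that supervision loop, for illustration only (note that `subprocess.call` reports signal deaths as negative codes rather than >128):

```python
import subprocess
import time

def supervise(cmd, terminal_codes=(0, 1, 66), backoff=20):
    """Re-run cmd until it exits with a terminal code or is killed by a signal,
    mirroring the retry loop in run.sh."""
    while True:
        rc = subprocess.call(cmd)
        # shell sees signal deaths as rc > 128; Python reports them as rc < 0
        if rc in terminal_codes or rc > 128 or rc < 0:
            return rc
        time.sleep(backoff)
```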


@@ -0,0 +1,10 @@
import Options
# binary unsubstitutable files:
bld.install_files("${CPLIBDIR}",bld.path.ant_glob("images/**",src=True,bld=False,dir=False,flat=True),cwd=bld.path,relative_trick=True)
# text substitutable files (substitute with tokens from the environment bld.env):
bld.substitute('css/** js/** ui/** scripts/**',install_to="${CPLIBDIR}")
# config files (do not replace them if preserve config option is true)
if not Options.options.PRESERVECONFIG: bld.install_files_filtered("${CPSYSCONFDIR}","conf.dom0/*")


@@ -1,38 +1,44 @@
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="src"/>
<classpathentry kind="src" path="test"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry combineaccessrules="false" kind="src" path="/utils"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-common-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-client-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/gson-1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/log4j-1.2.15.jar"/>
<classpathentry kind="lib" path="/thirdparty/cglib-nodep-2.2.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-dbcp-1.2.2.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-pool-1.4.jar"/>
<classpathentry kind="lib" path="/thirdparty/ehcache-1.5.0.jar"/>
<classpathentry kind="lib" path="/thirdparty/junit-4.8.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/xenserver-5.5.0-1.jar" sourcepath="/thirdparty/xen/XenServerJava"/>
<classpathentry kind="lib" path="/thirdparty/trilead-ssh2-build213.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-httpclient-3.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-codec-1.4.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/api"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-activation.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-apputils.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-axis.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-jaxen-core.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-jaxen-jdom.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-jaxrpc.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-jdom.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-mailapi.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-saxpath.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-smtp.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-vim.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-vim25.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-wbem.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-xalan.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-xerces.jar"/>
<classpathentry kind="lib" path="/home/kris/git/cloudstack-oss/cloudstack-proprietary/thirdparty/vmware-lib-xml-apis.jar"/>
<classpathentry kind="output" path="bin"/>
</classpath>
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="src"/>
<classpathentry kind="src" path="test"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry combineaccessrules="false" kind="src" path="/utils"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-common-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/xmlrpc-client-3.1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/gson-1.3.jar"/>
<classpathentry kind="lib" path="/thirdparty/log4j-1.2.15.jar"/>
<classpathentry kind="lib" path="/thirdparty/cglib-nodep-2.2.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-dbcp-1.2.2.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-pool-1.4.jar"/>
<classpathentry kind="lib" path="/thirdparty/ehcache-1.5.0.jar"/>
<classpathentry kind="lib" path="/thirdparty/junit-4.8.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/xenserver-5.5.0-1.jar" sourcepath="/thirdparty/xen/XenServerJava"/>
<classpathentry kind="lib" path="/thirdparty/trilead-ssh2-build213.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-httpclient-3.1.jar"/>
<classpathentry kind="lib" path="/thirdparty/commons-codec-1.4.jar"/>
<classpathentry combineaccessrules="false" kind="src" path="/api"/>
<classpathentry kind="lib" path="/thirdparty/vmware-apputils.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-credstore.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-activation.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-axis.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-jaxen-core.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-jaxen-jdom.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-jaxen.license"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-jaxen.readme"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-jaxrpc.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-jdom.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-mailapi.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-saxpath.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-smtp.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-wbem.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-xalan.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-xalan.license"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-xalan.readme"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-xerces.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-xerces.readme"/>
<classpathentry kind="lib" path="/thirdparty/vmware-lib-xml-apis.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-vim.jar"/>
<classpathentry kind="lib" path="/thirdparty/vmware-vim25.jar"/>
<classpathentry kind="output" path="bin"/>
</classpath>


@@ -21,7 +21,7 @@ package com.cloud.agent.api;
import java.util.List;
import com.cloud.storage.VolumeVO;
import com.cloud.storage.VirtualMachineTemplate.BootloaderType;
import com.cloud.template.VirtualMachineTemplate.BootloaderType;
public abstract class AbstractStartCommand extends Command {


@@ -0,0 +1,21 @@
/**
*
*/
package com.cloud.agent.api;
public class Start2Answer extends Answer {
protected Start2Answer() {
}
public Start2Answer(Start2Command cmd, String msg) {
super(cmd, false, msg);
}
public Start2Answer(Start2Command cmd, Exception e) {
super(cmd, false, e.getMessage());
}
public Start2Answer(Start2Command cmd) {
super(cmd, true, null);
}
}


@@ -0,0 +1,62 @@
/**
*
*/
package com.cloud.agent.api;
import com.cloud.agent.api.to.VirtualMachineTO;
public class Start2Command extends Command {
VirtualMachineTO vm;
public VirtualMachineTO getVirtualMachine() {
return vm;
}
/*
long id;
String guestIpAddress;
String gateway;
int ramSize;
String imagePath;
String guestNetworkId;
String guestMacAddress;
String vncPassword;
String externalVlan;
String externalMacAddress;
int utilization;
int cpuWeight;
int cpu;
int networkRateMbps;
int networkRateMulticastMbps;
String hostName;
String arch;
String isoPath;
boolean bootFromISO;
String guestOSDescription;
---->console proxy
private ConsoleProxyVO proxy;
private int proxyCmdPort;
private String vncPort;
private String urlPort;
private String mgmt_host;
private int mgmt_port;
private boolean sslEnabled;
----->abstract
protected String vmName;
protected String storageHosts[] = new String[2];
protected List<VolumeVO> volumes;
protected boolean mirroredVols = false;
protected BootloaderType bootloader = BootloaderType.PyGrub;
*/
@Override
public boolean executeInSequence() {
return true;
}
public Start2Command() {
}
}


@@ -25,7 +25,9 @@ import com.cloud.vm.ConsoleProxyVO;
public class StartConsoleProxyCommand extends AbstractStartCommand {
private ConsoleProxyVO proxy;
private ConsoleProxyVO proxy;
int networkRateMbps;
int networkRateMulticastMbps;
private int proxyCmdPort;
private String vncPort;
private String urlPort;
@@ -36,9 +38,12 @@ public class StartConsoleProxyCommand extends AbstractStartCommand {
protected StartConsoleProxyCommand() {
}
public StartConsoleProxyCommand(int proxyCmdPort, ConsoleProxyVO proxy, String vmName, String storageHost,
public StartConsoleProxyCommand(int networkRateMbps, int networkRateMulticastMbps, int proxyCmdPort,
ConsoleProxyVO proxy, String vmName, String storageHost,
List<VolumeVO> vols, String vncPort, String urlPort, String mgmtHost, int mgmtPort, boolean sslEnabled) {
super(vmName, storageHost, vols);
super(vmName, storageHost, vols);
this.networkRateMbps = networkRateMbps;
this.networkRateMulticastMbps = networkRateMulticastMbps;
this.proxyCmdPort = proxyCmdPort;
this.proxy = proxy;
this.vncPort = vncPort;
@@ -57,7 +62,15 @@ public class StartConsoleProxyCommand extends AbstractStartCommand {
return proxy;
}
public int getProxyCmdPort() {
public int getNetworkRateMbps() {
return networkRateMbps;
}
public int getNetworkRateMulticastMbps() {
return networkRateMulticastMbps;
}
public int getProxyCmdPort() {
return proxyCmdPort;
}


@@ -28,6 +28,8 @@ import com.cloud.vm.DomainRouter.Role;
public class StartRouterCommand extends AbstractStartCommand {
DomainRouterVO router;
int networkRateMbps;
int networkRateMulticastMbps;
protected StartRouterCommand() {
super();
@@ -38,16 +40,27 @@ public class StartRouterCommand extends AbstractStartCommand {
return true;
}
public StartRouterCommand(DomainRouterVO router, String routerName, String[] storageIps, List<VolumeVO> vols, boolean mirroredVols) {
public StartRouterCommand(DomainRouterVO router, int networkRateMbps, int networkRateMulticastMbps,
String routerName, String[] storageIps, List<VolumeVO> vols, boolean mirroredVols) {
super(routerName, storageIps, vols, mirroredVols);
this.router = router;
this.networkRateMbps = networkRateMbps;
this.networkRateMulticastMbps = networkRateMulticastMbps;
}
public DomainRouter getRouter() {
return router;
}
public String getBootArgs() {
public int getNetworkRateMbps() {
return networkRateMbps;
}
public int getNetworkRateMulticastMbps() {
return networkRateMulticastMbps;
}
public String getBootArgs() {
String eth2Ip = router.getPublicIpAddress()==null?"0.0.0.0":router.getPublicIpAddress();
String basic = " eth0ip=" + router.getGuestIpAddress() + " eth0mask=" + router.getGuestNetmask() + " eth1ip="
+ router.getPrivateIpAddress() + " eth1mask=" + router.getPrivateNetmask() + " gateway=" + router.getGateway()


@@ -27,7 +27,9 @@ import com.cloud.vm.SecondaryStorageVmVO;
public class StartSecStorageVmCommand extends AbstractStartCommand {
private SecondaryStorageVmVO secStorageVm;
private SecondaryStorageVmVO secStorageVm;
int networkRateMbps;
int networkRateMulticastMbps;
private int proxyCmdPort;
private String mgmt_host;
private int mgmt_port;
@@ -36,9 +38,12 @@ public class StartSecStorageVmCommand extends AbstractStartCommand {
protected StartSecStorageVmCommand() {
}
public StartSecStorageVmCommand(int proxyCmdPort, SecondaryStorageVmVO secStorageVm, String vmName, String storageHost,
public StartSecStorageVmCommand(int networkRateMbps, int networkRateMulticastMbps, int proxyCmdPort,
SecondaryStorageVmVO secStorageVm, String vmName, String storageHost,
List<VolumeVO> vols, String mgmtHost, int mgmtPort, boolean sslCopy) {
super(vmName, storageHost, vols);
super(vmName, storageHost, vols);
this.networkRateMbps = networkRateMbps;
this.networkRateMulticastMbps = networkRateMulticastMbps;
this.proxyCmdPort = proxyCmdPort;
this.secStorageVm = secStorageVm;
@@ -56,7 +61,15 @@ public class StartSecStorageVmCommand extends AbstractStartCommand {
return secStorageVm;
}
public int getProxyCmdPort() {
public int getNetworkRateMbps() {
return networkRateMbps;
}
public int getNetworkRateMulticastMbps() {
return networkRateMulticastMbps;
}
public int getProxyCmdPort() {
return proxyCmdPort;
}


@@ -20,7 +20,7 @@ package com.cloud.agent.api;
import java.util.HashMap;
import java.util.Map;
import com.cloud.storage.Volume;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.template.TemplateInfo;
@@ -31,7 +31,7 @@ public class StartupStorageCommand extends StartupCommand {
Map<String, TemplateInfo> templateInfo;
long totalSize;
StoragePoolInfo poolInfo;
Volume.StorageResourceType resourceType;
Storage.StorageResourceType resourceType;
StoragePoolType fsType;
Map<String, String> hostDetails = new HashMap<String, String>();
String nfsShare;
@@ -91,11 +91,11 @@ public class StartupStorageCommand extends StartupCommand {
this.poolInfo = poolInfo;
}
public Volume.StorageResourceType getResourceType() {
public Storage.StorageResourceType getResourceType() {
return resourceType;
}
public void setResourceType(Volume.StorageResourceType resourceType) {
public void setResourceType(Storage.StorageResourceType resourceType) {
this.resourceType = resourceType;
}


@@ -29,16 +29,18 @@ public class IPAssocCommand extends RoutingCommand {
private String publicIp;
private boolean sourceNat;
private boolean add;
private boolean oneToOneNat;
private boolean firstIP;
private String vlanId;
private String vlanGateway;
private String vlanNetmask;
private String vifMacAddress;
private String guestIp;
protected IPAssocCommand() {
}
-public IPAssocCommand(String routerName, String privateIpAddress, String ipAddress, boolean add, boolean firstIP, boolean sourceNat, String vlanId, String vlanGateway, String vlanNetmask, String vifMacAddress) {
+public IPAssocCommand(String routerName, String privateIpAddress, String ipAddress, boolean add, boolean firstIP, boolean sourceNat, String vlanId, String vlanGateway, String vlanNetmask, String vifMacAddress, String guestIp) {
this.setRouterName(routerName);
this.routerIp = privateIpAddress;
this.publicIp = ipAddress;
@@ -49,8 +51,13 @@ public class IPAssocCommand extends RoutingCommand {
this.vlanGateway = vlanGateway;
this.vlanNetmask = vlanNetmask;
this.vifMacAddress = vifMacAddress;
this.guestIp = guestIp;
}
public String getGuestIp(){
return guestIp;
}
public String getRouterIp() {
return routerIp;
}
@@ -63,6 +70,10 @@ public class IPAssocCommand extends RoutingCommand {
return add;
}
public boolean isOneToOneNat(){
return this.oneToOneNat;
}
public boolean isFirstIP() {
return firstIP;
}
@@ -0,0 +1,52 @@
package com.cloud.agent.api.storage;
import com.cloud.storage.Storage.ImageFormat;
public class AbstractUploadCommand extends StorageCommand{
private String url;
private ImageFormat format;
private long accountId;
private String name;
protected AbstractUploadCommand() {
}
protected AbstractUploadCommand(String name, String url, ImageFormat format, long accountId) {
this.url = url;
this.format = format;
this.accountId = accountId;
this.name = name;
}
protected AbstractUploadCommand(AbstractUploadCommand that) {
this(that.name, that.url, that.format, that.accountId);
}
public String getUrl() {
return url;
}
public String getName() {
return name;
}
public ImageFormat getFormat() {
return format;
}
public long getAccountId() {
return accountId;
}
@Override
public boolean executeInSequence() {
return true;
}
public void setUrl(String url) {
this.url = url;
}
}
@@ -18,18 +18,19 @@
package com.cloud.agent.api.storage;
import com.cloud.agent.api.Command;
import com.cloud.agent.api.to.DiskCharacteristicsTO;
import com.cloud.agent.api.to.StoragePoolTO;
import com.cloud.storage.StoragePoolVO;
import com.cloud.storage.VolumeVO;
import com.cloud.vm.DiskProfile;
import com.cloud.vm.VMInstanceVO;
public class CreateCommand extends Command {
private long volId;
private StoragePoolTO pool;
-private DiskCharacteristicsTO diskCharacteristics;
+private DiskProfile diskCharacteristics;
private String templateUrl;
private long size;
private String instanceName;
protected CreateCommand() {
super();
@@ -44,7 +45,7 @@ public class CreateCommand extends Command {
* @param templateUrl
* @param pool
*/
-public CreateCommand(VolumeVO vol, VMInstanceVO vm, DiskCharacteristicsTO diskCharacteristics, String templateUrl, StoragePoolVO pool) {
+public CreateCommand(VolumeVO vol, VMInstanceVO vm, DiskProfile diskCharacteristics, String templateUrl, StoragePoolVO pool) {
this(vol, vm, diskCharacteristics, pool, 0);
this.templateUrl = templateUrl;
}
@@ -57,12 +58,13 @@ public class CreateCommand extends Command {
* @param diskCharacteristics
* @param pool
*/
-public CreateCommand(VolumeVO vol, VMInstanceVO vm, DiskCharacteristicsTO diskCharacteristics, StoragePoolVO pool, long size) {
+public CreateCommand(VolumeVO vol, VMInstanceVO vm, DiskProfile diskCharacteristics, StoragePoolVO pool, long size) {
this.volId = vol.getId();
this.diskCharacteristics = diskCharacteristics;
this.pool = new StoragePoolTO(pool);
this.templateUrl = null;
this.size = size;
//this.instanceName = vm.getInstanceName();
}
@Override
@@ -78,7 +80,7 @@ public class CreateCommand extends Command {
return pool;
}
-public DiskCharacteristicsTO getDiskCharacteristics() {
+public DiskProfile getDiskCharacteristics() {
return diskCharacteristics;
}
@@ -89,4 +91,8 @@ public class CreateCommand extends Command {
public long getSize(){
return this.size;
}
public String getInstanceName() {
return instanceName;
}
}
@@ -28,6 +28,16 @@ public class PrimaryStorageDownloadCommand extends AbstractDownloadCommand {
String localPath;
String poolUuid;
long poolId;
//
// Temporary hacking to make vmware work quickly, expose NFS raw information to allow
// agent do quick copy over NFS.
//
// provide storage URL (it contains all information to help agent resource to mount the
// storage if needed, example of such URL may be as following
// nfs://192.168.10.231/export/home/kelven/vmware-test/secondary
String secondaryStorageUrl;
String primaryStorageUrl;
protected PrimaryStorageDownloadCommand() {
}
@@ -54,6 +64,22 @@ public class PrimaryStorageDownloadCommand extends AbstractDownloadCommand {
return localPath;
}
public void setSecondaryStorageUrl(String url) {
secondaryStorageUrl = url;
}
public String getSecondaryStorageUrl() {
return secondaryStorageUrl;
}
public void setPrimaryStorageUrl(String url) {
primaryStorageUrl = url;
}
public String getPrimaryStorageUrl() {
return primaryStorageUrl;
}
@Override
public boolean executeInSequence() {
return true;
@@ -0,0 +1,105 @@
package com.cloud.agent.api.storage;
import java.io.File;
import com.cloud.agent.api.Answer;
import com.cloud.agent.api.Command;
import com.cloud.storage.UploadVO;
public class UploadAnswer extends Answer {
private String jobId;
private int uploadPct;
private String errorString;
private UploadVO.Status uploadStatus;
private String uploadPath;
private String installPath;
public Long templateSize = 0L;
public int getUploadPct() {
return uploadPct;
}
public String getErrorString() {
return errorString;
}
public String getUploadStatusString() {
return uploadStatus.toString();
}
public UploadVO.Status getUploadStatus() {
return uploadStatus;
}
public String getUploadPath() {
return uploadPath;
}
protected UploadAnswer() {
}
public void setErrorString(String errorString) {
this.errorString = errorString;
}
public String getJobId() {
return jobId;
}
public void setJobId(String jobId) {
this.jobId = jobId;
}
public UploadAnswer(String jobId, int uploadPct, String errorString,
UploadVO.Status uploadStatus, String fileSystemPath, String installPath, long templateSize) {
super();
this.jobId = jobId;
this.uploadPct = uploadPct;
this.errorString = errorString;
this.uploadStatus = uploadStatus;
this.uploadPath = fileSystemPath;
this.installPath = fixPath(installPath);
this.templateSize = templateSize;
}
public UploadAnswer(String jobId, int uploadPct, Command command,
UploadVO.Status uploadStatus, String fileSystemPath, String installPath) {
super(command);
this.jobId = jobId;
this.uploadPct = uploadPct;
this.uploadStatus = uploadStatus;
this.uploadPath = fileSystemPath;
this.installPath = installPath;
}
private static String fixPath(String path){
if (path == null)
return path;
if (path.startsWith(File.separator)) {
path=path.substring(File.separator.length());
}
if (path.endsWith(File.separator)) {
path=path.substring(0, path.length()-File.separator.length());
}
return path;
}
public void setUploadStatus(UploadVO.Status uploadStatus) {
this.uploadStatus = uploadStatus;
}
public String getInstallPath() {
return installPath;
}
public void setInstallPath(String installPath) {
this.installPath = fixPath(installPath);
}
public void setTemplateSize(long templateSize) {
this.templateSize = templateSize;
}
public Long getTemplateSize() {
return templateSize;
}
}
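The `fixPath` helper above trims a leading and trailing `File.separator` from the install path before it is stored. A standalone re-statement of that behavior (simplified; the sample path is made up, and the expected output assumes a POSIX `/` separator):

```java
import java.io.File;

// Standalone re-statement of UploadAnswer.fixPath from the diff above:
// strip one leading and one trailing File.separator, pass null through.
public class Main {
    static String fixPath(String path) {
        if (path == null) {
            return null;
        }
        if (path.startsWith(File.separator)) {
            path = path.substring(File.separator.length());
        }
        if (path.endsWith(File.separator)) {
            path = path.substring(0, path.length() - File.separator.length());
        }
        return path;
    }

    public static void main(String[] args) {
        // Hypothetical install path; on POSIX, File.separator is "/".
        System.out.println(fixPath("/template/tmpl/2/200/"));
    }
}
```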
@@ -0,0 +1,126 @@
package com.cloud.agent.api.storage;
import com.cloud.storage.VMTemplateHostVO;
import com.cloud.storage.Upload.Type;
import com.cloud.storage.VMTemplateVO;
import com.cloud.agent.api.storage.AbstractUploadCommand;
import com.cloud.agent.api.storage.DownloadCommand.PasswordAuth;
public class UploadCommand extends AbstractUploadCommand {
private VMTemplateVO template;
private String url;
private String installPath;
private boolean hvm;
private String description;
private String checksum;
private PasswordAuth auth;
private long templateSizeInBytes;
private long id;
private Type type;
public UploadCommand(VMTemplateVO template, String url, VMTemplateHostVO vmTemplateHost) {
this.template = template;
this.url = url;
this.installPath = vmTemplateHost.getInstallPath();
this.checksum = template.getChecksum();
this.id = template.getId();
this.templateSizeInBytes = vmTemplateHost.getSize();
}
public UploadCommand(String url, long id, long sizeInBytes, String installPath, Type type){
this.template = null;
this.url = url;
this.installPath = installPath;
this.id = id;
this.type = type;
this.templateSizeInBytes = sizeInBytes;
}
protected UploadCommand() {
}
public UploadCommand(UploadCommand that) {
this.template = that.template;
this.url = that.url;
this.installPath = that.installPath;
this.checksum = that.getChecksum();
this.id = that.id;
}
public String getDescription() {
return description;
}
public VMTemplateVO getTemplate() {
return template;
}
public void setTemplate(VMTemplateVO template) {
this.template = template;
}
public String getUrl() {
return url;
}
public void setUrl(String url) {
this.url = url;
}
public boolean isHvm() {
return hvm;
}
public void setHvm(boolean hvm) {
this.hvm = hvm;
}
public PasswordAuth getAuth() {
return auth;
}
public void setAuth(PasswordAuth auth) {
this.auth = auth;
}
public Long getTemplateSizeInBytes() {
return templateSizeInBytes;
}
public void setTemplateSizeInBytes(Long templateSizeInBytes) {
this.templateSizeInBytes = templateSizeInBytes;
}
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public void setInstallPath(String installPath) {
this.installPath = installPath;
}
public void setDescription(String description) {
this.description = description;
}
public void setChecksum(String checksum) {
this.checksum = checksum;
}
public String getInstallPath() {
return installPath;
}
public String getChecksum() {
return checksum;
}
}
@@ -0,0 +1,32 @@
package com.cloud.agent.api.storage;
public class UploadProgressCommand extends UploadCommand {
public static enum RequestType {GET_STATUS, ABORT, RESTART, PURGE, GET_OR_RESTART}
private String jobId;
private RequestType request;
protected UploadProgressCommand() {
super();
}
public UploadProgressCommand(UploadCommand cmd, String jobId, RequestType req) {
super(cmd);
this.jobId = jobId;
this.setRequest(req);
}
public String getJobId() {
return jobId;
}
public void setRequest(RequestType request) {
this.request = request;
}
public RequestType getRequest() {
return request;
}
}
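`UploadProgressCommand` wraps an `UploadCommand` with one of five request types. A hedged sketch of how a caller might pick a request (the enum values mirror the diff above; the chooser logic itself is hypothetical, not CloudStack code):

```java
// The enum mirrors UploadProgressCommand.RequestType in the diff above;
// nextRequest is a hypothetical illustration of choosing a poll strategy.
public class Main {
    enum RequestType { GET_STATUS, ABORT, RESTART, PURGE, GET_OR_RESTART }

    static RequestType nextRequest(boolean jobIdKnown) {
        // Poll a known job; otherwise ask the agent to get-or-restart in one round trip.
        return jobIdKnown ? RequestType.GET_STATUS : RequestType.GET_OR_RESTART;
    }

    public static void main(String[] args) {
        System.out.println(nextRequest(true));
        System.out.println(nextRequest(false));
    }
}
```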
@@ -1,79 +0,0 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.api.to;
import com.cloud.storage.DiskOfferingVO;
import com.cloud.storage.Volume;
public class DiskCharacteristicsTO {
private long size;
private String[] tags;
private Volume.VolumeType type;
private String name;
private boolean useLocalStorage;
private boolean recreatable;
protected DiskCharacteristicsTO() {
}
public DiskCharacteristicsTO(Volume.VolumeType type, String name, long size, String[] tags, boolean useLocalStorage, boolean recreatable) {
this.type = type;
this.name = name;
this.size = size;
this.tags = tags;
this.useLocalStorage = useLocalStorage;
this.recreatable = recreatable;
}
public DiskCharacteristicsTO(Volume.VolumeType type, String name, DiskOfferingVO offering, long size) {
this(type, name, size, offering.getTagsArray(), offering.getUseLocalStorage(), offering.isRecreatable());
}
public DiskCharacteristicsTO(Volume.VolumeType type, String name, DiskOfferingVO offering) {
this(type, name, offering.getDiskSizeInBytes(), offering.getTagsArray(), offering.getUseLocalStorage(), offering.isRecreatable());
}
public long getSize() {
return size;
}
public String getName() {
return name;
}
public String[] getTags() {
return tags;
}
public Volume.VolumeType getType() {
return type;
}
public boolean useLocalStorage() {
return useLocalStorage;
}
public boolean isRecreatable() {
return recreatable;
}
@Override
public String toString() {
return new StringBuilder("DskChr[").append(type).append("|").append(size).append("|").append("]").toString();
}
}
@@ -17,21 +17,79 @@
*/
package com.cloud.agent.api.to;
import com.cloud.network.Network.BroadcastDomainType;
import com.cloud.network.Network.TrafficType;
/**
* Transfer object to transfer network settings.
*/
public class NetworkTO {
private String uuid;
private String ip;
private String netmask;
private String gateway;
private String mac;
private String dns1;
private String dns2;
-private String vlan;
+private Long vlan;
private BroadcastDomainType broadcastType;
private TrafficType type;
-protected NetworkTO() {
+public NetworkTO() {
}
public String getUuid() {
return uuid;
}
public void setUuid(String uuid) {
this.uuid = uuid;
}
public Long getVlan() {
return vlan;
}
public void setVlan(Long vlan) {
this.vlan = vlan;
}
public BroadcastDomainType getBroadcastType() {
return broadcastType;
}
public void setBroadcastType(BroadcastDomainType broadcastType) {
this.broadcastType = broadcastType;
}
public void setIp(String ip) {
this.ip = ip;
}
public void setNetmask(String netmask) {
this.netmask = netmask;
}
public void setGateway(String gateway) {
this.gateway = gateway;
}
public void setMac(String mac) {
this.mac = mac;
}
public void setDns1(String dns1) {
this.dns1 = dns1;
}
public void setDns2(String dns2) {
this.dns2 = dns2;
}
public void setType(TrafficType type) {
this.type = type;
}
/**
* This constructor is usually for hosts where the other information are not important.
*
@@ -55,7 +113,7 @@ public class NetworkTO {
* @param dns1
* @param dns2
*/
-public NetworkTO(String ip, String vlan, String netmask, String mac, String gateway, String dns1, String dns2) {
+public NetworkTO(String ip, Long vlan, String netmask, String mac, String gateway, String dns1, String dns2) {
this.ip = ip;
this.netmask = netmask;
this.mac = mac;
@@ -88,4 +146,8 @@ public class NetworkTO {
public String getDns2() {
return dns2;
}
public TrafficType getType() {
return type;
}
}
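The `vlan` field changes from `String` to `Long` in this file, so callers now parse tag strings up front. A reduced stand-in showing only the new accessor shape (class body trimmed to the vlan pieces; the tag value is hypothetical):

```java
// Reduced stand-in for NetworkTO after this change: vlan is a Long, not a String.
public class Main {
    static class Net {
        private Long vlan;
        public Long getVlan() { return vlan; }
        public void setVlan(Long vlan) { this.vlan = vlan; }
    }

    public static void main(String[] args) {
        Net n = new Net();
        // Callers parse the VLAN tag once, at the boundary (value is made up).
        n.setVlan(Long.parseLong("120"));
        System.out.println(n.getVlan());
    }
}
```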
@@ -0,0 +1,36 @@
/**
*
*/
package com.cloud.agent.api.to;
public class NicTO extends NetworkTO {
int deviceId;
Integer controlPort;
Integer networkRateMbps;
Integer networkRateMulticastMbps;
public NicTO() {
super();
controlPort = null;
}
public void setDeviceId(int deviceId) {
this.deviceId = deviceId;
}
public int getDeviceId() {
return deviceId;
}
public Integer getControlPort() {
return controlPort;
}
public Integer getNetworkRateMbps() {
return networkRateMbps;
}
public Integer getNetworkRateMulticastMbps() {
return networkRateMulticastMbps;
}
}
@@ -0,0 +1,173 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.api.to;
import java.util.Map;
import com.cloud.template.VirtualMachineTemplate.BootloaderType;
import com.cloud.vm.VirtualMachine.Type;
public class VirtualMachineTO {
private long id;
private String name;
private BootloaderType bootloader;
Type type;
int cpus;
Integer weight;
Integer utilization;
long minRam;
long maxRam;
String hostName;
String arch;
String os;
String bootArgs;
String[] bootupScripts;
boolean rebootOnCrash;
VolumeTO[] disks;
NicTO[] nics;
public VirtualMachineTO() {
}
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Type getType() {
return type;
}
public BootloaderType getBootloader() {
return bootloader;
}
public void setBootloader(BootloaderType bootloader) {
this.bootloader = bootloader;
}
public int getCpus() {
return cpus;
}
public void setCpus(int cpus) {
this.cpus = cpus;
}
public Integer getWeight() {
return weight;
}
public void setWeight(Integer weight) {
this.weight = weight;
}
public Integer getUtilization() {
return utilization;
}
public void setUtiliziation(Integer utilization) {
this.utilization = utilization;
}
public long getMinRam() {
return minRam;
}
public void setRam(long minRam, long maxRam) {
this.minRam = minRam;
this.maxRam = maxRam;
}
public long getMaxRam() {
return maxRam;
}
public String getHostName() {
return hostName;
}
public void setHostName(String hostName) {
this.hostName = hostName;
}
public String getArch() {
return arch;
}
public void setArch(String arch) {
this.arch = arch;
}
public String getOs() {
return os;
}
public void setOs(String os) {
this.os = os;
}
public String getBootArgs() {
return bootArgs;
}
public void setBootArgs(Map<String, String> bootParams) {
StringBuilder buf = new StringBuilder();
for (Map.Entry<String, String> entry : bootParams.entrySet()) {
buf.append(" ").append(entry.getKey()).append("=").append(entry.getValue());
}
bootArgs = buf.toString();
}
public String[] getBootupScripts() {
return bootupScripts;
}
public void setBootupScripts(String[] bootupScripts) {
this.bootupScripts = bootupScripts;
}
public VolumeTO[] getDisks() {
return disks;
}
public void setDisks(VolumeTO[] disks) {
this.disks = disks;
}
public NicTO[] getNetworks() {
return nics;
}
public void setNics(NicTO[] nics) {
this.nics = nics;
}
}
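`setBootArgs` in `VirtualMachineTO` flattens the boot-parameter map into a single string of space-prefixed `key=value` pairs. A minimal sketch of that concatenation (the parameter names and values are made up for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Reduced sketch of VirtualMachineTO.setBootArgs from the diff above:
// each map entry becomes " key=value" appended onto one string.
public class Main {
    static String toBootArgs(Map<String, String> bootParams) {
        StringBuilder buf = new StringBuilder();
        for (Map.Entry<String, String> entry : bootParams.entrySet()) {
            buf.append(" ").append(entry.getKey()).append("=").append(entry.getValue());
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps insertion order, so the output is deterministic.
        Map<String, String> params = new LinkedHashMap<>();
        params.put("eth0ip", "10.1.1.2");   // hypothetical parameter names/values
        params.put("gateway", "10.1.1.1");
        System.out.println(toBootArgs(params));
    }
}
```

Note the result keeps a leading space before the first pair, exactly as the loop in the diff produces it.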
@@ -1,39 +0,0 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
* It is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/
package com.cloud.agent.api.to;
import com.cloud.vm.VMInstanceVO;
public class VmTO {
private long id;
private String name;
NetworkTO[] networks;
public VmTO() {
}
// FIXME: Preferrably NetworkTO is constructed inside the TO objects.
// But we're press for time so I'm going to let the conversion
// happen outside.
public VmTO(VMInstanceVO instance, NetworkTO[] networks) {
id = instance.getId();
name = instance.getName();
this.networks = networks;
}
}
@@ -17,12 +17,12 @@
*/
package com.cloud.agent.api.to;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.StoragePoolType;
import com.cloud.storage.StoragePoolVO;
import com.cloud.storage.VMTemplateStoragePoolVO;
import com.cloud.storage.Volume;
import com.cloud.storage.VolumeVO;
-import com.cloud.storage.Storage.StoragePoolType;
-import com.cloud.storage.Volume.StorageResourceType;
public class VolumeTO {
@@ -35,11 +35,12 @@ public class VolumeTO {
private String path;
private long size;
private Volume.VolumeType type;
-private Volume.StorageResourceType resourceType;
+private Storage.StorageResourceType resourceType;
private StoragePoolType storagePoolType;
private long poolId;
private int deviceId;
-public VolumeTO(long id, Volume.VolumeType type, Volume.StorageResourceType resourceType, StoragePoolType poolType, String name, String mountPoint, String path, long size) {
+public VolumeTO(long id, Volume.VolumeType type, Storage.StorageResourceType resourceType, StoragePoolType poolType, String name, String mountPoint, String path, long size) {
this.id = id;
this.name= name;
this.path = path;
@@ -65,12 +66,16 @@
this.id = templatePoolRef.getId();
this.path = templatePoolRef.getInstallPath();
this.size = templatePoolRef.getTemplateSize();
-this.resourceType = StorageResourceType.STORAGE_POOL;
+this.resourceType = Storage.StorageResourceType.STORAGE_POOL;
this.storagePoolType = pool.getPoolType();
this.mountPoint = pool.getPath();
}
public int getDeviceId() {
return deviceId;
}
-public Volume.StorageResourceType getResourceType() {
+public Storage.StorageResourceType getResourceType() {
return resourceType;
}
@@ -40,7 +40,7 @@ public class AlertDaoImpl extends GenericDaoBase<AlertVO, Long> implements Alert
sc.addAnd("podId", SearchCriteria.Op.EQ, podId);
}
-List<AlertVO> alerts = listActiveBy(sc, searchFilter);
+List<AlertVO> alerts = listBy(sc, searchFilter);
if ((alerts != null) && !alerts.isEmpty()) {
return alerts.get(0);
}
@@ -61,7 +61,7 @@ public class AsyncJobDaoImpl extends GenericDaoBase<AsyncJobVO, Long> implements
sc.setParameters("instanceId", instanceId);
sc.setParameters("status", AsyncJobResult.STATUS_IN_PROGRESS);
-List<AsyncJobVO> l = listBy(sc);
+List<AsyncJobVO> l = listIncludingRemovedBy(sc);
if(l != null && l.size() > 0) {
if(l.size() > 1) {
s_logger.warn("Instance " + instanceType + "-" + instanceId + " has multiple pending async-job");
@@ -76,6 +76,6 @@ public class AsyncJobDaoImpl extends GenericDaoBase<AsyncJobVO, Long> implements
SearchCriteria<AsyncJobVO> sc = expiringAsyncJobSearch.create();
sc.setParameters("created", cutTime);
Filter filter = new Filter(AsyncJobVO.class, "created", true, 0L, (long)limit);
-return listBy(sc, filter);
+return listIncludingRemovedBy(sc, filter);
}
}
@@ -66,7 +66,7 @@ public class SyncQueueDaoImpl extends GenericDaoBase<SyncQueueVO, Long> implemen
SearchCriteria<SyncQueueVO> sc = TypeIdSearch.create();
sc.setParameters("syncObjType", syncObjType);
sc.setParameters("syncObjId", syncObjId);
-return findOneActiveBy(sc);
+return findOneBy(sc);
}
protected SyncQueueDaoImpl() {
@@ -53,7 +53,7 @@ public class SyncQueueItemDaoImpl extends GenericDaoBase<SyncQueueItemVO, Long>
sc.setParameters("queueId", queueId);
Filter filter = new Filter(SyncQueueItemVO.class, "created", true, 0L, 1L);
-List<SyncQueueItemVO> l = listActiveBy(sc, filter);
+List<SyncQueueItemVO> l = listBy(sc, filter);
if(l != null && l.size() > 0)
return l.get(0);
@@ -105,6 +105,6 @@ public class SyncQueueItemDaoImpl extends GenericDaoBase<SyncQueueItemVO, Long>
sc.setParameters("lastProcessMsid", msid);
Filter filter = new Filter(SyncQueueItemVO.class, "created", true, 0L, 1L);
-return listActiveBy(sc, filter);
+return listBy(sc, filter);
}
}
@@ -22,7 +22,6 @@ import com.cloud.capacity.CapacityVO;
import com.cloud.utils.db.GenericDao;
public interface CapacityDao extends GenericDao<CapacityVO, Long> {
-void setUsedStorage(Long hostId, long totalUsed);
void clearNonStorageCapacities();
void clearStorageCapacities();
}
@@ -34,9 +34,8 @@ public class CapacityDaoImpl extends GenericDaoBase<CapacityVO, Long> implements
private static final String ADD_ALLOCATED_SQL = "UPDATE `cloud`.`op_host_capacity` SET used_capacity = used_capacity + ? WHERE host_id = ? AND capacity_type = ?";
private static final String SUBTRACT_ALLOCATED_SQL = "UPDATE `cloud`.`op_host_capacity` SET used_capacity = used_capacity - ? WHERE host_id = ? AND capacity_type = ?";
-private static final String SET_USED_STORAGE_SQL = "UPDATE `cloud`.`op_host_capacity` SET used_capacity = ? WHERE host_id = ? AND capacity_type = 2";
-private static final String CLEAR_STORAGE_CAPACITIES = "DELETE FROM `cloud`.`op_host_capacity` WHERE capacity_type=2 OR capacity_type=6"; //clear storage and secondary_storage capacities
-private static final String CLEAR_NON_STORAGE_CAPACITIES = "DELETE FROM `cloud`.`op_host_capacity` WHERE capacity_type<>2 AND capacity_type <>6"; //clear non-storage and non-secondary_storage capacities
+private static final String CLEAR_STORAGE_CAPACITIES = "DELETE FROM `cloud`.`op_host_capacity` WHERE capacity_type=2 OR capacity_type=3 OR capacity_type=6"; //clear storage and secondary_storage capacities
+private static final String CLEAR_NON_STORAGE_CAPACITIES = "DELETE FROM `cloud`.`op_host_capacity` WHERE capacity_type<>2 AND capacity_type<>3 AND capacity_type<>6"; //clear non-storage and non-secondary_storage capacities
public void updateAllocated(Long hostId, long allocatedAmount, short capacityType, boolean add) {
Transaction txn = Transaction.currentTxn();
@@ -61,22 +60,6 @@ public class CapacityDaoImpl extends GenericDaoBase<CapacityVO, Long> implements
}
}
-public void setUsedStorage(Long hostId, long totalUsed) {
-Transaction txn = Transaction.currentTxn();
-PreparedStatement pstmt = null;
-try {
-txn.start();
-String sql = SET_USED_STORAGE_SQL;
-pstmt = txn.prepareAutoCloseStatement(sql);
-pstmt.setLong(1, totalUsed);
-pstmt.setLong(2, hostId);
-pstmt.executeUpdate(); // TODO: Make sure exactly 1 row was updated?
-txn.commit();
-} catch (Exception e) {
-txn.rollback();
-s_logger.warn("Exception setting used storage for host: " + hostId, e);
-}
-}
@Override
public void clearNonStorageCapacities() {
@@ -44,7 +44,7 @@ public class ManagementServerHostDaoImpl extends GenericDaoBase<ManagementServer
SearchCriteria<ManagementServerHostVO> sc = MsIdSearch.create();
sc.setParameters("msid", msid);
-List<ManagementServerHostVO> l = listBy(sc);
+List<ManagementServerHostVO> l = listIncludingRemovedBy(sc);
if(l != null && l.size() > 0)
return l.get(0);
@@ -101,7 +101,7 @@ public class ManagementServerHostDaoImpl extends GenericDaoBase<ManagementServer
SearchCriteria<ManagementServerHostVO> sc = activeSearch.create();
sc.setParameters("lastUpdateTime", cutTime);
-return listBy(sc);
+return listIncludingRemovedBy(sc);
}
public void increaseAlertCount(long id) {
@@ -61,7 +61,7 @@ public class ConfigurationDaoImpl extends GenericDaoBase<ConfigurationVO, String
SearchCriteria<ConfigurationVO> sc = InstanceSearch.create();
sc.setParameters("instance", "DEFAULT");
-List<ConfigurationVO> configurations = listBy(sc);
+List<ConfigurationVO> configurations = listIncludingRemovedBy(sc);
for (ConfigurationVO config : configurations) {
if (config.getValue() != null)
@@ -71,7 +71,7 @@ public class ConfigurationDaoImpl extends GenericDaoBase<ConfigurationVO, String
sc = InstanceSearch.create();
sc.setParameters("instance", instance);
-configurations = listBy(sc);
+configurations = listIncludingRemovedBy(sc);
for (ConfigurationVO config : configurations) {
if (config.getValue() != null)
@@ -126,7 +126,7 @@ public class ConfigurationDaoImpl extends GenericDaoBase<ConfigurationVO, String
public String getValue(String name) {
SearchCriteria<ConfigurationVO> sc = NameSearch.create();
sc.setParameters("name", name);
-List<ConfigurationVO> configurations = listBy(sc);
+List<ConfigurationVO> configurations = listIncludingRemovedBy(sc);
if (configurations.size() == 0) {
return null;
@@ -52,7 +52,7 @@ public class ResourceCountDaoImpl extends GenericDaoBase<ResourceCountVO, Long>
sc.setParameters("accountId", accountId);
sc.setParameters("type", type);
-return findOneBy(sc);
+return findOneIncludingRemovedBy(sc);
}
private ResourceCountVO findByDomainIdAndType(long domainId, ResourceType type) {
@@ -64,7 +64,7 @@ public class ResourceCountDaoImpl extends GenericDaoBase<ResourceCountVO, Long>
sc.setParameters("domainId", domainId);
sc.setParameters("type", type);
-return findOneBy(sc);
+return findOneIncludingRemovedBy(sc);
}
@Override
@@ -48,7 +48,7 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
sc.setParameters("domainId", domainId);
sc.setParameters("type", type);
-return findOneBy(sc);
+return findOneIncludingRemovedBy(sc);
}
public List<ResourceLimitVO> listByDomainId(Long domainId) {
@@ -58,7 +58,7 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
SearchCriteria<ResourceLimitVO> sc = IdTypeSearch.create();
sc.setParameters("domainId", domainId);
-return listBy(sc);
+return listIncludingRemovedBy(sc);
}
public ResourceLimitVO findByAccountIdAndType(Long accountId, ResourceCount.ResourceType type) {
@@ -69,7 +69,7 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
sc.setParameters("accountId", accountId);
sc.setParameters("type", type);
-return findOneBy(sc);
+return findOneIncludingRemovedBy(sc);
}
public List<ResourceLimitVO> listByAccountId(Long accountId) {
@@ -79,7 +79,7 @@ public class ResourceLimitDaoImpl extends GenericDaoBase<ResourceLimitVO, Long>
SearchCriteria<ResourceLimitVO> sc = IdTypeSearch.create();
sc.setParameters("accountId", accountId);
-return listBy(sc);
+return listIncludingRemovedBy(sc);
}
public boolean update(Long id, Long max) {
@@ -25,9 +25,11 @@ import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import com.cloud.user.OwnedBy;
@Entity
@Table(name="account_vlan_map")
-public class AccountVlanMapVO {
+public class AccountVlanMapVO implements OwnedBy {
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
@@ -52,7 +54,8 @@ public class AccountVlanMapVO {
public Long getId() {
return id;
}
@Override
public long getAccountId() {
return accountId;
}
@@ -24,11 +24,12 @@ import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;
import com.cloud.org.Cluster;
import com.cloud.utils.NumbersUtil;
@Entity
@Table(name="cluster")
-public class ClusterVO {
+public class ClusterVO implements Cluster {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)

Some files were not shown because too many files have changed in this diff.