Merge branch 'master' of ssh://git.cloud.com/var/lib/git/cloudstack-oss

This commit is contained in:
nit 2010-09-08 15:40:17 +05:30
commit 7a54cf8f7c
377 changed files with 19204 additions and 4964 deletions

HACKING
---------------------------------------------------------------------
THE QUICK GUIDE TO CLOUDSTACK DEVELOPMENT
---------------------------------------------------------------------
=== Overview of the development lifecycle ===
To hack on a CloudStack component, you will generally:
1. Configure the source code:
./waf configure --prefix=/home/youruser/cloudstack
(see below, "./waf configure")
2. Build and install the CloudStack
./waf install
(see below, "./waf install")
3. Set the CloudStack component up
(see below, "Running the CloudStack components from source")
4. Run the CloudStack component
(see below, "Running the CloudStack components from source")
5. Modify the source code
6. Build and install the CloudStack again
./waf install --preserve-config
(see below, "./waf install")
7. GOTO 4
=== What is this waf thing in my development lifecycle? ===
waf is a self-contained, advanced build system written by Thomas Nagy,
in the spirit of SCons or the GNU autotools suite.
* To run waf on Linux / Mac: ./waf [...commands...]
* To run waf on Windows: waf.bat [...commands...]
./waf --help should be your first discovery point to find out both the
configure-time options and the different processes that you can run
using waf.
=== What do the different waf commands above do? ===
1. ./waf configure --prefix=/some/path
You run this command *once*, in preparation for building, or every
time you need to change a configure-time variable.
This runs configure() in wscript, which takes care of setting the
variables and options that waf will use for compilation and
installation, including the installation directory (PREFIX).
For convenience, if you forget to run configure, waf
proceeds with default configuration options. By default,
PREFIX is /usr/local, but you can set it e.g. to
/home/youruser/cloudstack if you plan to do a non-root
install. Beware that while you can install the stack as a
regular user, most components need to *run* as root.
./waf showconfig displays the values of the configure-time options
2. ./waf
You run this command to trigger compilation of the modified files.
This runs the contents of wscript_build, which takes care of
discovering and describing what needs to be built, which
build products / sources need to be installed, and where.
3. ./waf install
You run this command when you want to install the CloudStack.
If you are installing for production, run this process
as root. If, conversely, you only want to install the
stack as your own user, in a directory to which you have write
permission, it is fine to run waf install as your own user.
This runs the contents of wscript_build, with an option variable
Options.is_install = True. When this variable is set, waf will
install the files described in wscript_build. For convenience,
when you run install, any files that need to be recompiled
are recompiled prior to installation.
--------------------
WARNING: each time you do ./waf install, the configuration files
in the installation directory are *overwritten*.
There are, however, two ways to get around this:
a) ./waf install has an option --preserve-config. If you pass
this option when installing, configuration files are never
overwritten.
This option is useful when you have modified source files and
you need to deploy them on a system that already has the
CloudStack installed and configured, but you do *not* want to
overwrite the existing configuration of the CloudStack.
If, however, you have reconfigured and rebuilt the source
since the last time you did ./waf install, then you are
advised to replace the configuration files and set the
components up again, because some configuration files
in the source use identifiers that may have changed during
the last ./waf configure. So, if this is your case, check
out the next way:
b) Every configuration file can be overridden in the source
without touching the original.
- Look for said config file X (or X.in) in the source, then
- create an override/ folder in the folder that contains X, then
- place a file named X (or X.in) inside override/, then
- put the desired contents inside X (or X.in)
Now, every time you run ./waf install, the file that gets
installed is path/to/override/X.in instead of path/to/X.in.
This option is useful if you are developing the CloudStack
and constantly reinstalling it. It guarantees that every
time you install the CloudStack, the installation will have
the correct configuration and will be ready to run.
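The override layout can be sketched as follows, using an illustrative file name and token (neither is taken from the real source tree); run this in a scratch directory, not the actual checkout:

```shell
# Create an illustrative source file with a substitution token
# (file name and token are placeholders):
mkdir -p client/tomcatconf/override
printf 'db.host=@DBHOST@\n' > client/tomcatconf/db.properties.in
# Place the override copy; this is the file ./waf install would
# pick up instead of the original:
printf 'db.host=devbox.example.com\n' \
    > client/tomcatconf/override/db.properties.in
```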
=== Running the CloudStack components from source (for debugging / coding) ===
It is not technically possible to run the CloudStack components from
the source. That, however, is fine -- each component can be run
independently from the install directory:
- Management Server
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set up the management server database:
- either run ./waf deploydb_kvm, or
- run $BINDIR/cloud-setup-databases
3) Execute ./waf run as your current user (or as root if the
installation path is only writable by root). Alternatively,
you can use ./waf debug and this will run with debugging enabled.
- Agent (Linux-only):
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Agent up:
- run $BINDIR/cloud-setup-agent
3) Execute ./waf run_agent as root
This will launch sudo and require your root password unless you have
configured sudo not to ask for it.
- Console Proxy (Linux-only):
1) Execute ./waf install as your current user (or as root if the
installation path is only writable by root).
WARNING: if any CloudStack configuration files have been
already configured / altered, they will be *overwritten* by this
process. Append --preserve-config to ./waf install to prevent this
from happening. Or resort to the override method discussed
above (search for "override" in this document).
2) If you haven't done so yet, set the Console Proxy up:
- run $BINDIR/cloud-setup-console-proxy
3) Execute ./waf run_console_proxy
This will launch sudo and require your root password unless you have
configured sudo not to ask for it.
---------------------------------------------------------------------
BUILD SYSTEM TIPS
---------------------------------------------------------------------
=== Integrating compilation and execution of each component into Eclipse ===
To run the Management Server from Eclipse, set up an External Tool of the
Program variety. Put the path to the waf binary in the Location field
of the window, and the source directory as the Working Directory. Then specify
"install --preserve-config run" as arguments (without the quotes). You can
now use the Run button in Eclipse to execute the Management Server directly
from Eclipse. You can replace run with debug if you want to run the
Management Server with the Debugging Proxy turned on.
To run the Agent or Console Proxy from Eclipse, set up an External Tool of
the Program variety just like in the Management Server case. In there,
however, specify "install --preserve-config run_agent" or
"install --preserve-config run_console_proxy" as arguments instead.
Remember that you need to set sudo up to not ask you for a password and not
require a TTY, otherwise sudo -- implicitly called by waf run_agent or
waf run_console_proxy -- will refuse to work.
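One way to set sudo up like that is an entry along the following lines, added via visudo (the user name is a placeholder for your own; adjust to your site's sudo policy):

```
youruser ALL=(ALL) NOPASSWD: ALL
Defaults:youruser !requiretty
```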
=== Building targets selectively ===
You can find out the targets of the build system:
./waf list_targets
If you want to run a specific task generator,
./waf build --targets=patchsubst
should run just that one (and whatever targets are required to build that
one, of course).
=== Common targets ===
* ./waf configure: you must always run configure once, and provide it with
the target installation paths for when you run install later
o --help: will show you all the configure options
o --no-dep-check: will skip dependency checks for java packages
needed to compile (saves 20 seconds when redoing the configure)
o --with-db-user, --with-db-pw, --with-db-host: informs the build
system of the MySQL configuration needed to set up the management
server upon install, and to do deploydb
* ./waf build: will compile any source files (and, on some projects, will
also perform any variable substitutions on any .in files such as the
MANIFEST files). Build outputs will be in <projectdir>/artifacts/default.
* ./waf install: will compile if not compiled yet, then execute an install
of the built targets. (A fair amount of custom code, a few dozen lines,
was needed to make install work.)
* ./waf run: will run the management server in the foreground
* ./waf debug: will run the management server in the foreground, and open
port 8787 to connect with the debugger (see the Run / debug options of
waf --help to change that port)
* ./waf deploydb: deploys the database using the MySQL configuration supplied
with the configuration options when you did ./waf configure. RUN WAF BUILD
FIRST AT LEAST ONCE.
* ./waf dist: create a source tarball. These tarballs will be distributed
independently on our Web site, and will form the source release of the
CloudStack. It is a self-contained release that can be ./waf built and
./waf installed anywhere.
* ./waf clean: remove known build products
* ./waf distclean: remove the artifacts/ directory altogether
* ./waf uninstall: uninstall all installed files
* ./waf rpm: build RPM packages
o if the build fails because the system lacks dependencies from our
other modules, waf will attempt to install RPMs from the repos,
then try the build
o it will place the built packages in artifacts/rpmbuild/
* ./waf deb: build Debian packages
o if the build fails because the system lacks dependencies from our
other modules, waf will attempt to install DEBs from the repos,
then try the build
o it will place the built packages in artifacts/debbuild/
* ./waf uninstallrpms: removes all Cloud.com RPMs from a system (but not
logfiles or modified config files)
* ./waf viewrpmdeps: displays RPM dependencies declared in the RPM specfile
* ./waf installrpmdeps: runs Yum to install the packages required to build
the CloudStack
* ./waf uninstalldebs: removes all Cloud.com DEBs from a system (AND logfiles
AND modified config files)
* ./waf viewdebdeps: displays DEB dependencies declared in the project
debian/control file
* ./waf installdebdeps: runs aptitude to install the packages required to
build our software
=== Overriding certain source files ===
Earlier in this document we explored overriding configuration files.
Overrides are not limited to configuration files.
If you want to provide your own server-setup.xml or SQL files in client/setup:
* create a directory override inside the client/setup folder
* place your file that should override a file in client/setup there
There's also override support in client/tomcatconf and agent/conf.
=== Environment substitutions ===
Any file named "something.in" has its tokens (@SOMETOKEN@) automatically
substituted for the corresponding build environment variable. The build
environment variables are generally constructed at configure time,
are controllable through the command-line parameters to waf configure,
and are available as a list of variables inside the file
artifacts/c4che/build.default.py.
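As a sketch, the substitution waf performs can be emulated with sed (the file name and token below are made up for illustration):

```shell
# Create a sample .in file containing a token:
printf 'db.host=@DBHOST@\n' > db.properties.in
# Substitute the token the way waf does at build time:
sed 's/@DBHOST@/localhost/' db.properties.in > db.properties
cat db.properties   # prints: db.host=localhost
```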
=== The prerelease mechanism ===
The prerelease mechanism (--prerelease=BRANCHNAME) allows developers and
builders to build packages with pre-release Release tags. The Release tags
are constructed in such a way that both the build number and the branch name
are included, so developers can push these packages to repositories and upgrade
them using yum or aptitude without having to delete packages manually and
install packages manually every time a new build is done. Any package built
with the prerelease mechanism gets a standard X.Y.Z version number -- and,
due to the way that the prerelease Release tags are concocted, always upgrades
any older prerelease package already present on any system. The prerelease
mechanism must never be used to create packages that are intended to be
released as stable software to the general public.
Relevant documentation:
http://www.debian.org/doc/debian-policy/ch-controlfields.html#s-f-Version
http://fedoraproject.org/wiki/PackageNamingGuidelines#Pre-Release_packages
=== SCCS info ===
When building a source distribution (waf dist), or RPM/DEB distributions
(waf deb / waf rpm), waf will automatically detect the relevant source code
control information if the git command is present on the machine where waf
is run, and it will write the information to a file called sccs-info inside
the source tarball / install it into /usr/share/doc/cloud*/sccs-info when
installing the packages.
If this source code control information cannot be determined, the old
sccs-info file is preserved across dist runs if it exists; if it did
not exist before, the fact that the source could not be traced back
to a repository is noted in the file.
=== Debugging the build system ===
Almost all targets have names. waf build -vvvvv --zones=task will give you
the task names that you can use in --targets.
---------------------------------------------------------------------
UNDERSTANDING THE BUILD SYSTEM
---------------------------------------------------------------------
=== Documentation for the build system ===
The first and foremost reference material:
- http://freehackers.org/~tnagy/wafbook/index.html
Examples
- http://code.google.com/p/waf/wiki/CodeSnippets
- http://code.google.com/p/waf/w/list
FAQ
- http://code.google.com/p/waf/wiki/FAQ
=== Why waf ===
The CloudStack uses waf to build itself. waf is a relative newcomer
to the build system world; it borrows concepts from SCons and
other later-generation build systems:
- waf is very flexible and rich; unlike other build systems, it covers
the entire life cycle, from compilation to installation to
uninstallation. it also supports dist (create source tarball),
distcheck (check that the source tarball compiles and installs),
autoconf-like checks for dependencies at compilation time,
and more.
- waf is self-contained. A single file, distributed with the project,
enables everything to be built, with only a dependency on Python,
which is freely available and ships with all Linux distributions.
- waf also supports building projects written in multiple languages
(in the case of the CloudStack, we build from C, Java and Python).
- since waf is written in Python, the entire library of the Python
language is available to use in the build process.
=== Hacking on the build system: what are these wscript files? ===
1. wscript: contains most commands you can run from within waf
2. wscript_configure: contains the process that discovers the software
on the system and configures the build to fit that
3. wscript_build: contains a manifest of *what* is built and installed
Refer to the waf book for general information on waf:
http://freehackers.org/~tnagy/wafbook/index.html
=== What happens when waf runs ===
When you run waf, this happens behind the scenes:
- When you run waf for the first time, it unpacks itself to a hidden
directory .waf-1.X.Y.MD5SUM, including the main program and all
the Python libraries it provides and needs.
- Immediately after unpacking itself, waf reads the wscript file
at the root of the source directory. After parsing this file and
loading the functions defined here, it reads wscript_build and
generates a function build() based on it.
- After loading the build scripts as explained above, waf calls
the functions you specified in the command line.
So, for example, ./waf configure build install will:
* call configure() from wscript,
* call build() loaded from the contents of wscript_build,
* call build() once more but with Options.is_install = True.
As part of build(), waf invokes ant to build the Java portion of our
stack.
=== How and why we use ant within waf ===
By now, you have probably noticed that we do, indeed, ship ant
build files in the CloudStack. During the build process, waf calls
ant directly to build the Java portions of our stack, and it uses
the resulting JAR files to perform the installation.
The reason we do this rather than use the native waf capabilities
for building Java projects is simple: by using ant, we can leverage
the built-in support for ant in Eclipse and many other IDEs. Another
reason is that Java developers are familiar with ant, so adding a
new JAR file or modifying what gets built into the existing JAR
files is easier for them.
If you add to the ant build files a new ant target that uses the
compile-java macro, waf will automatically pick it up, along with its
depends= and JAR name attributes. In general, all you need to do is
add the produced JAR name to the packaging manifests (cloud.spec and
debian/{name-of-package}.install).
---------------------------------------------------------------------
FOR ANT USERS
---------------------------------------------------------------------
If you are using ant directly instead of waf, these instructions apply to you.
In this document, the example instructions assume a local source repository rooted at c:\cloud. You are free to locate it anywhere you like.
3.1 Set up the developer build type
1) Go to the c:\cloud\java\build directory
2) Copy the file build-cloud.properties.template to build-cloud.properties, then modify the parameters to match your local setup. The template properties file has content like the following:
debug=true
debuglevel=lines,vars,source
tomcat.home=$TOMCAT_HOME --> change to your local Tomcat root directory such as c:/apache-tomcat-6.0.18
debug.jvmarg=-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
deprecation=off
build.type=developer
target.compat.version=1.5
source.compat.version=1.5
branding.name=default
3) Make sure the following environment variables and Path are set:
set environment variables:
CATALINA_HOME:
JAVA_HOME:
CLOUD_HOME:
MYSQL_HOME:
update the Path to include
MYSQL_HOME\bin
4) Clone the full directory tree of C:\cloud\java\build\deploy\production to C:\cloud\java\build\deploy\developer
You can use Windows Explorer to copy the directory tree over. Please note: during your daily development, whenever you see updates in C:\cloud\java\build\deploy\production, be sure to sync them into C:\cloud\java\build\deploy\developer.
3.2 Common build instructions
After you have set up the build type, you are ready to build and run the Management Server locally.
cd java
python waf configure build install
More at Build system.
This will install the management server and its prerequisites to the appropriate place (your Tomcat instance on Windows, /usr/local on Linux). It will also install the agent to /usr/local/cloud/agent (this will change in the future).
4. Database and Server deployment
After a successful management server build (the database deployment scripts use some artifacts from the build), you can use the database deployment script to deploy and initialize the database. The deployment scripts are in C:/cloud/java/build/deploy/db. deploy-db.sh creates and populates your DB instance; take a look at its contents for more details.
Before you run the scripts, edit C:/cloud/java/build/deploy/developer/db/server-setup-dev.xml to allocate Public and Private IP ranges for your development setup. Ensure that the ranges you pick are not allocated to others.
Customized VM templates to be populated are in C:/cloud/java/build/deploy/developer/db/templates-dev.sql. Edit this file to customize the templates to your needs.
Deploy the DB by running
./deploy-db.sh ../developer/db/server-setup-dev.xml ../developer/db/templates-dev.sql
4.1. Management Server Deployment
ant build-server
Build Management Server
ant deploy-server
Deploy Management Server software to Tomcat environment
ant debug
Start the Management Server in debug mode. The JVM debug options can be found in build-cloud.properties
ant run
Start Management Server in normal mode.
5. Agent deployment
After a successful build, you can find the build artifacts in the distribution directory; in this example, for the developer build type, they are located at c:\cloud\java\dist\developer. In particular, if you have run the
ant package-agent build command, the agent software is packaged in a single file named agent.zip under c:\cloud\java\dist\developer, together with the agent deployment script deploy-agent.sh.
5.1 Agent Type
Agent software can be deployed and configured to serve different roles at run time. In the current implementation, there are three agent configurations, called Computing Server, Routing Server and Storage Server.
* When the agent software is configured as a Computing Server, it is responsible for hosting user VMs. The agent software should run in the Xen Dom0 system on the computing server machine.
* When the agent software is configured as a Routing Server, it is responsible for hosting routing VMs for user virtual networks and console proxy system VMs. The routing server serves as the bridge to the outside network, so the machine running the agent software should have at least two network interfaces: one towards the outside network, and one participating in the internal VMOps management network. Like the computing server, the agent software on the routing server should also run in the Xen Dom0 system.
* When the agent software is configured as a Storage Server, it is responsible for providing storage service for all VMs. The storage service is based on ZFS running on a Solaris system, so the agent software on the storage server runs under Solaris (actually a Solaris VM). Dom0 systems on the computing and routing servers access the storage service through an iSCSI initiator. The storage volume is eventually mounted on the Dom0 system and made available to DomU VMs through our agent software.
5.2 Resource sharing
All developers can share the same set of agent server machines for development. To make this possible, the concept of an instance appears in various places:
* VM names. VM names are structured names; they contain an instance section that distinguishes VMs from different VMOps cloud instances. The VMOps cloud instance name is configured in the server configuration parameter AgentManager/instance.name.
* iSCSI initiator mount point. For Computing and Routing servers, the mount point distinguishes the mounted DomU VM images of different agent deployments. The mount location can be specified in the agent.properties file with a name-value pair named mount.parent.
* iSCSI target allocation point. For Storage servers, this allocation point distinguishes the storage allocation of different storage agent deployments. The allocation point can be specified in the agent.properties file with a name-value pair named parent.
5.4 Deploy agent software
Before running the deployment scripts, first copy the build artifacts agent.zip and deploy-agent.sh to your personal development directory on the agent server machines. By our current convention, your personal development directory is usually located at /root/<your name>. In the following example, the agent package and deployment script are copied to test0.lab.vmops.com and the deployment script is marked as executable.
On the build machine:
scp agent.zip root@test0:/root/<your name>
scp deploy-agent.sh root@test0:/root/<your name>
On the agent server machine:
chmod +x deploy-agent.sh
5.4.1 Deploy agent on computing server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t computing -m expert
5.4.2 Deploy agent on routing server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t routing -m expert
5.4.3 Deploy agent on storage server
deploy-agent.sh -d /root/<your name>/agent -h <management server IP> -t storage -m expert
5.5 Configure agent
After you have deployed the agent software, configure the agent by editing the agent.properties file under the /root/<your name>/agent/conf directory on each of the Routing, Computing and Storage servers. Add or edit the following properties; the rest are defaults that the agent populates at runtime.
workers=3
host=<replace with your management server IP>
port=8250
pod=<replace with your pod id>
zone=<replace with your zone id>
instance=<your unique instance name>
developer=true
The following is a sample agent.properties file for a Routing server:
workers=3
id=1
port=8250
pod=RC
storage=comstar
zone=RC
type=routing
private.network.nic=xenbr0
instance=RC
public.network.nic=xenbr1
developer=true
host=192.168.1.138
5.5 Running the agent
Edit /root/<your name>/agent/conf/log4j-cloud.xml to point the log locations somewhere under /root/<your name>.
Once you have deployed and configured the agent software, you are ready to launch it. Under the agent root directory (in our example, /root/<your name>/agent) there is a script file named run.sh; use it to launch the agent.
Launch agent in detached background process
nohup ./run.sh &
Launch agent in interactive mode
./run.sh
Launch the agent in debug mode; for example, the following command makes the JVM listen on TCP port 8787
./run.sh -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n
If the agent is launched in debug mode, you may use the Eclipse IDE to debug it remotely. Please note that when you share the agent server machine with others, you should choose a TCP port that is not already in use.
Please also note that run.sh also searches the /etc/cloud directory for agent.properties; make sure it uses the correct agent.properties file!
5.6 Stopping the agent
The pid of the agent process is in /var/run/agent.<Instance>.pid.
To stop the agent:
kill <pid of agent>
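The stop procedure can be sketched as follows, simulating the agent with a background process and a scratch pid file in the current directory (the real pid file lives under /var/run, and the instance name here is a placeholder):

```shell
# Stand-in for the running agent; its pid goes into the pid file:
sleep 60 &
echo $! > agent.TestInstance.pid
# Stopping the agent by the pid recorded in the pid file:
kill "$(cat agent.TestInstance.pid)"
```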

INSTALL
---------------------------------------------------------------------
TABLE OF CONTENTS
---------------------------------------------------------------------
1. Really quick start: building and installing a production stack
2. Post-install: setting the CloudStack components up
3. Installation paths: where the stack is installed on your system
4. Uninstalling the CloudStack from your system
---------------------------------------------------------------------
REALLY QUICK START: BUILDING AND INSTALLING A PRODUCTION STACK
---------------------------------------------------------------------
You have two options. Choose one:
a) Building distribution packages from the source and installing them
b) Building from the source and installing directly from there
=== I want to build and install distribution packages ===
This is the recommended way to run your CloudStack cloud. The
advantages are that dependencies are taken care of automatically
for you, and you can verify the integrity of the installed files
using your system's package manager.
1. As root, install the build dependencies.
a) Fedora / CentOS: ./waf installrpmdeps
b) Ubuntu: ./waf installdebdeps
2. As a non-root user, build the CloudStack packages.
a) Fedora / CentOS: ./waf rpm
b) Ubuntu: ./waf deb
3. As root, install the CloudStack packages.
You can choose which components to install on your system.
a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild
install as root: rpm -ivh artifacts/rpmbuild/RPMS/{x86_64,noarch,i386}/*.rpm
b) Ubuntu: the installable DEBs are in artifacts/debbuild
install as root: dpkg -i artifacts/debbuild/*.deb
4. Configure and start the components you intend to run.
Consult the Installation Guide to find out how to
configure each component, and "Installation paths" for information
on where programs, initscripts and config files are installed.
=== I want to build and install directly from the source ===
This is the recommended way to run your CloudStack cloud if you
intend to modify the source, if you intend to port the CloudStack to
another distribution, or if you intend to run the CloudStack on a
distribution for which packages are not built.
1. As root, install the build dependencies.
See below for a list.
2. As non-root, configure the build.
See below to discover configuration options.
./waf configure
3. As non-root, build the CloudStack.
To learn more, see "Quick guide to developing, building and
installing from source" below.
./waf build
4. As root, install the runtime dependencies.
See below for a list.
5. As root, install the CloudStack.
./waf install
6. Configure and start the components you intend to run.
Consult the Installation Guide to find out how to
configure each component, and "Installation paths" for information
on where to find programs, initscripts and config files mentioned
in the Installation Guide (paths may vary).
=== Dependencies of the CloudStack ===
- Build dependencies:
1. FIXME DEPENDENCIES LIST THEM HERE
- Runtime dependencies:
2. FIXME DEPENDENCIES LIST THEM HERE
---------------------------------------------------------------------
INSTALLATION PATHS: WHERE THE STACK IS INSTALLED ON YOUR SYSTEM
---------------------------------------------------------------------
The CloudStack build system installs files on a variety of paths, each
one of which is selectable when building from source.
- $PREFIX:
the default prefix where the entire stack is installed
defaults to /usr/local on source builds
defaults to /usr on package builds
- $SYSCONFDIR/cloud:
the prefix for CloudStack configuration files
defaults to $PREFIX/etc/cloud on source builds
defaults to /etc/cloud on package builds
- $SYSCONFDIR/init.d:
the prefix for CloudStack initscripts
defaults to $PREFIX/etc/init.d on source builds
defaults to /etc/init.d on package builds
- $BINDIR:
the CloudStack installs programs there
defaults to $PREFIX/bin on source builds
defaults to /usr/bin on package builds
- $LIBEXECDIR:
the CloudStack installs service runners there
defaults to $PREFIX/libexec on source builds
defaults to /usr/libexec on package builds (/usr/bin on Ubuntu)
---------------------------------------------------------------------
UNINSTALLING THE CLOUDSTACK FROM YOUR SYSTEM
---------------------------------------------------------------------
- If you installed the CloudStack using packages, use your operating
system package manager to remove the CloudStack packages.
a) Fedora / CentOS: the installable RPMs are in artifacts/rpmbuild
as root: rpm -qa | grep ^cloud- | xargs rpm -e
b) Ubuntu: the installable DEBs are in artifacts/debbuild
aptitude purge '~ncloud'
- If you installed from a source tree:
./waf uninstall

README
Hello, and thanks for downloading the Cloud.com CloudStack™! The
Cloud.com CloudStack™ is Open Source Software that allows
organizations to build Infrastructure as a Service (IaaS) clouds.
Working with server, storage, and networking equipment of your
choice, the CloudStack provides a turn-key software stack that
dramatically simplifies the process of deploying and managing a
cloud.
---------------------------------------------------------------------
HOW TO INSTALL THE CLOUDSTACK
---------------------------------------------------------------------
Please refer to the document INSTALL distributed with the source.
---------------------------------------------------------------------
HOW TO HACK ON THE CLOUDSTACK
---------------------------------------------------------------------
Please refer to the document HACKING distributed with the source.
---------------------------------------------------------------------
BE PART OF THE CLOUD.COM COMMUNITY!
---------------------------------------------------------------------
We are more than happy to have you ask us questions, hack our source
code, and receive your contributions.
* Our forums are available at http://cloud.com/community .
* If you would like to modify / extend / hack on the CloudStack source,
refer to the file HACKING for more information.
* If you find bugs, please log on to http://bugs.cloud.com/ and file
a report.
* If you have patches to send us, get in touch with us at info@cloud.com
or file them as attachments in our bug tracker above.
---------------------------------------------------------------------
Cloud.com's contact information is:
20400 Stevens Creek Blvd
Suite 390
Cupertino, CA 95014
Tel: +1 (888) 384-0962
This software is OSI certified Open Source Software. OSI Certified is a
certification mark of the Open Source Initiative.

View File

@ -512,6 +512,13 @@ Also see [[AdvancedOptions]]</pre>
</div>
<!--POST-SHADOWAREA-->
<div id="storeArea">
<div title="(default) on http://tiddlyvault.tiddlyspot.com/#%5B%5BDisableWikiLinksPlugin%20(TiddlyTools)%5D%5D" modifier="(System)" created="201009040211" tags="systemServer" changecount="1">
<pre>|''Type:''|file|
|''URL:''|http://tiddlyvault.tiddlyspot.com/#%5B%5BDisableWikiLinksPlugin%20(TiddlyTools)%5D%5D|
|''Workspace:''|(default)|
This tiddler was automatically created to record the details of this server</pre>
</div>
<div title="AntInformation" creator="RuddO" modifier="RuddO" created="201008072228" changecount="1">
<pre>---------------------------------------------------------------------
FOR ANT USERS
@ -702,21 +709,18 @@ Once this command is done, the packages will be built in the directory {{{artifa
# As a non-root user, run the command {{{./waf deb}}} in the source directory.
Once this command is done, the packages will be built in the directory {{{artifacts/debbuild}}}.</pre>
</div>
<div title="Building from the source and installing directly from there" creator="RuddO" modifier="RuddO" created="201008080022" modified="201008081327" changecount="14">
<pre>!Obtain the source for the CloudStack
<div title="Building from the source and installing directly from there" creator="RuddO" modifier="RuddO" created="201008080022" modified="201009040235" changecount="20">
<pre>You need to perform the following steps on each machine that will run a CloudStack component.
!Obtain the source for the CloudStack
If you aren't reading this from a local copy of the source code, see [[Obtaining the source]].
!Prepare your development environment
See [[Preparing your development environment]].
!Configure the build on the builder machine
!Prepare your environment
See [[Preparing your environment]].
!Configure the build
As non-root, run the command {{{./waf configure}}}. See [[waf configure]] to discover configuration options for that command.
!Build the CloudStack on the builder machine
!Build the CloudStack
As non-root, run the command {{{./waf build}}}. See [[waf build]] for an explanation.
!Install the CloudStack on the target systems
On each machine where you intend to run a CloudStack component:
# upload the entire source code tree after compilation, //ensuring that the source ends up in the same path as the machine in which you compiled it//,
## {{{rsync}}} is [[usually very handy|Using rsync to quickly transport the source tree to another machine]] for this
# in that newly uploaded directory of the target machine, run the command {{{./waf install}}} //as root//.
Consult [[waf install]] for information on installation.</pre>
!Install the CloudStack
Run the command {{{./waf install}}} //as root//. Consult [[waf install]] for information on installation.</pre>
</div>
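The build-and-install sequence above can be summarized as data. The waf commands are taken verbatim from this page; the small helper that renders them is purely illustrative:

```python
# The waf commands below come from this page; the helper is illustrative only.
LIFECYCLE = [
    # (command, must run as root?)
    ("./waf configure", False),  # once, as non-root
    ("./waf build", False),      # as non-root
    ("./waf install", True),     # as root
]

def as_script(steps):
    """Render the steps as shell lines, prefixing root-only steps with sudo."""
    return "\n".join(("sudo " if as_root else "") + cmd for cmd, as_root in steps)

print(as_script(LIFECYCLE))
```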
<div title="Changing the build, install and packaging processes" creator="RuddO" modifier="RuddO" created="201008081215" modified="201008081309" tags="fixme" changecount="15">
<pre>!Changing the [[configuration|waf configure]] process
@ -737,11 +741,91 @@ See the files in the {{{debian/}}} folder.</pre>
<div title="CloudStack" creator="RuddO" modifier="RuddO" created="201008072205" changecount="1">
<pre>The Cloud.com CloudStack is an open source software product that enables the deployment, management, and configuration of multi-tier and multi-tenant infrastructure cloud services by enterprises and service providers.</pre>
</div>
<div title="CloudStack build dependencies" creator="RuddO" modifier="RuddO" created="201008081310" tags="fixme" changecount="1">
<pre>Not done yet!</pre>
<div title="CloudStack build dependencies" creator="RuddO" modifier="RuddO" created="201008081310" modified="201009040226" changecount="20">
<pre>Prior to building the CloudStack, you need to install the following software packages in your system.
# Sun Java 1.6
## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
# Apache Tomcat
## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
# MySQL
## At the very minimum, you need to have the client and libraries installed
## If your development machine is also going to be the database server, you need to have the server installed and running as well
# Python 2.6
## Ensure that the {{{python}}} command is in your {{{PATH}}}
## Do ''not'' install Cygwin Python!
# The MySQLdb module for Python 2.6
## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]]
# The Bourne-again shell (also known as bash)
# GNU coreutils
''Note for Windows users'': Some of the packages in the above list are only available on Windows through Cygwin. If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your PATH. Under no circumstances install Cygwin Python! Use the Python for Windows official installer instead.
!Additional dependencies for Linux development environments
# GCC (only needed on Linux)
# glibc-devel / glibc-dev
# The Java packages (usually available in your distribution):
## commons-collections
## commons-dbcp
## commons-logging
## commons-logging-api
## commons-pool
## commons-httpclient
## ws-commons-util
# useradd
# userdel</pre>
</div>
<div title="CloudStack run-time dependencies" creator="RuddO" modifier="RuddO" created="201008081310" tags="fixme" changecount="1">
<pre>Not done yet!</pre>
<div title="CloudStack run-time dependencies" creator="RuddO" modifier="RuddO" created="201008081310" modified="201009040225" tags="fixme" changecount="16">
<pre>The following software / programs must be correctly installed on the machines where you will run a CloudStack component. This list is by no means complete yet, but it will be soon.
''Note for Windows users'': Some of the packages in the lists below are only available on Windows through Cygwin. If that is your case, install them using Cygwin and remember to include the Cygwin {{{bin/}}} directory in your PATH. Under no circumstances install Cygwin Python! Use the Python for Windows official installer instead.
!Run-time dependencies common to all components of the CloudStack
# bash
# coreutils
# Sun Java 1.6
## You must install the Java Development Kit with {{{javac}}}, not just the Java Runtime Environment
## The commands {{{java}}} and {{{javac}}} must be found in your {{{PATH}}}
# Python 2.6
## Ensure that the {{{python}}} command is in your {{{PATH}}}
## Do ''not'' install Cygwin Python!
# The Java packages (usually available in your distribution):
## commons-collections
## commons-dbcp
## commons-logging
## commons-logging-api
## commons-pool
## commons-httpclient
## ws-commons-util
!Management Server-specific dependencies
# Apache Tomcat
## If you are using the official Apache binary distribution, set the environment variable {{{TOMCAT_HOME}}} to point to the Apache Tomcat directory
# MySQL
## At the very minimum, you need to have the client and libraries installed
## If you will be running the Management Server in the same machine that will run the database server, you need to have the server installed and running as well
# The MySQLdb module for Python 2.6
## If you use Windows, you can find a [[pre-built package here|http://soemin.googlecode.com/files/MySQL-python-1.2.3c1.win32-py2.6.exe]]
# openssh-clients (provides the ssh-keygen command)
# mkisofs (provides the genisoimage command)</pre>
</div>
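A quick way to sanity-check a machine against the lists above is to probe {{{PATH}}} for the required commands. The snippet below is a hypothetical pre-flight check, not part of the CloudStack source; the command names are the ones this page mentions, and it uses modern-Python {{{shutil.which}}} rather than the Python 2.6 the stack itself targets:

```python
import shutil

# Command names taken from the dependency lists above; the checker itself
# is a hypothetical convenience, not part of the CloudStack source.
COMMON = ["bash", "java", "javac", "python"]
MANAGEMENT_SERVER = ["ssh-keygen", "genisoimage", "mysql"]

def missing(commands):
    """Return the commands that cannot be found on PATH."""
    return [c for c in commands if shutil.which(c) is None]

if __name__ == "__main__":
    for group, cmds in (("common", COMMON), ("management server", MANAGEMENT_SERVER)):
        gone = missing(cmds)
        print(group, "missing:", gone if gone else "none")
```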
<div title="Database migration infrastructure" creator="RuddO" modifier="RuddO" created="201009011837" modified="201009011852" changecount="14">
<pre>To support incremental migration from one version to the next without redeploying the database, the CloudStack includes a schema migration mechanism.
!!!How does it work?
When the database is deployed for the first time with [[waf deploydb]] or the command {{{cloud-setup-databases}}}, a row is written to the {{{configuration}}} table, named {{{schema.level}}} and containing the current schema level. This schema level row comes from the file {{{setup/db/schema-level.sql}}} in the source (refer to the [[Installation paths]] topic to find out where this file is installed in a running system).
This value is used by the database migrator {{{cloud-migrate-databases}}} (source {{{setup/bindir/cloud-migrate-databases.in}}}) to determine the starting schema level. The database migrator has a series of classes -- each class represents a step in the migration process and is usually tied to the execution of a SQL file stored in {{{setup/db}}}. To migrate the database, the database migrator:
# walks the list of steps it knows about,
# generates a list of steps sorted by the order they should be executed in,
# executes each step in order,
# at the end of each step, records the new schema level to the database table {{{configuration}}}
For more information, refer to the database migrator source -- it is documented.
!!!What impact does this have on me as a developer?
Whenever you need to evolve the schema of the database:
# write a migration SQL script and store it in {{{setup/db}}},
# include your schema changes in the appropriate SQL file {{{create-*.sql}}} too (as the database is expected to be at its latest evolved schema level right after deploying a fresh database)
# write a class in {{{setup/bindir/cloud-migrate-databases.in}}}, describing the migration step; in detail:
## the schema level your migration step expects the database to be in,
## the schema level your migration step will leave your database in (presumably the latest schema level, which you will have to choose!),
## and the name / description of the step
# bump the schema level in {{{setup/db/schema-level.sql}}} to the latest schema level
Otherwise, ''end-user migration will fail catastrophically''.</pre>
</div>
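The step-walking behaviour described above can be sketched as follows. All class names and schema levels here are hypothetical illustrations; the real steps live in {{{setup/bindir/cloud-migrate-databases.in}}} and drive SQL files stored in {{{setup/db}}}:

```python
# Hypothetical sketch of the migrator's step ordering; the real
# implementation lives in setup/bindir/cloud-migrate-databases.in.

class MigrationStep:
    """One schema migration step, usually tied to a SQL file in setup/db."""
    from_level = None  # schema level the step expects to find
    to_level = None    # schema level the step records when it finishes
    description = ""

    def run(self, db):
        raise NotImplementedError  # would execute the step's SQL file

class ExampleStepA(MigrationStep):
    from_level, to_level = "2.1.0", "2.1.1"
    description = "example: add a column"

class ExampleStepB(MigrationStep):
    from_level, to_level = "2.1.1", "2.1.2"
    description = "example: add an index"

def plan(steps, current_level, target_level):
    """Sort steps so each one starts at the level the previous one left."""
    by_start = {s.from_level: s for s in steps}
    ordered, level = [], current_level
    while level != target_level:
        step = by_start[level]   # a KeyError here means a gap in the chain
        ordered.append(step)
        level = step.to_level
    return ordered
```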
<div title="DefaultTiddlers" creator="RuddO" modifier="RuddO" created="201008072205" modified="201008072257" changecount="4">
<pre>[[Welcome]]</pre>
@ -749,13 +833,115 @@ See the files in the {{{debian/}}} folder.</pre>
<div title="Development conventions" creator="RuddO" modifier="RuddO" created="201008081334" modified="201008081336" changecount="4">
<pre>#[[Source layout guide]]</pre>
</div>
<div title="DisableWikiLinksPlugin" modifier="ELSDesignStudios" created="200512092239" modified="200807230133" tags="systemConfig" server.type="file" server.host="www.tiddlytools.com" server.page.revision="200807230133">
<pre>/***
|Name|DisableWikiLinksPlugin|
|Source|http://www.TiddlyTools.com/#DisableWikiLinksPlugin|
|Version|1.6.0|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|selectively disable TiddlyWiki's automatic ~WikiWord linking behavior|
This plugin allows you to disable TiddlyWiki's automatic ~WikiWord linking behavior, so that WikiWords embedded in tiddler content will be rendered as regular text, instead of being automatically converted to tiddler links. To create a tiddler link when automatic linking is disabled, you must enclose the link text within {{{[[...]]}}}.
!!!!!Usage
&lt;&lt;&lt;
You can block automatic WikiWord linking behavior for any specific tiddler by ''tagging it with&lt;&lt;tag excludeWikiWords&gt;&gt;'' (see configuration below) or, check a plugin option to disable automatic WikiWord links to non-existing tiddler titles, while still linking WikiWords that correspond to existing tiddlers titles or shadow tiddler titles. You can also block specific selected WikiWords from being automatically linked by listing them in [[DisableWikiLinksList]] (see configuration below), separated by whitespace. This tiddler is optional and, when present, causes the listed words to always be excluded, even if automatic linking of other WikiWords is being permitted.
Note: WikiWords contained in default ''shadow'' tiddlers will be automatically linked unless you select an additional checkbox option lets you disable these automatic links as well, though this is not recommended, since it can make it more difficult to access some TiddlyWiki standard default content (such as AdvancedOptions or SideBarTabs)
&lt;&lt;&lt;
!!!!!Configuration
&lt;&lt;&lt;
&lt;&lt;option chkDisableWikiLinks&gt;&gt; Disable ALL automatic WikiWord tiddler links
&lt;&lt;option chkAllowLinksFromShadowTiddlers&gt;&gt; ... except for WikiWords //contained in// shadow tiddlers
&lt;&lt;option chkDisableNonExistingWikiLinks&gt;&gt; Disable automatic WikiWord links for non-existing tiddlers
Disable automatic WikiWord links for words listed in: &lt;&lt;option txtDisableWikiLinksList&gt;&gt;
Disable automatic WikiWord links for tiddlers tagged with: &lt;&lt;option txtDisableWikiLinksTag&gt;&gt;
&lt;&lt;&lt;
!!!!!Revisions
&lt;&lt;&lt;
2008.07.22 [1.6.0] hijack tiddler changed() method to filter disabled wiki words from internal links[] array (so they won't appear in the missing tiddlers list)
2007.06.09 [1.5.0] added configurable txtDisableWikiLinksTag (default value: &quot;excludeWikiWords&quot;) to allows selective disabling of automatic WikiWord links for any tiddler tagged with that value.
2006.12.31 [1.4.0] in formatter, test for chkDisableNonExistingWikiLinks
2006.12.09 [1.3.0] in formatter, test for excluded wiki words specified in DisableWikiLinksList
2006.12.09 [1.2.2] fix logic in autoLinkWikiWords() (was allowing links TO shadow tiddlers, even when chkDisableWikiLinks is TRUE).
2006.12.09 [1.2.1] revised logic for handling links in shadow content
2006.12.08 [1.2.0] added hijack of Tiddler.prototype.autoLinkWikiWords so regular (non-bracketed) WikiWords won't be added to the missing list
2006.05.24 [1.1.0] added option to NOT bypass automatic wikiword links when displaying default shadow content (default is to auto-link shadow content)
2006.02.05 [1.0.1] wrapped wikifier hijack in init function to eliminate globals and avoid FireFox 1.5.0.1 crash bug when referencing globals
2005.12.09 [1.0.0] initial release
&lt;&lt;&lt;
!!!!!Code
***/
//{{{
version.extensions.DisableWikiLinksPlugin= {major: 1, minor: 6, revision: 0, date: new Date(2008,7,22)};
if (config.options.chkDisableNonExistingWikiLinks==undefined) config.options.chkDisableNonExistingWikiLinks= false;
if (config.options.chkDisableWikiLinks==undefined) config.options.chkDisableWikiLinks=false;
if (config.options.txtDisableWikiLinksList==undefined) config.options.txtDisableWikiLinksList=&quot;DisableWikiLinksList&quot;;
if (config.options.chkAllowLinksFromShadowTiddlers==undefined) config.options.chkAllowLinksFromShadowTiddlers=true;
if (config.options.txtDisableWikiLinksTag==undefined) config.options.txtDisableWikiLinksTag=&quot;excludeWikiWords&quot;;
// find the formatter for wikiLink and replace handler with 'pass-thru' rendering
initDisableWikiLinksFormatter();
function initDisableWikiLinksFormatter() {
for (var i=0; i&lt;config.formatters.length &amp;&amp; config.formatters[i].name!=&quot;wikiLink&quot;; i++);
config.formatters[i].coreHandler=config.formatters[i].handler;
config.formatters[i].handler=function(w) {
// supress any leading &quot;~&quot; (if present)
var skip=(w.matchText.substr(0,1)==config.textPrimitives.unWikiLink)?1:0;
var title=w.matchText.substr(skip);
var exists=store.tiddlerExists(title);
var inShadow=w.tiddler &amp;&amp; store.isShadowTiddler(w.tiddler.title);
// check for excluded Tiddler
if (w.tiddler &amp;&amp; w.tiddler.isTagged(config.options.txtDisableWikiLinksTag))
{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
// check for specific excluded wiki words
var t=store.getTiddlerText(config.options.txtDisableWikiLinksList);
if (t &amp;&amp; t.length &amp;&amp; t.indexOf(w.matchText)!=-1)
{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
// if not disabling links from shadows (default setting)
if (config.options.chkAllowLinksFromShadowTiddlers &amp;&amp; inShadow)
return this.coreHandler(w);
// check for non-existing non-shadow tiddler
if (config.options.chkDisableNonExistingWikiLinks &amp;&amp; !exists)
{ w.outputText(w.output,w.matchStart+skip,w.nextMatch); return; }
// if not enabled, just do standard WikiWord link formatting
if (!config.options.chkDisableWikiLinks)
return this.coreHandler(w);
// just return text without linking
w.outputText(w.output,w.matchStart+skip,w.nextMatch)
}
}
Tiddler.prototype.coreAutoLinkWikiWords = Tiddler.prototype.autoLinkWikiWords;
Tiddler.prototype.autoLinkWikiWords = function()
{
// if all automatic links are not disabled, just return results from core function
if (!config.options.chkDisableWikiLinks)
return this.coreAutoLinkWikiWords.apply(this,arguments);
return false;
}
Tiddler.prototype.disableWikiLinks_changed = Tiddler.prototype.changed;
Tiddler.prototype.changed = function()
{
this.disableWikiLinks_changed.apply(this,arguments);
// remove excluded wiki words from links array
var t=store.getTiddlerText(config.options.txtDisableWikiLinksList,&quot;&quot;).readBracketedList();
if (t.length) for (var i=0; i&lt;t.length; i++)
if (this.links.contains(t[i]))
this.links.splice(this.links.indexOf(t[i]),1);
};
//}}}</pre>
</div>
<div title="Git" creator="RuddO" modifier="RuddO" created="201008081330" tags="fixme" changecount="1">
<pre>Not done yet!</pre>
</div>
<div title="Hacking on the CloudStack" creator="RuddO" modifier="RuddO" created="201008072228" modified="201008081354" changecount="47">
<div title="Hacking on the CloudStack" creator="RuddO" modifier="RuddO" created="201008072228" modified="201009040156" changecount="52">
<pre>Start here if you want to learn the essentials to extend, modify and enhance the CloudStack. This assumes that you've already familiarized yourself with CloudStack concepts, installation and configuration using the [[Getting started|Welcome]] instructions.
* [[Obtain the source|Obtaining the source]]
* [[Prepare your environment|Preparing your development environment]]
* [[Prepare your environment|Preparing your environment]]
* [[Get acquainted with the development lifecycle|Your development lifecycle]]
* [[Familiarize yourself with our development conventions|Development conventions]]
Extra developer information:
@ -764,6 +950,7 @@ Extra developer information:
* [[How to integrate with Eclipse]]
* [[Starting over]]
* [[Making a source release|waf dist]]
* [[How to write database migration scripts|Database migration infrastructure]]
</pre>
</div>
<div title="How to integrate with Eclipse" creator="RuddO" modifier="RuddO" created="201008081029" modified="201008081346" changecount="3">
@ -785,13 +972,13 @@ Any ant target added to the ant project files will automatically be detected --
The reason we do this rather than use the native waf capabilities for building Java projects is simple: by using ant, we can leverage the support built-in for ant in [[Eclipse|How to integrate with Eclipse]] and many other &quot;&quot;&quot;IDEs&quot;&quot;&quot;. Another reason to do this is because Java developers are familiar with ant, so adding a new JAR file or modifying what gets built into the existing JAR files is facilitated for Java developers.</pre>
</div>
<div title="Installation paths" creator="RuddO" modifier="RuddO" created="201008080025" modified="201008080028" changecount="6">
<div title="Installation paths" creator="RuddO" modifier="RuddO" created="201008080025" modified="201009012342" changecount="8">
<pre>The CloudStack build system installs files on a variety of paths, each
one of which is selectable when building from source.
* {{{$PREFIX}}}:
** the default prefix where the entire stack is installed
** defaults to /usr/local on source builds
** defaults to /usr on package builds
** defaults to {{{/usr/local}}} on source builds as root, {{{$HOME/cloudstack}}} on source builds as a regular user, {{{C:\CloudStack}}} on Windows builds
** defaults to {{{/usr}}} on package builds
* {{{$SYSCONFDIR/cloud}}}:
** the prefix for CloudStack configuration files
** defaults to $PREFIX/etc/cloud on source builds
@ -901,16 +1088,17 @@ This will create a folder called {{{cloudstack-oss}}} in your current folder.
!Browsing the source code online
You can browse the CloudStack source code through [[our CGit Web interface|http://git.cloud.com/cloudstack-oss]].</pre>
</div>
<div title="Preparing your development environment" creator="RuddO" modifier="RuddO" created="201008081133" modified="201008081159" changecount="7">
<pre>!Install the build dependencies on the machine where you will compile the CloudStack
!!Fedora / CentOS
The command [[waf installrpmdeps]] issued from the source tree gets it done.
!!Ubuntu
The command [[waf installdebdeps]] issued from the source tree gets it done.
!!Other distributions
See [[CloudStack build dependencies]]
!Install the run-time dependencies on the machines where you will run the CloudStack
See [[CloudStack run-time dependencies]].</pre>
<div title="Preparing your environment" creator="RuddO" modifier="RuddO" created="201008081133" modified="201009040238" changecount="17">
<pre>!Install the build dependencies
* If you want to compile the CloudStack on Linux:
** Fedora / CentOS: The command [[waf installrpmdeps]] issued from the source tree gets it done.
** Ubuntu: The command [[waf installdebdeps]] issued from the source tree gets it done.
** Other distributions: Manually install the packages listed in [[CloudStack build dependencies]].
* If you want to compile the CloudStack on Windows or Mac:
** Manually install the packages listed in [[CloudStack build dependencies]].
** Note that you won't be able to deploy this compiled CloudStack onto Linux machines -- you will be limited to running the Management Server.
!Install the run-time dependencies
In addition to the build dependencies, a number of software packages need to be installed on the machine to be able to run certain components of the CloudStack. These packages are not strictly required to //build// the stack, but they are required to run at least one part of it. See the topic [[CloudStack run-time dependencies]] for the list of packages.</pre>
</div>
<div title="Preserving the CloudStack configuration across source reinstalls" creator="RuddO" modifier="RuddO" created="201008080958" modified="201008080959" changecount="2">
<pre>Every time you run {{{./waf install}}} to deploy changed code, waf will install configuration files once again. This can be a nuisance if you are developing the stack.
@ -1149,9 +1337,9 @@ Cloud.com's contact information is:
!Legal information
//Unless otherwise specified// by Cloud.com, Inc., or in the sources themselves, [[this software is OSI certified Open Source Software distributed under the GNU General Public License, version 3|License statement]]. OSI Certified is a certification mark of the Open Source Initiative. The software powering this documentation is &quot;&quot;&quot;BSD-licensed&quot;&quot;&quot; and obtained from [[TiddlyWiki.com|http://tiddlywiki.com/]].</pre>
</div>
<div title="Your development lifecycle" creator="RuddO" modifier="RuddO" created="201008080933" modified="201008081349" changecount="16">
<pre>This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your development environment]]:
# [[Configure|waf configure]] the source code&lt;br&gt;{{{./waf configure --prefix=/home/youruser/cloudstack}}}
<div title="Your development lifecycle" creator="RuddO" modifier="RuddO" created="201008080933" modified="201009040158" changecount="18">
<pre>This is the typical lifecycle that you would follow when hacking on a CloudStack component, assuming that your [[development environment has been set up|Preparing your environment]]:
# [[Configure|waf configure]] the source code&lt;br&gt;{{{./waf configure}}}
# [[Build|waf build]] and [[install|waf install]] the CloudStack
## {{{./waf install}}}
## [[How to perform these tasks from Eclipse|How to integrate with Eclipse]]
@ -1229,7 +1417,7 @@ Makes an inventory of all build products in {{{artifacts/default}}}, and removes
Contrast to [[waf distclean]].</pre>
</div>
<div title="waf configure" creator="RuddO" modifier="RuddO" created="201008080940" modified="201008081146" changecount="14">
<div title="waf configure" creator="RuddO" modifier="RuddO" created="201008080940" modified="201009012344" changecount="15">
<pre>{{{
./waf configure --prefix=/directory/that/you/have/write/permission/to
}}}
@ -1238,7 +1426,7 @@ This runs the file {{{wscript_configure}}}, which takes care of setting the var
!When / why should I run this?
You run this command //once//, in preparation for building the stack, or every time you need to change a configure-time variable. Once you find an acceptable set of configure-time variables, you should not need to run {{{configure}}} again.
!What happens if I don't run it?
For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options. By default, {{{PREFIX}}} is {{{/usr/local}}}, but you can set it e.g. to {{{/home/youruser/cloudstack}}} if you plan to do a non-root install. Beware that you can later install the stack as a regular user, but most components need to //run// as root.
For convenience reasons, if you forget to configure the source, waf will autoconfigure itself and select some sensible default configuration options. By default, {{{PREFIX}}} is {{{/usr/local}}} if you configure as root, or {{{/home/youruser/cloudstack}}} if you configure as your regular user (do this if you plan to do a non-root install). Beware that you can later install the stack as a regular user, but most components need to //run// as root.
!What variables / options exist for configure?
In general: refer to the output of {{{./waf configure --help}}}.

View File

@ -134,6 +134,7 @@ import com.cloud.agent.api.storage.CreateAnswer;
import com.cloud.agent.api.storage.CreateCommand;
import com.cloud.agent.api.storage.CreatePrivateTemplateAnswer;
import com.cloud.agent.api.storage.CreatePrivateTemplateCommand;
import com.cloud.agent.api.storage.DestroyCommand;
import com.cloud.agent.api.storage.DownloadAnswer;
import com.cloud.agent.api.storage.PrimaryStorageDownloadCommand;
import com.cloud.agent.api.to.StoragePoolTO;
@ -1116,6 +1117,8 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
return execute((MaintainCommand) cmd);
} else if (cmd instanceof CreateCommand) {
return execute((CreateCommand) cmd);
} else if (cmd instanceof DestroyCommand) {
return execute((DestroyCommand) cmd);
} else if (cmd instanceof PrimaryStorageDownloadCommand) {
return execute((PrimaryStorageDownloadCommand) cmd);
} else if (cmd instanceof CreatePrivateTemplateCommand) {
@ -1189,10 +1192,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
s_logger.debug(result);
return new CreateAnswer(cmd, result);
}
vol = createVolume(primaryPool, tmplVol);
LibvirtStorageVolumeDef volDef = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), tmplVol.getInfo().capacity, volFormat.QCOW2, tmplVol.getPath(), volFormat.QCOW2);
s_logger.debug(volDef.toString());
vol = primaryPool.storageVolCreateXML(volDef.toString(), 0);
if (vol == null) {
return new Answer(cmd, false, " Can't create storage volume on storage pool");
}
@ -1226,24 +1228,68 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
}
public Answer execute(DestroyCommand cmd) {
VolumeTO vol = cmd.getVolume();
try {
StorageVol volume = getVolume(vol.getPath());
if (volume == null) {
s_logger.debug("Failed to find the volume: " + vol.getPath());
return new Answer(cmd, true, "Success");
}
volume.delete(0);
volume.free();
} catch (LibvirtException e) {
s_logger.debug("Failed to delete volume: " + e.toString());
return new Answer(cmd, false, e.toString());
}
return new Answer(cmd, true, "Success");
}
protected ManageSnapshotAnswer execute(final ManageSnapshotCommand cmd) {
String snapshotName = cmd.getSnapshotName();
String VolPath = cmd.getVolumePath();
String snapshotPath = cmd.getSnapshotPath();
String vmName = cmd.getVmName();
try {
StorageVol vol = getVolume(VolPath);
if (vol == null) {
return new ManageSnapshotAnswer(cmd, false, null);
DomainInfo.DomainState state = null;
Domain vm = null;
if (vmName != null) {
try {
vm = getDomain(cmd.getVmName());
state = vm.getInfo().state;
} catch (LibvirtException e) {
}
}
Domain vm = getDomain(cmd.getVmName());
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
if (cmd.getCommandSwitch().equalsIgnoreCase(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
vm.snapshotCreateXML(snapshot);
if (state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING) {
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
if (cmd.getCommandSwitch().equalsIgnoreCase(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
vm.snapshotCreateXML(snapshot);
} else {
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
}
} else {
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
/*VM is not running, create a snapshot by ourself*/
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
if (cmd.getCommandSwitch().equalsIgnoreCase(ManageSnapshotCommand.CREATE_SNAPSHOT)) {
command.add("-c", VolPath);
} else {
command.add("-d", snapshotPath);
}
command.add("-n", snapshotName);
String result = command.execute();
if (result != null) {
s_logger.debug("Failed to manage snapshot: " + result);
return new ManageSnapshotAnswer(cmd, false, "Failed to manage snapshot: " + result);
}
}
} catch (LibvirtException e) {
s_logger.debug("Failed to manage snapshot: " + e.toString());
@ -1260,28 +1306,52 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String snapshotName = cmd.getSnapshotName();
String snapshotPath = cmd.getSnapshotUuid();
String snapshotDestPath = null;
String vmName = cmd.getVmName();
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(secondaryStoragePoolURL));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-b", snapshotPath);
command.add("-n", snapshotName);
command.add("-p", snapshotDestPath);
command.add("-t", snapshotName);
String result = command.execute();
if (result != null) {
s_logger.debug("Failed to backup snaptshot: " + result);
return new BackupSnapshotAnswer(cmd, false, result, null);
}
/*Delete the snapshot on primary*/
Domain vm = getDomain(cmd.getVmName());
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
DomainInfo.DomainState state = null;
Domain vm = null;
if (vmName != null) {
try {
vm = getDomain(cmd.getVmName());
state = vm.getInfo().state;
} catch (LibvirtException e) {
}
}
if (state == DomainInfo.DomainState.VIR_DOMAIN_RUNNING) {
String vmUuid = vm.getUUIDString();
Object[] args = new Object[] {snapshotName, vmUuid};
String snapshot = SnapshotXML.format(args);
s_logger.debug(snapshot);
DomainSnapshot snap = vm.snapshotLookupByName(snapshotName);
snap.delete(0);
} else {
command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotPath);
command.add("-n", snapshotName);
result = command.execute();
if (result != null) {
s_logger.debug("Failed to backup snapshot: " + result);
return new BackupSnapshotAnswer(cmd, false, "Failed to backup snapshot: " + result, null);
}
}
} catch (LibvirtException e) {
return new BackupSnapshotAnswer(cmd, false, e.toString(), null);
} catch (URISyntaxException e) {
@ -1297,7 +1367,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
String snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotDestPath);
@ -1319,11 +1389,12 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
StoragePool secondaryStoragePool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
String ssPmountPath = _mountPoint + File.separator + secondaryStoragePool.getUUIDString();
String snapshotDestPath = ssPmountPath + File.separator + dcId + File.separator + "snapshots" + File.separator + accountId + File.separator + volumeId;
String snapshotDestPath = ssPmountPath + File.separator + "snapshots" + File.separator + dcId + File.separator + accountId + File.separator + volumeId;
final Script command = new Script(_manageSnapshotPath, _timeout, s_logger);
command.add("-d", snapshotDestPath);
command.add("-n", cmd.getSnapshotName());
command.add("-f");
command.execute();
} catch (LibvirtException e) {
return new Answer(cmd, false, e.toString());
@ -1357,7 +1428,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
try {
secondaryPool = getNfsSPbyURI(_conn, new URI(cmd.getSecondaryStoragePoolURL()));
/*TODO: assuming all the storage pools are mounted under _mountPoint; the mount point should be obtained from pool.dumpxml*/
String templatePath = _mountPoint + File.separator + secondaryPool.getUUIDString() + File.separator + templateInstallFolder;
_storage.mkdirs(templatePath);
String tmplPath = templateInstallFolder + File.separator + tmplFileName;
Script command = new Script(_createTmplPath, _timeout, s_logger);
command.add("-t", templatePath);
@ -1404,38 +1477,55 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
protected CreatePrivateTemplateAnswer execute(CreatePrivateTemplateCommand cmd) {
String secondaryStorageURL = cmd.getSecondaryStorageURL();
String snapshotUUID = cmd.getSnapshotPath();
StoragePool secondaryStorage = null;
StoragePool privateTemplStorage = null;
StorageVol privateTemplateVol = null;
StorageVol snapshotVol = null;
try {
String templateFolder = cmd.getAccountId() + File.separator + cmd.getTemplateId() + File.separator;
String templateInstallFolder = "/template/tmpl/" + templateFolder;
secondaryStorage = getNfsSPbyURI(_conn, new URI(secondaryStorageURL));
/*TODO: assuming all the storage pools are mounted under _mountPoint; the mount point should be obtained from pool.dumpxml*/
String mountPath = _mountPoint + File.separator + secondaryStorage.getUUIDString() + templateInstallFolder;
File mpfile = new File(mountPath);
if (!mpfile.exists()) {
mpfile.mkdir();
String tmpltPath = _mountPoint + File.separator + secondaryStorage.getUUIDString() + templateInstallFolder;
_storage.mkdirs(tmpltPath);
Script command = new Script(_createTmplPath, _timeout, s_logger);
command.add("-f", cmd.getSnapshotPath());
command.add("-c", cmd.getSnapshotName());
command.add("-t", tmpltPath);
command.add("-n", cmd.getUniqueName() + ".qcow2");
command.add("-s");
String result = command.execute();
if (result != null) {
s_logger.debug("failed to create template: " + result);
return new CreatePrivateTemplateAnswer(cmd,
false,
result,
null,
0,
null,
null);
}
// Create a SR for the secondary storage installation folder
privateTemplStorage = getNfsSPbyURI(_conn, new URI(secondaryStorageURL + templateInstallFolder));
snapshotVol = getVolume(snapshotUUID);
LibvirtStorageVolumeDef vol = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), snapshotVol.getInfo().capacity, volFormat.QCOW2, null, null);
s_logger.debug(vol.toString());
privateTemplateVol = copyVolume(privateTemplStorage, vol, snapshotVol);
Map<String, Object> params = new HashMap<String, Object>();
params.put(StorageLayer.InstanceConfigKey, _storage);
Processor qcow2Processor = new QCOW2Processor();
qcow2Processor.configure("QCOW2 Processor", params);
FormatInfo info = qcow2Processor.process(tmpltPath, null, cmd.getUniqueName());
TemplateLocation loc = new TemplateLocation(_storage, tmpltPath);
loc.create(1, true, cmd.getUniqueName());
loc.addFormat(info);
loc.save();
return new CreatePrivateTemplateAnswer(cmd,
true,
null,
templateInstallFolder + privateTemplateVol.getName(),
privateTemplateVol.getInfo().capacity/1024*1024, /*in Mega unit*/
privateTemplateVol.getName(),
templateInstallFolder + cmd.getUniqueName() + ".qcow2",
info.virtualSize,
cmd.getUniqueName(),
ImageFormat.QCOW2);
} catch (URISyntaxException e) {
return new CreatePrivateTemplateAnswer(cmd,
@ -1454,7 +1544,31 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
0,
null,
null);
}
} catch (InternalErrorException e) {
return new CreatePrivateTemplateAnswer(cmd,
false,
e.toString(),
null,
0,
null,
null);
} catch (IOException e) {
return new CreatePrivateTemplateAnswer(cmd,
false,
e.toString(),
null,
0,
null,
null);
} catch (ConfigurationException e) {
return new CreatePrivateTemplateAnswer(cmd,
false,
e.toString(),
null,
0,
null,
null);
}
}
private StoragePool getNfsSPbyURI(Connect conn, URI uri) throws LibvirtException {
@ -1471,10 +1585,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
if (sp == null) {
try {
File tpFile = new File(targetPath);
if (!tpFile.exists()) {
tpFile.mkdir();
}
_storage.mkdir(targetPath);
LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, uuid, uuid,
sourceHost, sourcePath, targetPath);
s_logger.debug(spd.toString());
@ -1584,10 +1695,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String targetPath = _mountPoint + File.separator + pool.getUuid();
LibvirtStoragePoolDef spd = new LibvirtStoragePoolDef(poolType.NFS, pool.getUuid(), pool.getUuid(),
pool.getHostAddress(), pool.getPath(), targetPath);
File tpFile = new File(targetPath);
if (!tpFile.exists()) {
tpFile.mkdir();
}
_storage.mkdir(targetPath);
StoragePool sp = null;
try {
s_logger.debug(spd.toString());
@ -2361,7 +2469,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
Iterator<Map.Entry<String, String>> itr = entrySet.iterator();
while (itr.hasNext()) {
Map.Entry<String, String> entry = itr.next();
if (entry.getValue().equalsIgnoreCase(sourceFile)) {
if ((entry.getValue() != null) && (entry.getValue().equalsIgnoreCase(sourceFile))) {
diskDev = entry.getKey();
break;
}
@ -2942,9 +3050,9 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
private String getHypervisorPath() {
File f =new File("/usr/bin/cloud-qemu-kvm");
File f =new File("/usr/bin/cloud-qemu-system-x86_64");
if (f.exists()) {
return "/usr/bin/cloud-qemu-kvm";
return "/usr/bin/cloud-qemu-system-x86_64";
} else {
if (_conn == null)
return null;
@ -3098,7 +3206,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
brName = setVnetBrName(vnetId);
String vnetDev = "vtap" + vnetId;
createVnet(vnetId, _pifs.first());
vnetNic.defBridgeNet(brName, vnetDev, guestMac, interfaceDef.nicModel.VIRTIO);
vnetNic.defBridgeNet(brName, null, guestMac, interfaceDef.nicModel.VIRTIO);
}
nics.add(vnetNic);
@ -3114,7 +3222,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
brName = setVnetBrName(vnetId);
String vnetDev = "vtap" + vnetId;
createVnet(vnetId, _pifs.second());
pubNic.defBridgeNet(brName, vnetDev, pubMac, interfaceDef.nicModel.VIRTIO);
pubNic.defBridgeNet(brName, null, pubMac, interfaceDef.nicModel.VIRTIO);
}
nics.add(pubNic);
return nics;
@ -3166,7 +3274,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
String datadiskPath = tmplVol.getKey();
diskDef hda = new diskDef();
hda.defFileBasedDisk(rootkPath, "vda", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
hda.defFileBasedDisk(rootkPath, "hda", diskDef.diskBus.IDE, diskDef.diskFmtType.QCOW2);
disks.add(hda);
diskDef hdb = new diskDef();
@ -3248,7 +3356,7 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
File logPath = new File("/var/run/cloud");
if (!logPath.exists()) {
logPath.mkdir();
logPath.mkdirs();
}
cleanup_rules_for_dead_vms();
@ -3502,6 +3610,22 @@ public class LibvirtComputingResource extends ServerResourceBase implements Serv
}
}
private StorageVol createVolume(StoragePool destPool, StorageVol tmplVol) throws LibvirtException {
if (isCentosHost()) {
LibvirtStorageVolumeDef volDef = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), tmplVol.getInfo().capacity, volFormat.QCOW2, null, null);
s_logger.debug(volDef.toString());
StorageVol vol = destPool.storageVolCreateXML(volDef.toString(), 0);
/*create qcow2 image based on the name*/
Script.runSimpleBashScript("qemu-img create -f qcow2 -b " + tmplVol.getPath() + " " + vol.getPath() );
return vol;
} else {
LibvirtStorageVolumeDef volDef = new LibvirtStorageVolumeDef(UUID.randomUUID().toString(), tmplVol.getInfo().capacity, volFormat.QCOW2, tmplVol.getPath(), volFormat.QCOW2);
s_logger.debug(volDef.toString());
return destPool.storageVolCreateXML(volDef.toString(), 0);
}
}
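The createVolume() split above works around libvirt builds that cannot create a qcow2 volume with a backing store directly: on such hosts the volume is created plain and then rewritten by qemu-img with -b. A minimal sketch of the command construction (the class name and paths are illustrative, not from the CloudStack source):

```java
// Sketch only: mirrors the qemu-img invocation run by the isCentosHost() branch.
public class BackingFileCommandSketch {
    public static String qemuImgCreate(String backingPath, String volPath) {
        // -b makes the new qcow2 a copy-on-write overlay of the template image.
        return "qemu-img create -f qcow2 -b " + backingPath + " " + volPath;
    }

    public static void main(String[] args) {
        // Placeholder paths for illustration.
        System.out.println(qemuImgCreate("/mnt/tmpl/base.qcow2", "/mnt/primary/vol.qcow2"));
    }
}
```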
private StorageVol getVolume(StoragePool pool, String volKey) {
StorageVol vol = null;
try {


@ -94,6 +94,8 @@ public class LibvirtDomainXMLParser extends LibvirtXMLParser {
} else if (qName.equalsIgnoreCase("disk")) {
diskMaps.put(diskDev, diskFile);
_disk = false;
diskFile = null;
diskDev = null;
} else if (qName.equalsIgnoreCase("description")) {
_desc = false;
}

agent/wscript_build Normal file

@ -0,0 +1,7 @@
import Options
bld.install_files("${AGENTLIBDIR}",
bld.path.ant_glob("storagepatch/**",src=True,bld=False,dir=False,flat=True),
cwd=bld.path,relative_trick=True)
if not Options.options.PRESERVECONFIG:
bld.install_files_filtered("${AGENTSYSCONFDIR}","conf/*")


@ -22,16 +22,26 @@ public class Storage {
QCOW2(true, true, false),
RAW(false, false, false),
VHD(true, true, true),
ISO(false, false, false);
ISO(false, false, false),
VMDK(true, true, true, "vmw.tar");
private final boolean thinProvisioned;
private final boolean supportSparse;
private final boolean supportSnapshot;
private final String fileExtension;
private ImageFormat(boolean thinProvisioned, boolean supportSparse, boolean supportSnapshot) {
this.thinProvisioned = thinProvisioned;
this.supportSparse = supportSparse;
this.supportSnapshot = supportSnapshot;
fileExtension = null;
}
private ImageFormat(boolean thinProvisioned, boolean supportSparse, boolean supportSnapshot, String fileExtension) {
this.thinProvisioned = thinProvisioned;
this.supportSparse = supportSparse;
this.supportSnapshot = supportSnapshot;
this.fileExtension = fileExtension;
}
public boolean isThinProvisioned() {
@ -47,7 +57,10 @@ public class Storage {
}
public String getFileExtension() {
return toString().toLowerCase();
if(fileExtension == null)
return toString().toLowerCase();
return fileExtension;
}
}
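The VMDK addition above uses a common enum idiom: a per-constant override stored by a second constructor, with a computed fallback when no override is set. A standalone sketch of the same pattern (class and enum names are illustrative, not from the CloudStack source):

```java
// Sketch of the override-with-fallback idiom behind ImageFormat.getFileExtension().
public class EnumExtensionSketch {
    public enum Format {
        QCOW2, RAW, VMDK("vmw.tar"); // VMDK carries an explicit extension

        private final String fileExtension;

        Format() { this.fileExtension = null; }
        Format(String fileExtension) { this.fileExtension = fileExtension; }

        public String getFileExtension() {
            // Fall back to the lowercased constant name when no override is set.
            return fileExtension == null ? toString().toLowerCase() : fileExtension;
        }
    }

    public static void main(String[] args) {
        System.out.println(Format.QCOW2.getFileExtension()); // qcow2
        System.out.println(Format.VMDK.getFileExtension());  // vmw.tar
    }
}
```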


@ -17,6 +17,8 @@
*/
package com.cloud.storage;
import java.util.Date;
import com.cloud.domain.PartOf;
import com.cloud.template.BasedOn;
import com.cloud.user.OwnedBy;
@ -86,4 +88,8 @@ public interface Volume extends PartOf, OwnedBy, BasedOn {
void setSourceId(Long sourceId);
Long getSourceId();
Date getAttached();
void setAttached(Date attached);
}


@ -107,6 +107,7 @@
<property name="meld.home" location="/usr/local/bin" />
<property name="assertion" value="-da" />
<!-- directories for testing -->
<property name="test.target.dir" location="${target.dir}/test" />
<property name="test.classes.dir" location="${test.target.dir}/classes" />
@ -389,10 +390,6 @@
</target>
<target name="build-console-proxy" depends="-init, build-console-viewer, compile-console-proxy, copy-console-proxy">
<copy todir="${console-proxy.dist.dir}">
<fileset dir="${console-proxy.dir}/scripts">
</fileset>
</copy>
<copy todir="${console-proxy.dist.dir}">
<fileset dir="${console-proxy.dir}/scripts">
</fileset>
@ -518,17 +515,14 @@
</target>
<target name="build-kvm-domr-patch" depends="-init">
<target name="build-systemvm-patch" depends="-init">
<mkdir dir="${dist.dir}" />
<tar destfile="${dist.dir}/patch.tar">
<tarfileset dir="${base.dir}/patches/kvm" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
</tarfileset>
<tarfileset dir="${base.dir}/patches/shared" filemode="755">
<tarfileset dir="${base.dir}/patches/systemvm" filemode="755">
<include name="**/*"/>
<exclude name="**/.classpath" />
<exclude name="**/.project" />
<exclude name="**/wscript_build" />
</tarfileset>
</tar>
<gzip destfile="${dist.dir}/patch.tgz" src="${dist.dir}/patch.tar"/>


@ -100,7 +100,7 @@
</target>
<target name="deploy-server" depends="deploy-common" >
<copy todir="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/vms" file="${dist.dir}/systemvm.zip" />
<copy todir="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/vms" file="${dist.dir}/systemvm.iso" />
</target>
<target name="deploy-common" >
@ -114,7 +114,6 @@
<include name="*.jar"/>
</fileset>
</copy>
<copy todir="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/scripts/vm/hypervisor/xenserver" file="${dist.dir}/patch.tgz" />
<touch file="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/scripts/vm/hypervisor/xenserver/version"/>
<echo file="${server.deploy.to.dir}/webapps/client/WEB-INF/lib/scripts/vm/hypervisor/xenserver/version" append="false" message="${version}.${build.number}"/>
<copy overwrite="true" todir="${server.deploy.to.dir}/conf">
@ -169,11 +168,18 @@
<available file="${setup.db.dir}/override/templates.xenserver.sql" />
</condition>
<condition property="vmware.templates.file" value="override/templates.vmware.sql" else="templates.vmware.sql">
<available file="${setup.db.dir}/override/templates.vmware.sql" />
</condition>
<condition property="templates.file" value="${kvm.templates.file}" else="${xenserver.templates.file}" >
<condition property="templates.file.intermediate" value="${kvm.templates.file}" else="${xenserver.templates.file}" >
<isset property="KVM"/>
</condition>
<condition property="templates.file" value="${vmware.templates.file}" else="${templates.file.intermediate}" >
<isset property="vmware"/>
</condition>
<echo message="deploydb ${server-setup.file} ${templates.file} ${DBROOTPW}" />
<exec dir="${setup.db.dir}" executable="bash">
<arg value="deploy-db-dev.sh" />


@ -23,7 +23,6 @@
<property name="docs.dist.dir" location="${dist.dir}/docs" />
<property name="db.dist.dir" location="${dist.dir}/db" />
<property name="usage.dist.dir" location="${dist.dir}/usage" />
<property name="kvm.domr.patch.dir" location="${scripts.dir}/vm/hypervisor/kvm/patch" />
<target name="-init-package">
<mkdir dir="${dist.dir}" />
@ -92,9 +91,9 @@
</target>
<target name="package-agent" depends="-init-package, package-oss-systemvm, build-kvm-domr-patch, package-agent-common">
<target name="package-agent" depends="-init-package, package-oss-systemvm, build-systemvm-patch, package-agent-common">
<zip destfile="${dist.dir}/agent.zip" duplicate="preserve" update="true">
<zipfileset dir="${dist.dir}" prefix="scripts/vm/hypervisor/kvm">
<zipfileset dir="${dist.dir}" prefix="vms">
<include name="patch.tgz" />
</zipfileset>
<zipfileset dir="${dist.dir}" prefix="vms" filemode="555">
@ -103,6 +102,18 @@
</zip>
</target>
<target name="package-oss-systemvm-iso" depends="-init-package, package-oss-systemvm, build-systemvm-patch">
<exec executable="mkisofs" dir="${dist.dir}">
<arg value="-quiet"/>
<arg value="-r"/>
<arg value="-o"/>
<arg value="systemvm.iso"/>
<arg value="systemvm.zip"/>
<arg value="patch.tgz"/>
</exec>
</target>
<target name="package-agent-simulator" depends="-init-package">
<delete file="${dist.dir}/agent-simulator.zip" />
<zip destfile="${dist.dir}/agent-simulator.zip" duplicate="preserve">
@ -123,7 +134,7 @@
</zip>
</target>
<target name="build-all" depends="build-opensource, build-kvm-domr-patch, build-ui, build-war-oss, package-oss-systemvm">
<target name="build-all" depends="build-opensource, build-ui, build-war-oss, package-oss-systemvm-iso">
</target>
<target name="build-war-oss" depends="-init-package" description="Compile the GWT client UI and builds WAR file.">


@ -1,5 +1 @@
computer = computer
disk = disk
computer_disk_hahaha = computer disk hahaha
monitor = monitor
keyboard = keyboard
Details = Details


@ -1,5 +1,14 @@
computer = 電腦
disk = 硬碟
computer_disk_hahaha = 電腦 硬碟 哈哈哈 !!!
monitor = 瑩幕
keyboard = 鍵盤
Details = 詳述
Volume = 容積
Statistics = 統計
Zone = 區域
Template = 模板
Service = 服務
HA = 高的可用性
Created = 產生日期
Account = 帳戶
Domain = 領土
Host = 主機
ISO = 空白模板


@ -140,6 +140,7 @@ listSystemVms=com.cloud.api.commands.ListSystemVMsCmd;1
updateConfiguration=com.cloud.api.commands.UpdateCfgCmd;1
listConfigurations=com.cloud.api.commands.ListCfgsByCmd;1
addConfig=com.cloud.api.commands.AddConfigCmd;15
listCapabilities=com.cloud.api.commands.ListCapabilitiesCmd;15
#### pod commands
createPod=com.cloud.api.commands.CreatePodCmd;1
@ -208,4 +209,4 @@ listNetworkGroups=com.cloud.api.commands.ListNetworkGroupsCmd;11
registerPreallocatedLun=com.cloud.server.api.commands.RegisterPreallocatedLunCmd;1
deletePreallocatedLun=com.cloud.server.api.commands.DeletePreallocatedLunCmd;1
listPreallocatedLuns=com.cloud.api.commands.ListPreallocatedLunsCmd;1


@ -110,7 +110,9 @@
<dao name="GuestOSDao" class="com.cloud.storage.dao.GuestOSDaoImpl"/>
<dao name="GuestOSCategoryDao" class="com.cloud.storage.dao.GuestOSCategoryDaoImpl"/>
<dao name="ClusterDao" class="com.cloud.dc.dao.ClusterDaoImpl"/>
<dao name="NetworkProfileDao" class="com.cloud.network.dao.NetworkProfileDaoImpl"/>
<dao name="NetworkOfferingDao" class="com.cloud.offerings.dao.NetworkOfferingDaoImpl"/>
<adapters key="com.cloud.agent.manager.allocator.HostAllocator">
<adapter name="FirstFitRouting" class="com.cloud.agent.manager.allocator.impl.FirstFitRoutingAllocator"/>
<adapter name="FirstFit" class="com.cloud.agent.manager.allocator.impl.FirstFitAllocator"/>
@ -217,8 +219,9 @@
<dao name="IP Addresses configuration server" class="com.cloud.network.dao.IPAddressDaoImpl"/>
<dao name="Datacenter IP Addresses configuration server" class="com.cloud.dc.dao.DataCenterIpAddressDaoImpl"/>
<dao name="domain router" class="com.cloud.vm.dao.DomainRouterDaoImpl"/>
<dao name="host zone configuration server" class="com.cloud.dc.dao.DataCenterDaoImpl">
</dao>
<dao name="host zone configuration server" class="com.cloud.dc.dao.DataCenterDaoImpl"/>
<dao name="Console Proxy" class="com.cloud.vm.dao.ConsoleProxyDaoImpl"/>
<dao name="Secondary Storage VM" class="com.cloud.vm.dao.SecondaryStorageVmDaoImpl"/>
<dao name="host pod configuration server" class="com.cloud.dc.dao.HostPodDaoImpl">
</dao>
<dao name="PodVlanMap configuration server" class="com.cloud.dc.dao.PodVlanMapDaoImpl"/>

client/wscript_build Normal file

@ -0,0 +1,11 @@
import Options
start_path = bld.path.find_dir("WEB-INF")
bld.install_files('${MSENVIRON}/webapps/client/WEB-INF',
start_path.ant_glob("**",src=True,bld=False,dir=False,flat=True),
cwd=start_path,relative_trick=True)
if not Options.options.PRESERVECONFIG:
bld.install_files_filtered("${MSCONF}","tomcatconf/*")
bld.install_files("${MSCONF}",'tomcatconf/db.properties',chmod=0640)
bld.setownership("${MSCONF}/db.properties","root",bld.env.MSUSER)


@ -34,6 +34,8 @@ BuildRequires: commons-httpclient
BuildRequires: jpackage-utils
BuildRequires: gcc
BuildRequires: glibc-devel
BuildRequires: /usr/bin/mkisofs
BuildRequires: MySQL-python
%global _premium %(tar jtvmf %{SOURCE0} '*/cloudstack-proprietary/' --occurrence=1 2>/dev/null | wc -l)
@ -181,12 +183,11 @@ Summary: Cloud.com setup tools
Obsoletes: vmops-setup < %{version}-%{release}
Requires: java >= 1.6.0
Requires: python
Requires: mysql
Requires: MySQL-python
Requires: %{name}-utils = %{version}-%{release}
Requires: %{name}-server = %{version}-%{release}
Requires: %{name}-deps = %{version}-%{release}
Requires: %{name}-python = %{version}-%{release}
Requires: MySQL-python
Group: System Environment/Libraries
%description setup
The Cloud.com setup tools let you set up your Management Server and Usage Server.
@ -372,7 +373,6 @@ if [ "$1" == "1" ] ; then
/sbin/chkconfig --add %{name}-management > /dev/null 2>&1 || true
/sbin/chkconfig --level 345 %{name}-management on > /dev/null 2>&1 || true
fi
test -f %{_sharedstatedir}/%{name}/management/.ssh/id_rsa || su - %{name} -c 'yes "" 2>/dev/null | ssh-keygen -t rsa -q -N ""' < /dev/null
@ -456,79 +456,38 @@ fi
%doc %{_docdir}/%{name}-%{version}/sccs-info
%doc %{_docdir}/%{name}-%{version}/version-info
%doc %{_docdir}/%{name}-%{version}/configure-info
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files client-ui
%defattr(0644,root,root,0755)
%{_datadir}/%{name}/management/webapps/client/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files server
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-server.jar
%{_sysconfdir}/%{name}/server/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%if %{_premium}
%files agent-scripts
%defattr(-,root,root,-)
%{_libdir}/%{name}/agent/scripts/*
%{_libdir}/%{name}/agent/vms/systemvm.zip
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%else
%files agent-scripts
%defattr(-,root,root,-)
%{_libdir}/%{name}/agent/scripts/installer/*
%{_libdir}/%{name}/agent/scripts/network/domr/*.sh
%{_libdir}/%{name}/agent/scripts/storage/*.sh
%{_libdir}/%{name}/agent/scripts/storage/zfs/*
%{_libdir}/%{name}/agent/scripts/storage/qcow2/*
%{_libdir}/%{name}/agent/scripts/storage/secondary/*
%{_libdir}/%{name}/agent/scripts/util/*
%{_libdir}/%{name}/agent/scripts/vm/*.sh
%{_libdir}/%{name}/agent/scripts/vm/storage/nfs/*
%{_libdir}/%{name}/agent/scripts/vm/storage/iscsi/*
%{_libdir}/%{name}/agent/scripts/vm/network/*
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/*.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/kvm/*
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xen/*
%{_libdir}/%{name}/agent/vms/systemvm.zip
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
# maintain the following list in sync with files agent-scripts
%if %{_premium}
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/check_heartbeat.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/find_bond.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/launch_hb.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/setup_heartbeat_sr.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/vmopspremium
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenheartbeat.sh
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch-premium
%exclude %{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xs_cleanup.sh
%endif
%{_libdir}/%{name}/agent/vms/systemvm.zip
%{_libdir}/%{name}/agent/vms/systemvm.iso
%files daemonize
%defattr(-,root,root,-)
%attr(755,root,root) %{_bindir}/%{name}-daemonize
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files deps
%defattr(0644,root,root,0755)
@ -549,39 +508,20 @@ fi
%{_javadir}/%{name}-xenserver-5.5.0-1.jar
%{_javadir}/%{name}-xmlrpc-common-3.*.jar
%{_javadir}/%{name}-xmlrpc-client-3.*.jar
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files core
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-core.jar
%doc README
%doc INSTALL
%doc HACKING
%doc debian/copyright
%files vnet
%defattr(0644,root,root,0755)
%attr(0755,root,root) %{_sbindir}/%{name}-vnetd
%attr(0755,root,root) %{_sbindir}/%{name}-vn
%attr(0755,root,root) %{_initrddir}/%{name}-vnetd
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files python
%defattr(0644,root,root,0755)
%{_prefix}/lib*/python*/site-packages/%{name}*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files setup
%attr(0755,root,root) %{_bindir}/%{name}-setup-databases
@ -591,19 +531,17 @@ fi
%{_datadir}/%{name}/setup/create-index-fk.sql
%{_datadir}/%{name}/setup/create-schema.sql
%{_datadir}/%{name}/setup/server-setup.sql
%{_datadir}/%{name}/setup/templates.kvm.sql
%{_datadir}/%{name}/setup/templates.xenserver.sql
%{_datadir}/%{name}/setup/templates.*.sql
%{_datadir}/%{name}/setup/deploy-db-dev.sh
%{_datadir}/%{name}/setup/server-setup.xml
%{_datadir}/%{name}/setup/data-20to21.sql
%{_datadir}/%{name}/setup/index-20to21.sql
%{_datadir}/%{name}/setup/index-212to213.sql
%{_datadir}/%{name}/setup/postprocess-20to21.sql
%{_datadir}/%{name}/setup/schema-20to21.sql
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%{_datadir}/%{name}/setup/schema-level.sql
%{_datadir}/%{name}/setup/schema-21to22.sql
%{_datadir}/%{name}/setup/data-21to22.sql
%files client
%defattr(0644,root,root,0755)
@ -643,19 +581,10 @@ fi
%dir %attr(770,root,%{name}) %{_localstatedir}/cache/%{name}/management/temp
%dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/management
%dir %attr(770,root,%{name}) %{_localstatedir}/log/%{name}/agent
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files agent-libs
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-agent.jar
%doc README
%doc INSTALL
%doc HACKING
%doc debian/copyright
%files agent
%defattr(0644,root,root,0755)
@ -671,11 +600,6 @@ fi
%{_libdir}/%{name}/agent/images
%attr(0755,root,root) %{_bindir}/%{name}-setup-agent
%dir %attr(770,root,root) %{_localstatedir}/log/%{name}/agent
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files console-proxy
%defattr(0644,root,root,0755)
@ -688,11 +612,6 @@ fi
%{_libdir}/%{name}/console-proxy/*
%attr(0755,root,root) %{_bindir}/%{name}-setup-console-proxy
%dir %attr(770,root,root) %{_localstatedir}/log/%{name}/console-proxy
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%if %{_premium}
@ -703,20 +622,10 @@ fi
%{_sharedstatedir}/%{name}/test/*
%{_libdir}/%{name}/test/*
%{_sysconfdir}/%{name}/test/*
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files premium-deps
%defattr(0644,root,root,0755)
%{_javadir}/%{name}-premium/*.jar
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%files premium
%defattr(0644,root,root,0755)
@ -724,15 +633,18 @@ fi
%{_javadir}/%{name}-server-extras.jar
%{_sysconfdir}/%{name}/management/commands-ext.properties
%{_sysconfdir}/%{name}/management/components-premium.xml
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/*
%{_libdir}/%{name}/agent/vms/systemvm-premium.zip
%{_libdir}/%{name}/agent/vms/systemvm-premium.iso
%{_datadir}/%{name}/setup/create-database-premium.sql
%{_datadir}/%{name}/setup/create-schema-premium.sql
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
# maintain the following list in sync with files agent-scripts
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/check_heartbeat.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/find_bond.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/launch_hb.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/setup_heartbeat_sr.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/vmopspremium
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenheartbeat.sh
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch-premium
%{_libdir}/%{name}/agent/scripts/vm/hypervisor/xenserver/xs_cleanup.sh
%files usage
%defattr(0644,root,root,0755)
@ -743,11 +655,6 @@ fi
%{_sysconfdir}/%{name}/usage/usage-components.xml
%config(noreplace) %{_sysconfdir}/%{name}/usage/log4j-%{name}_usage.xml
%config(noreplace) %attr(640,root,%{name}) %{_sysconfdir}/%{name}/usage/db.properties
%doc README
%doc INSTALL
%doc HACKING
%doc README.html
%doc debian/copyright
%endif

console-proxy/scripts/_run.sh Executable file

@ -0,0 +1,51 @@
#!/usr/bin/env bash
#run.sh runs the console proxy.
# make sure we delete the old files from the original template
rm console-proxy.jar
rm console-common.jar
rm conf/cloud.properties
set -x
CP=./:./conf
for file in *.jar
do
CP=${CP}:$file
done
keyvalues=
if [ -f /mnt/cmdline ]
then
CMDLINE=$(cat /mnt/cmdline)
else
CMDLINE=$(cat /proc/cmdline)
fi
#CMDLINE="graphical utf8 eth0ip=0.0.0.0 eth0mask=255.255.255.0 eth1ip=192.168.140.40 eth1mask=255.255.255.0 eth2ip=172.24.0.50 eth2mask=255.255.0.0 gateway=172.24.0.1 dns1=72.52.126.11 template=domP dns2=72.52.126.12 host=192.168.1.142 port=8250 mgmtcidr=192.168.1.0/24 localgw=192.168.140.1 zone=5 pod=5"
for i in $CMDLINE
do
KEY=$(echo $i | cut -s -d= -f1)
VALUE=$(echo $i | cut -s -d= -f2)
[ "$KEY" == "" ] && continue
case $KEY in
*)
keyvalues="${keyvalues} $KEY=$VALUE"
esac
done
tot_mem_k=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
let "tot_mem_m=tot_mem_k>>10"
let "eightypcnt=$tot_mem_m*8/10"
let "maxmem=$tot_mem_m-80"
if [ $maxmem -gt $eightypcnt ]
then
maxmem=$eightypcnt
fi
EXTRA=
if [ -f certs/realhostip.keystore ]
then
EXTRA="-Djavax.net.ssl.trustStore=$(dirname $0)/certs/realhostip.keystore -Djavax.net.ssl.trustStorePassword=vmops.com"
fi
java -mx${maxmem}m ${EXTRA} -cp $CP com.cloud.agent.AgentShell $keyvalues $@
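The heap sizing in _run.sh above caps the JVM at whichever is smaller: total memory minus 80 MB, or 80% of total memory. The same arithmetic in Java, as a worked check (the class name is illustrative):

```java
// Sketch of the _run.sh heap cap: min(total - 80, total * 8 / 10), all in MB.
public class HeapCapSketch {
    public static int maxHeapMb(int totalMb) {
        int eightyPct = totalMb * 8 / 10; // integer division, as in the shell script
        int headroom = totalMb - 80;      // leave 80 MB for the rest of the guest
        return Math.min(headroom, eightyPct);
    }

    public static void main(String[] args) {
        System.out.println(maxHeapMb(1024)); // 819 (80% bound wins)
        System.out.println(maxHeapMb(256));  // 176 (headroom bound wins)
    }
}
```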


@ -2,7 +2,12 @@
BASE_DIR="/var/www/html/copy/template/"
HTACCESS="$BASE_DIR/.htaccess"
PASSWDFILE="/etc/httpd/.htpasswd"
if [ -d /etc/apache2 ]
then
PASSWDFILE="/etc/apache2/.htpasswd"
fi
config_htaccess() {
mkdir -p $BASE_DIR


@ -15,6 +15,17 @@ config_httpd_conf() {
echo "</VirtualHost>" >> /etc/httpd/conf/httpd.conf
}
config_apache2_conf() {
local ip=$1
local srvr=$2
cp -f /etc/apache2/sites-available/default.orig /etc/apache2/sites-available/default
cp -f /etc/apache2/sites-available/default-ssl.orig /etc/apache2/sites-available/default-ssl
sed -i -e "s/VirtualHost.*:80$/VirtualHost $ip:80/" /etc/apache2/sites-available/default
sed -i "s/_default_/$ip/" /etc/apache2/sites-available/default-ssl
sed -i 's/ssl-cert-snakeoil.key/realhostip.key/' /etc/apache2/sites-available/default-ssl
sed -i 's/ssl-cert-snakeoil.pem/realhostip.crt/' /etc/apache2/sites-available/default-ssl
}
copy_certs() {
local certdir=$(dirname $0)/certs
local mydir=$(dirname $0)
@ -25,16 +36,37 @@ copy_certs() {
return 1
}
copy_certs_apache2() {
local certdir=$(dirname $0)/certs
local mydir=$(dirname $0)
if [ -d $certdir ] && [ -f $certdir/realhostip.key ] && [ -f $certdir/realhostip.crt ] ; then
cp $certdir/realhostip.key /etc/ssl/private/ && cp $certdir/realhostip.crt /etc/ssl/certs/
return $?
fi
return 1
}
if [ $# -ne 2 ] ; then
echo $"Usage: `basename $0` ipaddr servername "
exit 0
fi
copy_certs
if [ -d /etc/apache2 ]
then
copy_certs_apache2
else
copy_certs
fi
if [ $? -ne 0 ]
then
echo "Failed to copy certificates"
exit 2
fi
config_httpd_conf $1 $2
if [ -d /etc/apache2 ]
then
config_apache2_conf $1 $2
else
config_httpd_conf $1 $2
fi


@ -1,51 +1,14 @@
#!/usr/bin/env bash
#run.sh runs the console proxy.
#!/bin/bash
#_run.sh runs the agent client.
# make sure we delete the old files from the original template
rm console-proxy.jar
rm console-common.jar
rm conf/cloud.properties
set -x
CP=./:./conf
for file in *.jar
# set -x
while true
do
CP=${CP}:$file
./_run.sh "$@"
ex=$?
if [ $ex -eq 0 ] || [ $ex -eq 1 ] || [ $ex -eq 66 ] || [ $ex -gt 128 ]; then
exit $ex
fi
sleep 20
done
keyvalues=
if [ -f /mnt/cmdline ]
then
CMDLINE=$(cat /mnt/cmdline)
else
CMDLINE=$(cat /proc/cmdline)
fi
#CMDLINE="graphical utf8 eth0ip=0.0.0.0 eth0mask=255.255.255.0 eth1ip=192.168.140.40 eth1mask=255.255.255.0 eth2ip=172.24.0.50 eth2mask=255.255.0.0 gateway=172.24.0.1 dns1=72.52.126.11 template=domP dns2=72.52.126.12 host=192.168.1.142 port=8250 mgmtcidr=192.168.1.0/24 localgw=192.168.140.1 zone=5 pod=5"
for i in $CMDLINE
do
KEY=$(echo $i | cut -s -d= -f1)
VALUE=$(echo $i | cut -s -d= -f2)
[ "$KEY" == "" ] && continue
case $KEY in
*)
keyvalues="${keyvalues} $KEY=$VALUE"
esac
done
tot_mem_k=$(cat /proc/meminfo | grep MemTotal | awk '{print $2}')
let "tot_mem_m=tot_mem_k>>10"
let "eightypcnt=$tot_mem_m*8/10"
let "maxmem=$tot_mem_m-80"
if [ $maxmem -gt $eightypcnt ]
then
maxmem=$eightypcnt
fi
EXTRA=
if [ -f certs/realhostip.keystore ]
then
EXTRA="-Djavax.net.ssl.trustStore=$(dirname $0)/certs/realhostip.keystore -Djavax.net.ssl.trustStorePassword=vmops.com"
fi
java -mx${maxmem}m ${EXTRA} -cp $CP com.cloud.agent.AgentShell $keyvalues $@

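The rewritten `run.sh` above is a supervisor loop: it reruns `_run.sh` and only stops when the exit code is one it treats as terminal (0, 1, 66, or anything above 128, which shells use to report death by signal). That classification can be sketched as a hypothetical predicate:

```shell
# Hypothetical predicate mirroring run.sh's terminal-exit test: returns
# success (0) when the wrapper should stop instead of restarting the agent.
# Shells report a process killed by signal N as exit status 128+N, so any
# code above 128 means the agent was killed rather than crashed.
is_terminal() {
    ex=$1
    [ "$ex" -eq 0 ] || [ "$ex" -eq 1 ] || [ "$ex" -eq 66 ] || [ "$ex" -gt 128 ]
}
```

Any other code (an ordinary crash) makes the wrapper sleep 20 seconds and try again.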

@ -0,0 +1,10 @@
import Options
# binary unsubstitutable files:
bld.install_files("${CPLIBDIR}",bld.path.ant_glob("images/**",src=True,bld=False,dir=False,flat=True),cwd=bld.path,relative_trick=True)
# text substitutable files (substitute with tokens from the environment bld.env):
bld.substitute('css/** js/** ui/** scripts/**',install_to="${CPLIBDIR}")
# config files (do not replace them if preserve config option is true)
if not Options.options.PRESERVECONFIG: bld.install_files_filtered("${CPSYSCONFDIR}","conf.dom0/*")


@ -210,4 +210,6 @@ public interface AgentManager extends Manager {
public boolean reconnect(final long hostId) throws AgentUnavailableException;
public List<HostVO> discoverHosts(long dcId, Long podId, Long clusterId, URI url, String username, String password) throws DiscoveryException;
Answer easySend(Long hostId, Command cmd, int timeout);
}


@ -30,6 +30,7 @@ public class CreateCommand extends Command {
private DiskCharacteristics diskCharacteristics;
private String templateUrl;
private long size;
private String instanceName;
protected CreateCommand() {
super();
@ -63,6 +64,7 @@ public class CreateCommand extends Command {
this.pool = new StoragePoolTO(pool);
this.templateUrl = null;
this.size = size;
//this.instanceName = vm.getInstanceName();
}
@Override
@ -89,4 +91,8 @@ public class CreateCommand extends Command {
public long getSize(){
return this.size;
}
public String getInstanceName() {
return instanceName;
}
}


@ -28,6 +28,16 @@ public class PrimaryStorageDownloadCommand extends AbstractDownloadCommand {
String localPath;
String poolUuid;
long poolId;
//
// Temporary hacking to make vmware work quickly, expose NFS raw information to allow
// agent do quick copy over NFS.
//
// provide storage URL (it contains all information to help agent resource to mount the
// storage if needed, example of such URL may be as following
// nfs://192.168.10.231/export/home/kelven/vmware-test/secondary
String secondaryStorageUrl;
String primaryStorageUrl;
protected PrimaryStorageDownloadCommand() {
}
@ -54,6 +64,22 @@ public class PrimaryStorageDownloadCommand extends AbstractDownloadCommand {
return localPath;
}
public void setSecondaryStorageUrl(String url) {
secondaryStorageUrl = url;
}
public String getSecondaryStorageUrl() {
return secondaryStorageUrl;
}
public void setPrimaryStorageUrl(String url) {
primaryStorageUrl = url;
}
public String getPrimaryStorageUrl() {
return primaryStorageUrl;
}
@Override
public boolean executeInSequence() {
return true;


@ -121,6 +121,11 @@ public class EventTypes {
public static final String EVENT_SERVICE_OFFERING_EDIT = "SERVICE.OFFERING.EDIT";
public static final String EVENT_SERVICE_OFFERING_DELETE = "SERVICE.OFFERING.DELETE";
// Disk Offerings
public static final String EVENT_DISK_OFFERING_CREATE = "DISK.OFFERING.CREATE";
public static final String EVENT_DISK_OFFERING_EDIT = "DISK.OFFERING.EDIT";
public static final String EVENT_DISK_OFFERING_DELETE = "DISK.OFFERING.DELETE";
// Pods
public static final String EVENT_POD_CREATE = "POD.CREATE";
public static final String EVENT_POD_EDIT = "POD.EDIT";


@ -1,5 +1,5 @@
/**
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
* Copyright (C) 2010 Cloud.com, Inc. All rights reserved.
*
* This software is licensed under the GNU General Public License v3 or later.
*
@ -17,6 +17,7 @@
*/
package com.cloud.hypervisor.xen.resource;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
@ -152,6 +153,7 @@ import com.cloud.host.Host.Type;
import com.cloud.hypervisor.Hypervisor;
import com.cloud.network.Network.BroadcastDomainType;
import com.cloud.network.Network.TrafficType;
import com.cloud.hypervisor.xen.resource.XenServerConnectionPool.XenServerConnection;
import com.cloud.resource.ServerResource;
import com.cloud.storage.Storage;
import com.cloud.storage.Storage.ImageFormat;
@ -345,6 +347,11 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}
}
protected VDI cloudVDIcopy(VDI vdi, SR sr) throws BadServerResponse, XenAPIException, XmlRpcException{
Connection conn = getConnection();
return vdi.copy(conn, sr);
}
protected void destroyStoppedVm() {
Map<VM, VM.Record> vmentries = null;
Connection conn = getConnection();
@ -1063,29 +1070,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}
protected Answer execute(ModifySshKeysCommand cmd) {
String publickey = cmd.getPubKey();
String privatekey = cmd.getPrvKey();
com.trilead.ssh2.Connection sshConnection = new com.trilead.ssh2.Connection(_host.ip, 22);
try {
sshConnection.connect(null, 60000, 60000);
if (!sshConnection.authenticateWithPassword(_username, _password)) {
throw new Exception("Unable to authenticate");
}
SCPClient scp = new SCPClient(sshConnection);
scp.put(publickey.getBytes(), "id_rsa.pub", "/opt/xensource/bin", "0600");
scp.put(privatekey.getBytes(), "id_rsa", "/opt/xensource/bin", "0600");
scp.put(privatekey.getBytes(), "id_rsa.cloud", "/root/.ssh", "0600");
return new Answer(cmd);
} catch (Exception e) {
String msg = " scp ssh key failed due to " + e.toString() + " - " + e.getMessage();
s_logger.warn(msg);
} finally {
sshConnection.close();
}
return new Answer(cmd, false, "modifySshkeys failed");
return new Answer(cmd);
}
private boolean doPingTest(final String computingHostIp) {
@ -2093,6 +2078,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
/* Does the template exist in primary storage pool? If yes, no copy */
VDI vmtmpltvdi = null;
VDI snapshotvdi = null;
Set<VDI> vdis = VDI.getByNameLabel(conn, "Template " + cmd.getName());
@ -2124,20 +2110,22 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
s_logger.warn(msg);
return new DownloadAnswer(null, 0, msg, com.cloud.storage.VMTemplateStorageResourceAssoc.Status.DOWNLOAD_ERROR, "", "", 0);
}
vmtmpltvdi = tmpltvdi.copy(conn, poolsr);
vmtmpltvdi.setNameLabel(conn, "Template " + cmd.getName());
vmtmpltvdi = cloudVDIcopy(tmpltvdi, poolsr);
snapshotvdi = vmtmpltvdi.snapshot(conn, new HashMap<String, String>());
vmtmpltvdi.destroy(conn);
snapshotvdi.setNameLabel(conn, "Template " + cmd.getName());
// vmtmpltvdi.setNameDescription(conn, cmd.getDescription());
uuid = vmtmpltvdi.getUuid(conn);
uuid = snapshotvdi.getUuid(conn);
vmtmpltvdi = snapshotvdi;
} else
uuid = vmtmpltvdi.getUuid(conn);
// Determine the size of the template
long createdSize = vmtmpltvdi.getVirtualSize(conn);
long phySize = vmtmpltvdi.getPhysicalUtilisation(conn);
DownloadAnswer answer = new DownloadAnswer(null, 100, cmd, com.cloud.storage.VMTemplateStorageResourceAssoc.Status.DOWNLOADED, uuid, uuid);
answer.setTemplateSize(createdSize);
answer.setTemplateSize(phySize);
return answer;
@ -3187,13 +3175,6 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
Ternary<SR, VDI, VolumeVO> mount = mounts.get(0);
if (!patchSystemVm(mount.second(), vmName)) { // FIXME make this
// nonspecific
String msg = "patch system vm failed";
s_logger.warn(msg);
return msg;
}
Set<VM> templates = VM.getByNameLabel(conn, "CentOS 5.3");
if (templates.size() == 0) {
templates = VM.getByNameLabel(conn, "CentOS 5.3 (64-bit)");
@ -3232,6 +3213,17 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
vbdr.type = Types.VbdType.DISK;
VBD.create(conn, vbdr);
/* create CD-ROM VBD */
VBD.Record cdromVBDR = new VBD.Record();
cdromVBDR.VM = vm;
cdromVBDR.empty = true;
cdromVBDR.bootable = false;
cdromVBDR.userdevice = "3";
cdromVBDR.mode = Types.VbdMode.RO;
cdromVBDR.type = Types.VbdType.CD;
VBD cdromVBD = VBD.create(conn, cdromVBDR);
cdromVBD.insert(conn, VDI.getByUuid(conn, _host.systemvmisouuid));
/* create VIF0 */
VIF.Record vifr = new VIF.Record();
@ -3488,8 +3480,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
return false;
return true;
}
protected String callHostPlugin(String plugin, String cmd, String... params) {
//default time out is 300 s
return callHostPluginWithTimeOut(plugin, cmd, 300, params);
}
protected String callHostPluginWithTimeOut(String plugin, String cmd, int timeout, String... params) {
Map<String, String> args = new HashMap<String, String>();
Session slaveSession = null;
Connection slaveConn = null;
@ -3501,15 +3497,13 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
// TODO Auto-generated catch block
e.printStackTrace();
}
slaveConn = new Connection(slaveUrl, 10);
slaveConn = new Connection(slaveUrl, timeout);
slaveSession = Session.slaveLocalLoginWithPassword(slaveConn, _username, _password);
if (s_logger.isDebugEnabled()) {
s_logger.debug("Slave logon successful. session= " + slaveSession);
}
Host host = Host.getByUuid(slaveConn, _host.uuid);
for (int i = 0; i < params.length; i += 2) {
args.put(params[i], params[i + 1]);
}
@ -3530,7 +3524,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
} finally {
if( slaveSession != null) {
try {
slaveSession.localLogout(slaveConn);
Session.localLogout(slaveConn);
} catch (Exception e) {
}
}
@ -3848,13 +3842,9 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
} catch (XenAPIException e) {
String msg = "Unable to disable VLAN network due to " + e.toString();
s_logger.warn(msg, e);
throw new InternalErrorException(msg);
} catch (XmlRpcException e) {
} catch (Exception e) {
String msg = "Unable to disable VLAN network due to " + e.getMessage();
s_logger.warn(msg, e);
throw new InternalErrorException(msg);
} catch (Exception e) {
throw new InternalErrorException(e.getMessage());
}
}
@ -4013,7 +4003,38 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
try {
Host myself = Host.getByUuid(conn, _host.uuid);
_host.pool = getPoolUuid();
boolean findsystemvmiso = false;
Set<SR> srs = SR.getByNameLabel(conn, "XenServer Tools");
if( srs.size() != 1 ) {
throw new CloudRuntimeException("There are " + srs.size() + " SRs with name XenServer Tools");
}
SR sr = srs.iterator().next();
sr.scan(conn);
SR.Record srr = sr.getRecord(conn);
_host.systemvmisouuid = null;
for( VDI vdi : srr.VDIs ) {
VDI.Record vdir = vdi.getRecord(conn);
if(vdir.nameLabel.contains("systemvm-premium")){
_host.systemvmisouuid = vdir.uuid;
break;
}
}
if( _host.systemvmisouuid == null ) {
for( VDI vdi : srr.VDIs ) {
VDI.Record vdir = vdi.getRecord(conn);
if(vdir.nameLabel.contains("systemvm")){
_host.systemvmisouuid = vdir.uuid;
break;
}
}
}
if( _host.systemvmisouuid == null ) {
throw new CloudRuntimeException("can not find systemvmiso");
}
String name = "cloud-private";
if (_privateNetworkName != null) {
name = _privateNetworkName;
@ -4298,63 +4319,64 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}
SCPClient scp = new SCPClient(sshConnection);
File file = new File(_patchPath);
Properties props = new Properties();
props.load(new FileInputStream(file));
String path = _patchPath.substring(0, _patchPath.lastIndexOf(File.separator) + 1);
for (Map.Entry<Object, Object> entry : props.entrySet()) {
String k = (String) entry.getKey();
String v = (String) entry.getValue();
assert (k != null && k.length() > 0 && v != null && v.length() > 0) : "Problems with " + k + "=" + v;
String[] tokens = v.split(",");
String f = null;
if (tokens.length == 3 && tokens[0].length() > 0) {
if (tokens[0].startsWith("/")) {
f = tokens[0];
} else if (tokens[0].startsWith("~")) {
String homedir = System.getenv("HOME");
f = homedir + tokens[0].substring(1) + k;
} else {
f = path + tokens[0] + '/' + k;
}
} else {
f = path + k;
}
String d = tokens[tokens.length - 1];
f = f.replace('/', File.separatorChar);
String p = "0755";
if (tokens.length == 3) {
p = tokens[1];
} else if (tokens.length == 2) {
p = tokens[0];
}
if (!new File(f).exists()) {
s_logger.warn("We cannot locate " + f);
continue;
}
if (s_logger.isDebugEnabled()) {
s_logger.debug("Copying " + f + " to " + d + " on " + hr.address + " with permission " + p);
}
scp.put(f, d, p);
List<File> files = getPatchFiles();
if( files == null || files.isEmpty() ) {
throw new CloudRuntimeException("Can not find patch file");
}
for( File file :files) {
Properties props = new Properties();
props.load(new FileInputStream(file));
for (Map.Entry<Object, Object> entry : props.entrySet()) {
String k = (String) entry.getKey();
String v = (String) entry.getValue();
assert (k != null && k.length() > 0 && v != null && v.length() > 0) : "Problems with " + k + "=" + v;
String[] tokens = v.split(",");
String f = null;
if (tokens.length == 3 && tokens[0].length() > 0) {
if (tokens[0].startsWith("/")) {
f = tokens[0];
} else if (tokens[0].startsWith("~")) {
String homedir = System.getenv("HOME");
f = homedir + tokens[0].substring(1) + k;
} else {
f = path + tokens[0] + '/' + k;
}
} else {
f = path + k;
}
String d = tokens[tokens.length - 1];
f = f.replace('/', File.separatorChar);
String p = "0755";
if (tokens.length == 3) {
p = tokens[1];
} else if (tokens.length == 2) {
p = tokens[0];
}
if (!new File(f).exists()) {
s_logger.warn("We cannot locate " + f);
continue;
}
if (s_logger.isDebugEnabled()) {
s_logger.debug("Copying " + f + " to " + d + " on " + hr.address + " with permission " + p);
}
scp.put(f, d, p);
}
}
} catch (IOException e) {
throw new CloudRuntimeException("Unable to setup the server correctly", e);
} finally {
sshConnection.close();
}
try {
// wait 2 seconds before call plugin
Thread.sleep(2000);
} catch (final InterruptedException ex) {
}
if (!setIptables()) {
s_logger.warn("set xenserver Iptable failed");
}
@ -4372,6 +4394,13 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}
}
protected List<File> getPatchFiles() {
List<File> files = new ArrayList<File>();
File file = new File(_patchPath);
files.add(file);
return files;
}
protected SR getSRByNameLabelandHost(String name) throws BadServerResponse, XenAPIException, XmlRpcException {
Connection conn = getConnection();
Set<SR> srs = SR.getByNameLabel(conn, name);
@ -4429,9 +4458,8 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
SR.Record srr = sr.getRecord(conn);
Set<PBD> pbds = sr.getPBDs(conn);
if (pbds.size() == 0) {
String msg = "There is no PBDs for this SR: " + _host.uuid;
String msg = "There is no PBDs for this SR: " + srr.nameLabel + " on host:" + _host.uuid;
s_logger.warn(msg);
removeSR(sr);
return false;
}
Set<Host> hosts = null;
@ -4485,17 +4513,11 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
protected Answer execute(ModifyStoragePoolCommand cmd) {
StoragePoolVO pool = cmd.getPool();
StoragePoolTO poolTO = new StoragePoolTO(pool);
try {
Connection conn = getConnection();
SR sr = getStorageRepository(conn, pool);
if (!checkSR(sr)) {
String msg = "ModifyStoragePoolCommand checkSR failed! host:" + _host.uuid + " pool: " + pool.getName() + pool.getHostAddress() + pool.getPath();
s_logger.warn(msg);
return new Answer(cmd, false, msg);
}
sr.setNameLabel(conn, pool.getUuid());
sr.setNameDescription(conn, pool.getName());
SR sr = getStorageRepository(conn, poolTO);
long capacity = sr.getPhysicalSize(conn);
long available = capacity - sr.getPhysicalUtilisation(conn);
if (capacity == -1) {
@ -4520,14 +4542,10 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
protected Answer execute(DeleteStoragePoolCommand cmd) {
StoragePoolVO pool = cmd.getPool();
StoragePoolTO poolTO = new StoragePoolTO(pool);
try {
Connection conn = getConnection();
SR sr = getStorageRepository(conn, pool);
if (!checkSR(sr)) {
String msg = "DeleteStoragePoolCommand checkSR failed! host:" + _host.uuid + " pool: " + pool.getName() + pool.getHostAddress() + pool.getPath();
s_logger.warn(msg);
return new Answer(cmd, false, msg);
}
SR sr = getStorageRepository(conn, poolTO);
sr.setNameLabel(conn, pool.getUuid());
sr.setNameDescription(conn, pool.getName());
@ -4937,119 +4955,10 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
}
}
protected SR getIscsiSR(Connection conn, StoragePoolVO pool) {
synchronized (pool.getUuid().intern()) {
Map<String, String> deviceConfig = new HashMap<String, String>();
try {
String target = pool.getHostAddress().trim();
String path = pool.getPath().trim();
if (path.endsWith("/")) {
path = path.substring(0, path.length() - 1);
}
String tmp[] = path.split("/");
if (tmp.length != 3) {
String msg = "Wrong iscsi path " + pool.getPath() + " it should be /targetIQN/LUN";
s_logger.warn(msg);
throw new CloudRuntimeException(msg);
}
String targetiqn = tmp[1].trim();
String lunid = tmp[2].trim();
String scsiid = "";
Set<SR> srs = SR.getByNameLabel(conn, pool.getUuid());
for (SR sr : srs) {
if (!SRType.LVMOISCSI.equals(sr.getType(conn)))
continue;
Set<PBD> pbds = sr.getPBDs(conn);
if (pbds.isEmpty())
continue;
PBD pbd = pbds.iterator().next();
Map<String, String> dc = pbd.getDeviceConfig(conn);
if (dc == null)
continue;
if (dc.get("target") == null)
continue;
if (dc.get("targetIQN") == null)
continue;
if (dc.get("lunid") == null)
continue;
if (target.equals(dc.get("target")) && targetiqn.equals(dc.get("targetIQN")) && lunid.equals(dc.get("lunid"))) {
return sr;
}
}
deviceConfig.put("target", target);
deviceConfig.put("targetIQN", targetiqn);
Host host = Host.getByUuid(conn, _host.uuid);
SR sr = null;
try {
sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.LVMOISCSI.toString(), "user", true, new HashMap<String, String>());
} catch (XenAPIException e) {
String errmsg = e.toString();
if (errmsg.contains("SR_BACKEND_FAILURE_107")) {
String lun[] = errmsg.split("<LUN>");
boolean found = false;
for (int i = 1; i < lun.length; i++) {
int blunindex = lun[i].indexOf("<LUNid>") + 7;
int elunindex = lun[i].indexOf("</LUNid>");
String ilun = lun[i].substring(blunindex, elunindex);
ilun = ilun.trim();
if (ilun.equals(lunid)) {
int bscsiindex = lun[i].indexOf("<SCSIid>") + 8;
int escsiindex = lun[i].indexOf("</SCSIid>");
scsiid = lun[i].substring(bscsiindex, escsiindex);
scsiid = scsiid.trim();
found = true;
break;
}
}
if (!found) {
String msg = "can not find LUN " + lunid + " in " + errmsg;
s_logger.warn(msg);
throw new CloudRuntimeException(msg);
}
} else {
String msg = "Unable to create Iscsi SR " + deviceConfig + " due to " + e.toString();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
}
}
deviceConfig.put("SCSIid", scsiid);
sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.LVMOISCSI.toString(), "user", true, new HashMap<String, String>());
if( !checkSR(sr) ) {
throw new Exception("no attached PBD");
}
sr.scan(conn);
return sr;
} catch (XenAPIException e) {
String msg = "Unable to create Iscsi SR " + deviceConfig + " due to " + e.toString();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
} catch (Exception e) {
String msg = "Unable to create Iscsi SR " + deviceConfig + " due to " + e.getMessage();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
}
}
}
protected SR getIscsiSR(Connection conn, StoragePoolTO pool) {
protected SR getIscsiSR(StoragePoolTO pool) {
Connection conn = getConnection();
synchronized (pool.getUuid().intern()) {
Map<String, String> deviceConfig = new HashMap<String, String>();
try {
@ -5098,6 +5007,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
if (checkSR(sr)) {
return sr;
}
throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + "on host:" + _host.uuid);
}
}
@ -5157,13 +5067,12 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}
}
protected SR getNfsSR(StoragePoolVO pool) {
protected SR getNfsSR(StoragePoolTO pool) {
Connection conn = getConnection();
Map<String, String> deviceConfig = new HashMap<String, String>();
try {
String server = pool.getHostAddress();
String server = pool.getHost();
String serverpath = pool.getPath();
serverpath = serverpath.replace("//", "/");
Set<SR> srs = SR.getAll(conn);
@ -5192,59 +5101,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
if (checkSR(sr)) {
return sr;
}
}
}
deviceConfig.put("server", server);
deviceConfig.put("serverpath", serverpath);
Host host = Host.getByUuid(conn, _host.uuid);
SR sr = SR.create(conn, host, deviceConfig, new Long(0), pool.getUuid(), pool.getName(), SRType.NFS.toString(), "user", true, new HashMap<String, String>());
sr.scan(conn);
return sr;
} catch (XenAPIException e) {
String msg = "Unable to create NFS SR " + deviceConfig + " due to " + e.toString();
s_logger.warn(msg, e);
throw new CloudRuntimeException(msg, e);
} catch (Exception e) {
String msg = "Unable to create NFS SR " + deviceConfig + " due to " + e.getMessage();
s_logger.warn(msg);
throw new CloudRuntimeException(msg, e);
}
}
protected SR getNfsSR(Connection conn, StoragePoolTO pool) {
Map<String, String> deviceConfig = new HashMap<String, String>();
String server = pool.getHost();
String serverpath = pool.getPath();
serverpath = serverpath.replace("//", "/");
try {
Set<SR> srs = SR.getAll(conn);
for (SR sr : srs) {
if (!SRType.NFS.equals(sr.getType(conn)))
continue;
Set<PBD> pbds = sr.getPBDs(conn);
if (pbds.isEmpty())
continue;
PBD pbd = pbds.iterator().next();
Map<String, String> dc = pbd.getDeviceConfig(conn);
if (dc == null)
continue;
if (dc.get("server") == null)
continue;
if (dc.get("serverpath") == null)
continue;
if (server.equals(dc.get("server")) && serverpath.equals(dc.get("serverpath"))) {
return sr;
throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + "on host:" + _host.uuid);
}
}
@ -5331,6 +5188,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
public CopyVolumeAnswer execute(final CopyVolumeCommand cmd) {
String volumeUUID = cmd.getVolumePath();
StoragePoolVO pool = cmd.getPool();
StoragePoolTO poolTO = new StoragePoolTO(pool);
String secondaryStorageURL = cmd.getSecondaryStorageURL();
URI uri = null;
@ -5364,7 +5222,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
srcVolume = getVDIbyUuid(volumeUUID);
// Copy the volume to secondary storage
destVolume = srcVolume.copy(conn, secondaryStorage);
destVolume = cloudVDIcopy(srcVolume, secondaryStorage);
} else {
// Mount the volume folder
secondaryStorage = createNfsSRbyURI(new URI(secondaryStorageURL + "/volumes/" + volumeFolder), false);
@ -5383,8 +5241,8 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
}
// Copy the volume to the primary storage pool
primaryStoragePool = getStorageRepository(conn, pool);
destVolume = srcVolume.copy(conn, primaryStoragePool);
primaryStoragePool = getStorageRepository(conn, poolTO);
destVolume = cloudVDIcopy(srcVolume, primaryStoragePool);
}
String srUUID;
@ -5773,7 +5631,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
// Look up the snapshot and copy it to secondary storage
VDI snapshot = getVDIbyUuid(snapshotUUID);
privateTemplate = snapshot.copy(conn, secondaryStorage);
privateTemplate = cloudVDIcopy(snapshot, secondaryStorage);
if (userSpecifiedName != null) {
privateTemplate.setNameLabel(conn, userSpecifiedName);
@ -6019,7 +5877,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
if (vdi != null) {
s_logger.debug("Successfully created VDI on secondary storage SR " + temporarySROnSecondaryStorage.getNameLabel(conn) + " with uuid " + vhdUUID);
s_logger.debug("Copying VDI: " + vdi.getLocation(conn) + " from secondary to primary");
VDI vdiOnPrimaryStorage = vdi.copy(conn, primaryStorageSR);
VDI vdiOnPrimaryStorage = cloudVDIcopy(vdi, primaryStorageSR);
// vdi.copy introduces the vdi into the database. Don't
// need to do a scan on the primary
// storage.
@ -6257,40 +6115,6 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.getMessage(), e);
}
if (srs.size() > 1) {
throw new CloudRuntimeException("More than one storage repository was found for pool with uuid: " + pool.getUuid());
}
if (srs.size() == 1) {
SR sr = srs.iterator().next();
if (s_logger.isDebugEnabled()) {
s_logger.debug("SR retrieved for " + pool.getId() + " is mapped to " + sr.toString());
}
if (checkSR(sr)) {
return sr;
}
}
if (pool.getType() == StoragePoolType.NetworkFilesystem)
return getNfsSR(conn, pool);
else if (pool.getType() == StoragePoolType.IscsiLUN)
return getIscsiSR(conn, pool);
else
throw new CloudRuntimeException("The pool type: " + pool.getType().name() + " is not supported.");
}
protected SR getStorageRepository(Connection conn, StoragePoolVO pool) {
Set<SR> srs;
try {
srs = SR.getByNameLabel(conn, pool.getUuid());
} catch (XenAPIException e) {
throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.toString(), e);
} catch (Exception e) {
throw new CloudRuntimeException("Unable to get SR " + pool.getUuid() + " due to " + e.getMessage(), e);
}
if (srs.size() > 1) {
throw new CloudRuntimeException("More than one storage repository was found for pool with uuid: " + pool.getUuid());
} else if (srs.size() == 1) {
@ -6302,15 +6126,15 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
if (checkSR(sr)) {
return sr;
}
throw new CloudRuntimeException("Check this SR failed");
throw new CloudRuntimeException("SR check failed for storage pool: " + pool.getUuid() + "on host:" + _host.uuid);
} else {
if (pool.getPoolType() == StoragePoolType.NetworkFilesystem)
if (pool.getType() == StoragePoolType.NetworkFilesystem)
return getNfsSR(pool);
else if (pool.getPoolType() == StoragePoolType.IscsiLUN)
return getIscsiSR(conn, pool);
else if (pool.getType() == StoragePoolType.IscsiLUN)
return getIscsiSR(pool);
else
throw new CloudRuntimeException("The pool type: " + pool.getPoolType().name() + " is not supported.");
throw new CloudRuntimeException("The pool type: " + pool.getType().name() + " is not supported.");
}
}
@ -6385,7 +6209,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
checksum = "";
}
String result = callHostPlugin("vmopsSnapshot", "post_create_private_template", "remoteTemplateMountPath", remoteTemplateMountPath, "templateDownloadFolder", templateDownloadFolder,
String result = callHostPluginWithTimeOut("vmopsSnapshot", "post_create_private_template", 110*60, "remoteTemplateMountPath", remoteTemplateMountPath, "templateDownloadFolder", templateDownloadFolder,
"templateInstallFolder", templateInstallFolder, "templateFilename", templateFilename, "templateName", templateName, "templateDescription", templateDescription,
"checksum", checksum, "virtualSize", String.valueOf(virtualSize), "templateId", String.valueOf(templateId));
@ -6423,7 +6247,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
// Each argument is put in a separate line for readability.
// Using more lines does not harm the environment.
String results = callHostPlugin("vmopsSnapshot", "backupSnapshot", "primaryStorageSRUuid", primaryStorageSRUuid, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId",
String results = callHostPluginWithTimeOut("vmopsSnapshot", "backupSnapshot", 110*60, "primaryStorageSRUuid", primaryStorageSRUuid, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId",
volumeId.toString(), "secondaryStorageMountPath", secondaryStorageMountPath, "snapshotUuid", snapshotUuid, "prevSnapshotUuid", prevSnapshotUuid, "prevBackupUuid",
prevBackupUuid, "isFirstSnapshotOfRootVolume", isFirstSnapshotOfRootVolume.toString(), "isISCSI", isISCSI.toString());
@ -6526,7 +6350,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
String failureString = "Could not create volume from " + backedUpSnapshotUuid;
templatePath = (templatePath == null) ? "" : templatePath;
String results = callHostPlugin("vmopsSnapshot", "createVolumeFromSnapshot", "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId", volumeId.toString(),
String results = callHostPluginWithTimeOut("vmopsSnapshot","createVolumeFromSnapshot", 110*60, "dcId", dcId.toString(), "accountId", accountId.toString(), "volumeId", volumeId.toString(),
"secondaryStorageMountPath", secondaryStorageMountPath, "backedUpSnapshotUuid", backedUpSnapshotUuid, "templatePath", templatePath, "templateDownloadFolder",
templateDownloadFolder, "isISCSI", isISCSI.toString());
@ -6639,6 +6463,7 @@ public abstract class CitrixResourceBase implements StoragePoolResource, ServerR
// the resource first connects to XenServer. These UUIDs do
// not change over time.
protected class XenServerHost {
public String systemvmisouuid;
public String uuid;
public String ip;
public String publicNetwork;


@ -76,7 +76,8 @@ public class Criteria {
public static final String TARGET_IQN = "targetiqn";
public static final String SCOPE = "scope";
public static final String NETWORKGROUP = "networkGroup";
public static final String GROUP = "group";
public static final String EMPTY_GROUP = "emptyGroup";
public Criteria(String orderBy, Boolean ascending, Long offset, Long limit) {
this.offset = offset;
@@ -615,8 +615,8 @@ public interface ManagementServer {
* @volumeId
* @throws InvalidParameterValueException, InternalErrorException
*/
void detachVolumeFromVM(long volumeId, long startEventId) throws InternalErrorException;
long detachVolumeFromVMAsync(long volumeId) throws InvalidParameterValueException;
void detachVolumeFromVM(long volumeId, long startEventId, long deviceId, long instanceId) throws InternalErrorException;
long detachVolumeFromVMAsync(long volumeId, long deviceId, long instanceId) throws InvalidParameterValueException;
/**
* Attaches an ISO to the virtual CDROM device of the specified VM. Will fail if the VM already has an ISO mounted.
@@ -1836,14 +1836,14 @@ public interface ManagementServer {
* @param tags Comma separated string to indicate special tags for the disk offering.
* @return the created disk offering, null if failed to create
*/
DiskOfferingVO createDiskOffering(long domainId, String name, String description, int numGibibytes, String tags) throws InvalidParameterValueException;
DiskOfferingVO createDiskOffering(long userId, long domainId, String name, String description, int numGibibytes, String tags) throws InvalidParameterValueException;
/**
* Delete a disk offering
* @param id id of the disk offering to delete
* @return true if deleted, false otherwise
*/
boolean deleteDiskOffering(long id);
boolean deleteDiskOffering(long userId, long id);
/**
* Update a disk offering
@@ -2195,4 +2195,8 @@ public interface ManagementServer {
*/
void extractTemplate(String url, Long templateId, Long zoneId) throws URISyntaxException;
Map<String, String> listCapabilities();
GuestOSVO getGuestOs(Long guestOsId);
VolumeVO findVolumeByInstanceAndDeviceId(long instanceId, long deviceId);
VolumeVO getRootVolume(Long instanceId);
}
@@ -90,6 +90,10 @@ public class VolumeVO implements Volume {
@Column(name="created")
Date created;
@Column(name="attached")
@Temporal(value=TemporalType.TIMESTAMP)
Date attached;
@Column(name="data_center_id")
long dataCenterId;
@@ -535,4 +539,15 @@ public class VolumeVO implements Volume {
public Long getSourceId(){
return this.sourceId;
}
@Override
public Date getAttached(){
return this.attached;
}
@Override
public void setAttached(Date attached){
this.attached = attached;
}
}
@@ -46,4 +46,5 @@ public interface VolumeDao extends GenericDao<VolumeVO, Long> {
List<VolumeVO> listRemovedButNotDestroyed();
List<VolumeVO> findCreatedByInstance(long id);
List<VolumeVO> findByPoolId(long poolId);
List<VolumeVO> findByInstanceAndDeviceId(long instanceId, long deviceId);
}
@@ -61,6 +61,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
protected final GenericSearchBuilder<VolumeVO, Long> ActiveTemplateSearch;
protected final SearchBuilder<VolumeVO> RemovedButNotDestroyedSearch;
protected final SearchBuilder<VolumeVO> PoolIdSearch;
protected final SearchBuilder<VolumeVO> InstanceAndDeviceIdSearch;
protected static final String SELECT_VM_SQL = "SELECT DISTINCT instance_id from volumes v where v.host_id = ? and v.mirror_state = ?";
protected static final String SELECT_VM_ID_SQL = "SELECT DISTINCT instance_id from volumes v where v.host_id = ?";
@@ -117,6 +118,14 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
sc.setParameters("instanceId", id);
return listActiveBy(sc);
}
@Override
public List<VolumeVO> findByInstanceAndDeviceId(long instanceId, long deviceId){
SearchCriteria<VolumeVO> sc = InstanceAndDeviceIdSearch.create();
sc.setParameters("instanceId", instanceId);
sc.setParameters("deviceId", deviceId);
return listActiveBy(sc);
}
@Override
public List<VolumeVO> findByPoolId(long poolId) {
@@ -234,6 +243,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
volume.setInstanceId(vmId);
volume.setDeviceId(deviceId);
volume.setUpdated(new Date());
volume.setAttached(new Date());
update(volumeId, volume);
}
@@ -243,6 +253,7 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
volume.setInstanceId(null);
volume.setDeviceId(null);
volume.setUpdated(new Date());
volume.setAttached(null);
update(volumeId, volume);
}
@@ -302,6 +313,11 @@ public class VolumeDaoImpl extends GenericDaoBase<VolumeVO, Long> implements Vol
InstanceIdSearch.and("instanceId", InstanceIdSearch.entity().getInstanceId(), SearchCriteria.Op.EQ);
InstanceIdSearch.done();
InstanceAndDeviceIdSearch = createSearchBuilder();
InstanceAndDeviceIdSearch.and("instanceId", InstanceAndDeviceIdSearch.entity().getInstanceId(), SearchCriteria.Op.EQ);
InstanceAndDeviceIdSearch.and("deviceId", InstanceAndDeviceIdSearch.entity().getDeviceId(), SearchCriteria.Op.EQ);
InstanceAndDeviceIdSearch.done();
PoolIdSearch = createSearchBuilder();
PoolIdSearch.and("poolId", PoolIdSearch.entity().getPoolId(), SearchCriteria.Op.EQ);
PoolIdSearch.done();
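The new `InstanceAndDeviceIdSearch` above follows the DAO SearchBuilder pattern: equality conditions are declared once at construction time, then bound to concrete values per query. A toy, self-contained Python model of that pattern (all class names here are illustrative, not the CloudStack `GenericDao` API):

```python
class SearchBuilder:
    """Declare named equality conditions once; create criteria per query."""
    def __init__(self):
        self.fields = []
    def and_eq(self, name, column):
        self.fields.append((name, column))
        return self
    def create(self):
        return SearchCriteria(self.fields)

class SearchCriteria:
    def __init__(self, fields):
        self.fields = fields
        self.params = {}
    def set_parameters(self, name, value):
        self.params[name] = value
    def matches(self, row):
        # All declared conditions must hold, mirroring chained .and(...) EQ terms.
        return all(row.get(col) == self.params.get(name)
                   for name, col in self.fields)

# Mirrors InstanceAndDeviceIdSearch: instanceId EQ and deviceId EQ.
builder = SearchBuilder().and_eq("instanceId", "instance_id") \
                         .and_eq("deviceId", "device_id")
sc = builder.create()
sc.set_parameters("instanceId", 7)
sc.set_parameters("deviceId", 0)
rows = [{"instance_id": 7, "device_id": 0}, {"instance_id": 7, "device_id": 1}]
matched = [r for r in rows if sc.matches(r)]
```

The point of the split between builder and criteria is that the (immutable) condition structure is built once in the DAO constructor, while parameter values are supplied fresh on every `findByInstanceAndDeviceId` call.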
@@ -303,7 +303,7 @@ public class DownloadManagerImpl implements DownloadManager {
}
// add options common to ISO and template
String extension = dnld.getFormat().toString().toLowerCase();
String extension = dnld.getFormat().getFileExtension();
String templateName = "";
if( extension.equals("iso")) {
templateName = jobs.get(jobId).getTmpltName().trim().replace(" ", "_");
@@ -353,6 +353,7 @@ public class DownloadManagerImpl implements DownloadManager {
try {
info = processor.process(templatePath, null, templateName);
} catch (InternalErrorException e) {
s_logger.error("Template process exception ", e);
return e.toString();
}
if (info != null) {
@@ -781,6 +782,11 @@ public class DownloadManagerImpl implements DownloadManager {
processor = new QCOW2Processor();
processor.configure("QCOW2 Processor", params);
processors.add(processor);
processor = new VmdkProcessor();
processor.configure("VMDK Processor", params);
processors.add(processor);
// Add more processors here.
threadPool = Executors.newFixedThreadPool(numInstallThreads);
return true;
@@ -0,0 +1,69 @@
package com.cloud.storage.template;
import java.io.File;
import java.util.Map;
import javax.naming.ConfigurationException;
import org.apache.log4j.Logger;
import com.cloud.exception.InternalErrorException;
import com.cloud.storage.StorageLayer;
import com.cloud.storage.Storage.ImageFormat;
public class VmdkProcessor implements Processor {
private static final Logger s_logger = Logger.getLogger(VmdkProcessor.class);
String _name;
StorageLayer _storage;
@Override
public FormatInfo process(String templatePath, ImageFormat format, String templateName) throws InternalErrorException {
if (format != null) {
if(s_logger.isInfoEnabled())
s_logger.info("We currently don't handle conversion from " + format + " to VMDK.");
return null;
}
s_logger.info("Template processing. templatePath: " + templatePath + ", templateName: " + templateName);
String templateFilePath = templatePath + File.separator + templateName + "." + ImageFormat.VMDK.getFileExtension();
if (!_storage.exists(templateFilePath)) {
if(s_logger.isInfoEnabled())
s_logger.info("Unable to find the vmware template file: " + templateFilePath);
return null;
}
FormatInfo info = new FormatInfo();
info.format = ImageFormat.VMDK;
info.filename = templateName + "." + ImageFormat.VMDK.getFileExtension();
info.size = _storage.getSize(templateFilePath);
info.virtualSize = info.size;
return info;
}
@Override
public boolean configure(String name, Map<String, Object> params) throws ConfigurationException {
_name = name;
_storage = (StorageLayer)params.get(StorageLayer.InstanceConfigKey);
if (_storage == null) {
throw new ConfigurationException("Unable to get storage implementation");
}
return true;
}
@Override
public String getName() {
return _name;
}
@Override
public boolean start() {
return true;
}
@Override
public boolean stop() {
return true;
}
}
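The new `VmdkProcessor` above refuses format conversion, derives the expected `<templateName>.vmdk` path, and reports the on-disk size as both `size` and `virtualSize` (it does not parse a VMDK header for a virtual size). A hedged Python sketch of that `process()` logic, with `exists`/`get_size` callables standing in for the `StorageLayer` calls:

```python
import os

def vmdk_format_info(template_path, template_name, exists, get_size):
    """Sketch of VmdkProcessor.process(): locate <templateName>.vmdk
    and report its on-disk size as both size and virtualSize."""
    file_path = os.path.join(template_path, template_name + ".vmdk")
    if not exists(file_path):
        # Matches the processor returning null when the file is absent.
        return None
    size = get_size(file_path)
    return {"format": "VMDK", "filename": template_name + ".vmdk",
            "size": size, "virtualSize": size}

info = vmdk_format_info("/tmp/tmpl", "routing",
                        lambda p: True, lambda p: 4096)
```

Registering the processor in `DownloadManagerImpl` (the earlier hunk) is what lets downloaded VMDK templates be recognized alongside QCOW2 and the other formats.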
@@ -116,6 +116,8 @@ public class UserVmDaoImpl extends GenericDaoBase<UserVmVO, Long> implements Use
DestroySearch.and("updateTime", DestroySearch.entity().getUpdateTime(), SearchCriteria.Op.LT);
DestroySearch.done();
_updateTimeAttr = _allAttributes.get("updateTime");
assert _updateTimeAttr != null : "Couldn't get this updateTime attribute";
}
@@ -80,4 +80,5 @@ public interface VMInstanceDao extends GenericDao<VMInstanceVO, Long> {
List<VMInstanceVO> listByHostIdTypes(long hostid, VirtualMachine.Type... types);
List<VMInstanceVO> listUpByHostIdTypes(long hostid, VirtualMachine.Type... types);
List<VMInstanceVO> listByZoneIdAndType(long zoneId, VirtualMachine.Type type);
}
@@ -51,6 +51,7 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
protected final SearchBuilder<VMInstanceVO> HostSearch;
protected final SearchBuilder<VMInstanceVO> LastHostSearch;
protected final SearchBuilder<VMInstanceVO> ZoneSearch;
protected final SearchBuilder<VMInstanceVO> ZoneVmTypeSearch;
protected final SearchBuilder<VMInstanceVO> ZoneTemplateNonExpungedSearch;
protected final SearchBuilder<VMInstanceVO> NameLikeSearch;
protected final SearchBuilder<VMInstanceVO> StateChangeSearch;
@@ -79,6 +80,11 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
ZoneSearch = createSearchBuilder();
ZoneSearch.and("zone", ZoneSearch.entity().getDataCenterId(), SearchCriteria.Op.EQ);
ZoneSearch.done();
ZoneVmTypeSearch = createSearchBuilder();
ZoneVmTypeSearch.and("zone", ZoneVmTypeSearch.entity().getDataCenterId(), SearchCriteria.Op.EQ);
ZoneVmTypeSearch.and("type", ZoneVmTypeSearch.entity().getType(), SearchCriteria.Op.EQ);
ZoneVmTypeSearch.done();
ZoneTemplateNonExpungedSearch = createSearchBuilder();
ZoneTemplateNonExpungedSearch.and("zone", ZoneTemplateNonExpungedSearch.entity().getDataCenterId(), SearchCriteria.Op.EQ);
@@ -193,6 +199,15 @@ public class VMInstanceDaoImpl extends GenericDaoBase<VMInstanceVO, Long> implem
return listActiveBy(sc);
}
@Override
public List<VMInstanceVO> listByZoneIdAndType(long zoneId, VirtualMachine.Type type) {
SearchCriteria<VMInstanceVO> sc = ZoneVmTypeSearch.create();
sc.setParameters("zone", zoneId);
sc.setParameters("type", type.toString());
return listActiveBy(sc);
}
@Override
public List<VMInstanceVO> listNonExpungedByZoneAndTemplate(long zoneId, long templateId) {
SearchCriteria<VMInstanceVO> sc = ZoneTemplateNonExpungedSearch.create();
daemonize/wscript_build Normal file
@@ -0,0 +1,7 @@
if bld.env.DISTRO not in ['Windows','Mac']:
# build / install declarations of the daemonization utility - except for Windows
bld(
name='daemonize',
features='cc cprogram',
source='daemonize.c',
target='cloud-daemonize')
@@ -1,2 +1,29 @@
/usr/lib/cloud/agent/scripts/*
/usr/lib/cloud/agent/scripts/installer/*
/usr/lib/cloud/agent/scripts/network/*
/usr/lib/cloud/agent/scripts/storage/*
/usr/lib/cloud/agent/scripts/util/*
/usr/lib/cloud/agent/scripts/vm/network/*
/usr/lib/cloud/agent/scripts/vm/pingtest.sh
/usr/lib/cloud/agent/scripts/vm/storage/*
/usr/lib/cloud/agent/scripts/vm/hypervisor/kvm/*
/usr/lib/cloud/agent/scripts/vm/hypervisor/versions.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xen/*
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/hostvmstats.py
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/id_rsa.cloud
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/make_migratable.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/network_info.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/networkUsage.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/setup_iscsi.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/setupxenserver.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/vmops
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/vmopsSnapshot
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xcpserver/*
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/cleanup.py
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/ISCSISR.py
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/LUNperVDI.py
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/nfs.py
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/NFSSR.py
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/scsiutil.py
/usr/lib/cloud/agent/vms/systemvm.zip
/usr/lib/cloud/agent/vms/systemvm.iso
@@ -17,8 +17,6 @@ case "$1" in
chgrp cloud $i
done
test -f /var/lib/cloud/management/.ssh/id_rsa || su - cloud -c 'yes "" | ssh-keygen -t rsa -q -N ""' < /dev/null
for i in /etc/cloud/management/db.properties
do
chmod 0640 $i
@@ -4,6 +4,12 @@
/etc/cloud/management/components-premium.xml
/usr/share/cloud/setup/create-database-premium.sql
/usr/share/cloud/setup/create-schema-premium.sql
/usr/lib/cloud/agent/scripts/vm/hypervisor/xen/*
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/*
/usr/lib/cloud/agent/vms/systemvm-premium.zip
/usr/lib/cloud/agent/vms/systemvm-premium.iso
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/check_heartbeat.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/find_bond.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/launch_hb.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/setup_heartbeat_sr.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/vmopspremium
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenheartbeat.sh
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xenserver56/patch-premium
/usr/lib/cloud/agent/scripts/vm/hypervisor/xenserver/xs_cleanup.sh
@@ -4,11 +4,14 @@
/usr/share/cloud/setup/create-index-fk.sql
/usr/share/cloud/setup/create-schema.sql
/usr/share/cloud/setup/server-setup.sql
/usr/share/cloud/setup/templates.kvm.sql
/usr/share/cloud/setup/templates.xenserver.sql
/usr/share/cloud/setup/templates.*.sql
/usr/share/cloud/setup/deploy-db-dev.sh
/usr/share/cloud/setup/server-setup.xml
/usr/share/cloud/setup/data-20to21.sql
/usr/share/cloud/setup/index-20to21.sql
/usr/share/cloud/setup/index-212to213.sql
/usr/share/cloud/setup/postprocess-20to21.sql
/usr/share/cloud/setup/schema-20to21.sql
/usr/share/cloud/setup/schema-level.sql
/usr/share/cloud/setup/schema-21to22.sql
/usr/share/cloud/setup/data-21to22.sql
debian/control vendored
@@ -2,7 +2,7 @@ Source: cloud
Section: libs
Priority: extra
Maintainer: Manuel Amador (Rudd-O) <manuel@cloud.com>
Build-Depends: debhelper (>= 7), openjdk-6-jdk, tomcat6, libws-commons-util-java, libcommons-dbcp-java, libcommons-collections-java, libcommons-httpclient-java, libservlet2.5-java
Build-Depends: debhelper (>= 7), openjdk-6-jdk, tomcat6, libws-commons-util-java, libcommons-dbcp-java, libcommons-collections-java, libcommons-httpclient-java, libservlet2.5-java, genisoimage, python-mysqldb
Standards-Version: 3.8.1
Homepage: http://techcenter.cloud.com/software/cloudstack
@@ -128,7 +128,7 @@ Provides: vmops-setup
Conflicts: vmops-setup
Replaces: vmops-setup
Architecture: any
Depends: openjdk-6-jre, python, cloud-utils (= ${source:Version}), mysql-client, cloud-deps (= ${source:Version}), cloud-server (= ${source:Version}), cloud-python (= ${source:Version}), python-mysqldb
Depends: openjdk-6-jre, python, cloud-utils (= ${source:Version}), cloud-deps (= ${source:Version}), cloud-server (= ${source:Version}), cloud-python (= ${source:Version}), python-mysqldb
Description: Cloud.com client
The Cloud.com setup tools let you set up your Management Server and Usage Server.
debian/rules vendored
@@ -91,7 +91,7 @@ binary-common:
dh_testdir
dh_testroot
dh_installchangelogs
dh_installdocs -A README INSTALL HACKING README.html
dh_installdocs -A README.html
# dh_installexamples
# dh_installmenu
# dh_installdebconf
deps/wscript_build vendored Normal file
@@ -0,0 +1 @@
bld.install_files('${JAVADIR}','*.jar')
@@ -1,223 +0,0 @@
#! /bin/bash
# chkconfig: 35 09 90
# description: pre-boot configuration using boot line parameters
# This file exists in /etc/init.d/
replace_in_file() {
local filename=$1
local keyname=$2
local value=$3
sed -i /$keyname=/d $filename
echo "$keyname=$value" >> $filename
return $?
}
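`replace_in_file` above makes key=value assignment idempotent: `sed` first deletes every line mentioning `$keyname=`, then a fresh assignment is appended. A pure-text Python model of the same behavior (function name is illustrative):

```python
def replace_in_file_text(text, key, value):
    """Model of the shell replace_in_file: drop every line that
    contains 'key=' anywhere, then append a fresh 'key=value' line."""
    kept = [ln for ln in text.splitlines() if (key + "=") not in ln]
    kept.append("%s=%s" % (key, value))
    return "\n".join(kept) + "\n"
```

Note the match is a substring match, exactly like the unanchored `sed /$keyname=/d`, so repeated calls never accumulate duplicate assignments.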
setup_interface() {
local intfnum=$1
local ip=$2
local mask=$3
cfg=/etc/sysconfig/network-scripts/ifcfg-eth${intfnum}
replace_in_file ${cfg} IPADDR ${ip}
replace_in_file ${cfg} NETMASK ${mask}
replace_in_file ${cfg} BOOTPROTO STATIC
if [ "$ip" == "0.0.0.0" ]
then
replace_in_file ${cfg} ONBOOT No
else
replace_in_file ${cfg} ONBOOT Yes
fi
}
setup_common() {
setup_interface "0" $ETH0_IP $ETH0_MASK
setup_interface "1" $ETH1_IP $ETH1_MASK
setup_interface "2" $ETH2_IP $ETH2_MASK
replace_in_file /etc/sysconfig/network GATEWAY $GW
replace_in_file /etc/sysconfig/network HOSTNAME $NAME
echo "NOZEROCONF=yes" >> /etc/sysconfig/network
hostname $NAME
#Nameserver
if [ -n "$NS1" ]
then
echo "nameserver $NS1" > /etc/dnsmasq-resolv.conf
echo "nameserver $NS1" > /etc/resolv.conf
fi
if [ -n "$NS2" ]
then
echo "nameserver $NS2" >> /etc/dnsmasq-resolv.conf
echo "nameserver $NS2" >> /etc/resolv.conf
fi
if [[ -n "$MGMTNET" && -n "$LOCAL_GW" ]]
then
echo "$MGMTNET via $LOCAL_GW dev eth1" > /etc/sysconfig/network-scripts/route-eth1
fi
}
setup_router() {
setup_common
[ -z $DHCP_RANGE ] && DHCP_RANGE=$ETH0_IP
if [ -n "$DOMAIN" ]
then
#send domain name to dhcp clients
sed -i s/[#]*dhcp-option=15.*$/dhcp-option=15,\"$DOMAIN\"/ /etc/dnsmasq.conf
#DNS server will append $DOMAIN to local queries
sed -r -i s/^[#]?domain=.*$/domain=$DOMAIN/ /etc/dnsmasq.conf
#answer all local domain queries
sed -i -e "s/^[#]*local=.*$/local=\/$DOMAIN\//" /etc/dnsmasq.conf
fi
sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
sed -i -e "s/^[#]*listen-address=.*$/listen-address=$ETH0_IP/" /etc/dnsmasq.conf
sed -i /gateway/d /etc/hosts
echo "$ETH0_IP $NAME" >> /etc/hosts
[ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
[ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
[ -f /etc/ssh/sshd_config ] && sed -i -e "s/^[#]*ListenAddress.*$/ListenAddress $ETH1_IP/" /etc/ssh/sshd_config
}
setup_dhcpsrvr() {
setup_common
[ -z $DHCP_RANGE ] && DHCP_RANGE=$ETH0_IP
if [ -n "$DOMAIN" ]
then
#send domain name to dhcp clients
sed -i s/[#]*dhcp-option=15.*$/dhcp-option=15,\"$DOMAIN\"/ /etc/dnsmasq.conf
#DNS server will append $DOMAIN to local queries
sed -r -i s/^[#]?domain=.*$/domain=$DOMAIN/ /etc/dnsmasq.conf
#answer all local domain queries
sed -i -e "s/^[#]*local=.*$/local=\/$DOMAIN\//" /etc/dnsmasq.conf
else
#delete domain option
sed -i /^dhcp-option=15.*$/d /etc/dnsmasq.conf
sed -i /^domain=.*$/d /etc/dnsmasq.conf
sed -i -e "/^local=.*$/d" /etc/dnsmasq.conf
fi
sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
sed -i -e "s/^[#]*dhcp-option=option:router.*$/dhcp-option=option:router,$GW/" /etc/dnsmasq.conf
echo "dhcp-option=6,$NS1,$NS2" >> /etc/dnsmasq.conf
sed -i /gateway/d /etc/hosts
echo "$ETH0_IP $NAME" >> /etc/hosts
[ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
[ -f /etc/httpd/conf.d/ssl.conf ] && mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
}
setup_secstorage() {
setup_common
sed -i /gateway/d /etc/hosts
public_ip=$ETH2_IP
[ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
echo "$public_ip $NAME" >> /etc/hosts
[ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:80$/Listen $public_ip:80/" /etc/httpd/conf/httpd.conf
[ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*:443$/Listen $public_ip:443/" /etc/httpd/conf/httpd.conf
}
setup_console_proxy() {
setup_common
public_ip=$ETH2_IP
[ "$ETH2_IP" == "0.0.0.0" ] && public_ip=$ETH1_IP
sed -i /gateway/d /etc/hosts
echo "$public_ip $NAME" >> /etc/hosts
}
if [ -f /mnt/cmdline ]
then
CMDLINE=$(cat /mnt/cmdline)
else
CMDLINE=$(cat /proc/cmdline)
fi
TYPE="router"
for i in $CMDLINE
do
# search for foo=bar pattern and cut out foo
KEY=$(echo $i | cut -d= -f1)
VALUE=$(echo $i | cut -d= -f2)
case $KEY in
eth0ip)
ETH0_IP=$VALUE
;;
eth1ip)
ETH1_IP=$VALUE
;;
eth2ip)
ETH2_IP=$VALUE
;;
gateway)
GW=$VALUE
;;
eth0mask)
ETH0_MASK=$VALUE
;;
eth1mask)
ETH1_MASK=$VALUE
;;
eth2mask)
ETH2_MASK=$VALUE
;;
dns1)
NS1=$VALUE
;;
dns2)
NS2=$VALUE
;;
domain)
DOMAIN=$VALUE
;;
mgmtcidr)
MGMTNET=$VALUE
;;
localgw)
LOCAL_GW=$VALUE
;;
template)
TEMPLATE=$VALUE
;;
name)
NAME=$VALUE
;;
dhcprange)
DHCP_RANGE=$(echo $VALUE | tr ':' ',')
;;
type)
TYPE=$VALUE
;;
esac
done
case $TYPE in
router)
[ "$NAME" == "" ] && NAME=router
setup_router
;;
dhcpsrvr)
[ "$NAME" == "" ] && NAME=dhcpsrvr
setup_dhcpsrvr
;;
secstorage)
[ "$NAME" == "" ] && NAME=secstorage
setup_secstorage;
;;
consoleproxy)
[ "$NAME" == "" ] && NAME=consoleproxy
setup_console_proxy;
;;
esac
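The boot-parameter loop above splits each kernel-command-line token on `=` with `cut`; note that `cut -d= -f2` keeps only the second field, so a value that itself contains `=` would be truncated. A small Python sketch of the same parsing (using a single-split that avoids that truncation):

```python
def parse_cmdline(cmdline):
    """Sketch of the boot-parameter loop: split each whitespace-separated
    token into KEY and VALUE. split('=', 1) preserves values that
    themselves contain '=' characters, unlike `cut -d= -f2`."""
    params = {}
    for token in cmdline.split():
        if "=" in token:
            key, value = token.split("=", 1)
            params[key] = value
    return params

params = parse_cmdline("eth0ip=10.1.1.1 gateway=10.1.1.254 type=dhcpsrvr")
vm_type = params.get("type", "router")  # default mirrors TYPE="router"
```

The `type` parameter then selects which setup function runs, defaulting to the router role exactly as `TYPE="router"` does before the loop.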
if [ ! -d /root/.ssh ]
then
mkdir /root/.ssh
chmod 700 /root/.ssh
fi
if [ -f /mnt/id_rsa.pub ]
then
cat /mnt/id_rsa.pub > /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
fi
@@ -1,33 +0,0 @@
# Generated by iptables-save v1.3.8 on Thu Oct 1 18:16:05 2009
# @VERSION@
*nat
:PREROUTING ACCEPT [499:70846]
:POSTROUTING ACCEPT [1:85]
:OUTPUT ACCEPT [1:85]
COMMIT
# Completed on Thu Oct 1 18:16:06 2009
# Generated by iptables-save v1.3.8 on Thu Oct 1 18:16:06 2009
*filter
#:INPUT DROP [288:42467]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [65:9665]
-A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth1 -p tcp -m tcp --dport 3922 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 8080 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8001 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
-A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 8001 -j ACCEPT
-A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -i eth2 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A FORWARD -i eth0 -o eth1 -j ACCEPT
-A FORWARD -i eth0 -o eth2 -j ACCEPT
-A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Oct 1 18:16:06 2009
@@ -1,48 +0,0 @@
# Load additional iptables modules (nat helpers)
# Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IPTABLES_MODULES="ip_conntrack_ftp nf_nat_ftp"
# Unload modules on restart and stop
# Value: yes|no, default: yes
# This option has to be 'yes' to get to a sane state for a firewall
# restart or stop. Only set to 'no' if there are problems unloading netfilter
# modules.
IPTABLES_MODULES_UNLOAD="yes"
# Save current firewall rules on stop.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
# (e.g. on system shutdown).
IPTABLES_SAVE_ON_STOP="no"
# Save current firewall rules on restart.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
# restarted.
IPTABLES_SAVE_ON_RESTART="no"
# Save (and restore) rule and chain counter.
# Value: yes|no, default: no
# Save counters for rules and chains to /etc/sysconfig/iptables if
# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
# SAVE_ON_RESTART is enabled.
IPTABLES_SAVE_COUNTER="no"
# Numeric status output
# Value: yes|no, default: yes
# Print IP addresses and port numbers in numeric format in the status output.
IPTABLES_STATUS_NUMERIC="yes"
# Verbose status output
# Value: yes|no, default: yes
# Print info about the number of packets and bytes plus the "input-" and
# "outputdevice" in the status output.
IPTABLES_STATUS_VERBOSE="no"
# Status output with numbered lines
# Value: yes|no, default: yes
# Print a counter/number for every rule in the status output.
IPTABLES_STATUS_LINENUMBERS="yes"
@@ -74,13 +74,15 @@ resolv-file=/etc/dnsmasq-resolv.conf
interface=eth0
# Or you can specify which interface _not_ to listen on
except-interface=eth1
except-interface=eth2
# Or which to listen on by address (remember to include 127.0.0.1 if
# you use this.)
#listen-address=
# If you want dnsmasq to provide only DNS service on an interface,
# configure it as shown above, and then use the following line to
# disable DHCP on it.
#no-dhcp-interface=eth1
no-dhcp-interface=eth1
no-dhcp-interface=eth2
# On systems which support it, dnsmasq binds the wildcard address,
# even when it is listening on only some interfaces. It then discards
@@ -109,7 +111,7 @@ expand-hosts
# 2) Sets the "domain" DHCP option thereby potentially setting the
# domain of all systems configured by DHCP
# 3) Provides the domain part for "expand-hosts"
domain=foo.com
#domain=foo.com
# Uncomment this to enable the integrated DHCP server, you need
# to supply the range of addresses available for lease and optionally
@@ -248,7 +250,7 @@ dhcp-hostsfile=/etc/dhcphosts.txt
#dhcp-option=27,1
# Set the domain
dhcp-option=15,"foo.com"
#dhcp-option=15,"foo.com"
# Send the etherboot magic flag and then etherboot options (a string).
#dhcp-option=128,e4:45:74:68:00:00
@@ -26,7 +26,14 @@ setup_console_proxy() {
echo "$public_ip $NAME" >> /etc/hosts
}
CMDLINE=$(cat /proc/cmdline)
if [ -f /mnt/cmdline ]
then
CMDLINE=$(cat /mnt/cmdline)
else
CMDLINE=$(cat /proc/cmdline)
fi
TYPE="router"
BOOTPROTO="static"
@@ -49,7 +49,13 @@ setup_common() {
if [ "$BOOTPROTO" == "static" ]
then
replace_in_file /etc/sysconfig/network GATEWAY $GW
if [ -n "$ETH2_IP" -a "$ETH2_IP" != "0.0.0.0" ]
then
replace_in_file /etc/sysconfig/network GATEWAYDEV "eth2"
else
sed -i /GATEWAYDEV/d /etc/sysconfig/network
fi
else
sed -i /GATEWAY/d /etc/sysconfig/network
fi
@@ -112,7 +118,7 @@ setup_dhcpsrvr() {
sed -i -e "s/^dhcp-range=.*$/dhcp-range=$DHCP_RANGE,static/" /etc/dnsmasq.conf
sed -i -e "s/^[#]*dhcp-option=option:router.*$/dhcp-option=option:router,$GW/" /etc/dnsmasq.conf
#for now set up ourself as the dns server as well
#echo "dhcp-option=6,$NS1,$NS2" >> /etc/dnsmasq.conf
sed -i s/[#]*dhcp-option=6.*$/dhcp-option=6,\"$NS1\",\"$NS2\"/ /etc/dnsmasq.conf
sed -i /gateway/d /etc/hosts
echo "$ETH0_IP $NAME" >> /etc/hosts
[ -f /etc/httpd/conf/httpd.conf ] && sed -i -e "s/^Listen.*$/Listen $ETH0_IP:80/" /etc/httpd/conf/httpd.conf
@@ -137,7 +143,25 @@ setup_console_proxy() {
echo "$public_ip $NAME" >> /etc/hosts
}
CMDLINE=$(cat /proc/cmdline)
if [ -f /mnt/cmdline ]
then
CMDLINE=$(cat /mnt/cmdline)
else
CMDLINE=$(cat /proc/cmdline)
fi
if [ ! -d /root/.ssh ]
then
mkdir /root/.ssh
chmod 700 /root/.ssh
fi
if [ -f /mnt/id_rsa.pub ]
then
cat /mnt/id_rsa.pub > /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
fi
TYPE="router"
BOOTPROTO="static"
@@ -1,13 +1,12 @@
# @VERSION@
*nat
:PREROUTING ACCEPT [499:70846]
:POSTROUTING ACCEPT [1:85]
:OUTPUT ACCEPT [1:85]
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
*filter
:INPUT DROP [288:42467]
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [65:9665]
:OUTPUT ACCEPT [0:0]
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
@@ -17,7 +16,9 @@ COMMIT
-A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i eth1 -p tcp -m state --state NEW --dport 3922 -j ACCEPT
-A INPUT -i eth0 -p tcp -m state --state NEW --dport 8080 -j ACCEPT
-A INPUT -i eth0 -p tcp -m state --state NEW --dport 80 -j ACCEPT
-A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth0 -o eth2 -j ACCEPT
-A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
COMMIT
@@ -0,0 +1 @@
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA3VD1tGRDn3stlJvPNXmQZdQCNjqcfY+xlitd5q0n3KYqJ5OBrty3/00XBUdLt31TbQ4dv+GR7uEr+ex7rm0jjmTFKV4rHYPi882CuC5+bkBp5R4k+mpcyKbxb+IoNS9ItbiExQxMiiRQpHvNem0GGnNFO3lElRPwUFs8evTvZu5HcTj4k4RJLJ66jeIGJ3sMAJ03SICGwfEZjrsyeOMwJk7cH8WNeuNzxzoZd9v02eI0lHdK9O5z7FwrxvRBbzsmJ0EwuhbH8pR7WR6kGLTNP9KEwtrnzV1LYWd+rFoSeh6ImExG7fma3Ldydg8CPTQsjvCEQUxiuV1/x5am5VJlUw== root@r-6-TEST
@@ -0,0 +1,116 @@
#!/bin/bash
# $Id: patchsystemvm.sh 10800 2010-07-16 13:48:39Z edison $ $HeadURL: svn://svn.lab.vmops.com/repos/branches/2.1.x/java/scripts/vm/hypervisor/xenserver/prepsystemvm.sh $
#set -x
logfile="/var/log/patchsystemvm.log"
#
# To use existing console proxy .zip-based package file
#
patch_console_proxy() {
local patchfile=$1
rm /usr/local/cloud/systemvm -rf
mkdir -p /usr/local/cloud/systemvm
echo "All" | unzip $patchfile -d /usr/local/cloud/systemvm >$logfile 2>&1
find /usr/local/cloud/systemvm/ -name \*.sh | xargs chmod 555
return 0
}
consoleproxy_svcs() {
chkconfig cloud on
chkconfig postinit on
chkconfig domr_webserver off
chkconfig haproxy off ;
chkconfig dnsmasq off
chkconfig sshd on
chkconfig httpd off
chkconfig nfs off
chkconfig nfslock off
chkconfig rpcbind off
chkconfig rpcidmap off
cp /etc/sysconfig/iptables-consoleproxy /etc/sysconfig/iptables
mkdir -p /var/log/cloud
}
secstorage_svcs() {
chkconfig cloud on
chkconfig postinit on
chkconfig domr_webserver off
chkconfig haproxy off ;
chkconfig dnsmasq off
chkconfig sshd on
chkconfig httpd off
cp /etc/sysconfig/iptables-secstorage /etc/sysconfig/iptables
scp 169.254.0.1:/usr/sbin/vhd-util /usr/sbin
mkdir -p /var/log/cloud
}
routing_svcs() {
chkconfig cloud off
chkconfig domr_webserver on ;
chkconfig haproxy on ;
chkconfig dnsmasq on
chkconfig sshd on
chkconfig nfs off
chkconfig nfslock off
chkconfig rpcbind off
chkconfig rpcidmap off
cp /etc/sysconfig/iptables-domr /etc/sysconfig/iptables
}
CMDLINE=$(cat /proc/cmdline)
TYPE="router"
for i in $CMDLINE
do
# search for foo=bar pattern and cut out foo
KEY=$(echo $i | cut -d= -f1)
VALUE=$(echo $i | cut -d= -f2)
case $KEY in
type)
TYPE=$VALUE
;;
*)
;;
esac
done
if [ "$TYPE" = "consoleproxy" ] || [ "$TYPE" = "secstorage" ] && [ -f /media/cdrom/systemvm.zip ]
then
patch_console_proxy /media/cdrom/systemvm.zip
if [ $? -gt 0 ]
then
printf "Failed to apply patch systemvm\n" >$logfile
exit 5
fi
fi
#empty known hosts
echo "" > /root/.ssh/known_hosts
if [ "$TYPE" = "consoleproxy" ]
then
consoleproxy_svcs
if [ $? -gt 0 ]
then
printf "Failed to execute consoleproxy_svcs\n" >$logfile
exit 6
fi
elif [ "$TYPE" = "secstorage" ]
then
secstorage_svcs
if [ $? -gt 0 ]
then
printf "Failed to execute secstorage_svcs\n" >$logfile
exit 7
fi
else
routing_svcs
if [ $? -gt 0 ]
then
printf "Failed to execute routing_svcs\n" >$logfile
exit 8
fi
fi
exit $?
patches/wscript_build Normal file
@@ -0,0 +1,18 @@
import os, Utils, glob, re
bld.substitute("*/**",name="patchsubst")
for virttech in Utils.to_list(bld.path.ant_glob("*",dir=True)):
if virttech in ["shared","wscript_build"]: continue
patchfiles = bld.path.ant_glob('shared/** %s/**'%virttech,src=False,bld=True,dir=False,flat=True)
tgen = bld(
features = 'tar',#Utils.tar_up,
source = patchfiles,
target = '%s-patch.tgz'%virttech,
name = '%s-patch_tgz'%virttech,
root = os.path.join("patches",virttech),
rename = lambda x: re.sub(".subst$","",x),
)
if virttech != "xenserver":
# xenserver uses the patch.tgz file later to make an ISO, so we do not need to install it
bld.install_as("${AGENTLIBDIR}/scripts/vm/hypervisor/%s/patch.tgz"%virttech, "%s-patch.tgz"%virttech)
@@ -1,48 +0,0 @@
# Load additional iptables modules (nat helpers)
# Default: -none-
# Space separated list of nat helpers (e.g. 'ip_nat_ftp ip_nat_irc'), which
# are loaded after the firewall rules are applied. Options for the helpers are
# stored in /etc/modprobe.conf.
IPTABLES_MODULES="ip_conntrack_ftp nf_nat_ftp"
# Unload modules on restart and stop
# Value: yes|no, default: yes
# This option has to be 'yes' to get to a sane state for a firewall
# restart or stop. Only set to 'no' if there are problems unloading netfilter
# modules.
IPTABLES_MODULES_UNLOAD="yes"
# Save current firewall rules on stop.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets stopped
# (e.g. on system shutdown).
IPTABLES_SAVE_ON_STOP="no"
# Save current firewall rules on restart.
# Value: yes|no, default: no
# Saves all firewall rules to /etc/sysconfig/iptables if firewall gets
# restarted.
IPTABLES_SAVE_ON_RESTART="no"
# Save (and restore) rule and chain counter.
# Value: yes|no, default: no
# Save counters for rules and chains to /etc/sysconfig/iptables if
# 'service iptables save' is called or on stop or restart if SAVE_ON_STOP or
# SAVE_ON_RESTART is enabled.
IPTABLES_SAVE_COUNTER="no"
# Numeric status output
# Value: yes|no, default: yes
# Print IP addresses and port numbers in numeric format in the status output.
IPTABLES_STATUS_NUMERIC="yes"
# Verbose status output
# Value: yes|no, default: yes
# Print info about the number of packets and bytes plus the "input-" and
# "outputdevice" in the status output.
IPTABLES_STATUS_VERBOSE="no"
# Status output with numbered lines
# Value: yes|no, default: yes
# Print a counter/number for every rule in the status output.
IPTABLES_STATUS_LINENUMBERS="yes"


@ -0,0 +1,136 @@
#! /usr/bin/python
import web
import socket, struct
import cloud_utils
from cloud_utils import Command
urls = ("/ipallocator", "ipallocator")
app = web.application(urls, globals())
augtool = Command("augtool")
service = Command("service")
class dhcp:
_instance = None
def __init__(self):
self.availIP=[]
self.router=None
self.netmask=None
self.initialized=False
options = augtool.match("/files/etc/dnsmasq.conf/dhcp-option").stdout.strip()
for option in options.splitlines():
if option.find("option:router") != -1:
self.router = option.split("=")[1].strip().split(",")[1]
print self.router
dhcp_range = augtool.get("/files/etc/dnsmasq.conf/dhcp-range").stdout.strip()
dhcp_start = dhcp_range.split("=")[1].strip().split(",")[0]
dhcp_end = dhcp_range.split("=")[1].strip().split(",")[1]
self.netmask = dhcp_range.split("=")[1].strip().split(",")[2]
print dhcp_start, dhcp_end, self.netmask
start_ip_num = self.ipToNum(dhcp_start);
end_ip_num = self.ipToNum(dhcp_end)
print start_ip_num, end_ip_num
for ip in range(start_ip_num, end_ip_num + 1):
self.availIP.append(ip)
print self.availIP[0], self.availIP[len(self.availIP) - 1]
#load the ip already allocated
self.reloadAllocatedIP()
def ipToNum(self, ip):
return struct.unpack("!I", socket.inet_aton(ip))[0]
def numToIp(self, num):
return socket.inet_ntoa(struct.pack('!I', num))
def getFreeIP(self):
if len(self.availIP) > 0:
ip = self.numToIp(self.availIP[0])
self.availIP.remove(self.availIP[0])
return ip
else:
return None
def getNetmask(self):
return self.netmask
def getRouter(self):
return self.router
def getInstance():
if not dhcp._instance:
dhcp._instance = dhcp()
return dhcp._instance
getInstance = staticmethod(getInstance)
def reloadAllocatedIP(self):
dhcp_hosts = augtool.match("/files/etc/dnsmasq.conf/dhcp-host").stdout.strip().splitlines()
for host in dhcp_hosts:
if host.find("dhcp-host") != -1:
allocatedIP = self.ipToNum(host.split("=")[1].strip().split(",")[1])
if allocatedIP in self.availIP:
self.availIP.remove(allocatedIP)
def allocateIP(self, mac):
newIP = self.getFreeIP()
dhcp_host = augtool.match("/files/etc/dnsmasq.conf/dhcp-host").stdout.strip()
cnt = len(dhcp_host.splitlines()) + 1
script = """set %s %s
save"""%("/files/etc/dnsmasq.conf/dhcp-host[" + str(cnt) + "]", str(mac) + "," + newIP)
augtool < script
#reset dnsmasq
service("dnsmasq", "restart", stdout=None, stderr=None)
return newIP
def releaseIP(self, ip):
dhcp_host = augtool.match("/files/etc/dnsmasq.conf/dhcp-host").stdout.strip()
path = None
for host in dhcp_host.splitlines():
if host.find(ip) != -1:
path = host.split("=")[0].strip()
if path == None:
print "Can't find " + str(ip) + " in conf file"
return None
print path
script = """rm %s
save"""%(path)
augtool < script
#reset dnsmasq
service("dnsmasq", "restart", stdout=None, stderr=None)
class ipallocator:
def GET(self):
try:
user_data = web.input()
command = user_data.command
print "Processing: " + command
dhcpInit = dhcp.getInstance()
if command == "getIpAddr":
mac = user_data.mac
zone_id = user_data.dc
pod_id = user_data.pod
print mac, zone_id, pod_id
freeIP = dhcpInit.allocateIP(mac)
if not freeIP:
return "0,0,0"
print "Find an available IP: " + freeIP
return freeIP + "," + dhcpInit.getNetmask() + "," + dhcpInit.getRouter()
elif command == "releaseIpAddr":
ip = user_data.ip
zone_id = user_data.dc
pod_id = user_data.pod
dhcpInit.releaseIP(ip)
except:
return None
if __name__ == "__main__":
app.run()
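The allocator above keeps its free pool as 32-bit integers produced with `socket`/`struct`. A small standalone sketch of that conversion and of building the pool from a dhcp-range (helper names are illustrative):

```python
import socket
import struct

def ip_to_num(ip):
    # "!I" = network byte order, unsigned 32-bit integer
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def num_to_ip(num):
    return socket.inet_ntoa(struct.pack("!I", num))

def build_pool(dhcp_start, dhcp_end):
    """Enumerate every address in the range, inclusive, the same way
    dhcp.__init__ fills self.availIP above."""
    return list(range(ip_to_num(dhcp_start), ip_to_num(dhcp_end) + 1))
```

Allocation then amounts to popping the first element of the pool and converting it back with `num_to_ip`.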

File diff suppressed because it is too large

python/wscript_build Normal file

@ -0,0 +1,2 @@
obj = bld(features = 'py',name='pythonmodules')
obj.find_sources_in_dirs('lib', exts=['.py'])


@ -18,7 +18,7 @@ check_gw() {
return $?;
}
cert="$(dirname $0)/id_rsa"
cert="/root/.ssh/id_rsa.cloud"
create_usage_rules () {
local dRIp=$1


@ -78,6 +78,23 @@ create_from_file() {
then
rm -f $tmpltimg
fi
chmod a+r /$tmpltfs/$tmpltname
}
create_from_snapshot() {
local tmpltImg=$1
local snapshotName=$2
local tmpltfs=$3
local tmpltname=$4
cloud-qemu-img convert -f qcow2 -O qcow2 -s $snapshotName $tmpltImg /$tmpltfs/$tmpltname >& /dev/null
if [ $? -gt 0 ]
then
printf "Failed to create template /$tmpltfs/$tmpltname from snapshot $snapshotName on disk $tmpltImg\n"
exit 2
fi
chmod a+r /$tmpltfs/$tmpltname
}
tflag=
@ -89,8 +106,9 @@ hvm=false
cleanup=false
dflag=
cflag=
snapshotName=
while getopts 'uht:n:f:s:c:d:' OPTION
while getopts 'uht:n:f:sc:d:' OPTION
do
case $OPTION in
t) tflag=1
@ -103,10 +121,10 @@ do
tmpltimg="$OPTARG"
;;
s) sflag=1
volsize="$OPTARG"
sflag=1
;;
c) cflag=1
cksum="$OPTARG"
snapshotName="$OPTARG"
;;
d) dflag=1
descr="$OPTARG"
@ -119,12 +137,6 @@ do
esac
done
if [ "$tflag$nflag$fflag" != "111" ]
then
usage
exit 2
fi
if [ ! -d /$tmpltfs ]
then
@ -148,9 +160,15 @@ then
printf "failed to uncompress $tmpltimg\n"
fi
create_from_file $tmpltfs $tmpltimg $tmpltname
if [ "$sflag" == "1" ]
then
create_from_snapshot $tmpltimg $snapshotName $tmpltfs $tmpltname
else
create_from_file $tmpltfs $tmpltimg $tmpltname
fi
touch /$tmpltfs/template.properties
chmod a+r /$tmpltfs/template.properties
echo -n "" > /$tmpltfs/template.properties
today=$(date '+%m_%d_%Y')


@ -16,13 +16,20 @@ create_snapshot() {
local snapshotname=$2
local failed=0
qemu-img snapshot -c $snapshotname $disk
if [ ! -f $disk ]
then
failed=1
printf "No disk $disk exists\n" >&2
return $failed
fi
cloud-qemu-img snapshot -c $snapshotname $disk
if [ $? -gt 0 ]
then
failed=1
failed=2
printf "***Failed to create snapshot $snapshotname for path $disk\n" >&2
qemu-img snapshot -d $snapshotname $disk
cloud-qemu-img snapshot -d $snapshotname $disk
if [ $? -gt 0 ]
then
@ -34,21 +41,40 @@ create_snapshot() {
}
destroy_snapshot() {
local backupSnapDir=$1
local disk=$1
local snapshotname=$2
local deleteDir=$3
local failed=0
if [ -f $backupSnapDir/$snapshotname ]
if [ -d $disk ]
then
rm -f $backupSnapDir/$snapshotname
if [ $? -gt 0 ]
if [ -f $disk/$snapshotname ]
then
printf "***Failed to delete snapshot $snapshotname for path $backupSnapDir\n" >&2
failed=1
rm -rf $disk/$snapshotname >& /dev/null
fi
if [ "$deleteDir" == "1" ]
then
rm -rf $disk >& /dev/null
fi
return $failed
fi
if [ ! -f $disk ]
then
failed=1
printf "No disk $disk exists\n" >&2
return $failed
fi
cloud-qemu-img snapshot -d $snapshotname $disk
if [ $? -gt 0 ]
then
failed=2
printf "Failed to delete snapshot $snapshotname for path $disk\n" >&2
fi
return $failed
}
@ -71,6 +97,7 @@ backup_snapshot() {
local disk=$1
local snapshotname=$2
local destPath=$3
local destName=$4
if [ ! -d $destPath ]
then
@ -90,7 +117,7 @@ backup_snapshot() {
return 1
fi
cloud-qemu-img convert -f qcow2 -O qcow2 -s $snapshotname $disk $destPath/$snapshotname >& /dev/null
cloud-qemu-img convert -f qcow2 -O qcow2 -s $snapshotname $disk $destPath/$destName >& /dev/null
if [ $? -gt 0 ]
then
printf "Failed to backup $snapshotname for disk $disk to $destPath" >&2
@ -107,8 +134,10 @@ bflag=
nflag=
pathval=
snapshot=
tmplName=
deleteDir=
while getopts 'c:d:r:n:b:p:' OPTION
while getopts 'c:d:r:n:b:p:t:f' OPTION
do
case $OPTION in
c) cflag=1
@ -128,6 +157,10 @@ do
;;
p) destPath="$OPTARG"
;;
t) tmplName="$OPTARG"
;;
f) deleteDir=1
;;
?) usage
;;
esac
@ -140,11 +173,11 @@ then
exit $?
elif [ "$dflag" == "1" ]
then
destroy_snapshot $pathval $snapshot
destroy_snapshot $pathval $snapshot $deleteDir
exit $?
elif [ "$bflag" == "1" ]
then
backup_snapshot $pathval $snapshot $destPath
backup_snapshot $pathval $snapshot $destPath $tmplName
exit $?
elif [ "$rflag" == "1" ]
then


@ -3,7 +3,7 @@
# createtmplt.sh -- install a template
usage() {
printf "Usage: %s: -t <template-fs> -n <templatename> -f <root disk file> -s <size in Gigabytes> -c <md5 cksum> -d <descr> -h [-u]\n" $(basename $0) >&2
printf "Usage: %s: -t <template-fs> -n <templatename> -f <root disk file> -c <md5 cksum> -d <descr> -h [-u]\n" $(basename $0) >&2
}
@ -47,8 +47,7 @@ untar() {
uncompress() {
local ft=$(file $1| awk -F" " '{print $2}')
local imgfile=${1%.*} #strip out trailing file suffix
local tmpfile=${imgfile}.tmp
local tmpfile=${1}.tmp
case $ft in
gzip) gunzip -q -c $1 > $tmpfile
@ -68,8 +67,8 @@ uncompress() {
return 1
fi
mv $tmpfile $imgfile
printf "$imgfile"
rm -f $1
printf $tmpfile
return 0
}
@ -78,16 +77,10 @@ create_from_file() {
local tmpltfs=$1
local tmpltimg=$2
local tmpltname=$3
local volsize=$4
local cleanup=$5
#copy the file to the disk
mv $tmpltimg /$tmpltfs/$tmpltname
# if [ "$cleanup" == "true" ]
# then
# rm -f $tmpltimg
# fi
}
tflag=
@ -113,7 +106,6 @@ do
tmpltimg="$OPTARG"
;;
s) sflag=1
volsize="$OPTARG"
;;
c) cflag=1
cksum="$OPTARG"
@ -143,77 +135,48 @@ then
exit 2
fi
mkdir -p $tmpltfs
if [ ! -f $tmpltimg ]
then
printf "root disk file $tmpltimg doesn't exist\n"
exit 3
fi
if [ -n "$cksum" ]
then
verify_cksum $cksum $tmpltimg
fi
#if [ ! -d /$tmpltfs ]
#then
# mkdir /$tmpltfs
# if [ $? -gt 0 ]
# then
# printf "Failed to create user fs $tmpltfs\n" >&2
# exit 1
# fi
#fi
tmpltimg2=$(uncompress $tmpltimg)
if [ $? -ne 0 ]
then
rollback_if_needed $tmpltfs 2 "failed to uncompress $tmpltimg\n"
fi
rollback_if_needed $tmpltfs $? "failed to uncompress $tmpltimg\n"
tmpltimg2=$(untar $tmpltimg2 /$tmpltfs vmi-root)
if [ $? -ne 0 ]
then
rollback_if_needed $tmpltfs 2 "tar archives not supported\n"
fi
tmpltimg2=$(untar $tmpltimg2)
rollback_if_needed $tmpltfs $? "tar archives not supported\n"
if [ ! -f $tmpltimg2 ]
if [ ${tmpltname%.vhd} != ${tmpltname} ]
then
rollback_if_needed $tmpltfs 2 "root disk file $tmpltimg doesn't exist\n"
exit 3
fi
# need the 'G' suffix on volume size
if [ ${volsize:(-1)} != G ]
then
volsize=${volsize}G
fi
#determine source file size -- it needs to be less than or equal to volsize
imgsize=$(ls -lh $tmpltimg2| awk -F" " '{print $5}')
if [ ${imgsize:(-1)} == G ]
then
imgsize=${imgsize%G} #strip out the G
imgsize=${imgsize%.*} #...and any decimal part
let imgsize=imgsize+1 # add 1 to compensate for decimal part
volsizetmp=${volsize%G}
if [ $volsizetmp -lt $imgsize ]
then
volsize=${imgsize}G
if which vhd-util 2>/dev/null
then
vhd-util check -n ${tmpltimg2} > /dev/null
rollback_if_needed $tmpltfs $? "vhd tool check $tmpltimg2 failed\n"
fi
fi
tgtfile=${tmpltfs}/vmi-root-${tmpltname}
imgsize=$(ls -l $tmpltimg2| awk -F" " '{print $5}')
create_from_file $tmpltfs $tmpltimg2 $tmpltname $volsize $cleanup
create_from_file $tmpltfs $tmpltimg2 $tmpltname
tgtfilename=$(echo $tmpltimg2 | awk -F"/" '{print $NF}')
touch /$tmpltfs/template.properties
rollback_if_needed $tmpltfs $? "Failed to create template.properties file"
echo -n "" > /$tmpltfs/template.properties
today=$(date '+%m_%d_%Y')
echo "filename=$tmpltname" > /$tmpltfs/template.properties
echo "snapshot.name=$today" >> /$tmpltfs/template.properties
echo "description=$descr" >> /$tmpltfs/template.properties
echo "name=$tmpltname" >> /$tmpltfs/template.properties
echo "checksum=$cksum" >> /$tmpltfs/template.properties
echo "hvm=$hvm" >> /$tmpltfs/template.properties
echo "volume.size=$volsize" >> /$tmpltfs/template.properties
echo "size=$imgsize" >> /$tmpltfs/template.properties
if [ "$cleanup" == "true" ]
then


@ -116,6 +116,9 @@ then
echo "Failed to install routing template $tmpltimg to $destdir"
fi
tmpltfile=$destdir/$tmpfile
tmpltsize=$(ls -l $tmpltfile| awk -F" " '{print $5}')
echo "vhd=true" >> $destdir/template.properties
echo "id=1" >> $destdir/template.properties
echo "public=true" >> $destdir/template.properties
@ -123,6 +126,6 @@ echo "vhd.filename=$localfile" >> $destdir/template.properties
echo "uniquename=routing" >> $destdir/template.properties
echo "vhd.virtualsize=2147483648" >> $destdir/template.properties
echo "virtualsize=2147483648" >> $destdir/template.properties
echo "vhd.size=2101252608" >> $destdir/template.properties
echo "vhd.size=$tmpltsize" >> $destdir/template.properties
echo "Successfully installed routing template $tmpltimg to $destdir"
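The installer writes `template.properties` as plain `key=value` lines. A hedged sketch of how a consumer might read them back (`read_properties` is a hypothetical helper, not a CloudStack API):

```python
def read_properties(text):
    """Parse the key=value lines written by the install scripts above.
    Later duplicates overwrite earlier ones, matching repeated echo >>."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key] = value
    return props
```

Keys with dots (e.g. `vhd.virtualsize`) are kept as flat strings rather than nested.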


@ -174,4 +174,4 @@ done
#install_cloud_agent $dflag
#install_cloud_consoleP $dflag
cloud_agent_setup $host $zone $pod $guid
cloud_consoleP_setup $host $zone $pod
#cloud_consoleP_setup $host $zone $pod


@ -1,58 +0,0 @@
#!/bin/bash
#set -x
usage() {
printf "Usage: %s [uuid of this host] [interval in seconds]\n" $(basename $0) >&2
}
if [ -z $1 ]; then
usage
exit 2
fi
if [ -z $2 ]; then
usage
exit 3
fi
date=`date +%s`
hbs=`lvscan | grep hb-$1 | awk '{print $2}'`
for hb in $hbs
do
hb=${hb:1:`expr ${#hb} - 2`}
active=`lvscan | grep $hb | awk '{print $1}'`
if [ "$active" == "inactive" ]; then
lvchange -ay $hb
if [ ! -L $hb ]; then
continue;
fi
fi
ping=`dd if=$hb bs=1 count=100`
if [ $? -ne 0 ]; then
continue;
fi
diff=`expr $date - $ping`
if [ $diff -lt $2 ]; then
echo "=====> ALIVE <====="
exit 0;
fi
done
hbs=`ls -l /var/run/sr-mount/*/hb-$1 | awk '{print $9}'`
for hb in $hbs
do
ping=`cat $hb`
if [ $? -ne 0 ]; then
continue;
fi
diff=`expr $date - $ping`
if [ $diff -lt $2 ]; then
echo "=====> ALIVE <====="
exit 0;
fi
done
echo "=====> DEAD <======"
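`check_heartbeat.sh` declares a host alive when the epoch timestamp read from its heartbeat file is less than the polling interval old. The same test as a small sketch (`is_alive` is an illustrative name):

```python
import time

def is_alive(heartbeat_ts, interval, now=None):
    """A host counts as alive if its last heartbeat timestamp is fewer
    than `interval` seconds old, matching `expr $date - $ping` above."""
    if now is None:
        now = int(time.time())
    return (now - heartbeat_ts) < interval
```

A stale or unreadable heartbeat file simply fails this test and the script falls through to "DEAD".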


@ -1,111 +0,0 @@
#!/bin/sh
# $Id: find_bond.sh 10510 2010-07-11 10:10:03Z alex $ $HeadURL: svn://svn.lab.vmops.com/repos/vmdev/java/scripts/vm/hypervisor/xenserver/find_bond.sh $
#set -x
cleanup_vlan() {
for vlan in `xe vlan-list | grep uuid | awk '{print $NF}'`; do
untagged=$(xe vlan-param-list uuid=$vlan | grep untagged | awk '{print $NF}')
network=$(xe pif-param-get param-name=network-uuid uuid=$untagged)
xe vlan-destroy uuid=$vlan
xe network-destroy uuid=$network
done
}
usage() {
echo "$0 device"
exit 1
}
sflag=
dflag=
while getopts 'sd' OPTION
do
case $OPTION in
d) dflag=1
;;
s) sflag=1
;;
?) usage
exit 1
;;
esac
done
shift $(($OPTIND - 1))
nic=$1
[ -z "$nic" ] && usage
addr=$(ip addr | grep $nic | grep inet | awk '{print $2}')
addr=${addr%/*}
bridges=$(brctl show | grep -v bridge | awk '{print $1}')
host_uuid=$(xe host-list hostname=$(hostname) | grep uuid | awk '{print $NF}')
if [ -z "$host_uuid" ]; then
printf "Unable to find host uuid using $(hostname)\n" >&2
exit 2
fi
if [ -z "$addr" ]; then
printf "Unable to find an ip address for $nic\n" >&2
exit 3
fi
current=$(brctl show | grep $nic | awk '{print $NF}')
for dev in `ip addr | grep mtu | grep -v -E "\.[0-9]*@|lo|$nic|$current" | awk '{print $2}'`
do
dev=${dev%:}
echo $bridges | grep $dev >/dev/null 2>&1
br=$?
ifconfig $dev | grep UP >/dev/null 2>&1
rc=$?
if [ $rc -eq 1 ]; then
ifconfig $dev up
sleep 4
fi
arping -q -c 1 -w 2 -D -I $dev $addr >/dev/null 2>&1
rc=$?
if [ $rc -ne 1 ]; then
continue;
fi
if [ $br -ne 0 ]; then
# What we've found is the naked nic.
pif_uuid=$(xe pif-list device=$dev host-uuid=$host_uuid | grep -B 3 "( RO): -1" | grep uuid | awk '{print $NF}')
if [ -z "$pif_uuid" ]; then
mac=$(ifconfig $dev | grep HWaddr | awk '{print $NF}')
pif_uuid=$(xe pif-introduce host-uuid=$host_uuid device=$dev mac=$mac)
fi
if [ -z $pif_uuid ]; then
continue;
fi
bridge=$(xe network-list PIF-uuids=$pif_uuid | grep bridge | awk '{print $NF}')
if [ -z $bridge ]; then
continue;
fi
xe pif-plug uuid=$pif_uuid
echo ">>>$dev<<<"
exit 0
else
# What we've found is the bridge
network_uuid=`xe network-list bridge=$dev | grep uuid | awk '{print $NF}'`
if [ -z "$network_uuid" ]; then
continue;
fi
pif=`xe pif-list network-uuid=$network_uuid host-uuid=$host_uuid VLAN=-1 | grep device | awk '{print $NF}'`
if [ -z "$pif" ]; then
continue;
fi
echo ">>>$pif<<<"
exit 0
fi
done
exit 4


@ -12,14 +12,14 @@ def get_stats(session, collect_host_stats, consolidation_function, interval, sta
if collect_host_stats == "true" :
url = "http://localhost/rrd_updates?"
url += "session_id=" + session
url += "session_id=" + session._session
url += "&host=" + collect_host_stats
url += "&cf=" + consolidation_function
url += "&interval=" + str(interval)
url += "&start=" + str(int(time.time())-100)
else :
url = "http://localhost/rrd_updates?"
url += "session_id=" + session
url += "session_id=" + session._session
url += "&host=" + collect_host_stats
url += "&cf=" + consolidation_function
url += "&interval=" + str(interval)


@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA3VD1tGRDn3stlJvPNXmQZdQCNjqcfY+xlitd5q0n3KYqJ5OB
rty3/00XBUdLt31TbQ4dv+GR7uEr+ex7rm0jjmTFKV4rHYPi882CuC5+bkBp5R4k
+mpcyKbxb+IoNS9ItbiExQxMiiRQpHvNem0GGnNFO3lElRPwUFs8evTvZu5HcTj4
k4RJLJ66jeIGJ3sMAJ03SICGwfEZjrsyeOMwJk7cH8WNeuNzxzoZd9v02eI0lHdK
9O5z7FwrxvRBbzsmJ0EwuhbH8pR7WR6kGLTNP9KEwtrnzV1LYWd+rFoSeh6ImExG
7fma3Ldydg8CPTQsjvCEQUxiuV1/x5am5VJlUwIBIwKCAQEA0KtrUk/n/MSYsLAp
xLRyNB+qUGMl1Xjao4f5cxhKJ8/emlfgrC8xI+mZXL+QiG7ZoVZz0ixzprcMNMkG
5kmlLnxE3dxxy18Xz+2nIq9+hTVrKHuB82uZT3jVAxcP96GcU5C3snlPeu8KNK8+
FFgqU3P/cpbo5FSgwMsNI3k5fkyffYtmBdtjZhWXJqnA9+bMdCmYEKyQFWp18LvV
pjGx1jLFZTx9+aDz7gdIk21zbVXmwQmnS1fVKJEByTMvokpvdJUvDedvpgqGqX/g
IXkTXe49pYhYwxVguLK6FXyQBwOuUsnur2A79T3wBvzEMozkYLkEG/zcw0fyo3iC
fdzc6wKBgQD2gq+kUc2r/+xE+smIej2ICvFZZlSh1ko2tVmVUHuuuMCuBt054Dq9
mf8/yIbXSvVtuBMJ+jewVnKfhucEQKf6E1jBdQShezlomFLOQ8cFQJhT6tAwJl/k
TR+OjeTuOcBknkE8nstNt7hAkZxY6h/Lu54OM9AkXyZ9skx7gHh+IwKBgQDl1f09
YkoM9rqXM8lMKjF0z81T4ACCaFUA6ZKjSZelyG+azJDlRFNWX1In3Kq6aInpZPzs
owwIS9tjkXIaLR1wDJ+K8IGJQ19sqCzv3/kBCDXA6mqXkkPR80xRi4wuZ3lETOdL
OBXPffuQaKxk32esqsxK6As1LgH4+048JS23EQKBgQCpCSf7pc7cV7f0yTm8q5fo
QgSVEvg0da87dQo6gFTPlKFhY8rl25X+WvgrvLQ726D6x12DLzwhJVXpu5cY2+Dl
/qNC0+XrEqsF5MsRGIh4oVKCr6SzTYOVPDLlaJz7IElpkRbKe4QYCPNfecpLmTpf
0Rvse0zlvZa8l4Tm+QIqmwKBgBOzQZeMFPnMAV1q1r1is8gvEZl5maTHHTqXrXu1
2cxhoyqGkBOmxVCL09eH8WBvXEc0irUyjAC2C32QH7kZz1K/QOAF/Hl6zao6TP6e
K0k7N861AdJ6QFPTBoqlj6w0wUBeXPfRm3gvXrSbQfoEhTqvjdqI6wSO6jnpp57B
W7CbAoGABFHMVXEyT3SliMSRtiCuDOrtl9E/aiOByPulXolqth5WDSel31Lz+iY7
ldOLNQO/oononTStdd0fDGChl3WXBSOToJJ/HjIWH05bDY9n2EDAyZvmaW9rX3JQ
pH9c/1vlD9lxDEBvq4JXmTtdL0Ho00F5vVHnWnwINtfx6c5BIjg=
-----END RSA PRIVATE KEY-----


@ -1,30 +0,0 @@
#!/bin/bash
#set -x
usage() {
printf "Usage: %s [uuid of this host] [interval in seconds]\n" $(basename $0)
}
if [ -z $1 ]; then
usage
exit 2
fi
if [ -z $2 ]; then
usage
exit 3
fi
if [ ! -f /opt/xensource/bin/xenheartbeat.sh ]; then
printf "Error: Unable to find xenheartbeat.sh to launch\n"
exit 4
fi
for psid in `ps -ef | grep xenheartbeat | grep -v grep | awk '{print $2}'`; do
kill $psid
done
nohup /opt/xensource/bin/xenheartbeat.sh $1 $2 >/var/log/heartbeat.log 2>&1 &
echo "======> DONE <======"


@ -1,232 +0,0 @@
#!/bin/bash
# $Id: prepsystemvm.sh 10800 2010-07-16 13:48:39Z edison $ $HeadURL: svn://svn.lab.vmops.com/repos/vmdev/java/scripts/vm/hypervisor/xenserver/prepsystemvm.sh $
#set -x
mntpath() {
local vmname=$1
echo "/mnt/$vmname"
}
mount_local() {
local vmname=$1
local disk=$2
local path=$(mntpath $vmname)
mkdir -p ${path}
mount $disk ${path}
return $?
}
umount_local() {
local vmname=$1
local path=$(mntpath $vmname)
umount $path
local ret=$?
rm -rf $path
return $ret
}
patch_scripts() {
local vmname=$1
local patchfile=$2
local path=$(mntpath $vmname)
local oldmd5=
local md5file=${path}/md5sum
[ -f ${md5file} ] && oldmd5=$(cat ${md5file})
local newmd5=$(md5sum $patchfile | awk '{print $1}')
if [ "$oldmd5" != "$newmd5" ]
then
tar xzf $patchfile -C ${path}
echo ${newmd5} > ${md5file}
fi
return 0
}
#
# To use existing console proxy .zip-based package file
#
patch_console_proxy() {
local vmname=$1
local patchfile=$2
local path=$(mntpath $vmname)
local oldmd5=
local md5file=${path}/usr/local/cloud/systemvm/md5sum
[ -f ${md5file} ] && oldmd5=$(cat ${md5file})
local newmd5=$(md5sum $patchfile | awk '{print $1}')
if [ "$oldmd5" != "$newmd5" ]
then
echo "All" | unzip $patchfile -d ${path}/usr/local/cloud/systemvm >/dev/null 2>&1
chmod 555 ${path}/usr/local/cloud/systemvm/run.sh
find ${path}/usr/local/cloud/systemvm/ -name \*.sh | xargs chmod 555
echo ${newmd5} > ${md5file}
fi
return 0
}
consoleproxy_svcs() {
local vmname=$1
local path=$(mntpath $vmname)
chroot ${path} /sbin/chkconfig cloud on
chroot ${path} /sbin/chkconfig postinit on
chroot ${path} /sbin/chkconfig domr_webserver off
chroot ${path} /sbin/chkconfig haproxy off
chroot ${path} /sbin/chkconfig dnsmasq off
chroot ${path} /sbin/chkconfig sshd on
chroot ${path} /sbin/chkconfig httpd off
chroot ${path} /sbin/chkconfig nfs off
chroot ${path} /sbin/chkconfig nfslock off
chroot ${path} /sbin/chkconfig rpcbind off
chroot ${path} /sbin/chkconfig rpcidmap off
cp ${path}/etc/sysconfig/iptables-consoleproxy ${path}/etc/sysconfig/iptables
}
secstorage_svcs() {
local vmname=$1
local path=$(mntpath $vmname)
chroot ${path} /sbin/chkconfig cloud on
chroot ${path} /sbin/chkconfig postinit on
chroot ${path} /sbin/chkconfig domr_webserver off
chroot ${path} /sbin/chkconfig haproxy off
chroot ${path} /sbin/chkconfig dnsmasq off
chroot ${path} /sbin/chkconfig sshd on
chroot ${path} /sbin/chkconfig httpd off
cp ${path}/etc/sysconfig/iptables-secstorage ${path}/etc/sysconfig/iptables
mkdir -p ${path}/var/log/cloud
}
routing_svcs() {
local vmname=$1
local path=$(mntpath $vmname)
chroot ${path} /sbin/chkconfig cloud off
chroot ${path} /sbin/chkconfig domr_webserver on
chroot ${path} /sbin/chkconfig haproxy on
chroot ${path} /sbin/chkconfig dnsmasq on
chroot ${path} /sbin/chkconfig sshd on
chroot ${path} /sbin/chkconfig nfs off
chroot ${path} /sbin/chkconfig nfslock off
chroot ${path} /sbin/chkconfig rpcbind off
chroot ${path} /sbin/chkconfig rpcidmap off
cp ${path}/etc/sysconfig/iptables-domr ${path}/etc/sysconfig/iptables
}
lflag=
dflag=
while getopts 't:l:d:' OPTION
do
case $OPTION in
l) lflag=1
vmname="$OPTARG"
;;
t) tflag=1
vmtype="$OPTARG"
;;
d) dflag=1
rootdisk="$OPTARG"
;;
*) ;;
esac
done
if [ "$lflag$tflag$dflag" != "111" ]
then
printf "Error: Not enough parameters\n" >&2
exit 1
fi
mount_local $vmname $rootdisk
if [ $? -gt 0 ]
then
printf "Failed to mount disk $rootdisk for $vmname\n" >&2
exit 1
fi
if [ -f $(dirname $0)/patch.tgz ]
then
patch_scripts $vmname $(dirname $0)/patch.tgz
if [ $? -gt 0 ]
then
printf "Failed to apply patch patch.tgz to $vmname\n" >&2
umount_local $vmname
exit 4
fi
fi
cpfile=$(dirname $0)/systemvm-premium.zip
if [ "$vmtype" == "consoleproxy" ] || [ "$vmtype" == "secstorage" ] && [ -f $cpfile ]
then
patch_console_proxy $vmname $cpfile
if [ $? -gt 0 ]
then
printf "Failed to apply patch $cpfile to $vmname\n" >&2
umount_local $vmname
exit 5
fi
fi
# domr is 64-bit; we need to copy the 32-bit chkconfig into domr.
# This is a workaround; a 32-bit domr will be used later.
dompath=$(mntpath $vmname)
cp /sbin/chkconfig $dompath/sbin
# copy public key to system vm
cp $(dirname $0)/id_rsa.pub $dompath/root/.ssh/authorized_keys
#empty known hosts
echo "" > $dompath/root/.ssh/known_hosts
if [ "$vmtype" == "router" ]
then
routing_svcs $vmname
if [ $? -gt 0 ]
then
printf "Failed to execute routing_svcs\n" >&2
umount_local $vmname
exit 6
fi
fi
if [ "$vmtype" == "consoleproxy" ]
then
consoleproxy_svcs $vmname
if [ $? -gt 0 ]
then
printf "Failed to execute consoleproxy_svcs\n" >&2
umount_local $vmname
exit 7
fi
fi
if [ "$vmtype" == "secstorage" ]
then
secstorage_svcs $vmname
if [ $? -gt 0 ]
then
printf "Failed to execute secstorage_svcs\n" >&2
umount_local $vmname
exit 8
fi
fi
umount_local $vmname
exit $?
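`patch_scripts()` and `patch_console_proxy()` above both gate re-patching on an md5 comparison against a recorded `md5sum` file. A minimal sketch of that decision, assuming the patch contents are available as bytes (helper names are hypothetical):

```python
import hashlib

def file_md5(data):
    """md5 hex digest of the patch file contents, like `md5sum` output."""
    return hashlib.md5(data).hexdigest()

def needs_patch(patch_bytes, recorded_md5):
    """Re-apply only when the patch md5 differs from the one recorded
    on the system VM image; recorded_md5 is None when no md5sum file
    exists yet, which forces a first-time apply."""
    return file_md5(patch_bytes) != recorded_md5
```

After a successful apply, the new digest would be written back to the `md5sum` file, as the shell functions do with `echo ${newmd5} > ${md5file}`.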


@ -1,83 +0,0 @@
#!/bin/bash
#set -x
usage() {
printf "Usage: %s [uuid of this host] [uuid of the sr to place the heartbeat]\n" $(basename $0) >&2
}
if [ -z $1 ]; then
usage
exit 2
fi
if [ -z $2 ]; then
usage
exit 3
fi
if [ `xe host-list | grep $1 | wc -l` -ne 1 ]; then
printf "Error: Unable to find the host uuid: $1\n" >&2
usage
exit 4
fi
if [ `xe sr-list uuid=$2 | wc -l` -eq 0 ]; then
printf "Error: Unable to find SR with uuid: $2\n" >&2
usage
exit 5
fi
if [ `xe pbd-list sr-uuid=$2 | grep -B 1 $1 | wc -l` -eq 0 ]; then
printf "Error: Unable to find a pbd for the SR: $2\n" >&2
usage
exit 6
fi
srtype=`xe sr-param-get param-name=type uuid=$2`
if [ "$srtype" == "nfs" ];then
filename=/var/run/sr-mount/$2/hb-$1
files=`ls /var/run/sr-mount/$2 | grep "hb-$1"`
if [ -z "$files" ]; then
date=`date +%s`
echo "$date" > $filename
fi
else
link=/dev/VG_XenStorage-$2/hb-$1
lv=`lvscan | grep $link`
if [ -z "$lv" ]; then
lvcreate VG_XenStorage-$2 -n hb-$1 --size 1M
if [ $? -ne 0 ]; then
printf "Error: Unable to create heartbeat SR\n" >&2
exit 7
fi
lv=`lvscan | grep $link`
if [ -z "$lv" ]; then
printf "Error: Unable to create heartbeat SR\n" >&2
exit 8
fi
fi
if [ `echo $lv | awk '{print $1}'` == "inactive" ]; then
lvchange -ay $link
if [ $? -ne 0 ]; then
printf "Error: Unable to make the heartbeat SR active\n" >&2
exit 8
fi
fi
if [ ! -L $link ]; then
printf "Error: Unable to find the soft link $link\n" >&2
exit 9
fi
dd if=/dev/zero of=$link bs=1 count=100
fi
echo "======> DONE <======"
exit 0


@ -38,9 +38,9 @@ def echo(fn):
def get_xapi_session():
xapi = XenAPI.xapi_local();
xapi.login_with_password("","")
return xapi._session
session = XenAPI.xapi_local();
session.login_with_password("","")
return session
@echo
def gethostvmstats(session, args):
@ -52,17 +52,7 @@ def gethostvmstats(session, args):
result = hostvmstats.get_stats(session, collect_host_stats, consolidation_function, interval, start_time)
return result
@echo
def find_bond(session, args):
nic = args['arg1']
try:
cmd = ["bash", "/opt/xensource/bin/find_bond.sh", nic]
txt = util.pread2(cmd)
except:
txt = ''
return txt
@echo
def setup_iscsi(session, args):
uuid=args['uuid']
@ -428,40 +418,6 @@ def networkUsage(session, args):
return txt
@echo
def setup_heartbeat_sr(session, args):
host = args['host']
sr = args['sr']
try:
cmd = ["bash", "/opt/xensource/bin/setup_heartbeat_sr.sh", host, sr]
txt = util.pread2(cmd)
except:
txt = ''
return txt
@echo
def check_heartbeat(session, args):
host = args['host']
interval = args['interval']
try:
cmd = ["bash", "/opt/xensource/bin/check_heartbeat.sh", host, interval]
txt = util.pread2(cmd)
except:
txt=''
return txt
@echo
def heartbeat(session, args):
host = args['host']
interval = args['interval']
try:
cmd = ["/bin/bash", "/opt/xensource/bin/launch_hb.sh", host, interval]
txt = util.pread2(cmd)
except:
txt='fail'
return txt
def get_private_nic( args):
session = get_xapi_session()
vms = session.xenapi.VM.get_all()
@ -1132,5 +1088,5 @@ def network_rules(session, args):
if __name__ == "__main__":
XenAPIPlugin.dispatch({"pingtest": pingtest, "setup_heartbeat_sr":setup_heartbeat_sr, "check_heartbeat":check_heartbeat, "heartbeat": heartbeat, "setup_iscsi":setup_iscsi, "find_bond": find_bond, "gethostvmstats": gethostvmstats, "getvncport": getvncport, "getgateway": getgateway, "getnetwork": getnetwork, "preparemigration": preparemigration, "setIptables": setIptables, "patchdomr": patchdomr, "pingdomr": pingdomr, "pingxenserver": pingxenserver, "ipassoc": ipassoc, "vm_data": vm_data, "savePassword": savePassword, "saveDhcpEntry": saveDhcpEntry, "setFirewallRule": setFirewallRule, "setLoadBalancerRule": setLoadBalancerRule, "createFile": createFile, "deleteFile": deleteFile, "checkMount": checkMount, "checkIscsi": checkIscsi, "networkUsage": networkUsage, "network_rules":network_rules, "can_bridge_firewall":can_bridge_firewall, "default_network_rules":default_network_rules, "destroy_network_rules_for_vm":destroy_network_rules_for_vm, "default_network_rules_systemvm":default_network_rules_systemvm, "get_rule_logs_for_vms":get_rule_logs_for_vms, "setLinkLocalIP":setLinkLocalIP})
XenAPIPlugin.dispatch({"pingtest": pingtest, "setup_iscsi":setup_iscsi, "gethostvmstats": gethostvmstats, "getvncport": getvncport, "getgateway": getgateway, "getnetwork": getnetwork, "preparemigration": preparemigration, "setIptables": setIptables, "patchdomr": patchdomr, "pingdomr": pingdomr, "pingxenserver": pingxenserver, "ipassoc": ipassoc, "vm_data": vm_data, "savePassword": savePassword, "saveDhcpEntry": saveDhcpEntry, "setFirewallRule": setFirewallRule, "setLoadBalancerRule": setLoadBalancerRule, "createFile": createFile, "deleteFile": deleteFile, "checkMount": checkMount, "checkIscsi": checkIscsi, "networkUsage": networkUsage, "network_rules":network_rules, "can_bridge_firewall":can_bridge_firewall, "default_network_rules":default_network_rules, "destroy_network_rules_for_vm":destroy_network_rules_for_vm, "default_network_rules_systemvm":default_network_rules_systemvm, "get_rule_logs_for_vms":get_rule_logs_for_vms, "setLinkLocalIP":setLinkLocalIP})


@ -458,12 +458,16 @@ def getIsTrueString(stringValue):
return booleanValue
def makeUnavailable(uuid, primarySRPath, isISCSI):
if not isISCSI:
return
VHD = getVHD(uuid, isISCSI)
path = os.path.join(primarySRPath, VHD)
manageAvailability(path, '-an')
return
def manageAvailability(path, value):
if path.__contains__("/var/run/sr-mount"):
return
util.SMlog("Setting availability of " + path + " to " + value)
try:
cmd = ['/usr/sbin/lvchange', value, path]


@ -1,62 +0,0 @@
#!/bin/bash
# Version @VERSION@
#set -x
usage() {
printf "Usage: %s [uuid of this host] [interval in seconds]\n" $(basename $0) >&2
}
if [ -z $1 ]; then
usage
exit 2
fi
if [ -z $2 ]; then
usage
exit 3
fi
hbs=
while true
do
sleep $2
date=`date +%s`
lvscan
hbs=`ls -l /dev/VG*/hb-$1 | awk '{print $9}'`
for hb in $hbs
do
echo $date | dd of=$hb count=100 bs=1
if [ $? -ne 0 ]; then
reboot -f
echo "Problem with $hb";
fi
done
dirs=`ls /var/run/sr-mount`
if [ "$dirs" == "" ]; then
continue
fi
ls /var/run/sr-mount/* >/dev/null 2>&1
if [ $? -ne 0 ]; then
reboot -f
echo "Problem with ls";
fi
hbs=`ls -l /var/run/sr-mount/*/hb-$1 | awk '{print $9}'`
for hb in $hbs
do
echo $date > $hb
if [ $? -ne 0 ]; then
reboot -f
echo "Problem with $hb";
fi
done
done


@ -18,25 +18,21 @@ nfs.py=/opt/xensource/sm
patch.tgz=..,0775,/opt/xensource/bin
vmops=..,0755,/etc/xapi.d/plugins
vmopsSnapshot=..,0755,/etc/xapi.d/plugins
systemvm-premium.zip=../../../../../vms,0755,/opt/xensource/bin
hostvmstats.py=..,0755,/opt/xensource/sm
xs_cleanup.sh=..,0755,/opt/xensource/bin
systemvm.iso=../../../../../vms,0644,/opt/xensource/packages/iso
hostvmstats.py=..,0755,/opt/xensource/sm
id_rsa.cloud=..,0600,/root/.ssh
network_info.sh=..,0755,/opt/xensource/bin
prepsystemvm.sh=..,0755,/opt/xensource/bin
setupxenserver.sh=..,0755,/opt/xensource/bin
make_migratable.sh=..,0755,/opt/xensource/bin
networkUsage.sh=..,0755,/opt/xensource/bin
find_bond.sh=..,0755,/opt/xensource/bin
setup_iscsi.sh=..,0755,/opt/xensource/bin
version=..,0755,/opt/xensource/bin
setup_heartbeat_sr.sh=..,0755,/opt/xensource/bin
check_heartbeat.sh=..,0755,/opt/xensource/bin
xenheartbeat.sh=..,0755,/opt/xensource/bin
launch_hb.sh=..,0755,/opt/xensource/bin
pingtest.sh=../../..,0755,/opt/xensource/bin
dhcp_entry.sh=../../../../network/domr/,0755,/opt/xensource/bin
ipassoc.sh=../../../../network/domr/,0755,/opt/xensource/bin
vm_data.sh=../../../../network/domr/,0755,/opt/xensource/bin
save_password_to_domr.sh=../../../../network/domr/,0755,/opt/xensource/bin
networkUsage.sh=../../../../network/domr/,0755,/opt/xensource/bin
call_firewall.sh=../../../../network/domr/,0755,/opt/xensource/bin
call_loadbalancer.sh=../../../../network/domr/,0755,/opt/xensource/bin
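Each entry in the listing above follows a name=source,mode,destination convention (file name, relative source directory, install mode, install path). A small sketch, illustrative only and not from the commit, of splitting one such entry with POSIX parameter expansion:

```shell
# Parse one name=source,mode,destination entry from the listing above.
# The sample line is taken verbatim from the file; the variable names
# (name, src, mode, dest) are our own for illustration.
line='vmops=..,0755,/etc/xapi.d/plugins'
name=${line%%=*}       # part before the first '='    -> vmops
rest=${line#*=}        # everything after the '='
src=${rest%%,*}        # first comma-separated field  -> ..
rest=${rest#*,}
mode=${rest%%,*}       # second field                 -> 0755
dest=${rest#*,}        # remainder                    -> /etc/xapi.d/plugins
echo "$name $src $mode $dest"
```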


@@ -1,54 +0,0 @@
#!/bin/bash
#set -x
usage() {
printf "Usage: %s \n" $(basename $0) >&2
}
# remove device which is in xenstore but not in xapi
remove_device() {
be=$1
xenstore-write /local/domain/0/backend/tap/0/$be/online 0 &>/dev/null
xenstore-write /local/domain/0/backend/tap/0/$be/shutdown-request normal &>/dev/null
for i in $(seq 20)
do
sleep 1
xenstore-exists /local/domain/0/backend/tap/0/$be/shutdown-done &>/dev/null
if [ $? -eq 0 ] ; then
xenstore-rm /local/domain/0/device/vbd/$be &>/dev/null
xenstore-rm /local/domain/0/backend/tap/0/$be &>/dev/null
xenstore-rm /local/domain/0/error/backend/tap/0/$be &>/dev/null
xenstore-rm /local/domain/0/error/device/vbd/$be &>/dev/null
return
fi
xenstore-exists /local/domain/0/backend/tap/0/$be &>/dev/null
if [ $? -ne 0 ] ; then
return
fi
done
echo "unplug device $be failed"
exit 2
}
bes=`xenstore-list /local/domain/0/backend/tap/0`
if [ -z "$bes" ]; then
exit 0
fi
for be in $bes
do
device=`xenstore-read /local/domain/0/backend/tap/0/$be/dev`
ls $device >/dev/null 2>&1
if [ $? -ne 0 ]; then
remove_device $be
fi
done
echo "======> DONE <======"
exit 0
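remove_device above performs a bounded-retry handshake: it requests shutdown, then polls xenstore up to 20 times, one second apart, for shutdown-done before declaring failure. That pattern can be sketched generically as below; wait_for and its condition argument are hypothetical stand-ins for the xenstore-exists calls, not part of this commit:

```shell
# Generic bounded-retry loop: poll a condition command up to N times,
# one second apart, returning 0 on success and 1 on timeout.
wait_for() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" && return 0          # condition met
    i=$((i + 1))
    sleep 1
  done
  return 1                    # gave up after $tries attempts
}

# In the real script the condition would be:
#   wait_for 20 xenstore-exists /local/domain/0/backend/tap/0/$be/shutdown-done
wait_for 3 true && echo "condition met"
```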

scripts/wscript_build Normal file

@@ -0,0 +1 @@
bld.substitute('**',"${AGENTLIBDIR}/scripts",chmod=0755)


@@ -468,6 +468,9 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
AgentAttache attache = createAttache(id, server, resource);
if (!resource.IsRemoteAgent())
notifyMonitorsOfConnection(attache, startup);
else {
_hostDao.updateStatus(server, Event.AgentConnected, _nodeId);
}
return attache;
}
@@ -545,11 +548,12 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
_dcDao.releasePrivateIpAddress(host.getPrivateIpAddress(), host.getDataCenterId(), null);
AgentAttache attache = _agents.get(hostId);
handleDisconnect(attache, Status.Event.Remove, false);
/* Disconnected agent needs special handling here
host.setGuid(null);
host.setClusterId(null);
_hostDao.update(host.getId(), host);
*/
_hostDao.remove(hostId);
//delete the associated primary storage from db
@@ -613,6 +617,8 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
templateHostSC.addAnd("hostId", SearchCriteria.Op.EQ, secStorageHost.getId());
_vmTemplateHostDao.remove(templateHostSC);
/*Disconnected agent needs special handling here*/
secStorageHost.setGuid(null);
txn.commit();
return true;
}catch (Throwable t) {
@@ -1126,11 +1132,16 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
}
}
}
@Override
public Answer easySend(final Long hostId, final Command cmd) {
return easySend(hostId, cmd, _wait);
}
@Override
public Answer easySend(final Long hostId, final Command cmd) {
public Answer easySend(final Long hostId, final Command cmd, int timeout) {
try {
final Answer answer = send(hostId, cmd, _wait);
final Answer answer = send(hostId, cmd, timeout);
if (answer == null) {
s_logger.warn("send returns null answer");
return null;
@@ -1785,28 +1796,28 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
}
}
protected void upgradeAgent(final Link link, final byte[] request, final String reason) {
if (reason == UnsupportedVersionException.IncompatibleVersion) {
final UpgradeResponse response = new UpgradeResponse(request, _upgradeMgr.getAgentUrl());
try {
s_logger.info("Asking for the agent to update due to incompatible version: " + response.toString());
link.send(response.toBytes());
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to send response due to connection closed: " + response.toString());
}
return;
}
assert (reason == UnsupportedVersionException.UnknownVersion) : "Unknown reason: " + reason;
final UpgradeResponse response = new UpgradeResponse(request, _upgradeMgr.getAgentUrl());
try {
s_logger.info("Asking for the agent to update due to unknown version: " + response.toString());
link.send(response.toBytes());
} catch (final ClosedChannelException e) {
s_logger.warn("Unable to send response due to connection closed: " + response.toString());
}
}
// protected void upgradeAgent(final Link link, final byte[] request, final String reason) {
//
// if (reason == UnsupportedVersionException.IncompatibleVersion) {
// final UpgradeResponse response = new UpgradeResponse(request, _upgradeMgr.getAgentUrl());
// try {
// s_logger.info("Asking for the agent to update due to incompatible version: " + response.toString());
// link.send(response.toBytes());
// } catch (final ClosedChannelException e) {
// s_logger.warn("Unable to send response due to connection closed: " + response.toString());
// }
// return;
// }
//
// assert (reason == UnsupportedVersionException.UnknownVersion) : "Unknown reason: " + reason;
// final UpgradeResponse response = new UpgradeResponse(request, _upgradeMgr.getAgentUrl());
// try {
// s_logger.info("Asking for the agent to update due to unknown version: " + response.toString());
// link.send(response.toBytes());
// } catch (final ClosedChannelException e) {
// s_logger.warn("Unable to send response due to connection closed: " + response.toString());
// }
// }
protected class SimulateStartTask implements Runnable {
ServerResource resource;
@@ -1857,17 +1868,17 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
return;
}
StartupCommand startup = (StartupCommand) cmd;
if ((_upgradeMgr.registerForUpgrade(-1, startup.getVersion()) == UpgradeManager.State.RequiresUpdate) && (_upgradeMgr.getAgentUrl() != null)) {
final UpgradeCommand upgrade = new UpgradeCommand(_upgradeMgr.getAgentUrl());
final Request req = new Request(1, -1, -1, new Command[] { upgrade }, true, true);
s_logger.info("Agent requires upgrade: " + req.toString());
try {
link.send(req.toBytes());
} catch (ClosedChannelException e) {
s_logger.warn("Unable to tell agent it should update.");
}
return;
}
// if ((_upgradeMgr.registerForUpgrade(-1, startup.getVersion()) == UpgradeManager.State.RequiresUpdate) && (_upgradeMgr.getAgentUrl() != null)) {
// final UpgradeCommand upgrade = new UpgradeCommand(_upgradeMgr.getAgentUrl());
// final Request req = new Request(1, -1, -1, new Command[] { upgrade }, true, true);
// s_logger.info("Agent requires upgrade: " + req.toString());
// try {
// link.send(req.toBytes());
// } catch (ClosedChannelException e) {
// s_logger.warn("Unable to tell agent it should update.");
// }
// return;
// }
try {
StartupCommand[] startups = new StartupCommand[cmds.length];
for (int i = 0; i < cmds.length; i++)
@@ -2036,7 +2047,7 @@ public class AgentManagerImpl implements AgentManager, HandlerFactory {
}
} catch (final UnsupportedVersionException e) {
s_logger.warn(e.getMessage());
upgradeAgent(task.getLink(), data, e.getReason());
//upgradeAgent(task.getLink(), data, e.getReason());
}
} else if (type == Task.Type.CONNECT) {
} else if (type == Task.Type.DISCONNECT) {


@@ -124,7 +124,11 @@ public class DirectAgentAttache extends AgentAttache {
public synchronized void run() {
try {
ServerResource resource = _resource;
if (resource != null) {
if (resource.IsRemoteAgent()) {
return;
}
PingCommand cmd = resource.getCurrentStatus(_id);
if (cmd == null) {

Some files were not shown because too many files have changed in this diff.