Wednesday, June 29, 2011

Install OpenNebula 2.2.1 on Ubuntu 11.04 Enterprise Server

Recently I was in the process of setting up an open source cloud computing environment. The experience I gained through this project was very interesting, to say the least. My goal is to create an ultra-reliable cloud computing environment with the potential to handle not only IaaS requirements but also PaaS and, eventually, SaaS. On top of that, this environment should be able to use commodity servers or even desktop-grade machines to form the cloud. After much research and experimentation, I eventually decided to go with OpenNebula due to its rich feature set and well thought-out integration with different hypervisors as well as public cloud vendors.

In this post, I would like to share my findings from setting up OpenNebula 2.2.1 on Ubuntu 11.04 Enterprise Server, to hopefully make your life a bit easier if you decide to go down a similar path. The hypervisor we chose is KVM; however, OpenNebula works with Xen and VMware as well.

(In this guide I will focus on how, not why. To find out why certain steps are performed, please refer to the OpenNebula documentation.)

Cloud Controller Setup

1. Getting OpenNebula 2.2.1

I basically just downloaded the tarball from http://downloads.dsa-research.org/opennebula/

2. Requisite Software Installation

sudo apt-get install ruby sqlite3 libxmlrpc-c3-0 openssl ssh

sudo apt-get install ruby-dev rubygems rake make libxml-parser-ruby libxml2 libxslt1.1 libxml-ruby libxslt-ruby libnokogiri-ruby1.8

3. Setup Directory Structure

$ tree /srv
/srv/
|
`-- cloud
  |-- one
  `-- images
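
The layout above can be created with mkdir -p. A minimal sketch, using a BASE variable so you can try it against a scratch root first (set BASE=/srv and run with sudo for the real install):

```shell
# Create the controller directory layout under $BASE.
# BASE defaults to a scratch directory so this is safe to run as-is;
# use BASE=/srv (with sudo) on the actual controller.
BASE="${BASE:-/tmp/cloud-demo}"
mkdir -p "$BASE/cloud/one" "$BASE/cloud/images"
```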


4. Create OpenNebula User and Group Account

groupadd cloud
useradd -d /srv/cloud/one -g cloud -m oneadmin
sudo passwd oneadmin
$ id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)


5. Prepare for Build

sudo apt-get install libsqlite3-dev libxmlrpc-c3-dev scons g++ ruby libopenssl-ruby libssl-dev ruby-dev make rake rubygems libxml-parser-ruby1.8 libxslt1-dev

6. Fix bug #265 (Optional)

Currently OpenNebula automatically deletes any VM image that fails to start, for whatever reason. This behavior might not be desirable for some installations, including ours. It is filed as a bug/improvement for the 3.0 release, but you can apply a temporary patch by following the instructions at
http://dev.opennebula.org/issues/265

7. Build

~$ wget <binary download url>
~$ tar xzf <downloaded tarball>
~$ cd one-2.0
~/one-2.0$ scons -j2
....
scons: done building targets.
~/one-2.0$ ./install.sh -d /srv/cloud/one

8. Add OpenNebula environment variables

Edit ~/.profile (or ~/.bashrc if you use bash) and add the following lines:

export PATH=$PATH:/var/lib/gems/1.8/bin:/srv/cloud/one/bin
export ONE_LOCATION=/srv/cloud/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export ONE_AUTH=/srv/cloud/one/.one/one_auth

9. Create the user's OpenNebula config directory:

~$ mkdir ~/.one

10. Configure Authentication File

# Add this one-liner to the one_auth file, matching the oneadmin user's password
~$ vim /srv/cloud/one/.one/one_auth

oneadmin:oneadmin
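
Equivalently, the file can be created non-interactively. A sketch, with ONE_HOME defaulting to a scratch path so it is safe to try (use /srv/cloud/one, oneadmin's home, on the real controller):

```shell
# Create the one_auth credentials file with restrictive permissions.
# ONE_HOME defaults to a scratch directory; it is /srv/cloud/one on a
# real controller.
ONE_HOME="${ONE_HOME:-/tmp/one-demo}"
mkdir -p "$ONE_HOME/.one"
printf 'oneadmin:oneadmin\n' > "$ONE_HOME/.one/one_auth"
chmod 600 "$ONE_HOME/.one/one_auth"
```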

11. Prepare to install Sunstone (WebUI for OpenNebula)

sudo gem install json sinatra thin rack sequel

* Ubuntu does not add the RubyGems binary directory to your PATH, so add /var/lib/gems/1.8/bin to it

At this point your controller server is pretty much set up. Next we will create a cloud node server.


Cloud Node Setup

1. Verify CPU Virtualization Support

Since KVM relies on hardware virtualization support in the CPU, before we install KVM we need to make sure it will work with your hardware.

egrep '(vmx|svm)' /proc/cpuinfo

If there is any match, your CPU has the virtualization extensions KVM needs; otherwise you can try the VMware hypervisor or Xen.
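
The check can be wrapped into a small script that reports the result either way; a sketch:

```shell
# Report whether /proc/cpuinfo advertises Intel VT-x (vmx) or AMD-V (svm)
if grep -E -q '(vmx|svm)' /proc/cpuinfo; then
    virt=supported
else
    virt=unsupported
fi
echo "Hardware virtualization: $virt"
```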

2. Install KVM

If you have already selected KVM during Ubuntu server installation then you really don't have to do anything here, otherwise:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder virt-manager virtinst bridge-utils


3. Remove default NAT bridge

By default, if you selected KVM during the Ubuntu installation, the default NAT bridge is also installed. This NAT bridge is usually not needed for a server-type setup and compromises performance, so I will remove it here.

# virsh net-destroy default
# virsh net-undefine default
# service libvirt-bin restart
# ifconfig


4. Setup bridge for VM network

In a server-type environment you want your VMs to be accessible to anyone on the network; to achieve that you need to bridge the network from the host. Modify the /etc/network/interfaces file.

DHCP

auto eth1
iface eth1 inet manual

auto virbr1
iface virbr1 inet dhcp
hostname 
bridge_ports eth1
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off


Static IP

auto eth1
iface eth1 inet manual

auto virbr1
iface virbr1 inet static
   address 10.128.129.43
   network 10.128.129.0
   netmask 255.255.255.0
   broadcast 10.128.129.255
   gateway 10.128.129.1
   bridge_ports eth1
   bridge_fd 9
   bridge_hello 2
   bridge_maxage 12
   bridge_stp off


5. Setup Software required by OpenNebula

sudo apt-get install ruby ssh

6. Create Directory Structure

$ tree /srv
/srv/
|
`-- cloud
  `-- one


7. Create OpenNebula User and Group (with the same G/UID)

groupadd --gid 1001 cloud
useradd --uid 1001 -g cloud -G libvirtd -d /srv/cloud/one oneadmin
sudo passwd oneadmin    # set the password to oneadmin
chown -R oneadmin:cloud /srv/cloud

* Make sure the gid and uid match the ones on the controller

8. Setup password-less SSH access from controller to node for oneadmin user

ssh-keygen
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/config
Host *
StrictHostKeyChecking no

* If you are using the SSH transfer driver with OpenNebula, you need password-less SSH access in both directions between the controller and the node.
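
Putting the pieces together, the key setup can be sketched as below. SSH_DIR defaults to a scratch directory so the sketch is safe to run; on the real controller use ~/.ssh and copy id_rsa.pub into the node's ~/.ssh/authorized_keys (ssh-copy-id does this for you):

```shell
# Generate a password-less key pair and relax host-key checking.
# SSH_DIR defaults to a scratch directory; it is ~/.ssh on a real host.
SSH_DIR="${SSH_DIR:-/tmp/ssh-demo}"
mkdir -p "$SSH_DIR"
rm -f "$SSH_DIR/id_rsa" "$SSH_DIR/id_rsa.pub"
ssh-keygen -q -t rsa -N '' -f "$SSH_DIR/id_rsa"
printf 'Host *\n    StrictHostKeyChecking no\n' > "$SSH_DIR/config"
```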

Finally, you are now done setting up a basic cloud environment. Try starting OpenNebula on the controller:

~/one/bin/one start

You should be able to add the node by using the onehost command:

onehost create <node hostname> im_kvm vmm_kvm tm_ssh

I hope you have found this guide helpful and have fun with OpenNebula :) In a future post I will try to cover how to run OpenNebula on the distributed, reliable file system MooseFS to achieve true elasticity without an expensive SAN solution.

Monday, June 20, 2011

How To Configure izPack with Installer plugin in Griffon

Recently I used the Installer plugin for Griffon in one of my open source projects. Overall it was very easy to install and have it create a simple installer for a Griffon application; however, when it came to customizing the izPack configuration I did not find any good documentation. Thanks to the usual helpfulness of Andres Almiray :) and going through some source code, I managed to customize the installer to meet my requirements and would like to share my findings here.

Once you install the installer plugin through:

griffon install-plugin installer

A set of izPack configuration templates is also installed with the plugin. The templates work pretty well for the Griffon example applications but probably do not make much sense for anything else. The easiest way to provide your own customization is to create your own templates and override the default ones by hooking into Griffon's build-time event notification. I will show you how to do that here, step by step.

First: Create your installer source directory to store your configuration and resources such as icons. In this tutorial we will use the following folder structure:

/src/installer/izpack/resources

Second: Create your own izPack configuration. Copy the default configuration to the resources folder you just created. The default configuration files can be found under ~/.griffon/projects/installer/izpack/resources.

Third: Create an event handler to listen for the packaging event and override the default configurations with yours. Open the _Events.groovy file under /scripts (create it if it does not exist yet). Add the following lines:

eventPreparePackageEnd = { installers ->
    ant.copy( todir: "${projectWorkDir}/installer/izpack/resources", overwrite: true ) {
        fileset( dir: "${basedir}/src/installer/izpack/resources", includes: "**" )
    }

    ant.replace( dir: "${projectWorkDir}/installer/izpack/resources" ) {
        replacefilter(token: "@app.name@", value: griffonAppName)
        replacefilter(token: "@app.version@", value: griffonAppVersion)
    }
}


Now run the izPack packaging command

griffon package izpack

You should see a customized installer based on your configuration being generated. I hope you have found this tutorial helpful, and as always you are welcome to leave your feedback and comments here.

JNDI Warrior v0.3 Released

Just released JNDI Warrior 0.3 yesterday. This release implements the following improvements and new features:

  • Better UI look and feel, giving it a more mature look rather than the original demo-like appearance
  • Better ability to manage the classpath per JNDI connection session
  • An izPack-powered cross-platform installer to make the application easy to install on any platform
  • A new feature to create, edit, and execute Groovy scripts directly from the application, making prototyping and debugging with JNDI even easier.
Two built-in variables are currently available within the script:
  1. context - the InitialContext of the connected JNDI provider
  2. out - a PrintWriter instance which allows you to print directly to the script output area
Here is a screenshot of the 0.3 release: