Wednesday, September 14, 2011

Setup NFS4 & LiveMigration with OpenNebula 2.2.1 on Ubuntu 11.04 Natty

About two months ago we set up an experimental internal cloud environment for one of our clients using pure commodity servers and open source technology (OpenNebula 2.2.1 on Ubuntu 11.04 Enterprise Server). Since then the proof-of-concept environment has been working pretty well for our client, hosting internal QA and development workloads as well as some virtualized IT appliances. However, the initial setup was based on the SSH transport, which works well in a small environment but struggles with large VMs. The main disadvantages of the SSH-based configuration are:
  • Performance with large VM images - With the SSH transport every VM image has to be transferred from the repository to the cloud node by the controller, and then retrieved back from the node (if the save flag is on). Obviously this process consumes a lot of bandwidth with large images.
  • No live migration - Live migration requires VM image storage that is shared between cloud nodes. Live migration is the key feature that allows a cloud operator to maintain the illusion of zero downtime and to provide elasticity.
  • Vulnerable to data loss - If, like our client, you are running a cloud on top of commodity servers or even PCs, then most likely your cloud node servers do not have RAID disks or much other hardware redundancy. Since the SSH-based solution transfers the image to the node and then runs the VM from the local hard disk, a hard drive failure while the VM is running can lose all or part of your VM data.
To solve these problems the OpenNebula and Linux communities already provide many solutions; in this post I would like to introduce the most straightforward alternative, NFSv4. With NFS a server can export part of its own file system to client machines, which in turn mount the exported file system locally and use it just like any other local file system. Originally we expected the change to be quite simple and straightforward; however, it turned out there were a few interesting hurdles, some OpenNebula specific and some Ubuntu specific. I hope this post helps you implement a similar solution with little effort.

First: NFS Server

Since this NFS server will be your centralized storage server, a little extra investment in it would not hurt. We recommend at least two physical NICs, dual power supplies, and RAID 1/5/10 disks.

(In our environment we chose software RAID 5 with 3 disks and 1 hot spare)

Install NFS4 Server

sudo apt-get install nfs-kernel-server

In /etc/default/nfs-common turn on IDMAPD:

sudo vi /etc/default/nfs-common

# NFSv4 specific
NEED_IDMAPD=yes
NEED_GSSD=no # no is default

Export /storage folder:
sudo vi /etc/exports
/storage [nfs client or your network](rw,async,no_root_squash,insecure,no_subtree_check,anonuid=1001,anongid=1001)

Here in /etc/exports we use the async flag to improve NFS performance, and we map the default uid and gid to oneadmin/cloud (1001/1001).
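For example, if your cloud nodes live on a 192.168.1.0/24 network (an example subnet; substitute your own client list), the exports line might read:

```
/storage 192.168.1.0/24(rw,async,no_root_squash,insecure,no_subtree_check,anonuid=1001,anongid=1001)
```

Keep in mind that async trades a small risk of data loss on a server crash for better write performance.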

# Refresh export table

sudo exportfs -r

Create the oneadmin user and cloud group

sudo groupadd --gid 1001 cloud

sudo useradd --uid 1001 -g cloud oneadmin

Now you are done with the server setup.

Second: NFS Client (OpenNebula controller and nodes)

Install NFS Client

sudo apt-get install nfs-common

In /etc/default/nfs-common turn on IDMAPD:

# NFSv4 specific
NEED_IDMAPD=yes
NEED_GSSD=no # no is default

# Add mount
sudo vi /etc/fstab
[nfs server]:/storage /storage nfs4 rw,_netdev,rsize=1048576,wsize=1048576,auto 0 0

Restart the OS:
sudo reboot

Now let's test the NFS mount:
# NFS performance test
dd if=/dev/zero of=/storage/100mb bs=131072 count=800
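To put the NFS number in context, you can run the same write against a local disk and compare the throughput dd reports; this sketch uses /tmp as the comparison path (an assumption; point it at any local file system):

```shell
# Write the same 100 MiB (131072 * 800 = 104857600 bytes) to a local path
target=/tmp/ddtest.bin
dd if=/dev/zero of="$target" bs=131072 count=800 2>/dev/null

# Sanity-check the file size before comparing throughput numbers
wc -c < "$target"   # prints 104857600

rm -f "$target"
```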

Shut down OpenNebula and all VMs, then move /srv/cloud/one/var to the NFS mount:

mkdir -p /storage/one
mv /srv/cloud/one/var /storage/one/var

ln -s /storage/one/var /srv/cloud/one/var

(Also merge the var content from the cloud nodes into the same directory if you have been running in SSH mode.)

Turn Off Dynamic Ownership for QEMU on Nodes

sudo vi /etc/libvirt/qemu.conf

# The user ID for QEMU processes run by the system instance
user = "root"

# The group ID for QEMU processes run by the system instance
group = "root"
dynamic_ownership = 0

Otherwise you might hit an "unable to set user and group to '102:105'" error when migrating VMs.

Fix LiveMigration Bug

Right now if you try to live-migrate with this setup you will get a "Cannot access CA certificate '/etc/pki/CA/cacert.pem'" error, since QEMU by default is configured to use TLS. Because OpenNebula already requires bi-directional passwordless SSH access between the controller and the nodes, the easiest fix is to ask QEMU to use SSH instead of TLS.

vi /srv/cloud/one/var/remotes/vmm/kvmrc

export LIBVIRT_URI=qemu+ssh:///system
export QEMU_PROTOCOL=qemu+ssh

Disable AppArmor for Libvirtd on Nodes

By default Ubuntu's AppArmor will prevent libvirtd from accessing the NFS mount. You can add rw permission for the directory or simply disable AppArmor for libvirtd. I chose to disable it:

sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd

At this point you should have a working NFS-based OpenNebula setup. Now uncomment the tm_nfs section in oned.conf, then add your hosts back using the new NFS transport:

onehost create im_kvm vmm_kvm tm_nfs

You should now be able to use all the features OpenNebula provides, including live migration (onevm livemigrate), and should see a significant performance improvement in VM provisioning. Have fun and enjoy :-)

Saturday, September 10, 2011

Griffon Validation Plugin v0.8.1 Release

Just released v0.8.1 of the Griffon Validation Plugin, a minor bug-fix release to address the multi-error rendering issue recently discovered by Frank Shaw. If you are using several constraints, especially custom constraints, on your model, it is highly recommended to upgrade to this release immediately, since the previous error rendering engine does not handle the multi-error scenario correctly. To upgrade please use the following command:

griffon install-plugin validation

Friday, August 19, 2011

Griffon Validation Plugin 0.8 Released

I am happy to announce the release of Griffon Validation plugin v0.8. In this release the following enhancements and new features were introduced:

1. Upgraded to match Griffon core 0.9.3 release

For the v0.8 release you will need Griffon 0.9.3+ to run it.

2. Upgraded to i18n plugin 0.4 release

Thanks to the new multi-bundle support in the i18n plugin, when using the validation plugin you no longer need to manually create default error messages. The validation plugin now ships with these default messages and will automatically fall back to them if no other localized messages are found.

3. Real time validation support

Now the validation plugin can automatically trigger validation for your model beans if you include the realTime flag in your annotation.


@Validatable(realTime=true)
class MyModel {
    // an illustrative property and constraint
    @Bindable String name

    static constraints = {
        name(blank: false)
    }
}

The real-time validation feature is implemented on top of property change support; therefore any property value change, for example one triggered by bind{}, will invoke the validation logic associated with that particular property. The actual validation is performed in a separate thread using GriffonApplication.execAsync() before it switches back to the EDT for error rendering. Currently this flag only works with Griffon MVC model beans; if you apply it to a regular POGO the flag will be ignored.

As usual you can upgrade the validation plugin by simply issuing the following command:

griffon install-plugin validation

And please refer to the plugin Wiki for detailed information about the plugin.

Monday, July 11, 2011

The End is the Beginning

I have decided today to terminate the Hydra Cache project. What is Hydra Cache anyway?

About two and a half years ago, a few friends and I, inspired by our experience with one of our clients at the time and by Amazon's Dynamo design, started an open source project to build a distributed elastic cache called Hydra Cache. However, after the inception of this project I personally got involved in a number of startup businesses and open source projects, spreading myself too thin, and even today the Hydra Cache project is still only at a pre-release 1.0RC1 stage (although really close to being release ready).

Of course I have only myself to blame for the slow development of a promising project. When I searched for the reason why I wanted to terminate the project despite it being so close to release, I had many answers, such as capable alternative open source options (JBoss Infinispan) now being available, but I think the most important reason is that I lost my drive and obsession with this idea a long time ago. Everything I did after that point, for one reason or another, was not really helping the project but was rather me dragging my feet before admitting its fate. Nevertheless this project taught me many lessons, both technical and non-technical; most importantly, I realized that the best way to do an outstanding job on an open source project is to do it with passion and have fun. The moment you realize the flame of your passion is gone and you are no longer having fun, you should stop dragging it along. Since we don't get paid for open source contributions, if you are not having fun then why even bother :)

Wednesday, June 29, 2011

Install OpenNebula 2.2.1 on Ubuntu 11.04 Enterprise Server

Recently I was in the process of setting up an open source cloud computing environment. The experience I gained through this project was very interesting, to say the least. The goal was to create an ultra-reliable cloud computing environment with the potential to handle not only IaaS requirements but also PaaS and future SaaS goals. On top of that, this environment should be able to utilize commodity servers or even desktop-grade machines to form the cloud. After much research and experimentation, I eventually decided to go with OpenNebula due to its rich feature set and well thought-out integration with different hypervisors as well as public cloud vendors.

In this post I would like to share my findings from setting up OpenNebula 2.2.1 on Ubuntu 11.04 Enterprise Server, to maybe make your life a bit easier if you decide to go down a similar path. The hypervisor technology we decided to use is KVM; however, OpenNebula works with Xen and VMware as well.

(In this guide I will focus on how, not why. To find out more about why certain steps are performed, please refer to the OpenNebula documentation.)

Cloud Controller Setup

1. Getting OpenNebula 2.2.1

I basically just downloaded the tar ball from the OpenNebula web site.

2. Requisite Software Installation

sudo apt-get install ruby sqlite3 libxmlrpc-c3-0 openssl ssh

sudo apt-get install ruby-dev rubygems rake make libxml-parser-ruby libxml2 libxslt1.1 libxml-ruby libxslt-ruby libnokogiri-ruby1.8

3. Setup Directory Structure

$ tree /srv
`-- cloud
  |-- one
  `-- images

4. Create OpenNebula User and Group Account

sudo groupadd cloud
sudo useradd -d /srv/cloud/one -g cloud -m oneadmin
sudo passwd oneadmin
$ id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)

5. Prepare for Build

sudo apt-get install libsqlite3-dev libxmlrpc-c3-dev scons g++ ruby libopenssl-ruby libssl-dev ruby-dev make rake rubygems libxml-parser-ruby1.8 libxslt1-dev

6. Fix bug #265 (Optional)

Currently OpenNebula automatically deletes a VM image that fails to start for whatever reason. This behavior might not be desirable for some installations, including ours. This is currently filed as a bug/improvement for the 3.0 release, but you can apply a temporary patch following the instructions on this page.

7. Build

~$ wget <binary download url>
~$ tar xzf <downloaded tarball>
~$ cd one-2.0
~/one-2.0$ scons -j2
scons: done building targets.
~/one-2.0$ ./install.sh -d /srv/cloud/one

8. Add OpenNebula environment variables

Edit ~/.profile and add the following lines (or .bashrc if you want to use bash)

export PATH=$PATH:/var/lib/gems/1.8/bin:/srv/cloud/one/bin
export ONE_LOCATION=/srv/cloud/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export ONE_AUTH=/srv/cloud/one/.one/one_auth

9. Create the user's OpenNebula config directory:

~$ mkdir ~/.one

10. Configure Authentication File

# Add a one-liner of the form username:password to one_auth, matching the oneadmin user's password
~$ vim /srv/cloud/one/.one/one_auth
oneadmin:<password>


11. Prepare to install Sunstone (WebUI for OpenNebula)

sudo gem install json sinatra thin rack sequel

* Ubuntu does not add the gem bin directory to your path, so add /var/lib/gems/1.8/bin to your PATH

At this point your controller server is pretty much set up. Next we will create a cloud node server.

Cloud Node Setup

1. Verify CPU Virtualization Support

Since KVM relies on the CPU's hardware virtualization support, before we install KVM we need to make sure it will work with your hardware.

egrep '(vmx|svm)' /proc/cpuinfo

If there is any match, your CPU supports KVM; otherwise you can try the VMware hypervisor or Xen.
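As a convenience, the flag check can be wrapped into a small script that reports a count instead of raw flag lines (just a sketch; on Ubuntu the cpu-checker package's kvm-ok does a more thorough job):

```shell
# Count logical CPUs that advertise Intel VT-x (vmx) or AMD-V (svm)
cores=$(grep -cE 'vmx|svm' /proc/cpuinfo 2>/dev/null || true)
cores=${cores:-0}

if [ "$cores" -gt 0 ]; then
    echo "KVM supported on $cores logical CPUs"
else
    echo "No hardware virtualization support detected"
fi
```

Note that a count of zero inside a VM may simply mean nested virtualization is not exposed to the guest.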

2. Install KVM

If you have already selected KVM during the Ubuntu server installation then you really don't have to do anything here; otherwise:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder virt-manager virtinst bridge-utils

3. Remove default NAT bridge

By default, if you selected KVM during the Ubuntu installation it will also install the default NAT bridge. This NAT bridge is usually not needed for a server-type setup and compromises performance, so I will remove it here.

# virsh net-destroy default
# virsh net-undefine default
# service libvirt-bin restart
# ifconfig

4. Setup bridge for VM network

In a server-type environment you want your VMs to be accessible to anyone on the network; to achieve that you need to bridge the network from the host. Modify the /etc/network/interfaces file.


DHCP

auto eth1
iface eth1 inet manual

auto virbr1
iface virbr1 inet dhcp
   bridge_ports eth1
   bridge_fd 9
   bridge_hello 2
   bridge_maxage 12
   bridge_stp off

Static IP (replace the bracketed values with your own)

auto eth1
iface eth1 inet manual

auto virbr1
iface virbr1 inet static
   address [static ip]
   netmask [netmask]
   gateway [gateway]
   bridge_ports eth1
   bridge_fd 9
   bridge_hello 2
   bridge_maxage 12
   bridge_stp off

5. Install Software Required by OpenNebula

sudo apt-get install ruby ssh

6. Create Directory Structure

$ tree /srv
`-- cloud
  |-- one

7. Create OpenNebula User and Group (with the same G/UID)

sudo groupadd --gid 1001 cloud
sudo useradd --uid 1001 -g cloud -G libvirtd -d /srv/cloud/one oneadmin
sudo passwd oneadmin    # set the password for oneadmin
sudo chown -R oneadmin:cloud /srv/cloud

* Make sure the gid and uid match the ones on the controller

8. Setup password-less SSH access from controller to node for oneadmin user

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/config
Host *
StrictHostKeyChecking no

* If you are using the SSH transport driver with OpenNebula, you need to make sure there is bi-directional password-less SSH access between the controller and the nodes.
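The key distribution in step 8 can be rehearsed locally; this sketch uses a scratch directory standing in for ~/.ssh on both machines (paths and key type are illustrative):

```shell
# Scratch directory standing in for ~/.ssh on the controller and the node
dir=$(mktemp -d)

# On the controller: generate a passphrase-less key pair (once)
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"

# On the node: append the controller's public key to authorized_keys
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"

# On both sides: stop ssh from prompting about unknown host keys
printf 'Host *\n    StrictHostKeyChecking no\n' > "$dir/config"

grep -c 'ssh-rsa' "$dir/authorized_keys"   # 1 key installed
rm -rf "$dir"
```

In the real setup, of course, the public key is copied across machines in both directions rather than within one directory.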

Finally, you are now done setting up a basic cloud environment. Try starting OpenNebula on the controller:

one start

You should be able to add the host using the onehost command:

onehost create im_kvm vmm_kvm tm_ssh

Hope you have found this guide helpful, and have fun with OpenNebula :) In a future post I will try to cover how to run OpenNebula on the distributed, fault-tolerant file system MooseFS to achieve true elasticity without using an expensive SAN solution.

Monday, June 20, 2011

How To Configure izPack with Installer plugin in Griffon

Recently I used the Installer plugin for Griffon in one of my open source projects. Overall it was very easy to install, and it creates a simple installer for your Griffon application out of the box; however, when it came to customizing the izPack configuration I did not find any good documentation. Thanks to the usual helpfulness of Andres Almiray :) and some digging through source code, I managed to customize the installer to meet my requirements, and I would like to share my findings here.

Once you install the installer plugin through:

griffon install-plugin installer

A set of izPack configuration templates is also installed with the plugin. The templates work pretty well for any Griffon example application but probably do not make much sense for anything else. The easiest way to provide your own customization is to create your own templates and override the default ones by hooking into Griffon's build-time event notification. I will show you how to do that here step by step.

First: Create your installer source directory to store your configuration and resources such as icons. In this tutorial we will create a folder structure like the following:


Second: Create your own izPack configuration. Copy the default configuration to the resources folder you just created. The default configuration files can be found under ~/.griffon/projects/installer/izpack/resources.

Third: Create an event handler that listens for the packaging event and overrides the default configurations with yours. Open the _Events.groovy file under /scripts (create it if it does not exist yet) and add the following lines:

eventPreparePackageEnd = { installers ->
    ant.copy(todir: "${projectWorkDir}/installer/izpack/resources", overwrite: true) {
        fileset(dir: "${basedir}/src/installer/izpack/resources", includes: "**")
    }

    ant.replace(dir: "${projectWorkDir}/installer/izpack/resources") {
        replacefilter(token: "@app.name@", value: griffonAppName)
        replacefilter(token: "@app.version@", value: griffonAppVersion)
    }
}

Now run the izPack packaging command

griffon package izpack

You should see the customized installer, based on your configuration, being generated. I hope you have found this tutorial helpful, and as always you are welcome to leave feedback and comments here.

JNDI Warrior v0.3 Released

Just released JNDI Warrior 0.3 yesterday. In this release the following improvements and new features have been implemented:

  • Better UI look and feel, giving it a more mature look rather than the original demo-like appearance
  • Better ability to manage the classpath per JNDI connection session
  • An izPack-powered cross-platform installer to make it easy to install the application on any platform
  • A new feature to create, edit, and execute Groovy scripts from the application directly, making prototyping and debugging with JNDI even easier.
Two built-in variables are currently available within the script:
  1. context - the InitialContext of the connected JNDI provider
  2. out - a PrintWriter instance which allows you to print directly to the script output area
Here is a screenshot of the 0.3 release:

Friday, May 13, 2011

Griffon Validation Plugin 0.7.1 Released

This is a minor bug-fix release for the Griffon Validation plugin. The main issue fixed in this release was a problem when combining the popup error rendering style with a JDialog. To upgrade your installed plugin use the following command:

griffon install-plugin validation

Thursday, May 05, 2011

Griffon Airbag Plugin 0.1 Release

In one of my recent open source projects I created a mechanism in the Griffon framework to capture any uncaught exception raised in an application, regardless of which threading model it uses. The idea eventually caught some attention on the Griffon dev mailing list and was absorbed into the 0.9.2 core. The actual implementation of this mechanism now in Griffon core is definitely more efficient and reliable than my original meta-method-wrapping attempt :)

The rest of the original implementation, mostly a Swing dialog designed to provide a simple and user-friendly way to report any unexpected exception, has been packaged as a standard Griffon plugin called Airbag. I have just released version 0.1, which is pretty much a direct port of the code from my other project.

To Install

griffon install-plugin airbag

How to use?

Once installed, the Airbag dialog can be initiated easily by using the following event handler in your main controller.

void onUncaughtExceptionThrown(ex) {
    doLater {
        try {
            def errorDialog = new AirBagErrorDialog(view.mainWindow,
                "Uncaught Exception Occurred", ex)  // pass the exception to the dialog
            errorDialog.show()
        } catch (Exception e) {
            // ignore any error raised while showing the dialog itself
        }
    }
}

Screen Shot

Tuesday, May 03, 2011

Griffon Validation Plugin 0.7 Released

Version 0.7 of the Griffon Validation Plugin has just been released to the plugin repository. This is a major release on the Validation plugin road map, including some significant internal refactoring as well as some key new features. In this post I would like to walk through the major improvements.

1. Removed runtime enhancement in favor of AST-based compile-time enhancement (backward-compatibility breaking change!)

From the inception of this plugin, all model instances have been automatically enhanced at runtime to inject the validation capability. Since v0.4 an additional AST transformation annotation, @Validatable, has been available so that non-model classes can be annotated to have the same validation capability injected. However, these two approaches work quite differently: one performs meta-class based enhancement at runtime, while the AST-based approach performs bytecode-level enhancement at compile time. After much discussion we realized, firstly, that the ongoing effort to make sure the two approaches produce identical results was getting more and more complicated as new features were introduced into the enhancement logic; and secondly, that the default runtime enhancement is less flexible and less efficient compared to the AST-based approach. Thus the hard decision was made to drop support for automatic runtime enhancement of model instances. In other words, after upgrading to 0.7 you will need to add @Validatable to every model class that you want the validation feature injected into.

2. Error Renderers

One of the common challenges we face when building a UI with any GUI framework is how to effectively and easily notify the user about errors. Ideally a validation framework should not just help the developer define constraints and validation logic but also handle the presentation of error messages automatically, with little coding involved. With this vision in mind, and thanks to much useful input and many ideas from Andres during our discussions, a mechanism called Error Renderer was created in this release as my first attempt to address this challenge with some Groovy/Griffon awesomeness.

An Error Renderer can be declared easily using the additional synthetic attribute 'errorRenderer' introduced in this release. See the following example:

textField(text: bind(target: model, 'email'), errorRenderer:'for: email, styles: [highlight, popup]')

In the above example, two error renderers are declared on the textField widget for the 'email' field of the model. If any error is detected for the email field, both renderers will be activated to display the error(s). The styles portion of the configuration is optional; if no renderer style is defined, the highlight renderer is used by default. Currently three error renderer styles are implemented; I will go through them quickly here.

I. Highlight Error Renderer

This renderer simply changes the background color of the component to pink. It is mostly used for text-based input fields. Here is a screen shot of the rendering result.

II. Popup Error Renderer

This renderer displays the error message associated with the error in a tooltip-like popup box. Here is a screen shot of the rendering result.

III. OnWithError Renderer

This is an invisible renderer that does not render anything itself but switches the component's visible attribute on when an error is detected. It is commonly used to display an initially invisible custom component when an error occurs. This renderer is used in combination with the new errorIcon widget, also introduced in this release. Here is a screen shot of it used with errorIcon.

3. New Error Related Widgets

A few simple widgets were also added in this release to reduce the effort required when working with errors.

I. Error Messages Widget

This is basically old wine in a new bottle. This widget is essentially identical to the ErrorMessagePanel class that has existed since v0.2; however, it is now implemented as a widget to make it easier to use. Usage:

errorMessages(constraints: NORTH, errors: bind(source: model, 'errors'))

Screen shot:

II. Error Icon Widget

As mentioned before, this widget is mainly used in combination with the onWithError renderer. The icon is initially invisible and will only be turned on by the onWithError renderer. Usage:

errorIcon(errorRenderer:'for: creditCard, styles: [onWithError]')

Screen shot:

4. Limitation

Currently the error renderers only work with model instances within an MVC group. Future work is needed to support plain POGOs annotated with @Validatable.

Monday, February 28, 2011

How to support custom artifact type in Griffon 2nd revision

Last June I wrote a blog post about how to support custom artifact types in the Griffon framework. Now the Griffon 0.9.2 release is just around the corner, and it includes some major refactoring of the Artifact API; that's why I am updating this guide to match the new API.

Out of the box Griffon supports four different types of artifacts: Model, View, and Controller (MVC), plus Service; these are the major building blocks of any Griffon application. Just as in its cousin Grails, plugins and addons in Griffon can also introduce new artifact types; however, the process is fairly different from the one Grails employs. As part of the Griffon Validation Plugin I implemented support for a new custom artifact type - Constraint - and I would like to share some of what I learned, so it will be a little easier if you are planning to do something similar.

Step 1 - Handle your artifact

To support a new artifact type you first have to tell the Griffon core about it. You achieve this by implementing your own GriffonClass and ArtifactHandler; for most common cases, extending DefaultGriffonClass and ArtifactHandlerAdapter should be enough. Here is what it looks like for the Constraint artifact type:

public interface GriffonConstraintClass extends GriffonClass {
    /** "constraint" */
    String TYPE = "constraint";
    /** "Constraint" */
    String TRAILING = "Constraint";
}

public class ConstraintArtifactHandler extends ArtifactHandlerAdapter {
    public ConstraintArtifactHandler(GriffonApplication app) {
        super(app, GriffonConstraintClass.TYPE, GriffonConstraintClass.TRAILING);
    }

    protected GriffonClass newGriffonClassInstance(Class clazz) {
        return new DefaultGriffonConstraintClass(getApp(), clazz);
    }
}

public class DefaultGriffonConstraintClass extends DefaultGriffonClass implements GriffonConstraintClass {
    public DefaultGriffonConstraintClass(GriffonApplication app, Class clazz) {
        super(app, clazz, GriffonConstraintClass.TYPE, GriffonConstraintClass.TRAILING);
    }
}
One note of caution on the artifact class and its handler: I initially ran into some problems implementing them in Groovy and had to switch the implementation to Java instead. Thanks to Andres for the investigation and input on this discovery.

Step 2 - Register your artifact

As shown above, if you have experience with Grails artefact support you will notice the handler implementation is almost identical in Griffon; however, things start to differ from this point forward. Now that you have the handler implemented, the next step is to register it with the Griffon core. This is best done during the initialization phase of your addon. Open the [PluginName]GriffonAddon.groovy file and add the following callback if it's not already there:

def addonInit = { app ->
    app.artifactManager.registerArtifactHandler(new ConstraintArtifactHandler(app))
}

Step 3 - Find your artifacts

Now that we have the new artifact type registered, the next step is to tell Griffon where to find the artifacts. Ever wonder how Griffon knows to look under griffon-app/models for model classes? That is what we set up in this step, and it is also where Griffon's custom artifact support truly differs from Grails: instead of handling it at run time, Griffon chooses to handle it at build time, so you need to tap into the Griffon event model. Open the _Events.groovy script and implement the following event listener:

eventCollectArtifacts = { artifactsInfo ->
    if(!artifactsInfo.find{ it.type == 'constraint' }) {
        artifactsInfo << [type: 'constraint', path: 'constraints', suffix: 'Constraint']
    }
}

This event listener tells Griffon to look for anything under the griffon-app/constraints folder with the suffix 'Constraint' and register it as a constraint artifact.

Step 4 - Measure your artifacts

Now that we pretty much have the basic nuts and bolts in place, it's time to make our newly added artifact type as well integrated with Griffon as any other first-class artifact type. One of the nice features of Griffon is the stats command: it gives you an instant overview of how big your app is in terms of the number of files and lines of code per type, including artifacts. Wouldn't it be nice to have it also display metrics for our own custom artifacts? Fortunately that's actually pretty easy to achieve in Griffon; similar to the previous step, we will add another listener to the _Events script.

eventStatsStart = { pathToInfo ->
    if(!pathToInfo.find{ it.path == 'constraints' }) {
        pathToInfo << [name: 'Constraints', path: 'constraints', filetype: ['.groovy', '.java']]
    }
}

The Griffon event model is a very powerful concept; usually when I am not sure how to do something funky in Griffon, this is the first place I look.

Step 5 - Automate your artifacts

One of the big selling points of the next generation of Rails-like RIA frameworks is the ability to create any artifact simply by using a built-in command, for example grails create-controller or griffon create-mvc. To make our new artifact type a true first-class citizen of Griffon, of course we need all the bells and whistles. To add a new command to Griffon, you need to create a new Groovy script under the scripts folder (named after the command, e.g. CreateConstraint.groovy):


includeTargets << griffonScript("_GriffonInit")
includeTargets << griffonScript("_GriffonCreateArtifacts")

target('default': "Creates a new constraint") {
    depends(checkVersion, parseArguments)

    promptForName(type: "Constraint")

    def name = argsMap["params"][0]
    createArtifact(name: name, suffix: "Constraint", type: "Constraint", path: "griffon-app/constraints")
    createUnitTest(name: name, suffix: "Constraint")
}

Like other convention-over-configuration frameworks, Griffon relies heavily on simple naming conventions, so in the script make sure you name everything consistently to avoid unnecessary complexity. This script will create an artifact of type Constraint and a related unit test case; as you can see, it would be a simple matter to create an integration test case as well if need be.

Now with the command in place, you can finally provide the template for the new artifact. Again, a naming convention is used to determine where to find the template file; for our example the template file should be placed under src/templates/artifacts and named Constraint.groovy:

@artifact.package@class @artifact.name@ {
    def validate(propertyValue, bean, parameter) {
        // insert your custom constraint logic
    }
}

Phew, now we are finally done; I hope I did not miss anything :) This is a long post and, as you can see, a lot of plumbing; this is exactly why there are currently discussions on the Griffon dev mailing list about providing declaration-based custom artifact support, either Grails style or via a Groovy AST transformation, so stay tuned for future updates on this topic.

Monday, February 21, 2011

Griffon Validation Plugin 0.6 Released

Griffon Validation Plugin v0.6 was released today. In this release the validation plugin's artifact handling has been rewritten to be compatible with the upcoming Griffon 0.9.2 release. Also, the validation plugin now simulates an inheritance effect for the constraints you define. Since constraints are defined using static fields following the Grails convention, no real inheritance can be implemented; however, as of the 0.6 release your class will basically copy the parent class's constraints, and additionally you can override the parent's constraints. See the following example:

class ServerParameter {
    @Bindable String serverName
    @Bindable int port
    @Bindable String displayName

    def beforeValidation = {
        setDisplayName "${serverName}:${port}"
    }

    static constraints = {
        serverName(nullable: false, blank: false)
        port(range: 0..65535)
        displayName(nullable: false, blank: false)
    }
}

class ProtocolSpecificServerParameter extends ServerParameter {
    @Bindable String protocol

    def beforeValidation = {
        setDisplayName "${protocol}://${serverName}:${port}"
    }

    static constraints = {
        protocol(blank: false, nullable: false)
    }
}
In the above example, ProtocolSpecificServerParameter will not only inherit ServerParameter's serverName and port fields but also their associated constraints. The only restriction you need to be aware of is that if the parent constraint generates an error for a certain condition, then the overriding child constraint has to generate an error as well. In other words, the validation plugin does not allow error hiding via constraint override in the child class, similar to the rules for exception declarations when overriding methods in Java.

Sunday, February 20, 2011

JNDI Warrior v0.2 Released

JNDI Warrior v0.2 was released today. In this version you can now save your JNDI connection settings, including their classpaths, for future reuse. A simple sorting capability was also added to the JNDI tree browser to help you navigate the big JNDI trees commonly seen in large enterprises. You can download the latest binary from here.

You are always welcome to use our Trac to submit any bug report or feature request.

Enhanced connection dialog:

Thursday, January 27, 2011

Sample pom.xml For Groovy Project with Spock

I had to set up a new Groovy project using Maven today, with Spock as its testing framework. I ran into a few glitches and thought to share them here, with a sample pom.xml, as a day-log.


General Purpose Groovy Project
Few things I ran into:

  1. Maven 3 currently does not work with the Spock plugin
  2. Spock plugin version 0.5-groovy-1.7 does not seem to pick up unit spec tests correctly, so I had to fall back to 0.4 to make it work
  3. In order to mix both Java and Groovy code you need to enable the stub generation execution phase in the GMaven plugin
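For point 3, the GMaven stub-generation setup looks roughly like this in the <build> section (a sketch from memory; check the GMaven documentation for the exact coordinates and current version):

```xml
<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>gmaven-plugin</artifactId>
  <version>1.3</version>
  <executions>
    <execution>
      <goals>
        <!-- generateStubs/generateTestStubs create Java stubs so javac can
             compile Java sources that reference Groovy classes -->
        <goal>generateStubs</goal>
        <goal>compile</goal>
        <goal>generateTestStubs</goal>
        <goal>testCompile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```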

Tuesday, January 04, 2011

JNDI Warrior v0.1 Released

It started as a simple demonstration application to showcase the Griffon framework's capability to deliver rich Swing-based client applications with unprecedented speed. However, the demo itself turned out to be quite a handy little tool, especially for Java EE developers; I personally used it a couple of times to debug some odd JNDI lookup and classpath issues in the last couple of weeks. So I have decided to officially host it as an open source project - JNDI Warrior.

It's a simple Swing-based client utility application which allows you to connect to a given JNDI provider, typically found with any Java EE application server. It has three JNDI templates built in (WebLogic, JBoss AS, and GlassFish) to help you fill in some of the blanks, but in theory it should work with any provider, like any Java-based JNDI client, by manually entering your JNDI settings. You can download this little application from here.

You are always welcome to use project Trac to submit any bug report or feature request.

Happy new year! and enjoy :)

Here are a few screen shots: