Saturday, December 16, 2006

Must-read book list for any serious Java developer

Last week a friend of mine, who was planning to shift his career focus to the Java platform, asked me to recommend some must-read books to advance his understanding and coding ability in Java, and I came up with this list. Not having read all of these books by no means disqualifies you as a serious Java developer, especially if you built your fundamental coding knowledge on a platform other than Java before exploring the Java world, but in my opinion this is a list definitely worth having on your bookshelf.

  • Thinking in Java – An entry-level book that works both as a reference and as a way to establish a sound, solid understanding of the Java language and its core API

Once you finish all these books, I believe you will obtain both a solid understanding of the platform and, most importantly, a set of good habits for programming on the Java platform, or for that matter any other object-oriented platform, because I believe what Kent Beck said: a great programmer is nothing but a good programmer with very good habits.

Tuesday, December 05, 2006

Comments

DO NOT write comments to explain what the code does; use short methods and self-descriptive method names to explain it.

DO NOT write comments to explain when the code should be used; use assertions to not only describe it but enforce it too.

DO NOT write comments to explain how the code should be used; use design patterns to communicate it.

Write comments to explain why the code is written the way it is.
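To make this concrete, here is a minimal Java sketch; the domain (an Order class with a bulk discount) is invented purely for illustration. A well-named method says what, an assertion enforces when, and the only comment left explains why:

// Hypothetical example; Order and its discount rule are made up for illustration.
public class Order {

    private double total;

    public Order(double total) {
        this.total = total;
    }

    // The method and predicate names say WHAT; the assertion enforces WHEN.
    public void applyBulkDiscount() {
        assert total > 0 : "discount applies only to a priced order";
        if (qualifiesForBulkDiscount()) {
            // WHY: the 10% rate comes from the volume-pricing agreement,
            // not from any cost calculation.
            total = total * 0.9;
        }
    }

    private boolean qualifiesForBulkDiscount() {
        return total > 100;
    }

    public double getTotal() {
        return total;
    }
}

(Remember that assertions only fire when the JVM runs with -ea.)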

Monday, November 20, 2006

Thinking from Memory Defragmentation to Architecture Reengineering

Recently, I don't know if it's just my luck or the whole industry is planning to upgrade its architecture, but almost every friend I talked to, and even every interview I had, asked me the same question: “How do you drastically improve your architecture without trashing the existing one?” My recent research and study in a seemingly irrelevant field, memory management, provided me some unexpected insight into this topic.

Despite the apparent disparity between these two topics, there are some striking similarities under the skin. Let's take a look at a classic problem in the memory management domain: memory fragmentation. In short, memory fragmentation is a memory state, usually caused by some long-running process, in which blocks of allocated memory are scattered between blocks of free memory instead of forming one continuous extent. Memory fragmentation degrades the performance of memory allocation and can even prevent a program from allocating memory at all if the free memory is too fragmented. Memory defragmentation is the reverse process we use to fix the fragmentation problem, and the standard approach is pretty drastic: shut everything down (or at least freeze everything), free all memory, and restart (or resume) the whole program.

Now how about the architecture side? Although you usually will not have a fragmentation problem in your architecture (notice I said usually), without careful design, relentless refactoring, and a proper migration path, software architecture tends to get outdated pretty quickly (what I call architecture deterioration); if you have ever integrated with some of the legacy systems out there, you will know what I mean. That is why we have architecture reengineering: to reverse the architecture deterioration and incorporate better technology and better approaches before the architecture becomes another “legacy system”. The standard approach to architecture reengineering can be pretty drastic too: stop releasing new features or updates for the product or solution built on the architecture for 6-12 months, redesign and reimplement the architecture, make sure it fulfills all the requirements its predecessor did, then resume the feature and release cycle.

Do you see the resemblance now? Both of these standard approaches are probably not good enough for most of the situations we as software engineers commonly encounter. The standard memory defragmentation approach results in at least a long, noticeable freeze, which leads not only to performance loss but to unacceptable consequences in a mission-critical hard real-time system. On the other hand, the standard architecture reengineering approach won't even work in most cases, simply because freezing the feature release cycle for 6-12 months is committing suicide in the business world. So what is the better way to attack this kind of problem? Simple: instead of trying to fix the whole thing in one shot, take an incremental approach and fix the system in bits and pieces. First, let's take a look at how people perform memory defragmentation in an incremental fashion. In a normal C-based system, this can only be done with the classic programmer trick:

Any problem in computer science can be solved with another layer of indirection. -- David Wheeler

If programs are not allowed to hold direct pointers to memory addresses, but only pointers to pointers (handles) that point to the memory, the defragmentation process can move a block of memory and update only the single pointer in the system that references it, hence incrementally defragmenting the memory space.
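To see the trick in code, here is a toy Java sketch of a handle table; it is not a real allocator, and all the names are mine, but it shows how clients that hold only handles survive a compaction that moves every block:

/**
 * Toy illustration of pointer-to-pointer indirection, not a real allocator:
 * clients hold handles, never cell indices, so compact() can move blocks
 * around while only the handle table needs rewriting.
 */
public class HandleTable {

    private final Object[] heap = new Object[16]; // simulated memory cells
    private final int[] address = new int[16];    // handle -> cell index
    private int handleCount = 0;

    /** First-fit allocation; the caller keeps only the returned handle. */
    public int allocate(Object block) {
        for (int cell = 0; cell < heap.length; cell++) {
            if (heap[cell] == null) {
                heap[cell] = block;
                address[handleCount] = cell;
                return handleCount++;
            }
        }
        throw new IllegalStateException("out of memory");
    }

    public void free(int handle) {
        heap[address[handle]] = null;
        address[handle] = -1; // the handle is now dead
    }

    /** Every access goes through the extra level of indirection. */
    public Object deref(int handle) {
        return heap[address[handle]];
    }

    /** Slide live blocks toward cell 0; clients' handles stay valid. */
    public void compact() {
        int next = 0;
        for (int cell = 0; cell < heap.length; cell++) {
            if (heap[cell] == null) {
                continue;
            }
            if (cell != next) {
                heap[next] = heap[cell];
                heap[cell] = null;
                for (int h = 0; h < handleCount; h++) {
                    if (address[h] == cell) {
                        address[h] = next; // the single pointer update
                        break;
                    }
                }
            }
            next++;
        }
    }
}

Because compact() only rewrites table entries, it can be run one block at a time between allocations, which is exactly the incremental defragmentation described above.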

Following the same line of thinking, adding another layer of indirection should be able to help us reengineer an existing architecture. Here I want to introduce a design pattern that I often use to perform incremental architecture reengineering and eventually achieve what I call Continuously Evolving Architecture. This design pattern consists of two basic steps:

First – Encapsulate the part of the system you want to reengineer at level 4, the service level (or just wrap the whole system if you are planning an overhaul of the entire architecture). By service here I do not mean that you need to implement a set of Web Service interfaces and use SOAP as the protocol, but rather a more conceptual idea of services.

Second – Implement a classic message bus as a mediator and proxy to expose the services you just created for your existing system through pure message exchange. If you are familiar with the Enterprise Service Bus concept, you can imagine this as a small-scale implementation without all the Web Service and BPEL stuff.

Now you have a layer of indirection that allows you to implement new features that might rely heavily on the existing system without the need to talk to the existing system directly, or even to know of the existence of the existing components. At the same time, it gives you the freedom to change any existing part of the system according to your own timeline and budget without worrying about affecting the rest of the system, and hence a path to reengineer your existing architecture incrementally and safely.
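Here is a minimal, in-process Java sketch of such a message bus; every name in it (MessageBus, Handler, the map-based message) is my own illustration, not a standard API:

import java.util.HashMap;
import java.util.Map;

/** A deliberately tiny mediator: services are known only by name and messages. */
public class MessageBus {

    /** A wrapped subsystem exposes itself to the bus as a pure message handler. */
    public interface Handler {
        Object handle(Map<String, Object> message);
    }

    private final Map<String, Handler> services = new HashMap<String, Handler>();

    /** Step one: wrap an existing subsystem and register it under a service name. */
    public void register(String serviceName, Handler handler) {
        services.put(serviceName, handler);
    }

    /** Step two: callers exchange messages; they never see the implementation. */
    public Object send(String serviceName, Map<String, Object> message) {
        Handler handler = services.get(serviceName);
        if (handler == null) {
            throw new IllegalArgumentException("no such service: " + serviceName);
        }
        return handler.handle(message);
    }
}

Because callers know only a service name and a message shape, you can re-register a rewritten implementation under the same name and every caller is re-routed at once; that is precisely the freedom to change parts of the system on your own timeline.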

Thursday, November 02, 2006

Two common mistakes that people make when adopting Scrum for the first time

I came back from an almost month-long vacation about two days ago, fully refreshed but with somewhat terrible jet lag as well. The worst part is that since I don't currently have a full-time job, there is no motivation whatsoever for me to adjust my biological clock this time, which is why I think my timezone right now is still somewhere around Honolulu.

In this first post of November, I want to talk about two common but sometimes fatal mistakes that people usually make when they attempt to adopt the Scrum methodology for the first time in their organization.

Adopting Scrum as everything and the only thing

A lot of people adopt Scrum without realizing that Scrum is not a complete package. As stated on the Control Chaos website:

Scrum is an agile, lightweight process that can be used to manage and control software and product development using iterative, incremental practices. Wrapping existing engineering practices, including Extreme Programming and RUP, Scrum generates the benefits of agile development with the advantages of a simple implementation.

As you can see, Scrum by itself is not complete: it is only a management process comprising a set of principles, disciplines, and artifacts that can be used to manage an agile and iterative project. It does not contain any engineering practices, which is why it must be co-implemented with other compatible engineering practices, such as XP and RUP, in order to be effective. People who forget this, or misunderstand what Scrum is all about, can turn a project into total chaos instead of controlling it, since Scrum alone does not address the following major engineering concerns:

  • Design – How and when is the architecture drawn, and the major design choices made?

  • Risk Management – How and when are technical risks mitigated?

  • Quality Control – How, when, and by whom is the job done to guarantee that the product meets the customer's requirements as well as the quality expectations?

I can hardly imagine any project that fails to address these concerns delivering a high-quality product on time without blowing the budget.

Chicken-head syndrome

I call it chicken-head syndrome when you see a Scrum project where the “chickens” are making decisions or even creating and dispatching ad-hoc tasks to the “pigs”. This usually happens with an inexperienced Scrum Master who allows chickens to give orders during the daily Scrum meeting instead of being mere observers, since in most organizations at least some chickens are higher up on the organizational chart. A mild symptom of this syndrome is seeing chickens jump in during the daily meeting, or initiating too many discussions or meetings with the developers (the pigs). In a Scrum project, a good Scrum Master should act as an umbrella that shields the team from unnecessary communication from the outside world (personally I prefer zero external communication during an active Sprint), allowing the developers to stay productive and focus on completing their backlog items and on constructive communication with the product owner; that is why an iteration in Scrum is called a Sprint, not socializing or mingling. Projects suffering from this syndrome usually see a drop in productivity, which eventually leads to schedule overrun, budget overrun, or both.

Friday, October 06, 2006

Does an agile process need architecture upfront?

September turned out to be almost the busiest month of 2006 so far. It was the end of my last contract and, as usual, the deadline for some critical milestones of my current project; plus, I just moved to my new apartment last week and still have no internet, so I am writing this post in a Starbucks. Put that all together with a couple of camping and cottage trips (because of the tight schedule I did not get to take a vacation all summer, so this was some kind of compensation before the temperature drops below zero again) and that is why there was not a single post from me in September.

Now that the craziness is slowly cooling down a bit, I think it's time to talk about something that has been repeatedly popping up at the center of many discussions I have had over the last two months or so. This topic has many faces, such as “Do we need an architect?” or “Do we need to design this now?” or even “How come component A looks just like component B, and by the way, what are they for?”, but it all boils down to one fundamental question: “Does an agile process need an architecture upfront?” Given my agile nature, my first instinct was to answer: “Of course not, that's what agile is all about – no big and expensive architecture upfront; rely on best practices such as TDD (Test Driven Design/Development), refactoring, and pair programming to guarantee architectural integrity, and take an evolving approach to eventually arrive at the best architecture.” But before I could even feel proud of my hardcore agileness, many discussions and debates over the last two months led me to re-examine reality. Although there are no official numbers on project success rates after applying an agile approach, many studies suggest the failure rate is probably around 30% or even higher; so why are that many projects still failing? You can argue that “people do not know how to apply the agile approach”, or reach for somewhat degrading excuses like “they don't know what they are doing” – you can see how many excuses of this kind are flying around nowadays. If we take a more constructive approach, however, we quickly realize that there must be something we as the agile community are not doing right, and one of the things that seems to confuse the hell out of most people is the necessity of architecture in an agile approach. The people I meet are either completely lost on this, having just moved from a traditional model to an agile method; or following certain practices without any rationale; or even creating architecture for agile projects but too ashamed to tell any of their friends, almost feeling like traitors to agile thinking. After a couple of days of meditation in the middle of nowhere in Killarney provincial park, I am writing this post hoping to give some answers to all that confusion, pride, guilt, and even betrayal :-)

Before I could just answer “yes” or “no” to this question, I recalled something I learned when I was 10 years old: “Other than black and white, there is also something in this world called gray.” But to make the answer not completely useless, we need to find a precise definition for the gray, something more than “the answer is yes and no”. Now let's take a look at what software architecture is. In the traditional sense, software architecture is defined as:

The software architecture of a system comprises its software components, their external properties, and their relationships with one another. The term also refers to documentation of a system's software architecture. Documenting software architecture facilitates communication between stakeholders, documents early decisions about high-level design, and allows reuse of design components and patterns between projects. (Len Bass, Paul Clements, and Rick Kazman, Software Architecture in Practice, pp. 23-28, as quoted by Wikipedia)

This definition is abstract and almost as useless as other textbook definitions such as the IEEE one, and as Ralph Johnson pointed out loud and clear on the extreme programming mailing list in 2003:

There is no highest level concept of a system. Customers have a different concept than developers. Customers do not care at all about the structure of significant components. So, perhaps an architecture is the highest level concept that developers have of a system in its environment.

According to Johnson in his post, architecture is just a social contract among the developers, whether it is on paper or just in their minds, and most importantly it has almost nothing to do with the other major stakeholders since they don't really care (as a matter of fact, in reality some stakeholders don't even necessarily want the project to succeed). So our question is almost answered: software architecture is whatever your developers agree upon, merely enough consensus among them before they can move on to the construction stage. In other words, even in agile you still need some architecture upfront, but whether that means zero documentation or cutting down a forest is the developers' call; moreover, in agile you should never document architecture for users or any other stakeholders.

Now some of you might wonder: this answer sounds like a good idea and very agile, but it does not seem very helpful or offer any guidance, so how much architecture do I really have to create before I can proceed without too much worry? The short answer is that you should create just enough architecture based on your project size and complexity (this is another reason why my favorite methodology is RUP, because it scales as long as you capture the spirit; see Per Runeson and Peter Greberg's Extreme Programming and Rational Unified Process – Contrasts or Synonyms? for more on this topic). But if, like most of the people I talk to, you still have no concrete idea of just how much is enough, I want to recommend a very interesting research paper I read recently to help you understand what is considered, and proven, to be important in an agile process. Dr. Diane Kelly published “A Study of Design Characteristics in Evolving Software Using Stability as a Criterion” in IEEE Transactions on Software Engineering Vol. 32, No. 5, May 2006. The original idea of the paper has nothing to do with agile methodology, but the project she picked for the research caught my attention as soon as I realized this is a perfect study: it provides some very rare insight into what really matters in software architecture by closely examining a large, complex, and actively maintained scientific software system and its 30 years of active history. The target project used in her study is a system that:

Simulates thermal hydraulic subsystems in Canadian designed nuclear generating station. This software was first released in 1975 and is still actively maintained and in use. Awards were given to its developers from peers in the Canadian nuclear industry. That, its long life, and its use for safety-related, operational, and design issues in the international nuclear community point to the success of this software.

Clearly this is a very successful software system, an ideal role model for almost all the software systems we are trying to build. Yet I can imagine that in 9 out of the 10 companies I have worked for, if they were about to build a system like this, no one in the planning meeting would be brave enough to even mention the word Agile or XP; maybe... just maybe someone would whisper the name of the Agile Unified Process. (If you are interested in how agile performs in large-scale projects, here is another fascinating article, Agile Software Testing in a Large-Scale Project, published in the IEEE Software July/August 2006 issue.) OK, now let's take a look at what they did 30 years ago:

Developers of scientific software are generally domain experts and not software engineers. Applications require deep domain knowledge and have extensive mathematical content.....Software engineering practices are generally not used...However, from the author's observations, there was a common culture of disciplined software practices in all the groups....Often little or no documentation is produced.

Doesn't that sound familiar? And remember, this was done 30 years ago, when systematic software design, architecture, and even development life cycle theory were still in the cradle. Now that we have observed the system's characteristics and know how it was constructed, it's time for the best part of this study: revealing the major design-related factors that contributed to the success of this software system. The author lists the following major contributing factors:

  • The use of common blocks was the establishment of a common vocabulary made up of the variable names associated with the common blocks... Some of the research described in this paper reveals that the common blocks provided a kind of road map into the software code and a stable basis for long-term maintainability... this finding suggests that the developers rely on the common block structure more than the procedural portion of the code (My interpretation: Domain Object Model, Responsibility Model, and Key Building Blocks)

  • Rather than attempt to foresee and design for predicted changes, create a design that allows the software to evolve in response to unforeseeable changes. This may point, for example, to the architectural style typical of scientific software being appropriate and supportive of the changes that must be absorbed over the long term by this type of software. (My interpretation: Simplicity, Just Good Enough Design, Just In Time Implementation, and Refactoring)

  • The software in this study has a high percentage of variables whose semantics and names relate to entities in the application and user domains. (My interpretation: Proper Object-Oriented Design and Programming – even on a non-object-oriented platform)

  • First, initial attention to naming conventions is key. Second, maintaining those conventions both over time and throughout all parts of the software provides a form of stable documentation. (My interpretation: Coding/Naming Conventions and Self-Descriptive Coding Practice)

Hopefully the factors above can serve as a starting point for your own agile architecture, and I hope that after reading this post you are as excited as I am about this ever intriguing but promising software engineering world. As Herb Sutter said in the September 2006 issue of Dr. Dobb's, “It is a wonderful time to be a software engineer”!

Thursday, August 31, 2006

What can we learn from the Open Source community?

The most common complaints I heard from developers during my consulting career are along the lines of “the job is boring”, “the job is too stressful”, or “the manager does not know what he/she is doing”. If I hear these a lot from a team, the team is probably demoralized, which in most cases inevitably leads to project failure. On the other side, the common complaints coming from managers usually include “the developers are not self-motivated/managed”, “the developers are too picky”, or “the developers are too greedy”. As you can see, both sides are unhappy with each other, and in most cases I have observed, the trust between managers and developers eventually breaks down completely, resulting in further demoralization, which ultimately contributes to the failure of the project or can even bring down a company.

Now let's take a look at some characteristics of a successful open source team. In my opinion, almost every successful and popular open source project has the following characteristics: first, the team delivers high quality software that largely satisfies end-users' requirements; second, it consists of a group of highly skilled and self-motivated developers; and finally (this is the sweetest part for a corporation) they don't get paid to do any of that, but rather do it for FREE. Obviously this kind of developer is the perfect kind of employee, a dream come true for many corporations out there, so my next question is: how can a company acquire this kind of employee? If I were running a recruiting company, I would probably tell you that I can find someone like that for you, but since I don't, I am going to tell you the truth: it all starts with building the company culture. In my opinion, two major contributing factors form the solid foundation of any successful open source team and their formidable power of devotion.

The Learning Edge

Recent research in psychology shows that when a person is in a constant and comfortable learning zone without too much pressure, he or she has a much higher chance of getting into a state called “flow”. Flow, as described in Flow: The Psychology of Optimal Experience by Mihaly Csikszentmihalyi, is a state of altered consciousness in which our ability to concentrate and perform is enormously enhanced. The research also suggests that within this state a great level of self-satisfaction and appreciation is generated; in other words, in this state a person can be satisfied without much stimulation from the external environment. Open source projects perfectly reflect this theory: most team members are not paid to work, yet they devote a large chunk of their time and energy. Why? Because they can achieve satisfaction this way without any reward in material form. We know from Maslow's hierarchy of human needs that once the basic needs are met, further investment, such as increasing salary or benefits, runs into diminishing marginal efficiency, so it cannot be used effectively as a motivational tool forever. Sooner or later the company has to resort to other means of motivating people, and the Learning Edge is the cheapest and demonstrably most effective way of doing that. For more information on the Learning Edge management philosophy, check out “The Learning Edge” article published in the June 2006 issue of Communications of the ACM.

Direct Feedback from the End Users

This is something else that is usually missing in many corporations but plays a very important role in the open source community, and it also contributes to the high level of self-satisfaction and motivation demonstrated by some of the most successful open source teams in the field. A good manager knows to openly express his or her appreciation to the team to increase morale and team spirit, but just like salary, this kind of appreciation from a single manager, or even a couple of them, also runs into diminishing marginal efficiency and eventually becomes ineffective, or too expensive to be effective. However, if you open a direct communication channel between the developers and the end users, you actually open the door to an infinite appreciation program (if the software quality is good and serves the end users' requirements) as well as an equally infinite monitoring and inspection program (if the software suffers from quality issues or fails to meet the end users' requirements). Special care needs to be taken to remove the distraction and other negative side effects this direct channel can create, but overall I think the benefits greatly outweigh the shortcomings.

To summarize: to be competitive as a technology company in this post-dot-com era, you cannot rely only on the traditional management toolkits and philosophies; you also need to learn from the open source community in order to retain talent at a reasonable salary level and, most importantly, to survive and thrive.

Monday, August 21, 2006

Deadlock problem in MySQL JConnector 3.1

Recently I found a very interesting but also annoying database deadlock problem in my current project. For performance reasons we use different storage engines in MySQL for different types of tables: the InnoDB engine for OLTP-type tables, the MyISAM engine where high data throughput and a certain level of data inconsistency can coexist, and the MEMORY engine for transient data or tables with extreme performance requirements.

Just a couple of weeks ago, during system stability testing, we discovered occasional deadlocks in the MySQL JConnector 3.1 driver when a large number of InnoDB and MyISAM tables coexist in the same database. According to the MySQL bug base, this problem is scheduled to be fixed in JConnector 5. Unfortunately, for many different reasons both technical and political, we do not have the luxury of upgrading to JConnector 5 in this project, at least for now, but we also desperately need this mixed-engine deployment feature from MySQL to meet our performance requirements. It became obvious that I had to fix this problem for the project to be released successfully. After downloading the JConnector source code, and using a combination of simple JVM thread dumps and JProfiler, I finally pinpointed the problem in two classes: com.mysql.jdbc.Connection and com.mysql.jdbc.ServerPreparedStatement. In the original Connection.prepareStatement implementation, where the PreparedStatement gets created, a simple synchronized keyword at the method level serves as the semaphore. Within this method, Connection makes many calls to ServerPreparedStatement to open, close, and check the state of the statement. Based on my investigation, the realClose method on ServerPreparedStatement is the one that causes all the problems: in this method, ServerPreparedStatement makes callbacks to the Connection class, creating a classic cross-reference scenario (the breeding ground of the notorious deadlock problem). I then examined how ServerPreparedStatement uses its semaphores, and it became obvious that the original locking mechanism in this part of the code is rather naive and not well thought through. ServerPreparedStatement.realClose acquires three different semaphores, in the order statement.this, connection.this, connection.mutex. Connection.prepareStatement at first appears to use only connection.this as its semaphore, but with a little digging I realized that before calling prepareStatement the current thread has already acquired connection.mutex. In other words, Connection.prepareStatement locks connection.mutex and then connection.this, while ServerPreparedStatement.realClose locks connection.this and then connection.mutex. Now everything is clear: it is a classic locking order problem, and the solution is simply to make the semaphore acquisition order the same in both Connection.prepareStatement and ServerPreparedStatement.realClose by adding an explicit locking order:

synchronized (getMutex()) {    // acquire connection.mutex first
    synchronized (this) {      // then the Connection monitor

to the Connection.prepareStatement implementation, and switching the locking sequence:

synchronized (this.connection.getMutex()) {    // acquire connection.mutex first
    synchronized (this.connection) {           // then the Connection monitor

in the ServerPreparedStatement.realClose implementation.
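For readers who want to see the rule in isolation, here is a self-contained Java sketch of the lock-ordering principle behind the fix; it is a generic illustration, not the JConnector code itself:

/**
 * Generic illustration of the lock-ordering rule: deadlock arises when two
 * threads take the same two monitors in opposite orders, and disappears
 * when every code path agrees on one acquisition order.
 */
public class LockOrderingExample {

    private final Object mutex = new Object();

    // Before the fix, one path locked this-then-mutex while another locked
    // mutex-then-this; each thread could hold one monitor while waiting
    // forever for the other.

    public void operationA() {
        synchronized (mutex) {       // always mutex first...
            synchronized (this) {    // ...then the instance monitor
                // critical section
            }
        }
    }

    public void operationB() {
        synchronized (mutex) {       // same order on every path,
            synchronized (this) {    // so no circular wait can form
                // critical section
            }
        }
    }
}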

Although this fix did resolve all the problems we had in our project, according to the official MySQL bug base this deadlock problem may have a much broader set of causes and impacts, which is why it can only be fully addressed in version 5. So before you apply this fix, make sure you fully understand the implications and the causes of your particular problem, and I strongly recommend running a thorough stability test and profiling session afterward. Nevertheless, I hope this post sheds some light on the problem as well as its solution, and can provide some help.

Wednesday, August 16, 2006

How to select the right Agile methodology for your project - Part 3

In my last two posts we talked about XP, as well as a little about RUP, and today I will explore the SCRUM methodology. Relatively speaking, SCRUM is still the new kid on the block compared to the other two, which is also why you can easily observe the similarities among them: SCRUM borrows ideas from both XP and RUP. Some even went one step further and announced SCRUM as “a management wrapper for Extreme Programming” (see the Control Chaos website for more details on this theory), with a similar claim for RUP as well (follow this link for the full article). Although I don't totally agree with this management wrapper theory, I do agree that SCRUM learned from the successes and failures of both XP and RUP, and tried to come up with a better process.

Since most people agree that SCRUM is more comparable to XP than to RUP, and since at its core RUP is just a set of guidelines as I mentioned in part one, here I am going to focus on comparing SCRUM with XP and show the implications and hidden differences you need to watch out for when implementing SCRUM in your project. As a methodology, SCRUM is more about management than development; compared to XP, SCRUM does not provide or enforce any coding guidelines or practices, such as the test-first, pair programming, and refactoring that XP requires. In the process management aspect, however, SCRUM is far more rigid than XP: it has hard, fixed requirements on team size, meeting frequency, meeting length, iteration length, and so on. But before we jump into details, I want to point out the first and foremost difference between SCRUM and XP, which is also the reason why I don't agree with ADM's “using SCRUM as a management wrapper for XP” theory. If you remember from part two, the one and only reason XP is considered so extreme is that it allows customers to change their requirements at any time, even in the middle of an iteration. In order to make the whole process more manageable, since manageability is the scariest part of XP, at least for traditional managers, SCRUM took this cornerstone of XP out of its process. Now people can argue that they can use SCRUM as a management wrapper for XP, since SCRUM does not have its own development principles and the XP development style fits it pretty well, but can you still call it XP? I don't think so. It is like people who find white-water rafting too extreme and unmanageable as a sport, so to make it more manageable they take out the “white-water” part and then claim calm-water rafting is a management wrapper for white-water rafting. You can still apply all the precautions and best practices that white-water rafting requires, but can you still call it an extreme sport?

Alright, that's enough about the management wrapper theory; now let's take a look at some core SCRUM rules, as well as the reasons and implications behind them:

  • Fixed Backlog Items (Requirement) for each Sprint (Iteration)

    • Reasons

Although SCRUM does acknowledge the changing nature of requirements, as mentioned before, in order to increase manageability, and given its lack of development principles and guidelines, SCRUM is neither designed for nor capable of handling frequent requirement changes in the middle of a sprint.

    • Implications

As soon as the sprint kicks off, all sprint backlog items become frozen. New backlog items only get added to the product backlog pool, and are picked up at the earliest when the next sprint starts. Because of this relative inflexibility of requirements, a development team implementing SCRUM should always pick the high-priority items first, since they are most likely to be the most stable requirements. Also thanks to this stability, when using SCRUM on a simple project, excessive testing and high unit-test coverage might not be necessary (high test coverage, above 80%, is still recommended if the project is complex or has a long predicted field life; in other words, wherever refactoring is required for an evolving architecture).

  • Fixed Sprint (Iteration) Length – 30 days

    • Reasons

First of all, it is a widely accepted optimal iteration length for a 7-person team in the RUP community (see The Rational Unified Process Made Easy for more information on recommended iteration lengths). Secondly, each Sprint in SCRUM comes with a relatively formal planning and review process (usually taking a day or two), so a very short Sprint would simply create too much overhead from this ceremony. Last but not least, because SCRUM does not enforce high test coverage, changes to implementation and architecture are relatively expensive; the process was therefore designed to shield the team from outside changes for a fixed period of time, allowing them to focus on what they are doing and get into their flow, maximizing productivity while minimizing overhead.

    • Implications

If you are working on a project that needs a very short iteration, perhaps because of a large degree of uncertainty and unpredictability in the requirements, or simply because of a super aggressive deadline, then before you try to customize your iteration length you should really consider XP or AUP, which are tailored for short iterations, or even downsize your team to fit a shorter sprint (I usually don't recommend shortening your sprint to less than two weeks with SCRUM). If you insist on changing the iteration length, remember the following consequences and some recommended solutions:

  • Consequence: Overhead from the formal planning/review ceremony
    Recommended solution: Batch the planning and review activities for multiple sprints into one session

  • Consequence: Overhead from more dynamic requirements and implementation
    Recommended solution: Increase test coverage, and have the SCRUM Master do a better job of shielding the team from outside 'noise'

  • Max Team Size - 7 people

    • Reasons

Firstly, because SCRUM does not require pair programming (here is an excellent article I read a couple of years ago that explains why pairing is a good idea), knowledge does not spread as naturally as it does in XP; to overcome this, SCRUM uses a close-up open working space as well as a mandatory daily meeting to help speed up knowledge transfer. When you increase the team size beyond 7, this approach becomes insufficient to serve its purpose. Secondly, as in XP, due to the decentralized team structure that SCRUM adopts, a large team creates too many communication paths, and research shows that once team size exceeds 12, the communication overhead starts growing exponentially.

    • Implications

SCRUM does not have any problem handling teams smaller than 7, but you need to be very careful when considering increasing your team size beyond that. Here are some possible consequences of doing so, and my recommended resolutions:

  • Consequence: Inefficiency caused by a large number of communication paths
    Recommended solution: Assign a team lead who is not the SCRUM Master, since the SM has to deal with blocking issues and shield the team from the rest of the world. The team lead needs to be equally involved in, and focused on, development to reduce the communication paths effectively.

  • Consequence: Knowledge becomes isolated to its implementer
    Recommended solution: Promote pair design and pair programming, and conduct code reviews if necessary. In addition, high test coverage and simple design will help prevent accidental breakage of the existing implementation when the modifier lacks the appropriate knowledge.

In summary, SCRUM is much more manageable, at least in the traditional sense, than XP. Add the fact that in SCRUM all the traditional managerial positions and pay checks can easily be kept on as chickens (observers/contributors), avoiding the pain of converting people from architect to developer, and you can see why it remains by far the favorite choice for any project wishing to transition from the traditional waterfall model to a relatively more agile approach.


Huh... this turned out to be another lengthy post; I hope you find the reading not too bumpy and the information useful.

Sunday, August 06, 2006

Thoughts after reading “Componentization: The Visitor Example” by Bertrand Meyer and Karine Arnout

This article was published on page 23 of the July 2006 issue of IEEE Computer magazine. My impression after reading it is that the authors went on and on about how componentization is so much better than design patterns, and that, wherever possible, we should somehow componentize all design patterns at all levels. They argued that patterns “only provides the description of a solution but not the solution itself” and that “Patterns such as Visitor or Bridge is a good idea carried halfway through. If it is that good, we argue, why should we ever use it again just as a design guideline? Someone should have turned it into an off-the-shelf component, a process we call componentization”.

Despite the authors' enthusiasm for their theory and product, the conclusion and the reasoning behind it are rather naive. First of all, they used a very generalized, high-level definition of design pattern to start their argument, explaining that “A design pattern is an architectural solution to a frequently encountered software design situation”. At first glance this definition makes a design pattern look similar to a component, which Wikipedia defines as “a system element offering a predefined service and able to communicate with other components”. But if you take a closer look, you will realize the key difference: although both aim at reusability, a design pattern is created to address design and communication issues, rather than being an off-the-shelf solution that addresses issues in a problem domain, which is what a component is designed to do. Now you can see that their first argument, that a design pattern only describes a solution rather than being the solution itself, is not a weakness of design patterns but, on the contrary, their very nature. On this weakly established basis, the authors use examples to try to convince the reader how much better components could be, which inevitably leads them to another mistake: they pick classic design patterns such as Visitor and Bridge as their examples. If before they were comparing oranges with apples (at least those are similar in size and both are fruit), now they are comparing a building with bricks, because they are no longer comparing componentization with some conceptual definition of design patterns but with concrete design patterns in the object-oriented realm. If you are familiar with the evolution of software design, you will know that there are different levels of encapsulation/abstraction in this domain:

  • Level 0 – No encapsulation
  • Level 1 – Macro or function level encapsulation
  • Level 2 – Class level encapsulation (usually referred to Object Oriented design philosophy)
  • Level 3 – Component level encapsulation
  • Level 4 – Service level encapsulation

As you can see, design patterns and components live on different planes of encapsulation/abstraction, so comparing them does not make any sense; it is like comparing a service with a component. By the same argument, a component is just the description of a part of a solution (from the service point of view), not the solution itself, and therefore not good enough; so why bother creating components at all when we should just focus on making ready-to-plug-and-play services, a process we might call “servicization”.
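To make the level mismatch concrete, here is a minimal Visitor in Java; the Shape hierarchy is invented for illustration. Notice that the accept methods and the visitor interface must be rewritten for every new class hierarchy, which is exactly why Visitor is a class-level design guideline rather than a linkable off-the-shelf component:

/** A hypothetical element hierarchy; Visitor must be re-cut for each new one. */
interface Shape {
    <R> R accept(ShapeVisitor<R> visitor);
}

interface ShapeVisitor<R> {
    R visitCircle(Circle circle);
    R visitSquare(Square square);
}

class Circle implements Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    public <R> R accept(ShapeVisitor<R> visitor) { return visitor.visitCircle(this); }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public <R> R accept(ShapeVisitor<R> visitor) { return visitor.visitSquare(this); }
}

/** A new operation added without touching the Shape classes. */
class AreaVisitor implements ShapeVisitor<Double> {
    public Double visitCircle(Circle circle) { return Math.PI * circle.radius * circle.radius; }
    public Double visitSquare(Square square) { return square.side * square.side; }
}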


To summarize: I am not trying to say that design patterns are perfect and we should use them at all costs. On the contrary, I agree with some of the harshest criticism out there against design patterns (you can have a peek at it here), but componentization is definitely not the solution to these problems.

Saturday, July 29, 2006

How to select the right Agile methodology for your project - Part 2

In my last post, we talked about the popular choices for an agile software development process, and the high-level differences between RUP and the others. In this post we are going to examine the characteristic differences, such as team size, iteration length, and roles, as well as the reasoning and driving forces behind those differences.

XP
Team size: 2-12 people (although there are claims that XP has been successfully implemented with teams of over 60 engineers)
Iteration length: 1-3 weeks
Roles: Customer, Programmer, Tester, and Manager

SCRUM
Team size: 5-10 (some SCRUM practitioners recommend a fixed team size of 7, or at least fewer than 7)
Iteration length: Fixed 30 days
Roles: Product Owner, Scrum Master, Programmer

RUP
Team size: 3-10 (you can actually structure a RUP team with as many members as you want, but the more you increase your team size, the higher the required ceremony level, which effectively diminishes the agile nature)
Iteration length: 2-8 weeks
Roles: Customer, Business Analyst*, Architect*, Programmer, Tester, Manager

Note: Roles with * are optional

Now you are probably going to ask: OK, we have these numbers, that's great, but they don't seem to help much since they all look the same. You are right: in order to pick the right process, you not only need to know these superficial numbers, but also understand where they come from and what implications you have to deal with when you decide to change them.

XP
The most controversial and successful element of XP, which pretty much dictates almost every aspect of XP and differentiates it from other methodologies, is the way Software Change Requests get managed, or maybe I should say unmanaged. XP allows the customer to add new requirements (user stories) in the middle of an iteration, because the people who created XP believe that user requirements are an ever-changing reflection of an ever-changing world, and hence a successful software development process should be nothing but adaptive to this kind of change. Wait a minute: how does XP survive this kind of harsh environment, once believed by the traditionalists to be almost impossible to work in? It is simple; just stick to the following rules:

  • Spread the knowledge

    • Reasons - When you have a team that needs to cope with constant change, and literally rework of existing features every day at any given time, it becomes absolutely crucial to maintain the expertise within the team; you simply can't afford to lose a couple of weeks just because someone on your team is on vacation or sick. When a customer change request (a new user story) comes in, someone on site needs to sit down with the customer and come up with a simple but effective design to achieve the requirement in merely a couple of hours, if not less. To preserve the knowledge, you need to spread it, so there is no single point of failure in your team. Several best practices were introduced over the years to encourage this behavior, such as Move People Around, Coding Standards, Pair Programming, and Collective Code Ownership.

    • Implications - This practice has two major implications. The first is team size. XP uses a decentralized team structure, meaning there are no architects or team leads acting as a hub of communication to reduce the communication paths; a large team therefore creates a huge number of communication paths, which effectively reduces productivity, since people get moved around too much and lose their focus and proficiency. That is why XP fits a smaller team well. The second implication is the scope of the project. XP requires spreading knowledge among team members as well as collective code ownership; in other words, a perfect XP team has no component owner, but instead every team member is an owner/expert of every component. Now imagine an overly complicated project: with so many modules and features, you just can't move people around enough to work on all of them, and as a result people either get forced to settle down on fewer components than they should, effectively rolling back to the component owner model, or, even worse, everybody on the team becomes a jack of all trades and an expert of none.

  • Test first

    • Reasons - When you are working with a constant stream of incoming change requests, without a formal change management system or process in place, how are you going to know whether all the user requirements, the original ones and the new ones, are implemented properly, and that no outdated implementation is left over in your code base? We do have user stories that we can walk through manually during the acceptance test stage to verify the completeness of the system, but nothing is better than using code to verify code. High-coverage automated tests serve both as a contract between the requirements and the implementation, ensuring completeness and correctness, and as a safety net that allows us to make changes rapidly without worrying about accidentally breaking other parts of the system. Many best practices address this aspect, such as Testable Requirements, Unit Test First, Unit Test, and Test Guard (see the short sketch after this list).

    • Implications - Some people believe that XP is just one step away from chaos, and I believe this is THE step. In order to implement XP successfully, you simply have to follow all these best practices; actually, I should not call them best practices but mandatory practices, since they are THE only practices. As a consequence, some of the overhead that XP saves by simplifying the process and reducing the level of formality is balanced out by the overhead created by demanding high-coverage tests. A common newbie mistake is reducing the test coverage, or the effort put into automating test execution, thinking it will not have a major impact on the project; your project will be thrown back into the realm of chaos before you know it. (In other posts I will talk about the common mistakes and mis-assumptions people make when implementing an agile process in more detail.)

  • Keep it simple

    • Reasons - In the XP process, people believe nothing is static, so why would you waste your time and energy writing sophisticated code for something that might get thrown away or rewritten any time soon? That's why this model encourages Just In Time Design, so no time-consuming grand up-front design is required, and Just Enough Implementation, so you never write anything extra for future requirements or “what-if” scenarios. This practice also works as a catalyst for the other two: when you keep everything simple, it becomes easier for people to understand, which in turn speeds up the spreading of knowledge; and when you make it simple, it becomes easier to verify and test against. Furthermore, your team should routinely refactor the code to make the complicated parts simple, and the simple ones simpler. Here are some best practices that can help you keep things simple: Simplicity, Never Add Functionality Early, Refactor Mercilessly, and Optimize Last.

    • Implications - No specific implications or side effects. As a matter of fact, this is the one practice I recommend to all projects, no matter what process you are following. (In other posts I will talk about the recent trend and rethinking in the architecture world about evolving adaptive architecture vs. up-front predictable architecture.)
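And here is the short test-first sketch promised above, in 2006-era JUnit 3 style; the ShoppingCart story is invented for illustration. The tests are written straight from the user story first, then the simplest code that makes them pass:

import java.util.ArrayList;
import java.util.List;

import junit.framework.TestCase;

/** Written first, from the user story "a cart totals its item prices". */
public class ShoppingCartTest extends TestCase {

    public void testEmptyCartTotalsToZero() {
        assertEquals(0.0, new ShoppingCart().total(), 0.001);
    }

    public void testTotalIsTheSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 25.0);
        cart.add("pen", 5.0);
        assertEquals(30.0, cart.total(), 0.001);
    }
}

/** Written second: the simplest implementation that makes the tests pass. */
class ShoppingCart {

    private final List<Double> prices = new ArrayList<Double>();

    void add(String name, double price) {
        prices.add(price);
    }

    double total() {
        double sum = 0.0;
        for (double price : prices) {
            sum += price;
        }
        return sum;
    }
}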

Wow, this post is much longer than I originally thought. I hope it gave you some insight into what dictates these numbers for XP, and what you need to consider when you choose XP for your project.


Wednesday, July 26, 2006

How to select the right Agile methodology for your project - Part 1

Recently some of my friends and colleagues have asked me this question:

Though there are various agile methodologies available right now, XP, SCRUM, and RUP, when you start a project, which one do you choose, and what criteria do you apply to assess whether the methodology is the best match for a specific project?

The short answer is to pick the one you feel most comfortable with, but if you really want to compare them and pick the best one for your project, then a solid understanding of the differences between these processes is a must-have. I am writing this post in the hope of shedding some light on this tough question. A quick search turns up introductory articles such as this one from the Agile Alliance, but they do not sound too practical or useful for a real-life project.

As an independent consultant, I have worked with companies of different sizes endorsing different development processes, ranging from waterfall to XP, with iteration lengths ranging from 18 months to a single week. Recently, and fortunately, I finally had real-life, hands-on experience setting up a SCRUM team and working with it to deliver a rather complicated product under a tight deadline, which you can think of as a performance and stress test for the SCRUM process. So far, then, I have worked with all three of the most popular agile methodologies, and I want to share the tips and pearls I discovered and the lessons I learned, and paid dearly for.

Before I start, some people might ask: how come RUP is categorized as an agile methodology here? Isn't it a heavyweight, CMMI-compatible process used only for overweight projects and teams? The answer is: "You are right, but only partially right." RUP stands for Rational Unified Process; if you strip away all the fluff created around this methodology for obvious marketing reasons by IBM, and look deep enough under the hood, RUP is nothing but a set of guidelines and principles - the so-called Spirit of the RUP:

  • Attack major risks early and continuously ... or they will attack you.

  • Ensure that you deliver value to your customer

  • Stay focused on executable software

  • Accommodate change early in the project

  • Baseline an executable architecture early on

  • Build your system with components

  • Work together as one team

  • Make quality a way of life, not an afterthought

As long as you are following these essential principles, you can consider your project to be following the RUP process. So a RUP process can have an iteration as long as a year, or no iteration at all (it is possible to structure a RUP process with a waterfall model, although it is usually not a good idea), or as short as a couple of days. A RUP team can contain as many as 200 engineers or just one person (you, working on your personal website in your basement). As you can see, RUP is really a higher-level methodology compared to XP or SCRUM, since it does not have strict rules or practices that you need to follow; all the artifacts and disciplines are optional, so you can pick and choose as long as you know what you are doing. You can really consider RUP a superset of XP or SCRUM. As a matter of fact, you can effectively follow XP and RUP at the same time; there is no conflict between them at all.

But to avoid confusion, in this post I will follow the widely held belief and treat RUP as a separate methodology with its own flavor.

It's 10 o'clock now, time to go to bed since I have a 7:30 SCRUM meeting every morning... Don't be shocked: this meeting actually has nothing to do with the SCRUM process, and in fact it is a very valuable lesson I learned; I will explain the details in another post :)

Say something about myself

I often get asked to say something about myself, so here we go:

I am a warrior in search of the Holy Grail of software development,
Keyboard is my sword, in hand I conquer,
Unit test is my shield, in danger it protects me,
Refactoring is my steed, in its company I travel far,
Simplicity is my religion, in heart it makes me strong,
User story is my compass, showing me directions when I am lost ;)

My First Post

I was convinced last Saturday by some of my friends, who probably got tired of my constant mumbling, to write down my ideas and thoughts here in my blog, so at least they are no longer forced to listen to them on the phone or in the cafeteria :)