Saturday, December 22, 2007

Why programming?


Why is programming fun? What delights may its practitioner expect as his reward?


First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. I think this delight must be an image of God's delight in making things, a delight shown in the distinctness and newness of each leaf and each snowflake.

Second is the pleasure of making things that are useful to other people. Deep within, we want others to use our work and to find it helpful. In this respect the programming system is not essentially different from the child's first clay pencil holder "for Daddy's office."

Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. The programmed computer has all the fascination of the pinball machine or the jukebox mechanism, carried to the ultimate.

Fourth is the joy of always learning, which springs from the nonrepeating nature of the task. In one way or another the problem is ever new, and its solver learns something: sometimes practical, sometimes theoretical, and sometimes both.

Finally, there is the delight of working in such a tractable medium. The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.

-- Chapter 1: The Tar Pit (from The Mythical Man-Month by Frederick P. Brooks, Jr.)


This is where I found my own reason for sticking around in this industry and doing what I am doing. It's a delightful and inspiring read from a software developer's point of view, but it is also an excellent guide on how to motivate a software development team. If you can satisfy your team members on all the aspects mentioned in this post, your developers may well work for you for free; just look at what's going on in the open source community. Check out the post "What can we learn from the Open Source community?" for more on how to motivate a software team.

Tuesday, November 13, 2007

Oopsla - Making Object-Orientation Work Better (David Lorge Parnas)

It was inspiring to see David Parnas present on this topic, despite my differences of personal opinion.

  • Object orientation is a principle, not an attribute of a language (you can create a non-OO program in an OO language, and vice versa)
  • Outdated documentation is worse than no documentation at all
  • A program without documentation (which can take many forms) is not maintainable
  • Code that is finished without documentation should not be considered complete
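A minimal sketch of the first point: both of the following classes compile in an OO language, but only the second follows the object-oriented principle of information hiding. The counter classes are invented for illustration.

```java
// Same language, different principle: the first class exposes its state and
// lets behaviour scatter across callers; the second hides state behind a
// small interface, which is the actual object-oriented principle at work.
class ProceduralCounter {          // written in an OO language, but not OO
    public int value;              // any caller can read or mutate this
}

class Counter {                    // OO design: state hidden, behaviour owned
    private int value;
    public void increment() { value++; }
    public int value() { return value; }
}

public class OoPrincipleDemo {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.value()); // prints 2
    }
}
```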

Oopsla - Panel "No Silver Bullet" Reloaded

This is the best panel I attended at this year's Oopsla. It was nice to see Frederick Brooks and David Parnas on the panel, and of course the big bad werewolf played by Martin Fowler.

  • Frederick Brooks' distinction between inherent complexity and incidental complexity, drawn 20 years ago, is still one of the main driving forces behind many recent process models and frameworks, which aim to minimize incidental complexity so that we can focus on the inherent complexity
  • Human incompetence is unavoidable
  • There is no silver bullet, but maybe a couple of lead bullets will do the job
  • There is no silver bullet, but we might have created something that, although complex, is powerful enough to kill or seriously injure the werewolf, much as modern chemistry emerged from the ancient alchemists' dream of turning lead into gold
  • Embrace the inherent complexity

Sunday, November 11, 2007

Oopsla - Research: Language Design

This is the only academic research talk I attended at this year's Oopsla, and it turned out to be quite interesting.

  • Software-based transactional memory is roughly 50% as efficient as hardware-based solutions
  • StreamFlex provides a Java-based soft real-time streaming platform (no buffering and no dropping)
  • Annotations might be a good choice for constructing an object meta-model (knowledge-level model)
  • It is always a good idea to make your meta-model generic
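As a rough sketch of the annotation idea (the @Property annotation and the Customer class are invented for illustration): a runtime-retained annotation can carry knowledge-level facts about a class, which generic code can then read reflectively to build a meta-model.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

// Hypothetical meta-model annotation marking a field as a model property
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Property {
    String description() default "";
}

class Customer {
    @Property(description = "primary key") long id;
    @Property(description = "display name") String name;
    String scratchBuffer; // not part of the meta-model: no annotation
}

public class MetaModelDemo {
    // Builds a knowledge-level description of any annotated class
    public static List<String> describe(Class<?> type) {
        List<String> properties = new ArrayList<>();
        for (Field field : type.getDeclaredFields()) {
            Property p = field.getAnnotation(Property.class);
            if (p != null) {
                properties.add(field.getName() + ": " + p.description());
            }
        }
        return properties;
    }

    public static void main(String[] args) {
        System.out.println(describe(Customer.class));
    }
}
```

Because the meta-model reader works against `Class<?>`, the same code describes any annotated class, which is one way to keep the meta-model generic, as the last bullet suggests.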

Oopsla - Actor-Network Theory: Nothing to do with TCP/IP or distributed objects (Brian Marick)

This was one of my favorite talks at this year's Oopsla, and it was so interesting that I was busy enjoying it and did not take many notes :-)

  • Break your mental model (to be successful in this rapidly changing industry you need to constantly challenge your past experience and your way of thinking: your mental model)
  • Conduct a retrospective of your personal career development every 5 years
  • Read books and topics outside of our industry to help you break the mental model
  • To produce quality software, programmers and testers need to work as a unified team
  • A tester's job is not to destroy but to help the programmer beef up the product
  • The agile daily stand-up meeting should not be a shrunken version of the weekly status report; it should be purely user-story oriented, since that is the only measurement of real progress in an agile project

Saturday, November 10, 2007

Oopsla - Creating Passionate Users (Kathy Sierra)

This presentation is, as its name suggests, about creating passionate users, which is not only essential to the creation of any useful software (and especially important in an agile environment) but equally important from a sales and marketing perspective for building consumer loyalty and harnessing the power of word of mouth.

  • Users can't be passionate if they suck (at using your product, or at whatever they do)
  • Humans acquire a high-definition experience once they are good at something; for example, watching a tennis match is far more exciting if you actually play tennis and understand the difficulty and the tactics the players are facing
  • The HD-experience and passion-threshold model also applies to personal career development
  • When designing software, remember it's not about the software (the tool) but about the goals (what the user wants to achieve by using the tool). Make users good at achieving their goals, not just at using the tool.
  • Constantly ask yourself "What can I do to make my user kick ass?"
  • Set up milestones and a development path for your users, so they know how to get better and can measure their progress; learn from the video game model. Again, this approach can also be applied to career development and management
  • As a startup you don't need to outspend your competitors on marketing or sales; just out-teach them, and you gain consumer recognition and the power of word of mouth
  • One step further: with highly satisfied, ass-kicking users you can create a tribal community among them to further cultivate recognition and loyalty, as Apple did with its 1997 "Think Different" campaign

Sunday, November 04, 2007

2007 Oopsla Montreal Notes

As I promised :-) in the next several blog entries I am going to share the session notes I took at the 2007 Oopsla conference in Montreal. My notes are not meant to be detailed summaries of what was presented; rather, they are sparks and ideas that I collected with a very strong personal bias, so please keep this in mind. In the meantime, I highly recommend checking out the content of each presentation before you start reading these notes.


Enjoy.

Sunday, September 09, 2007

Extreme Programming with Ordinary Talent

Recently one of my friends asked me why I haven't published anything on my blog in about two months. The answer, which almost sounds like a broken record, is that the last two months have been pretty hectic, though not in a bad way. The project team I took over several months ago went from zero unit-test coverage to above 70% within two releases. Development iterations were cut from months to two weeks. The team started digging itself out of bug fixes and began delivering business value. Big-bang integration testing turned into daily continuous integration (fully automated with Maven2 and Bamboo). Critical production defects became a rarity, and most importantly, I stopped getting emergency phone calls in the middle of the night. Finally I have the time and energy to write this blog.

In this post I want to talk about some common misunderstandings about Extreme Programming, which is by the way my favorite agile process and the one my team currently follows. One of the most common misunderstandings is that XP can only be successfully adopted by highly skilled and experienced development teams. This kind of thinking is easy to understand: if you look at a successful XP team, you will usually see a high-caliber group of skilled developers, and furthermore XP principles require team members to be highly self-managed and multi-talented, since a typical XP developer assumes multiple roles from a traditional process, such as designer, business analyst, QA, programmer, project manager, and whatever else is required to be effective. However, what you see in a successful XP team is the result of practicing XP rather than a prerequisite. In Extreme Programming Explained: Embrace Change (2nd Edition), Kent Beck defined:

XP is a path of improvement to excellence for people coming together to develop software.

As you can see, XP is a process for developing excellence in both the software product and the people who develop it. Many of the XP principles and practices were designed specifically to allow people with ordinary talent to develop extraordinary software and to grow along the way. Here are some key practices that I think are particularly helpful:

  • Close communication: Helps clear up ambiguities, so you don't need someone exceptionally skilled in communication to serve as a bridge between technical and business people; it also allows everyone on the team to practice their interpersonal skills and develop their communication skills along the way.

  • Automated tests: Help catch defects and shorten the debug cycle, so the team does not need to rely on super-skilled developers; they also allow less skilled and experienced developers to grow in a relatively stress-free and safe environment.

  • Pair programming: Has a similar effect to the point above, but also allows the more skilled developers to lead by example and spread knowledge more effectively among team members, providing almost a shortcut to seniority for the greener folks.

  • Incremental and evolutionary design: Helps produce good design cost-effectively, and eliminates the need for a genius architect or designer to come up with a flawless design up front (which might not even be possible). This practice also provides a perfect opportunity for everyone on the team to learn from their own and others' mistakes and successes instead of being a mindless coding machine.
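To make the automated-tests point above concrete, here is a minimal, framework-free sketch; the discount method and its rules are invented for the example, and in practice this would be a JUnit suite wired into the continuous build.

```java
// A hypothetical production method plus a tiny automated test harness.
// With a net like this, a less experienced developer can change the
// production code and know within seconds whether anything broke.
public class DiscountTest {
    // Production code under test: applies a percentage discount, in cents
    static long applyDiscountCents(long cents, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be between 0 and 100");
        }
        return cents * (100 - percent) / 100;
    }

    static void check(boolean ok, String description) {
        if (!ok) throw new AssertionError("FAILED: " + description);
    }

    public static void main(String[] args) {
        check(applyDiscountCents(10000, 20) == 8000, "20% off $100.00 is $80.00");
        check(applyDiscountCents(5000, 0) == 5000, "0% off leaves the price unchanged");
        check(applyDiscountCents(999, 100) == 0, "100% off is free");
        System.out.println("all tests passed");
    }
}
```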

Of course you can find this spirit in almost every XP principle and practice, since XP is a process more about people than software itself. As Kent put it perfectly in Extreme Programming Explained:

Its (XP's) reliance on the close collaboration of actively engaged individuals with ordinary talent.

Tuesday, July 31, 2007

Quotes of the Day

The cheapest, fastest and most reliable components of a computer system are those that aren't there.

-- Gordon Bell

Everything should be made as simple as possible, but not simpler.

-- Albert Einstein

Sunday, June 10, 2007

Pragmatic Programmer

I just finished reading The Pragmatic Programmer. It had been on my to-read list for about a year and a half, but I never got a chance, or the urge, to read it. My recent job change allowed me to take public transit to work instead of driving, which all of a sudden freed up about two hours of exclusive reading time every day (versus staring at total strangers otherwise), so I started plowing through my to-read list. Even then I was not very thrilled about this book, since just by going through the index I thought I already knew pretty much every topic it covers; the only reason I picked it up was that I needed a lightweight book for my business trip to Vancouver, and little did I know how much I would enjoy that trip because of it.

The Pragmatic Programmer was written by veteran, successful programmers who have been there and done that, and it is written with a great sense of humor that makes it a very enjoyable ride. For beginners, it is the only book I have found so far that covers such a wide range of topics: from algorithms to leadership, from requirements definition to software development process, and from database design to work ethic. It covers almost every topic you need to know as a professional and soon-to-be pragmatic programmer, with vivid examples and real-world stories that vastly expand your experience; I consider it a shortcut to seniority. And if you are a seasoned veteran in the trenches, or already a pragmatic programmer, I think it is still a must-read, since the most valuable asset I gained from this book is not the methods or theories but the stories and metaphors, such as Tracer Bullets, Broken Windows, and the Boiled Frog. These metaphors are particularly powerful when used consistently within your team, or even your company, to describe a complex situation.
For example, you can either spend half an hour describing to your team members and project manager the idea of a fast, retained prototyping strategy with a short and highly iterative development process, or just say two words: Tracer Bullets. I found that these stories and metaphors improve communication drastically when applied properly, similar to the domain language mentioned in Domain-Driven Design; they too can form a ubiquitous language from a more technical and process-oriented point of view. Again, a highly recommended must-read from me; pick up a copy from Amazon if you have not already.

Thursday, April 12, 2007

Talk about software maintenance

The December 2006 issue of IEEE Transactions on Software Engineering published a very interesting study, conducted by researchers at Carnegie Mellon University, that explores how software developers approach typical software maintenance tasks and whether there are observable behaviour patterns that can serve as cues for improving existing theory and toolsets to make developers more efficient at such tasks.

The whole experiment in a nutshell consists of the following building blocks:

  • A group of above-average Java developers as test subjects
  • A simple paint application written in Java, with a few hundred lines of source code
  • A consistent set of pre-prepared, typical maintenance tasks (including defect fixes and minor feature enhancements) for each developer
  • A video recording program running in the background to record how the developers performed their tasks
  • An artificial periodic interruption mechanism to simulate the real-life interruptions that happen in any software organization

Based on this study, the researchers observed the following behaviour pattern during the experiment:

  • Search – Developer explores cues in environment to choose a sufficiently relevant node to start comprehending
  • Relate – Developer explores cues in environment to decide whether to navigate a dependency, return to a previously visited node, or stop relating. If the node is relevant the developer collects it.
  • Collect – Developer uses some form of memory, external or otherwise, to remember what was found

For each task, every developer went through a search-relate-collect cycle until they had collected enough information to attempt a solution, and when they encountered further uncertainty during implementation they repeated the cycle to collect the additional information needed to move forward. Along with the behaviour pattern, the study also suggested improvements to existing toolsets and IDEs to help developers perform the search, relate, and collect actions more efficiently. What really caught my eye is that, since this research was conducted entirely in the context of traditional software development, without any agile elements, it becomes perfect study material for finding out how an agile approach can eliminate, or at least minimize, some of the difficulties observed in the study, as well as what we should watch out for in an agile project to ensure high software maintainability.

Any successful commercial or open-source system goes through a relatively rapid green-field development stage and a much longer, sometimes more challenging, brown-field maintenance phase. Customer satisfaction (for commercial software) and community endorsement (for open-source software) are both highly dependent on how well and how easily the system is maintained. According to this research, improving software maintainability pretty much boils down to helping developers search, relate, and collect information in hundreds of thousands or millions of lines of source code. Let's talk about each aspect in more detail and how an agile approach helps in different ways:

I. Helping Developers Search More Effectively

A. The study identified that method naming played a significant role in helping or preventing developers from searching and understanding code effectively; Continuous Refactoring, Short Methods, and Self-descriptive Method Names can really help you here.

B. The researchers also noticed that having an effective way to communicate the intent and purpose of the code really helps developers pinpoint the optimal starting point for their search and reduces the scope of the search they have to perform. They suggested that well-written, up-to-date documentation is a sound solution for this purpose, but in the agile realm I think a well-written test suite is an even better choice, since tests don't just describe intent and purpose; they are coded and compiled against it, so they can be verified automatically and kept up to date.
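As a small sketch of tests acting as live documentation (the shipping rule and all names are invented for the example): each test method's name states an intent, and because the tests compile and run against the code, they cannot silently drift out of date the way prose documentation can.

```java
// Production code whose intent is documented by the executable tests below.
class ShippingRules {
    static final long FREE_SHIPPING_THRESHOLD_CENTS = 5000;

    static boolean qualifiesForFreeShipping(long orderTotalCents) {
        return orderTotalCents >= FREE_SHIPPING_THRESHOLD_CENTS;
    }
}

public class ShippingRulesTest {
    // Each method name is a sentence of documentation, verified on every build
    static void ordersAtOrAboveFiftyDollarsShipFree() {
        require(ShippingRules.qualifiesForFreeShipping(5000));
        require(ShippingRules.qualifiesForFreeShipping(12345));
    }

    static void ordersBelowFiftyDollarsDoNotShipFree() {
        require(!ShippingRules.qualifiesForFreeShipping(4999));
    }

    static void require(boolean condition) {
        if (!condition) throw new AssertionError("test failed");
    }

    public static void main(String[] args) {
        ordersAtOrAboveFiftyDollarsShipFree();
        ordersBelowFiftyDollarsDoNotShipFree();
        System.out.println("intent verified");
    }
}
```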

II. Helping Developers Relate Information More Effectively

A. Once developers find the clues, they need to comprehend them and identify whether they are relevant to the task at hand, and obviously the more readable the source code is, the more effective the developer will be at relating information. The agile principles Keep It Simple, Stupid; Just Enough Design; and Continuous Refactoring can greatly enhance the readability of your code and make this task a lot easier.

III. Helping Developers Collect Information More Effectively

A. Collecting information is all about how to store, categorize, share, and perhaps annotate the information located in the previous step. That sounds like a job for the IDE, which is probably true; as the researchers suggested in the report, our tool vendors have a lot of room to improve and innovate. However, if you look at this issue from a different perspective, agile methodology can make this task easy enough that the existing IDE, or your limited brain memory, can handle it without major technological or biological advances. Imagine a typical scenario: after hours of searching and relating, you finally manage to comprehend an extremely complicated algorithm. A lot of information needs to be collected, and perhaps stored in a document, since the algorithm is so complex that you don't trust your own memory, and you may also want to share what you learned with your colleagues. Instead of doing that, how about applying what you have learnt directly at the source code level? Use refactoring to reduce the complexity and increase the readability, essentially combining the documentation with your source code in one unified form. Along with the help from the previous two steps, you might be able to completely remove the need to collect information on paper: the code is so simple and self-descriptive that all you need to do is search and relate, and the rest is in plain sight. No need to collect (at least not in a document).
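As a toy sketch of folding what you learned back into the code (all names are invented for the example): once you have comprehended a dense condition, extract it into an intention-revealing method, so the knowledge you would otherwise write down in a document lives in the code itself.

```java
public class OrderRules {
    // Before refactoring, the knowledge lived only in the reader's head:
    //     if (total > 50000 && items > 10 && !rush) { ... }
    // After refactoring, the comprehension effort is captured in a name, so
    // the next maintainer can search and relate without a side document.
    static boolean qualifiesForBulkDiscount(long totalCents, int itemCount, boolean rushOrder) {
        boolean largeOrder = totalCents > 50000 && itemCount > 10;
        return largeOrder && !rushOrder;
    }

    public static void main(String[] args) {
        System.out.println(qualifiesForBulkDiscount(60000, 12, false)); // prints true
        System.out.println(qualifiesForBulkDiscount(60000, 12, true));  // prints false
    }
}
```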

Thursday, March 01, 2007

How good is enough

This is the title of a recent column in JDJ magazine by Nigel Cheshire. In almost the same month, ACM published a special issue on software quality in which a number of papers shed light on the same question, "How good is enough?"; a month apart, in IEEE Software, Lisa Crispin, coauthor of Testing Extreme Programming, published her article "Driving Software Quality: How Test-Driven Development Impacts Software Quality" on this topic as well. Why all this sudden attention to a seemingly straightforward question? The answer is simple: most of us do not know the answer.

How many times, as a manager, have you seen a project produce higher-than-normal defect counts in the QA stage after finally finishing implementation on time thanks to many late-night pizza deliveries and weekend team lunches? Just when you thought your job was done, it had actually just begun. The bug-fixing and rework stage turns out to be even longer and more painful than the implementation itself. Many projects eventually get bogged down and suffer significant schedule and budget overruns even after initially hitting the scheduled milestone for the construction phase. What went wrong? Why do we keep repeating this self-destructive pattern over and over again? Nigel, in his article, proposes a prominent viewpoint: one of the major contributing factors is the over-simplified reliance on project schedule as the dominant metric for measuring progress. Because schedule is relatively easy to measure, it is widely adopted in the industry as a metric, and sometimes the only metric, for software projects. However, schedule alone, without other quality-related metrics, does not really tell you anything other than how many days have passed since your project started. Without the context provided by quality-aware metrics, when a developer tells you "I finished the implementation and checked in all the source code on time," what it really means is "I have no idea whether this code will work or not," so it could well be the end of construction or just the beginning. Nigel comes to the conclusion that "what gets measured gets managed," but he does not get into the details of how you are supposed to measure true quality, cost, and progress.

In agile process models we throw away pure construction milestones, because we understand that construction without test-case verification based on user requirements does not reflect progress at all. That is why Lisa, in her IEEE Software article, focuses on how Test-Driven Development and Design can give you a hand here. She arrives at the plain, ugly truth that "anything not covered by a test isn't going to be there," so the only way to guarantee that you deliver what your customer asked for is to create test cases for every single requirement, and to move the project forward only by satisfying those test cases. This is especially crucial in an agile project, since you constantly have to make "how good is enough" decisions, not only about overall quality but about almost everything else, such as your just-good-enough design and just-in-time implementation. Test-driven requirements make it much easier to answer the question, but writing testable, quantifiable requirements is not just a matter of adding some extra description to your story cards or requirements document; it requires a fundamental mindset shift from the traditional way of presenting and organizing requirements. If you are interested in this topic, Neil Maiden, in his back-to-back articles on requirements quantification, "Improve Your Requirements: Quantify Them" and "My Requirements? Well, That Depends," provides some very interesting ideas and material for further reading. And while you explore all these new and exciting techniques and theories in search of the answer to this million-dollar question, don't forget this century-old wisdom:

Perfection is the enemy of the good.

-- Gustave Flaubert
French realist novelist (1821 - 1880)

Monday, February 19, 2007

Quote of the Day and Happy Chinese New Year Everybody

Sometimes through heroism you can make something work. However, understanding why it worked, abstracting it, making it a primitive is the key to getting to the next order of magnitude of scale.

-- Robert Calderbank
Princeton

Tuesday, February 13, 2007

Congrats to Java finally cracking into hard real-time environment

David F. Bacon of IBM Research published an article, Real Time Garbage Collection, in a recent issue of ACM Queue magazine. In it he introduces IBM's Metronome project, which delivered a JVM implementation that removes three major roadblocks that have kept people from using Java in hard real-time environments (even with the real-time JVM specification extension):

  • Nondeterministic garbage collection
  • JIT compilation
  • Dynamic class loading

The Metronome project has been adopted in the Navy's new DDG-1000 destroyer. This is a pretty significant breakthrough, since people now have the choice of using safer language features, such as garbage collection, not just in soft real-time systems but in hard real-time systems like the weapon control system on a DDG.

It’s an exciting time again for programmers

About a couple of months ago, during my farewell lunch at my previous workplace, one of my colleagues asked me, "What is the most exciting thing you think will happen in our field (as programmers) in the next 5-10 years?" To my surprise, I did not pause for long (considering I am usually not very good at this kind of prediction) before giving my answer: "The increasing parallelism in the hardware." Several months, not years, have passed, and you can already see the movement almost everywhere, from hardware to software, from industry to academia. Just when you think Moore's Law no longer applies to modern processor design, it seems to have morphed into a different dimension: although the growth in processor speed has slowed down, the number of independent cores is increasing at a dazzling speed. Intel has just showcased an 80-core processor prototype. Now imagine the challenge for software engineers to utilize that kind of concurrency and computing power. Even with natively thread-friendly languages such as Java or C#, programmers today have a hard time building robust, highly parallel software that can utilize the 8-16 CPUs (16-32 cores) in some high-end servers; now imagine that five years from now not only high-end servers but almost all computers, even pocket-sized ones, will have multiple cores. The industry is calling for new mechanisms to deal with this unprecedented challenge, which is why Sun is working on Fortress (designed to work with tens of thousands of processors and petabytes of memory) and IBM is working on its own X10 language (designed to help people move from the Java platform) to harness the power of this growing beast.

Recently an excellent article, Unlocking Concurrency, was published in ACM Queue magazine, a very interesting read if you are looking for a peek into the future. This is probably the most exciting time for programmers since object-oriented programming was invented in the '60s and '70s.
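A sketch of what that hard time looks like in plain Java (the example is mine, not from the article): even a trivial summation must be manually decomposed into tasks and joined back together, which is exactly the burden that languages like Fortress and X10 aim to lift.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Splits a summation across a fixed thread pool: manual decomposition,
// manual join, and manual pool lifecycle management.
public class ParallelSum {
    public static long sum(long[] data, int nTasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nTasks);
        try {
            int chunk = (data.length + nTasks - 1) / nTasks;
            List<Future<Long>> parts = new ArrayList<>();
            for (int t = 0; t < nTasks; t++) {
                final int from = Math.min(data.length, t * chunk);
                final int to = Math.min(data.length, from + chunk);
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> part : parts) total += part.get(); // join
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data, 4)); // prints 500500
    }
}
```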

Friday, February 09, 2007

Talk about requirement engineering (RE)

Recently I read the most comprehensive research paper I have seen so far on how requirements engineering improvements benefit other upstream and downstream activities in the software development life cycle. The paper, "An Empirical Study of the Complex Relationships between Requirements Engineering Processes" by D. Damian and J. Chisan, in the July 2006 issue of IEEE Transactions on Software Engineering, presents empirical findings from a case study conducted at the Australia Center for Unisys Software (ACUS) during an RE improvement initiative aimed at moving its RE practice from CMMI level 1 to CMMI level 2. Before the initiative, ACUS performed its RE process by simply gathering requirements as a primitive list of one-to-two-sentence feature descriptions. Because of this vague definition mechanism, ACUS suffered from multiple RE-related problems, such as feature creep, incorrect implementation, and inaccurate estimation, which eventually led to budget and schedule overruns as well as low customer satisfaction. This research is exceptionally interesting to me since one of my recent projects suffered from exactly the same RE challenges, which contributed to a partial project failure; although there were many other contributing factors, the RE problem was definitely one of the major ones. ACUS's initiative was fairly straightforward: instead of adopting a rigid RE methodology, it used an optimized process roughly based on RUP, containing the following elements:

  • Feature Decomposition
  • Requirement Traceability
  • Group Analysis Sessions
  • Cross-Functional Teams
  • Structured Requirements
  • Testing According to Requirements

According to the research, to the researchers' surprise, Group Analysis Sessions and Cross-Functional Teams were rated the top contributors among engineers, while Feature Decomposition and Requirement Traceability occupied the top two places among managers. This conclusion is consistent with the widely accepted belief that agile practices, such as group (paired) analysis sessions and cross-functional teams, mostly benefit the engineering team, resulting in quality and productivity improvements, while formal techniques, such as feature decomposition and requirement traceability, mostly benefit managerial effort and overall controllability.

Sunday, January 07, 2007

Some history on the software development process model

People always have different ideas about, and attitudes toward, software development process models. Veteran folks ask, "I remember when software development used to be as easy as 1-2-3: just follow the steps and things eventually get done one way or another. We never had to worry about iterations, methodologies, or process, so what good is this iterative/agile approach?" On the other hand, greener folks ask, "Why aren't we following an iterative approach? And by the way, what is this waterfall model you guys are talking about?" More than ten years after the iterative approach was introduced to the industry, you would still be amazed at how many practitioners, both developers and managers, cannot tell the difference between the spiral model and the iterative model, and at the number of organizations still following the decades-old waterfall model, or worse, still in the dark age. That is why I am writing this short article on the history of software development process models, hoping to answer two questions: Why are we here? And how did we get here?


Waterfall Model

Introduction:

Winston W. Royce proposed the initial waterfall model concept in 1970. Recognizing the cost of discovering defects late in the development cycle, he adopted a logical stepwise process, with steps running from Requirements -> Design -> Coding and Unit Test -> System Integration -> Operation and Maintenance, progressing through this fixed set of steps to produce a software system. The waterfall model was considerably better than its predecessor, the two-phase "code and fix" model, in which developers jump straight into coding and then keep fixing defects until they can't be fixed any more. The waterfall model was widely adopted in the industry during the '70s and '80s, including by some of the most influential players in the field, such as NASA and the US Department of Defense.

Weakness:

The most criticized problem of the waterfall model is, ironically, also the single fact that contributed to its success: the fixation of the steps. The waterfall model promotes fixed steps as well as barriers between those steps; for example, in a pure waterfall model the requirements are “frozen” after the requirement definition stage for the entire life-cycle. Although this practice sounds logical from the software engineer's point of view, it is very counter-intuitive from the business perspective. In many pure-waterfall projects, the development team over time becomes disengaged from the real-world requirements on which the project was originally based, due to the ever-changing nature of business. Contrary to widely accepted belief, the waterfall model is exceptionally ineffective for implementing large projects. Because of the broad scope of the requirements in a large project, it is extremely difficult to capture and define all the requirements correctly in one shot without venturing into the construction stage; yet because the waterfall model lacks feedback loops, development teams are often forced to march through construction with ill-defined requirements, which inevitably leads to chaos and nothing deliverable, or sometimes the team manages to deliver something, but it is not what the customer wanted.



Spiral Model

Introduction:

In his article "A Spiral Model of Software Development and Enhancement," published in 1988, Barry Boehm recommended a different approach for guiding the software development process. In his spiral model, Boehm acknowledged a weakness of the waterfall model: when applied to a large, complex, high-risk project, the upfront requirement analysis stage is usually too superficial, without getting into design and implementation details, to fully define the requirements, which becomes a common pitfall for many waterfall-driven projects. Boehm therefore recommended a more risk-driven and incremental development method in which, after the requirement definition and concept validation stages, one or more prototypes are built to help confirm the understanding of the system's requirements early on and to mitigate any major technical risks, followed by a structured waterfall-like process to produce the software system. The major advantage of this process model is that it provides multiple early feedback opportunities to verify the requirements and mitigate risks before entering the construction phase, so it is more likely to produce the final product on time and to produce what the customer truly wants. The spiral model became the role model of software development process for the whole of the 90s; I still remember that when I took the Software Engineering course in '97, my textbook still considered it the state-of-the-art process model.

Weakness:

Criticisms of this rigorous approach mostly focus on its inability to perform in a more dynamic and competitive environment, due to two main weaknesses: 1) it is very expensive and time-consuming to build two or three non-deliverable prototypes just to confirm the requirements and mitigate risks; 2) it lacks an effective mechanism for handling requirement changes after the project enters the construction phase. Despite the criticisms, the spiral model does provide a valid, if expensive, way to systematically develop a system of any size; even today it is still a very popular choice for large, complex, but rigid projects at the US Army (the famous Future Combat Systems program is a good example of the spiral model) and NASA.



Iterative Model

Introduction:

In 1995 Philippe Kruchten described a new approach, one that combines the best of both worlds, waterfall and spiral, called the iterative approach. In the traditional software development model, as time progresses the development process moves forward through a series of sequential steps; in the iterative model, the software itself is broken down into smaller pieces, and one or more of these pieces is developed in each iteration. The project typically goes through multiple iterations until the whole system is implemented, and each of these iterations is like a mini-waterfall life-cycle with multiple phases. In addition, each iteration, like the spiral model, is designed to mitigate the risks of that specific iteration. More recently, aided by advances in engineering practice such as refactoring and automation, some developers have pushed the iterative model to its extreme, with shortened iterations and a simplified life-cycle model, to harvest more productivity from the iterative model for certain kinds of projects; this highly optimized iterative model is often referred to as the Agile process model.

Advantages:

The main advantage that the iterative model brings to the industry is that it allows us to revisit various activities, such as requirement definition and design, multiple times across different iterations of a project, hence allowing us to correct and refine them along the way and to deliver what customers really want in a more timely and predictable fashion. In addition, the iterative model enables us to design and build something even without fully understanding the requirements, since in each iteration we only work on the parts we have already understood. Furthermore, unlike the spiral model, in the iterative model every iteration is a complete mini-lifecycle that produces an incremental but executable product, versus a mere prototype in the spiral model. This true incremental delivery ability is the key in today's highly dynamic and competitive world.