My Development Philosophy

Software development is an expensive, risky endeavor. Any organization that has a need to develop software must face this risk. The ultimate value that organization will realize from its development efforts will depend in a large part on how that organization manages the risk.

One of the primary risks in software development is budgeting and scheduling. Estimating software costs and development schedules is generally quite difficult. To make a good estimate up front requires a team of programmers who have significant experience with both the technologies employed and the solution domain. Formal estimation methods usually require more time and money than available for business projects. To complicate matters, design and requirements change during development, so estimates have only temporary value. The bottom line is that it's difficult to predict how much time and money any given project will require.

There are a few common ways to manage budgeting and scheduling risks in projects where time or resources are tight. One popular approach is to get someone else to take on the risk. The two best-known methods for this are overtime and outsourcing.

The problem with relying on overtime is that it easily becomes chronic and often causes a severe drop in product quality, productivity, or both. This violates Stephen Covey's Production/Production Capability (P/PC) balance.

Outsourcing is a much worse solution than overtime. An organization that outsources its development also delegates the risk management, effectively compounding the risk of delayed or aborted projects. Moreover, there is little to guarantee quality in an outsourced solution. Lastly, communication with developers is usually extremely limited with outsourced projects; it's easy to end up with the perfect solution to someone else's problem.

Some organizations, including several for which I've worked, are able to swallow most of the risk in their business plan by providing developers with ample time and resources to complete their work. This makes for a nice, laid-back work environment. Assuming the developers are good and the organization can afford to sustain such an approach, this model will probably work better than the two mentioned above. But I suspect that most organizations cannot afford to sustain it. Moreover, I believe an alternative could provide greater value, even for those that can.

I believe that the best way to manage development risk is in the development method itself. By adopting an Agile approach, such as Extreme Programming, an organization can achieve better estimates and tighter control of the features/schedule balance, without sacrificing product quality or staff well-being or investors' pockets. In short, Agile enables a development team to produce greater value with its resources.

A discussion of Extreme Programming practices can be found here.

Extreme Programming is a controversial topic, so I offer my own thoughts on a few essential aspects of Extreme Programming here.

Pair Programming

Understandably, management is usually hesitant to put two people on a task that one can do perfectly well. I am very productive on my own, but I know from experience that I'm even more productive when I code together with someone else. I've witnessed many benefits:

  • Distributed Code Ownership
  • More Focused, Disciplined Effort
  • Faster Results
  • Better Design
  • Fewer Bugs

On top of that, pair programming is simply more fun. This generally means:

  • Higher levels of Productivity
  • Greater Team Cohesion
  • Increased Staff Longevity

Test-Driven Development

It's well documented that bugs cost more the later they're fixed. By extension, they're cheapest when they're not created in the first place. The cool thing about TDD is that one can actually prove that the code does what it's supposed to do. Of course, it isn't always quite that simple, but it is a very big step in that direction. TDD also forces one to think carefully, at a detailed level, about what the code really is "supposed to do" anyway, and it helps eliminate unnecessary features.
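A minimal sketch of the test-first idea, with an invented example: the tests below are written before the function and state up front exactly what the code is "supposed to do", and nothing more.

```python
# Hypothetical example: tests written first act as an executable
# specification for the function that follows.

def format_total(cents):
    """Format an integer amount of cents as a dollar string."""
    return f"${cents // 100}.{cents % 100:02d}"

# The "specification", written before the implementation:
def test_whole_dollars():
    assert format_total(500) == "$5.00"

def test_cents_are_zero_padded():
    assert format_total(1203) == "$12.03"
```

Writing the tests first keeps the implementation honest: any behavior not demanded by a test is, by definition, an unnecessary feature.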

Ironically, a good team of testers is necessary to properly support TDD: not to test the software, since TDD automates that, but to test the development. In a TDD environment, the testers' job is to find the gaps in the developers' thinking so the up-front tests can be broadened appropriately.

The biggest challenge of TDD is deciding what and how to test. Testing a GUI, for example, isn't simple, and requires a lot of support code. In my opinion, that's part of the cost of doing it right the first time and of proving up front that the code works.

Flexibility

The coolest thing about TDD, in my opinion, is the flexibility it allows. I once worked on a project with hundreds of thousands of lines of legacy code. I wanted to modify a line of code that set certain fields in a form to a default value, so I asked one of the developers whether any code relied on the specific default value assigned. The answer? "I don't know. If you want to change that, we'd have to test it." By hand, of course. The better test coverage one has, the less one has to worry about unwittingly breaking something when modifying code. Of course, if that default-value code had been documented somewhere, the information I wanted might have been there, but wouldn't it have been so much nicer to simply change the code and see if the tests still pass?
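A test like the following sketch (the form and its default are invented for illustration) would have answered my question immediately: change the default, run the suite, and anything that depended on the old value fails loudly.

```python
# Hypothetical sketch: a test that pins a default value down, so anyone
# who changes it finds out at once whether other behavior depended on it.

DEFAULT_QUANTITY = 1  # the default under discussion (made up for illustration)

def new_order_form():
    """Build a fresh order form with its fields set to their defaults."""
    return {"item": "", "quantity": DEFAULT_QUANTITY}

def test_new_form_defaults_quantity_to_one():
    assert new_order_form()["quantity"] == 1
```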

Simple Design

Because TDD allows greater flexibility in making changes with confidence, it allows developers to focus exclusively on present need. Teams often want to design for future needs up front. The problem is that designing on speculation generally complicates the design and the speculation is guaranteed to be wrong at least some of the time. With TDD the flexibility is moved from the design to the process, lowering the cost of current features, and keeping more possibilities open for future development.

Refactoring

TDD also enables developers to make another kind of important code change: refactoring. Refactoring is the software engineering term for changes made to code to improve its readability or maintainability without changing its functionality. Refactoring activities include splitting up long statements or expressions and large functions or classes, renaming variables or objects, removing unused code, moving variable declarations, and updating code comments. The challenge of refactoring is ascertaining that one has indeed not changed functionality, and this is where TDD comes in. With requirements expressed as a set of automated tests, verification becomes a simple matter of running the tests after each change.
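A small sketch of refactoring under test, with invented names: the behavior is pinned by the test, while the implementation has been reshaped (a long inline computation split into named helpers) without changing what it computes.

```python
# Hypothetical example of a refactored function. Originally invoice_total
# was one long expression; it has been split into two named helper steps.

def _subtotal(items):
    """Sum of price * quantity over all line items."""
    return sum(price * qty for price, qty in items)

def _apply_discount(amount, rate):
    """Reduce an amount by a fractional discount rate."""
    return amount * (1 - rate)

def invoice_total(items, discount_rate=0.0):
    """Total of an invoice: subtotal with the discount applied."""
    return _apply_discount(_subtotal(items), discount_rate)

# This test passed before the refactoring and must still pass after it:
def test_total_with_discount():
    items = [(10.0, 2), (5.0, 2)]  # subtotal = 30.0
    assert invoice_total(items, 0.5) == 15.0
```

Running the unchanged test after each small restructuring step is what turns "I think this is equivalent" into a checked fact.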

Continuous Integration

Automated tests are only useful to the extent that one runs them. Unit tests are usually executed quite frequently during coding, at least for the parts of the system being changed, but other unit tests, acceptance tests, end-to-end tests, etc. may not be. Software isn't really complete until it's ready to deploy, so code should be integrated fully and tested completely as part of the development process. It helps to have the right tools for this; if necessary, build in-house tools to augment those readily available. The benefit of maintaining a working automated build will outweigh the cost many times over.

The points I discuss above could and should be part of any good development process. You don't need XP to program in pairs, to write tests first, or to integrate regularly.

Incremental Release

Once a team is doing these three, however, it can leverage some really nifty and powerful practices from XP. It's a short leap from Continuous Integration to Incremental Release. The difference is that under Continuous Integration, features needn't be fully completed before they're integrated, and the product needn't be released often. With XP, only work that's really done can be integrated, and releases are made available (potentially, at least) at regular, short intervals.

The benefits of this are manifold. It removes the liability of incomplete features: only features worth completing are of value, and only those are integrated. Incremental releases also provide an opportunity for fruitful interaction with product stakeholders: at the very least through a demonstration and feedback session, but preferably through beta testing or a beta release. Beta releases also have sales or marketing potential, and may provide other value that a more traditional, single-shot release cycle would not.

Better Estimation

Short, incremental releases also enable a team to adopt a very simple and effective method for accurately estimating development schedules. Classic software estimation has evolved into a tremendously complex process, requiring formal measurement of all aspects of development, careful plotting of proportional effort by development task, automated tools, and much more. One gets to the point of needing a separate project to estimate the cost of a project. Then, of course, it becomes necessary to estimate that project...

The problem is that some aspects of software development often fall through the cracks; they get left out of the estimate, or there isn't enough data to predict them accurately. Lack of data is a more general problem where projects have a time-span of months or years. By the time there's enough data to estimate a project, the variables change; the team changes, the technology changes, the development process changes, and then the data loses value.

Axiomatic to statistical analysis is the principle that more data points enable a better description of the subject. Weekly releases mean more data every week to fuel further estimation. Completing features start-to-finish for each release means that task-by-task breakdown can be more or less ignored. What's important in XP is the correlation between the team's initial estimates and the team's actual productivity over the course of the iteration. In XP, this correlation is called velocity. Velocity works because programmers' estimates are usually more precise than accurate. In other words, though programmers often mis-estimate, even by a factor of two or more, experienced programmers are generally consistent in their mis-estimation. Instead of improving the accuracy of the estimates, XP leverages their precision, or reproducibility. If an XP team completes in one week an amount of work it initially predicted would take 180 work hours, the team will expect to complete, in the next week, another set of features initially estimated at 180 hours. It's simple and elegant, and it doesn't require any tools, any advanced math, or reading any of Capers Jones' 600-page books.

Better Planning

Of course, better estimates certainly allow better project planning. But short release iterations enable better planning in other ways as well. Requirements Creep is a well-known phenomenon in software engineering, and an understandable one. Requirements can change because business processes and needs change. But changes in requirements are also a simple, natural, and beneficial by-product of the creative process at the interface between the customer, the software engineer, and the product under development. Many software estimation techniques include some kind of compensation factor for late-breaking changes in requirements. Some software teams actually charge more for features added later in the schedule.

XP, however, shines when it comes to requirements changes. Since features are completed over short iterations, re-prioritizing features from week to week is a natural part of the process. Simple Design and TDD help keep the cost of requirements changes low. A stakeholder demo and briefing after each iteration provide high visibility into the development process and empower clients to make well-informed planning decisions.

Incremental Releases are, in my mind, the linchpin of the XP development methodology. Critics have objected that producing incremental releases isn't always possible, doesn't always work, or provides less benefit without the other XP practices. There is certainly some truth to these objections. However, as we saw, the other elements of XP are worthwhile development practices in their own right, and not specific to XP. In any case, I think the relative worth of any development effort must be measured by the business value it provides. XP isn't about making development easier; it's about providing greater value.