This letter responds to the review letter I received on March 24, 2003.
I hope you'll pardon the following diatribe, but I have a few comments and questions regarding the reviews of my submitted paper. Some are directed at the reviewers, and some are (really) posed to the larger software development community.
I was a little surprised and disheartened by the reviews of Trace Cards that I received. The two reviews seemed rather shallow to me. The opinions offered were not relevant to the issues raised by the paper, but appeared merely to reflect a lack of experience with, or consideration of, the issues the paper addresses. So unfortunately, I don't find the comments useful in any way for enhancing the paper. I would appreciate review and comment from professionals for whom the issues I raise have some importance. Hence, I've drafted this response as an open letter and copied this missive to a few of the folks in the industry that I suspect might care.
I find it rather ironic that the paper evaluation process for this "agile" conference itself offered no agility, i.e., no opportunity for interaction and conversation directly with the reviewers. It's unfortunate that the paper evaluation process is a traditional waterfall, and that it exhibits the worst aspects of that process, especially the expectation that the offered draft "connect all the dots" of concepts and fulfill the reviewers' expectations without my having a clue as to what those expectations were or what their prior experience might be. However, this appears to be how these conferences and their paper submissions and reviews are generally conducted.
Writing and editing a paper with a page limit involves choices: choices about what to keep and what to jettison, what moves the discussion forward and what does not, what to explain and what to leave to the reader's imagination and personal research. I did not belabor the fact that the technique offered is experimental because that is obvious. It is also advertised in the Abstract (see the phrase "new technique"). Where appropriate, I've cited prior art so that I did not have to waste space repeating it. I provided as many examples from different domains as I could fit in order to show how the technique handles variety. While novelty will certainly make a new and unfamiliar technique more suspect initially, I would hope that novelty in and of itself does not discourage serious consideration and experimental use.
My reading of the Agile Manifesto and Agile Software Development suggests the following: While there may be more perceived value in individuals and interactions, working software, customer collaboration, and responsiveness to change, there is at least some acknowledged value in certain kinds of processes and tools, some forms of documentation, some requisite contract negotiations, and some plans to be followed.
While the former activities may offer more fun, the latter provide necessary foundation, context, and residue. As such, they are not to be lightly dismissed.
So, the big challenges for "agile" development appear to center on just how to maximize the value offered by these latter activities and their results. A quick review of some related articles on the Agile Alliance web site seems to bear this out. There appear to be many questions about just how to get the most out of these activities (which many folks consider odious), with few clear answers regarding how to keep them "agile".
How does "agile" development truly honor and value these other activities? How much of "agile" development is focused on the fun activities to the detriment and dismissal of the others? The attitude of at least one of the reviews reinforces my concerns in this regard.
The really big challenges for software development generally are in the areas of requirements (capture, formulation, management), the attendant organizational politics, and their appropriate alignment to business value propositions. Acquiring adequate understanding of business problems is often the most difficult part of solving them. While I don't intend to diminish the importance and difficulty of software solution design and development, frankly, they are very straightforward in comparison. The most wicked problems are often so by virtue of trade-offs that are motivated by, or inextricably bound up in, corporate policies and politics.
While it's certainly valuable, working software is not really the primary asset that results from software development, but rather the primary asset is working knowledge: know what, know how, and especially know why. Working software can always be (re)created from working knowledge, but the reverse (recreating knowledge from code) will not always be so easy, as it usually will have significantly greater (reverse-engineering) costs associated with it. People are generally the sources and repositories of working knowledge, even when some impersonal institutional knowledge repositories (such as documents) exist. Hence, the Agile Manifesto appropriately values individuals, their interactions and their collaborations.
Working software that does not address the real needs of a customer is a waste of time and money, unless it's a toy that can be readily thrown away. But, even such experiments have costs. And contrary to some notions of "agile" idealism, change also has costs. While it is generally better in principle to embrace and respond to change (and required in practice), the associated costs can only be managed and amortized, but not eliminated. Few businesses can afford the costs of excessive change for long. Fewer still will tolerate them indefinitely.
How do "agile" methods address the various real-world quality concerns held by stakeholders? How do "agile" methods align their activities and work products with the vision, mission, and values of customers and business enterprises? Do "agile" methods offer any techniques to identify and capture, as comprehensively as possible, what stakeholders value and need most? Do "agile" methods presume that customers even have a coherent vision, mission, or values? If so, how are these captured, retained, and shared, especially over longer durations (longer than a single iteration or a single project)? If not, perhaps these "agile" methods need to be supplemented ...
Unlike many of the issues addressed by established "agile" techniques, many of the issues surrounding software requirements and technology alignment remain unaddressed. As such, I suppose this conference may not be an appropriate venue for introducing new techniques that address these issues. Yet these issues, and the techniques that address them, definitely need further consideration, explanation, discussion, development, and application in practice. This must occur in order for the industry as a whole to move forward. It was my sincerest hope that I could raise some of these issues at the conference.
But honestly, I had to make far too many compromises in the draft of the paper I submitted due to the page limit. The web version of the paper offers more examples, and the web format generally offers more options with respect to linkage, exposition, organization and evolution. I will indeed elaborate further on the technique I've proposed and offer significantly more expository depth for these ideas going forward via the web.
In fact, there are three papers I've posted on the web during the past year that all relate to the issues I've raised about software quality and business technology alignment. The first two of the following links really offer the background needed to motivate the introduction of trace cards. The third link is the current (more complete) web draft of the submitted paper.
For whatever it's worth, I hope you'll find these considerations have some value.