What produces quality?
In software, as in anything else, there will always be disagreements over what it means to be good, and further dispute over how to achieve it even assuming a definition can be agreed on. The hypothesis of this essay is that Open Source software tends innately to encourage a certain concept of quality, one that in many ways is incompatible with the view held by a majority of the proprietary software industry.
As understood by the archetypical Open Source developer, programs have a life-cycle. There is no single gestation period appropriate to all: some are conceived, born, raised, and sent out into the world all in the space of a few hours, while others may require weeks or months of planning before implementation can even start. However long it takes to get there, the initial release brings the cold shock of the real world, and the laying bare of all the program's unforeseen deficiencies. The program's mature phase has begun. From here on out its maintainers must feel their way to a careful balance between making the program handle the various demands that will be put upon it, and preserving the integrity of the code.
The latter requirement may at first seem an odd concern. Why should the code have anything to say about the uses to which it will be put? Isn't it ours, to bend to our will as we please? But non-physicality does not imply infinite pliability. A suspension bridge cannot, in the course of regular maintenance, be converted to a skyscraper. Nor can a program be quickly reshaped into something that deviates too far from its original design. Modules and code paths have structural requirements too.
Unfortunately, the apparent flexibility of software contributes to the belief that it is really a new kind of engineering or industrial discipline, that almost any transformation can be accomplished in a predictable amount of time, with enough experience and know-how. And at first glance, it appears to work. The bridge can be converted to a skyscraper, for a little while. It won't stand long, and even before it falls the users will feel it swaying and oscillating, but nevertheless the illusion of a successful conversion can be kept up temporarily. By succumbing to the temptation to provide this illusion, in the hopes that the customer will embrace the new features (and overlook that uncomfortable swaying sensation), proprietary software development too often leads to unstable software.
As programs and their operating environments get more and more complex, the inadequacy of treating software development strictly as an engineering problem begins to reveal itself. A biological approach may be more appropriate: the process of creating and maintaining code resembles growth and evolution at least as much as architecture and construction. And, for reasons examined below, Open Source developers lean toward a looser, more evolutionary style of development, rather than straight top-down engineering.
Traditionally, engineered systems move from deadline to deadline according to human desires and plans. The rate of change depends mostly on factors external to the program: market pressures, management's vision of shiny new features, the need to impress the board and the shareholders, etc. The long-term health of the program is likely to be short-changed in this process: not necessarily ignored completely, but all too easily ending up at the bottom of the priority list. The owners of the code reason that corporate survival is the first imperative. Without the company, who cares what happens to the code? Profitability thus becomes a rational measure of the code's health, despite the fact that it has nothing to do with the state of the program itself. To make matters worse, slavishly satisfying market demands only raises expectations of immediate gratification in the future. The greater the concessions the program makes (even if at the expense of long-term maintainability), the more the market will expect from the next version.
By contrast, evolved systems change more gradually, according to a holistic calculus that balances the needs of the program against the pressures coming from its environment. Certainly a program that changes to meet the needs of its users is more likely to live to see another version. But, as with organisms in the natural world, a program that changes too quickly won't survive at all, except perhaps in the very short term. In nature, mutations must normally be minor and incremental to be propagated: an animal whose skeleton suddenly enlarges from one generation to the next, without any corresponding increase in, say, digestive capacity, would simply be a non-viable mutation with no descendants. The same is true for software.
When looked at this way, the surprising thing is not that Windows NT crashes so much, but that it crashes so little. Its developers are under constant pressure to implement new features in response to market demand, at a speed that has no connection to the product's actual malleability -- though it has every connection to Microsoft's need for revenue. This situation is by no means limited to Microsoft, though NT is one of the best-known instances. It happened to Netscape Navigator too, for example: locked in a war for corporate survival, their browser was pushed to advance at a rate faster than the code could really stand. The result was a product stuffed full of new features, but notoriously buggy and liable to crash unpredictably. Maybe its digestive system couldn't keep up, or maybe it was the nervous system. The details aren't important. The result was inherent in the process.
In the biological model, which more closely matches the attitude Open Source developers have toward their projects, the programmer is part of the code's reproductive system. Humans are the conduit by which environmental pressure (program performance and user feedback) makes itself felt, and serve as the agents of mutation and transmission to the next generation. In a given set of environmental conditions, our decisions about how to modify the code are informed at least as much by the prime directive of code survival as by profitability. Indeed, when there is not necessarily any link between corporate survival and code survival, programmers may well feel that their primary responsibility is the latter and not the former. At least this is the case for many Open Source software projects, which often have no associated for-profit corporation anyway.
Considered from an Open Source viewpoint, then, market pressures are like cosmic rays, spurring the software to more frequent and more drastic mutation. As in the natural world, most of the mutations would probably be junk, producing partially or wholly broken animals that leave no descendants. Few Open Source projects want to run such a high risk of becoming non-viable, however, so they shield themselves with lead and run at something closer to the background mutation rate. For software, this means slow, incremental, reversible changes whose utility can be field-tested, evaluated, and reconsidered at an unhurried pace.
Of course, even for a very market-driven project, mutations are not entirely random, because they must pass the judgement of the programmer before they are even allowed into the genome. But that judgement itself can be influenced by the constant bombardment of cosmic rays. Corporate developers, worried about their stock options and trying to keep up with customer demand, don't always have a compelling motivation to consider each decision's effect on the program's health, especially if there's a chance of just replacing it with a completely new product down the road (ring any bells, DOS users?). Open Source developers, on the other hand, usually have less desire for speedy changes than for maintainable ones. In fact, the very title "maintainer" is more frequently used, and has much richer connotations, in the free software world than in the commercial world, where the titles are more likely to be "developer" or "engineer".
Inevitably, proponents of the Open Source way and proponents of the proprietary, ear-to-the-market way find themselves disagreeing about what it means for software to be "good". The commercial developer says "Your program doesn't have a modern GUI or speak any of the latest inter-object protocols from vendors X, Y, and Z." He has a point: those are what sells, and in the short term they probably also make the product extremely usable. But the Open Source developer says "Your program is staggering under the weight of hastily-added, under-librarized features, most of which will be obsolete in five years, but whose effects will be felt in your code even after you pull them out." She has a point, too: those are the causes of unhealthy and ultimately unmaintainable programs. Who's right? The first way is more likely to generate profit right now, and -- shall we just admit it? -- more likely to satisfy immediate user demands (except for economically insignificant users, of course). The second way is more likely to result in software that doesn't crash, and that users can count on to be there ten years later.
I don't think the proprietary way will ever disappear. There will always be money in charging royalties for full-featured instant skyscrapers. But as time goes on, a higher and higher percentage of the world's running code will be Open Source. When the software's own health is the primary consideration throughout development and maintenance, it's only natural that the software stands a better chance of long-term survival.