Is Free Software Inevitable?


Linas Vepstas <linas@linas.org>
February-July 2001


There have been political [RMS], anthropological [ESR] and organizational [CBBrowne] analyses of free software and its implications for society, but few discussions of its underlying economic theory and its repercussions. In this essay, I review some of the economic forces acting on Free Software producers and consumers, and try to place these in a macroeconomic framework. I will be focusing primarily on corporate interests, in large part because the nature and role of free software in a business environment is widely misunderstood, permeated with myths, and prone to propaganda.

There are many free software users because the software market has matured (as all markets do) into a commodity market, where price, not features, dominates buying decisions. This alone can explain most of the growing popularity of free (gratis) software among consumers. But it is more interesting to examine the forces driving the producers: the forces acting on those who create free software. Why is free software getting produced, who is doing it, and will this trend continue indefinitely?

There are parallels that can be drawn between Free (libre) Software and free trade, and, more accurately, between proprietary licensing and markets controlled by intellectual property barriers. These parallels, understood as powerful economic forces, may be the right way of understanding the continuing vitality of free software. I will argue that the crux of the analogy is that these economic forces act indirectly, shaping markets rather than being markets. As such, they provide leverage that direct investment cannot match, and thus are far more powerful than they first appear. A good economic theory should provide a better way to predict and understand what the future may hold in store for Free Software. Based on the observations herein, it would seem that the dominance of Free Software is indeed inevitable. Free software wins not only because consumers like the price, but because producers like the freedom.

This article assumes a basic knowledge of the Free Software Movement and of the goals of Open Source. [additional overview references needed here.] [need URL's for other references].

Introduction

Companies engaged in free software/open source development have been struggling to make a profit. There seems to be a paucity of viable business models for ambitious entrepreneurs: the current business models include selling distributions (Slackware, Red Hat, SuSE), providing support (Linuxcare, others) or enhancing, customizing and installing complex software systems (Egrail, OpenSales, Brainfood, IBM services). Meanwhile, proprietary software stalwarts (Microsoft) accuse open source converts of depriving them of their livelihood [Mundie], even as they gleefully point out that a company without proprietary intellectual-property protections can't be a viable business. Some of us have heard the VC's refrain: how can you make money from something you don't own?

How can we then explain the growing popularity of free software and open source? Eric S. Raymond tries to explain it all with an anthropological study in "Homesteading the Noosphere". Richard Stallman categorically states that Free Software is about Freedom (Liberte) and not Price (Gratuit). ESR's observations may explain the popularity of Linux with hobbyists, hackers, and IT professionals, while RMS's appeal to Kantian imperatives offers no explanation. C.B. Browne discusses the pros and cons of authoritarian/centralized vs. distributed/anarchic development processes [Linux and Decentralized Development]. This may explain the success of the development model, but not the adoption by end-users. Finally, the popular observations and shrugged shoulders invoked during conversations about 'business models' are belied by the growing adoption of open source by the hardcore corporate and business community. There's a force driving this growth, but what is it?

Looking for an economic explanation seems particularly promising when dealing with corporate interests. This is because corporations tend to act much more like classical economic self-interested agents than do individuals. Indeed, to understand the actions of individuals participating in the free software movement, one has to appeal to sociological and anthropological principles [ESR], since the seemingly altruistic behavior of individuals is unexplainable with textbook economic arguments. But when dealing with corporations, we might expect that behavior is driven by almost entirely selfish reasons: maximization of both short and long-term profits. Corporate motives are not subject to conjecture: they are usually plain to deduce.

Economic equations have two sides: producers and consumers. The forces acting on one can be nearly independent of the forces acting on the other. In a mature marketplace, the producers are as interchangeable as the consumers: one is pretty much as good as another. Thus, what matters are the forces between consumers and marketplace as a whole, and the forces between the producers and the marketplace as a whole.

To consumers, the software marketplace appears to be a mature, commodity market, or so I will argue immediately below. In a mature market, there is little product differentiation, and buying decisions are made primarily on price. This alone can explain most of the popularity of free (gratis) software.

To software producers, the software marketplace appears to be a complex, shifting seacoast, fraught with dangers, and a difficult place for anyone to make money (with the exception of Microsoft). The forces acting on producers are the ones that the majority of this essay will be devoted to. These forces are giving rise to a new class of producers, non-traditional producers who can 'afford' to create free (gratis) software, and give it away. They can afford to do this because their dominant costs are tied up in issues of intellectual property rights and freedoms (liberte). They do not recoup their software development investment through direct software sales, but instead, through other means. The need for source code access can outweigh the cost of underwriting 'free' software development. I claim that these needs will be powerful engines driving the continued production of free software. Free software wins not only because consumers like the price, but because producers like the freedom.

Innovation, Commodity Technology and Mature Markets

Commodity markets are price-sensitive precisely because the goods being traded are commodities. There is no such sensitivity when the goods are unique, one-of-a-kind. Is software a commodity? Based on consumer actions, it would seem to be. When a product has unique and powerful features that no other product has, and most buyers are demanding these features, then the seller has a lot of latitude in setting prices. But when the buyer can get substantially similar products from any number of sellers, one of the few things that sellers can compete on is price. 'Free' often trumps.

The process of introducing new, unique and powerful features that users want is called 'innovation'. It would appear that innovation in mass-market server and desktop software has stopped or stalled. One web server is pretty much like another; it should be no surprise that the cheapest one, Apache, should gain market dominance. Of course, there are differences: some web servers may scale better than others on high-end, cluster machines. Other web-servers may install more easily, or may use a much smaller RAM footprint. There is an immense variety of ways in which web servers can distinguish themselves from one another. But the plain fact is that most of these features are not important or interesting to the majority of web-server buyers. Just about any modern web-server is 'good enough' for the majority of the market. And therefore, the market will choose the cheapest one.

Am I saying that Apache is merely 'good enough', and that Apache developers are stupid and incapable of innovating? No. I am merely stating that Apache didn't need to be 'the best' to win market dominance, it only needed to be 'the cheapest'. And in one important sense, innovation does continue: with every new whiz-bang feature that is added to Apache, an additional 1% of the market is able to look at it and declare: 'this is good enough for me'. The same situation currently exists for other server subsystems: domain name servers, mail delivery agents, security tools, and, of course, server operating systems. This alone can explain most of the current popularity of Free Software in the server market.

The same situation has also existed in desktop systems for over a decade. The windowing systems in OS/2, the Macintosh, MS Windows 3.1 and the X Window System (to name the major ones) have been 'good enough' from almost the beginning, being distinguished primarily by price and the availability of the platform on which they ran. For over a decade, there have been dozens of word processors and spreadsheets that were 'good enough'. More recently, the same has become true of mail readers and office suites. Microsoft has built its software dominance by leveraging the 'good enough' phenomenon. Microsoft desktop products didn't have to be better than the competition's (although they often were), they merely had to be more easily available. (I say 'more easily available', not 'cheaper', because for the home user, factors such as the need to drive somewhere to purchase a competing product, or the need to go through a possibly daunting install process, are powerful offsets to price differences. This is called 'convenience cost', and the power of bundled, pre-installed software is that its low convenience cost makes up for its price tag.) Microsoft achieved market dominance by failing to innovate, and by stifling innovation: it created products that were 'good enough' and were cheaper than the competition. If an innovator introduced a revolutionary product that could demand high prices and rich profits, Microsoft only needed to create something similar that was 'good enough'. Microsoft could then leverage its distribution channel (viz., bundling, pre-installs) to offer a lower 'convenience cost' and win the market.

In a commodity market, the low-priced offer wins the business. To the extent that software is a commodity, then free (gratis) software will displace popular, commercial, non-free software. As we shall see below, Free Software is a disruptive innovation, and its continued development is powered by this fact. However, from the above arguments, we see that the popularity of free (gratis) software is due to the lack of product differentiation that is the hallmark of ordinary market maturation.

Is Free Software Like Free Trade?

[Note to self: rewrite this section completely. Refocus on IP issues.]

The plaintive cries of the 'intellectual property' industry remind me of a similar refrain heard in the hallways of Washington, sung to international trade representatives and policy wonks: "How can we survive when the flood of cheap products and labor from Mexico will put us out of business?" (NAFTA) "How can the Japanese rice farmer survive, and Japan guarantee its economic independence, when cheap American rice is allowed to be imported?" The answer is, of course, that one doesn't. The inefficient producers do indeed go out of business. When the trade barriers come down, the protectionists are the losers. Some survive, others grow to take their place: some may discover new, niche, upscale markets, while others figure out how to reduce overhead and be profitable with reduced margins. Technology innovation often leads to 'disruptive dislocations' [The Innovator's Dilemma, Clayton M. Christensen]. I think the same fate awaits the software industry, and more broadly, the intellectual property (viz. entertainment) industry.

Is this bad? Is there something wrong with this? Let me take sides with the trade liberals, who seem to have won: the loss of jobs and income from lowered trade barriers is more than offset by the broader and cheaper array of products. Once the dislocation is behind you, the total benefit of free trade exceeds the loss of jobs. Far from starving, or scratching for a living, the first world has a greater variety of foodstuffs and international cuisines available than ever before. It's not a zero-sum game: by removing economic friction and encouraging efficient markets, the whole world benefits. Indeed, the benefits usually go far beyond any original vision: one finds entirely new, unimagined industries springing up, built on the backs of the now-cheap, commodity products. [The Innovator's Dilemma, Clayton M. Christensen]

On the flip side, dislocations have been severe (e.g. the American steel industry, the British coal mining industry), as thousands of workers were displaced and were unable to find comparable employment. Dislocation is undeniable and it is hard; in some ideal world, one might hope that somehow the industries and the politicians might find some softer way through the transition. This would not be pure capitalism: in traditional pure capitalism, the worker be damned, it's all about profits. But I argue below that the high-tech industry has been wracked with deep and powerful dislocations and paradigm shifts for half a century. These seem almost routine: every high tech worker knows that what one did five years ago is not what one is doing now. But this is quite palatable: the high-tech sector, as a whole, has not shrunk. Open Source/Free Software will not cause it to shrink.

As it is with trade barriers, so it is with proprietary software licenses. A proprietary license is a barrier, much like a trade barrier: it prevents the efficient, frictionless flow of information, and uses this barrier in an effort to scoop up profits (vast profits or windfalls, in many cases). Access is denied, forbidden, and protected by the law and enforced by the arms of the state: reverse engineering software is like smuggling: a black art, and now illegal in yet a new way with the passage of the DMCA. Of course, some companies will sell you their source code, usually for a steep price. The analogy: a stiff fine, a tariff, import duties. This suits the producer quite well, but hardly the consumer.

(Side Note: I am not arguing that all notions of intellectual property be abandoned. Although, in their current form, the copyright laws, and in particular the patent laws, in the United States appear to be severely broken, this does not imply that copyright or patents can serve no purpose. They do not need to be abandoned, although there is a clear need for reform.)

[Editorial correction from Richard M. Stallman:

There is a partial similarity between free software and globalization, but also a major difference.

The advocates of "free" trade, and neoliberalism in general, argue that it creates wealth. That is true--but it also concentrates wealth. The result is that only the rich benefit. The poor gain little; they may even lose, as has happened in the US.

Free software is different, because it works against the concentration of wealth. (Copyright is a major factor for concentration.) So when free software creates more wealth, the benefits are general.

This is how free software can be beneficial, while global "free" trade is harmful.

I put the "free" in "free trade" in scare quotes because it is a misnomer. Trade is still restricted, but now it is restricted by copyrights and patents rather than specific laws. These treaties do not eliminate control over trade; rather, they transfer it from governments, which sometimes respond to the well-being of their citizens, to corporations (usually foreign), which don't recognize a concern for the public.
]

My goal here is to try to explain why open source is popular among businesses, why that popularity is growing, and why it's reasonable to believe that it will only grow more. I try to pin this on 'classical' economic ideas of self-interest: corporations are going for it because it's cheaper than the alternative, and I only hoped to point out how it's 'cheaper'.

The focus here should be on the role that patents, secrets and proprietary rights play in software, and the thesis is really centuries old: barriers provide economic 'friction' that allows the rights-holders to derive economic 'rents'. What's new here is the simple observation that free software is built without these barriers, and thus (a) free software companies won't make big bucks, and (b) users of free software derive an economic advantage by avoiding the need to pay 'rents' to the rights-holders.

The flaw in my analogy is that of comparing 'free trade', which implies 'tariffs', when in fact the analogy should be to 'trade that is prevented by the action of patents and secrets'. This is partly literary sleight-of-hand: most people believe patents are good, and that free trade is good. So if I say 'patents are bad', I get booed, but if I say 'free trade is good', I get cheered. But they're really similar statements.

(As RMS points out, large corporations are pro free-trade globalization because they have something better: they have in place copyright and patent protections. If these didn't exist, we might expect that large corporations would be anti-globalization.)

A Case Study

Let's look at a case study, taken from real life, in the dot-com world. The names are hidden, as the story might be construed as libelous. This is not an urban legend; it is a true story in which I was an active participant.

Company D, acting as the developer or lead contractor to an industry consortium C, developed a powerful and expensive web-based system. The technology used a proprietary web server from company X. The system was large: it consisted of hundreds of Unix-based SMP servers and a handful of mainframes located in two cities, mirrored in real time so that a major catastrophe in one city would not disrupt the operation of the system. It cost tens of millions of dollars, approaching hundreds, to develop and operate. It was used by millions of web users on a regular basis.

Like any such large, complex system, it was beset by a variety of problems. One of the more serious problems was a 'memory leak'. During normal operation, the web-server/dynamic web-page subsystem 'leaked memory', using larger and larger amounts of the installed RAM in each of the servers. After a few days or a week, an alarmingly large part of RAM became unavailable, and soon enough, all 4GB of installed RAM on each server would become exhausted. Preventive maintenance was scheduled: at first weekly, then twice weekly, on Wednesday and Sunday nights, the servers were rebooted to reclaim that memory. The reboot was not entirely simple: one had to check that all customers had logged off, so as not to disrupt any pending transactions. Log files and communications subsystems had to be carefully closed before a reboot, in order not to set off automatic error detection alarms. After the reboot, each system was examined to make sure it had come on line properly and was functioning and usable. This maintenance process was carried out by a small army of graveyard-shift sysadmins. Rebooting hundreds of servers twice a week is no small task. The salary expense alone amounted to millions of dollars annually.

In order to correct this problem, a special SWAT team was assigned to locate and eliminate the bug. One little problem: the web server was proprietary, and the source code was not available to the SWAT team. A specialized, pure-custom debugger was developed to trace and match up all calls to the C library's malloc() and free(). Calls that did not match up in pairs pointed at the memory leak. After tracing through and compiling millions of calls, three leaky suspect areas were uncovered. The web server company X provided a special debug version of their binary, and we were thus able to explore stack traces surrounding these leaks. One could very loosely say that a portion of the server was reverse-engineered. Armed with reams of data, a set of conference calls and, eventually, visits to the vendor were scheduled.
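
To give a flavor of what this kind of tracing involves, here is a minimal, hypothetical sketch (in C, for a GNU/Linux-style system) of the general technique: interpose on malloc() and free() with LD_PRELOAD, log every call together with the address of the caller, and pair the log entries up afterwards. Allocations that never get a matching free, grouped by caller, are the leak suspects. This is only an illustration of the approach, not the actual tool built by the SWAT team, which also had to cope with realloc(), calloc(), threads and an enormous volume of data.

    /* leaktrace.c -- minimal malloc/free call matching via LD_PRELOAD.
     * Hypothetical sketch; not the actual debugger from the case study.
     *
     * Build:  gcc -shared -fPIC -o leaktrace.so leaktrace.c -ldl
     * Run:    LD_PRELOAD=./leaktrace.so ./webserver
     * A post-processing script pairs MALLOC/FREE lines by address;
     * unmatched MALLOCs, grouped by caller, point at the leak.
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *(*real_malloc)(size_t) = NULL;
    static void (*real_free)(void *) = NULL;
    static volatile int in_hook = 0;  /* crude re-entrancy guard; a real
                                         tool would need per-thread guards */

    static void init_hooks(void)
    {
        real_malloc = (void *(*)(size_t)) dlsym(RTLD_NEXT, "malloc");
        real_free   = (void (*)(void *)) dlsym(RTLD_NEXT, "free");
    }

    void *malloc(size_t size)
    {
        if (!real_malloc) init_hooks();
        void *p = real_malloc(size);
        if (!in_hook) {
            in_hook = 1;
            /* record the allocation and who asked for it */
            fprintf(stderr, "MALLOC %p %zu caller=%p\n",
                    p, size, __builtin_return_address(0));
            in_hook = 0;
        }
        return p;
    }

    void free(void *p)
    {
        if (!real_free) init_hooks();
        if (!in_hook) {
            in_hook = 1;
            fprintf(stderr, "FREE   %p\n", p);
            in_hook = 0;
        }
        real_free(p);
    }

The matching itself is not magic; what made the real exercise so painful was doing it against a closed, binary-only server, at production loads, with millions of calls to sift through.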

The meeting on vendor premises was surreal. We spent the morning setting up a system that exhibited the problem, in the presence of a support tech who was unfamiliar with our complaint or its severity. As we attempted to explain the problem, the recurring refrain was that 'it is not our code, the bug must be in your code'. We made some progress in the afternoon in trying to explain to and convince the support tech, but the day ended in frustration as we escalated to a manager, and then a second-line manager. Both managers, unprepared and unprompted, blurted out that it couldn't possibly be their problem, it must be our problem. (After all, their product was at version 3, many years old, with millions of users, and no other customer had ever registered such a complaint. Our perception was that no other customer had ever stressed their server so heavily, or used their product in a 24x7, 100% CPU-load situation.) The next day, a fresh batch of techies came by to examine the problem, and after much persuasion, left with heads shaking. More escalations, and on the third day, our last day, we finally met with a real programmer, someone who actually developed the product. Unprompted, the words fell out of her mouth: 'it can't be our problem (we use Purify, the memory debugger), it must be your problem'. We left for the airport with a promise from the developer and her management that she would look into it. However, according to our on-site, permanent support rep, the demo machine exhibiting the problem was powered off and left untouched for weeks on end.

Followup phone conferences with management were scheduled weekly, and with upper management monthly. No forward progress was made. There were more plane trips, more on-site visits. The problem seemed to remain unresolved, festering. This went on endlessly, without relief, without news. Finally, after three months of severe frustration, and some strong, take-charge lobbying by the local architects, it was decided that the whole system would be ported to Apache. At least that way, we would have access to the source code, and could be masters of our own destiny.

Another (possibly more important) factor contributed to this decision. The system was slow, with painfully long response times for certain web pages. A good part of the performance problem was that a large part of the functionality was coded in a semi-interpreted, proprietary programming language supported by this server. There were some attempts made to find alternate compilers/interpreters for this language, the idea being that maybe we could get a performance boost by tweaking the interpreter. But this proved infeasible: an alternative could not be found, and even if one had been, the thought of trying to optimize a compiler was a bit dubious. Compilers are 'hard', and the investment in improving one would be a risky and difficult one, possibly having little to show for it. Lack of access to the source code of the interpreter/compiler for this language proved to be another stumbling block.

It was decided that the system would be rewritten entirely in Java, at an expense of 15 million dollars: the cost of 4 or 5 departments of coders working for over a year.

This should have been the end of the story, at which point I could point out the obvious advantages of open source. If the system had been designed on top of Apache from the beginning, a fortune could have been saved. Almost anything would have been cheaper than what we went through: we could have paid dozens of programmers for a year to hunt down one solitary, single bug, and that would have been cheaper than the army of sysadmins doing twice-weekly reboots, and the army of programmers redesigning everything in Java. The mistake of not going with Apache was due to a reasonable decision backed by the traditional business reasoning of the late 90's: Apache was an unknown. Company X was a large, powerful, widely acclaimed company. Company D was even larger and more powerful, the epitome of business computing. This was a professional, company-to-company relationship: each could be counted on to defend the other, to do what it takes for a successful web-site launch. The pockets were deep, seemingly bottomless: no matter how bad things got, there was a cushion, a safety net in picking a popular, respected product. It seemed to be sheer lunacy, absolute madness to think that one could ascribe such powers, such trust, such dependability to some rag-tag group of volunteers known as the Apache Foundation. Picking Apache over web-server X would have been career suicide, not a rational decision.

But that's not the end of the story. Amnesia set in, starting with the choice of Java for the new version. Late in the development cycle, the new system was finally able to be tested for extended periods of time. After 30 hours or so of 100% CPU load, the JVM, the Java Virtual Machine, locked up, locked up hard. Our conclusion: no other user had ever run a JVM in a 100% CPU-load, 24x7, high-availability environment before. In principle, this was fixable: our company had a source code license from Sun, and thus should have had many of the benefits of open source. With one little catch. Fear of 'intellectual property contamination' meant that only a restricted set of employees actually had the ability to view the source code. We didn't want rank-and-file engineers to accidentally learn the secrets of Java and unwittingly use Sun's patented ideas in other products. It was a clean, honest and upright policy. Unfortunately, the Java developers were an ocean and six time zones away. And, as you might surmise, a bug that shows up only after 30 hours of heavy load is not the kind of bug that is easy to diagnose, and once diagnosed, is not easy to fix. It remained a tangle for months. Eventually, some work-arounds and an upgrade to a newer version of the JVM made the bug disappear; it was never actually fixed (fixing it was no longer important). And another bit of amnesia set in at this point: since everything now worked, the original motivation to move to Apache disappeared. The transition was never made.

In retrospect, it was a very costly misadventure. Not only was there a big hit to the profits, but there was a hit to the revenues. The consortium members were quite upset with the shenanigans. Schedules had been delayed by years, performance was questionable, much of the development and maintenance costs were shouldered by the consortium members. The worst had come to pass: there was a breakdown of trust between the consortium and the prime contractor, with rancorous and acrimonious accusations flowing between executives on both sides. By 2000, the whole thing had been essentially wound down, with asset sales of major portions of the technology made for tens of millions of dollars, and a skeletal operating crew remaining. A potential business valued in the billions of dollars, with decades of happy customers and users, was not to be.

Analysis

Was it all for want of the source code? In part. There were other, structural problems. As I hinted above, the performance (time to respond with a web page) was poor. Important features were late or never delivered. The system was mis-engineered in some important aspects, although this was in large part due to a bad technology selection and inexperienced web developers. (In the late 90's, most web developers were inexperienced.) Large, complex engineering efforts often fail, and the reasons are often manifold, complex, and obscure. This project fit the profile. But it was clear from my own, direct, personal experience that the inaccessibility of the source code of the web server was a debilitating setback.

The largest of companies, with the seemingly best intentions of offering customer support, had failed to do so. In light of this, plaintively, I want to ask: what can a mere mortal, a small company or even a mid-size company, ever hope to gain from 'customer support' from a proprietary software company? If a giant can't get it, what hope do you have?

The powerful, personal lesson that I take away from this is to never, ever use proprietary software for something important. Even if your life depends on it. Especially if your life depends on it. If a bug needs fixing, I can hire a programmer, or two or three or a dozen, to fix it. The cost will be less than waiting for the vendor to fix it for me. I am bound to make many business mistakes in the future. The free software movement is still young, and many, many important technologies are not available as free software, and those that are, are frequently immature. This is a handicap: it's hard to build an all-Linux solution. But the cost (free!) and the potential to be the master of one's own domain make it all seem worthwhile.

Is this story unique? At this scale, maybe. But this story played out at the dawn of corporate awareness of open source. Thousands of other large technology projects have failed at thousands of other large corporations. I suspect that few post-mortem analyses have ever pinned the blame on the lack of source code. But this, I suspect, is because no one ever thought before that source access was a viable or even a remote possibility, or that the lack of it could be a root cause of failure. Few executives or technology officers have thought much about it, heard about it at seminars, or discussed it with their peers at other corporations. It hasn't entered the consciousness of business at large on any broad scale. I couldn't say that there has ever been a Harvard Business Review article written on such matters, or that a class has been taught at the University of Chicago Business School analyzing such business case studies. We are at the very dawn of understanding. But slowly, inevitably, and surely, it will enter business thinking. Pundits, advisors and consultants will eventually start noticing, and they'll advise and consult in a noisy fashion, with splash and panache. And business leaders will come to hear such stories, and even become inured, before the light-bulb goes off with a bright 'ah-ha!': open source removes barriers and stumbling blocks, and open source provides strong financial incentives and rewards.

Notes:

Altruistic Behavior by Capitalists and Corporations

Let's look at another important economic behavior pattern, the seemingly altruistic behavior of open source developers. Although Linux was launched due to a confluence of a variety of technical, social, organizational and anthropological forces that culminated in altruistic behavior by individuals, it will not need to depend indefinitely on these same behavior patterns. The GPL license, when coupled with the economic incentives to use GPL'ed software, will cause corporations to exhibit these same 'altruistic' behaviors. We already see this in action now. Free Software has reached a certain critical mass, where it becomes cheaper for a corporation to modify and add enhancements to a particular GPL'ed piece of software than it is to purchase a competing proprietary product. This behavior is visible in, for example, the Postgres SQL database, where it can be cheaper to hire a developer to work for six months to add a needed feature than it is to buy a thousand-seat license for Oracle or DB2.

Neither the GPL nor the BSD license prohibits the private, internal use of enhanced software. One can make 'proprietary' changes to GPL'ed software and use them internally. However, there is an economic incentive not to keep these changes secret and proprietary, at least not for long: the maintenance and upkeep costs of proprietary changes. If a proprietary change is made to version 1.0 of some GPL'ed software, then the user of this modified version cannot easily take advantage of the newer version 2.0 of the same software. This is because the proprietary changes would need to be integrated into the newer version 2.0. This can be of such great cost, equaling or exceeding the cost of the original modifications, as to be untenable. The internals of the software change, possibly requiring a major redesign of the proprietary enhancement. Even if the enhancement is easily ported, there can be non-trivial costs of test and validation, especially when the software is involved in a complex system or server environment, where high uptime and dependability are requirements. In the medium-to-long run, these maintenance and upkeep concerns provide a powerful incentive to donate the enhancements so that they become integrated back into the main body of the work. In this way, the maintenance and upkeep costs are spread to the community, rather than remaining concentrated in the hands of the modifier.

Thus we see that profit-driven, capitalistic, greedy corporations can be driven (and are being driven) into contributing significant developments to the free software pool. In fact, the greedier, the better: a sharp and quick-witted corporation will act quickly to shed costs onto the community. It may develop a needed feature, and dump it, half-baked and untested, onto the community. Then, with a bit of cheap and diplomatic mailing-list behavior, it can encourage the pool of 'volunteers' to debug and polish its contributions. "Release early, release often" is a mantra already understood by individual open source developers, and it will be easily learned by self-interested corporations.

Thus, we see that the kinds of behaviors that are described as 'altruistic', and even derisively labelled 'communistic' by some [Mundie?], can emerge as a result of not-so-enlightened, self-interested, greedy motivations.

An Example of Indirect Investment

As an example of the above process, let us take a look at the role of chip manufacturers in the development of gcc, the GNU C compiler. [I will have to use a disguised example again, mostly because I am not aware of the true, detailed historical facts of the example I wanted to write about. Anyone with a better historical understanding, please write me.]

Let us imagine a new processor, the SuperProc, developed by Microelectronics Inc. This is a low-power, embedded processor that Microelectronics Inc. wants to sell to Detroit, so that, e.g. Cadillac might use it in their anti-lock braking system, engine control, and the air-conditioning system. Microelectronics Inc. is an expert in designing chips, and its business is selling chips, not selling software. To make the chip all the more appealing, it needs to arrange for a development environment, consisting of a compiler, an assembler, and a C library. It has several choices: develop this technology in-house, subcontract it to a compiler/development tool specialty shop, or modify the GNU gcc/binutils/glibc toolchain. The first option is inappropriate: after all, Microelectronics is a hardware, not a software company. The last two options are practical, and the choice is really determined by the cost, and the question of whether the proprietary toolchain might have better features than the GNU toolchain. Assuming that the GNU toolchain is chosen, then we see again that the development of free software has been funded by a corporation acting strictly in its own competitive interests.

The moral of this story is that the free software is developed indirectly, as a side-effect of developing and marketing the main product. Unlike a for-profit, pure-software house, Microelectronics does not have to allocate a budget for marketing and advertising its software toolchain. It does not need a sales force to get it into the hands of customers. If it picks the GNU route, it doesn't even have to track the number of copies or licenses, or pay any royalties. It can mostly skip support questions: except for serious bugs, it can refer needy customers to Cygnus Solutions (now a part of RedHat) for support. It has none of the overhead associated with a traditional pure-software company.

Imagine that some proprietary compiler/toolchain company had the idea to create a toolset for SuperProc. Without direct support from Microelectronics, it would have a very hard time making a business case. It's not the R&D costs, it's the marketing and sales costs that would eat away at the plan. By comparison, Microelectronics only needs to pay for the R&D, and thus can get a toolchain for a tiny fraction of the cost that it would take for a traditionally-structured software market to deliver the same. Again, we have an analogy to free trade. By removing the proprietary barrier, a more efficient market results.

It's also important to note that there is an indirect trickle-down effect at play as well. If Microelectronics were to hire Cygnus Solutions to develop the toolset, then some small fraction of the total investment would go into enhancing and improving the CPU-independent parts of the toolchain. Although Microelectronics is ostensibly investing in the SuperProc backend only, de facto the front end gains a bit as well. The front-end improvements are shared by all CPUs, and in particular, by PC users running on Intel chips. This benefit accrues to Intel users even though the investment is for a different platform entirely.

Altruistic Behavior by Governmental Entities

A more classically altruistic behavior can emerge from national, state and local governments. These institutions are discovering that they can save significant costs by collaboratively developing the types of software needed in governance, such as mapping, record keeping and processing, budgeting, and education-related software. There are some weak forces that can change this logic. There may be a perceived benefit in 'security through obscurity': software that keeps sensitive public records is considered more cracker-proof if the crackers can't examine the source. This is countered by a weak form of 'freedom of information act' type laws: governments may perceive a legal obligation to disclose and publish their software. The strongest counter-attack may well come from the private sector: for example, some ailing educational-software company might accuse a local government of trying to put it out of business by publishing competitive free software. The political reaction to such an accusation is very likely to be to the detriment of the free software in question: there is a certain obligation by governments not to directly harm or attack otherwise benevolent industries.

Other Ways That Corporations Support Free Software

There is another, important way in which corporations support Free Software. There are numerous indicators that a significant number of contributors to open source projects do so on company time, with immediate management 'looking the other way'. This may be because management values the employee and doesn't want to alienate them through 'petty' micro-management. Traditionally, employees have been encouraged to maintain a standing in professional societies and associations, and to present talks, and publish articles in papers and journals. Work on free software projects can be argued to be a modern equivalent thereof. It may be that the software in question has some marginal utility to the company, so the employee can generate simple 'excuses' for their behavior. It is also possible that while management is aware of the employee participation in open source projects, they are not aware of the extent (and might not be approving if they were). This need not occur at the lowest levels of management only: the origin of the IBM GNU/Linux mainframe port was through a skunk-works of which lower management was aware, but upper management was not.

Think of it this way: suppose you are the manager of some 50 technical employees, and a dozen sysadmins and support personnel. Some are working on product, others are setting up Linux firewalls, etc. But really, day to day, can you account for their time? That guy who has been struggling to set up an automated intrusion detection system on Linux for the last two months: how do you know he hasn't been sending large patches, enhancements and bug-fixes to LIDS, all done on company time? And if you did find out, would you reprimand him? Direct experience shows that this sort of process is going on all the time; what is not known is how large this effect is.

But perhaps the real point of such an example is that this kind of behavior isn't possible with proprietary software. The same employee may spend just as much time combing over newsgroups, documentation and the like, exchanging messages with peers, hunting for advice on configuration and the like. But just at the point where the employee finally becomes conversant, comfortable with the technology, proprietary software bars them from productive participation. They spend their time devising work-arounds and inventing clever hacks, when it might have been easier to just find and fix the bug. Open source projects and proprietary software both eventually 'grow' groups of strong, knowledgeable, committed users (after much time and invested energy). However, open source projects can 'harvest' contributions from their user groups that proprietary software vendors must of necessity leave 'fallow'.

Unfortunately, there is little data available that might show how much of GNU/Linux was developed on unpaid time, vs. during working hours (whether management approved or not). There is a wide range of opinions on this matter.

Open Standards: A History of Computing

Proprietary software will not cease to exist. In the case study above, there is a strong disincentive to make that particular web system open or free. If it was open, it would allow competing corporations to install and operate competing services. That is, the web system could have been built on a free operating system, using a free programming language, a free web server, free databases and a free application server infrastructure. All of those components are 'off-the-shelf'; the final combination was, however, quite complex, and required entire teams of programmers to develop. These programmer salaries were meant to be derived from the per-use fees that the system user would pay. The system itself would remain proprietary.

Note, however, that there is a gray area between the lowest levels and the highest levels of the system. In the layer between the web application server and the details of the application, there might be some generic programming interfaces. When these are first created, there is an incentive to keep them proprietary: they provide a competitive advantage, as anyone wishing to create a competing system would need to reinvent these generic services. But time does not stand still. As time progresses, proprietary systems that used to provide a competitive edge are eroded, and either become extinct or become open. To understand this, let's look at the history of computer technology from the marketing point of view.

In the 1950's, computers were sold without operating systems (much as many embedded application chips are today). It was up to the buyer to create the environment they needed. By the 1960's, this was no longer the case. Computer manufacturers were briefly able to tout: 'New! Improved! Now comes with Operating System!'. But this didn't last long; eventually, all computers were sold with operating systems, and there was no particular point in using advertising space to announce that it came with an operating system. The buyer already assumed that. The buyer was basing their purchasing decision on other factors. The bar had been raised.

This scenario repeats itself over and over. In the 1980's, Unix workstation vendors could and did compete on the basis of technology that we now take for granted: a windowing system (Sun, SGI, NeXT had NeWS, the rest the X Window System), networking capabilities (TCP/IP vs. IBM's SNA and other protocols), distributed file systems (Sun's NFS vs. IBM's SNA-based file system, 'Distributed Services (DS)' and later, the Andrew File System (AFS) vs. the Distributed File System (DFS)), distributed computing environments (IBM, HP & Apollo's DCE vs. (I forget the name) Sun's stuff), windowing toolkits (Sun's OpenLook vs. HP's Motif vs. the academic (Carnegie-Mellon) Andrew Toolkit), 3D systems (SGI's GL/IrisGL/OpenGL vs. IBM's & HP's PHIGS/PEX), and programming languages (C++ vs. NeXT's Objective-C). During the interval of time that these technologies were hotly disputed, there was a tremendous amount of advertising ink spilled and PR hot air vented on promoting one of these technologies over the other. Customers were acutely aware of the differences, and made buying decisions based on the perceived competitive advantages. However, as time passed, things settled down. These days, all Unix customers take for granted that a workstation comes with X11, TCP/IP, NFS, Motif and OpenGL. Advertisements no longer make a big deal out of this; in fact, advertisements and PR completely fail to mention these technologies. Purchasing decisions are based on other factors. The bar had been raised.

In short, as time progresses, the level of sophistication rises. A company cannot have a competitive advantage when all of its competitors can match all of its features, feature-for-feature. One advertises what the competition does not have. That's how one distinguishes oneself from competitors.

In the course of the competition, the competitors learned another lesson: "Open Standards". This was not an easy lesson. Sun's NeWS was considered by many to be a superior windowing system. Sun held on to it quite tightly: the licensing terms were restrictive (keeping Sun in control) and expensive. There were attempts to license it to large corporations (DEC, Microsoft), but only a few smaller, non-threatening corporations (as SGI was at the time) picked up on it. In some cases (IBM), Sun did not even make an offer to license. The restrictive terms and the lack of an offer drove away IBM, HP, DEC and all the other big players. As a result, the other vendors rallied together and were driven into the arms of MIT's X Window System. Today, X11 dominates and NeWS is extinct. On the obverse, Sun seemed not to value NFS: it was given to all the Unix vendors. By the time that IBM introduced Distributed Services, it was too late. DS had some technical advantages: it had client-side caching, for example, which NFS of that era did not. It also allowed the sharing of volumes with mainframes; no other Unix machines did this. But it was too late. NFS had already taken over. On the window-toolkit side, Sun kept OpenLook proprietary until it was too late. Motif had won.

SGI was particularly clever with GL. GL gave SGI a tremendous competitive advantage in the 3D graphics market. It was only when the other workstation vendors finally stopped bickering and started throwing their weight behind PHIGS, that SGI realized it was threatened. It acted quickly and decisively: it remolded GL into OpenGL and licensed it quite freely. OpenGL won, while PHIGS has become irrelevant. Of particular note was SGI's ability to 'raise the bar' even after it had opened OpenGL. While all other vendors were readying their first release of OpenGL, SGI rolled out new (defacto proprietary) features and enhancements to OpenGL. Sure, one could get OpenGL from IBM, but SGI's implementation had more stuff, more sophisticated stuff. And it was also moving on other fronts: SGI encouraged programmers to code to its higher-level, more sophisticated Performer and Inventor 3D object systems, instead of the low-level OpenGL. It had trapped its customers and fought hard to keep them trapped by raising the bar. Stuff that was below the bar was freely and openly shared. Stuff below the bar no longer provided a competitive advantage; on the contrary, by sharing the stuff below the bar, one could protect one's investment in the technology. The protection was protection from extinction. The Open Systems of the late-80's, early-90's did an excellent job of shutting out proprietary interlopers while generating billions in revenues.

This same phenomenon continues today, and carries over easily to Open Source/Free Software. Suppose that company D of the case study above had indeed developed a generic but proprietary set of functions in its code. It can derive a competitive advantage by keeping this interface proprietary, and then keep this advantage for years. But one day, an open, free implementation of a similar set of functions arises. It may not be anywhere near as good as Company D's implementation, but it does have a popular following. What should company D do? If it keeps its interfaces proprietary forever, it will wake up one day to find that maintenance and upkeep costs are an anchor chain around its feet. The interfaces no longer provide a competitive advantage; rather, they have become a cost center. Company D's only rational decision is to wait until the last minute, and then liberate its proprietary technology. It might not need to make it completely free, but it does have to make it cheap enough that it will crush any competing implementations. These days, with the rise of the GPL, it may well mean that Company D is best off just GPL'ing its work, since anything else will drive away the future adoption of its technology. If company D is successful in opening the stuff below the bar, then it will have protected its investment. The 'opening', 'liberating', or 'freeing' of technology is nothing new in the computer industry. It's a theme that has been playing out for five decades. It is not about to stop.

Monopoly Forces

It is important to understand that the "Open Standards" phenomenon of the late 80's/early 90's happened in a competitive environment. The same forces that drove "Open Standards" do not apply in a monopolistic environment. A good example of this is provided by IBM's VM and MVS mainframe operating systems.

IBM has enjoyed a monopoly position in mainframe operating systems for three decades. It has two mainframe operating system products: VM and MVS. They are very different in design and capabilities. MVS is the operating system that was developed explicitly for sale/licensing to its mainframe customers. VM was developed internally, in part as a research project, and eventually became widely deployed within IBM. It had two or three very powerful features that no other operating system has ever had and, for the most part, still doesn't have. First and foremost, it implements the concept of a 'Virtual Machine'. Every user gets their own copy of a virtual machine. In this machine, one can then boot any operating system that one desires: from the user's point of view, it looks like 'bare metal', and just as with real bare metal, one can do anything one pleases. Similar systems are available for PCs these days: for example, VMware allows you to boot and run both MS Windows and Linux on the same PC at the same time. VM had the interesting property that one could 'crash' a virtual machine without disturbing other users. The VMware analog would be having Windows crash without disrupting the Linux virtual machine. (VM is superior to VMware in that the mainframe hardware has specific support for VM, whereas Intel chips (and most RISC chips) do not. The hardware support makes VM much simpler and faster than VMware.) VM also had a built-in hardware debugger, and was small, fast and lightweight.

Eventually, the existence of VM became known to IBM customers, and it was not long before they begged, pleaded, wheedled and threatened IBM into selling it. IBM eventually begrudgingly complied. Customers loved it: it allowed a sysadmin to partition a machine into several parts: one part ran the stable production environment, while other partitions ran the latest, new-fangled, experimental stuff that was still being debugged. You could use the same machine for two things at once: one could try out new software without endangering the stability of the older software.

IBM did not really want to (and still does not really want to) sell VM. It wants to put all of its development resources and dollars into MVS. It doesn't really want to deal with the cost of customer support, the cost of sales, the cost of marketing for VM. It doesn't want to have to enhance VM just because customers demand it. It would rather have it go away. It can charge much higher prices for MVS while slowly adding VM-like features to MVS (e.g. LPAR partitions). It can make more money licensing MVS. It has no competitors that are driving it to innovate MVS, or to lower the price of MVS. It's stupid to compete with oneself. When it let the genie out of the bottle, it found itself in a stupid situation: VM was applying competitive forces on MVS.

What should IBM do? Because it enjoys a monopoly, it has no incentive to open up VM. There is no competitor that is offering anything as good as or better than VM. IBM's most rational move is to bury VM as best it can, and this is precisely the strategy that IBM is following. VM is now quite old, and hasn't been kept up. While still interesting to a large segment of customers, it is slowly withering.

Microsoft has never found itself in a VM/MVS situation. But it does enjoy a monopoly, and therefore feels no pressure to open its older technologies. The 'open standards' scenario cannot play out in the Microsoft world, because there is no competition that causes Microsoft to rethink its proprietary strategy. One could argue that, for example, Samba is providing a competitive pressure that should force Microsoft into opening up its file-server software. But two factors prevent this. Culturally, Microsoft has no experience in opening anything. Secondly, if Microsoft opened up its file server, it seems highly unlikely that it could save on development or support costs; nor would it be able to add new customers by opening it up. More likely, it would lose a small income stream, with nothing to show for it.

The 'Open Standards' history unfolded because of competitive pressures. The GPL'ing of software will also benefit from competitive pressures (see, for example, the database market). But in a monopoly environment, there is no incentive to open anything at all.

(footnote: MVS has been known under several different names during its product history, including OS/390 and OpenEdition, and currently, as z/OS. Name changes typically coincide with major feature/functional improvements to the operating system.)

Proprietary Software in Niche Markets

The above analysis indicates that another stronghold for proprietary software is in niche markets where there is one or only a few (near-)monopoly players. But in fact, niche markets are also open to attack from free software. The route of attack is again pressure from below.

Let us take as an example the market for web-server performance measurement tools. This market is currently dominated by fewer than a half-dozen vendors: Mercury Interactive, [get names of others]. There are relatively few customers: this is because few companies have the kind of web-server infrastructure that is complex enough that performance needs to be analyzed. This fact, coupled with the fact that creating measurement/stress software is hard, means that the vendors must charge high prices for their product. As of this writing, this is from $5K to $20K per seat. The use of web servers is expanding, and so with careful management, these companies could be quite profitable.

Let us now imagine the following scenario. A small but growing business is building a complex web infrastructure. It has reached the point where management decides that some performance analysis and tuning is called for. One person, possibly part-time, is assigned to the task. This person, possibly unfamiliar with the niche for such tools, begins scrounging. They may find a few free tools, and they may find one of the niche vendors. It's a part-time job, and 'not that big a deal', and so the decision is made to use one of the free tools. The proprietary tools may be considered to be 'overkill', or just maybe too expensive to merit purchase. But such projects have a way of getting out of control. The user may add one tiny feature to the free tool, and fold it back to the tool maintainer. Management asks for a few more reports, and a few more features get added. Before long, the user develops a certain amount of loyalty to their tool. Even though, in retrospect, it would have been cheaper to buy the expensive proprietary tool, it is now too late. The free tool has advanced, and it has a loyal following. This process, repeated over and over, leads to a progressively more and more sophisticated free tool.
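
For concreteness, the 'free tools' in this niche can start out very small. The following is a hypothetical sketch, in C, of the sort of throwaway measurement program our part-time engineer might begin with: it times a number of sequential HTTP GET requests and reports the average latency and request rate. It is an illustration only, not a description of any particular existing tool.

    /* loadprobe.c -- a hypothetical, minimal web-server measurement tool:
     * issue N sequential HTTP GETs and report average latency and req/sec.
     *
     * Build:  gcc -o loadprobe loadprobe.c
     * Usage:  ./loadprobe 192.168.1.10 80 /index.html 100
     *         (the host must be given as an IPv4 address)
     */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 5) {
            fprintf(stderr, "usage: %s ip-address port path count\n", argv[0]);
            return 1;
        }
        const char *host = argv[1], *path = argv[3];
        int port = atoi(argv[2]), count = atoi(argv[4]);

        char request[512], buf[4096];
        snprintf(request, sizeof(request),
                 "GET %s HTTP/1.0\r\nHost: %s\r\nConnection: close\r\n\r\n",
                 path, host);

        double total = 0.0;
        for (int i = 0; i < count; i++) {
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            if (inet_pton(AF_INET, host, &addr.sin_addr) != 1) {
                fprintf(stderr, "bad IPv4 address: %s\n", host);
                return 1;
            }

            struct timeval t0, t1;
            gettimeofday(&t0, NULL);

            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0 || connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
                perror("connect");
                return 1;
            }
            write(fd, request, strlen(request));
            while (read(fd, buf, sizeof(buf)) > 0)   /* drain the response */
                ;
            close(fd);

            gettimeofday(&t1, NULL);
            total += (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        }

        printf("%d requests, average latency %.1f ms, %.1f requests/sec\n",
               count, 1000.0 * total / count, count / total);
        return 0;
    }

Each of the obvious next steps -- concurrent connections, response validation, percentile reports, nicer output for management -- is exactly the kind of 'tiny feature' that gets added and folded back, and the tool ratchets upward from there.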

The point here is that at each stage of the game, more-or-less rational decisions were made based on corporate self-interest and profits, and yet a free software system resulted. This result is completely counter-intuitive if one believes that all software is developed only by companies in the software-product business. It makes no sense for one of the niche vendors to give away their product; nor is it likely that some startup will find the business case to develop and give away software to such a small niche. The absence of these latter two economic incentives does not deter the advance of free software. Free software enters indirectly.

There is a peripheral question that we should deal with: who is the lead maintainer of the free code? Presumably, the answer is that the heaviest user acts as the maintainer. It is not particularly costly or time-consuming to maintain a free project: services such as SourceForge [sourceforge] make it easy and cheap. The lead maintainer derives some advantage by getting an ever-improving product with little direct investment, and may even earn some marketing goodwill in recognition of its services. Provided such a tool becomes popular enough, it may even be possible to sustain a small consulting business whose focus is maintaining this tool. At this point, the full free-software dynamic kicks in and drives further advancement.

The 'end game' is also worth noting. The proprietary vendors face two futures. In one, they are driven out of business. In the other, they must continue to add features to their proprietary product to stay ahead, or migrate to a related, unfilled niche. In this way, free software drives competition and innovation: those who sit still get clobbered.

[This section may benefit from a re-write with a more compelling example from another segment, or from additional factual details about the history and status of the current free web-performance tools.]

Are There Any Jobs for Software Developers?

"When all software is free, how will programmers make money?" is an old but still common refrain heard on mailing lists and message boards. Microsoft's Craig Mundie reformulates this as a declarative: [...] the GNU General Public License (GPL) under which some open-source software is distributed "fundamentally undermines" the commercial software model [...]. This is a legitimate question that deserves an answer. We've implied an answer, but perhaps it is best to spell it out.

Proprietary software will not disappear. Proprietary software will simply have to be better than open source software. There's an old joke: 'Microsoft makes the worst software in the world. That's because anyone who makes software that is worse than Microsoft's can't stay in business.' This joke works quite well when refurbished with 'Open Source' in place of 'Microsoft'. A clever marketer would say that Microsoft has been 'raising the bar', has been 'innovating', and that Microsoft's competitors have been failing to 'innovate'. Of course, the very same can be said about Open Source/Free Software. The movement does advance, and if your software concern can't keep up, you're SOL. No doubt, many failing companies in the future will blame Free Software the way they used to blame Microsoft. Many software executives will come to passionately hate Free Software for this reason. But of course, this logic is flawed: the problem is not with innovation or the lack thereof, but with the fact that the mainstream commercial software marketplace is mature, and the cheapest product is 'good enough' for most users. Free software is slowly taking the crown of 'cheapest' away from Microsoft.

But we have two questions to answer: first, who will hire programmers? Second, which business plans will succeed? The answer to the first question will not be changed much by the growing acceptance of free software. The vast majority of programmers develop software that is not meant for direct sale. Since their employers don't derive revenues from direct software sales, open source does not pose a threat to their continued employment.

There are also many programmers involved in the development and sale of shrink-wrapped, retail consumer software. Their jobs will be threatened. But this is nothing new; Microsoft has been trying, and succeeding, at wiping out entire segments of the consumer retail shrink-wrap market. Compare the titles for sale today with those for sale a decade ago, and ask 'how many of these come from large, ongoing concerns focused purely or mostly on software development?' Their numbers, and their variety, have collapsed. [need data to support this]

The other major segment of direct software sales that employs programmers is the business software segment. These guys are not threatened, or shouldn't be threatened; they just have to stay on their toes. Ten years ago, there was good competition in the compiler market. But Microsoft won, Borland lost, and most of the rest of us use gcc. IBM is still trying to push its Tobey/Visual compiler, but it is not destined for market dominance. The others have found niches or have failed outright.

Five years ago, there was a busy market for web application development tools and application servers. Since then, this market has more or less consolidated, with the dominant technologies being Microsoft's ASP, Sun's JSP, and the open source PHP. The ASP/JSP/PHP battle will rage for a long time, because the first two have powerful backers, while the third has raw economic forces on its side. In the meantime, Microsoft is trying to change the rules with its .net strategy, much as SGI tried to change the rules with Performer/Inventor after opening OpenGL.

Today, there is a battle raging between Microsoft SQL Server, Oracle, IBM's DB2, and Postgres. Informix and Sybase were clobbered a while ago. The database programmers at Oracle might well feel a threat to their job security, but they currently see Microsoft, not Postgres, as the threat. Oracle is already reacting: it is shifting its focus toward non-database development and sales. DB2 holds, and will likely continue to hold, the high end of multi-terabyte databases. Postgres will probably gut the low-end and midrange business, leaving Microsoft to mount ever more vehement attacks on Free Software. It's going to get uglier.

Can the world economy today sustain thousands of database-internals programmers? No; no more than it could sustain thousands of compiler developers ten years ago, or thousands of web-server developers five years ago. Free software was not to blame for those earlier market consolidations. In the future, it will serve as a convenient scapegoat, but even if open source didn't exist, the 'raising of the bar' in software features would continue as it always has.

(Some readers of this section may be disappointed that I didn't answer the questions directly: that company A with business plan B will hire programmers with skill set C. I don't answer the question that way because there are literally thousands of answers, and it doesn't take much imagination to discover a few of them. The point I am trying to make is that Free Software poses no more of a threat to stable employment for programmers than any previous threat, be it Microsoft, or be it the hydraulic digger to steam-shovel engineers [The Innovator's Dilemma, Clayton M. Christensen].)

Conclusions

The claim I am attempting to make in this essay is that the growth and popularity of GPL'ed and, more broadly, Open Source systems does not at all imply that a well-positioned business with a clever business model can reap bushels of profits from this movement. It does not imply that Open Source will become a multi-billion or trillion dollar sector of the global economy. Quite the contrary: Open Source/Free Software has the effect of removing points of economic friction by circumventing the traps and nets that allow certain types of profit to be accumulated. It does this by creating 'valuable' intellectual property and placing it in a commons open to all (policed by the GPL). This intellectual property (the body of all open source code) can be exploited by anyone at very low cost. This low-cost software represents a 'savings' as compared to proprietary, high-cost software; a business can capture these savings, lower its costs, increase its profits, and pass the remainder on to the consumer. These savings provide a powerful economic incentive for businesses, large and small, to adopt Linux in favor of Microsoft Windows, in order to become more competitive in their own markets. In this way, open source can be, and is becoming, a powerful global economic force that will not be diverted.
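As a back-of-the-envelope illustration of the 'savings' argument, consider the following sketch. Every figure in it is a hypothetical assumption chosen only to show the shape of the calculation; none of it is market data.

    # Hypothetical cost comparison; all figures are illustrative assumptions.
    seats = 200                      # machines in a mid-sized firm
    proprietary_license = 500.0      # assumed per-seat license fee
    free_license = 0.0               # gratis software carries no license fee
    extra_support_per_seat = 150.0   # assumed added support/training cost of the free option

    proprietary_total = seats * proprietary_license                 # $100,000
    free_total = seats * (free_license + extra_support_per_seat)    # $30,000
    savings = proprietary_total - free_total                        # $70,000

    print("proprietary: $%.0f   free: $%.0f   savings: $%.0f"
          % (proprietary_total, free_total, savings))
    # The savings can be kept as profit or passed on to the firm's own customers.

The savings accrue to the adopting business, not to a software vendor, which is precisely why the benefit is distributed rather than concentrated.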

There are some important corollaries to this claim. Although Free Software may become a large contributor to global economic output, equaling or exceeding the size of all proprietary software put together, pure-play open source companies, such as Red Hat, are unlikely to become as profitable or as big as Microsoft. Without other business models or revenue streams, Red Hat fundamentally cannot trap income based on licenses the way that Microsoft can, because it is not the exclusive owner of the intellectual property embodied in Linux. No pure open source company will be able to do this: the economic benefit of open source is distributed, not concentrated.

A second corollary is that open source will not kill Microsoft, although it will cut into potential future revenues. The amount of damage is uncertain, but Microsoft is very strong, very shrewd, and involved in many ventures. The advent of the PC did not kill IBM mainframes; it restructured the flow of revenues and limited some upside potential. Microsoft is likely to have hundreds of billions of dollars in assets for decades to come. It just won't be a monopoly player in some of the markets that it would like to be in.

These raw economic forces will be powerful engines of change. The adoption of Open and Free Software will grow by orders of magnitude, and the vitality of its developer community will increase and expand for decades, if not longer. Open Source and Free Software will become the predominant, central feature of the post-industrial world and will reach into all facets of life.


Notes, TBD

myth -- free software is developed at university by kids freeloading on parents. In real life, programmers have to eat.

Who wrote much of the existing free software? Need to find some study that covers this.

explain differences between free and open source. Debate BSD vs. GPL. This difference is vital in economic and decision-making terms.


Acknowledgements

This essay is an outgrowth of a conversation with Granite Ventures Managing Director Thomas Furlong. It was presented as a lecture to students at the University of Texas Business School at the invitation of Professor Sirkka Jarvenpaa. The contents of this essay have benefited from discussions with Richard M. Stallman, Mitch Nelson, and Wouter Habraken. Allen Akin reminded me that the forces acting on consumers are not the same as those acting on producers, thus clarifying the true topic of the essay.

Footnotes

NeWS
Allen Akin points out: Technically, there were some serious shortcomings in NeWS (lack of pseudocolor support when framebuffer memory was still a significant cost issue; security/reliability problems in the PostScript interpreter; incompatibilities between Sun and Adobe PostScript; a questionable 3D acceleration strategy; etc.). But there were equally troubling problems with X11 at the time (lack of backwards compatibility with X10, lack of reasonable 2D acceleration interfaces as well as a lack of dumb frame-buffer interfaces, a complex and difficult-to-understand programming API, and a lack of applications). The engineers had plenty to fight over, as engineers are wont to do, but it was business decisions, not engineering decisions, that drove the adoption of X11 in favor of NeWS.


Linas Vepstas
February-June 2001
Copyright (c) 2001 Linas Vepstas
All rights reserved.