Archive for February, 2009
(a follow-up to “the dynamics of OSS adoption – 1”)
The most common process behind OSS adoption is called “diffusion”, and is usually modelled with a set of differential equations. The underlying idea is that the market is made of a set of interoperating agents, each deciding independently which technology to adopt at different moments; the model can usually handle multiple participants in a market and predict its overall evolution. A good example of a diffusion-based dynamic equilibrium is the web server market, when total server numbers are used. If we take the data from Netcraft and model each individual server type as a competitor, we get this kind of graph:
The result is consistent with a traditional Bass model explanation (data for Apache was added to that of the Google Web Server, which is Apache-based; bicubic smoothing was used to obtain the trend lines). Diffusion models tend to generate this kind of equilibrium line, with the market moving more or less consistently towards an equilibrium that changes only when one technology is substituted by another.
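The diffusion dynamics described above can be sketched numerically. Here is a minimal illustration of the Bass model; the parameter values p = 0.03 and q = 0.38 are typical textbook figures, not coefficients fitted to the Netcraft data:

```python
# A minimal sketch of the Bass diffusion model mentioned above.
# The parameters p (innovation) and q (imitation) are typical textbook
# values, NOT coefficients fitted to the Netcraft web server data.

def bass_adoption(p=0.03, q=0.38, steps=40):
    """Simulate the cumulative adoption fraction F(t) governed by
    dF/dt = (p + q*F) * (1 - F), with a simple Euler step of 1."""
    F = 0.0
    series = []
    for _ in range(steps):
        F = min(F + (p + q * F) * (1 - F), 1.0)
        series.append(F)
    return series

curve = bass_adoption()
# The resulting curve is the classic S-shape: slow start driven by p,
# rapid imitation-driven growth, then saturation near 1.
```

Plotting `curve` against time reproduces the characteristic S-shaped adoption line that the market-share graphs above converge to.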
The probability of choosing one technology over another depends on several factors; a very good model for such adoption is the UTAUT model (some pdf examples here and here), which was found capable of predicting 70% of the variance of adoption success (in other words, the parameters in the model largely explain whether you will adopt a technology or not).
The important point to remember: this is about *individual* adoption, neither mandated nor externally constrained. In this sense, we can use it to predict how a PC owner chooses her web browser, or how a small company may choose which web server to use.
The model uses four parameters: performance expectancy, effort expectancy, social influence, and facilitating conditions.
- performance expectancy: the degree to which a person believes that using a particular system would enhance his or her job performance, or the degree to which using an innovation is perceived as being better than using its precursor.
- effort expectancy: the degree to which a person believes that using a system would be free of effort, or the degree to which a system is perceived as relatively difficult to understand and use.
- social influence: the individual’s internalization of the reference group’s subjective culture, and specific interpersonal agreements that the individual has made with others, in specific social situations; or the degree to which use of an innovation is perceived to enhance one’s image or status in one’s social system.
- facilitating conditions: perceptions of internal and external constraints on behaviour, encompassing self-efficacy, resource facilitating conditions, and technology facilitating conditions; or objective factors in the environment that observers agree make an act easy to do, including the provision of computer support.
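To make the model concrete, here is a purely hypothetical sketch of how the four constructs might combine into an adoption-intention score; the weights below are invented for this example and are not the coefficients estimated in the original UTAUT studies:

```python
# A purely illustrative scoring sketch for the four UTAUT constructs.
# The weights below are invented for this example; they are NOT the
# coefficients estimated in the original UTAUT studies.

def adoption_intention(performance, effort, social, facilitating,
                       weights=(0.4, 0.2, 0.2, 0.2)):
    """Each construct is a score in [0, 1]; the result is a weighted
    sum that can be read as a relative intention to adopt."""
    constructs = (performance, effort, social, facilitating)
    return sum(w * c for w, c in zip(weights, constructs))

# A tool perceived as very useful (0.9) and easy (0.8), with moderate
# peer influence (0.5) and decent support infrastructure (0.7):
score = adoption_intention(0.9, 0.8, 0.5, 0.7)  # 0.76
```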
In the next post, I will present an example of these four parameters in the context of an OSS adoption.
Sometimes talking about Microsoft and open source software is difficult, because the company seems to have many heads, looking in different directions. At the Stanford Accel Symposium, Bob Muglia, president of Microsoft’s Server and Tools Business, was bold enough to say that “At some point, almost all our product(s) will have open source in (them)… If MySQL (or) Linux do a better job for you, of course you should use those products”. Of course, we all know that; even Steve Ballmer mentioned that “I agree that no single company can create all the hardware and software. Openness is central because it’s the foundation of choice”; a claim about which Matt Asay commented with some irony that the openness rhetoric is mainly directed towards competitors like Apple and its iTunes/iPod offering.
I would just like to point to one of the Comes vs. Microsoft exhibits (which are sometimes more interesting than your average John Grisham or Stephen King novel), where we can find such pearls of openness and freedom of choice:
From: Peter Wise
Sent: Monday, October 07, 2002 9:43 AM
To: Server Platform Leadership Team
Subject: CompHot Escalation Team Summary - Month of September 2002

CompHot Escalation Team Summary - Month of September 2002
Microsoft Confidential

Observations and Issues

* Linux infestations are being uncovered in many of our large accounts as part of the escalation engagements. People on the escalation team have gone into AXA, Ford, WalMart, the US Army, and other large enterprises, where they've helped block one Linux threat, only to have it pop up in other parts of the businesses. At General Electric alone, at least five major pilots have been identified, as well as a new "Center of Excellence for Linux" at GE Capitol.
“Infestation” is not exactly the word I would use to express the idea of “customer choice”, but you know how the software world is a battle zone. I am so relieved to see that they are now really perceiving open source as part of their ecosystem.
As a consultant, I am frequently asked “what makes open source better”: not only by adopters, but by companies and integrators that form a large network ecosystem, and that (up to now) had only proprietary software vendors as a source of software and technology. Many IT projects had to “integrate” and create workarounds for bugs in proprietary components, because no feedback on their status was available. Mary Jo Foley writes on the lack of feedback to beta testers from Microsoft:
“During a peak week in January we (the Windows dev team) were receiving one Send Feedback report every 15 seconds for an entire week, and to date we’ve received well over 500,000 of these reports.”
Microsoft has “fixes in the pipeline for nearly 2,000 bugs in Windows code (not in third party drivers or applications) that caused crashes or hangs.”
That’s great. Microsoft is getting a lot of feedback about Windows 7. What kind of feedback are testers getting from the team in return? Very little. I get lots of e-mail from testers asking me whether Microsoft has fixed specific bugs that have been reported on various comment boards and Web sites. I have no idea, and neither do they. (emphasis mine)
Open source, if well managed, is radically different; I had a conversation with a customer just a few minutes ago who asked for specifics on a bug encountered in Zimbra, and I answered simply by forwarding the link to the Zimbra dashboard:
Not to be outdone, Alfresco has a similar openness:
Or one of my favourite examples, OpenBravo. Transparency pays because it provides a direct handle on development and a feedback channel for the (eventual) network of partners or consultancies that “live off” an open source product. This kind of transparency is becoming more and more important in our IT landscape, because time constraints and visibility of development are becoming even more important than purely monetary considerations, and it allows adopters to plan for alternative solutions depending on the individual risks and effort estimates.
Matthew Aslett has a fantastic summary post that provides a sort of synthesis of some of the previous debates on what is an OSS business model, and how this model impacts the performance of a company; along with the usual sensible comments. There are a few points that I would like to make:
- It is probably true that a pure service-based company is less interesting for VCs looking for an equity investment (by service-based I mean “product specialists: companies that created, or maintain, a specific software project, and use a pure FLOSS license to distribute it. The main revenues are provided from services like training and consulting”, from the FLOSSMETRICS guide). Every service-based model of this kind is limited by the high percentage of non-repeatable work that must be done by humans, so the profit margins are lower than those of the average software industry or of other OSS models. On the other hand, unconstrained distribution (facilitated by the clear, unambiguous model and single license) in many cases compensates for this lower margin by increasing the effectiveness of marketing messages.
- Tarus Balog notes: “For those companies trying to make billions of dollars on software quickly… the only way to do that in today’s market is with the hybrid model where much of the revenue comes from closed software licenses.” That’s right: at the moment this seems the only possible road to a $1B company. What I am not convinced of is that this is in itself such a significant goal; after all, the importance of being “big” is related to the fact that bigger companies are capable of creating more complex solutions, or of servicing customers across the globe. But in OSS, complex solutions can be created by engineering several separate components, reducing the need for a larger entity creating things from scratch; and cooperation between companies in different geographical areas may provide a reasonable offering with a much smaller overhead (the bigger the company, the less is spent on real R&D and support). A smaller (but not small) company may still be able to provide excellent quality and stability, with a more efficient process that translates into more value-for-dollar for the customer.
- I believe that in the long term the market equilibrium will be based on a set of service-based companies (providing high specialization) and development consortia (providing core economies of scale). After all, there is a strong economic incentive to move development outside of individual companies and to reduce coding effort through reuse. Here is an example from the Nokia Maemo platform: in this slide from Erkko Anttila’s thesis (more data in this previous post) it is possible to see how development effort (and cost) was shifted from the beginning of the project to the end. The real value comes from being able to concentrate on differentiating, user-centered applications; those can still be developed in a closed way, if the company believes that this gives it greater value, but the infrastructure and the 80% of non-differentiating software expenditure can be delivered at a much lower price point if developed in a shared way.
- Development consortia (like the Eclipse consortium) can act as a liaison/clearing office for external contributions, simplifying the process of contributing for companies. The combination of visibility and clear contribution processes can help companies shift from being “shy participants” that prefer to have individual developers commit changes to projects (thus relieving the company of any liability, while still reaping the advantages of participation) to open contribution and championing.
There are many different mechanisms behind OSS adoption, and understanding the differences makes it easier to help companies use them efficiently; after all, word of mouth may be sufficient to gain visibility, but it may not be enough to guarantee adoption, let alone to convert that adoption into paid services.
In fact, monetization may require a large number of “adopters” to obtain a small percentage of “paid users”: in many domains, only 0.05% of adopters pay for services, a percentage that we call the “unconstrained monetization percentage”, or UMP, to make it sound more academic.
While it is true that the incremental cost for the OSS company of a new adopter is zero (or extremely small), a larger adopter base also increases the probability that the community or some competitor will start to address the same monetization path, thus further reducing the UMP. So, to take the example of MySQL, instead of buying services or training from Sun, an adopter may opt for a local consulting firm that effectively leverages the free availability of the code and ancillary material to create a competitive offering.
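The arithmetic behind the UMP is simple back-of-the-envelope math; in this sketch only the 0.05% figure comes from the text, while the adopter count is invented for illustration:

```python
# Back-of-the-envelope monetization arithmetic. The 0.05% UMP figure
# is the one quoted in the text; the adopter count is invented.

UMP = 0.0005  # "unconstrained monetization percentage" = 0.05%

def paying_customers(adopters, ump=UMP):
    """Expected number of paying customers for a given adopter base."""
    return adopters * ump

# Roughly a million adopters are needed for ~500 paying customers:
customers = paying_customers(1_000_000)  # ≈ 500
```

The point the numbers make: even a modest paying customer base presupposes a very large adopter population, which is why anything that shrinks the UMP (community substitutes, competing consultancies) matters so much.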
Of course, the business model adopted by the OSS firm also has a positive or negative effect on the growth in the number of adopters; this especially affects firms offering what we called “split OSS/commercial” or “open core” licensing, which are forced to constantly adapt the features of the OSS and commercial parts; as we wrote:
“The model has the intrinsic downside that the FLOSS product must be valuable to be attractive for the users, but must also be not complete enough to prevent competition with the commercial one. This balance is difficult to achieve and maintain over time; also, if the software is of large interest, developers may try to complete the missing functionality in a purely open source way, thus reducing the attractiveness of the commercial version. ”
In other words, if the OSS product is too good, few will be interested in buying the commercial part, while if the OSS product is useless, the number of “adopters” will be too low to increase the visibility of the product. This balance changes with time, and for this reason companies adopting this model need to constantly update their offering and periodically re-evaluate how to split the development effort between the paid and OSS branches.
As I wrote in the beginning, there are many different adoption processes in open source software; some of those mechanisms are:
- cluster propagation
- directed incentives
In the following posts I will try to provide some insight into each, and how to help an OSS company in leveraging the relevant process to help in both adoption and monetization.
Many analysts talk about the potential savings of using OSS. One of the more visible places to see these savings is “integrated reuse”, the leverage of OSS components to reduce development and maintenance costs. I will take some examples from an excellent thesis by Erkko Anttila, “Open Source Software and Impact on Competitiveness: Case Study”, from Helsinki University of Technology. Erkko interviewed many actors from Nokia and Apple about their adoption of OSS in the Maemo platform and in OS X, and measured the OSS contribution through the traditional (albeit not always accurate) COCOMO model. Here are some results:
“The total software stack includes 10.5 million lines of code (product and development tools), which is split into 85% coming directly from OSS, and 15% either modified or developed by Nokia. In source code lines the respective amounts are 8.9 million lines of OSS code and 1.6 million lines of Nokia-developed software. Out of the 15% created by Nokia, 50% are made available to the community as modifications to components or totally new components, leaving roughly 7.5% of the software stack closed. (…) Based on the COCOMO model we can estimate the value of the utilized OSS to be $228,000,000, including both product software and tools.”
“Based on the COCOMO model the total cost of internally developing the OSS included in the Darwin core and the used development tools would be $350,000,000.”
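For readers curious how such figures are derived, here is a minimal sketch of a basic COCOMO estimate; the a/b coefficients are the standard “organic mode” values, and the cost per person-month is my own assumption rather than a figure from the thesis, which is why the result does not reproduce the quoted estimates exactly:

```python
# A minimal sketch of a basic COCOMO estimate like the one used in the
# thesis. The a/b coefficients are the standard "organic mode" values;
# the cost per person-month is my own assumption, which is why the
# result does not reproduce the thesis figures exactly.

def cocomo_effort_pm(kloc, a=2.4, b=1.05):
    """Basic COCOMO, organic mode: effort in person-months
    for a project of `kloc` thousand lines of code."""
    return a * kloc ** b

def cocomo_cost(kloc, cost_per_person_month=15_000):
    """Dollar value of the effort, under an assumed loaded cost."""
    return cocomo_effort_pm(kloc) * cost_per_person_month

# Rough replacement value of the 8.9 million lines of reused OSS code
# (8,900 KLOC) from the Maemo example above:
value = cocomo_cost(8_900)  # on the order of hundreds of millions of $
```

Whatever coefficients and rates are plugged in, the superlinear exponent makes the point of the thesis clear: re-implementing millions of lines of infrastructure code in-house is a nine-figure proposition.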
This is not, however, the only advantage; as Ari Jaaksi of Nokia mentioned during one of his presentations: “No need to execute complex licensing negotiations; saving can be up to 6-12 months in real projects”. 6-12 months of totally non-productive waiting is not a bad saving, but when added to the developer time saved by reuse, we have estimated that for end-user products the total savings are between 12 and 18 months; and for consumer products (especially in IT), reducing time to market by one year means having a significant first-mover advantage. So, the next time someone wonders why LCD TVs from Sony, Sharp, and LG use Linux and other OSS components inside, tell them that it’s the only way to be competitive…
Welcome to our public blog, where we will try to provide a window on the research activities we carry out in the field of open source business models and OSS economics. Most of our work is oriented towards helping our customers in the evaluation and adoption of OSS (and sometimes helping companies offer OSS-based services), so the focus will be clearly oriented towards commercialization and business aspects, and less on technical aspects; I hope that you will enjoy our effort, and I invite anyone interested in this research area to transform this blog into a conversation and discussion on what is still a wide open research space.