Archive for April, 2009

Helping OSS adoption in public administrations: some resources

It was a busy and happy week, and among the many things I received several requests for information on how to facilitate the adoption of OSS by public administrations. After the significant interest of a few years ago, it seems that the strong focus on “digital citizenship” and the need to increase interoperability with other administrations is pushing OSS again (the simplification brought by the reduction in procurement hurdles also helps). I have worked in this area for some years, first in the SPIRIT project (open source for health care), then in the COSPA and OpenTTT projects, which were oriented towards facilitating OSS adoption. I will try to provide some links that may be useful for administrations looking into OSS:

  • Let’s start with requirements analysis. What is important, what is not, and how to prioritize things was one of the topics addressed in COSPA, and two excellent deliverables were produced (maybe a bit theoretical, but you can skip the boring parts): analysis of requirements for OS and ODS and prioritization of requirements (both PDF files).
  • As part of our guide in the FLOSSMETRICS project we have a list of best practices that may be useful; in general, the guide has some more material from various European projects. I would like to thank PJ from Groklaw, who hosted my work for discussion there, and the many groklawers who helped in improving it.
  • One of the best migration guides ever created, by the German Ministry of the Interior (KBSt), is available in English (PDF file). It covers many practical problems, server and desktop migrations, project planning, legal aspects (like changing contractual relations with vendors), evaluation of economics and efficiency aspects and much more. Unfortunately the 2.1 edition is still not available in English…
  • For something simpler, there is some guidance and an economic comparison from the Treasury Board of Canada;
  • and a very detailed desktop migration Redbook from IBM.
  • The European Open Source Observatory has a long and interesting list of case studies, both positive and negative (so the reader can get a balanced view).

And now for some additional comments, based on my personal experience:

  • A successful OSS migration or adoption is not only a technical problem, but a management and social problem as well. A significant improvement in success rates can be obtained simply by providing a short, one-hour “welcoming” session to help users understand the changes and the reasons behind them (as well as providing some information on OSS and its differences from proprietary software).
  • In most public administrations there are “experts” that provide most of the informal IT help; some of those users may feel threatened by the change of IT infrastructure, as it will remove their “skill advantage”. So, a simple and effective practice is to search for them and for passionate users and enlist them as “champions”. Those champions are offered the opportunity for further training and additional support, so they can continue in their role without disruption.
  • Perform a real cost analysis of the current, proprietary IT infrastructure: sometimes huge surprises are found, both in contractual aspects and in actual costs incurred that are “hidden” under other budget items (a minimal sketch of such an analysis follows this list).
  • If a migration requires a long adaptation time, make sure that the management remains the same for the entire duration, or that the new management understands and approves what was done. One of the saddest experiences is to see a migration stop halfway because the municipal coalition changes, and the new coalition has no understanding of what was planned and why (“no one remembers the reasons for the migration” was one of the phrases that I heard once).
  • Create an open discussion table among local administrations: sometimes you will find someone who is already using OSS and simply told no one. We had a local health agency that silently swapped MS Office with OpenOffice in the new PCs for hospital workers, and nobody noticed :-)
  • Have an appropriate legislative policy: information campaigns and mandatory adoption are the two most efficient approaches to foster OSS adoption, while subsidization has a negative welfare effect: “We show that a part from subsidization policies, which have been proved to harm social surplus, supporting OSS through mandatory adoption and information campaign may have positive welfare effects. When software adoption is affected by strong network effects, mandatory adoption and information campaign induce an increase in social surplus” (Comino, Manenti, “Free/Open Source vs Closed Source Software: Public Policies in the Software Market”). See also, in the TOSSAD conference proceedings, Gencer, Ozel, Schmidbauer, Tunalioglu, “Free & Open Source Software, Human Development and Public Policy Making: International Comparison”.
  • Check for adverse policy effects: in one of my case studies I found a large PA that was forced back to commercial software, because the state administration was subsidizing only the cost of proprietary software, while OSS was considered to be “out of procurement rules” and thus not paid for. This also has policy implications, and requires a careful choice of budget items by the adopting administration.
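
As a minimal sketch of the cost analysis mentioned above (all categories and figures are invented placeholders; a real analysis would of course use the administration’s own budget data), something like this can be used to surface the costs hidden outside the IT budget:

```python
# A minimal sketch of a "real cost analysis": sum the visible licence
# costs together with the costs hidden under other budget items.
# All categories and figures are invented placeholders, not real data.

current_infrastructure = {
    "licences (IT budget)": 120_000,
    "mandatory support contracts (IT budget)": 45_000,
    "forced hardware refresh (capital budget)": 60_000,      # hidden
    "vendor-specific training (HR budget)": 15_000,          # hidden
    "contractual exit penalties (legal budget)": 10_000,     # hidden
}

visible = sum(v for k, v in current_infrastructure.items() if "(IT budget)" in k)
total = sum(current_infrastructure.values())

print(f"visible IT cost : {visible:>10,} EUR/year")
print(f"real total cost : {total:>10,} EUR/year")
print(f"hidden fraction : {1 - visible / total:>10.0%}")
```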

We found that by presenting some “exemplar” OSS projects that can be used immediately, the exploration phase usually turns into a real adoption experiment. The tools that I use as an introduction are:

  • Document management: Alfresco. It is simple to install, easy to use and well documented, and can be introduced as a small departmental alternative to the “poor man’s repository”, that is, a shared drive on the network. Start with the file system interface, and show the document previews and the search functionalities (more complex activities, like workflow, can be demonstrated at a later stage). Nuxeo is also a worthy contender.
  • Groupware: my personal favorite, Zimbra, which can provide everything that Exchange does, and has a recently released standalone desktop client that really is a technical marvel. If you are still forced into Outlook you can use Funambol (another OSS gem), which with a desktop client can provide two-way synchronization with Outlook, exactly like Exchange.
  • Project management: a little-known project from Austria called OnePoint, which has a very well designed web and native interface for the traditional project management tasks.
  • Workstation management: among the many choices, if (as usually happens) the majority of the desktops are Windows-based, there is a long-standing German project called Opsi, which provides automatic OS installation, patch management, HW & SW inventory and much more.

Of course there are many other tools, but by presenting an initial, small subset it is usually possible to raise the PA’s interest in trying and testing out more. For some other software packages, you can check the software catalog that we provided as part of our FLOSSMETRICS guide. I will be happy to answer individual requests for software posted as comments to this article, or sent to me via Twitter (@cdaffara); if there is enough interest, I will prepare a follow-up post with more tools.


2 Comments

Sorry, not right. An answer to Raymond’s post on the GPL

I read with great interest the post by Eric Raymond on the GPL and efficiency, “The Economic Case Against the GPL”, because economic aspects of OSS are my current main research work (well, up to the end of FLOSSMETRICS, then I’ll find something new :-) ). The main argument is nicely summarized in the first lines: “Is open-source development a more efficient system of software production than the closed-source system? I think the answer is probably “yes”, and that it follows the GNU GPL is probably doing us more harm than good.” Easy, clear, and totally wrong.

The post clearly distinguishes the “ethical” aspects of the GPL from the interaction model that is enforced by the GPL redistribution clause; ESR briefly describes an ideal world where:

“If we live in a “Type A” universe where closed source is more efficient, markets will eventually punish people who take closed source code open. Markets will correspondingly reward people who take open source closed. In this kind of universe, open source is doomed; the GPL will be subverted or routed around by efficiency-seeking investors as surely as water flows downhill. If we live in a “Type B” universe where open source is more efficient, markets will eventually punish people who take open source code closed. Markets will correspondingly reward people who take closed source open. In such a universe closed source is its own punishment; open source will capture ever-larger swathes of industry as investors chase efficiency gains.”

So, Raymond concludes, the GPL is either unnecessary or, worse, anti-economical. The problem lies in the assumptions that the market is static, that the end equilibrium will always be optimal, that imbalances in the market are not relevant (only the end result is), and so on. I will start with the easy ones:

  • The market is NOT static. The fact that one production model is (or is not) more efficient is something that can be modelled easily, but it is not really relevant when all agents are able to change their own interaction model at will. Many researchers have demonstrated, for example, that in a simple, two-actor market (one OSS and one proprietary), even under the assumption that OSS is superior in every aspect, there are situations where the pre-existing network effect will actually be able to extinguish OSS as soon as there is sufficient pricing discretionality by the proprietary vendor (a toy simulation of this effect follows this list).
  • End equilibria in real-life markets are not always optimal: the existence of monopolies is the most visible example of this fact (and the fact that there is a company that has been found guilty of multiple abuses of monopoly power should make this clear).
  • The process is as important as the end result: you can become rich after a life of poverty (and receive all your money on your last day of life) or have a generally well-off life, constantly increasing and spending what you obtain. Which life do you prefer? So, among all the paths that lead to an OSS (in this case, a FLOSS) world, the one that enforces a constant increase of the FLOSS component is preferable to one that, hypothetically, will lead in the end to market domination.
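
To make the first point concrete, here is a toy simulation of the network-effect argument; it is a minimal sketch, not any specific published model, and every parameter is an invented placeholder. Users choose between an open product with higher intrinsic quality and a proprietary incumbent that starts with a large installed base and is free to cut its price:

```python
# Toy model: utility = intrinsic quality + network effect - price.
# The open product is better (Q_OPEN > Q_PROP) and costs nothing, yet
# the incumbent can survive by cutting its price just enough to stay
# even. All numbers are illustrative placeholders.

NETWORK_WEIGHT = 2.0          # strength of the network externality (assumed)
Q_OPEN, Q_PROP = 1.2, 1.0     # intrinsic qualities (assumed)
SWITCH_RATE = 0.1             # share of users re-evaluating each period

def utility(quality, share, price):
    return quality + NETWORK_WEIGHT * share - price

def simulate(periods=200, prop_share=0.95, price_cuts=True):
    price = 2.0  # incumbent's starting price
    for _ in range(periods):
        if price_cuts:
            # "pricing discretionality": undercut just enough to match
            # the open product's utility for the marginal user
            ceiling = (Q_PROP + NETWORK_WEIGHT * prop_share
                       - utility(Q_OPEN, 1 - prop_share, 0.0))
            price = max(0.0, min(price, ceiling))
        u_prop = utility(Q_PROP, prop_share, price)
        u_open = utility(Q_OPEN, 1 - prop_share, 0.0)
        if u_prop >= u_open:          # users drift to the better deal
            prop_share += SWITCH_RATE * (1 - prop_share)
        else:
            prop_share -= SWITCH_RATE * prop_share
    return prop_share

print("incumbent share, with price cuts:   ", round(simulate(price_cuts=True), 2))
print("incumbent share, without price cuts:", round(simulate(price_cuts=False), 2))
```

Even though the open product is strictly better, with price cuts the incumbent’s share converges towards 1, and without them towards 0: in this toy world the equilibrium depends on the installed base and on pricing freedom, not only on efficiency.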

In general, of all the aspects of OSS that are interesting (and there are many), I find the GPL family of licenses to be the brightest example of law engineering, and I believe that a substantial part of the success of OSS depends on it. Of course, there are other economic aspects that are relevant, and I agree that OSS is in general more efficient (as I wrote here, here and here). I disagree with both the premise and the conclusions, however, as I believe that the set of barriers created by the GPL is vital to create a sustainable market here and now, and not in a hypothetical future.

3 Comments

The procurement advantage, or a simple test for “purity”

There is no end in sight for the “open core” debate, or for that matter for the question of what role companies should have in the OSS marketplace. We recently witnessed the lively debate sparked by a post by James Dixon, which quickly prompted Tarus Balog to launch into another of his informed and passionate posts on open core and OSS. This is not the first (and will not be the last) of the public discussions on what an OSS vendor is, and I briefly entered the fray as well. I am quite sure that this discussion will continue for a long time, slowly lowering its loudness and fading into the background as OSS becomes more and more entrenched in our economy.

There is, however, a point that I would like to make about the distinction between “pure OSS” and “open core” licensing, a point that does not imply any kind of “ethical” or “purity” measure, but is just a consideration on economics. When we consider what OSS is and what advantage it brings to the market, it is important to consider that a commercial OSS transaction usually has two “concrete” partners: the seller (the OSS vendor) and the buyer, that is, the user. If we look at the OSS world we can see that in both the pure and the open core model the vendor obtains the cost reduction of shared R&D (which, as I wrote in the past, can provide significant advantages). But R&D is not the only advantage: the reality is that “pure” OSS has a great added advantage for the adopter, that is, the greatly reduced cost and effort of procurement.

With OSS the adopter can scale a single installation company-wide without a single call to the legal or procurement departments, and it can ask for support from the OSS vendor if needed, even after the roll-out has been performed. With open core, the adopter is not allowed to do the same thing, as the proprietary extensions are not under the same license as the open source part; so, if you want to extend your software to more servers, you are forced to ask the vendor, exactly as with proprietary software systems. This is, in fact, a much overlooked advantage of OSS, one that is especially suited to those “departmental” installations that would probably be prohibited if the legal or acquisition departments had to be asked for budget.

I believe that this advantage is significant and largely hidden. I started thinking about it while helping a local public administration in the adoption of an OSS-based electronic data capture system for clinical data, and discovered that for many authorities and companies procurement (selecting the product, tendering, tender evaluation, contracting, etc.) can introduce many months of delay, and substantially increase costs. For this reason, we recently introduced with our customers a sort of “quick test” for OSS purity:

The acquired component is “pure OSS” if (possibly after an initial payment) the customer is allowed to extend its adoption of the component inside and outside of its legal borders without the need for further negotiation with the vendor.

The reason for that “possibly after an initial payment” is that the vendor may decide to release the source code only to customers (something that is allowed by some licenses), and the “inside and outside of its legal borders” is a phrase that explicitly includes not only redistribution and usage within a single company, but also redistribution to external parties that may not be part of the same legal entity. This distinction may not be important for small companies, but may be vital, for example, for public authorities that need to redistribute a software solution to a large audience of participating public bodies (a recent example I found is a regional health care authority that is exploring an OSS solution to be distributed to hospitals, medical practitioners and private and public structures). Of course, this does not imply that the vendor is forced to offer services in the same way (services and software are in this sense quite distinct) or that the adopter should prefer “pure OSS” over “open core” (in fact, this is not an expression of preference for one form over the other). A toy encoding of the test follows.
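
Just to make the checklist nature of the test explicit, here is a hedged sketch that encodes it as a boolean function; the parameter names are my own framing of the test, not part of any formal definition:

```python
# Encode the "quick test" for OSS purity as a simple checklist.
# The three questions paraphrase the test stated in the text above.

def is_pure_oss(source_available_after_initial_payment: bool,
                extendable_inside_legal_borders: bool,
                extendable_outside_legal_borders: bool) -> bool:
    """True if the customer, possibly after an initial payment, can
    extend its adoption of the component inside and outside its legal
    borders without further negotiation with the vendor."""
    return (source_available_after_initial_payment
            and extendable_inside_legal_borders
            and extendable_outside_legal_borders)

# A typical open core product fails the test: each new server running
# the proprietary extensions requires a new negotiation with the vendor.
print(is_pure_oss(True, False, False))   # -> False
```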

We found this simple test to be useful especially for those new OSS adopters that are not overly interested in the intricacies of open source business models, and it makes for a good initial question to OSS vendors to understand the implications of acquiring a pure versus an open core solution.

4 Comments

A brief research summary

After two months and 24 posts, I would like to thank all the kind people that mentioned our FLOSSMETRICS and OpenTTT work, especially Matthew Aslett, Matt Asay, Tarus Balog, Pamela Jones and the many others with whom I had the pleasure of exchanging views. I received many invaluable suggestions, and one of the most common was to have a small “summary” of the posted research, as a landing page. So, here is a synthesis of the previous research posts:

2 Comments

Open source and certified systems

A recent white paper, published by the Election Technology Council (an industry trade association representing providers of over 90% of the voting systems used in the United States), analyses the potential role of open source software in voting systems, and concludes that “it is.. premature. Given the economic dynamics of the marketplace, state and federal governments should not adopt unfair competitive practices which show preferential treatment towards open source platforms over proprietary ones. Legislators who adopt policies that require open source products, or offer incentives to open source providers, will likely fall victim to a perception of instituting unfair market practices.” (where have I heard this before? Curious, sometimes, this déjà vu feeling…)

The white paper, however, does contain some concepts that I have found over and over: the result of mixing the “legal” perspective on OSS (the license under which the software is released) with the “technical” aspects (the collaborative development model), arriving at some false conclusions that are unfortunately shared by many others. For this reason, I would like to add my perspective on the issue of “certified” source code and OSS:

  • First of all, there is no causal relation between the license and the quality of the code or its certifiability. It is highly ironic that the e-voting companies are complaining that OSS may potentially not be tested enough for critical environments like voting, given the results of some testing on their own software systems: “the implementation of cryptographic protection is flawed..this key is hard-coded into the source code for the AV-TSx, which is poor security practice because, among other things, it means the same key is used in every such machine in the U.S … and can be found through Google. The result is that in any jurisdiction that uses the default keys rather than creating new ones, the digital signatures provide no protection at all.” “No use of high assurance development methods: The AccuBasic interpreter does not appear to have been written using high-assurance development methodologies. It seems to have been written according to ordinary commercial practices. … Clearly there are serious security flaws in current state of the AV-OS and AV-TSx software” (source: Security Analysis of the Diebold AccuBasic Interpreter, Wagner, Jefferson, Bishop). Of course, there are many other reports and news pieces on the general unreliability of the certified GEMS software, just to pick the most talked-about component. The fact is that assurance and certification are non-functional aspects unrelated to the license the software is released under, as certifications of software quality and adherence to high-integrity standards are based on design documents, adherence to development standards, testing procedures and much more, but not licensing.
  • I have already written about our research on open source quality from the software engineering point of view, and in general it can be observed that open source development models tend to show a higher improvement in quality within a specific time frame when compared to proprietary software systems, under specific circumstances (like a healthy contributor community).
  • It is possible to certify open source systems under the strictest certification rules, like the SABI “secret and below” certification, medical CCHIT, the FIPS encryption standard, Common Criteria Evaluation Assurance Level EAL4+ (and in one case, meeting or exceeding EAL5), civil engineering (where the product is used for the stability computations of EDF nuclear plant designs), avionics and ground-based high-integrity systems, like air traffic control and railway systems (we explored the procedures for achieving certified status for pre-existing open source code in the CALIBRE project). Thus, it is possible to meet and exceed the regulatory rules for a wide spectrum of environments with far more stringent specifications than the current e-voting environment.
  • It seems that the real problem lies in the potential for competition from OSS voting systems: “Legislators who adopt policies that require open source products, or offer incentives to open source providers, will likely fall victim to a perception of instituting unfair market practices. At worst, policy-makers may find themselves encouraging the use of products that do not exist and market conditions that cannot support competition.” The reality is that there is some open source voting software (the white paper even lists some), and the real threat is that governments may start funding those projects instead of buying proprietary combinations. This is where the vendors clearly show their underlying misunderstanding of how open source works: you can still sell your assembly of hardware and software (as with EAL, it is the combination of both that is certified, not the software in isolation) and continue the current business model. It is doubtful that the “open source community” (as mentioned in the paper) will ever certify the code, as it is a costly and substantial effort, exactly as no individual applied for EAL4+ certification of Linux (which requires a substantial amount of money).

The various vendors would probably do better if they started a collaborative effort on a minimum-denominator system to be used as a basis for their products, in a way similar to what the mobile phone companies did in the LiMo and Android projects, or through industry consortia like Eclipse. They could still introduce differentiating aspects in the hardware and upper-layer software, while reducing the costs of R&D and improving the transparency of a critical component of our modern democracies.

No Comments

MXM, patents and licenses: clarity is all it takes

Recently on the OSI mailing list Carlo Piana posted a proposed license for the reference implementation of the ISO/IEC 23006 MPEG eXtensible Middleware (MXM). The license is derived from the MPL with the removal of some of the patent conditions from the text of the original license, and clearly creates a legal boundary condition that grants patent rights only to those who compile it solely for internal purposes, without direct commercial exploitation. I tend to agree with Carlo’s comment: “My final conclusion is that if the BSD family is considered compliant, so shall be the MXM, as it does not condition the copyright grant to the obtaining of the patents, just as the BSD licenses don’t deal with them. And insofar an implementer is confident that the part of the code it uses if free from the patented area, or it decided to later challenge the patent in case an infringement litigation is threatened, the license works just fine.” (as a side note: I am completely and totally against software patents, and I am confident that Carlo Piana is absolutely against them as well).

Having worked in the Italian ISO JTC1 chapter, I also totally agree with one point: “the sad truth is that if we did not offer a patent-agnostic license we would have made all efforts to have an open source reference implementation moot.” Unfortunately, ISO still believes that patents are necessary to convince companies to participate in standards groups, despite the existence of standards groups that work very well without this policy (my belief is that the added value of standardization in terms of cost reductions is well worth the cost of participating in the creation of complex standards like MPEG, but this is for another post).

What I would like to make clear is that the real point is not whether the proposed MXM license is OSI-compliant: the important point is why you want it to be open source. Let’s consider the various alternatives:

  • The group believes that an open source implementation may receive external contributions, much like traditional open source projects, and thus reduce maintenance and extension effort. If this is the aim, then the probability of getting this kind of external support is quite low, as companies would avoid it (the license would not in any case allow commercial use with an associated patent license), and researchers working in the area would have been perfectly satisfied with any kind of academic or research-only license.
  • The group wants to increase the adoption of the standard, and the reference implementation should be used as a basis for further work to turn it into a commercial product. This falls in the same category as before: why should I look at the reference implementation, if it does not grant me any potential use? The group could have simply published the source code for the reference, and said “if you want to use it, you should pay us a license for the embedded patents”.
  • The group wants to have a “golden standard” to benchmark external implementations (for example, to check that the bitstreams are compliant). Again, there is no need for an open source license.

The reality is that there is no clear motivation for releasing this under an open source license, because the clear presence of patents on the implementation makes it risky, or non-free, to use for any commercial exploitation. Microsoft, for example, handled this much better: to avoid losing their right to enforce their patents, they paid or supported other companies to create patent-covered software and release it under an open source license. Since the “secondary” companies do not hold any patent, by releasing the code they are not lifting any threat from the original Microsoft IPR, and at the same time they use a perfectly acceptable OSI-approved license.

As the purpose of the group is twofold (increase adoption of the standard, make commercial users pay for the IPR licensing) I would propose a different alternative: since the real purpose is to get paid for the patents, or to be able to enforce them against commercial competitors, why not dual-license it under the strongest copyleft license available (at the moment, the AGPL)? This way, any competitor would be forced to be fully AGPL (and so any improvement would have to be shared, exchanging the lost licensing revenue for the maintenance cost reduction) or to pay for the license (turning everything into the traditional IPR licensing scheme).

I know, I know – this is wishful thinking. Carlo, I understand your difficult role…

2 Comments

Another hypocritical post: “Open Source After ‘Jacobsen v. Katzer’”

The reality is that I am unable to resist. Seeing a post containing idiotic comments on open source, masquerading as a serious article, makes me start giggling with “I have to write them something” (my coworkers are used to it; they sometimes comment with “another post is arriving” or something more humorous). The post of today is a nicely written essay by Jonathan Moskin, Howard Wettan and Adam Turkel on Law.com, with the title “Open Source After ‘Jacobsen v. Katzer’”, referring to a recent US Federal Circuit decision. The main point of the ruling is “…the Federal Circuit’s recognition that the terms in an open source license can create enforceable conditions to use of copyrighted materials”; that is, the fact that software licenses (in this case, the Artistic License) that limit redistribution are enforceable. Not only this, but the enforceability is also transferable: “because Jacobsen confirmed that a licensee can be liable for copyright infringement for violating the conditions of an open source license, the original copyright owner may now have standing to sue all downstream licensees for copyright infringement, even absent direct contractual privity”.

This is the starting point for a funny tirade, going from “Before Jacobsen v. Katzer, commercial software developers often avoided incorporating open source components in their offerings for fear of being stripped of ownership rights. Following Jacobsen, commercial software developers should be even more cautious” (the article headline on the Law.com front page) to “It is perhaps also the most feared for its requirement that any source code compiled with any GPL-licensed source code be publicly disclosed upon distribution — often referred to as infection.” (emphasis mine).

Infection??

And the closing points: “Before Jacobsen v. Katzer, commercial software developers already often avoided incorporating open source components in their offerings for fear of being stripped of ownership rights. While software development benefits from peer review and transparency of process facilitated by open source, the resulting licenses, by their terms, could require those using any open source code to disclose all associated source code and distribute incorporated works royalty-free. Following Jacobsen v. Katzer, commercial software developers should be even more cautious of incorporating any open source code in their offerings. Potentially far greater monetary remedies (not to mention continued availability of equitable relief) make this vehicle one train to board with caution.”

Let’s skip the fact that the law practitioners who wrote this jewel of law journalism are part of the firm White & Case, which represented Microsoft in the EU Commission’s first antitrust action; let’s skip the fact that terms like “infection” and the liberal use of “commercial” hide the same error presented in other pearls of legal wisdom already debated here; the reality is that the entire frame of reference is based on an assumption that I heard the first time from a lawyer working for a quite large firm: that since open source software is “free”, companies are entitled to do whatever they want with it.

Of course it’s a simplification; I know many lawyers and paralegals that are incredibly smart (Carlo Piana comes to mind), but to these people I propose the following gedankenexperiment: imagine that within the text of the linked article every mention of “open source” was magically replaced with “proprietary source code”. The Federal Circuit ruling would more or less stay unmodified, but the comments of the writers would assume quite hysterical properties. Because they would argue that proprietary software is extremely dangerous: if Microsoft (just as an example) found parts of its source code included inside another product, it would sue the hell out of the poor developer, who would be unable to use the “Cisco defence”: to claim that Open Source “crept into” its products and thus damages should be minimal. The reality is that the entire article is written with a focus that is non-differentiating: in this sense, there is no difference between OSS and proprietary code. Exactly as for proprietary software, taking open source code without respecting the license is not allowed (the RIAA would say that it is “stealing”, and that the company is a “pirate”).

So, dear customers of White & Case, stay away from open source at all costs – while we will continue to reap its benefits.

5 Comments

See you in Brussels: the European OpenClinica meeting

In a few days, on the 14th of April, I will be attending the first European OpenClinica meeting as a panelist, in the “regulatory considerations” panel. It will be a wonderful opportunity to meet the other OpenClinica users and developers, and in general to talk and share experiences. As I will stay there for the evening, I would love to invite all friends and open source enthusiasts who happen to be in Brussels that night for a chat and a Belgian beer.

For those that are not aware of OpenClinica: it is a shining example of open source software for health care; it is a Java-based server system that allows the creation of secure web forms for clinical data acquisition (and much more). The OpenClinica software platform supports clinical data submission, validation, and annotation; data filtering and extraction; study auditing; de-identification of Protected Health Information (PHI) and much more. It is distributed under the LGPL, and has some really nice features (like the design of forms using spreadsheets, which is extremely intuitive).

We have used it in several regional and national trials, and even trialed it as a mobile data acquisition platform.

If you can’t be in Brussels, but are interested in open source health care, check out OpenClinica.

2 Comments

Reliability of open source from a software engineering point of view

At the Philly ETE conference Michael Tiemann presented some interesting facts about open source quality, and in particular mentioned that open source software has an average defect density that is 50-150 times lower than proprietary software. As it stands, this statement is somewhat incorrect, and I would like to provide a small clarification of the context and the real values:

  • First of all, the average mentioned by Michael relates to a small number of projects, in particular the Linux kernel, the Apache web server (and later the entire LAMP stack), and a small number of additional, “famous” projects. For all of these projects, the reality is that the defect density is substantially lower than that of comparable proprietary products. A very good article on this is Paulson, Succi, Eberlein, “An Empirical Study of Open-Source and Closed-Source Software Products”, IEEE Transactions on Software Engineering, vol. 30, no. 4, April 2004, where such a comparison was performed. It was not the only study on the subject, but all point to more or less the same results.
  • Besides the software engineering community, companies working in the code defect identification industry have also published results, like Reasoning Inc.’s “A Quantitative Analysis of TCP/IP Implementations in Commercial Software and in the Linux Kernel”, and “How Open Source and Commercial Software Compare: Database Implementations in Commercial Software and in MySQL”. All results confirm the findings of the academic research: a much higher quality (in terms of defects per line of code) for the open source projects.
  • Additional research identified a common pattern: the initial quality of the source code is roughly the same for proprietary and open source, but the defect density decreases in a much faster way in open source. So, it is not that OSS coders are on average coding wonders, but that the process itself creates more opportunities for defect resolution. As Paulson et al. pointed out: “In terms of defects, our analysis finds that the changing rate or the functions modified as a percentage of the total functions is higher in open-source projects than in closed-source projects. This supports the hypothesis that defects may be found and fixed more quickly in open-source projects than in closed-source projects and may be an added benefit for using the open-source development model.” (emphasis mine). A toy model of this faster decay follows this list.
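
As a purely numerical illustration of that last point, here is a minimal sketch, assuming both code bases start at the same defect density and differ only in the monthly fix rate; both rates are invented placeholders, not measured values:

```python
# Same starting defect density, different fix rates: the quality gap
# between the two processes grows over time, which is how a large ratio
# can emerge from a roughly equal starting point.

INITIAL_DENSITY = 5.0   # defects per KLOC at release (assumed equal)
FIX_RATE_OSS = 0.15     # fraction of residual defects fixed per month (assumed)
FIX_RATE_PROP = 0.05    # (assumed)

def density(initial, fix_rate, months):
    # each month a fixed fraction of the remaining defects is fixed
    return initial * (1.0 - fix_rate) ** months

for months in (0, 12, 24, 36):
    oss = density(INITIAL_DENSITY, FIX_RATE_OSS, months)
    prop = density(INITIAL_DENSITY, FIX_RATE_PROP, months)
    print(f"month {months:2d}: OSS {oss:6.3f}  proprietary {prop:6.3f}  "
          f"ratio {prop / oss:5.1f}x")
```

With these (invented) rates, the ratio starts at 1x and reaches roughly 50x after three years, entirely from the difference in the fix rate.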

I have a personal opinion on why this happens, and it is really related to two different phenomena. The first aspect is code reuse: the general modularity and great reuse of components helps developers because, instead of recoding something (and introducing new bugs), reusing an already debugged component reduces the overall defect density. This aspect was also found by other research groups focusing on reuse; for example, in a work by Mohagheghi, Conradi, Killi and Schwarz called “An Empirical Study of Software Reuse vs. Defect-Density and Stability” (available here) we can find that reuse introduces a similar degree of improvement in the defect density and trouble report numbers of code:

[Figure: defect density of reused vs. non-reused code, from Mohagheghi et al.]

As can be observed from the graph, code originating from reuse has a significantly higher quality compared to traditional code, and the gap between the two grows with size (as expected from basic probabilistic models of defect generation and discovery; a sketch of such a model follows).
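
The kind of model I have in mind is very simple; in this hedged sketch every newly written line carries some defect probability, while a reused (already debugged) line carries a smaller residual one, so the expected defect gap grows with code size. The two probabilities are invented for illustration:

```python
# Per-line defect model: the absolute gap between "all new" code and
# code with a reused, debugged fraction grows linearly with size.

P_NEW = 0.02      # defect probability per newly written line (assumed)
P_REUSED = 0.004  # residual defect probability per reused line (assumed)

def expected_defects(total_lines, reused_fraction):
    new_lines = total_lines * (1.0 - reused_fraction)
    reused_lines = total_lines * reused_fraction
    return new_lines * P_NEW + reused_lines * P_REUSED

for kloc in (10, 100, 1000):
    lines = kloc * 1000
    all_new = expected_defects(lines, 0.0)
    half_reused = expected_defects(lines, 0.5)
    print(f"{kloc:5d} KLOC: all new {all_new:9.0f}  "
          f"50% reused {half_reused:9.0f}  gap {all_new - half_reused:9.0f}")
```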

The second aspect is the fact that bug data is public, which allows a “prioritization” and a better coordination of developers in triaging and, in general, fixing things. This explains why the faster improvement appears not only in code that is reused, but in newly generated code as well; the sum of the two effects explains the incredible difference in quality (50-150 times), higher than that achieved by any previous effort like formal methods, automated code generation and so on. And this quality differential can only grow with time, leading to a long-term push for proprietary vendors to include more and more open source code inside their own products, to reduce the growing effort of bug isolation and fixing.

7 Comments

Dissecting words for fun and profit, or how to be a few years too late

So, after finishing a substantial part of our work on FLOSSMETRICS yesterday, I believe that I deserve some fun. And I cannot ask for more than a new, flame-inducing post from a patent attorney, right here, that claims that open source will destroy the software industry, just waiting to be dissected and evaluated. He may be right, right? Actually, no; but as I have to rest somehow between my research duties with the Commission, I decided to prepare a response. After all, the writer is a fellow EE (electrical engineer), and so he will probably enjoy some response to his blog post.

Let’s start by stating that the idea that OSS will destroy the software industry is not new; after all, it is one of the top 5 myths from Navica, and while no one tried to say that in front of me, I am sure that it was quite common a few years ago. Along with the idea that open source software helps terrorists:

‘Now that foreign intelligence services and terrorists know that we plan to trust Linux to run some of our most advanced defense systems, we must expect them to deploy spies to infiltrate Linux. The risk is particularly acute since many Linux contributors are based in countries from which the U.S. would never purchase commercial defense software. Some Linux providers even outsource their development to China and Russia.’ (from Green Hills Software CEO, Dan O’Dowd).

So, let’s read and think about what Gene Quinn writes:

“It is difficult, if not completely impossible, to argue the fact that open source software solutions can reduce costs when compared with proprietary software solutions, so I can completely understand why companies and governments who are cash starved would at least consider making a switch, and who can fault them for actually making the switch.”

Nice beginning, quite common in debate strategy: first, concede something to the opponent. Then, use the opening to push something unrelated:

“The question I have is whether this is in the long term best interest of the computing/software industry. What is happening is that open source solutions are forcing down pricing and the race to zero is on.”

Here we take something that is acknowledged (that OSS solutions are reducing costs, thus creating pressure on pricing) and attach to it a second, logically unconnected claim: “the race to zero is on”. Who says that a reduction in pricing leads to a reduction to zero? No one with an economics background. The reality is that competition brings down prices, theoretically (in a perfectly competitive environment made of equal products) down to the marginal cost of production. Which is, of course, not zero, as any software company will happily tell you: the cost of producing copies of software is very small, but the cost of creating, supporting, maintaining and documenting software is not zero. This does not take into account the fact that some software companies enjoy profit margins that are unheard of elsewhere, which explains why there is such a rush by users to at least experiment with potentially cost-saving measures.

“as zero is approached, however, less and less money will be available to be made, proprietary software giants will long since gone belly-up and leading open source companies, such as Red Hat, will not be able to compete.”

Of course, since zero is not approached, the phrase is logically useless (what is the color of my boat? Any color you like, as I don’t own one). But let’s split it into parts anyway: of course, if zero were approached, software giants would go belly-up. But why would Red Hat not be able to compete? Compete with what? If all proprietary companies disappear, and only OSS companies remain, then the market actually increases, even with increasingly small unit revenues; it is the same effect that can be witnessed in some mobile data markets, where the reduction in the price of SMS brings an increase in the number of messages sent, resulting in an increase in revenues (a small worked example of this elasticity effect follows).
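
For the record, the SMS observation is just standard price elasticity; here is a hedged numerical sketch with an invented elasticity value and placeholder prices, showing that when demand is elastic enough, cutting the price increases total revenue:

```python
# Constant-elasticity demand: q = q0 * (p / p0) ** elasticity.
# With elasticity below -1 (elastic demand), revenue rises as the
# price falls. All figures are illustrative placeholders.

ELASTICITY = -1.8                         # assumed price elasticity for SMS
BASE_PRICE, BASE_QTY = 0.10, 1_000_000    # EUR per message, messages sent

def messages_sent(price):
    return BASE_QTY * (price / BASE_PRICE) ** ELASTICITY

for price in (0.10, 0.05, 0.02):
    q = messages_sent(price)
    print(f"price {price:.2f} EUR: {q:12,.0f} messages, "
          f"revenue {price * q:10,.0f} EUR")
```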

“It is quite possible that the open source movement will ultimately result in a collapse of the industry, and that would not be a good thing.”

Still following the hypothetical theory that software pricing will go to zero (which, as I said, is not grounded in reality), here the author takes the previous considerations and uses a logical trick: before, he said that the proprietary companies will disappear; here, he says that there will be a collapse of the industry (not of the “proprietary industry”). This way he conflates the two meanings of “the software industry” (which includes both the proprietary and the non-proprietary actors) and conveniently leaves out the non-proprietary part. Of course, this is still not grounded in anything logical. The conclusion is obvious: “that would not be a good thing”. This is another rhetorical form: by adding a “grounding” in something that is emotionally or ethically based, the author introduces an external negative perception in the reader, strengthening what is still a hypothesis.

And then, the avoidance trap:

“I am sure that many open source advocates who are reading this are already irate, and perhaps even yelling that this Quinn guy doesn’t know what he is talking about. I am used to it by now; I get it all the time. It is, after all, much easier to simply believe that someone you disagree with is clueless rather than question your own beliefs.”

This approach is so commonly used that it is now beginning to show its age: use the fact that someone may be irate at reading the article to dismiss all critics as clueless people unable to question their “beliefs”. The use of this word is another standard tactic: it subtly suggests that the personal position of an OSS adopter depends on illogical, faith-based assumptions; this, of course, would be difficult to defend in an academic environment, where we assume that researchers are not faith-based in their studies. So, this is an approach commonly used in online forums, blogs and other venues meant for a general audience.

“It is a mistake though to dismiss what I am saying here, or any of my other writings on computer software and open source.”

Of course, I am dismissing it for the content of what you write, not because of my “beliefs”; and I have not read anything else from you, so I am not dismissing what I have not read.

“The fact that I am a patent attorney undoubtedly makes many in the open source movement immediately think I simply don’t understand technology, and my writings that state computer software is not math have only caused mathematicians and computer scientists to believe I am a quack.”

This is totally unrelated to the previous arguments: who was talking about software patents anyway? We were talking about the role of OSS in terms of competition with the proprietary software market, and about the potential effects on revenues.

“Unlike most patent attorneys, I do get it and that is probably why my writings can be so offensive to the true believers. I am not only a patent attorney, but I am an electrical engineer who specializes in computer technologies, including software and business method technologies. I write software code and whether you agree with me or not, telling me I simply don’t understand is not intellectually compelling.”

Of course, being part of a “class of people” like EEs is in itself not a qualification in any way; any comment I have made up to now would be equally applicable independently of its author; claiming to “get it”, or implying that someone “doesn’t get it” because he works as a patent attorney, is silly, and here the author falls into the same fallacy. By the way, I know some patent attorneys that perfectly “get it”, along with others that believe that open source software is made by fairies in the forest. As I said, being a member of a class is in itself useless in deciding the truth of a statement.

“I do get it, and the reality is that open source software is taking us in a direction that should scare everyone.”

Here the author uses the membership fallacy discussed before, and turns it into an appeal to authority: “I do get it”. I am qualified, therefore I am saying the truth. And what I am saying is that OSS is dangerous, and the fact that no one else (apart from O’Dowd, who believes that Linux will be infiltrated by terrorists) perceives the problem is due to the fact that they are not looking with enough attention.

“Sun Microsystems is struggling, to say the least, and the reality is that they are always going to struggle because they are an open source company, which means that the only thing they can sell is service.”

Sun Microsystems has been struggling for a long time now (unfortunately; I have always loved their products). Personally I believe that the new CEO is doing quite a turnaround on a company that has languished for a long time in a shrinking but highly lucrative market, as SGI did in the past; but that is better left to financial analysts. In any case, their financial results were not that good even before the OSS turnaround imposed by Jonathan Schwartz, so there is no real link between the two parts of the sentence (on the contrary, the OSS part is growing nicely, while the large-scale enterprise server part is decreasing fast). It also introduces an additional error: the idea that being an OSS company means that you can sell only services. The author clearly has not read much on OSS business models, but he should not worry: I would be happy to send him some papers on the subject.

“Whenever you sell time, earning potential is limited. There are only so many hours in the day, and only so much you can charge by the hour. When you have a product that can be replicated, whether it be a device, a piece of proprietary software or whatever, you have the ability to leverage, which simply doesn’t exist when you are selling yourself by the hour.”

Of course: this is the reality of consulting. This, however, does not stop companies like IBM Global Services, Accenture and friends from living off consulting, simply by asking very high prices for a day of a specialized consultant. Or, you can find groups like the 451 Group or RedMonk that are more efficient and targeted towards specific markets.

“So there is a realistic ceiling on the revenue that can be earned by any open source company, and that ceiling is much lower than any proprietary software company.”

So, assuming that by-the-hour services are the only possible OSS business model, and that the price per hour cannot match that of large consulting firms, then there is a revenue ceiling that is lower than that of proprietary software companies. The fact that both parts of the phrase are unsupported by arguments makes the conclusion unproven.

“It is also an undeniable truth that the way many, if not most, service companies compete is by price. When service companies try and get you to switch over they will promise to provide the same or better service for a lower price.”

This should be a supporting argument for the claim that OSS companies charge a lower per-hour price than competing companies, and it uses Sun as an example. Of course, it continues to be an unsupported argument: the author has probably never paid an invoice for a Sun consultant, or he would have discovered that their pricing is in line with the rest of the market.

“The trouble with freeware is that there is no margin on free, and while open source solutions are not free, the race to asymptotically approach free is on, hence why I say the race to zero is in full swing.”

Now the author switches from OSS to “freeware”, to remind us that open source is, after all, free. Probably RMS would say at this point “free as in free speech, not free as in free beer”, but his ideas would probably be dismissed. The use of “free” here is meant to create the appearance of a logical connection between “freeware” and open source; of course, the author acknowledges that OSS is not free, but as part of the same “family” it is participating in the “asymptotically approach free… race to zero”. As stated before: in perfect competition the race is not to zero, but to the marginal cost; so using “freeware” is a way to imply that this cost is zero as well, when the reality is that it is not (though it is lower than writing everything from scratch, thanks to the reuse opportunity).

And then we move to something completely different (as Monty Python would say):

“Unfortunately, many in the patent legal community are engaging in the race to zero as well. For example, there are patent attorneys and patent agents who advertise online claiming to be able to draft and file a complete patent application for under $3,000. One of the most common ads running provides patent applications for $2,800, and I have seen some agents advertise prices as low as $1,400 for a relatively simple mechanical invention. The race to zero is in full swing with respect to patent services aimed at independent inventors and start-up companies. It is also being pushed by major companies who want large law firms to provide patent services for fees ranging from $3,500 to $7,000 per application. This is forcing many large patent law firms to simply not offer patent drafting and prosecution services any longer. There are major law firms that are seeking to outsource such work, hoping to still keep the client for litigation purposes and to negotiate business deals.”

Dear writer, this is called “competition”. And as before, it is not a “race to zero”, as you will never find an attorney doing this kind of service for free, with no strings attached; or if they do, they will probably go out of business, leaving the market.

“Does anyone really think that paying $1,400 for an allegedly complete patent application is a wise business decision? I can’t imagine that if you say that to yourself out loud it would sound like such a good idea.”

Well, IF the author can prove that application quality and price are correlated, then this becomes a decision based on economics principles (and depends on the hypothetical future value of the patent, measures of indirect value and so on). If the correlation is not strict, then any rational actor would simply seek the lowest possible price.

“Likewise, Fortune 500 companies that are pushing prices down and wanting to pay only $3,500 for a patent application can’t really expect to get much, if any, worthwhile protection. Do they? I suppose they do, but the reality is that they don’t. The reality is that when you are drafting a patent application you can ALWAYS make it better by spending more time. … But to think that you can force a patent attorney or agent to spend the same length of time working on a project whether you pay under $3,500, $7,000 or $10,000 is naïve. Everyone inherently knows this to be true, but somehow convinces themselves otherwise.”

So, Fortune 500 companies are managed by morons who don’t understand the value of spending more time. I suspect it is a lack of culture, or a lack of perception of value; both can be cured by promotion and dissemination of information. Still, this does not apply to open source.

“As companies continue to look for the low cost solution, quality is sacrificed.”

Ah! Here’s the connection! As with patent applications, software is supposed to show the same quality-price correlation…

“Now I full well realize that much of the open source software is better than proprietary software, and I know that it can be much cheaper to rely on open source solutions than to enter into a license agreement for proprietary software.”

…but I can’t say it out loud, thinks the author, or they will burn me alive. So, let’s change the subject again:

“But where is that going to lead us? Once mighty Sun Microsystems is hanging on for dear life, and is that who you want to be relying on to provide service for your customized open source solutions? What if Sun simply disappears?”

Can you trust a company like Sun, which by using OSS is destroying itself? Or are you thinking about using OSS, and taking the risk of becoming such a dying corpse yourself? So, let’s make sure that the poor moron who thinks that OSS can save money understands the risks, by bringing in another example: gyms!

“I remember years ago I joined a gym and purchased a yearly membership only to have the gym close less than 2 months later. A similar thing happened to my wife several years ago when she bought a membership to a fitness and well-being company who shall remain nameless. Eat better and get exercise counseling and support, what a deal! Of course, it was a deal only until the company filed for bankruptcy and left all its members high and dry. Luckily I put off joining myself otherwise we would have been out two memberships after less than 30 days.”

Of course, the parallel between gyms and software companies is not so strict, and it is not related to OSS at all; examples of companies failing abound in all sectors. At least with OSS you have the source code, and you can do something yourself.

“With once mighty companies falling left and right do you really want to bet the IT future of your company or organization on an industry whose business model is the race to zero?”

So, dear author: the race is not to zero, and yes, I would bet on open source, so that at least I am free to continue using your gym even after it has closed.

4 Comments