Android free, non-free, and generic FUD (updated)

Updated 26/9/2011: Some points of my article were evidently less clear than I hoped (my fault: my writing was less than perfect). So, as a clarification, I would like to point out a few things.

As a starting point, I am referring to the Android Open Source Project when talking about “Android”. A proprietary, binary firmware released by a phone vendor is definitely not free, and I assume that Richard Stallman knows it as well, and that in talking about Android he is referring to the open source project too. So, when some of my polite commenters (I am blessed with kind and nice people among my readers, it seems) mentioned that Android has proprietary pieces, I have to point out that in the AOSP Git there are no such proprietary pieces – even the imported kernel tree has no (optional) proprietary driver bits. If you want the Broadcom binary blob, or the Intel binary pieces, you have to download them externally. Also, as RMS points out in the article, it is possible to have a functioning phone using only the open parts; in fact, if you go and check what proprietary parts are usually needed in an Android build, the primary culprits are the WiFi drivers and ancillary components like the camera, video out, and accelerated graphics (like OpenVG) – nothing that stops you from creating a real phone out of it, albeit with lots of parts that require work. Lots of people are working on creating or porting fully open drivers – meaning that a fully open Android on all hardware devices is possible; it just requires work.

Second, when talking about “free” there is always uncertainty unless a definition is added. I am guilty of this as well, but here is my definition, which is by the way the same one used by RMS:

“Free software is a matter of liberty, not price. To understand the concept, you should think of “free” as in “free speech,” not as in “free beer.” Free software is a matter of the users’ freedom to run, copy, distribute, study, change and improve the software. More precisely, it means that the program’s users have the four essential freedoms:

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

A program is free software if users have all of these freedoms.” (Source: the Free Software Definition) Given this definition, I still have to object that under RMS’s own definition Android (as the AOSP, as released at this moment) IS free. It does not matter whether future versions are not (or will not be), as the definition does not talk about future versions but only about the ones that we have now; it does not cover whether the organization doing the development is an evil overlord or a public consortium, or anything else. It is a short list of 4 points; to be free you must satisfy them all, and to satisfy them you must have software released under a license that is recognized as free. If you read the article again, you will find that RMS is not addressing one or more of these points, but lots of external bits that are not related to the definition itself: the fact that binary Android may contain non-free parts (that happens with Linux as well, but is clearly not sufficient to say that Linux is non-free), the fact that it is not GNU/Linux (again, not relevant), the fact that it is not GPLv3 (as if only the latest version of the license would grant a “freer than free” status), the fact that software patents exist (which is a curse, but again not relevant), the fact that parts of the phone may be upgraded with non-free components that may listen to you (and again, not specific to Android). In essence, the entire article glides away from the points that may really be relevant to Android’s freedom, and draws a set of lines implying that Android is bad in some way. That is why the article is poorly written – because it ignores the real points, and would be equally applicable to other platforms as well – Symbian, Maemo, whatever.
If RMS wants to make a statement, then he should point out that creating free drivers is possible and provides advantages for the user and the manufacturer (something that Broadcom, Intel, and ATI already discovered); that a totally free alternative is possible with a coordinated effort; and that alternatives to services may be created and can be a really competitive factor (OpenStreetMap, Firefox Sync) if properly directed. All of this was missing. A better effort would be to add a page at the FSF site listing hardware for which drivers are unavailable or in a partial stage, and request assistance. Much better than going at another project (75% of which comes from other free projects, by the way) and shouting “fire!”.

Original:

I hate FUD (Fear, Uncertainty, Doubt) whether it is spread by proponents of proprietary software or by free software loyalists. I hate it because it uses half-truths, innuendo and emotional traps to prevent readers from forming their own opinion using rational means.

Having said that, I was already skeptical of the previous attempt of the FSF to declare that the GPLv2 is “dangerous” because it has no explicit reinstatement clause, piggybacking on posts by popular Android doomsayers that (wrongly) claimed that Android tablet vendors had “lost their rights to the Linux kernel”. Now, RMS has clearly aimed bigger guns at Android, which clearly irks his freedom-loving personality and is probably seen as a plague barely better than Windows. In doing so, he unfortunately reached the same level as his hated proprietary vendors, using a barrage of arguments that show little attention to reality.

Given the fact that RMS cares nothing for me (I still remember the disdain when I suggested that “Libre” may have been a better word than “Free”) and I am tired of being hated only by academics, I would like to dissect a few of the points raised by our beloved Richard.

“The version of Linux included in Android is not entirely free software, since it contains non-free “binary blobs” (just like Torvalds’ version of Linux), some of which are really used in some Android devices.” Ah, the heresy. Android uses binary blobs – but, behold! That’s because it uses Linux inside. So, really, this is an attack on Torvalds’ lax idea of freedom, which grudgingly allows some vendors to add binary components inside user-space drivers.

I would like to point out as a first problem the phrase “some of which are really used in some Android devices”. So, it’s not all of Android that is bad – and since those binary blobs are really part of Linux, it would have been much better aimed for RMS to claim that Linux is non-free. That, of course, would have raised the ire of quite a lot of developers and drawn some condescending smiles from the general population, limiting the desired effect. Big, bad Google with its privacy problems is a much better target. It is also clear that not all Android devices are non-free; something that is better rephrased as “basically all devices are non-free; some are free, but we don’t talk of them as it would ruin the effect”.

So, first count is: “Android is non-free” is true only as “Linux is non-free” is true. Not a good start.

“Android platforms use other non-free firmware, too, and non-free libraries. Aside from those, the source code of Android versions 1 and 2, as released by Google, is free software – but this code is insufficient to run the device. Some of the applications that generally come with Android are non-free, too.” RMS here mixes a generic and undefined “Android platform” with the real Android – AOSP, the Android Open Source Project, which has no non-free libraries that I was capable of finding. If RMS is talking about the binary versions that some vendors ship, well, that’s not Android; it is the superposition of Android plus other binary components. No different from Linux used to run Oracle, for example; or Linux plus any other proprietary piece.

I also contest the idea that the code is insufficient to run the device. MIUI and Cyanogen are but two of the source forks based entirely on the free code, and if you accept the lack of some functionality like the camera or video out you may use your device without any proprietary blob. Again, saying so would have ruined the effect; it would also point out that the Linux approach successfully convinced large companies like Broadcom or Intel to release fully free versions of their drivers.

“Android is very different from the GNU/Linux operating system because it contains very little of GNU. Indeed, just about the only component in common between Android and GNU/Linux is Linux, the kernel. People who erroneously think “Linux” refers to the entire GNU/Linux combination get tied in knots by these facts, and make paradoxical statements such as “Android contains Linux, but it isn’t Linux”. If we avoid starting from the confusion, the situation is simple: Android contains Linux, but not GNU; thus, Android and GNU/Linux are mostly different.” Leave it to RMS to beat the “GNU/Linux” mantra to death. RMS mixes the popular idea of a Linux distribution with “Linux” redistribution in general. I find very little difference between Android and most embedded distros, or set-top box systems. Here RMS uses the opportunity to rehash the idea that those using Linux should really be grateful to the FSF more than to any other component maker; something that I find incorrect – not that I am not grateful to RMS and the FSF for their important contributions (the GPL on top), but because it is disingenuous to all the other contributors, like Red Hat, Cygnus, Xorg, Sun, and the countless other groups that have a percentage of code comparable to GCC, libc, the GNU utils and the other contributions by the FSF.

By downplaying what others have done, RMS downplays other free software people and efforts – only because they are under the banner of open source, or not in line with his views. But this is only the entrée, preparing for the real point:

“If the authors of Linux allowed its use under GPL version 3, then that code could be combined with Apache-licensed code, and the combination could be released under GPL version 3. But Linux has not been released that way.” Ahh, here comes the culprit. Bad, bad Torvalds! You decided not to trust us with a “GPL2 and later”, because you may not like what we write in the next license, and so one of the most successful pieces of free/open code is not in line with our current view. Bad boy! This should be seen as a continuation of the first FSF post, maybe – “if it were GPL3 it would be spotless and beautiful”. Note the totally unrelated intermission; there is really no logical connection with the binary blobs mentioned as a reason for being “non-free” (where the non-free parts are not really part of Android but bolt-ons, and the real non-free part is Linux, for its tolerance of binary blobs in user space). This lack of logical flow indicates that this was the real reason for the article, and why I call it FUD. But stating it directly would have been much less effective.

“Google has said it will never publish the source code of Android 3.0 (aside from Linux), even though executables have been released to the public. Android 3.1 source code is also being withheld. Thus, Android 3, apart from Linux, is non-free software, pure and simple.” Actually, Google said that they plan to release the ASL-licensed part of it: “We’re committed to providing Android as an open platform across many device types and will publish the source as soon as it’s ready.” So the wording is not correct (unless they have a different statement from the one that I heard recently from Chris DiBona of Google in one of his public appearances). In fact, it is also not correct for the GPL parts, which were published in the AOSP Git as early as January 2011.

“The non-release of two versions’ source code raises concern that Google might intend to turn Android proprietary permanently; that the release of some Android versions as free software may have been a temporary ploy to get community assistance in improving a proprietary software product. Let us hope that does not happen.” Read: “Google is probably driven by Darth Vader and executing a Death Star-like ploy to destroy the rebels”. RMS is forgetting that, while it is true that Android may turn proprietary – exactly like any software project for which one entity holds all the copyrights – the previous versions remain free; something that Nokia should have learned when they released the Symbian code under the EPL, trying later to rewrap it under a proprietary license (good thing that I kept a copy of the originals). This means that someone (several someones, actually) will go and fork it. Good riddance to Google – they would never be able to stay afloat without the constant flow of patches from external projects, which constitute 75% of the Android source code.

“In any case, most of the source code of some versions of Android has been released as free software. Does that mean that products using those Android versions respect users’ freedom? No, for several reasons. First of all, most of them contain non-free Google applications for talking to services such as YouTube and Google Maps. These are officially not part of Android, but that doesn’t make the product OK. There are also non-free libraries; whether they are part of Android is a moot point. What matters is that various functionalities need them.” So, let me be clear here: RMS acknowledges that Android, per se, is released as free software. But he changes the definition of what Android is, artificially extending it to reach the non-free parts, so that he can show that all of it is non-free. Well, I use a free software ROM (MIUI) that is free and beautiful; I have decided to add the non-free Google apps not because I am forced to, but because I decided that I can trade freedom for functionality in this specific case. If you don’t want to, there are replacements – or you can decide you don’t want Google Maps and walk instead, or use a paper map. I can decide – thus I am free.

“Replicant, a free version of Android that supports just a few phone models, has replaced many of these libraries, and you can do without the non-free apps. But there are other problems.” So, actually, it is possible to run a totally free Android – but you are still not in the clear. Why? Oh, why?

“Some device models are designed to stop users from installing and using modified software. In that situation, the executables are not free even if they were made from sources that are free and available to you. However, some Android devices can be “rooted” so users can install different software.” Ahh, here it is again! If Linux (and, thus, Android) were GPLv3 this could not have been done! Good thing that RMS recognizes that only some vendors are doing so (Samsung happily allows custom ROMs, like the one I am using – and several others do as well).

“Important firmware or drivers are generally proprietary also. These handle the phone network radio, Wi-Fi, bluetooth, GPS, 3D graphics, the camera, the speaker, and in some cases the microphone too. On some models, a few of these drivers are free, and there are some that you can do without – but you can’t do without the microphone or the phone network radio.” Same point as before – proprietary drivers in Linux. Please, if this is all you have, go back and write “Is Linux really free software?” And again, if this is a problem, boycott vendors that provide no source for their drivers.

“On most Android phones, this firmware has so much control that it could turn the product into a listening device. On some, it controls the microphone. On some, it can take full control of the main computer, through shared memory, and can thus override or replace whatever free software you have installed. With some models it is possible to exercise remote control of this firmware, and thus of the phone’s computer, through the phone radio network.” The GSM part of modern cell phones is independent from the main phone controls, and is usually connected through a separate bus. This is due to the certification process for being allowed to connect to GSM networks, which makes it very difficult to be certified if the code is modifiable by the user. So, everyone masks this behind a binary part for the RIL (Radio Interface Layer). Some vendors have a purely binary RIL; others publish the source code. So, dear RMS, instead of banging against the fact that a binary RIL is possible (and it is possible even under the GPLv3), go and praise those that publish it.

“Putting these points together, we can tolerate non-free phone network firmware provided new versions of it won’t be loaded, it can’t take control of the main computer, and it can only communicate when and as the free operating system chooses to let it communicate. In other words, it has to be equivalent to circuitry, and that circuitry must not be malicious. There is no obstacle to building an Android phone which has these characteristics, but we don’t know of any.” The point is not Android, but any Linux phone (actually, any phone in general, since all of them have upgradeable radio firmware). Go and claim that we should not be using mobile phones at all. Again, blaming Android for this makes for a better target.

“Software patents could force elimination of features from Android, or even make it unavailable.” Go there! Claim that Android is a patent target, and conveniently ignore the Microsoft patent threats against Linux, and the many patent attacks on free software that companies like Red Hat are defending against. Just don’t point to Android as the single culprit.

“However, the patent attacks, and Google’s responses, are not directly relevant to the topic of this article: how Android products approach an ethically acceptable system of distribution and how they fall short. This issue merits the attention of the press too.” So, why write it? Because it is like a cherry on top – it finishes the dish.

“Android is a major step towards an ethical, user-controlled, free-software portable phone, but there is a long way to go.” Don’t be too harsh, or people may think that you have an agenda. So, after badmouthing Android, say something nice – like the fact that it can redeem itself, if it moves to the GPLv3.

Article summary: “Android is non-free” (actually Linux is, but I can’t say it), “it is driven by greedy Gollums” (maybe), “Android phones may spy on you” (like all modern phones), “it may be destroyed by patents” (like Linux) – and in general, if you switch to the GPLv3, all is forgiven.

Look: there are many negative points in Android, like the fact that having it as a separately managed project under an Eclipse-like consortium would be much better (I wrote my thoughts on it here), or the fact that the Honeycomb code is still not released, or that governance is centrally held by Google. This is however not a good reason for using Android as a scapegoat, only because it is widely used and successful. This is FUD – and it only helps those that despise free software.

(Disclaimer: I don’t care what Google thinks, I have no interest in Google’s financial performance; my only points of contact are having an Android phone and a passion for free/open/libre source.)


“FOSS isn’t always the answer”. And you asked the wrong question.

There is an interesting post by James Turner on O’Reilly Radar that starts with the charming title of “FOSS isn’t always the answer” and ends up with, among other interesting comments, something like “But [the FLOSS proponents] need to accept the ground rules that most of us live in a capitalist society”. At that point, I was snickering, torn between laughing it off and responding – and of course, my desire to find good excuses for not working today finally won me over.

from the always, always perfect Xkcd

Reading this post made me think of one of my first conferences, where an economics professor kindly told me “good work, kid. Now, economic theory tells us that this little, nice communist dream will fail in one or two years. In the meanwhile, enjoy your gift economy ideal”. I have been called a “communist” for quite some time, and still chuckle at the idea: that someone still thinks that FLOSS is inherently “anti-capitalistic”. This is something that curiously is commonly repeated by people working for a very large, Redmond-based company, that constantly presents slides like these:

[image: img_0224 – slide presenting the GPL as “anti-commercial”]

See? The GPL is, magically, “anti-commercial” (despite the fact that OSS provides cost reductions and efficiency increases worth at least €116B, 31% of the software and services market). And the author cites TCP/IP and NSA Linux as examples of projects that made no commercial impact… Let’s all revel for a moment in the idea that TCP/IP had no commercial impact – including the irony of doing it over a TCP/IP network like the Internet – and let’s continue.

Let’s comment on the individual points that Turner raises:

“No one uses a closed source compiler anymore, Eclipse is one of the leading IDEs for many languages, and Linux is a dominant player in embedded operating systems. All these cases succeeded because, largely, the software is secondary to the main business of the companies using it (the major exception being Linux vendors who contribute to the kernel, but they have a fairly unique business model.)” That’s an interesting comment, for more than one reason. First of all, because it fails to grasp one of the most important economic points of software: with the exception of software producers, software is always secondary – it is a supporting technology. You would consider electric power secondary as well, despite the fact that it is essential; software has a similar property – it is secondary and essential at the same time. The second interesting comment is related to Linux vendors: for some curious reason they were able to profit from something that is distributed for free, but Turner dismisses them as an “exception” because… they don’t fit his model of the market.

“Where FOSS breaks down pretty quickly is when the software is not a widely desired tool used by the developer community.” The underlying assumption is that open source is developed for free by developers, because that’s what they do normally, and they want to donate time to the community. This assumption is wrong. The majority of open source developers are paid to work on open source; the idea that they do it for free is a nice communist-like notion that is unfortunately totally distant from reality.

“The typical line of thought runs like this: Let’s say we’re talking about some truly boring, intricate, detail-laden piece of software, such as something to transmit dental billing records to insurers … So, if all software should be free and open source, who is going to write this code? One argument is that the dentist, or a group of dentists, should underwrite the production of the code. But dentistry, like most things in western society, tends to be a for-profit competitive enterprise. If everyone gets the benefit of the software (since it’s FOSS), but a smaller group pays for it, the rest of the dentists get a competitive advantage. So there is no incentive for a subset of the group to fund the effort.” There goes the main argument: since software is given for free, and someone else is getting advantages for free, free riding will quickly destroy any incentive. This is based on many wrong assumptions, the first of which is that the market is always capable of providing a good product that matches the needs of its users. This is easily shown to be wrong, as the examples of Sakai and Kuali demonstrate: both products were developed because the proprietary tools used by the initial group of universities were unable to meet the requirements, and the costs were so high that developing by reusing open source was a better alternative. And consider that Kuali is exactly the kind of software that Turner identifies as “non-sexy” – namely, a financial management system (if you want more examples of a medical nature, look at VistA, OpenClinica, or, to match Turner’s article, OpenDental). The reality is that some kinds of software are essential, most software is non-differentiating, and maintaining that software as open source has the potential to reduce costs and investment substantially (for the actual data, check this post).

“Another variant is to propose that the software will be developed and given away, and the developers will make their living by charging for support. Leaving alone the cynical idea that this would be a powerful incentive to write hard-to-use software, it also suffers from a couple of major problems. To begin with, software this complex might take a team of 10 people one or more years to produce. Unless they are independently wealthy, or already have a pipeline of supported projects, there’s no way they will be able to pay for food (and college!) while they create the initial product.” Great! Turner has now discovered one of the possible open source-based business models we classified in FLOSSMETRICS. Of course, he conveniently forgot that by reusing open source, this hapless group of starving developers can create their product for one tenth of the cost and time. So, the same property that Turner thinks can doom this poor group of misguided developers can actually provide them with the solution as well.

“And once they do, the source is free and available to everyone, including people who live in areas of the world with much lower costs (and standards) of living. What is going to stop someone in the developing world from stepping in and undercutting support prices? It strikes me as an almost automatic race to the bottom.” That’s called competition. Never heard of it? Despite the “race to the bottom” (which, in economic terms, is applicable only to commodities), service companies seem to do quite well nowadays.

“But I have spent most of my adult life writing proprietary software — most of it so specialized and complicated that no open source project would ever want to take it on — and I find the implication that the work I do is in some way cheap or degrading to be a direct insult.” Here we see two different concepts: the first, that open source is unable to reach the level of complexity of proprietary software. This is contradicted by some of the examples mentioned by Turner himself (unless he considers a compiler to be too simple), and by academic research: “The growing rate, or the number of functions added, was greater in the open source projects than in the closed source projects. This indicates that the open source approach may be able to provide more features over time than by using the closed source approach. In terms of defects, our analysis finds that the changing rate or the functions modified as a percentage of the total functions is higher in open source projects than in closed source projects. This supports the hypothesis that defects may be found and fixed more quickly in open source projects than in closed source projects, and may be an added benefit for using the open source development model.” (Paulson, Succi, Eberlein, “An Empirical Study of Open Source and Closed Source Software Products”).

More important is the second part of the phrase: “the implication that the work I do is in some way cheap .. is a direct insult”. There you are: unless I am paid directly for my software, I must consider it too cheap. This means only one thing: that Turner knows only one potential model – “I sell a box to you, and you pay for it” – and all the others are an “insult”. Well, Red Hat does it differently, and is approaching $1B in revenues; my brief contacts with RHT people never gave me the impression that they feel insulted. Nor does OpenNMS, where Tarus is proud of their openness. Or Eclipse (the ultimate communist dream! Friends and foes all together!)

“But they need to accept the ground rules that most of us live in a capitalist society, we have the right to raise and provide for a family, and that until we all wake up in a FOSS developer’s paradise, we have to live and work inside of that context. I’d love to hear how a proprietary-free software world could work.” Here we return to the assumption that open source is inherently non-capitalist, which is simply not true; the ending is also telling: “a proprietary-free software world” assumes that there should be only one solution. Exactly like Turner, I believe in freedom of choice – anyone is free (when not forced by their government to use a proprietary office suite or operating system, of course) to choose what is best. I would not, however, bet against FLOSS.


On WebM again: freedom, quality, patents

I have already presented my views on the relative patent risk of WebM, based on my preliminary analysis of the source code and some of the comments of Jason Garrett-Glaser (Dark Shikari), author of the famous (and probably unparalleled, from the quality point of view) x264 encoder. The recent Google announcement of its intention to drop patented H264 video support from Chrome and Chromium (with the implication that it will probably be dropped from other Google properties as well) raised substantial noise, starting with an Ars Technica analysis claiming that the decision is a step back for openness. There is an abundance of comments from many other observers, mostly revolving around five separate ideas: that WebM is inferior, and thus should not be promoted as an alternative; that WebM is a patent risk, given the many H264 patents that it may infringe; that WebM is not open enough; that H264 is not so encumbered as to be unusable with free software (or not so costly for end users); and that Google provides no protection against other potentially infringing patents. I will try, as much as possible, to provide some objective points to at least establish a more consistent baseline for discussion. This is not intended to say that WebM is sufficient for the success of the HTML5 video tag – I believe that Christian Kaiser, VP of technology at Netflix, wrote eloquently about the subject here.

Quality: quality seems to be one of the main criticisms of WebM, and my previous post has been used several times to demonstrate that it employs sub-par techniques (while my intention was to demonstrate that some design decisions were made to avoid existing patents – go figure). The relative roughness of most encoders and the limited time on the market of the open source implementation led many to believe that WebM is more in the league of Theora (that is, not very good) than in that of H264. The reality is that encoders are as important as the standard itself when evaluating quality, which of course means that comparing WebM with the very best encoder on the market (x264) would probably not give much of an indication about WebM itself. In fact, Moscow State University’s Graphics and Media Lab performed a very thorough evaluation of several encoders, and an interesting result is this:

[image: conclusion_overall – overall encoder comparison chart]

(source: http://compression.graphicon.ru/video/codec_comparison/h264_2010/#Video_Codecs) where it is evident that there are major variations even among same-technology encoders, like Elecard, MainConcept and x264. And WebM? Our Russian friends extended their analysis to it as well:

[Figure: rate-distortion comparison of VP8 (WebM), x264 and Xvid presets]

What this graph shows is relative quality, measured using a sensible metric (not PSNR, which rewards blurriness over lost detail), for encodings done with different presets of x264, WebM (here called VP8) and Xvid. It shows that WebM is slightly inferior to x264 – it requires longer encoding times to reach the quality of x264's "normal" settings – and that it already beats Xvid, one of the most widely used codecs, by a wide margin. Considering that most H264 players are limited to the "baseline" H264 profile, the end result is that – especially with the maturing of the command line tools and the emergence of third party encoders, like Sorenson – we can safely say that WebM is, or can be, on the same quality level as H264.
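The parenthetical about PSNR deserves a concrete illustration. The sketch below (pure NumPy, synthetic data, all values illustrative) builds a frame made of smooth content plus fine texture, then compares two degraded versions: one blurred (all detail lost) and one with added noise of matching energy (detail kept). PSNR scores them identically, which is exactly why metrics that treat blur and real information loss the same are a poor guide to perceived codec quality.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# A synthetic frame: smooth gradient plus fine texture (the "detail").
x = np.linspace(0, 1, 256)
base = np.tile(x, (256, 1))                 # smooth content
texture = 0.05 * np.sin(40 * np.pi * base)  # fine detail
frame = base + texture

blurred = base                              # blurring removed all detail
rng = np.random.default_rng(0)
noise = rng.normal(0, texture.std(), frame.shape)
noisy = frame + noise                       # detail kept, noise added

# Same MSE -> same PSNR, although one frame lost every bit of detail.
print(round(psnr(frame, blurred), 1), round(psnr(frame, noisy), 1))
```

Any metric built on per-pixel mean squared error has this blind spot, which is why serious codec comparisons avoid plain PSNR.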

WebM is a patent risk: I already wrote in my past article that most design decisions in the original On2 encoder and decoder were clearly made to avoid preexisting patents; curiously, most commenters used this to demonstrate that WebM is technically inferior, while highlighting the potential risk anyway. By going through the H264 "essential patent list", however, I found that in the US (which has the highest number of covered patents) there are 164 non-expired patents, of which 31 are specific to H264 advanced deblocking (not used in WebM), 34 relate to CABAC/CAVLC (not used in WebM), 16 cover the specific bytestream syntax (replaced by Matroska), and 45 are specific to AVC. The remaining ones are (on a cursory reading) not overlapping with WebM-specific technologies, at least as implemented in the libvpx library released by Google (there is no guarantee that patented technologies are not added to external, third party implementations). Of course there may be patent claims on Matroska, or on any other part of the encoding/decoding pair, but probably not from MPEG-LA.

WebM is not open enough: Dark Shikari commented, with some humor, on the poor state of the WebM standard: basically, the source code itself. This is not so unusual in the video coding world, where many pre-standards are basically described through their code implementations. If you follow the history of ISO MPEG standards for video coding you will find many submissions based on a few peer-reviewed articles, source code and a short Word document describing what it does; this is then replaced by well-written (well, most of the time) documents detailing every nook and cranny of the standard itself. No such thing is available for WebM, and this is certainly a difficulty; on the other hand (having been part, for a few years, of the Italian ISO JTC1 committee) I can certainly say that it is not such a big hurdle; many technical standards are implemented even before ratification and "structuring", and if the discussion forum is open there is certainly enough space for finding any contradictions or problems. On the other hand, the evolution of WebM is strictly in the hands of Google, and in this sense it is true that the standard is not "open" in the sense of having a third party entity that manages its evolution.

H264 is not so encumbered – and is free anyway: Ah, the beauty of people reading only the parts they like of licensing arrangements. H264 playback is free only for non-commercial use (whatever that is) of video that is web-distributed and freely accessible. Period. It is true that the licensing fees are not that high, but they are incompatible with free software, because the license is not transferable, because it depends on field of use, and because in general it cannot be sensibly applied to most licenses. The fact that x264 is GPL-licensed does not mean much: the author has simply decided to ignore any patent claim and implement whatever he likes (with incredibly good results, by the way). This does not mean that you can suddenly start using H264 without thinking about patents.

Google provides no protection against other potentially infringed patents: that's true. Terrible, isn't it? But if you go looking at the uber-powerful MPEG-LA that grants you a license for the essential H264 patents, you will find the following text: "Q: Are all AVC essential patents included? A: No assurance is or can be made that the License includes every essential patent. The purpose of the License is to offer a convenient licensing alternative to everyone on the same terms and to include as much essential intellectual property as possible for their convenience. Participation in the License is voluntary on the part of essential patent holders, however." So, if someone claims that you infringe their patent, pointing out that you licensed from MPEG-LA is not a defense. And, just to provide an example, in the Microsoft vs. Alcatel-Lucent case, MS had to fight for quite a long time to have the claim dismissed (after an initial $1.52B damages decision). In a previous effort to create an open video codec, Sun Microsystems similarly did not introduce a patent indemnification clause – in fact, one of the OMV presentations included this text: "While we are encouraged by our findings so far, the investigation continues and Sun and OMC cannot make any representations regarding encumbrances or the validity or invalidity of any patent claims or other intellectual property rights claims a third party may assert in connection with any OMC project or work product."

So, after all this text, I think there may be more complexity behind Google's decision to drop H264 than "we want to kill Apple", as some commenters seem to think. The bottom line: software patents are adding a degree of complexity to the ICT world that is becoming, in my humble opinion, damaging in too many ways – not only in terms of uncertainty, but as a great source of friction in the ability of companies and researchers to bring innovation to the market. Something that, curiously, patent promoters describe as their first motivation.


ChromeOS is *not* for consumers.

Finally, after many delays, Google has presented its second operating system after Android: ChromeOS. Actually, it is not that new – developers already had full access to the development source code, and I have already had the opportunity to write about it in the past; Hexxeh made quite a name for himself by offering a USB image bootable on more systems, and by providing a daily build service to help others try it at home. Google launched a parallel pilot program, delivering to many lucky US citizens an unbranded laptop (called Cr-48) preloaded with the latest build of ChromeOS; initial reports are, overall, not enthusiastic, due to problems with the Flash plugin and trackpad responsiveness to gestures; in general, many of the initial adopters are perplexed about the real value of the proposition. The explanation is simple: it's not for them.


The reality is that ChromeOS is a quite imaginative play designed to enter the enterprise market – it has nothing to do with consumers, or at most a limited impact there. Let's forget for a moment that the system has many, many shortcomings and little problems (like the fact that sometimes you are exposed to the internal file system, that the system is still not fully optimized, or that the hardware support is abysmal). Many observers have already commented on the device itself, like Joe Wilcox, Mary Jo Foley or Chris Dawson; what I would like to add is that Google is using the seed devices to collect end user experiences, to focus the remaining development effort on what in the end will be a different approach to enterprise computing – not to consumers. It is not about thin clients: the economics of such devices has always been difficult to justify, given the substantial expenditure in servers and infrastructure; just look at the latest refresh of the concept, VDI – despite the hotness of the field, actual deployments are still limited.

Web-based applications change these economics: the only things that need to be delivered to the client are the payload (the page, JS files, images, and so on), data persistence and identity (authentication, authorization and accounting). All three can be done in an extremely efficient way; the cost per user is one or two orders of magnitude smaller than with traditional thin client backends or VDI. It is true that not all apps are web applications, but I believe that Google is making a bet, based on the great uptake of modern web toolkits, JavaScript and metacompilers like GWT. For those apps that cannot be replaced, Citrix is providing a very nice implementation of their Receiver app – giving a full, uncompromised user experience directly in the browser.
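To make the order-of-magnitude claim concrete, here is a deliberately crude back-of-envelope in Python. Every figure is a hypothetical placeholder, not a measured cost; what matters is the structure of the comparison – dedicated VM slices, persistent images and brokering licenses on one side, versus stateless payload delivery plus persistence and identity on the other.

```python
# Back-of-envelope: backend cost per user per year (euros).
# ALL figures are illustrative placeholders -- substitute your own.
vdi = {
    "server_share": 300.0,   # dedicated VM slice on a virtualization host
    "storage": 80.0,         # persistent desktop image
    "licenses": 150.0,       # broker/connection licensing
}
web = {
    "server_share": 5.0,     # stateless page/JS/image serving scales widely
    "storage": 3.0,          # user data persistence
    "identity": 2.0,         # authentication/authorization/accounting
}

ratio = sum(vdi.values()) / sum(web.values())
print(f"VDI costs roughly {ratio:.0f}x more per user than web delivery")
```

With these placeholder numbers the gap sits between one and two orders of magnitude, which is the shape of the argument rather than a precise measurement.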

Let’s consider what advantages this approach brings to the enterprise:

  • Activation: you don’t need an engineer to deploy a ChromeOS machine. Actually, anyone can do it, without any complex deployment server, initial authentication or activation keys. It works wherever there is some form of connectivity, and as soon as activation is complete, your desktop environment is ready with all the links and apps already in place. That means: no need for large helpdesks (a limited support line is sufficient); no need to fiddle with apps or virtualized desktop layers; you can do it from a hotel room… wherever you are. Your machine stops working? You activate another.
  • Management: there is no machine management – all activities are based on the login identity, and machines are basically shells that provide the execution capabilities. This means that things like hardware and software inventories will no longer be necessary, along with patch deployment, app supervision, and all those nice enterprise platform management things that add quite a lot of money to the budgeted IT licensing costs.
  • Security: since no additional apps can be installed, it is much easier to check for compliance and security. You basically have to log every web transaction on your web apps – which is fairly easy. There is still one uncovered area (actually, not covered in any current commercial operating system…), information labelling, which I will mention later in the “still to do” list.

So, basically, ChromeOS tries to push a model of computation in which something like 90% of apps are web-based applications, using local resources for computation and the browser as the main interface, and the remaining 10% are delivered through bitmap remoting like Citrix (I bet it will not take much time to see VMware View as well). To fulfil this scenario Google still needs quite some work:

  • They need to find a way to bring ChromeOS to more machines. If the enterprise already has its own PCs, it will not throw them out of the window. The ideal thing would be to make it a bootable USB image, like we did for our own EveryDesk, or an embeddable image like SplashTop. The amount of wheel reinvention that comes with ChromeOS is actually appalling – come on, we did most of those things years ago.
  • Google has to substantially improve management of individual ChromeOS data and app instances. There must be a way for an enterprise to remotely control which apps can and cannot be installed, for example, and to preload a user with the internal links and data shared by all. At the moment there is nothing in this area, and I suspect it is better for them to develop something *before* initial enterprise enrolments. Come on, Google, you cannot count only on external developers to fill this gap.
  • The browser must implement multilevel security labels. That means that each app and web domain must have a cryptographically signed label claiming what “level” of security is implemented, and how information can flow in and out. For example, it must prevent secure information from the ERP application from being copied into Facebook, or securely partition different domains. A very good example of this was the Sun JDS trusted extensions, unfortunately defunct like JDS itself. This is actually fairly easy to implement in the browser, as it is the only application that can access external resources and copy and paste between them – and Chrome already uses sandboxing, which can serve as a basis for such label-based containment. This would give ChromeOS a substantial advantage, and would open up many additional markets in areas like finance, banking, law enforcement and government.
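As a sketch of what label-based containment could look like, the following toy mediator applies a Bell-LaPadula style rule to copy and paste between labelled domains. The labels, levels and function names are all hypothetical, and a real implementation would verify cryptographic signatures on the labels rather than trust a lookup table.

```python
# Toy model of browser-side multilevel-security copy/paste mediation.
# Labels and their ordering are hypothetical placeholders.
LEVELS = {"public": 0, "internal": 1, "secret": 2}

def may_paste(source_label: str, dest_label: str) -> bool:
    """Bell-LaPadula style rule: data may only flow to a destination
    at the same or a higher classification level ('no write down')."""
    return LEVELS[dest_label] >= LEVELS[source_label]

# Pasting public data into the ERP app is fine; pasting ERP data
# into a public site (say, a social network) is blocked.
print(may_paste("public", "secret"), may_paste("secret", "public"))
```

The browser is the natural enforcement point because, as noted above, it is the single component mediating all cross-domain data movement.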

So, after all, I think that Google is onto something, that this “something” needs work to mature before it can be brought to the market, and that the model it proposes is totally different from what we have had up to now. No one knows if it will be successful (remember the iPad? Its failure was all but assured by pundits worldwide…) but at least it’s not a boring new PC.


No, Microsoft, you still don’t get it.

There is a very nice article in Linux For You, with a long and detailed interview with Vijay Rajagopalan, principal architect in Microsoft’s interoperability team. It is long and interesting, polite and with some very good questions. The interesting thing (for me) is that the answers depict a Microsoft that is not very aware of what open source, in itself, is. In fact, one part is quite telling:

Q Don’t you think you should develop an open source business model to offer the tools in the first place?
“There are many basic development tools offered for free. Eclipse also follows the same model, which is also called an express edition. These tools are free, and come with basic functionality, which is good for many open source development start-ups. In fact, all the Azure tools from Microsoft are free. All you need is Visual Studio Express and to install Azure. If you are a .Net developer, everything is free in that model too. In addition, just like other offerings in the ecosystem, the professional model is aimed at big enterprises with large-scale client licensing and support.” (emphasis mine.)

The question is: is MS interested in an OSS business model? The answer: we already give things out for free. Well, we can probably thank Richard Stallman for his insistence on the word “free”, but the answer misses the mark substantially. OSS is not about getting something for free, and it never was (at least from the point of view of the researcher). OSS is about collaborative development; as evidenced in a recent post by Henrik Ingo, “The state of MySQL forks: co-operating without co-operating”, being open source allowed the creation of an ecosystem of companies that cooperate (while being more or less competitors); not only does this increase the viability of a product even as its main developer (in this case, Oracle) changes its plans, but it allows for the integration of features coming from outside the company – as Henrik wrote, “HandlerSocket is in my opinion the greatest MySQL innovation since the addition of InnoDB – both developed outside of MySQL”.

Microsoft still treats “free” as purely economic competition, while I see OSS as a way to allow far faster development and improvement of a product. And, at least, I have some academic results pointing out that a live and active OSS project actually does improve faster than comparable proprietary projects. That’s the difference: not price, which may be lower or not, as RedHat demonstrates; it is competition on value and speed of change.

Ah, by the way: SugarCRM, despite being a nice company with a nice CEO, is not 100% open source, since that by definition would mean that all code and all releases are under a 100% open source license, and this is not the case. As I mentioned before, I am not against open core or whatever model a company wants to use – especially if it works for them, as it does for SugarCRM. My observation is that we must be careful how we handle words, or those words start to lose their value as bearers of meaning.


“Best practices for open source” session at EU Internet of Services meeting

There is something I have mentioned many, many times: EU projects tend to talk about Open Source, but it is sometimes difficult for project managers to really grasp what OSS is, and how it can be used for real – not only during the project lifetime, but afterwards as well. For this reason, and with my thanks to the EU project officers in the Internet of Services group for the invitation, I have prepared a small guide on how to engage with open source projects, how to evaluate the best exploitation strategy, how to select a business model, and (more importantly) a simple and pragmatic approach to selecting an OSS license for a new project. The guide will be presented at the Internet of Services 2010 event; the collaboration meeting is in Brussels, tomorrow, and the open source part will start at 11:30. A detailed agenda is available here; for more information, the event webpage is here. Just after the end of the event, the draft of the guide for FP7 projects will be mirrored here as well.

See you in Brussels!


Estimating savings from OSS code reuse, or: where does the money come from?

We are approaching 100% OSS usage within other software – that is, nearly every software system contains some OSS code. Why? There is a perfectly sound reason, related to a long-standing tenet of software engineering: software takes time and money to build, and code needs to be maintained for a long time, adding further costs on top. In one of the most widely known articles in software engineering (“No silver bullet: essence and accidents of software engineering“), Frederick Brooks exposes some fundamental reasons behind the inherent difficulty of making software, especially large scale software systems. He also formulated what is often called the “no silver bullet” law:

There is no single development, in either technology or in management technique, that by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity.

Despite many attempts, and many technologies (better languages, OOP, formal methods, automatic programming and many others…) the law has remained true until now. In the same article, however, Brooks outlines some potential attacks on the inherent difficulty of making software:

  • buy, don’t build (that is, if possible don’t code at all)
  • requirement refining, rapid prototyping, incremental building
  • great designers

It is quite easy to draw a parallel with open source style development, which promotes the same ideas:

  • reuse components and source code from other projects
  • release early/release often (or allow anyone read access to CVS for making their own version)
  • meritocracy (small group of respected core developers, and many smaller contributors)

In the software engineering world, the reuse of code coming from the “external” world is commonly called COTS (Commercial Off The Shelf), and has been studied for many years. Boehm and others created a model for mixed development that can be graphically presented as:

[Figure: COTS-based product delivery cost model]
As can be seen in the image, there are costs related to the integration of COTS (in our case, OSS) within a newly developed product. These costs relate to the evaluation (and search) of OSS, “tailoring” (the adaptation of the code to the project needs), and the development of glue code (the layer of code between OSS modules, and between OSS and internally developed code).

I would like to present some results based on the COCOMO II model, adapted to a scenario where a varying percentage of code is either developed anew or reused from OSS. First of all, some assumptions:

  • The average company cost of a developer is fixed at 25€ per hour. This should be a reasonable approximation of European costs (in particular, costs in Mediterranean areas like Spain, France, Italy, Greece); we know it is considerably lower than other estimates (especially US ones), but this way we provide a “lower bound” for savings instead of an average.
  • The “tailoring” of code is performed on 15% of the OSS code; this percentage comes from several separate projects, with estimates ranging from 5% for mature projects with structured and well-documented interfaces to 20% for complex, deeply interlocked code like that found in embedded systems.
  • Tailoring cost is higher than traditional coding; for this reason, the COCOMO complexity index is increased to 6 compared to new-code development.
  • Volatility is based on our own cost estimation model and on data from the COTS literature (“Empirical observations on COTS software integration effort based on the initial COCOTS calibration database”, Abts C., Boehm B.W., Bailey Clark E.), and can be approximated with an average effort equivalent to 1.5 to 2.5 full-time person-years.

This is the result:

| Project size (lines of code) | % of OSS | Total cost (K€) | Savings | Duration (years) | Avg. staffing |
|---|---|---|---|---|---|
| 100,000 | 0 | 1,703 | 0% | 1.7 | 20.5 |
| 100,000 | 50 | 975 | 43% | 1.3 | 15.4 |
| 100,000 | 75 | 487 | 71% | 0.9 | 8.6 |
| 1,000,000 | 0 | 22,000 | 0% | 3.3 | 141.7 |
| 1,000,000 | 50 | 12,061 | 45% | 2.6 | 103.2 |
| 1,000,000 | 75 | 3,012 | 86% | 2.0 | 32.0 |
| 10,000,000 | 0 | 295,955 | 0% | 7.5 | 818.0 |
| 10,000,000 | 50 | 160,596 | 46% | 5.9 | 631.2 |
| 10,000,000 | 75 | 80,845 | 73% | 3.8 | 421.0 |

In the case of 10M lines of code, the estimated saving is more than 210M€, which is consistent with previous estimates of Nokia’s savings from reusing open source within Maemo. Even for the “small” 100,000-line project, savings are estimated at 1.2M€. Another interesting aspect relates to staffing and time: not only can the use of OSS substantially reduce development time, it also allows a substantial reduction in the staff needed for development. In the smallest example (100,000 lines of code, still substantial) the average staffing is reduced from more than 20 developers to slightly less than 9, bringing such a project within reach even of small companies; in my personal view this explains the exceptional take-up of OSS by new and innovative companies, which even before obtaining external capital (from VCs, for example) are capable of creating non-trivial products with very limited resources.
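For readers who want to reproduce the ballpark of the table, here is a minimal sketch of the kind of calculation involved. It uses the nominal COCOMO II constants rather than the calibrated model (with its volatility and complexity-index adjustments) behind the actual figures, so it only approximates the table, and the 1.5× tailoring-complexity factor is an illustrative stand-in.

```python
# Rough sketch of the COCOMO II style estimate behind the table above.
# Constants are the nominal COCOMO II ones (A = 2.94, exponent ~1.10);
# the post's figures came from a calibrated model with tailoring and
# volatility costs, so this only reproduces the ballpark.
RATE_EUR_H = 25.0        # assumed hourly cost (see assumptions above)
HOURS_PER_PM = 152.0     # one person-month of effort

def effort_pm(ksloc: float) -> float:
    """Nominal COCOMO II effort in person-months."""
    return 2.94 * ksloc ** 1.10

def cost_keur(total_ksloc: float, oss_fraction: float) -> float:
    """New code is written in full; reused OSS only pays a ~15%
    'tailoring' share, at a higher complexity (x1.5, illustrative)."""
    new = total_ksloc * (1 - oss_fraction)
    tailored = total_ksloc * oss_fraction * 0.15 * 1.5
    pm = effort_pm(new + tailored)
    return pm * HOURS_PER_PM * RATE_EUR_H / 1000.0

for oss in (0.0, 0.5, 0.75):
    print(f"100 KSLOC, OSS {oss:.0%}: ~{cost_keur(100, oss):.0f} K€")
```

Even this uncalibrated version lands near the first table rows and, more importantly, shows how quickly cost falls as the OSS fraction grows.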


Comparing companies effectiveness: a response to Savio Rodrigues

I was intrigued by a tweet from Stéfane Fermigier, “Comparing only 1 oss vendor (RHAT) and 1 proprietary, monopolistic one (MSFT) is really a deep piece of economic science !”, with a link to this article by long-time OSS debater/supporter/critic/fellow Savio Rodrigues, which compares the financial breakdowns of RedHat and Microsoft and concludes that the commonly-held hypothesis that open source provides a capital advantage through savings on R&D is not true. In particular: “The argument is that commercial vendors spend on items such as advertising, marketing, R&D and most importantly, expensive direct sales representatives. We’re told that open source vendors spend significantly less on these items, and hence can be more capital efficient. These costs make up the difference between the costs of doing business as a commercial vendor vs. an open source vendor. Somehow, those numbers didn’t seem right to me.”

I am always skeptical of the “we’re told” part, as I also remember “we’re told that all open source is developed by students in basements”, “we’re told that we can release the source code and people will start working on it”, and many other unsubstantiated or out-of-context comments.

I would like to point out a few things:

  • First of all, there are structural limits on how publicly listed companies can perform, and on how the financial breakdown is made. If Savio extended his (somewhat limited) analysis to other public companies in the same sector, he would find that most of them are nearly identical in R&D versus SG&A costs when compared within the same market capitalisation class. In fact, only startups (which can rarely go to the stock market) have a higher-than-average R&D share. Companies with similar properties are found in biochemistry and drug design, which have a long incubation period to create a product and for this reason show a high R&D share.
  • Then, the balance sheet is in itself not a good way to measure “productivity”, or savings in development compared to same-class companies. In fact, as I wrote some days ago, savings due to the adoption of OSS are not directly visible in balance sheets; they appear as better product quality, or as the ability to produce goods at a lower price point. Just the thought of comparing RedHat with a company 55 times its size should give an idea of how big an efficiency advantage OSS is.
  • Many companies are helped by the existence of a “trialable” product, and in this sense there may be a core of truth in the idea that customer acquisition costs may be lower. I am not convinced that this cost reduction is significant, at least not to the same extent as the R&D advantages, which are clearly easier to measure and tend to be substantial.

I agree with Savio that competition should not happen exclusively on pricing (though it may be part of a larger strategy), but I contend that looking at just two balance sheet breakdowns cannot tell us whether OSS is more or less efficient in terms of product creation. I continue to believe that in many markets OSS provides a substantial advantage: after all, Rishab et al. estimated the average R&D advantage at 36%; my own estimates range from 20% to 75% in specific industrial areas – in any case, substantial.

Update: Savio added another company (Tibco), similar in size to RedHat; as before, it shows very similar results. It is my belief that adding further companies would show more or less the same picture for software-intensive firms. I also believe that the real comparison should happen outside the financial sheets, by comparing markets: in which markets does each company compete? What is the average size of its competitors? If we can show that on average OSS companies tend to be efficient competitors in markets much larger than their own size would suggest, then we can show that OSS gives an advantage. If Rishab’s evaluation is right, the 36% increased efficiency should translate into a capital advantage of roughly 50%, so we should check whether RedHat or Alfresco effectively compete with companies at least 50% larger than themselves.
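The arithmetic behind turning an efficiency gain into a capital-equivalence figure is a one-liner: if reuse lets you deliver the same output with a fraction (1 − g) of the spend, a competitor without that advantage needs 1/(1 − g) units of capital per unit of yours.

```python
# Capital-equivalent of an efficiency gain g:
# a competitor needs 1/(1-g) euros to match one euro of efficient spend.
efficiency_gain = 0.36  # Rishab et al.'s estimated average R&D advantage

capital_equivalent = 1.0 / (1.0 - efficiency_gain) - 1.0
print(f"~{capital_equivalent:.0%} more capital needed by the competitor")
```

The exact figure is about 56%, which is where the "roughly 50%" in the text comes from.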


The dynamics of OSS adoptions, II – diffusion processes

(followup post of “the dynamics of OSS adoption – 1“)

The most common process behind OSS adoption is called “diffusion”, and is usually modelled using a set of differential equations. It is based on the idea that the market is made of a set of interacting agents, each independently deciding which technology to adopt at different moments; the model is usually capable of handling multiple participants in a market, and of predicting the overall evolution. A good example of a diffusion-based dynamic equilibrium is the web server market, when total server numbers are used. If we take the data from Netcraft, and model each individual server type as a competitor, we get this kind of graph:

[Figure: Netcraft web server market share trends]

This is consistent with a traditional Bass model explanation (data for Apache was added to that of the Google Web Server, which is Apache-based; bicubic smoothing was used to get the trend lines). Diffusion models tend to generate this kind of equilibrium line, with the market moving more or less consistently towards an equilibrium that changes only when one technology is substituted by another, shifting to a different state.

The probability of choosing one technology over another depends on several factors; a very good model for such adoption is the UTAUT model (some PDF examples here and here), which was found capable of predicting 70% of the variance in adoption success (meaning that the model’s parameters largely explain whether a technology will be adopted or not).
The important point to remember: this is about *individual* adoption, not mandated and without external constraints. In this sense, we can use it to predict how a PC owner chooses her web browser, or how a small company may choose which web server to use.

The model uses four parameters: performance expectancy, effort expectancy, social influence, and facilitating conditions.

  • performance expectancy: the degree to which a person believes that using a particular system would enhance his or her job performance, or the degree to which using an innovation is perceived as being better than using its precursor.
  • effort expectancy: the degree to which a person believes that using a system would be free of effort, or the degree to which a system is perceived as relatively difficult to understand and use.
  • social influence: the individual’s internalization of the reference group’s subjective culture, and specific interpersonal agreements that the individual has made with others in specific social situations; or the degree to which use of an innovation is perceived to enhance one’s image or status in one’s social system.
  • facilitating conditions: reflects perceptions of internal and external constraints on behaviour, and encompasses self-efficacy, resource facilitating conditions, and technology facilitating conditions; or objective factors in the environment that observers agree make an act easy to do, including the provision of computer support.
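To make the four constructs concrete, here is a toy scoring function. The weights and the logistic squashing are invented purely for illustration – UTAUT itself is a validated regression model, not this sketch – but it shows how ratings on the four parameters could combine into a single adoption likelihood.

```python
import math

# Illustrative-only weights for the four UTAUT constructs;
# the real model estimates these from survey data.
WEIGHTS = {
    "performance_expectancy": 0.40,
    "effort_expectancy": 0.25,
    "social_influence": 0.20,
    "facilitating_conditions": 0.15,
}

def adoption_score(ratings: dict[str, float]) -> float:
    """Map 1-5 ratings of each construct to a 0-1 adoption likelihood:
    a weighted sum centered on the neutral rating (3), squashed through
    a logistic function."""
    s = sum(WEIGHTS[k] * (ratings[k] - 3.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-2.0 * s))

favorable = adoption_score({k: 5.0 for k in WEIGHTS})  # all constructs high
hostile = adoption_score({k: 1.0 for k in WEIGHTS})    # all constructs low
```

With uniformly high ratings the score approaches 1, with uniformly low ratings it approaches 0, and neutral ratings land at 0.5 – the qualitative behaviour one would expect from the four constructs described above.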

In the next post, I will present an example of these four parameters in the context of an OSS adoption.
