Archive for September, 2010

Strategy, Tactics, and why companies are free to not contribute.

Yesterday Julie Bort wrote an interesting post on the NetworkWorld site called “Cisco doesn’t contribute nearly enough to open source”, in which she contends that Cisco “proclaims it[self] responsible for a half percent of the contributions to the Linux kernel (0.5%). In reality, Cisco has been a near non-entity as an open source contributor”. Of course the author is right in her claims – the amount of code contributed to the Linux kernel is substantial but very “vertical”, specific to Cisco’s own needs as a Linux adopter.

Which is a perfectly sensible thing to do.

The problem of “contribution” comes up again and again in many discussions on open source and business adoption of OSS; why participation is low, and what can be done to improve it, is in fact a source of major debate. It is my opinion that there are real barriers to OSS contribution – internal IPR policies, a lack of understanding of how participation can help the contributor rather than just being a gift to competitors, and more. On the other hand, two points should be made to complement this view: first, some companies contribute in ways that are difficult to measure; second, sometimes companies have no economic reason to contribute at all.

Let’s start with the first point, which is a little pet peeve of mine. Companies can provide source code; some do, and that’s a beautiful thing. However, there are many, many other ways of collaborating. Aaron Seigo, of KDE fame, outlined in one presentation the many activities that count as KDE contributions:

  • Artwork
  • Documentation
  • Human-computer interaction
  • Marketing
  • Quality Assurance
  • Software Development
  • Translation

In fact, I would say that some of these – Artwork, Marketing and Quality Assurance in particular – may be even more important than pure coding; the problem is measuring such contributions. While the technical work underpinning source code analysis is quite well researched (among others, in our FLOSSMETRICS project), there is no research at all on how to measure non-code contributions. And such contributions may be hugely important; one of my favorite examples is Red Hat’s release of the Liberation fonts – a set of fonts metrically compatible with the most widely used Microsoft fonts, like Arial. That alone substantially improved the quality and correctness of document editing and visualization on Linux. How do you measure that? Ubuntu has contributed substantially in terms of dissemination and in creating a base for many other distributions (including our own EveryDesk). How do you assess the value of that?

The second aspect is more complex, and relates to the strategy and tactics a company uses to fulfill its own goals. Consider what a normal company does: first of all, survive (that is, revenues + reserves > expenses). Not every company has this goal – one designed to fulfill a task and then wind down has a survival goal with a deadline – but most do. This means that a company performs an internal or external activity only if it provides, now or in the future, a probable increase in revenues or reserves, or a decrease in expenses. Moral or ethical goals can easily be modeled in this scheme using an “ethical asset”, that is, a measure of how good the company is perceived to be in a specific target environment – ecological contributions, for example.

So, let’s think about our typical company using OSS in a product. Let’s imagine that the company is adopting tactically, that is, it does not have a long-term strategy based on open source. If the cost of contributing something is lower than the cost of doing everything from scratch, the company will contribute back (or at least, the probability of that action is higher). In the absence of a strategy based on open source, there is no need to go further.
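A minimal sketch of that decision rule, with invented numbers, might look like this (the cost figures and names are hypothetical, purely for illustration):

```typescript
// Hypothetical decision rule for a tactical OSS adopter: contribute a change
// upstream only when that is cheaper than carrying the work alone.
// All figures below are invented for illustration.
interface ContributionChoice {
  costToContribute: number; // e.g. cleanup, review, upstream process
  costToGoAlone: number;    // e.g. maintaining a private fork or a rewrite
}

function shouldContribute(c: ContributionChoice): boolean {
  // A tactical adopter needs no justification beyond this comparison.
  return c.costToContribute < c.costToGoAlone;
}

// Example: upstreaming a driver costs 20 engineer-days; maintaining it
// privately across every new kernel release costs 65.
console.log(shouldContribute({ costToContribute: 20, costToGoAlone: 65 })); // true
```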

For example, the blog post mentions the open sourcing of IOS; the question is: why? What economic goal would this open sourcing serve? If the company decides to adopt a long-term strategy based on resource sharing (with the idea of receiving substantial contributions from external entities – as with Linux, WebKit, Apache, and so on) then it may make sense; but it implies a substantial change in company strategy. Such large changes are not easy to execute well; Sun tried (and partly failed), and most of the “famous” examples (IBM, Oracle, Google) adopt an open-based strategy only partially.

To recap: 1) we must evaluate and appreciate all kinds of contributions – not only code; 2) we can expect large-scale contributions only from companies that bet their strategy on OSS – Red Hat is among my favorite examples of that. We cannot realistically expect companies that use open source tactically to contribute back in the same way.



Google vs. Facebook: two separate markets

Just a few days ago it was announced that, according to comScore, people spend more time on Facebook than on Google; this prompted a wide variety of claims, from “Google is dead” to “Facebook will become the internet”. The reality is that FB and GOOG are moving in two different markets, and the failure of previous social experiments like Orkut or Buzz is actually related to the fact that Google’s internal machinery – so sophisticated, and so tailored to help manage “the world’s information” – is not appropriate for social sites.

In fact, Google manages everything on the assumption that data is, in some sense, relevant and useful in a generic way, at least for a subset of its users. The concept of “usefulness” is at the core of the famous PageRank algorithm; it uses the idea that links and connections are a proxy for “interest”, and mixes in background information on who links to what in order to propose results that maximize the probability that someone searching will click through and find something that is, well, interesting. That way the user will come back to Google to search again, and that will bring in more ad revenues.
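As a toy illustration of the idea (a textbook power-iteration version of PageRank; the damping factor of 0.85 and the tiny graph are the standard textbook setup, not anything specific to Google’s production systems):

```typescript
// Toy PageRank by power iteration: a page's rank is roughly the chance that
// a "random surfer" following links (with occasional random jumps) lands on it.
type Graph = Map<string, string[]>; // page -> pages it links to

function pageRank(g: Graph, damping = 0.85, iterations = 50): Map<string, number> {
  const pages = [...g.keys()];
  const n = pages.length;
  let rank = new Map<string, number>(pages.map(p => [p, 1 / n] as [string, number]));

  for (let i = 0; i < iterations; i++) {
    // Everyone starts each round with the "random jump" share of rank.
    const next = new Map<string, number>(
      pages.map(p => [p, (1 - damping) / n] as [string, number]),
    );
    for (const [page, links] of g) {
      // A page splits its current rank evenly among the pages it links to.
      const share = (rank.get(page) ?? 0) / Math.max(links.length, 1);
      for (const target of links) {
        next.set(target, (next.get(target) ?? 0) + damping * share);
      }
    }
    rank = next;
  }
  return rank;
}

// Example: "b" and "c" both link to "a", so "a" ends up ranked highest.
const g: Graph = new Map([["a", ["b"]], ["b", ["a"]], ["c", ["a"]]]);
console.log(pageRank(g));
```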

This is good and correct – unless the data is ephemeral, like the massive amount of drivel generated by social site users. The updates on Facebook users’ walls are mostly uninteresting to anyone outside the same social circle – knowing that “Justin just spilled his coke on all his textbooks! What fun!” is probably relevant to people who know Justin, whoever he is, and maybe to some textbook vendor that will get a new customer. The value of Facebook is in its inclusiveness, that is, the fact that a large number of social groups of all sizes live within the same platform and share, at least partly, the same social graph. But there is no need for a PageRank in the back office of Facebook search, as all the structure that search needs is already there in the (user-generated) social graph itself.
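To make the contrast concrete, here is a minimal sketch of the kind of lookup that suffices for social relevance – a plain breadth-first walk of the friend graph, with no global ranking at all (a simplifying assumption for illustration, not Facebook’s actual implementation):

```typescript
// Social search needs no global "usefulness" score: an update is relevant
// roughly when its author sits within a hop or two of the searcher.
type SocialGraph = Map<string, Set<string>>; // user -> friends

function relevantAuthors(g: SocialGraph, user: string, hops = 2): Set<string> {
  let frontier = new Set([user]);
  const seen = new Set([user]);
  for (let i = 0; i < hops; i++) {
    const next = new Set<string>();
    for (const u of frontier) {
      for (const friend of g.get(u) ?? []) {
        if (!seen.has(friend)) { seen.add(friend); next.add(friend); }
      }
    }
    frontier = next;
  }
  return seen; // Justin's spilled coke matters only to these users
}
```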

That’s why Google seems unable to turn its social acquisitions into something integrated within its own services: the engine behind Google is tuned for a different set of parameters – and useless for social sites. Which is a pity, because some of the technology Google developed to address this market (like Wave) was quite interesting, but – again – simply ill suited to it.


EveryDesk is a finalist of the Open World Forum Demo Cup!

I can only thank the judges for this recognition. I hope to entertain you in Paris as well :-)

Open Source and innovation: 13 finalists chosen for the Demo Cup at this year’s Open World Forum
The Jury for the Demo Cup – the international competition for Open Source projects being organized as part of the Open World Forum in Paris on 1 October 2010 – has published the list of finalists, following a tough selection process based on entrants’ submissions.

The Open World Forum is the world’s leading summit meeting bringing together decision-makers and communities to cross-fertilize open technological, economic and social initiatives to build the digital future. Its Demo Cup showcases innovative and game-changing Open Source products of the year.

The finalists are:

These 13 finalists will compete against each other on 1 October 2010 from 2:00pm to 4:00pm at the Open World Forum. Each will have exactly eight minutes to convince the Jury and the audience, by putting forward a practical demonstration of how their product might become a game-changer in their marketplace. Following these presentations, the Jury will present five ‘Open Innovation Awards’ which will recognize the most spectacular and convincing demonstrations.

The Demo Cup Jury includes investors, entrepreneurs, Open Source managers from leading IT services companies, and consultants; all of them experts in Open Source and innovation: Larry Augustin (SugarCRM), Jean-François Caenen (Capgemini), Jean-Marie Chauvet (LC Capital), Stefane Fermigier (Nuxeo), Jean-François Gallouin (Via Innovation), Roberto Galopini (consultant), Thierry Koerlen (Ulteo), Jean-Noel Olivier (Accenture), Bruno Pinna (Bull), Alain Revah (Kublax).

Stefane Fermigier, joint chairman of the Jury, commented: “When choosing the finalists from among the many submissions we received, we took three key criteria into account: the innovative and open nature of the projects being presented; the impact that we thought they might have on their respective markets; and their ability to produce a spectacular demo that would leave a lasting impression on the audience.”

Jean-Marie Chauvet, also joint chairman of the Jury, added: “The very high quality and variety of the entries we received shows that the Open Innovation Awards, organized as part of the Open World Forum, are helping to establish this event as the essential annual focal point for innovation in the open software world.”

The 2010 Demo Cup is organized by the Open World Forum, with operational support being provided by the Open Software Special Interest Group of the Systematic competitiveness cluster.

For more information, visit: <http://www.openworldforum.org/connect/awards/awards> or contact Stefane Fermigier at sf@nuxeo.com.

About the Open World Forum

The Open World Forum is the leading global summit meeting bringing together decision-makers and communities to cross-fertilize open digital technological, economic and social initiatives. At the very heart of the Free/Open Source revolution, the event was founded in 2008 and now takes place every year in Paris, with over 140 speakers from 40 countries, an international audience of 1,500 delegates and some forty seminars, workshops and think-tanks. Organized by a vast network of partners, including the leading Free/Open Source communities and main global players from the IT world, the Open World Forum is the definitive event for discovering the latest trends in open computing. As a result, it is a unique opportunity to share ideas and best practice with visionary thinkers, entrepreneurs and leaders of the top international Free/Open Source communities and to network with technology gurus, CxOs, analysts, CIOs, researchers, politicians and investors from six continents. The Open World Forum is being run this year by the Systematic competitiveness cluster, in partnership with Cap Digital and the European QualiPSo consortium. Some 70% of the world’s leading information technology companies are involved in the Forum as partners and participants.

For more information, visit: http://www.openworldforum.org



Web versus Apps: what is missing in HTML5

If there is one concept that is clear in analysts’ minds, it is that mobile (in any form) is the hot market right now. Apple’s iOS devices are growing by leaps and bounds, dispelling the doom predictions of our beloved Ballmer – “There’s no chance that the iPhone is going to get any significant market share. No chance. … 2% or 3%, which is what Apple might get” – and his dismissal of the iPad as “yet another PC”. The reality is that the new mobile platforms are consolidating two concepts: the App Store – an integrated approach to managing and buying applications – and the idea of “apps for everything”, even for data that comes straight from a web site, like the recently launched Twitter app for the iPad, which is – in my own, clearly subjective, opinion – beautiful.

After all the talk about platform independence, portability, and the universality of HTML5 – why apps? Why closed (or half-open) app stores, when theoretically the same thing can be obtained through a web page? I have a set of personal opinions on that, and I believe that we still need some additional features and infrastructure from browsers (and probably operating systems) to really match the feature set and quality of apps. If – or when – those missing pieces are delivered to the browser, the whole development experience will, in my opinion, return to the web as a medium, substantially enlarging the potential user base and reducing the importance of developing for a single OS.

User interfaces: this is, actually, one of the easiest parts. HTML5, CSS3, Canvas, and a whole set of additions (like WebGL) are already closing in on the most refined native UI toolkits. There is still a gap – of course – but it is closing fast. Modern toolkits like Cappuccino (one of my favorites, used to create the stunning 280slides tool) are quite comparable to native UIs, and the few remaining features are being added at a frantic pace (thanks in part to the healthy competition between Mozilla and WebKit).

Video: WebM is, in my tests, a very good alternative to H.264, both in quality and in decoding efficiency (in my tests, WebM playback uses 20 to 30% less CPU than the ffmpeg H.264 decoder, which is quite a good result). As for quality, the MSU Graphics and Media Lab codec comparison found WebM approximately equivalent to baseline x264 encoding; that is, good enough for most applications. WebM’s substantial drawback at the moment is its dreadful encoding time – 5 to 20 times slower than comparable, more mature encoders. Substantial effort is needed before WebM becomes competitive on the encoding side.

2D and casual gaming: ah, the hard part of gaming on the web. Up to now, gaming has been largely relegated to the Flash engine, and it is one of the parts still not replicated well by HTML5, Javascript et al.; in fact, Flash is quite important for the casual gaming experience, and some quite stunning Flash-based games are comparable to native ones (if you want to waste some time, look at RoboKill2 as an example). However, given that no fully compatible open source Flash player exists, there are still issues with the real portability and platform independence of Flash gaming in general (despite the excellent improvements in Gnash and LightSpark); we may even see, in the future, a native translator to Javascript like the Smokescreen project. Actually, there is a great deal of overlap between Flash and the recent evolution of Canvas/HTML5/Javascript – it is clear that the overall evolution of the open web platform is going in the direction of integrating most of Flash’s functionality directly within HTML.
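The building blocks for this are already in place; below is a minimal sketch of the classic update-and-draw loop that Flash’s ENTER_FRAME event used to provide and that plain Canvas now supports (the frame rate and the moving square are placeholder choices of mine):

```typescript
// Minimal Canvas game loop: what Flash's ENTER_FRAME gave for free.
const canvas = document.createElement("canvas");
canvas.width = 320;
canvas.height = 240;
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d")!;

let x = 0; // position of a "sprite" moving across the screen

function frame(): void {
  x = (x + 2) % canvas.width;  // update game state
  ctx.fillStyle = "black";     // redraw the scene from scratch
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = "orange";
  ctx.fillRect(x, 100, 20, 20);
}

// setInterval keeps the sketch portable to 2010-era browsers;
// today requestAnimationFrame would be the idiomatic choice.
setInterval(frame, 1000 / 30); // ~30 frames per second
```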

3D gaming: there is at the moment no way to create, inside a browser, something like the Epic Citadel demo or Carmack’s RAGE engine on iOS. The only potential alternative is WebGL, which (like the previous examples) is based on OpenGL ES 2.0 and paints on the HTML5 canvas (which, given proper support for hardware compositing, should allow complex interfaces and effects). The problem is that browser support is still immature – most browsers are only now experimenting with an accelerated compositing pipeline, and many problems need to be solved before the platform can be considered stable. However, once the basic infrastructure is done, there is no reason not to see things like the current state-of-the-art demos on the web: modern in-browser Javascript JITs are good enough for action and scripting, and web workers and web sockets are stable enough to create complex, asynchronous event models. It will probably take an additional year until 3D support is good enough to see something like WoW inside a browser.
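For reference, a minimal sketch of what WebGL bootstrapping looks like – just acquiring a context and clearing the canvas; everything else (shaders, meshes, the compositing issues discussed above) builds on top of this (the “experimental-webgl” fallback reflects the pre-1.0 context name used by early browsers):

```typescript
// Minimal WebGL setup: get a context and clear the screen.
const canvas3d = document.createElement("canvas");
canvas3d.width = 640;
canvas3d.height = 480;
document.body.appendChild(canvas3d);

// Pre-1.0 browsers exposed the context under the name "experimental-webgl".
const gl = (canvas3d.getContext("webgl") ??
            canvas3d.getContext("experimental-webgl")) as WebGLRenderingContext | null;

if (!gl) {
  console.log("WebGL not available - exactly the immaturity discussed above.");
} else {
  gl.viewport(0, 0, canvas3d.width, canvas3d.height);
  gl.clearColor(0.1, 0.1, 0.2, 1.0); // dark blue background
  gl.clear(gl.COLOR_BUFFER_BIT);     // shaders and geometry build on this context
}
```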

Local binary execution: for those things that a browser actually cannot do, local execution is the only alternative. For example, a complex VPN client embedded in a web page would simplify the task of connecting to a web (or non-web) service without downloading any additional package. This model was demonstrated by Google in its ChromeOS presentation, which showed off a game based on the Unity web player ported to the Native Client (NaCl) environment. The problem with the initial NaCl implementation was that the binary was not portable across CPU architectures; the new PNaCl (portable NaCl) uses the incredibly good LLVM infrastructure to generate portable bytecode.

Payment: one thing that is sorely missing or incomplete is billing and payment management from within the web application. On iOS, thanks to iTunes and carrier integration, paying – even in-game or in-app – is easy and immediate. There is no similar ease of use and instant monetization for web applications at the moment. One of the missing pieces is the overall management of digital identities, which is inextricably linked to payment options and channels.

DRM: yes, DRM. Or content protection, or whatever. Despite the clear indication that DRM schemes do not work, there is no shortage of studios and content producers that want to ensure at least a minimal form of “protection” against unwanted use. I don’t believe this form of protection is useful at all, but I am not confident that people will accept this view in the next 5 years – which means that DRM should be possible in the context of the browser. Possible alternatives are a ported content execution engine (imagine a video player based on PNaCl that brings its own DRM engine inside) or the integration of an open source DRM engine like DReaM (if it survives the Oracle changeover, that is). This kind of tool could also help prevent cheating in online games (imagine a WoW-like game based on Javascript: what prevents the user from changing the code on the fly with something like Greasemonkey?) and other multiplayer environments.

App stores: what is an app store? A tool to reduce search costs, and – in Apple’s iTunes model – a framework for app management and backup. Something like this is possible right now with some browser/OS integration (the excellent Jolicloud already has it, and with additional support for web packaging formats and remote synchronization like Mozilla Sync it could become ubiquitous).

What do you think? Is there something else missing? Comments are, as usual, welcome…



Windows Phone 7, Android, and market relevance

Updated: despite Business Insider’s claims, the list of motives is actually a perfect copy of those mentioned by Steve Ballmer in a CNN interview, and I also found that the list of motives for the claimed inferiority of Android actually dates from 2008, as can be found here. I find it quite funny that basically the same motivations apply two years later to a different OS (in 2008 it was Windows Mobile 6.5, a totally different operating system), and that they are quite similar to the list of motivations MS gives for avoiding open source – namely, inferior user experience, hidden costs and IPR risks. Maybe Microsoft has not changed as much as it would like to claim.

A recent Business Insider post provided, besides a nicely retouched photo of Google’s Schmidt with menacing red eyes, a snippet of conversation with an anonymous MS employee claiming that Android’s “free” OS is not free at all, and that its costs are much higher than the $15 Microsoft asks in licensing fees. Having had my stint in mobile economics, I would like to contribute some thoughts on what the MS employee actually implies, and on why I believe some of it is not accurate. Before flaming me as a Google fanboy, I would like to point out that I am not affiliated with Google, MS, or anyone else (apart from my own company, of course), and my cellphone is a Nokia. Enough said.

OEMs are not using the stock Android build. All Android OEMs are bearing costs beyond “free.” That goes with the definition of OEM – it is hardly a surprising idea. My gripe with the phrase is that the author conveniently conflates “free” as in “freely available operating system” with “free as in I have nothing to do, everything is done for me at no cost”. The second concept is actually quite uncommon, and I have never met an OEM product manager who believed in anything like it. It reminds me a lot of the old taglines used in the infamous MS “comparisons” that were – with blessings from all – scrapped from the Microsoft web site. So, in conclusion: yes, you will bear costs other than downloading Android from Git. And – surprise – I am sure MS will ask for engineering fees to adapt WinPhone7 for anything outside the stock image.

Lawsuits over disputed Android IP have been costly for Android OEMs. (See Apple/HTC, as just one example.) Microsoft indemnifies OEMs who license Windows Phone 7 against IP issues with the product. That is, legal disputes over the IP in Windows Phone 7 directed at OEMs will be handled by Microsoft. This goes a long way toward controlling legal costs at the OEM level. Ah, please, Microsoft – you are such a friend of OSS, and you still beat the “IPR violation” drum? Anyway, I am quite sure that indemnification can be acquired fairly easily, probably from Google or from a third party. It depends on the kind of IPR the OEM itself holds; in some cases such a patent safety scheme is simply uneconomical. It is, in any case, a business decision – Symbian did not offer indemnification either (or only as an additional product), but that did not stop Symbian from becoming the most widely used mobile OS.

Android’s laissez faire hardware landscape is a fragmented mess for device drivers. (For background, just like PCs, mobile devices need drivers for their various components—screen, GPS, WiFi, Bluetooth, 3G radio, accelerometer, etc.) Android OEMs have to put engineering resources into developing these drivers to get their devices working. The Windows Phone 7 “chassis strategy” allows devices to be created faster, saving significant engineering cost. It’s essentially plug and play, with device drivers authored by Microsoft. This (apart from the clearly pejorative “fragmented mess”) is naturally true. It is also – another surprise – the reason for Windows’ success: an external ecosystem of hardware devices, mostly unpredictable, developed and managed largely outside of Microsoft’s control. After much bashing of Apple’s “walled garden”, Microsoft now seems to imply that the same model that brought it success is useless, and that to win in mobile you must adopt Apple’s centrally managed hardware experience. That may be true, or not – but I suspect that hardware manufacturers will be happier creating many permutations and device models, designed for different price points and different users, in a way that is incompatible with MS’s central control and centralized device-driver development. What happens if I need to push to market a device that deviates from the MS chassis? Will MS write the driver for me, for free? What if it doesn’t want to write it? The chassis model is nice if you are Apple and sell basically a single model (or a few); if you go to market through many hardware vendors, you are forcing the same, undifferentiated hardware on all OEMs – and that is a great no-no. How do you compete against rivals that employ exactly the same model, the same bill of materials, the same procurement channel?

Also, this phrase is a clear indication that someone inside MS still doesn’t understand what (real) open source is about. The amount of engineering necessary to create a complex product out of OSS is substantially lower than with proprietary alternatives, as I demonstrated here and here; driver development effort can easily be shared among the many different projects that use the same component, lowering development costs substantially.

Windows Phone 7 has a software update architecture designed to make it easy for OEMs to plug-in their custom code, independent of the OS code. We’ve seen the delays due to Android OEMs having to sink engineering resources into each and every Android update. Some Android OEMs skip updates or stop updating their less popular devices. Because of the unique update architecture, Windows Phone 7 OEMs don’t need to roll their own updates based on the stock build. Costs are reduced significantly. This part is difficult to judge until Phone 7 is actually out. I believe it stems from an underlying error: OEMs add code to differentiate themselves and to push branded apps and services, not to compensate for missing OS functionality (especially now, with Android 2.2; Android 1.5 and 1.6 did need third-party additions because of their lack of features). Carriers, once they have sold a device, are not that interested in providing updates – after all, you are already locked into a contract. I have seen no official documentation on why Phone 7 should be so modular that no engineering is needed even for custom layers on top of the user interface – we will see.

Android OEMs need to pay for licenses for many must-have features that are standard in Windows Phone 7. For example, software to edit Office documents, audio/video codecs (see some costs here), or improved location services (for this, Moto licenses from Skyhook, just as Apple once did). Of course, all of these license fees add up. I like the concept of “must have” – it differs widely for every user and company. For example, I am sure that using Google Docs or Zoho (or Microsoft’s web Office, which is quite good in its own right) would cover the “edit Office documents” part; as for the audio/video codecs, of course you have to license them… unless you use WebM or similar. Or, like many OEMs, you are already a licensee of H.264 and the other covered standards – in which case you pay around $1 per device. As for the other services: I found no mention of location services from MS, at least not in the public presentations. If anyone has more details on them, I would welcome the addition.

Windows Phone 7 supports automated testing. Android doesn’t. When OEMs hit the QA phase of the development lifecycle, it’s faster and less expensive to QA a Windows Phone 7 device than an Android device. Again: if you have a single chassis, or a few of them, testing is certainly easier. However, there are quite a few testing suites for Android that (through the emulator) provide very good automated testing facilities.

Finally, Windows Phone 7 comes with great user experiences in the Metro UI, Zune, Xbox LIVE, Exchange, and Visual Studio for app development. Creating these experiences for Android is costly. They’re not baked into the stock build of Android. Well, there are quite a few tools for app development on Android as well. How exactly Exchange counts as a great user experience is something I do not quite understand, but that is probably a limit of mine.

In synthesis, the new MS concept is “we do it like Apple”. I am not sure this can work for anyone that is not Apple, though; first, because up to now product engineering excellence was not among MS’s most touted virtues, and second, because it goes against the differentiation trend that OEMs and telcos are pushing to keep their brand lines unique and appealing. How many Phone 7 devices can a telco carry? One? Two? With Android it is possible to imagine a custom device for every price point – manufacturers like Motorola and HTC are already pushing 5 or 6 devices or more, and low-cost handsets are adding even more to the segmentation mix.

