Posts Tagged open source

Some data on OSS TCO: results from past projects

During the development of the EU COSPA project, we found that one of the most common criteria used to evaluate “average” TCO was actually not very effective in providing guidance – the variability of the results was so large that it made any form of “average” basically useless. For this reason, we took a two-step approach: the first step was to define a clearly measurable set of metrics (including material and immaterial expenses), which you can find here:
D3.1 – Framework for evaluating return/losses of the transition to ODS/OSS

The second aspect is related to “grouping”. We found that the optimal methodology for evaluating a migration was different for different kinds of transitions, like server vs. desktop, full-environment migration vs. partial, and so on; the other, orthogonal aspect is whether the migration was successful or not. In fact, *when* the migration was successful, the measured TCO (both short-term and over 5 years) was substantially lower for OSS than for the pre-existing proprietary software. I highlight two cases: a group of municipalities in the North of Italy, and a modern hospital in Ireland. For the municipalities:

Initial acquisition cost: proprietary 800K€, OSS 240K€

Annual support/maintenance cost (over 5 years): proprietary 144K€, OSS 170K€

The slightly higher cost for the OSS part reflects the fact that an external consultancy was paid to provide the support. An alternative strategy would have been to retrain the staff for Linux support, using consultancies only in years 1 and 2, leading to an estimated total cost for the OSS solution exactly in line with the proprietary one. The municipalities also performed an in-depth analysis of efficiency, that is, documents processed per day, comparing OpenOffice.org and MS Office. This was possible thanks to a small applet installed (with users' and unions' consent) on each PC, recording user actions and the applications and files used during the migration evaluation. It turned out that users were substantially *more* productive with OOo. This is probably not due to a relative technical advantage of OOo over MS Office, but to the fact that some training was provided on OpenOffice.org before the migration began – something that had not been done before for internal personnel. Many users had in fact never had any formal training on any office application, and the limited (4 hours) training performed before the migration substantially improved their overall productivity.

On the other hand, it is clear that OOo – from the point of view of the user – does not lower the productivity of employees, and can perform the necessary tasks without impacting the municipality's operations.

For the hospital, the migration was done in two steps: a first one (groupware, content management, OpenOffice.org) and a second one (ERP, medical image management).

First stage initial acquisition cost: proprietary 735K€, OSS 68K€

Annual support/maintenance cost (over 5 years): proprietary 169K€, OSS 45K€

Second stage initial acquisition cost: proprietary 8160K€, OSS 1710K€

Annual support/maintenance cost (over 5 years): proprietary 1148K€, OSS 170K€

The hospital shows a much larger percentage saving than comparable cases because it was considerably more mature in terms of OSS adoption; thus, most of the external, paid consulting was not necessary for its larger migration.
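
Putting the acquisition and support figures together gives a feel for the gap. Here is a minimal sketch (my own arithmetic on the numbers quoted above, all values in K€):

```typescript
// Recomputing the 5-year TCO figures quoted above (my own arithmetic, in K€).
function tco5y(acquisition: number, annualSupport: number): number {
  return acquisition + 5 * annualSupport;
}

const cases = [
  { name: "Municipalities",    prop: tco5y(800, 144),   oss: tco5y(240, 170) },
  { name: "Hospital, stage 1", prop: tco5y(735, 169),   oss: tco5y(68, 45) },
  { name: "Hospital, stage 2", prop: tco5y(8160, 1148), oss: tco5y(1710, 170) },
];

for (const c of cases) {
  const saving = (1 - c.oss / c.prop) * 100;
  console.log(`${c.name}: proprietary ${c.prop}K€, OSS ${c.oss}K€, saving ${saving.toFixed(0)}%`);
}
```

The arithmetic makes the maturity effect visible: roughly a 28% saving over 5 years for the municipalities, against just over 80% for both hospital stages.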

Some of the interesting aspects that we observed:

  • In both tangible and intangible costs, the reality is that one of the most important expenses is software search and selection, including the costs incurred in selecting the “wrong” one. This is one of the reasons why our guidelines include the use of established, pragmatic software selection methodologies like those from FLOSSMETRICS or QUALIPSO (we actually found no basic difference in “effectiveness” among methods: just use at least one!)
    This information is also something that can be reused and disseminated among similar groups; for example, the information on the suitability of a backup solution for municipalities can be spread as a “best practice” among similar users, thus reducing the cost of searching for and evaluating it. If such a widespread practice can be established, we estimate that OSS adoption/migration costs can be reduced by something between 17% and 22% with information spreading alone.
  • On average, the split of migration costs between tangible and intangible was nearly even, with one exception at 27% tangible vs. 73% intangible, due to the pressure to use older PCs and to reuse resources when possible for budgetary reasons. In general, if you want to know the “real” TCO, simply take your material costs and multiply by two. Rough, but surprisingly accurate.
  • In COSPA, OpenTTT and our own consulting activity alike, we found that 70% of users *do not need* external support services after the initial migration is performed. For example, while most COSPA users paid server support fees for Red Hat Enterprise Linux, a substantial percentage could have used a clone like CentOS or Oracle Linux with the same level of service and support. Also, while not universally possible, community-based support has been found sufficient and capable in a large number of environments. One problem found with community support is “attitude”: some users approached the forums with the same expectations as for a paid offering, seriously damaging the image and possibility of support (something like “I need an answer NOW or I’ll sue you!” sent to a public support forum for an open source product). For this reason, we have suggested in our best practices to have a single, central point of contact between internal users and the external OSS communities – someone trained in and familiar with how OSS works – to forward requests and seek solutions. After the initial migration and a 1–2 year period of “adaptation”, this can shift some of the support calls to communities, reducing support costs by a further 15–20% on average.


EveryDesk Online: a full desktop as a Facebook application

I am quite proud of the work that we did on EveryDesk – a full desktop on a bootable USB key, fully modifiable and adaptable. We are using it in schools, public administrations and companies, where the increased efficiency of Linux makes a difference in making old computers usable again – or in managing PCs that are remote or in hostile environments.

However, this is not enough. You may be without a USB-bootable computer, or you may be using a tablet like an iPad or a Galaxy Tab (something that I see more and more everywhere). In these environments, you may need something more powerful than the apps available there – a full Office-like application, or a real desktop browser to access a corporate banking application; maybe you need a specific client for older systems, like the IBM iSeries (the old AS/400), or some special client written in Java – on systems that do not have Java or Flash.

For this kind of application, we are working on a system that embeds a full HTML5 desktop in a Facebook application, making it accessible from any recent web browser, including on the iPad. This way, you can have a full desktop everywhere you go. We hope it will be of interest; as soon as it is ready, we will release the source code and blueprints.

We have prepared a small demo of how it works right now; it is a real screen capture from my own personal EveryDesk/Online instance, recorded over a normal ADSL line. It should give an idea of how it may work for you.


Strategy, tactics, and why companies are free not to contribute

Yesterday Julie Bort wrote an interesting post on the NetworkWorld site called “Cisco doesn’t contribute nearly enough to open source”, where she contends that “[despite its] … proclaims it responsible for a half percent of the contributions to the Linux kernel (0.5%). In reality, Cisco has been a near non-entity as an open source contributor”. Of course the author is right in her claims – the amount of code contributed to the Linux kernel is substantial but very “vertical”, and specific to the needs of Cisco as a Linux adopter.

Which is a perfectly sensible thing to do.

The problem of “contribution” comes up again and again in many discussions on open source and business adoption of OSS; it is, in fact, a source of major debate why participation is low, and what can be done to improve it. It is my opinion that there are some barriers to OSS contribution – namely, internal IPR policies, a lack of understanding of how participation can be helpful and not just a gift to competitors, and more. On the other hand, two points should be made to complement this view: the first is that some companies contribute in ways that are difficult to measure, and the second is that sometimes companies have no economic reason to do so.

Let’s start with the first point, which is a little peeve of mine. Companies can provide source code; some do, and that’s a beautiful thing. However, there are many, many alternative ways of collaborating. Aaron Seigo, of KDE fame, outlined in one presentation the many activities that are part of possible KDE contributions:

  • Artwork
  • Documentation
  • Human-computer interaction
  • Marketing
  • Quality Assurance
  • Software Development
  • Translation

In fact, I would say that some aspects like Artwork, Marketing and Quality Assurance may even be more important than pure coding – the problem is measuring such contributions. While the technical work underpinning source code analysis is quite well researched (among others, in our FLOSSMETRICS project), there is NO research on how to measure non-code contributions. And such contributions may be hugely important; one of my favorite examples is the release, by Red Hat, of the Liberation fonts – a set of fonts metrically compatible with the most widely used Microsoft fonts, like Arial. That alone helped substantially in improving the quality and correctness of document editing and visualization on Linux. How do you measure that? Ubuntu has contributed substantially in terms of dissemination, and in creating a base for many other distributions (including our own EveryDesk). How do you assess the value of that?

The second aspect is more complex, and relates to the strategy and tactics that a company uses to fulfill its own goals. Let’s consider what a normal company does: first of all, survive (that is, revenues + reserves > expenses). Not all companies have such a goal (a company designed to fulfill a task and then end its activities has a survival goal with a deadline), but most do. This means that a company performs an internal or external activity only if it provides, now or in the future, a probable increase in revenues or reserves, or a decrease in expenses. Moral or ethical goals can easily be modeled in this schema using an “ethical asset”, that is, a measure of how good we are in a specific target environment; for example, ecological contributions and so on.

So, let’s think about our typical company using OSS for a product. Let’s imagine that the company is doing a tactical adoption, that is, it does not have a long-term strategy based on Open Source. If the cost of contributing something back is lower than the cost of doing everything from scratch, then the company will contribute back (or at least, the probability of that action is higher). In the absence of a strategy based on open source, there is no need to go further.

For example, the blog post mentions the open sourcing of IOS; the question is: why? What economic goal would this open sourcing serve? If the company decided to adopt a long-term strategy based on resource sharing (with the idea of receiving substantial contributions from external entities – as happens with Linux, WebKit, Apache, and so on) then it might make sense; but it would imply a substantial change in company strategy. Such large changes are not easy to execute well; Sun tried (and partly failed), and most of the “famous” examples only partially adopt an open-based strategy (IBM, Oracle, Google).

To recap: 1) we must evaluate and appreciate all kinds of contributions – not only code; 2) we can expect large-scale contributions only from companies that bet their strategy on OSS – Red Hat is among my favorite examples of that. We cannot realistically expect companies that use Open Source in a tactical way to contribute back to the same degree.


EveryDesk is a finalist in the Open World Forum Demo Cup!

I can only thank the judges for this recognition. I hope to entertain you in Paris as well :-)

Open Source and innovation: 13 finalists chosen for the Demo Cup at this year’s Open World Forum
The Jury for the Demo Cup – the international competition for Open Source projects being organized as part of the Open World Forum in Paris on 1 October 2010 – has published the list of finalists, following a tough selection process based on entrants’ submissions.

The Open World Forum is the world’s leading summit meeting bringing together decision-makers and communities to cross-fertilize open technological, economic and social initiatives to build the digital future. Its Demo Cup showcases innovative and game-changing Open Source products of the year.

The finalists are:

These 13 finalists will compete against each other on 1 October 2010 from 2:00pm to 4:00pm at the Open World Forum. Each will have exactly eight minutes to convince the Jury and the audience, by putting forward a practical demonstration of how their product might become a game-changer in their marketplace. Following these presentations, the Jury will present five ‘Open Innovation Awards’ which will recognize the most spectacular and convincing demonstrations.

The Demo Cup Jury includes investors, entrepreneurs, Open Source managers from leading IT services companies, and consultants; all of them experts in Open Source and innovation: Larry Augustin (SugarCRM), Jean-François Caenen (Capgemini), Jean-Marie Chauvet (LC Capital), Stefane Fermigier (Nuxeo), Jean-François Gallouin (Via Innovation), Roberto Galopini (consultant), Thierry Koerlen (Ulteo), Jean-Noel Olivier (Accenture), Bruno Pinna (Bull), Alain Revah (Kublax).

Stefane Fermigier, joint chairman of the Jury, commented: “When choosing the finalists from among the many submissions we received, we took three key criteria into account: the innovative and open nature of the projects being presented; the impact that we thought they might have on their respective markets; and their ability to produce a spectacular demo that would leave a lasting impression on the audience.”

Jean-Marie Chauvet, also joint chairman of the Jury, added: “The very high quality and variety of the entries we received shows that the Open Innovation Awards, organized as part of the Open World Forum, are helping to establish this event as the essential annual focal point for innovation in the open software world.”

The 2010 Demo Cup is organized by the Open World Forum, with operational support being provided by the Open Software Special Interest Group of the Systematic competitiveness cluster.

For more information, visit: <http://www.openworldforum.org/connect/awards/awards> or contact Stefane Fermigier at sf@nuxeo.com.

About the Open World Forum

The Open World Forum is the leading global summit meeting bringing together decision-makers and communities to cross-fertilize open digital technological, economic and social initiatives. At the very heart of the Free/Open Source revolution, the event was founded in 2008 and now takes place every year in Paris, with over 140 speakers from 40 countries, an international audience of 1,500 delegates and some forty seminars, workshops and think-tanks. Organized by a vast network of partners, including the leading Free/Open Source communities and main global players from the IT world, the Open World Forum is the definitive event for discovering the latest trends in open computing. As a result, it is a unique opportunity to share ideas and best practice with visionary thinkers, entrepreneurs and leaders of the top international Free/Open Source communities and to network with technology gurus, CxOs, analysts, CIOs, researchers, politicians and investors from six continents. The Open World Forum is being run this year by the Systematic competitiveness cluster, in partnership with Cap Digital and the European QualiPSo consortium. Some 70% of the world’s leading information technology companies are involved in the Forum as partners and participants.

For more information, visit: http://www.openworldforum.org


Web versus Apps: what is missing in HTML5

If there is one concept that is clear in analysts’ minds, it is that mobile (in any form) is the hot market right now. Apple’s iOS devices are growing by leaps and bounds, dispelling the doom predictions of our beloved Ballmer: “There’s no chance that the iPhone is going to get any significant market share. No chance. … 2% or 3%, which is what Apple might get”, or his dismissal of the iPad as “yet another pc”. The reality is that the new mobile platforms are consolidating a concept – the idea of the app store, that is, an integrated approach to managing and buying applications – and the idea of “apps for everything”, even for data that comes straight off a web site, like the recently launched Twitter app for the iPad, which is – in my own, clearly subjective, opinion – beautiful.

After all the talk about platform independence, portability, the universality of HTML5, and so on – why apps? Why closed (or half-open) app stores, when theoretically the same thing can be obtained through a web page? I have a set of personal opinions on that, and I believe that we still need some additional features and infrastructure from browsers (and probably operating systems) to really match the feature set and quality of apps. If – or when – those missing pieces are delivered to the browser, the whole development experience will, in my opinion, return to the web as a medium, substantially enlarging the potential user base and reducing the importance of developing for any single OS.

User interfaces: this is, actually, one of the easiest parts. HTML5, CSS3, Canvas, and a whole bunch of additions (like WebGL) are already closing in on the most refined native UI toolkits. There is still a margin – of course – but the gap is closing fast. Modern toolkits like Cappuccino (one of my favorites, used to create the stunning 280slides tool) are quite comparable to native UIs, and the few remaining features are being added at a frantic pace (thanks in part to the healthy competition between Mozilla and WebKit).

Video: WebM is, in my tests, a very good alternative to H.264, both in quality and in decoding efficiency (in my tests, WebM playback uses 20% to 30% less CPU than the ffmpeg H.264 decoder, which is quite a good result). As for quality, the MSU Graphics and Media Lab codec comparison found that WebM is approximately equivalent to baseline x264 encoding; that is, good enough for most applications. The substantial drawback of WebM at the moment is the dreadful encoding time – 5 to 20 times slower than comparable, more mature encoders. Substantial effort is needed before WebM can become competitive on the encoding side.

2D and casual gaming: ah, the hard part of gaming on the web. Up to now, gaming has mainly been relegated to the Flash engine, and it is one of the parts still not replicated well by HTML5, Javascript et al.; in fact, Flash is quite important for the casual gaming experience, and some quite stunning Flash-based games are comparable to native ones (if you want to waste some time, look at RoboKill2 as an example). However, given that no fully compatible open source Flash player exists, there are still issues with the real portability and platform independence of Flash gaming in general (despite the excellent improvements in Gnash and LightSpark); it may even be possible to see, in the future, a native translator to Javascript like the SmokeScreen project. Actually, there is a great deal of overlap between Flash and recent evolutions of Canvas/HTML5/Javascript – it is clear that the overall evolution of the open web platform is going in the direction of integrating most of Flash's functionality directly within HTML.
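
To make that overlap concrete, here is a minimal sketch (my own illustrative code, in TypeScript, not taken from any project mentioned above) of the kind of per-frame game loop that Canvas and Javascript already support natively:

```typescript
// Minimal Canvas "game loop" sketch: clear, draw, advance, repeat each frame.
const canvas = document.createElement("canvas");
canvas.width = 320;
canvas.height = 240;
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d")!;

let x = 0; // horizontal position of our stand-in "sprite"
function frame(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillStyle = "steelblue";
  ctx.fillRect(x, 100, 32, 32);   // draw the sprite
  x = (x + 2) % canvas.width;     // move it; wrap at the right edge
  requestAnimationFrame(frame);   // ask the browser for the next frame
}
requestAnimationFrame(frame);
```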

3D gaming: there is at the moment no way to create something like the Epic Citadel demo, or Carmack’s RAGE engine on iOS, inside the browser. The only potential alternative is WebGL, which is (like the previous links) based on OpenGL ES 2.0, and paints on the HTML5 canvas (which, in the presence of proper support for hardware compositing, should allow for complex interfaces and effects). The problem is that browser support is still immature – most browsers are still experimenting with an accelerated compositing pipeline right now, and there are still lots of problems to be solved before the platform can be considered stable. However, once the basic infrastructure is done, there is no reason not to see things like the current state-of-the-art demos on the web; modern in-browser Javascript JITs are good enough for action and scripting, and web workers and web sockets are stable enough to create complex, asynchronous event models. It will probably take an additional year until 3D support is good enough to see something like WoW inside a browser.
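
As a small illustration of how WebGL sits on top of the HTML5 canvas, here is a hedged sketch of context acquisition (in TypeScript; early, experimental implementations exposed the context under the “experimental-webgl” name, hence the fallback):

```typescript
// Hedged sketch: obtain a WebGL rendering context from an HTML5 canvas.
const glCanvas = document.createElement("canvas");
const gl = (glCanvas.getContext("webgl") ||
            glCanvas.getContext("experimental-webgl")) as WebGLRenderingContext | null;

if (gl) {
  // The canvas is now painted through the GL pipeline rather than the 2D API.
  gl.clearColor(0.0, 0.0, 0.0, 1.0); // opaque black
  gl.clear(gl.COLOR_BUFFER_BIT);
} else {
  console.log("WebGL is not available in this browser");
}
```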

Local binary execution: for those things that actually cannot be done by a browser, local execution is the only alternative. For example, having a complex VPN client embedded in a web page would simplify the task of connecting to a web (or non-web) service without downloading any additional package. This model was demonstrated by Google in its ChromeOS presentation, showing off a game based on the Unity web player, ported to the Native Client (NaCl) environment. The problem with the initial implementation of NaCl was that the binary was not portable across CPU architectures; the new PNaCl (Portable NaCl) uses the incredibly good LLVM infrastructure to generate portable bytecode.

Payment: one thing that is sorely missing, or at best incomplete, is billing and payment management from within the web application. On iOS, thanks to iTunes and carrier integration, paying even in-game or in-app is easy and immediate. There is, at the moment, no similar ease of use and instant monetization within web applications. One of the missing pieces is the overall management of digital identities, which is inextricably linked to the payment options and channels.

DRM: yes, DRM. Or content protection, or whatever. Despite the clear indication that DRM schemes do not work, there is no shortage of studios or content producers that want to ensure at least a minimal form of “protection” against unwanted use. I don’t believe that this form of protection is useful at all, but I am not confident that people will accept this view within the next 5 years – and this means that DRM should be possible in the context of the browser. Possible alternatives are the use of a ported content execution engine (imagine a video player based on PNaCl that brings its own DRM engine along), or integrating an open source DRM engine like DReaM (if it survives the Oracle changeover, that is). This kind of tool could also help prevent cheating in online games (imagine a WoW-like game based on Javascript: what prevents the user from changing the code on the fly with something like GreaseMonkey?) and other multiplayer environments.

App stores: what is an app store? A tool to reduce search costs and, in the Apple iTunes model, a framework for app management and backup. In a sense, something like this is possible right now with some browser/OS integration (the excellent Jolicloud has something like that today, and with some additional support for web packaging formats and remote synchronization like Mozilla Sync this could become ubiquitous).

What do you think? Is there something else missing? Comments are, as usual, welcome…


Windows Phone 7, Android, and market relevance

Updated: despite the Business Insider claims, the list of motives is actually a perfect copy of those mentioned by Steve Ballmer in a CNN interview, and I also found that the list of motives for the claimed inferiority of Android actually dates from 2008, as can be found here. I find it quite funny that basically the same motivations apply two years later to a different OS (in 2008 it was Windows Mobile 6.5, a totally different operating system), and are quite similar to the list of motivations from MS to avoid open source – namely, inferior user experience, hidden costs and IPR risks. Maybe Microsoft has not changed as much as it would like to claim.

A recent Business Insider post provided, apart from a nicely retouched photo of Google’s Schmidt with menacing red eyes, a snippet of conversation with an anonymous MS employee who claimed that the Android “free” OS is not free at all, and that its costs are much higher than the $15 asked by Microsoft as licensing fees. Having had my stint in mobile economics, I would like to contribute some thoughts on what is actually implied by the MS employee, and why I believe that some parts of it are not accurate. Before flaming me as a Google fanboy, I would like to point out that I am not affiliated with Google, MS, or anyone else (apart from my own company, of course), and my cellphone is a Nokia. Enough said.

OEMs are not using the stock Android build. All Android OEMs are bearing costs beyond “free.” That goes with the definition of an OEM – it is hardly a surprising idea. My gripe with the phrase is that the author has, conveniently, conflated the concept of “free” as in “freely available operating system” with “free as in I have nothing to do, everything is done for me for free”. The second concept is actually quite uncommon, and I have never met an OEM product manager who believed anything like that. It reminds me a lot of the old taglines used in the infamous MS “comparisons” that were – with blessings from all – removed from the Microsoft web site. So, in conclusion: yes, you will bear costs other than downloading Android from Git. And – surprise – I am sure MS will ask for engineering fees for any adaptation of WinPhone7 outside the stock image.

Lawsuits over disputed Android IP have been costly for Android OEMs. (See Apple/HTC, as just one example.) Microsoft indemnifies OEMs who license Windows Phone 7 against IP issues with the product. That is, legal disputes over the IP in Windows Phone 7 directed at OEMs will be handled by Microsoft. This goes a long way toward controlling legal costs at the OEM level. Ah, please, Microsoft – you are such a friend of OSS, and you still drum the “IPR violation” song? Anyway, I am quite sure that indemnification can be acquired quite easily, probably from Google or from a third party. It depends on the kind of IPR the OEM itself holds; in some cases such a patent safety scheme is uneconomical. It is, in any case, a business decision – Symbian did not have indemnification either (or only as an additional product), but that did not stop Symbian from becoming the most widely used mobile OS.

Android’s laissez faire hardware landscape is a fragmented mess for device drivers. (For background, just like PCs, mobile devices need drivers for their various components—screen, GPS, WiFi, Bluetooth, 3G radio, accelerometer, etc.) Android OEMs have to put engineering resources into developing these drivers to get their devices working. The Windows Phone 7 “chassis strategy” allows devices to be created faster, saving significant engineering cost. It’s essentially plug and play, with device drivers authored by Microsoft. This (apart from the clearly pejorative mention of a “fragmented mess”) is naturally true. It is also – another surprise – the reason for Windows’ success, namely the external ecosystem of hardware devices, mostly unpredictable, that was basically developed and managed outside of Microsoft’s control. After much bashing of Apple’s “walled garden”, Microsoft now seems to imply that the same model that brought it success is useless, and that to win in mobile you have to adopt Apple’s centrally managed hardware experience. That may be true, or not – but I suspect that hardware manufacturers will be happier to create many permutations and device models, designed for different price points and different users, in a way that would be incompatible with MS central control and central device driver development. What happens if I need to push to market a device that deviates from the MS chassis? Will MS write the driver for me, for free? What if it doesn’t want to write it? The chassis model is nice if you are Apple and are selling basically a single model (or a few); if you are going to market with many hardware vendors, you are forcing the same, undifferentiated hardware on all OEMs – and that is a great no-no. How are you going to compete against rivals that employ exactly the same model, the same bill of materials, the same procurement channel?

Also, this phrase is a clear indication that someone inside MS still doesn't understand what (real) open source is about. The amount of engineering necessary to create a complex product out of OSS is substantially lower than for proprietary alternatives, as I demonstrated here and here; the driver development effort can easily be shared among many different projects that use the same component, lowering development costs substantially.

Windows Phone 7 has a software update architecture designed to make it easy for OEMs to plug-in their custom code, independent of the OS code. We’ve seen the delays due to Android OEMs having to sink engineering resources into each and every Android update. Some Android OEMs skip updates or stop updating their less popular devices. Because of the unique update architecture, Windows Phone 7 OEMs don’t need to roll their own updates based on the stock build. Costs are reduced significantly. This is another part that is difficult to judge until Phone 7 is out. I believe it stems from an underlying error: OEMs add code to differentiate and to push branded apps and services, not because they have to compensate for missing OS functionality (especially now, with Android 2.2; Android 1.5 and 1.6 did need some third-party additions because of missing features). Carriers, once they have sold a device, are not that interested in providing updates – after all, you are already locked into a contract. I have seen no official documentation on why Phone 7 should be so modular that no engineering is needed even for custom layers on top of the user interface – we will see.

Android OEMs need to pay for licenses for many must-have features that are standard in Windows Phone 7. For example, software to edit Office documents, audio/video codecs (see some costs here), or improved location services (for this, Moto licenses from Skyhook, just as Apple once did). Of course, all of these license fees add up. I like the concept of “must have” – it is widely different for every user and company. For example, I am sure that using Google Docs or Zoho (or Microsoft's web Office, which is quite good in its own right) would cover the “edit Office documents” part; as for the audio/video codecs, of course you have to license them… unless you use WebM or similar. Or, like many OEMs, you are already a licensee for H.264 and the other covered standards – in which case you pay around $1 per device. As for other services: I found no mention of location services from MS, at least not in the public presentations. If anyone has more details on them, I would welcome any addition.

Windows Phone 7 supports automated testing. Android doesn’t. When OEMs hit the QA phase of the development lifecycle, it’s faster and less expensive to QA a Windows Phone 7 device than an Android device. Again: if you have a single chassis, or a few of them, testing is certainly easier. However, there are quite a few testing suites for Android that provide (through the emulator) very good automated testing facilities.

Finally, Windows Phone 7 comes with great user experiences in the Metro UI, Zune, Xbox LIVE, Exchange, and Visual Studio for app development. Creating these experiences for Android is costly. They’re not baked into the stock build of Android. Well, there are quite a few tools for app development on Android as well. How, exactly, Exchange should count as a great user experience is something I do not quite understand, but that is probably a limitation of mine.

In synthesis, the new MS concept is “we do it like Apple”. I am not sure this can work for anyone that is not Apple, though; first of all, because up to now product engineering excellence was not among MS's most touted virtues, and because it will in turn go against the differentiation trend that OEMs and telcos are pushing to make sure that their brand lines remain unique and appealing. How many Phone 7 devices can a telco carry? One? Two? It is possible to imagine a custom Android device for every price point instead – some manufacturers like Motorola and HTC are already pushing five, six devices and more, and low-cost handsets are adding even more to the segmentation mix.


OSS 4.0 and licenses: not a clear-cut choice

The (always great) Matthew Aslett posted today about some of his most recent results on the future of OSS licensing, in what he calls “Open Source 4.0”, characterized by corporate-dominated development communities. This form of evolution was one of the predictions in my previous posts – not for ethical or community reasons, but for entirely practical and economic ones: collaborative development is one of the strongest models across all the 11 basic components that we identified in the FLOSSMETRICS group. In fact, I wrote in the past something like:

Many researchers are trying to identify whether there is a more “efficient” model among all those surveyed; what we found is that the most probable future outcome will be a continuous shift across models, with a long-term consolidation of development consortia (like Symbian and Eclipse) that provide strong legal infrastructure and development advantages, and product specialists that provide vertical offerings for specific markets

which, I believe, matches Matthew's idea about OSS 4.0 quite well. One area where I (slightly) disagree with Matthew is licensing; I am not totally sure about the increased success of non-copyleft licenses in this next evolution of the open source market. Not because I believe that he is wrong (I would never do that – he is too nice :-) ) but because I believe that there are additional aspects that may introduce some differences.

The choice of an open source license for a project's code release is not clear-cut, and depends on several factors; in general, when reusing code that comes from external projects, license compatibility is the first, major driver in license selection. Licenses do have an impact on development activity, depending on the kind of project and on who controls the project's evolution. Previous studies showing that restrictive, copyleft licenses have a negative impact on contributions (for example Fershtman and Gandal, “Open source software: motivation and restrictive licensing”) have been refuted by other researchers (Stewart, Ammeter, Maruping, “Impacts of License Choice and Organizational Sponsorship on User Interest and Development Activity in Open Source Software Projects”). An interesting result of that research is the following graph:

[Graph: development activity by license restrictiveness and sponsor type (no sponsor / market sponsor / nonmarket sponsor)]

What we found is that for nonmarket sponsors and new code, there is higher development activity from outside partners for code that is released under a non-copyleft license. But this implies that the code is new and not encumbered with previous license obligations, as it would be when reusing, for example, an existing copyleft-licensed project. The graph shows the impact on development activity in open source projects, depending on license restrictiveness and on the kind of “sponsor”, that is, the entity that manages a project. “No sponsor” means a project managed by a non-coordinated community, for example by volunteers; “market sponsor” means a project coordinated by a company, while “nonmarket sponsor” means a project managed by a structured organization that is not inherently for-profit, like a development consortium (an example is the Eclipse Foundation). The research identified a clear combined effect of how the project is coordinated and the kind of license; license restrictiveness was found to be correlated with decreased contributions for nonmarket sponsors, like OSS foundations, and this is in general related to the higher percentage of “infrastructural” projects (like libraries, development tools, enabling technologies) at such foundations.

In general, license selection follows from the main licensing and business model constraints (a schematic decision sketch follows the list):

  • When the project is derived from an external FLOSS project, the main constraint is the original license. In this case, the basic approach is to find a suitable license among those compatible with the original license, and to select among the possible business models the one that is consistent with the chosen exploitation strategy.
  • When one of the partners has an Intellectual Property Rights licensing policy that conflicts with a FLOSS license, the project can select an MIT or BSD license (if compatible with a possible upstream release) or use an intermediate releaser; in the latter case there are no constraints on license selection. If an MIT or BSD license is selected, some models are difficult to apply: for example, Open Core and dual licensing are difficult to implement because these licenses lack the reciprocity of copyleft.
  • When there are no external licensing constraints and external contributions are important, the license can be selected more or less freely; for nonmarket entities, a non-copyleft license gives a greater probability of contributions.
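
The three constraints above can be compressed into a schematic decision function; this is my own rendering of the bullets in TypeScript, not a formal selection methodology:

```typescript
// My own schematic rendering of the three constraints above; not a formal
// selection methodology, just the decision flow made explicit.
function selectLicenseApproach(project: {
  derivedFromFloss: boolean;       // reuses code from an external FLOSS project?
  iprPolicyConflict: boolean;      // partner IPR policy conflicts with FLOSS licensing?
  externalContributionsMatter: boolean;
}): string {
  if (project.derivedFromFloss) {
    // The upstream license dominates every other consideration.
    return "choose among licenses compatible with the original one";
  }
  if (project.iprPolicyConflict) {
    // Permissive licensing or an intermediate releaser; note that Open Core
    // and dual licensing become hard without copyleft reciprocity.
    return "MIT/BSD (if compatible with any upstream release), or use an intermediate releaser";
  }
  // No external constraints: free choice; for nonmarket entities a
  // non-copyleft license raises the probability of outside contributions.
  return project.externalContributionsMatter
    ? "prefer a non-copyleft license"
    : "free choice";
}

// Example: a consortium project, unencumbered, seeking contributions.
console.log(selectLicenseApproach({
  derivedFromFloss: false,
  iprPolicyConflict: false,
  externalContributionsMatter: true,
})); // -> "prefer a non-copyleft license"
```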

So, if you are creating a nonmarket entity and you are free to choose: choose a non-copyleft license. In the other situations it is not so simple, and it may even be difficult to avoid previous licensing requirements.

The point about intermediate releasers requires some additional consideration. An especially important aspect of OSS licenses relates to “embedded IPR”, that is, the relationship of the released code with software patents that may be held by the releasing authority. The debate on software patents is still not entirely settled, with most OSS companies vigorously fighting the practice of patenting software-based innovations while, on the other hand, large software companies defend it (for example SAP); still, most open source licenses explicitly mention that software patents held by the releasing authority are implicitly licensed for use with the code. This means that business practices that rely on separate patent licensing may be incompatible with some specific OSS licenses, in particular the Apache License and the GPL family of licenses. The Eclipse Public License gives patent grants to the original work and to enhanced versions based on the original work, but not to code not directly derived from the release, while permissive licenses like BSD and MIT give no patent rights at all.
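
For quick reference, the per-license patent behavior just described can be condensed as follows (my own summary of the paragraph above; check the actual license texts before relying on it):

```typescript
// My own condensation of the paragraph above; check the actual license
// texts before relying on this summary.
const patentGrants: Record<string, string> = {
  "Apache License":         "patents held by the releaser are licensed for use with the code",
  "GPL family":             "patents held by the releaser are licensed for use with the code",
  "Eclipse Public License": "grant covers the original work and enhanced versions based on it only",
  "BSD / MIT":              "no patent rights granted at all",
};
```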

If, for compatibility or derivation reasons, a license that explicitly grants IPR rights must be selected, and the company or research organization wants to keep the right to use its IPR in a license-incompatible way, a possible solution is the use of an intermediate releaser: an entity that holds no IPR of its own, to which the releasing organization gives a copy of the source code for further publication. Since the intermediate releaser has no IPR, the license clauses that require patent grants are not activated, while the code is published under the required license; this approach has been used, for example, by Microsoft for some of its contributions to the Apache POI project.

This may become an important point of attention for companies interested in releasing source code under an OSS license; most software houses are still interested in maintaining their patent portfolios, and are not willing to risk invalidation through “accidental licensing” of IPR embedded in source code (one of the reasons why Microsoft will never sell a Linux-based system).

As I wrote at the beginning, for a large number of consortia there is a clear preference for non-copyleft licenses; but it is not possible to generalize: the panorama of OSS is so complex right now that even making predictions is difficult.


Oracle, Sun, Java: lawsuits mark the exit road

I already wrote a few words on the Oracle/Google lawsuit here and here, and I would like to thank all those who found them interesting enough to read and comment on. I recently found a very interesting post by Java author extraordinaire James Gosling, where he answers some of his readers’ comments. In the post there are many interesting ideas, and a few points that I believe are not totally accurate – or, better, that may be explained in a different way. In particular, I believe that the role of Java in the enterprise will remain but will become “legacy”, that is, stable, plain and boring, while the real evolution will move from Java to… something else.

James clearly points out that JavaME fragmentation was a substantial hurdle for developers, and believes that, to a lesser degree, this may be true for Android as well. While it is true that fragmentation was a problem for Java on mobile, this was a common aspect of mobile development at the time (go ask a Windows Mobile developer about fragmentation. And see a grown man cry, as the song says). The problem of JavaME was not fragmentation, but lack of movement – the basic toolkits, the UI components, and most of the libraries remained, for one reason or another, largely unchanged apart from a few bug fixes. JavaFX should have been promoted much, much earlier, and would have had a great impact on software development, like (I believe) the more recent Qt releases from Nokia and their idea of declarative user interfaces.

If we compare this with the rest of Java, we see a much stronger push towards adding libraries, components, and functionality: all things that made Java one of the best choices for software developers in the enterprise space, because developers could trust Sun to update and extend the platform, making their job easier and faster. It was the same approach that made Microsoft the king of software: create lots of tools and libraries for developers, sometimes even pushing more than one approach at a time to see what sticks (like Fahrenheit), or trying very experimental, skunkworks approaches that are later turned into more mature projects (like WinG). JavaEE and JavaSE followed the same model, with a consistent stream of additions and updates that created confidence in developers – and, despite all the naysayers, enterprise Java was portable with very little effort, even for very large applications.

JavaME was not so lucky; partly to guarantee uniform licensing, Sun was forced to do everything on its own (a striking difference from Android, which – if you check the source code – includes tons of external open source projects), limiting the attainable rate of growth. Some features that we now take for granted (like web browsing) were not included by default, or were implemented by vendors in inconsistent ways because Sun never gave guidance on the roadmap and product evolution; multimedia was mostly an afterthought, usually forcing developers to create (or buy) external libraries to implement anything more complex than a video or audio player. As I wrote before: JavaFX should have been announced much, much earlier, and not as a reactive answer to the competition, but as part of a long-term roadmap of the kind that JavaEE had and the rest of Java missed.

This is, in my opinion, one of the real reasons for the lawsuit: Sun (now Oracle) was unable to create and maintain a real roadmap outside of JavaEE (and partly JavaSE), and especially in JavaME it constantly followed – never led. This, as any developer will tell you, is never a good position: it’s full of dust and you miss all the scenery. So, since Oracle is really more interested in its own markets (the database and the applications) and does not really care about software developers, ecosystems or openness, it probably believes that lawsuits have a better return on investment.


OpenWorldForum 2010: Join me in the OSS governance session!

I will have the opportunity to present our most recent results on best practices for OSS adoption at the Open World Forum governance session, moderated by Martin Michlmayr (HP community manager) and Matteo Melideo (QUALIPSO project consortium leader). The program is available here, and packs in a substantial amount of high-quality talks. I hope to see you there!

The Open World Forum is the premier global summit meeting bringing together decision-makers from across the world to discuss the technological, financial and social impact of open technologies, and to cross-fertilize ideas and initiatives in these areas. At the hub of the Free/Open Source revolution, the event was first staged in 2008, and takes place every year in Paris with more than 140 speakers from 40 countries, a 1,500-strong international audience and numerous conferences, workshops and think-tanks. The 2010 Open World Forum will be held on 30 September and 1 October, under the banner of “Open is the future: Open Innovation – Open Enterprise – Open Society”. Organized by a unique network of partners including the main Free/Open Source communities and most of the leading IT players worldwide, the Open World Forum is a must-attend event to discover the latest trends in open technology, business and social issues and to explore the future of Free/Open Source initiatives. It also offers a unique opportunity to share insights and best practices with many of the most respected visionaries, entrepreneurs and community leaders, and to network with technology gurus, CxOs, analysts, CIOs, researchers, government leaders and investors from six continents. To request an invitation, please visit http://www.openworldforum.org


EveryDesk beta3 released – now available as a VirtualBox image!

I am quite happy to announce the release of the third beta of our EveryDesk flash-based desktop, now available in VirtualBox format as well – so you can try it out without needing a USB key. EveryDesk is a reinterpretation of the Linux desktop. It runs from a 4GB USB key, and allows the user to run a modern and efficient Linux desktop on most PCs without the need to change or remove the native operating system, such as Windows. Designed to be used in public administrations or as an enterprise desktop, EveryDesk is a real OS on a USB key, not a live CD, and as such allows for extensive customization and adaptation to each public administration's needs. It is the result of the open sourcing of parts of the Conecta HealthDesk system, designed using the results of our past European projects: COSPA (a large migration experiment for European public administrations), SPIRIT (open source health care), OpenTTT (OSS technology transfer) and CALIBRE (open source for industrial environments).

There are more than 120 changes from the previous edition; among them, all the medical applications are now integrated in the same image – so there is no need for a separate edition for health care applications. Among the updates:

  • Latest edition of the DICOM browser for hospitals and medical applications; now supports per-user monitor calibration.
  • Integrated medical dictionary in OpenOffice.org
  • Integrated the After the Deadline OpenOffice grammar checker
  • LikeWise 6 Active Directory integration tool
  • A fast, efficient and very capable RDP, NX and VNC connection manager: Remmina, based on FreeRDP
  • The latest VirtualBox
  • Several ancillary additions, like a large complement of fonts

To facilitate the final bug fixing, we made the boot process visible – this will be reverted to a silent boot as soon as final testing is completed. As usual, you will find the images on our SourceForge page.
