Posts Tagged FLOSS

Open Source as a differentiator?

What is an “open source company”? What is the real differentiation element introduced by Open Source? These and more questions were raised by a great post by Matthew Aslett (if you don’t follow him, go and follow now. I’ll wait. Yes, do it. You will thank me later.), called “The decline of open source as an identifying differentiator”. It is an excellent analysis of how companies have mostly stopped using the term “open source” in their marketing materials, and it has a follow-up (here) that summarizes the main responses from other analysts and observers.

The post raises several interesting points, and in my opinion provides a great basis for a more general discussion: what is the difference introduced by open source? Is there a difference at all?

Let’s start with an observation of the obvious: the use of open source to build software is now so widespread that it is not a differentiating element anymore. There go the various “built on open source components” claims of some companies – practically all companies use open source inside. It’s simply not a difference. So, let’s start with the real differentials between OSS and proprietary software:

The licensing. An open license may introduce a difference for the adopter. This means that if a company uses such a differential, it must provide value that derives from the intrinsic properties of open source as a legal framework: for example, independence from the supplier (at least, theoretically…), both in the case of a provider change and in terms of adding or integrating additional components, even if the original company disagrees.

The development model. The collaborative development model is not a certainty – it arises only when there is a clear infrastructure for participation. When it does happen, it is comparatively much faster and more efficient than the proprietary and closed model. For this to be a real differentiator, the company must engage in an open development model, and this is actually happening only in a very small number of cases.

In general, the majority of the companies we surveyed in FLOSSMETRICS now have a limited degree of differentiation when compared to their peers, and even as a “signal”, open source is now no more interesting than other IT terms that have entered the mainstream (we can discuss further whether “cloud” will disappear into the background as well…). Of the companies we surveyed, I would say that those we originally marked as “specialists” are the ones most apt to still use “open source” as a differentiating term, and the “open core” ones the least (since they neither reap the advantages of a distributed development model, nor does the adopter reap the advantages of open source licensing). A potential difference may arise for development tools or infrastructures, where open source is a near necessity; in this case, the natural expectation will be for the platform to be open – thus not a differentiating element any more.



OSS 4.0 and licenses: not a clear-cut choice

The (always great) Matthew Aslett posted today some of his most recent results on the future of OSS licensing, in what he calls “Open Source 4.0”, characterized by corporate-dominated development communities. This form of evolution was one of the predictions in my previous posts – not for ethical or community reasons, but for entirely practical and economic reasons: collaborative development is one of the strongest models among the 11 basic components that we identified in the FLOSSMETRICS group. In fact, I wrote in the past something like

Many researchers are trying to identify whether there is a more “efficient” model among all those surveyed; what we found is that the most probable future outcome will be a continuous shift across models, with a long-term consolidation of development consortia (like Symbian and Eclipse) that provide strong legal infrastructure and development advantages, and product specialists that provide vertical offerings for specific markets

which, I believe, matches quite well Matthew’s idea about OSS4.0. One area where I am (slightly) in disagreement with Matthew is related to licensing; I am not totally sure about the increased success of non-copyleft licenses in this next evolution of the open source market. Not because I believe that he is wrong (I would never do that – he is too nice :-) ) but because I believe that there are additional aspects that may introduce some differences.

The choice of an open source license for a project code release is not clear-cut, and depends on several factors; in general, when reusing code that comes from external projects, license compatibility is the first, major driver in license selection. Licenses do have an impact on development activity, depending on the kind of project and on who controls the project’s evolution. Previous studies showing that restrictive, copyleft licenses have a negative impact on contribution (for example Fershtman and Gandal, “Open source software: motivation and restrictive licensing”) have been refuted by other researchers (Stewart, Ammeter, Maruping, “Impacts of License Choice and Organizational Sponsorship on User Interest and Development Activity in Open Source Software Projects”). An interesting result of that research is the following graph:

[Figure: development activity in open source projects, by license restrictiveness and sponsor type]

What we found is that for non-market sponsors and new code, there is higher development activity from outside partners for code that is released under a non-copyleft license. But this implies that the code is new and not encumbered by previous license obligations, as happens for example with the reuse of an existing, copyleft-licensed project. The graph shows the impact on development activity in open source projects, depending on license restrictiveness and the kind of “sponsor”, that is, the entity that manages a project. “No sponsor” marks projects managed by a non-coordinated community, for example by volunteers; “market sponsor” marks projects coordinated by a company, while “nonmarket sponsor” marks projects managed by a structured organization that is not inherently for-profit, like a development consortium (an example is the Eclipse Foundation). The research data identified a clear effect of how the project is coordinated combined with the kind of license; license restrictiveness was found to be correlated with decreased contributions for nonmarket sponsors, like OSS foundations, and this is in general related to the higher percentage of “infrastructural” projects (like libraries, development tools, enabling technologies) within such foundations.

In general, the license selection follows from the main licensing and business model constraints (a decision sketch follows the list):

  • When the project is derived from an external FLOSS project, then the main constraint is the original license. In this case, the basic approach is to find a suitable license from those compatible with the original license, and select among the possible business models the one that is consistent with the selected exploitation strategy.
  • When one of the partners has an Intellectual Property Rights licensing policy that is in conflict with a FLOSS license, the project can select a MIT or BSD license (if compatible with any upstream release) or use an intermediate releaser; in the latter case there are no constraints on license selection. If a MIT or BSD license is selected, some models are difficult to apply: for example, Open Core and Dual Licensing are difficult to implement because these licenses lack the reciprocity of copyleft.
  • When there are no external licensing constraints, and external contributions are important, the license can be more or less freely selected; for nonmarket entities, a non-copyleft license gives a greater probability of contribution.
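
The constraints above can be read as a simple decision procedure. Here is a minimal sketch; the function name and its return strings are illustrative simplifications of the three rules, not a real compatibility matrix:

```python
def select_license(upstream=None, ipr_conflict=False,
                   external_contributions=False, nonmarket=False):
    """Toy decision procedure mirroring the three constraints above."""
    if upstream is not None:
        # Rule 1: an upstream FLOSS license dominates every other choice.
        return f"a license compatible with {upstream}"
    if ipr_conflict:
        # Rule 2: permissive licenses avoid the IPR clash, or an
        # intermediate releaser removes the constraint entirely.
        return "MIT/BSD, or any license via an intermediate releaser"
    if nonmarket and external_contributions:
        # Rule 3: for nonmarket entities seeking contributions,
        # non-copyleft correlates with higher development activity.
        return "a non-copyleft license (e.g. Apache-2.0, MIT)"
    return "free choice, driven by the business model"

print(select_license(upstream="GPLv2"))
print(select_license(ipr_conflict=True))
print(select_license(nonmarket=True, external_contributions=True))
```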

So, if you are creating a nonmarket entity, and you are free to choose: choose non-copyleft licenses. In the other situations, it is not so simple, and it may even be difficult to avoid previous licensing requirements.

The point on intermediate releasers requires some additional consideration. An especially important aspect of OSS licenses is “embedded IPR”, that is, the relationship of the released code with software patents that may be held by the releasing authority. While the debate on software patents is still not entirely settled – with most OSS companies vigorously fighting the practice of patenting software-based innovations, and large software companies (for example SAP) defending it – most open source licenses explicitly mention that software patents held by the releasing authority are implicitly licensed for use with the code. This means that business practices that rely on separate patent licensing may be incompatible with some specific OSS licenses, in particular the Apache License and the GPL family of licenses. The Eclipse Public License gives patent grants to the original work and to enhanced versions based on the original work, but not to code not directly derived from the release, while permissive licenses like BSD and MIT give no patent rights at all.

If, for compatibility or derivation reasons, a license that explicitly grants IPR rights must be selected, and the company or research organization wants to maintain the right to use its IPR in a license-incompatible way, a possible solution is the use of an intermediate releaser: an entity that holds no IPR of its own, to which the releasing organization gives a copy of the source code for further publication. Since the intermediate releaser has no IPR, the license clauses that require patent grants are not activated, while the code is published under the required license; this approach has been used, for example, by Microsoft for some of its contributions to the Apache POI project.

This may become an important point of attention for companies that are interested in releasing source code under an OSS license; most software houses are still interested in maintaining their portfolio of patents, and are not willing to risk invalidation through “accidental licensing” of IPR embedded in source code (one of the reasons why Microsoft will never sell a Linux-based system).

As I wrote in the beginning, a large number of consortia show a clear preference for non-copyleft licenses; but it is not possible to generalize: the panorama of OSS is so complex, right now, that even making predictions is difficult.



Oracle/Google: the patents and the implications

Just as LinuxCon ended, Oracle announced that it had filed suit for patent and copyright infringement against Google over its implementation of Android; as an Oracle spokesperson said, “In developing Android, Google knowingly, directly and repeatedly infringed Oracle’s Java-related intellectual property. This lawsuit seeks appropriate remedies for their infringement … Android (including without limitation the Dalvik VM and the Android software development kit) and devices that operate Android infringe one or more claims of each of United States Patents Nos. 6,125,447; 6,192,476; 5,966,702; 7,426,720; RE38,104; 6,910,205; and 6,061,520.” (some more details in the copy of Oracle’s complaint). Apart from the slight cowardice of waiting until after LinuxCon to announce it, the use of the Boies Schiller legal team (the same as SCO’s) would be ironic on its own (someone is already calling the company SCOracle).


Let’s skip the patent analysis for a moment, and focus on the reasons behind this. Clearly, it is a move typical of mature industries: when a competitor is running past you, you try to throw a wrench in its engine. It is a typical move, and one example of why doing things by the old book is wrong in this modern, collaborative world. Not only that, but I believe that previous actions by Sun made this threat clearly useless – even dangerous.

Let’s clear the table of the actual patent claims: the patents themselves are quite broad and quite generic; a good example of what should not be patented (the security domain one is a good case; look at sheet 5 and you will find the illuminating flowchart representing: do you have the rights to do it? If yes, do it; if no, do nothing. How brilliant). Also, the Dalvik implementation is quite different from the old JRE one, and I have strong suspicions that the actual Dalvik method is substantially different. But that is not important. I believe that there are two main points that Oracle should have checked before filing the complaint (but, given the use of Boies Schiller, I believe that they have still to learn from the SCO debacle): first of all, Dalvik is not Java, and Google never claimed any form of Java compatibility. Second, there is a protection for patents as well, just hidden in recent history.

On the first point: in the complaint, Oracle claims that “The Android operating system software “stack” consists of Java applications running on a Java-based object-oriented application framework, and core libraries running on a “Dalvik” virtual machine (VM) that features just-in-time (JIT) compilation”. On copyrights, Oracle claims that “Without consent, authorization, approval, or license, Google knowingly, willingly, and unlawfully copied, prepared, published, and distributed Oracle America’s copyrighted work, portions thereof, or derivative works and continues to do so. Google’s Android infringes Oracle America’s copyrights in Java and Google is not licensed to do so … users of Android, including device manufacturers, must obtain and use copyrightable portions of the Java platform or works derived therefrom to manufacture and use functioning Android devices. Such use is not licensed. Google has thus induced, caused, and materially contributed to the infringing acts of others by encouraging, inducing, allowing and assisting others to use, copy, and distribute Oracle America’s copyrightable works, and works derived therefrom.”

Well, it is wrong. Wrong because Google did not copy Java – and actually never mentions Java anywhere. In fact, the Android SDK produces Dalvik (not Java) bytecodes, and the decoding and execution pattern is quite different (which is one of the reasons why older implementations of Dalvik were so slow – they were made to conserve memory bandwidth, which is quite limited in cell phone chipsets). What Google did was to “copy” (or, for a better word, take inspiration from) the Java language; but as the recent SAS-vs-WPS lawsuit found, “copyright in computer programs does not protect programming languages from being copied”. So, unless Oracle can find pieces of documentation that were lifted verbatim from Sun’s, I believe that the copyright part is quite weak.

As for patents, a little reminder: while copyright covers specific representations (a page of source code, a Harry Potter book, a music composition), software patents cover implementations of ideas, and if the patent is broad enough, all possible implementations of an algorithm (let’s skip for the moment the folly of giving monopoly protection to ideas. You already know what I think about it); so, if in any way Oracle had, now or in the past, given full access to those patents through a transferable license, Google is somehow protected there as well. And – guess what? That really happened! Sun released the entire Java JDK under the GPLv2 plus the Classpath exception, granting with that release full rights of use and redistribution of the IPR attached to what was released. This is different from the TCK specification, which Google wisely never licensed, because the TCK license requires, in exchange for the patent grant, that development be limited to enhancements or modifications of the basic JDK as released by Sun.

But, you would say, Dalvik is independent from OpenJDK, so the patents are not transferred there. So, take the code that is touched by the patents from OpenJDK and include it within Dalvik – compile it, make a connecting shim, and include it in a way that is GPLv2-compatible. The idea (just an idea! and IANAL, of course…) is that through the release of the GPL code Sun gave an implicit license to the embedded patents that is connected with the code itself. So, if it is possible to create an aggregate entity of the Dalvik and OpenJDK code, the Dalvik part would become a derivative under the GPL, and would obtain the same patent protection as well. That would be a good use of the GPL, don’t you think?

What will be the result of the lawsuit? First of all, the open source credibility of Oracle, already damaged by the OpenSolaris affair, is now destroyed. It is a pity – they have lots of good people there, both internal and arrived through the Sun acquisition; after all, they are among the 10 largest contributors to the Linux kernel. That is something that will be very difficult to recover.

Second, Google now has a free, quite important gift: the attention has been moved away from their recent net neutrality blunder, and they are again the David of the situation. I could not imagine a better gift.

Third, with this lawsuit Oracle basically announced to the world that Java in mobile is dead. This was actually something that most people already knew – but seeing it in writing is always reassuring.

Update: Miguel de Icaza claims that “The Java specification patent grant seems to be only valid as long as you have a fully conformant implementation”, but that applies only to the Standard Implementation of Java, not OpenJDK. Sorry Miguel – nice try. More luck next time.

Update 2: cleaned the language of the phrase on patents, ideas and implementations that was badly worded.

Update 3: clarified the Dalvik+OpenJDK idea.



Estimating source-to-product costs for OSS: an experiment

One of my recurring themes in this blog is the advantage that OSS brings to the creation of new products; that is, the reduction in R&D costs through code reuse (some of my older posts: on reasons for company contribution, Why use OSS in product development, Estimating savings from OSS code reuse, or: where does the money comes from?, Another data point on OSS efficiency). I already mentioned the study by Erkko Anttila, “Open Source Software and Impact on Competitiveness: Case Study”, from Helsinki University of Technology, where the author analysed the degree of reuse done by Nokia in the Maemo platform and by Apple in OSX. I have done a little experiment on my own, by asking IGEL (to which I would like to express my thanks for the courtesy and help) for the source code of their thin client line, and by inspecting the published Palm source code (available here). Of course it is not possible to inspect the code of the proprietary parts of either platform; but through some unscientific drill-down into the binaries for IGEL, and some back-of-the-envelope calculations for Palm, I believe that the proprietary parts are less than 10% in both cases (for IGEL, less than 5% – there is a higher uncertainty for Palm).

The actual results are:

  • Total published source code (without modifications) for IGEL: 1.9GB in 181 packages; total amount of patch code: 51MB in 167 files (the remaining files are not modified). Average patch size: 305KB, Patch percentage on total published code: 2.68%
  • Total published source code (without modifications) for Palm: 1.2GB in 106 packages; total amount of patch code: 55MB in 83 files (the remaining files are not modified). Average patch size: 664KB, Patch percentage on total published code: 4.58%
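
As a quick sanity check, the averages and percentages follow directly from the reported sizes. A back-of-the-envelope sketch (decimal units and rounded input sizes, so the last digit can differ slightly from the figures above):

```python
# (total published source in MB, patch code in MB, patched files)
platforms = {
    "IGEL": (1.9 * 1000, 51, 167),
    "Palm": (1.2 * 1000, 55, 83),
}

for name, (total_mb, patch_mb, files) in platforms.items():
    avg_kb = patch_mb * 1000 / files      # average patch size
    share = patch_mb / total_mb           # patch share of published code
    print(f"{name}: average patch {avg_kb:.0f}KB, patch share {share:.2%}")

# IGEL: average patch 305KB, patch share 2.68%
# Palm: average patch 663KB, patch share 4.58%
```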

If we add the proprietary parts and the modified code, we end up in the same approximate range found in the Maemo study, that is, around 10% to 15% of code that is either proprietary or modified OSS directly developed by the company. IGEL reused more than 50 million lines of code, and modified or developed around 1.3 million lines of code. Without OSS, that would have cost more than 2B$, and required a full staffing of more than 700 people for an effort duration of more than 20 years. Through OSS, the estimated cost (using the more appropriate semidetached model) is around 90M$, with an average staffing of 150 people and an estimated project duration of 5 years. Palm has a similar cost (the amount of modified code is quite similar), but starts from a smaller amount of reused code (to recode everything would still require 12B$, 570 people and 18 years of work). We have to add some additional costs (for an explanation you can check my previous post on the proper use of COCOMO II and OSS, using the model by Abts, Boehm and Bailey) that would bring the total cost to a little less than 100M$ (still substantially less than the full cost of development from scratch).
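
For readers who want to reproduce the order of magnitude of these numbers, here is a minimal sketch using the classic basic-COCOMO coefficients (organic mode for the from-scratch estimate, semidetached for the derived product) and an assumed fully loaded cost of $10,000 per person-month; the post’s final figure also includes the Abts-Boehm-Bailey COTS integration costs, which are not modeled here:

```python
def cocomo(kloc, a, b, c=2.5, d=0.35, cost_per_pm=10_000):
    """Basic COCOMO: effort (person-months), schedule (months),
    average staffing and total cost for a project of `kloc` KLOC."""
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule, effort / schedule, effort * cost_per_pm

# Rewriting the ~50 MLOC of reused OSS from scratch (organic mode):
print(cocomo(50_000, a=2.4, b=1.05, d=0.38))
# -> ~206,000 PM, ~260 months (~21 years), ~790 people, ~$2.1B

# The ~1.3 MLOC actually modified or developed (semidetached mode):
print(cocomo(1_300, a=3.0, b=1.12, d=0.35))
# -> ~9,200 PM, ~61 months (~5 years), ~150 people, ~$92M
```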

Open Source made it possible to create a derived product (in both cases of substantial complexity) while reducing the cost of development to 1/20, the time to market to 1/4, and the total staff necessary to around 1/4, and in general it reduces the cost of maintaining the product after delivery. I believe that it would be difficult, for anyone producing software today, to ignore this kind of result.

Addendum: I received some requests for specific parts of the source code from people willing to check the kind of modifications performed. For Palm, the website provides both the original source code and the patches. For IGEL, I requested access to the source code, and was kindly provided with a username and password to download it. Since the single most requested file seems to be the modified rdesktop, I have linked the GPL sources here.



ChromiumOS: a look in the code, and in the model (updated)

The release of Google ChromiumOS was an event awaited by industry analysts with significant anticipation, and the overall impression after the announcement was that it went out as a fizzle, not a bang. Most comments were centered on the obvious shortcomings of this first pre-alpha release: the significant limits in the supported hardware, the reliance on networking for everything (especially the initial login), the over-reliance on Google services. And all the comments are right – and, at the same time, based on a general misperception of what can be a potential competitor for the most visible part of the IT infrastructure, namely the traditional desktop PC. I had the opportunity to explore the code, build my own version, and in general evaluate the release in the context of the UTAUT model of technology adoption, and I believe that the approach is sound and sensible, and will change the market even if it fails.

The first misconception is the idea that ChromiumOS was designed as a desktop OS competitor, despite previous comments from Google spokespersons that the release would be targeted at a different market. The reality is that, even in ideal conditions and with technology prevalence (that is, the new technology being invariably and clearly superior to the old one), in the presence of strong network effects and market prevalence NO alternative can supplant the incumbent in a short period of time; it can only grow its market in small percentage increments. This is especially true if the incumbent has pricing flexibility, that is, it can lower prices to fight back against economic advantages, moving the dead loss to some other market sector where there is less competition. This is what happened in the netbook market, where the possible loss of market space to Linux alternatives was thwarted by lowering the price point of the offered operating system. With ChromiumOS Google makes a technological bet that is a clear continuation of its overall strategy, and one that has a serious potential to materialize.

It is not a desktop operating system. Desktop OSes are full-featured and flexible, and allow for unlimited installation of applications; ChromiumOS, on the other hand, is a thin shell designed to run the Chrome browser as a single application. So, everyone expecting Google to save the idea of the Linux desktop has missed the fundamental point: it is not possible for anyone to fight for the desktop and win in a short amount of time, and without a massive monetary investment. But it is always possible to create a new market, and that’s exactly what Google is trying to do; similarly, when Apple launched the iPhone, very few believed that it would reach any substantial market share, forgetting that the iPhone was not a phone but an execution platform – something different from all the previous smartphones, for which apps and web browsing were at most an afterthought. ChromiumOS resembles Moblin in this aspect (and shares much code with it), but in an even more radical way.

It requires little or no maintenance and support. What is the single highest source of costs for PCs? Management and support. OS patching and installation/reinstallation, fixing applications, installing and removing apps, checking for malware, identity management… the list can go on forever. The real innovation in ChromiumOS is the use of an upgradeable read-only code frame, clearly mimicking set-top boxes that can upgrade themselves OTA (over the air), for example from a satellite channel. ChromiumOS is capable of managing this upgrade in a transparent and secure way, safely handling interruptions and attacks. This, coupled with a totally encrypted local store, means that the hardware can effectively be thought of as a purely ephemeral device that can be substituted with limited configuration needs, and that large numbers of devices can be upgraded and managed without human intervention and in total security. Applications are embedded in web pages, and managed as web pages; so the maintenance and training requirements are limited.
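
A minimal sketch of the dual-partition (“A/B”) update pattern that makes this kind of unattended, interruption-safe upgrade possible; the names and the bare hash check are illustrative, not the actual ChromiumOS updater, which also verifies a full signature chain:

```python
import hashlib

def apply_update(image: bytes, expected_sha256: str, state: dict) -> dict:
    """Stage the new image on the inactive root partition and flip the
    boot flag only after verification, so a corrupted or interrupted
    download leaves the running system untouched."""
    inactive = "B" if state["active"] == "A" else "A"
    state["partitions"][inactive] = image          # 1. stage
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        return state                               # 2. verify failed: no change
    state["active"] = inactive                     # 3. atomic commit
    return state

state = {"active": "A", "partitions": {"A": b"v1", "B": b""}}
new = b"v2"
state = apply_update(new, hashlib.sha256(new).hexdigest(), state)
print(state["active"])  # "B": next boot runs the verified new image
```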

It is not really tied to Google. Of course, in this first release it heavily uses Google services for everything; but changing that is trivial. The authentication part is managed by a PAM module that can be easily swapped, and login completion (which actually turns your login name into a Gmail account) is just a small modification of the SLiM login manager used by the OS to perform the initial login, and can be changed with a few lines of code. The same goes for the application list (the first icon on the top left of the screen), which is merely a hardwired URL – replace it with your own portal address, and you get the same result without using Google. The only part that requires some work is the integration of Google SSO (through a complex cookie exchange mechanism); augmenting that with something like OpenSSO from Sun would not require more than a few days of work anyway.

It is not a SplashTop clone. There are several Linux-based instant-on environments designed to be integrated inside a flash BIOS; the most famous is SplashTop, used in many motherboards and notebooks from Asus, Acer, HP, Sony and many others. The problem with this approach is that it is “fixed”: the image is difficult to update and upgrade, and this means that it rapidly loses appeal. ChromiumOS uses a trusted boot mechanism to ensure that upgrades are legitimate, but integrates it in a clean and smart way, making sure that users will continuously be up to date.

It does require the net most of the time, but not always. The first login requires a working connection, but then the credentials are hashed and stored in a cache wallet, which allows login even in the absence of a connection (a sketch of this caching pattern follows the application list below). If the pages allow for detached operation (using Gears, HTML5 persistent storage, or similar mechanisms), the system will work even without a connection. It is a stopgap solution, but a sensible one: most of the time spent in desktop applications is centered on online services that are unusable without a connection, so it makes sense when considering the OS as something that is not competing in the same market as a traditional PC. Local, cached web applications may provide more flexibility in this sense in the future, but much effort is still needed to make that a worthwhile path. If we consider how people spend time on the PC, we can use the data from Wakoopa, which recently published a measurement of time spent per application on Windows, OSX and Linux, and which shows that, for example, on Windows the time is spent with:

  1. Firefox (28.71%)
  2. Internet Explorer (6.88%)
  3. Google Chrome (6.62%)
  4. Windows Explorer (5.92%)
  5. Windows Live Messenger (4.25%)
  6. Opera (2.97%)
  7. Microsoft Office Word (2.51%)
  8. Microsoft Office Outlook (2.22%)
  9. World of Warcraft (1.45%)
  10. Skype (1.30%)
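
Coming back to the offline-login mechanism mentioned above: caching a salted hash of the credentials at the first online login, and verifying against that cache when the network is down, is a standard pattern. A minimal sketch (illustrative only; the real cache wallet lives in ChromiumOS’s encrypted local store):

```python
import hashlib, hmac, os

cache = {}  # username -> (salt, derived key)

def online_login_succeeded(user: str, password: str) -> None:
    """After a successful online authentication, cache a salted hash."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    cache[user] = (salt, key)

def offline_login(user: str, password: str) -> bool:
    """Verify against the cached hash when no connection is available."""
    if user not in cache:
        return False  # never logged in online: offline login impossible
    salt, key = cache[user]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, key)

online_login_succeeded("alice", "s3cret")
print(offline_login("alice", "s3cret"))  # True
print(offline_login("alice", "wrong"))   # False
```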

Apart from Microsoft Word, no other application in the list can be used without a connection; at the same time, most of these applications may be supplanted by future versions of web applications, if the evolution of HTML and related standards continues at the current pace. For games, up-and-coming standards like WebGL and O3D may provide this in a “clientless” way; this is similar to the Quake Live game, which at the moment requires an additional plug-in but may potentially be recoded using only those standards.

It integrates digital identities better than anyone else. You log in once – then everything just works. Enterprise users with large-scale SSO systems sometimes encounter this, but it is not that common for consumers and smaller companies, and it is a great productivity tool. And it is just the beginning: more sophisticated user interfaces are needed (this one, for example, would be great), but many companies (including Microsoft) are making great progress in this direction.

It introduces a different model. Desktop PCs are flexible, adaptable, usable without connectivity, complex, fragile, and difficult to manage. Thin (bitmap-based, like RDP or ICA) clients are slightly easier to manage and require no support, but require substantial infrastructure investments, cannot work detached, and have only marginally lower management costs. The model adopted by Google leverages the local computing power for rendering pages, reducing back-end costs; it is simpler to manage, requires no support, and can integrate rich functionality through plug-ins (or browser capabilities), like 3D (with WebGL and O3D) or native processing (through NaCl), but always within the context of web-delivered applications.

The future will be the final judge; after all, even if something is not successful directly, it may “seed” a future evolution that is capable of shaking the market substantially. The real impact of Negroponte’s OLPC was not the machine in itself (despite the boatloads of innovations contained within) but the re-framing of the netbook market; similarly, maybe it will not be ChromiumOS that leads the change, but I believe that it is a bold statement – in fact, much bolder than the code that was released.



See you at OMAT Rome!

I am grateful to Flavia Marzano for the invitation to be part of the roundtable on “applications and services for handling digital assets”, where I will present an overview of the tools and best practices for using open source in the context of Enterprise 2.0. It is part of OMAT360, the oldest-running conference on digital information management, started in 1990, and a wonderful opportunity to present the latest results from FLOSSMETRICS.

The conference is free, with a registration page here, and an English presentation here. I would love to use the opportunity to meet anyone who may be interested in these topics, or in OSS in general.



Why COMmunity+COMpany is a winning COMbination

There is an interesting debate, partly prompted by Matt Asay, with sound responses from Matthew Aslett, centered on the reasons for moving (or not moving) part of the core IP assets of an open source company towards an externally controlled group, like a consortium. Matthew rightly indicates that this is probably the future direction of OSS (the “4.0” of his graph), and I tried to address this with a few friends on Twitter – but 140 chars are too few. So, I will use this space to provide a small overview of my belief: the current structure based on open core is a temporary step towards a more appropriate commercialization structure, which for efficiency reasons should be composed of a community-managed (or at least transparently managed) consortium that manages the “core” of what is now the open source part of open core offerings, and a purely proprietary company that provides the monetization services, be they proprietary add-ons, paid services and so on.

Why? Because the current structure is not the most efficient at enabling participation from outside groups – if you look at the various open core offerings, the majority of the code is developed by in-house developers, while in community-managed consortia the code may originate from a single company, but is taken up by more entities. The best example is Eclipse: as recently measured, 25% of the committers work for IBM, with individuals accounting for 22%, and a large number of companies like Oracle, Borland, Actuate and many others with percentages that go from 1% to 7%, in a collective, non-IBM collaboration.

Having a purely proprietary company that sells services or add-ons also removes any possibility of misunderstanding about what is offered to the customer, and thus makes an “OSS checklist” unnecessary. Of course, this means that the direction of the project is no longer in the hands of a single company, and this may be a problem for investors – who may want some form of exclusivity, or a guarantee of maintaining control. But my impression is that there is only the illusion of control, because if there is a large enough payoff, forks will make the point moot (exactly as happened with MySQL); and by relinquishing control, the company gets back a much enlarged community of developers and potential adopters.



2020 FLOSS Roadmap, 2009 Version published

Having contributed to the new edition of the 2020 FLOSS Roadmap, I am happy to forward the announcement covering the main updates and changes of the document. I am especially fond of the “FLOSS is like a forest” analogy, which in my opinion captures well the hidden dynamics created when many different projects form an effective synergy – something that may be difficult to perceive for those not within the same “forest”.

For its first edition, the Open World Forum had launched a foresight initiative unique in the world: the 2020 FLOSS Roadmap (see the 2008 version). This Roadmap is a projection of the influences that will affect FLOSS until 2020, with descriptions of all FLOSS-related trends as anticipated by an international workgroup of 40 contributors, and highlights 7 predictions and 8 recommendations. The 2009 edition of the Open World Forum produced an update of this Roadmap, reflecting the evolution observed during the last months (see the OWF keynote presentation). According to Jean-Pierre Laisné, coordinator of the 2020 FLOSS Roadmap and of Bull’s open source strategy: “For the first edition of the 2020 FLOSS Roadmap, our ambition was to bring new light to the debate through an introspective and prospective vision. This second edition demonstrates not only that this ambition has been reached, but that the 2020 FLOSS Roadmap is actually a guide describing the paths towards a knowledge economy and society based on the intrinsic values of FLOSS.”

About the 2009 version (full printable version available here)

So far, so good: Contributors to the 2020 FLOSS Roadmap estimate that their projections are still relevant. The technological trends envisioned – including the use of FLOSS for virtualization, micro-blogging and social networking – have been confirmed. Contributors consider that their predictions about Cloud Computing may have to be revised, due to accelerating adoption of the concepts by the market. The number of mature FLOSS projects addressing all technological and organizational aspects of Cloud Computing is confirming the importance of FLOSS in this area. Actually, the future of true Open Clouds will mainly depend on convergence towards a common definition of ‘openness’ and ‘open services’.

Open Cloud Tribune: Following the various discussions and controversies around the topic “FLOSS and Cloud Computing”, this opinion column aims to nourish the debate on this issue by freely publishing various opinions and points of view. The 2009 article examines the impact of Cloud Computing on employment in IT.

Contradictory evolutions: While significant progress was observed in line with the 2020 FLOSS Roadmap, the 2009 Synthesis highlights contradictory evolutions: the penetration of FLOSS continues, but at the political level there is still some blocking. In spite of recognition from ‘intellectuals’, the alliance between security and proprietary vendors has been reinforced, and has delayed the evolution of legal environments. In terms of public policies, progress is variable. Except in Brazil, the United Kingdom and the Netherlands, which have made notable moves, no other major stimulus for FLOSS has appeared on the radar. The 2009 Synthesis questions why governments are still reluctant to adopt a more voluntary ‘FLOSS attitude’. Because FLOSS supports new concepts of ‘society’ and supports the links between technology and solidarity, it should be taken into account in public policies.

Two new issues: Considering what was published in 2008, two new issues have emerged which will need to be explored in the coming months: proprietary hardware platforms, which may slow the development of FLOSS, and proprietary data, which may create critical lock-ins even when the software is free.

The global economic crisis: While the global crisis may have had a negative impact on services-based businesses and service vendors specializing in FLOSS, it has proved to be an opportunity for most FLOSS vendors, who have seen their business grow significantly in 2009. When it comes to Cloud-based businesses, the facts point to a massive migration of applications in the coming months. Impressive growth in terms of hosting is paving the way for these migrations.

Free software and the financial system: this new theme of the 2020 FLOSS Roadmap makes its appearance in the 2009 version in order to take into account the role that FLOSS can play in a system that is currently the subject of much reflection.

Sun/Oracle: The acquisition of Sun by Oracle is seen by contributors to the 2009 Synthesis as a major event, with the potential risk that it will significantly redefine the FLOSS landscape. But while the number of major IT players is decreasing, the number of small and medium-size companies focused around FLOSS is growing rapidly. This movement is structured around technology communities and business activities, with some of the business models involved being hybrid ones.

FLOSS is like forests: The 2009 Synthesis puts forward this analogy to make it easier to understand the complexity of FLOSS through a simple and rich image. Like forests and their canopies – which play host to a rich biodiversity and diverse ecosystems – FLOSS is diverse, with multiple layers and branches both in terms of technology and of creation of wealth. Like a forest, FLOSS provides vital oxygen to industry. Like forests, which have brought both health and wealth throughout human history, FLOSS plays an important role in the transformation of society. Having accepted this analogy, contributors to the Roadmap subsequently identified different kinds of forests: ‘old-growth forests’ or ‘primary forests’, which are pure community-based FLOSS projects such as Linux; ‘cultivated forests’, which are the professional and business-oriented projects such as JBoss and MySQL; and ‘FLOSS tree nurseries’, which are communities such as Apache, OW2 and Eclipse. And finally, the ‘IKEAs’ of FLOSS are companies such as Red Hat and Google.

Ego-altruism: The 2009 Synthesis insists on the need to encourage FLOSS users to contribute to FLOSS, not for altruistic reasons, but rather for egoistical ones. It literally recommends that users help only when it benefits themselves. Thanks to FLOSS, public sector bodies, NGOs, companies, citizens, etc. have full, free and fair access to technologies enabling them to communicate on a global level. To make sure that they will always have access to these powerful tools, they have to support and participate in the sustainability of FLOSS.

New Recommendation: To reinforce these ideas, the 2020 FLOSS Roadmap in its 2009 Synthesis added to the existing list of recommendations:
Acknowledge the intrinsic value of FLOSS infrastructure for essential applications as a public knowledge asset (or ‘as knowledge commons’), and consider new means to ensure its sustainable development

Contact: http://www.2020flossroadmap.org/contact/



All the possible errors, in a single slide.

I found this slide deck from a very large and visible software company (which I will not name, leaving it as an exercise for the reader); I believe that it was created to provide a clear response to many popular misconceptions about open source software. Unfortunately, it seems to collect in a single slide most of the myths and false assumptions that I have already addressed in our past work within FLOSSMETRICS.

[Slide image: the offending open source vs. proprietary comparison]

First of all, “zero cost” is something that may be true or not – it is simply not the defining attribute of open source software. At the same time, saying that proprietary software has a “lower ongoing cost” is not generally true (and I have tons of independent confirmation of that); claiming that proprietary has more features is, as before, not universally true; saying that proprietary software maintains backward compatibility generated substantial laughter among the poor people here in the office who have to provide support to our commercial customers; and claiming that proprietary is “more secure” recalled the recent attack against DNS, which was blamed on it being poorly protected freeware.

Should I continue? Open standards, anyone? And the last one, implying that only proprietary software is based on managed development? Any commercial OSS vendor would happily dismiss this claim as untrue. Commitment to support? I believe that my three faithful readers would not encounter any difficulty in thinking of proprietary products that got bought and buried underground, or that simply got scrapped altogether.

Ah, I would happily send my guide to the author of this slide, but I believe that it would probably not change this company’s views a single bit.



OSS: the real point is software control

Ah, the morning aroma of a freshly brewed flame war… with our restless Matt Asay, who sternly observes that in the free software/open source war, open source won and we are all the better for it. Of course, this joins the ranks of those who consider Richard Stallman a relic of a past era, the thoughtful comments of my favourite thinker, Glyn Moody, and the pragmatic and reasoned views of Matthew Aslett of the 451 Group.

If there is one thing that emerges clearly from all these discussions, it is that fundamentalism is wrong. It is wrong when it is spelled “OSS is better”, and it is wrong when it claims “Microsoft is better” without any reasoning. Rational thinking should be the basis of discussion, not religion. This is not to say that religion or moral motivations are bad – but beliefs should be recognised beforehand, to avoid turning any discussion into a flame war. That’s why I can feel at ease criticizing Stallman for what I perceive as personal attacks, and at the same time recognize the fact that without him and the GPL the free software and open source world would be much less developed and relevant.

My perspective is simple: every user, developer or administrator that depends on software (and basically everyone does, today) should think before using a piece of software or a service, and understand who controls it, and, if this “who” is not the user, what can happen. It is not just a question of “religious beliefs” but of practical thinking: is the software yours? Does the service you are using give you the opportunity to move somewhere else? What happens if the developers are not going in the direction you need?

If we consider this as the basis for discussion, lots of arguments in the OSS/FS camp become much simpler. The crusade against software patents is a way of defending the end-user’s rights of use against arbitrary legal attacks; in this sense, the only real reason for not being happy about having something like Mono is not the fact that it is a Microsoft “standard”, but the fact that it is probably covered by unknown patents. The same thing applies to Flash – most people are dependent on a single company for what amounts to a platform, one still not replicated by OSS alternatives (like Gnash) and in any case potentially covered by patents held not only by Adobe, but by many other companies as well. The “victory of pragmatism” that Matt proclaims is not actually related to FS and OSS (which are the same exact thing) but to the general overcoming of emotion-based arguments, which is absolutely a positive thing.

But the “new pragmatism” should also be viewed with suspicion, exactly like the claims that free software is “better” without reason. I will make the example of Mono: now it is pushed as a way to overcome something that is equally proprietary, that is, Flash. What happens when Microsoft stops promoting it? It is OSS, so it can theoretically go on forever, but very few will risk infringing patents with it, and so it will remain more or less limited to those shops already using .NET elsewhere (thus having paid for the right of use), limiting its growth potential. The scenario is not so unbelievable after the unveiling of a real Silverlight port to Moblin, which makes Mono more or less redundant. Some “open core” systems suffer from the same problem: the user is forced, by the proprietary part, to abide by whatever decision is made by the vendor, independently of what OSS license the “open” part is licensed under.

The uncritical embrace of online services is similarly flawed: what happens if the company goes bankrupt, or discontinues the service? If you use EC2, you can always create your own infrastructure using Eucalyptus and continue your work. Can you say the same of all the services being promoted right now? Can you get a complete copy of your data and move it somewhere else?

Control is what really matters, on-premise and online: who performs such control, how it is performed, and what it may affect. You may prefer the ethical angle (like Stallman does) or the economic angle (like I do), but the end result is the same, exactly like free software and open source are the same. The critical aspect is being able to assess this control and weigh whether the lack of control is compensated by the features you get (which is reasonable), and what kind of risk you are accepting in exchange. You like the integrated set of features proposed by Microsoft? That’s good, as long as you know that some of their past actions were not exactly transparent, and that your control over their offering is very limited. You like Google? Good! Just understand what happens if Gmail does not work. You prefer open source? Good! But with the increased control you get, you also get responsibility and increased effort.

Always ask yourself: is it your software, or not? Think about it, and don’t let the question disappear from your mind, because your business may depend on it.

