Archive for category OSS adoption
What is an “open source company”? What is the real differentiation element introduced by open source? These and more questions were raised in a great post by Matthew Aslett (if you don’t follow him, go and follow him now. I’ll wait. Yes, do it. You will thank me later.) called “The decline of open source as an identifying differentiator”. It is an excellent analysis of how companies have mostly stopped using the term “open source” in their marketing materials, and it has a follow-up (here) that summarizes the main responses from other analysts and observers.
The post raises several interesting points, and in my opinion provides a great basis for a more general discussion: what is the difference introduced by open source? Is there a difference at all?
Let’s start with an observation of the obvious: the use of open source to build software is now so widespread that it is no longer a differentiating element. So much for the various “built on open source components” claims of some companies – practically all companies use open source inside. It is simply not a difference. So, let’s start with what really differentiates OSS from proprietary software:
The licensing. An open license may introduce a difference for the adopter. This means that if a company uses this differential, it must provide value that derives from the intrinsic properties of open source as a legal framework. For example, independence from the supplier (at least in theory) – both in the sense of being able to change providers, and in the sense of being able to add or integrate additional components even if the original vendor disagrees.
The development model. The collaborative development model is not a certainty – it arises only when there is a clear infrastructure for participation. When it does happen, it is comparatively much faster and more efficient than the proprietary, closed model. For this to be a real differentiator, the company must engage in an open development model, and this actually happens only in a very small number of cases.
In general, the majority of the companies that we surveyed in FLOSSMETRICS now have a limited degree of differentiation when compared to their peers, and even as “signaling” open source is now no more interesting than other IT terms that have entered the mainstream (we can discuss separately whether “cloud” will disappear into the background as well). Of the companies we surveyed, I would say that those we originally marked as “specialists” are the ones most apt to still use “open source” as a differentiating term, with the “open core” ones the least (since they neither reap the advantages of a distributed development model, nor does the adopter reap the advantages of open source licensing). A potential difference may arise for development tools or infrastructures, where open source is a near necessity; in this case, the natural expectation will be for the platform to be open – thus not a differentiating element any more.
The latest I/O 2011 conference from Google marked the first real marketable appearance of several noteworthy additions: Angry Birds for the web, a new Android release and the latest netbooks based on ChromeOS. The most common reaction was a basic lack of interest from analysts and journalists, with the prevalent mood being “we already saw what netbooks are, and we don’t like them” or “you can do the same with Windows, just remove everything and leave Chrome on it”.
Well, I believe that they are wrong. ChromeOS netbooks may be a failure, but the basic model is sound and there are strong economic reasons behind Google’s push in the enterprise. Yes – I said enterprise, because ChromeOS is a pure enterprise play, something that I already wrote about here. I would like to point out a few specific reasons why I believe that ChromeOS, in some form or other, will probably emerge as a strong enterprise contender:
- It is *not* a thin client. Several commentators argued that a ChromeOS netbook is essentially a rehash of some old thin client of sorts, with most people drawing a parallel between ChromeOS and Sun Ray clients. But ChromeOS is designed to execute web applications, where the computational cost of the presentation and interaction layer is on the client; this means that the cost per user of providing the service is one order of magnitude lower than with bitmap-based remotization approaches. How many servers would be necessary to support half a billion Facebook users if their app was not web based, but accessed through an ICA or RDP layer? The advantages are so overwhelming that nowadays most new enterprise apps are exclusively web-based (even if hosted on local premises).
- It is designed to reduce maintenance costs. The use of the Google synchronization features and identity services allows management-as-a-service to be introduced fairly easily; most thin-client infrastructures require a local server to act as a master for the individual client groups, and this adds cost and complexity. The extremely simple approach used to provide replication and management is also easy to grasp and extremely scalable; provisioning (from opening the box to beginning useful work) is fast and requires no external support – only an internet connection and a login. This form of self-service provisioning, despite brochure claims, is still unavailable for most thin-client infrastructures, and where available it is extremely costly in terms of licensing.
- Updates are really failsafe. Ed Bott commented that “Automatic updates are a nightmare”, and it’s clear that he has quite deep experience with the approach used by Microsoft. It is true that automatic updates are not a new thing – but the approach used by ChromeOS is totally different from the one used by MS, Apple or most Linux distributions. ChromeOS updates are distributed like satellite set-top box updates – whole, integrated new images sent through a central management service. The build process ensures that things work, because if something goes wrong during the build the image will simply not be built – and not distributed. Companies scared by the latest service pack roulette (will it run? will it stop in the middle, leaving half of the machines broken in the rain?) should be happy to embrace a model where only working updates are distributed. And, just as a comment, this model is possible only because the components (with the exception of Flash) are all open source, and thus rebuildable. To those that still doubt that such a model can work, I suggest a simple experiment: go to the chromiumos developer wiki, download the source and try a full build. It is quite an instructive process.
- The Citrix receiver thing is a temporary stopgap. If there is one thing that muddied the waters in the past, it was Citrix’s presentation of the integrated Receiver for the ICA protocol. It helped to create the misconception that ChromeOS is a thin-client operating system, and framed users’ expectations in the wrong way. The reality is that Google is pushing for a totally in-browser, HTML5 app world; ICA, RDP and other remotization features are there only to support those “legacy” apps that are not HTML5 enabled. Only when the absolute majority of apps are web based do the economics of ChromeOS make sense.
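The failsafe update model mentioned above can be sketched in a few lines. This is a conceptual illustration only – not the actual ChromeOS updater – of the “whole image plus verification” idea: the new image lands in the inactive slot, and the boot flag flips only after the checksum verifies, so a failed or corrupt download leaves the running system untouched.

```python
import hashlib

def apply_image_update(slots, active, new_image: bytes, expected_sha256: str):
    """Write the new image to the inactive slot; flip the boot flag
    only if the checksum verifies. Any failure leaves 'active' untouched."""
    inactive = 1 - active
    if hashlib.sha256(new_image).hexdigest() != expected_sha256:
        return active            # corrupt download: keep booting the old image
    slots[inactive] = new_image  # whole image, not a patch set
    return inactive              # next boot uses the freshly written slot

# Hypothetical usage with two image slots:
slots = [b"os-image-v1", None]
good = b"os-image-v2"
new_active = apply_image_update(slots, 0, good,
                                hashlib.sha256(good).hexdigest())
# new_active is 1: the device now boots the verified new image,
# while the old image stays in slot 0 as a fallback.
```

Contrast this with package-by-package updating, where a failure half-way through can leave the system in a state that neither the old nor the new image ever had.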
On the other hand, it is clear that there are still substantial hurdles – actually, Google’s approach with the Chromebooks may even fail. In fact, I am not a big fan of the notebook format for enterprise computing, where a substantial percentage of users are still at their desks, and not nomadic. I believe that having a small, very low cost device is more interesting than leasing notebooks, especially if it is possible to lower the cost of the individual box to something like $100 (yes – believe me, it is actually possible). Also, Google must make it easier to try ChromeOS on traditional PCs, something that at the moment is totally impossible; a more generic image, executable from a USB key, would go a long way towards demonstrating the usefulness of the approach.
As a follow-up to my previous post (available here) on concrete data on TCO and the results of our past EU projects in the area, here is a more detailed look at some of the aspects that we studied in Public Administrations, starting with an explanation of my assertion that “the real world TCO of an OSS migration is roughly twice the actual monetary cost”, which raised quite some interest. First of all, here is a more detailed table with the tangible versus intangible costs:
On average the costs are almost equally shared, with two exceptions: an Italian Province that had higher tangible costs (as it actually paid external contractors and services for most of the OSS activity), and a small municipality that, for budget reasons, shifted most of the work internally as immaterial expenses to a very significant degree; the latter measure is also skewed by the small size of that particular experiment. The measures performed in the other experiments confirm the approximate range of tangible vs. intangible costs; while there is some variability, the relative error is quite small.
Some interesting facts also emerged in the evaluation of negative and positive factors for the adoption of OSS; in fact, this is part of the set of measurements that became the basis of our best practices for OSS adoption (which you can find described here, here and here). Let’s start with the negative ones:
The first observation is that being in a risk-averse industry sector is basically not a significant factor (as a counter-example, banks and financial services are substantial OSS users, and extremely risk-averse). The other factors are extremely significant; that is, they explain a substantial percentage of the variability (whether a migration is successful or not). You will also find that most of our best practices have a direct connection with one of the negative factors:
Perception of work being under-valued if using “cheap” OSS products -> the best practice is “Provide background information on OSS: A significant obstacle to OSS adoption is acceptance by the users, who usually have very limited knowledge of open source and open data standards. In many cases, OSS is perceived as lower quality because it is “free”, downloadable from the Internet like many shareware packages or amateur projects. It is important to counter this perception, and to provide information on how OSS is developed and on the rationale and business model that underlie it.”
Staff resistance due to fear of being de-skilled -> the best practice is “Use the migration as an occasion to improve users’ skills: as training for the new infrastructure is required anyway, it may be used as a way to improve overall ICT skills; in many companies and public administrations, for example, little formal training is usually given to users. This helps not only in increasing confidence, but can also be used to harmonize skills among groups and in general improve performance. This may raise some resistance from the so-called “local gurus”, who may perceive this overall improvement as a lessening of their social role as technical leaders. The best way to counter such resistance is to identify those users, and encourage them to access higher-level training material (which may be placed on a publicly accessible web site, for example).”
You will find that for each of these negative factors there is a specific best practice designed to help reduce its impact; taken all together, these make it possible to increase the probability of success substantially with a few very simple methods. As for the positive factors:
It is easy to see that economic considerations are of relatively limited importance when compared with the other positive factors, which supports my theory that flexibility and vendor independence are stronger factors than economic ones (even considering that the threat of going to OSS is sometimes used to extract a bigger discount from proprietary vendors during negotiation). Among the factors that map directly to our best practices:
Availability of OSS-literate IT personnel: “Understand the way OSS is developed: Most projects are based on a cooperative development model, with a core set of developers providing most of the code (usually working for a commercial firm) and a large number of non-core contributors. This development model provides great code quality and a fast development cycle, but it also requires a significant effort in tracking changes and updates.”
Top management support for the migration: “Be sure of management commitment to the transition: Management support and commitment have repeatedly been found to be among the most important variables for the success of complex IT efforts, and FLOSS migrations are no exception. This commitment must be guaranteed for a period sufficient to cover the complete migration; this means that in organizations where IT directors change frequently, or where management changes at fixed intervals (for example, in public administrations, where changes happen frequently) there must be a process in place to hand over management of the migration. The commitment should also extend to funding (as transitions and training will require resources, both monetary and in-house). The best way to ensure continued coordination is to appoint a team with mixed experience (management and technical) to provide continuous feedback and day-to-day management.”
Some best practices reduce the impact of negative factors and at the same time increase the impact of positive ones. For example, “Seek out advice or search for information on similar transitions: As the number of companies and administrations that have already performed a migration is now considerable, it is easy to find information on what to expect and how to proceed” covers both the negative “no other successful example of OSS migration in the same sector” and the positive “support from other OSS users”.
As a conclusion: there is a way to substantially increase the probability of a successful OSS adoption/migration, and it goes through some simple, fact-based methods. Also, the spectre of ultra-high TCO for OSS migrations can be banished with a single multiplication, to get the real numbers. If some company claims that OSS costs too much, first of all ask where they got their data. If it is from a model, show them that reality may be different.
Of course, if you insist on doing everything wrong, the costs may be high. But I’m here for a reason, no?
During the development of the EU Cospa project, we found that one of the most common criteria used to evaluate “average” TCO was actually not very effective in providing guidance – the variability of the results was so large that it made any form of “average” basically useless. For this reason, we took a two-step approach: the first step was to define a clearly measurable set of metrics (including material and immaterial expenses), which you can find here:
“D3.1 – Framework for evaluating return/losses of the transition to ODS/OSS”
The second aspect is related to “grouping”. We found that the optimal methodology for evaluating a migration differs for different kinds of transitions – server vs. desktop, full-environment vs. partial migration, and so on; the other, orthogonal aspect is whether the migration was successful or not. In fact, *when* the migration is successful, the measured TCO (both short-term and over 5 years) was substantially lower for OSS compared to the pre-existing proprietary software. I will highlight two cases: a group of municipalities in the north of Italy, and a modern hospital in Ireland. For the municipalities:
Initial acquisition cost: proprietary 800K€, OSS 240K€
Annual support/maintenance cost (over 5 years): proprietary 144K€, OSS 170K€
The slightly higher cost on the OSS side is due to the fact that an external consultancy was paid to provide the support. An alternative strategy would have been to retrain the staff for Linux support, using consultancies only in years 1 and 2 – leading to an estimated total cost for the OSS solution exactly in line with the proprietary one. The municipalities also performed an in-depth analysis of efficiency – that is, documents processed per day – comparing OpenOffice.org and MS Office. This was possible thanks to a small applet installed (with user and union consent) on each PC, recording the user actions and the applications and files used during the migration evaluation. It was found that users were actually substantially *more* productive with OOo. This is probably not due to a relative technical advantage of OOo vs. MS Office, but to the fact that some training was provided on OpenOffice.org before beginning the migration – something that had not been done before for internal personnel. Many users had never had any formal training on any office application, and the limited (4 hours) training performed before the migration substantially improved their overall productivity.
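A quick sanity check of the municipalities’ figures over the five-year window (a minimal sketch: the formula simply adds acquisition cost to five years of support, ignoring discounting and other refinements):

```python
def five_year_tco(acquisition_keur, annual_support_keur, years=5):
    """Total cost of ownership in K€: one-off acquisition plus recurring support."""
    return acquisition_keur + years * annual_support_keur

proprietary = five_year_tco(800, 144)   # 800K€ + 5 × 144K€ = 1520K€
oss         = five_year_tco(240, 170)   # 240K€ + 5 × 170K€ = 1090K€
print(f"proprietary: {proprietary}K€, OSS: {oss}K€")
print(f"saving: {100 * (proprietary - oss) / proprietary:.0f}%")
```

Even with the higher annual support fees paid to the external consultancy, the OSS solution comes out roughly 28% cheaper over the five years.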
On the other hand, it is clear that OOo – from the point of view of the user – does not lower employee productivity, and can perform the necessary tasks without impacting the municipality’s operations.
The hospital migration was done in two steps: a first one (groupware, content management, OpenOffice.org) and a second one (ERP, medical image management).
In the first, the initial acquisition cost was: proprietary 735K€, OSS 68K€
Annual support/maintenance cost (over 5 years): proprietary 169K€, OSS 45K€
Second stage initial acquisition cost: proprietary 8160K€, OSS 1710K€
Annual support/maintenance cost (over 5 years): proprietary 1148K€, OSS 170K€
The hospital shows a much larger saving percentage than other comparable cases because it was considerably more mature in terms of OSS adoption; thus, most of the external, paid consulting was not necessary for its larger migration.
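Rolling the two hospital stages up with the same acquisition-plus-five-years-of-support arithmetic (again a simple sketch that ignores discounting) shows where the larger saving percentage comes from:

```python
def five_year_tco(acquisition_keur, annual_support_keur, years=5):
    """Total cost of ownership in K€: acquisition plus recurring support."""
    return acquisition_keur + years * annual_support_keur

# Stage 1: groupware, content management, OpenOffice.org
stage1_prop = five_year_tco(735, 169)    # 1580K€
stage1_oss  = five_year_tco(68, 45)      #  293K€
# Stage 2: ERP, medical image management
stage2_prop = five_year_tco(8160, 1148)  # 13900K€
stage2_oss  = five_year_tco(1710, 170)   #  2560K€

total_prop = stage1_prop + stage2_prop   # 15480K€
total_oss  = stage1_oss + stage2_oss     #  2853K€
print(f"overall OSS cost: {100 * total_oss / total_prop:.0f}% of proprietary")
```

Over five years the OSS solution costs less than a fifth of the proprietary one – a far larger gap than in the municipalities’ case, consistent with the hospital needing almost no paid external consulting.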
Some of the interesting aspects that we observed:
- In both tangible and intangible costs, one of the most important expenses is software search and selection, along with the costs incurred in selecting the “wrong” package. This is one of the reasons why in our guidelines we have included the use of established, pragmatic software selection methodologies like those of FLOSSMETRICS or QUALIPSO (we actually found no basic difference in “effectiveness” among methods: just use at least one!)
This information is also something that can be reused and disseminated among similar groups; for example, information on the suitability of a backup solution for municipalities can be spread as a “best practice” among similar users, thus reducing the cost of searching for and evaluating it. If such a widespread practice can be established, we estimate that OSS adoption/migration costs can be reduced by something between 17% and 22% through information spreading alone.
- On average, the cost of migration was split nearly equally between tangible and intangible, with one exception at 27% tangible vs. 73% intangible, due to the pressure to use older PCs and to reuse resources where possible for budgetary reasons. In general, if you want to know the “real” TCO, simply take your material costs and multiply by two. Rough, but surprisingly accurate.
- Both in COSPA, OpenTTT and our own consulting activity we found that 70% of users *do not need* external support services after the initial migration is performed. For example, while most COSPA users paid server support fees for Red Hat Enterprise Linux, a substantial percentage could have used a clone like CentOS or Oracle Linux with the same level of service and support. Also, while not universally possible, community-based support has been found sufficient and capable in a large number of environments. One problem with community support has been found in terms of “attitude”: some users approached the forums with the same expectations as for a paid offering, seriously damaging the image and the possibility of support (something like “I need an answer NOW or I’ll sue you!” sent to a public support forum for an open source product). For this reason, we have suggested in our best practices having a single, central point of contact between the internal users and the external OSS communities – someone trained in and knowledgeable about how OSS works – to forward requests and seek solutions. After the initial migration and a 1-2 year period of “adaptation”, this can shift some of the support calls to communities, reducing costs by a further 15-20% on average.
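The rules of thumb above – “multiply tangible costs by two” and the two estimated percentage reductions – can be put into a small sketch. Treating the two reductions as independent and therefore multiplicative is my own simplifying assumption:

```python
def real_tco(tangible_cost, intangible_share=0.5):
    """Estimate the full TCO from the tangible (monetary) cost alone.
    With the average 50/50 split this is exactly a multiplication by two."""
    return tangible_cost / (1 - intangible_share)

def combined_reduction(*reductions):
    """Compound successive percentage reductions multiplicatively."""
    remaining = 1.0
    for r in reductions:
        remaining *= 1 - r
    return 1 - remaining

print(real_tco(100))                                 # average case: 200.0
print(round(real_tco(100, intangible_share=0.73)))   # the 27/73 outlier: 370

low  = combined_reduction(0.17, 0.15)   # conservative ends of both ranges
high = combined_reduction(0.22, 0.20)   # optimistic ends
print(f"combined potential cost reduction: {low:.0%} to {high:.0%}")
```

Under that assumption, applying both information sharing and community-shifted support would cut adoption/migration costs by roughly 29% to 38% overall.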
Open core products are usually built from a set of internal open source components held together by a dual-licensed wrapper, plus proprietary modules on the outside. One of the best examples of this is Zimbra (an excellent product in its own right), but MySQL in its recent editions can be included in the same group. As discussed in previous posts, dual licensing hampers contributions because it requires contributors to explicitly cede rights to the company that employs it, so that the code can be relicensed for the proprietary edition. This means that open core companies will have an easier time monetizing their software, but will receive far fewer contributions in exchange. As I wrote before, it is simply not possible to get something like Linux or Apache with open core.
Again: open core is not bad per se (though I would have been more cautious in calling Sugar “an open source company”, for whatever definition you have of that). But it is a tradeoff: monetization versus contributions. And my bet is on contributions, as OpenStack demonstrates – you need leverage and external resources to go beyond what a single company can do.
[Note: since I am writing this from a sunny beach, with a cell phone, I will not be able to add more than a few links to external pages. Will add the rest of them at my return, after the 12th of July]
It seems that open core continues to be the source of significant debate; I wrote quite a few posts on it in the past, and open core was one of the business models we researched (for more details, see my LinuxTag presentation on business models). I would like to enter the debate again with a few short comments of my own:
- Companies using OC are not the devil, and should not be called names because of their choice of business model. Actually, there are no good and bad business models - only models that work, and those that do not. So, if open core works for a company, that’s a good thing.
- Open core models are somewhat confusing for adopters. As a consultant to more than 100 companies and public administrations, I find that explaining open core is actually one of my most common tasks. And the marketing message of these companies is confusing: if you go to the Zimbra webpage (no offence to Zimbra, a company/product I love and use as an example of good practice) you see the phrase “Zimbra – the leader in open source email and collaboration”, not “Zimbra – the leader in open source and proprietary email” (not that the phrase would win any contest), and the same goes for all the other open core companies. This is not, in my opinion, such a negative point if the website explains the difference between versions in a simple way, as for example both Zimbra and Alfresco do.
- It is true that open core models tend to have higher revenue than non-OC models. It is also true that OC intrinsically attracts a limited number of contributions from outside (as we found in FLOSSMETRICS analysing a few hundred packages, and as can be seen in the aforementioned LinuxTag presentation). So, you may have a higher monetization ratio, but you basically forfeit external contributions. The CEO should decide what is more important – so the decision is not “ethical”, but practical and based on economics. You will never get the kind of participation that Linux, Apache and Eclipse have with an open core model. If that is OK for you – that’s great.
- The fact that most VCs are funding open core companies is just a data point. Lots of open source companies do well without VC funding.
- It is true that lots of people claim that “pure” open source models are not sustainable. Even my friend Erwin Tenhumberg (who is quite knowledgeable, expert and incredibly nice in his own right) had a slide to this effect in his LinuxTag presentation; and you can find many comments like that in many publications (something like “the majority of OSS companies adopt the so-called mixed model”, despite this being actually false, as we found in our survey of OSS companies). The point, as said before, is not that there is a superior model, but that for every company and every market there is an optimal model – it may be OC, it may be pure services, or one of many combinations of our 11 building blocks. The optimal model changes with time and market conditions, and what is appropriate now may be wrong tomorrow.
- No open source model can achieve the kind of profit margins of proprietary companies. So, if you want to start your own OSS company, remember this basic fact. If you want the kind of profit margins of Microsoft or Oracle, forget it.
So, to end this post, there are three critical points: whether the model is clear to the adopter (this should be a given, and nowadays I would say that most companies are absolutely honest and clear about it); whether the software in its open source edition provides sufficient functionality to be useful to a wide range of adopters (a fine line to walk, requiring constant adaptation); and whether the increased monetization compensates for the lack of external contributions, which can substantially increase the value of the code base (you are trading cash for code and engineering, in a sense).
Can we put this to rest? End the name calling, be friends, and call all of us family? Especially since right now, under the sun of Fuerteventura where I am writing this, it seems difficult to fight.
[by the way: sorry for any misspelling. There is no spell checker here on this small screen...]
Now that most of our work for FLOSSMETRICS has ended, I have had the opportunity to work on something different. As you know, I have worked on bringing OSS to companies and public administrations for nearly 15 years now, and I have had the opportunity to work on many different projects with many different and incredible people. One of the common things I discovered is that to increase adoption it is necessary to give every user a distinct advantage in using OSS, and to make the exploratory process easy and hassle-free.
So, we collected most of the work done in past projects and developed a custom desktop, designed to be explorable without installation, fast, and built for real world use; EveryDesk is a reinterpretation of the Linux desktop, designed to be used in public administrations or as an enterprise desktop. EveryDesk is a real OS on a USB key, not a live CD; this way the system allows for extensive customization and adaptation to each Public Administration’s needs. It is the result of open sourcing part of our HealthDesk system, designed using the results of our past European projects COSPA (a large migration experiment for European Public Administrations), SPIRIT (open source health care), OpenTTT (OSS technology transfer) and CALIBRE (open source for industrial environments).
EveryDesk is a binary image designed for 4GB USB keys, easy to install with a single command on both Linux and Windows, and simple to replicate and adapt. It provides a simple and pleasing user interface, with several pre-installed applications and native support for Active Directory. EveryDesk supports roaming/nomadic work through a special mode that stores all user data on a remote SMB server (both Samba and Windows are supported). This way, the user’s USB key contains no personal data, and can be used in environments that manage sensitive data, like health care or law enforcement.
The files and images can be downloaded from the SourceForge project page.
EveryDesk integrates a simple and easy to use menu, derived from Novell usability research studies, providing one-click access to individual programs, documents and places, and easy installation of new software or updates thanks to a fully functional package manager.
EveryDesk includes support for Terminal Services, VNC, VMware View and other remote access protocols. One peculiarity we are quite happy with is the idea of simplified VDI: basically, EveryDesk integrates the open source edition of VirtualBox and allows disk images to be mounted remotely – so the disk storage is remote, and the execution is local. This way, VDI can be implemented by adding only storage (which is cheap and easy to manage) and avoiding the entire virtualization infrastructure.
The seamless virtualization mode of VirtualBox allows for quite good integration between Windows (especially Windows 7) and the local environment. Coupled with the fact that the desktop is small and runs in less than 100MB (with both Firefox and OpenOffice.org it takes only 150MB), this makes it a good substitute for a traditional thin client; it is manageable through CIM, and is commercially supported. Among the extensions we developed are a complete ITIL-compliant management infrastructure, and digitally-signed log storage for health care and law enforcement applications.
Just a few days ago, Glynn Moody posted a tweet with the message: “Italy to begin an open source competence centre” – a result of the recent EU project Qualipso, created with the purpose of identifying barriers to OSS adoption and quality metrics, and with the explicit target of creating a network of OSS competence centers, sharing the results of the research effort and disseminating them to the European community of companies and public administrations. To this end, the project created more than one competence center, and built a network (which you can find at this website) covering not only Europe, but China, India and Japan as well. This is absolutely a great effort, and I am grateful to the Commission and the project participants for their work (hey, they even cited my work on business models!)
There is, however, an underlying attitude that I found puzzling – and partially troubling as well. The announcement mentioned the competence center of Italy, and was worded as if there had been no previous such effort in that country. If you go to the network website, you will find no mention of any other competence center, even though the Commission already has a list of such centers (not very up to date, though) and on OSOR there is even an official group devoted to Italian OSS competence centers – among them two in Friuli (disclaimer: I am on the technical board of CROSS, and work in the other), and ones in Tuscany, Trentino, Umbria, Emilia (as part of the PITER project), a national one and many others that I have probably forgotten. Then we have Austria, Belgium, Denmark, Estonia, Finland, Germany, Ireland, Malta, the Netherlands, the Nordic countries and many others. What is incredible is that most of these centers… actually don’t link to one another, and they hardly share information. The new Qualipso network of competence centers does not list any previous center, nor does it point to already prepared documentation – even the Commission’s. The competence center network website does not link to OSOR either, nor does it link to other projects – past or current.
I still believe that competence centers are important, and that they must focus on what can be done to simplify adoption – or to turn adoption into a commercially sustainable ecosystem, for example by making it easier for local software companies to embrace OSS packages. In the past I tried to summarize this in the following set of potential activities:
- Creating software catalogs, using an integrated evaluation model (QSOS, Qualipso, FLOSSMETRICS – anything, as long as it is consistent)
- For selected projects, finding local support companies with competence in the identified solution
- Collecting the needs of potential OSS users, using standardized forms (Technology Request/Technology Offer, TR/TO) to identify IT needs; finding the set of OSS projects that together satisfy the Technology Request; if some requirements remain unsatisfied, joining together several interested users to ask (with a commercial offer) for a custom-made OSS extension or project
- Aggregating and restructuring the information created by other actors, like IST, IDABC, and individual national initiatives (OSOSS, KBST, COSS, …)
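The TR/TO matching step above can be sketched as a simple greedy set cover: a Technology Request is expressed as a set of required capabilities, each catalog entry is tagged with the capabilities it provides, and the center picks projects until the request is satisfied or only gaps remain (which can then be turned into a joint commercial offer). A minimal sketch – all project names and capability tags below are invented for illustration:

```python
def match_request(required, catalog):
    """Greedily cover a Technology Request with catalog projects.

    required: set of capability tags from the Technology Request.
    catalog:  dict mapping project name -> set of capability tags it offers.
    Returns (chosen project names, still-unmet capabilities).
    """
    unmet = set(required)
    chosen = []
    while unmet:
        # Pick the project covering the most still-unmet capabilities.
        best = max(catalog, key=lambda p: len(catalog[p] & unmet))
        if not catalog[best] & unmet:
            break  # no project helps: remaining gaps become a custom-work request
        chosen.append(best)
        unmet -= catalog[best]
    return chosen, unmet

# Hypothetical catalog entries:
catalog = {
    "DocSuiteX": {"document-management", "workflow"},
    "MailServerY": {"email", "calendar"},
    "CRMishZ": {"crm"},
}
projects, gaps = match_request({"document-management", "email", "erp"}, catalog)
# "erp" stays unmet, so interested users could jointly commission an extension.
```

Greedy cover is not optimal in general, but for a catalog-browsing service it is cheap, transparent, and makes the unmet remainder explicit – which is exactly what the TR aggregation step needs.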
This model helps in overcoming several hurdles to OSS adoption:
- It correctly identifies needs, and analysis of already published TRs can help in aggregating demand
- It helps in finding appropriate OSS solutions, even when a solution is created by combining individual pieces
- It helps in finding actors that can provide commercial support or know-how
It does have several potential advantages over traditional mediation services:
- The center does NOT participate in the commercial exchange, and in this sense acts as a pure catalyst. This way it does not compete with existing OSS companies, but provides increased visibility and an additional dissemination channel
- It remains a simple and lean structure, reducing the management costs
- By reusing competences and information from many sources, it can become a significant learning center even for OSS companies (for example, in the field of business models for a specific OSS project)
- It is compatible with traditional IT incubators, and can reuse most of the same structures
Most of this idea revolves around the concept of sharing effort, and reusing knowledge already developed in other areas or countries. I find it strange that the most difficult idea among these competence centers is… sharing.
(update: corrected the network project name – Qualipso, not Qualoss. Thanks to Matteo for spotting it.)
There is an interesting debate, partially started by Matt Asay, with sound responses from Matthew Aslett, centered on the reasons for (or against) moving part of the core IP assets of an open source company to an externally controlled group, like a consortium. Matthew rightly indicates that this is probably the future direction of OSS (the “4.0″ of his graph), and I tried to address this with a few friends on Twitter – but 140 characters are too few. So I will use this space to provide a small overview of my belief: the current structure based on open core is a temporary step towards a more appropriate commercialization structure, which for efficiency reasons should be composed of a community-managed (or at least transparently managed) consortium that maintains the “core” of what is now the open source part of open core offerings, and a purely proprietary company that provides the monetization services, be they proprietary add-ons, paid services and so on.
Why? Because the current structure is not the most efficient at enabling participation from outside groups: if you look at the various open core offerings, the majority of the code is developed by in-house developers, while in community-managed consortia the code may originate from a single company but is taken up by many more entities. The best example is Eclipse: as recently measured, 25% of the committers work for IBM, with individuals accounting for 22%, and a large number of companies like Oracle, Borland, Actuate and many others with percentages ranging from 1% to 7%, in a collective, non-IBM collaboration.
Having a purely proprietary company that sells services or add-ons also removes any possibility of misunderstanding about what is offered to the customer, and thus makes an “OSS checklist” unnecessary. Of course, this means that the direction of the project is no longer in the hands of a single company, and this may be a problem for investors, who may want some form of exclusivity or a guarantee of retaining control. But my impression is that this control is only an illusion, because if there is a large enough payoff, forks will make the point moot (exactly as happened with MySQL); and by relinquishing control, the company gets back a much enlarged community of developers and potential adopters.
Having contributed to the new edition of the 2020 FLOSS Roadmap, I am happy to forward the announcement of the main updates and changes in the 2020 FLOSS Roadmap document. I am especially fond of the “FLOSS is like a forest” analogy, which in my opinion captures well the hidden dynamics created when many different projects form an effective synergy, one that may be difficult to perceive for those that are not within the same “forest”.
For its first edition, the Open World Forum launched a prospective initiative unique in the world: the 2020 FLOSS Roadmap (see the 2008 version). This Roadmap is a projection of the influences that will affect FLOSS until 2020, with descriptions of all FLOSS-related trends over this period as anticipated by an international workgroup of 40 contributors, and it highlights 7 predictions and 8 recommendations. The 2009 edition of the Open World Forum gave rise to an update of this Roadmap reflecting the evolutions noted during the last months (see the OWF keynote presentation). According to Jean-Pierre Laisné, coordinator of the 2020 FLOSS Roadmap and of Bull’s Open Source Strategy: “For the first edition of the 2020 FLOSS Roadmap, we had the ambition to bring new light to the debate thanks to an introspective and prospective vision. This second edition demonstrates not only that this ambition has been reached, but that the 2020 FLOSS Roadmap is actually a guide describing the paths towards a knowledge economy and society based on the intrinsic values of FLOSS.”
About the 2009 version (full printable version available here)
So far, so good: Contributors to the 2020 FLOSS Roadmap estimate that their projections are still relevant. The technological trends envisioned – including the use of FLOSS for virtualization, micro-blogging and social networking – have been confirmed. Contributors consider that their predictions about Cloud Computing may have to be revised, due to accelerating adoption of the concepts by the market. The number of mature FLOSS projects addressing all technological and organizational aspects of Cloud Computing is confirming the importance of FLOSS in this area. Actually, the future of true Open Clouds will mainly depend on convergence towards a common definition of ‘openness’ and ‘open services’.
Open Cloud Tribune: Following the various discussions and controversies around the topic “FLOSS and Cloud Computing”, this opinion column aims to nourish the debates on this issue by freely publishing the various opinions and points of view. 2009’s article questions about the impact of Cloud Computing on employment in IT.
Contradictory evolutions: While significant progress was observed in line with the 2020 FLOSS Roadmap, the 2009 Synthesis highlights contradictory evolutions: the penetration of FLOSS continues, but at the political level there are still blockages. In spite of recognition from ‘intellectuals’, the alliance between security and proprietary software has been reinforced, and has delayed the evolution of lawful environments. In terms of public policies, progress is variable. Except in Brazil, the United Kingdom and the Netherlands, which have made notable moves, no other major stimulus for FLOSS has appeared on the radar. The 2009 Synthesis questions why governments are still reluctant to adopt a more voluntary ‘FLOSS attitude’. Because FLOSS supports new concepts of ‘society’ and supports the links between technology and solidarity, it should be taken into account in public policies.
Two new issues: Compared with what was published in 2008, two new issues have emerged, which will need to be explored in the coming months: proprietary hardware platforms, which may slow the development of FLOSS, and proprietary data, which may create critical lock-ins even when software is free.
The global economic crisis: While the global crisis may have had a negative impact on services-based businesses and services vendors specializing in FLOSS, it has proved to be an opportunity for most FLOSS vendors, who have seen their business grow significantly in 2009. When it comes to Cloud-based businesses, the facts point to a massive migration of applications in the coming months. Impressive growth in terms of hosting is paving the way for these migrations.
Free software and the financial system: this new theme of the 2020 FLOSS Roadmap makes its appearance in the 2009 version, to take into account the role that FLOSS can play in a system which is currently the target of much reflection.
Sun/Oracle: The acquisition of Sun by Oracle is seen by contributors to the 2009 Synthesis as a major event, with the potential risk that it will significantly redefine the FLOSS landscape. But while the number of major IT players is decreasing, the number of small and medium-size companies focused around FLOSS is growing rapidly. This movement is structured around technology communities and business activities, with some of the business models involved being hybrid ones.
FLOSS is like forests: The 2009 Synthesis puts forward this analogy to make it easier to understand the complexity of FLOSS through the use of a simple and rich image. Like forests and their canopies – which play host to a rich biodiversity and diverse ecosystems – FLOSS is diverse, with multiple layers and branches both in terms of technology and of creation of wealth. Like a forest, FLOSS provides vital oxygen to industry. Like forests, which have brought both health and wealth throughout human history, FLOSS plays an important role in the transformation of society. Having accepted this analogy, contributors to the Roadmap subsequently identified different kinds of forests: ‘old-growth forests’ or ‘primary forests’, which are pure community-based FLOSS projects such as Linux; ‘cultivated forests’, which are the professional and business-oriented projects such as JBoss and MySQL; and ‘FLOSS tree nurseries’, which are communities such as Apache, OW2 and Eclipse. And finally, the ‘IKEAs’ of FLOSS are companies such as Red Hat and Google.
Ego-altruism: The 2009 Synthesis insists on the need to encourage FLOSS users to contribute to FLOSS, not for altruistic reasons, but rather for egoistical ones. It literally recommends that users help only when it benefits themselves. Thanks to FLOSS, public sector bodies, NGOs, companies, citizens, etc. have full, free and fair access to technologies enabling them to communicate on a global level. To make sure that they will always have access to these powerful tools, they have to support and participate in the sustainability of FLOSS.
New Recommendation: To reinforce these ideas, the 2020 FLOSS Roadmap in its 2009 Synthesis added to the existing list of recommendations:
Acknowledge the intrinsic value of FLOSS infrastructure for essential applications as a public knowledge asset (or ‘as knowledge commons’), and consider new means to ensure its sustainable development