Posts Tagged OSS adoption
I have been talking about OSS for a long, long time, and my first public conference on the subject is still imprinted in my mind. It was at a very important Italian postgraduate school, with a renowned economics department, and I was invited to deliver a speech about EU activities in support of OSS, to an audience mainly composed of academics from sociology, economics, political science and the like. Just after my talk, one of the professors started a lively debate, claiming that I was a “crypto-communist, deluded and trying to spread the false model of the gift economy upon IT”. Heck, I stopped talking for a moment – something that people who know me would find surprising (I tend to talk a lot, about things that I like). I had to think about the best way to answer, and was surprised to find that most of the audience shared the same belief. One professor claimed that basic economic laws make the very idea of OSS impossible, or at most a temporary step towards a market readjustment, and so on.
Guess what? They were wrong. And not wrong a little – wrong a lot (but it took me a few years to demonstrate it).
And so, after all these years, I still sometimes find academics who improvise on the subject, claiming certainty for their models; models that usually include hidden assumptions that are more myth and folklore than science. Thankfully, alongside the many who are not subject to these faults (Dirk Riehle comes to mind, as do Rishab Ghosh, Paul David, Francesco Rullani, Cristina Rossi, and many others), we have real data to present and show. I still sometimes open my talks with a quote from “Government policy toward open source software”, a book from AEI-Brookings in which Evans claims that “The GPL effectively prevents profit-making firms from using any of the code since all derivative products must also be distributed under the GPL license”. Go tell that to Red Hat.
Now, I have a new contender for inclusion in my slides: an article by Sebastian von Engelhardt and Stephen M. Maurer, which you can find in all its glory here. I will try to dissect some of the claims that are hidden in the paper, and that for example push the authors towards “imposing a fixed, lump-sum tax on OS firms and using the proceeds to subsidize their [proprietary software] competitors”. I think that Microsoft would love that – a tax on Red Hat, Google, IBM! What could be more glorious than that?
I will pinpoint some of the most evident problems:
- “For this reason, the emergence of fundamentally new, “open source” (OS) methods for producing software in the 1990s surprised and delighted observers.” Actually, as I wrote for example here, the tradition of collaborative development of software far predates Stallman and Raymond, and was the norm, along with the creation of “user” (more appropriately “developer”) groups like SHARE (Society to Help Avoid Redundant Efforts, founded in 1955 and centered on IBM systems) and DECUS (for Digital Equipment computers and later for HP systems), both still alive. Code was also commonly shared in academic journals, like the famous “Algorithms” column of the “Communications of the ACM”. It was the emergence of the shrinkwrapped software market in the eighties that changed this, and introduced the “closed” model, where only the software firm produces software. This is actually an illusion: in Europe, the market for shrinkwrapped software is only 19% of the total software+services market, with own-developed software at 29%. We will return to this number later.
- “This made it natural to ask whether OS could drastically improve welfare compared to CS. At first, this was only an intuition. Early explanations of OS were either ad hoc (“altruism”) or downright mysterious (e.g. a post-modern “gift economy”). [Raymond 1999] Absent a clear model of OS, no one could really be certain how much software the new incentive could deliver, let alone whether social welfare would best be served by OS, CS, or some mix of the two.” Argh. I understand that my papers are not that famous, but there are several excellent works that show that OSS is about the economics of production, and not politics, ideology or “gift economies”.
- “economists showed that real world OS collaborations rely on many different incentives such as education, signaling, and reputation.” See? No economic incentives. People collaborate to show their prowess, or improve their education. Actually, this applies only to half of the OSS population, since the other half is paid to work on OSS – something that the article totally ignores.
- “We model the choice between OS and CS as a two-stage game. In Stage 1, profit-maximizing firms decide between joining an OS collaboration or writing CS code for their own use. In Stage 2 they develop a complementary product, for example a DVD player or computer game, whose performance depends on the code. The firms then sell the bundled products in markets that include one or more competitors.” So, they are describing either an R&D sharing effort or an Open Core model (it is not well explained). They are simply ignoring every other possible model, something that I have already covered in detail in the past. They also ignore the idea that a company may contribute to OSS for its own internal use, not for sale; something that is in itself much bigger than the market for shrinkwrapped software (remember the 29% mentioned before?) and that is totally forgotten in the later discussion on welfare.
- “OS only realizes the full promise of cost-sharing when CS firms are present”. This is of course false: R&D sharing is present every time there is cooperation across a source base. But the article considers only a simplistic model that assumes an OS company and a proprietary company (which they insist on calling Commercial Software, which it is not).
There is a large, underlying assumption: that OSS is now produced only by companies that create Open Core-like products. The reality is that this is not true (something that was, for example, found in the last CAOS report from the excellent Matthew Aslett), and the exclusion of user-developers makes any model that tries to estimate welfare totally unreliable.
Ahh, I feel better. Now I have another university where I will never be invited.
During the development of the EU COSPA project, we found that one of the most common criteria used to evaluate “average” TCO was actually not very effective in providing guidance – the variability of the results was so large that it made any form of “average” basically useless. For this reason, we took a two-step approach: the first step was to define a clearly measurable set of metrics (including material and immaterial expenses), which you can find here:
“D3.1 – Framework for evaluating return/losses of the transition to ODS/OSS”
The second aspect is related to “grouping”. We found that the optimal methodology for evaluating a migration differed for different kinds of transitions, like server vs. desktop, full-environment migration vs. partial, and so on; the other, orthogonal aspect is whether the migration was successful or not. In fact, *when* the migration is successful, the measured TCO (both short-term and over 5 years) was substantially lower for OSS compared to pre-existing proprietary software. I highlight two cases: a group of municipalities in the North of Italy, and a modern hospital in Ireland. For the municipalities:
- Initial acquisition cost: proprietary 800K€, OSS 240K€
- Annual support/maintenance cost (over 5 years): proprietary 144K€, OSS 170K€
The slightly higher cost for the OSS part is related to the fact that an external consultancy was paid to provide the support. An alternative strategy could have been to retrain the staff for Linux support, using consultancies only in years 1 and 2, leading to an estimated support cost for the OSS solution exactly in line with the proprietary one. The municipalities also performed an in-depth analysis of efficiency, that is, documents processed per day, comparing OpenOffice.org and MS Office. This was possible thanks to a small applet installed (with users’ and unions’ consent) on the PCs, recording the user actions and the applications and files used during the migration evaluation. It was found that users were actually substantially *more* productive with OOo. This is probably not due to a relative technical advantage of OOo vs. MS Office, but to the fact that some training was provided on OpenOffice.org before beginning the migration – something that had not been done before for internal personnel. Many users had never had any formal training on any office application, and the limited (4 hours) training delivered before the migration substantially improved their overall productivity.
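As a quick sanity check on these figures, here is a minimal sketch (my own illustration in Python, not part of the COSPA deliverables) of the five-year TCO implied by the numbers above, assuming the annual support/maintenance cost stays constant:

```python
# Municipality case, using the COSPA figures above (all values in K EUR).
YEARS = 5

proprietary = {"acquisition": 800, "annual_support": 144}
oss = {"acquisition": 240, "annual_support": 170}

def five_year_tco(costs, years=YEARS):
    """Acquisition cost plus 'years' of support/maintenance, assumed constant."""
    return costs["acquisition"] + years * costs["annual_support"]

tco_prop = five_year_tco(proprietary)  # 800 + 5 * 144 = 1520
tco_oss = five_year_tco(oss)           # 240 + 5 * 170 = 1090
savings = (tco_prop - tco_oss) / tco_prop
print(f"proprietary: {tco_prop}K, OSS: {tco_oss}K, savings: {savings:.0%}")
# -> about 28% lower five-year TCO for OSS, despite the higher annual support cost
```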
On the other hand, it is clear that OOo – from the point of view of the user – does not lower employee productivity, and can perform the necessary tasks without impacting the municipality’s operations.
The hospital migration was done in two stages: a first one (groupware, content management, OpenOffice.org) and a second one (ERP, medical image management).
- First stage initial acquisition cost: proprietary 735K€, OSS 68K€
- First stage annual support/maintenance cost (over 5 years): proprietary 169K€, OSS 45K€
- Second stage initial acquisition cost: proprietary 8160K€, OSS 1710K€
- Second stage annual support/maintenance cost (over 5 years): proprietary 1148K€, OSS 170K€
The hospital shows a much larger saving percentage than other comparable cases because it was considerably more mature in terms of OSS adoption; thus, most of the external, paid consulting was not necessary for its larger migration.
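Under the same assumption (constant annual costs over five years), a rough sketch combining the two stages shows why the hospital’s saving percentage is so much larger than the municipalities’:

```python
# Hospital case, both stages combined (figures above, all values in K EUR).
YEARS = 5

acquisition_prop = 735 + 8160   # 8895
acquisition_oss = 68 + 1710     # 1778
annual_prop = 169 + 1148        # 1317
annual_oss = 45 + 170           # 215

tco_prop = acquisition_prop + YEARS * annual_prop   # 15480
tco_oss = acquisition_oss + YEARS * annual_oss      # 2853
savings = (tco_prop - tco_oss) / tco_prop
print(f"proprietary: {tco_prop}K, OSS: {tco_oss}K, savings: {savings:.0%}")
# -> roughly 82% lower five-year TCO, versus roughly 28% for the municipalities
```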
Some of the interesting aspects that we observed:
- In both tangible and intangible costs, one of the most important expenses is software search and selection, and the costs incurred in selecting the “wrong” one. This is one of the reasons why in our guidelines we have included the use of established, pragmatic software selection methodologies like FLOSSMETRICS or QUALIPSO (actually, we found no basic difference in “effectiveness” among methods: just use at least one!)
This information is also something that can be reused and disseminated among similar groups; for example, information on the suitability of a backup solution for municipalities can be spread as a “best practice” among similar users, thus reducing the cost of searching for and evaluating it. If such a widespread practice can be established, we estimate that OSS adoption/migration costs can be reduced by something between 17% and 22% through information sharing alone.
- On average, the tangible and intangible costs of migration were nearly equal, with one exception at 27% tangible vs. 73% intangible, due to the pressure to use older PCs and reuse resources wherever possible for budgetary reasons. In general, if you want to know the “real” TCO, simply take your material costs and multiply by two. Rough, but surprisingly accurate.
- In COSPA, OpenTTT and our own consulting activity we found that 70% of users *do not need* external support services after the initial migration is performed. For example, while most COSPA users paid server support fees for Red Hat Enterprise Linux, a substantial percentage could have used a clone like CentOS or Oracle Linux with the same level of service and support. Also, while not universally applicable, community-based support has been found sufficient and capable in a large number of environments. One problem with community support is “attitude”: some users approached the forums with the same expectations as for a paid offering, seriously damaging the image and possibility of support (something like “I need an answer NOW or I’ll sue you!” sent to a public support forum for an open source product). For this reason, we have suggested in our best practices to have a single, central point of contact between internal users and the external OSS communities – someone trained and expert in how OSS works – to forward requests and seek solutions. After the initial migration and a 1-2 year period of “adaptation”, shifting some of the support calls to the communities in this way can reduce support costs by a further 15-20% on average.
I am quite proud of the work that we did on EveryDesk – a full desktop on a bootable USB key, fully modifiable and adaptable. We are using it in schools, public administrations and companies, where the increased efficiency of Linux makes a difference in making old computers usable again – or helps with the problem of managing PCs that are remote or in hostile environments.
However, this is not enough. You may be without a USB-bootable computer, or you may be using a tablet like an iPad or a Galaxy Tab (something that I see more and more everywhere). In these environments, you may need something more powerful than the apps available there – a full Office-like application, or a real desktop browser to access a corporate banking application; maybe you need a specific client for older systems, like the IBM iSeries (the old AS/400), or some special client in Java – on systems that do not have Java or Flash.
For this kind of application, we are working on a system that embeds a full HTML5 desktop in a Facebook application, making it accessible from any recent web browser, including the iPad. This way, you can have a full desktop everywhere you go. We hope that it can be of interest; as soon as it is ready, we will release source code and blueprints.
We have prepared a small demo of how it works right now; it is a real screen capture from my own personal EveryDesk/Online instance, done over a normal ADSL line. It should give an idea of how it may work for you.
Yesterday Julie Bort wrote on the NetworkWorld site an interesting post called “Cisco doesn’t contribute nearly enough to open source”, where she contends that despite Cisco proclaiming itself responsible for half a percent (0.5%) of the contributions to the Linux kernel, “In reality, Cisco has been a near non-entity as an open source contributor”. Of course the author is right in her claim – the amount of code contributed to the Linux kernel is substantial but very “vertical”, and specific to the needs of Cisco as a Linux adopter.
Which is a perfectly sensible thing to do.
The problem of “contribution” comes up again and again in many discussions on open source and business adoption of OSS; why participation is low, and what can be done to improve it, is in fact a source of major debate. It is my opinion that there are some barriers to OSS contribution – namely, internal IPR policies, a lack of understanding of how participation can be helpful and not just a gift to competitors, and more. On the other hand, two points should be made to complement this view: the first is that some companies contribute in ways that are difficult to measure, and the second is that sometimes companies have no economic reason to do so.
Let’s start with the first point, which is a little pet peeve of mine. Companies can provide source code; some do, and that’s a beautiful thing. However, there are many, many alternative ways of collaborating. Aaron Seigo, of KDE fame, in one presentation outlined the many activities that are part of possible KDE contributions:
- Artwork
- Human-computer interaction
- Marketing
- Quality Assurance
- Software Development
In fact, I would say that some aspects like Artwork, Marketing and Quality Assurance may even be more important than pure coding – the problem is measuring such contributions. While the technical work underpinning source code analysis is quite well researched (among others, in our FLOSSMETRICS project), there is NO research on how to measure non-code contributions. And such contributions may be hugely important; one of my favorite examples is the release, by Red Hat, of the Liberation fonts – a set of fonts with metrics compatible with the most widely used Microsoft fonts, like Arial. That alone helped substantially in improving the quality and correctness of document editing and visualization on Linux. How do you measure that? Ubuntu has contributed substantially in terms of dissemination, and in creating a base for many other distributions (including our own EveryDesk). How do you assess the value of that?
The second aspect is more complex, and is related to the strategy and tactics that a company uses to fulfill its own goals. Let’s take into account what a normal company does: first of all, survive (that is, revenues+reserves>expenses). Not all companies have such a goal (a company designed to fulfill a task and then end its activities has the survival goal with a deadline), but most do. This means that a company performs an internal or external activity if it provides, now or in the future, a probable increase in revenues or reserves, or a decrease in expenses. Moral or ethical goals can easily be modeled in this schema using an “ethical asset”, that is, a measure of how good we are in a specific target environment; for example, ecological contributions and so on.
So, let’s think about our typical company using OSS for a product. Let’s imagine that the company is doing a tactical adoption, that is, it does not have a long-term strategy based on Open Source. If the cost of contributing something is lower than the cost of doing everything from scratch, then the company will contribute back (or at least, the probability of that action is higher). In the absence of a strategy based on open source, there is no need to go further.
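As a purely illustrative sketch (my own simplification, not a formal model from the article or from our research), the tactical adopter’s choice reduces to a cost comparison:

```python
def will_contribute_back(cost_of_contributing, cost_of_doing_it_alone):
    """Tactical OSS adopter with no long-term open source strategy:
    contribute a change upstream only when that is cheaper than developing
    (and then maintaining) the equivalent functionality from scratch."""
    return cost_of_contributing < cost_of_doing_it_alone

# Example: carrying a private patch means re-merging and re-testing it at every
# upstream release, which is often more expensive than contributing it once.
print(will_contribute_back(cost_of_contributing=10, cost_of_doing_it_alone=40))  # True
```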
For example, in the blog post the open sourcing of IOS is mentioned; the question is: why? What economic goal would this open sourcing serve? If the company decides to adopt a long-term strategy based on resource sharing (with the idea of receiving substantial contributions from external entities – as with Linux, WebKit, Apache, and so on) then this may make sense; but it implies a substantial change in company strategy. Such large changes are not easy to carry out well; Sun tried (and partly failed), and most of the “famous” examples are only partially adopting an open-based strategy (IBM, Oracle, Google).
To recap: 1) we must evaluate and appreciate all kinds of contributions – not only code. 2) We can expect large-scale contributions only from companies that bet their strategy on OSS – Red Hat is among my favorite examples of that. We cannot realistically expect companies that use Open Source in a tactical way to contribute back in the same way.
The (always great) Matthew Aslett posted today some of his most recent results on the future of OSS licensing, in what he calls “Open Source 4.0”, characterized by corporate-dominated development communities. This form of evolution was one of the predictions in my previous posts – not for ethical or community reasons, but for entirely practical and economic ones: collaborative development is one of the strongest models among the 11 basic components that we identified in the FLOSSMETRICS group. In fact, I wrote in the past something like
“Many researchers are trying to identify whether there is a more “efficient” model among all those surveyed; what we found is that the most probable future outcome will be a continuous shift across models, with a long-term consolidation of development consortia (like Symbian and Eclipse) that provide strong legal infrastructure and development advantages, and product specialists that provide vertical offerings for specific markets”
which, I believe, matches Matthew’s idea about OSS 4.0 quite well. One area where I am (slightly) in disagreement with Matthew is licensing; I am not totally sure about the increased success of non-copyleft licenses in this next evolution of the open source market. Not because I believe that he is wrong (I would never do that – he is too nice) but because I believe there are additional aspects that may introduce some differences.
The choice of an open source license for a project code release is not clear-cut, and depends on several factors; in general, when reusing code that comes from external projects, license compatibility is the first, major driver in license selection. Licenses do have an impact on development activity, depending on the kind of project and on who controls the project’s evolution. Earlier studies showing that restrictive, copyleft licenses have a negative impact on contribution (for example Fershtman and Gandal, “Open source software: motivation and restrictive licensing”) have been refuted by other researchers (Stewart, Ammeter, Maruping, “Impacts of License Choice and Organizational Sponsorship on User Interest and Development Activity in Open Source Software Projects”). An interesting result of that research is the following graph:
What we found is that for nonmarket sponsors and new code, there is higher development activity from outside partners for code that is released under a non-copyleft license. But this implies that the code is new and not encumbered with previous license obligations, unlike, for example, the reuse of an existing, copyleft-licensed project. The graph shows the impact on development activity in open source projects, depending on license restrictiveness and on the kind of “sponsor”, that is, the entity that manages the project. “No sponsor” projects are managed by a non-coordinated community, for example by volunteers; “market sponsor” projects are coordinated by a company, while “nonmarket sponsor” projects are managed by a structured organization that is not inherently for-profit, like a development consortium (an example is the Eclipse Foundation). The research data identified a clear effect of how the project is coordinated and of the kind of license; license restrictiveness was found to be correlated with decreased contributions for nonmarket sponsors, like OSS foundations, and is in general related to the higher percentage of “infrastructural” projects (like libraries, development tools, enabling technologies) of such foundations.
In general, the license selection follows from the main licensing and business model constraints:
- When the project is derived from an external FLOSS project, then the main constraint is the original license. In this case, the basic approach is to find a suitable license from those compatible with the original license, and select among the possible business models the one that is consistent with the selected exploitation strategy.
- When one of the partners has an Intellectual Property Rights licensing policy that is in conflict with a FLOSS license, the project can select an MIT or BSD license (if compatible with an eventual upstream release) or use an intermediate releaser; in the latter case there are no constraints on license selection. If an MIT or BSD license is selected, some models become difficult to apply: for example, Open Core and Dual Licensing are hard to implement because the license lacks the reciprocity of copyleft.
- When there are no external licensing constraints, and external contributions are important, the license can be more or less freely selected; for nonmarket entities, a non-copyleft license gives a greater probability of contribution.
So, if you are creating a nonmarket entity, and you are free to choose: choose non-copyleft licenses. In the other situations, it is not so simple, and it may even be difficult to avoid previous licensing requirements.
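To make the criteria above easier to follow, here is a small illustrative sketch (my own summary, not a formal decision procedure; the function and parameter names are hypothetical):

```python
def suggest_license(derived_from_floss=False, upstream_license=None,
                    ipr_policy_conflict=False,
                    external_contributions_important=False,
                    nonmarket_sponsor=False):
    """Illustrative summary of the license-selection criteria discussed above."""
    if derived_from_floss:
        # The original license is the main constraint: pick a compatible license,
        # then a business model consistent with the exploitation strategy.
        return f"a license compatible with {upstream_license}"
    if ipr_policy_conflict:
        # Either a permissive license (if compatible with an eventual upstream
        # release) or any license via an intermediate releaser; note that
        # permissive licenses make Open Core and Dual Licensing hard to apply,
        # since they lack the reciprocity of copyleft.
        return "MIT/BSD, or any license through an intermediate releaser"
    if external_contributions_important and nonmarket_sponsor:
        # For nonmarket entities, non-copyleft licenses correlate with more
        # outside contributions.
        return "a non-copyleft license"
    return "free choice, driven by the selected business model"

print(suggest_license(derived_from_floss=True, upstream_license="GPLv2"))
```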
The point about intermediate releasers requires some additional consideration. An especially important aspect of OSS licenses is “embedded IPR”, that is, the relationship of the released code with software patents that may be held by the releasing authority. While the debate on software patents is still not entirely settled – most OSS companies vigorously fight the patenting of software-based innovations, while large software companies (for example SAP) defend the practice – most open source licenses explicitly state that software patents held by the releasing authority are implicitly licensed for use with the code. This means that business practices that rely on separate patent licensing may be incompatible with some specific OSS licenses, in particular the Apache License and the GPL family of licenses. The Eclipse Public License gives patent grants for the original work and for enhanced versions based on the original work, but not for code not directly derived from the release, while permissive licenses like BSD and MIT give no patent rights at all.
If, for compatibility or derivation reasons, a license that explicitly grants IPR rights must be selected, and the company or research organization wants to retain the ability to use its IPR in a license-incompatible way, a possible solution is the use of an intermediate releaser; that is, an entity that holds no IPR of its own, to which the releasing organization gives a copy of the source code for further publication. Since the intermediate releaser has no IPR, the license clauses that require patent grants are not triggered, while the code is still published under the required license; this approach was used, for example, by Microsoft for some of its contributions to the Apache POI project.
This may become an important point of attention for companies that are interested in releasing source code under an OSS license; most software houses are still interested in maintaining their portfolio of patents, and are not willing to risk invalidation through “accidental licensing” of IPR embedded in source code (one of the reasons why Microsoft will never sell a Linux-based system).
As I wrote at the beginning, there is a clear preference for non-copyleft licenses among a large number of consortia; but it is not possible to generalize: the panorama of OSS is so complex, right now, that even making predictions is difficult.
I am quite happy to announce the release of the third beta of our EveryDesk flash-based desktop, now available in VirtualBox format as well – so you can try it out without the need for a USB key. EveryDesk is a reinterpretation of the Linux desktop. It executes from a 4GB USB key, and allows the user to run a modern and efficient Linux desktop on most PCs without changing or removing the native operating system, such as Windows. Designed to be used in Public Administrations or as an enterprise desktop, EveryDesk is a real OS on a USB key, not a live CD, and as such allows for extensive customization and adaptation to each Public Administration’s needs. It is the result of the open sourcing of parts of the Conecta HealthDesk system, designed using the results of our past European projects COSPA (a large migration experiment for European Public Administrations), SPIRIT (open source health care), OpenTTT (OSS technology transfer) and CALIBRE (open source for industrial environments).
There are more than 120 changes from the previous edition; among them, all the medical applications are integrated in the same image – so there is no need to have a separate edition for Health Care applications. Among the updates:
- Latest edition of the DICOM browser for hospitals and medical applications; now supports per-user monitor calibration.
- Integrated medical dictionary in OpenOffice.org
- Integrated the After the Deadline OpenOffice grammar checker
- Likewise 6 Active Directory integration tool
- A fast, efficient and very capable RDP, NX and VNC connection manager: Remmina based on FreeRDP
- The latest VirtualBox
- Several ancillary additions, like a large complement of fonts
To facilitate the final bug fixing, we made the boot process visible – that will be reverted to silent boot as soon as the final testing is completed. As usual, you will find the images at our sourceforge page.
It is now time to write the closing part of our long, multi-part look at open source business models. After all the discussion on how to look at the various parts of a model and how to improve it, I will try to summarize how to look at an OSS business model, and what implications follow from a specific choice (for once, without mentioning open core).
The basic idea behind business models is quite simple: I have something or can do something – the “value proposition” – and it is more economical to pay me to do or provide this “something” than to do it yourself (sometimes it may even be impossible to find alternatives, as in natural or man-made monopolies, so doing it yourself may not be an option).
There are two possible sources for the value: property (something that can be transferred) and efficiency (something that is inherent in what the company does, and how it does it). With Open Source, “property” is usually non-exclusive (with the exception of Open Core, where part of the code is not open at all). Other examples of property are trademarks, patents, licenses… anything that may be transferred to another entity through a contract or legal transaction.
Efficiency is the ability to perform an action at a lower cost (both tangible and intangible), and is something that follows from specialization in a work area or appears thanks to a new technology. An example of the first is simply the decrease in the time necessary to perform an action as you increase your expertise in it; the first time you install a complex system it may require a lot of effort, and this effort is reduced the more experience you gain with the tasks involved in the installation.
An example of the second is the introduction of a tool that simplifies the process (for example, through image cloning), which introduces a huge discontinuity, a “jump” in the graph of efficiency versus time.
These two aspects are the basis of all the business models that we have analysed in the past; it is possible to show that all of them fall on a continuum between property and efficiency:
Among the results of our past research project, one thing we found is that property-based projects tend to have lower contributions from the outside, because a legal transaction is required for a contribution to become part of the company’s property; think for example of dual licensing: to become part of the product source code, an external contributor needs to sign over his rights to the code, to allow the company to sell the enterprise version alongside the open one.
On the other hand, models on the right-hand side of the continuum, based purely on efficiency, tend to have higher contributions and visibility, but lower monetization rates. As I wrote many times, there is no ideal business model, but a spectrum of possible models, and companies should adapt themselves to changing market conditions and adapt their model as well. Some companies start as purely efficiency-based, and build an internal property over time; others may start as property-based, and move to the other side to increase contributions and reduce the engineering effort (or to enlarge the user base, creating alternative ways of monetizing users).
This is the last post in our little mini-series on OSS business models; I hope that my archetypal three readers will have enjoyed it as much as I enjoyed writing it. Of course, I will be happy to read and respond to any comment – even negative ones.
Open Core is usually built from a set of internal open source components held together by a dual-licensed wrapper, plus proprietary modules on the outside. One of the best examples of this is Zimbra (an excellent product in its own right), but MySQL in recent editions can be included in the same group. As discussed in previous posts, dual licensing hampers contributions because it requires an explicit agreement ceding rights to the company that employs it, so that the code can be relicensed for the proprietary edition. This means that Open Core companies will have an easier time monetizing their software, but will receive far fewer contributions in exchange. As I wrote before, it is simply not possible to get something like Linux or Apache with Open Core.
Again: Open Core is not bad per se (but I would have been more cautious in calling Sugar “an open source company”, for whatever definition you have of that). But it is a tradeoff: monetization versus contributions. And my bet is on contributions, as OpenStack demonstrates – you need leverage and external resources to go beyond what a single company can do.
We have finally released the new version of the Linux-on-USB EveryDesk system, both in the plain version and in the Medical release, which includes an IHE-certified DICOM medical image browser, a complete R-based statistical environment and OpenOffice.org enhanced with a complete medical dictionary. The new version is faster, should be more compatible with older hardware, and in general was found by our beta testers to be fairly complete.
Its main appeal is that it can be tested without any installation: just download the image, copy it onto the key and try it. It boots fast, is totally modifiable, provides local applications, Prism for web apps, Chromium and several remote computing applications like the VMware View client, clients for IBM minis and mainframes, a full Java environment for Citrix, and much more.
The medical version still lacks the final DICOM certification (you will see in the startup splash screen that it has no CE marking); we are working towards the final release, which will be certified and significantly improved. The R environment is also missing some modules specific to bioengineering that were not ready in time for release; we expect to have a beta-2 version ready by mid-August.
We also have a completely new website, http://www.everydesk.org, where we have added a substantial amount of material; it will also be used to publish the training videos that we are preparing to help companies adopt the desktop for their own internal use.
We have introduced a new policy: we offer unlimited, free support and helpdesk services for all users, commercial or not. To receive private answers we only ask for an introductory email that provides details of the institution, contact points and the actual or expected number of EveryDesk installations. We will provide a separate customer ID, which will be used for issue tracking. Large-scale customers can request a private portal, with issue and bug tracking, device management and group updates, as a separate commercial option.
We welcome health care institutions that are interested in trying EveryDesk/MED, especially from developing countries; let us know what additional applications may be of interest to add to the default platform.
For more information: http://www.everydesk.org
Now that our EveryDesk is out in the wild, I would like to provide a little background on the choices we made in creating it, especially outlining some differences from previous approaches. EveryDesk starts from a set of assumptions: first of all, that every single barrier reduces the probability of adoption by an order of magnitude, and that it is extremely difficult to displace “what works” – but there are lots of environments where current OSS and commercial offerings are not perfectly suited to their intended target.
I have previously addressed the use of the UTAUT model to study, for example, Google’s ChromiumOS offering; we applied the same model to our own desktop offering, designed after the end of the COSPA project (one of the largest controlled experiments in the introduction of OSS on European Public Administration desktops). We focused our initial efforts on the Health Care sector, thanks to our contract work with the regional health care agency of the Friuli region, but later generalized the approach to a wide range of activities using the same basic infrastructure.
First of all, what is the problem with the current commercial offerings?
- Hardware obsolescence: PC refresh cycles are already widely stretched thanks to the economic crisis, forcing users to adapt to less-than-modern IT infrastructures, both server and client side;
- Security: the basic security of most commercial offerings is barely adequate; to provide sufficient protection, several layers of additional security software need to be added to the basic OS, increasing resource consumption and aggravating the situation for less-than-modern hardware;
- Management: unless you are the lucky recipient of a fully managed (and costly) infrastructure, you will have to perform or have performed several management activities like patch and software management, backups and lots more.
Thin clients reduce management, but require substantial infrastructure investments; some applications are hard to port to Terminal Services or require substantial remotization bandwidth (or lots of additional software: think about video-conferencing in a TS environment, with all the hybrid local/remote channels enabled by tools like Citrix HDX). VDI requires even more complex systems, with an offering that is still maturing (with some stunning technical hacks, actually) and that, for many installations, has an unproven return on investment.
To summarize: desktop PCs are flexible, adaptable and usable without connectivity, but complex, fragile and difficult to manage. Thin (bitmap-based, like RDP or ICA) clients are slightly easier to manage and require little support, but they require substantial infrastructure investments, cannot work detached, and have only marginally lower management costs.
We aim for a middle-ground solution: EveryDesk is a locally executed OS that, when configured, provides the same remote management advantages as thin clients without the costly infrastructure (the only thing needed is storage, which is nowadays cheap and plentiful). The system is a real install, not a live CD, so the user/administrator can install applications or customize it in depth simply by customizing the image and then replicating it for all the people working in a company or administration. Updating it is simple: just run the Update Manager!
While developing EveryDesk we identified a few potential use cases, and I would like to explain what advantages our hybrid model can offer:
- Hospital worker: our initial use case. We designed the system so that national regulations on the handling of sensitive data could be complied with without any specific effort on the user’s side; that is, to make it nearly impossible for the worker to lose or disseminate data without an explicit and voluntary breach of confidentiality, and to make it possible to identify such a breach immediately. By moving user data onto a centrally managed server, standard logging and identity management techniques can easily be applied to prevent data loss; as no private data is on the key (including passwords), losing the key or having it stolen is not sufficient to breach the system’s privacy. For our health care customization we added to the basic image an excellent radiology workstation system called O3, already in use in some Italian hospitals, a medical dictionary and some ancillary tools like the ImageJ image processing system.
- Another important use case, widely found in developing countries, is the “Internet Café”. While it is true that mobile internet access is fast becoming a fundamental infrastructure, cost and efficiency reasons still make it sensible to have a physical, shared space with PCs. EveryDesk makes it possible to provide low-maintenance PCs with no hard disks and low-cost central storage, and to simply hand out the USB keys to the attendees. If a key stops working, it is simply a matter of re-copying the image onto a new one to restore everything.
- Within companies and Public Administrations, providing a diskless PC with EveryDesk allows the efficient use of even old PCs (EveryDesk takes 150MB of RAM with both Firefox and OpenOffice.org open), while providing, thanks to VirtualBox, the set of applications that are not available on Linux. In dispersed companies with multiple sites, you can use a replicating file system (like the wonderful XtreemFS, developed within another EU-funded research project) that provides, in a totally open source solution, differential and efficient replicas across sites. This way you can use your VirtualBox image, stop it, let the system replicate it to the other sites, move to another city, fire up EveryDesk again and have all your data and state restored without the need for local persistent storage.
The idea of a real Linux install is not new – actually, some of these ideas were explored a few years ago in a Gentoo-based system called FlashLinux, which unfortunately has not been updated since 2005. We also borrowed some of the ideas behind IBM SoulPad, namely the integration of virtualization within the environment, but reversed the concept (in SoulPad the virtualization layer is at the bottom, and is used to abstract the internal virtual machine from the hardware, as well as to provide easy suspend/resume functionality).
We plan to create an education-oriented edition, integrating some of the software tools already selected in projects like EduLinux; we also plan to backport some of the customizations of municipally-sponsored distributions like MAX (Madrid Linux), to try to provide a common basis for experimentation in public administrations across Europe.