Posts Tagged OSS business models

How to analyse an OSS business model – part four

Welcome to the fourth part of our little analysis of OSS business models (first part here, second part here, third part here). It is heavily based on the Osterwalder model, and continues the examination of our hypothetical business model; after all the theoretical parts, we will try to add a simple set of hands-on exercises and tutorials based on a more or less real case. Today we will focus on the remaining parts of our model canvas (with less detail, as those parts are more or less covered by every business management course…), and we will start a little bit of “practical” exploration, to create the actors/actions model that was discussed in the previous instalments.

Cost structure: this is quite simple – the costs incurred during the operation of our business model. There are usually two kinds of models, called “cost-driven” (where the approach is the minimization of costs) and “value-driven” (where the approach is the maximization of value creation). Most models are a combination of the two; for example, many companies have a low-cost offering to increase market share, and a value offering with a higher cost and higher overall quality. In open source companies it is usually incorrect to classify the open source edition as “cost-driven”, unless a specific price and feature difference is applied between a low-level and a high-level edition.

Key partners: does our company partner with external entities? Common examples are resellers, external support providers, and so on. Additional examples may be partnerships with other companies or external groups for co-development of the OSS components (even competitors may share work on improving a reciprocally useful OSS package); sometimes the partnership may be informal (for example, with an OSS community) but fundamental nonetheless.

Key activities: what is the basis of our work in our hypothetical OSS company? Of course, software development may be a big part; other examples are marketing, support… every company has a specific mix, which is easily recognized simply by looking at what each person inside the company is doing right now.

Channels: how do we contact our customers, or potential ones? Directly? Through an external channel? Each channel provides different properties; web marketing is different from web word-of-mouth, exactly as radio advertising is different from print advertising. Choosing an appropriate channel is a difficult art, and is something that changes with time.

Customer relationships: how does the customer (or potential customer) interact with our company? Only through the software? Through online or in-person channels, like workshops? Is support self-service (the customer handles it on their own) or does it require human interaction? Is this interaction monitored? By whom? A special case is handling the relationship with contributors (who offer something of value to the company without an economic intermediary) and OSS communities, which should be handled as distinct entities (and not simply as collections of individuals). In this area I depart a little bit from the original Osterwalder model, by including not only customers but any interacting actor that provides value to the company, in one form or the other; this allows us to model more accurately all those interactions that are not strictly monetary.

Revenue streams: this is easy! How does money enter your company? Is it structured as one-time payments or as multiple recurring payments? Are there alternative forms of revenue?

Are you still with me? Now that you have collected all the data on your company, the fun begins. We need to draft the network of actors (like your key resources, customer segments, external contributors…) and link these actors together with their relationships and effects. Some relations may affect specific variables while changing others (lowering the attractiveness of the community edition may increase conversion rates, but lower overall adoption rates).
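
As a purely illustrative aid, here is a minimal sketch of how such an actor/effect network could be encoded and queried in Python; the variable names and weights below are invented, not measurements from a real company:

  # Hypothetical actor/effect network: each edge encodes how a change in one
  # variable is assumed to push another (sign and rough strength only).
  effects = {
      ("community_attractiveness", "adoption_rate"): +0.8,
      ("community_attractiveness", "conversion_rate"): -0.3,
      ("adoption_rate", "enterprise_customers"): +0.5,
      ("conversion_rate", "enterprise_customers"): +0.7,
  }

  def influenced_by(variable):
      """Return the variables that `variable` is assumed to influence."""
      return {target: weight
              for (source, target), weight in effects.items()
              if source == variable}

  # Lowering community attractiveness raises conversion but lowers adoption,
  # mirroring the trade-off described above.
  print(influenced_by("community_attractiveness"))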

In the next instalment we will provide an initial draft, and will later show how to convert this graph into a small and simple simulation.


How to analyse an OSS business model – part three

Welcome to the third part of our little analysis of OSS business models (first part here, second part here). It is heavily based on the Osterwalder model, and continues the examination of our hypothetical business model, reaching the “key resources” part. After all the theoretical parts, we will try to add a simple set of hands-on exercises and tutorials based on a more or less real case.

The topic of this part is “resources”. Key resources are the set of assets (material and immaterial) that are the basis of the company’s operations. There may be physical resources (a production plant, for example), intellectual resources (previously developed source code), human resources (your developers), financial resources (capital in the bank, loans) and other immaterial assets (your company name as a recognizable mark, “good standing” in terms of how your customers see your products…).

In our OSS company example, at least part of the immaterial assets are shared and publicly available; that is, they are non-rival. In our model we have a company that provides the community edition of the software under an open source license, while providing an “enterprise edition” with additional stability tests, support and so on. It is not correct to say that just because the source code is publicly available it is not monetizable; on the contrary, especially when the code is wholly owned in terms of copyright assignments, it is potentially a valuable asset (for some examples, look at JBoss or MySQL). Even when the code is cooperatively owned (as in a pure GPLv2 project with multiple contributors, like Linux) the “default place” is valuable in itself, and it is the reason why so many companies try to make sure that their code is included in the main kernel line, thus reducing future integration efforts and sharing the maintenance activities. Other examples that are relevant for the OSS case are trademarks (which are sometimes vigorously defended), “brand name”, and the external ecosystem of knowledge: for example, all the people capable of using and managing a complex OSS offering, creating a networked value that grows with the number of participants in the network. People becoming RedHat certified, for example, increase the value of the RedHat ecosystem as well as their own.

One of the most important resources is human: the people working on your code, installing it, supporting it. Most of those people in the OSS environment are not part of your company, but they are an extremely important asset on their own, thanks to their capability of contributing back time and effort. In exchange, these resources need to be managed, and that’s why you sometimes need figures like “community managers” (an excellent example is my friend Stefano Maffulli, community manager extraordinaire at Funambol): exactly as you have a financial officer to look after your finances (another essential resource), you should have a community manager for… the community.

To properly analyse your key resources, we can take the network model created for the channel analysis (the actor/action model) and extend it a little bit, including the missing pieces. For example, we mentioned that a potential customer may be interested in our product. Who makes it? Of course, as in any good OSS company, you have some pieces coming from the outside (other OSS projects), part coded by your developers and part coming as contributions from external groups. All of them are resources: the other OSS projects are key resources themselves, simply obtained without immediate cost but managed by some developers who are themselves a key resource; your internally developed source code is another key resource, and if you have large-scale contributions from the outside those should be considered resources too, maybe not “key” resources but important nevertheless.

The main concept is: a resource is “key” if without it your company would not be able to operate; and whenever you have a key resource, you should have a person that manages it through a clearly defined process.
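
As a small illustration of that rule, here is a minimal sketch of a resource inventory check; every resource name and assignment below is an invented example, not a prescription:

  # Invented resource inventory for our hypothetical "widgets inc.";
  # each entry records whether the resource is "key" and who manages it.
  resources = [
      {"name": "enterprise code base",      "key": True,  "owner": "lead developer",    "process": "release management"},
      {"name": "community contributions",   "key": True,  "owner": "community manager", "process": "patch review"},
      {"name": "upstream OSS components",   "key": True,  "owner": None,                "process": None},
      {"name": "conference booth material", "key": False, "owner": None,                "process": None},
  ]

  # A key resource without a designated manager or process is a red flag.
  unmanaged = [r["name"] for r in resources
               if r["key"] and not (r["owner"] and r["process"])]
  print("Key resources lacking an owner or process:", unmanaged)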

Next: cost structure!


How to analyse an OSS business model – part two

(now available: part three)

Welcome to the second part of our little analysis of OSS business models (first part here). It is based on the practical workshops that we do for companies, and so it does have a little “practical” feel to it; as for its theoretical background, it is heavily based on the Osterwalder model, which I found to be clear and comprehensible. It could easily be adapted to other conceptualizations and ontologies for describing a business model, if someone wants to use it in a teaching context.

In the first part of our analysis we presented the basic background concepts and discussed the first two aspects: customer segments and value proposition. As I mentioned before, the analysis is iterative, and should be done collaboratively (for example, by all the people working in a specific group, or by all the managers). As an example of why it should be iterative, consider the value proposition: by identifying several different value propositions, we inherently created different customer segments, which receive different value from our hypothetical “widgets, inc.”, and this fact can be leveraged through differentiated pricing or different adoption percentages (if the user perceives a higher value, the potential monetary payment may be higher, or adoption may be encouraged). Let’s continue with channels!

Channels: under this name we can place all the different ways our company interacts with the outside world. A common mistake is to consider only “paid” transactions, while (especially for open source software) a substantial part of value comes from non-monetary interactions. Examples of channel purposes may be sales, distribution (both physical and intangible), company communication, brand channelling and so on. Most channels have a simple definition (“sales”), while some are indirect and outside the control of the company, for example word of mouth. As any iPhone user can testify, word of mouth is one of the most powerful information dissemination vehicles, because it is based upon trust in people you already know, who know what you may be interested in; the flash-mob success of some online games on Facebook is a slightly modified version of this principle.

In channel analysis, the various actors in a company try to imagine (or list) all the possible ways someone from the outside may interact with the company or its products. How can a potential customer find out about widgets, inc. products? What actions need to be performed to be able to evaluate or buy? To help in this mapping exercise you can perform what is called actor/actions mapping. In this activity you start by listing all the actors that may potentially interact with you: your users (potential or not), people that may talk about your product… everything. You start with a simple table, listing the actors and the possible actions that they may want to perform. As an example:

  1. unaware user: casually finds out about widgets, inc. through advertising, word of mouth, email campaign…
  2. potential user: wants more information. Can go to the web site, download from a mirror site, ask friends, look for reviews of the product….
  3. user: wants support. Contact through email, phone, web-based system, (if there is a physical part) may ask for replacement of something…
  4. user: wants a different contract. As before, can use email, phone, a CRM system…
  5. journalist: may ask for information to write a review…

The idea is to try to map all the roles and all the actions, and list them along with a sort of small description. Then, imagine yourself performing the actions listed: who do you interact with? What are the preconditions for performing such an action? As for the customer segmentation, you repeat this exercise until nothing changes, and at this point you have a nice, complete map of all the in/out relationships of widgets, inc. with the outside world. At that point, you assign a value to each channel, in terms of what it costs to maintain and what potential advantage it brings to you. It is important to bring to the table all potential value (even negative, or intangible) because for open source software a large part of the channel network will not be directly managed by widgets, inc. but will be handled by third parties that cannot be directly influenced. So, a very simple example: Acme corp. takes the community edition of our software, adds some bells and whistles and creates a nice service business based on that. Is it a value or not? It does have a positive value: it enlarges the user base, and may provide additional contributions; on the other hand, it competes directly for at least part of the user base. The decision on how to act (the strategy part) depends on what we want to optimize, and is something that is inherently dynamic; so, as an example, what is good in the beginning (when dissemination of information and adoption is more important than monetization) may not be optimal in a later stage.
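
To make the mapping exercise a bit more concrete, here is a minimal sketch of how the actor/action table and the per-channel valuation could be recorded; all the actors, channels and value figures are invented placeholders (a few actor names simply echo the list above):

  # Invented actor/action map: each actor lists actions and the channels
  # through which the action can happen.
  actor_actions = {
      "unaware user":   {"discovers the product": ["advertising", "word of mouth", "email campaign"]},
      "potential user": {"wants more information": ["web site", "mirror download", "reviews"]},
      "user":           {"wants support": ["email", "phone", "web-based system"]},
      "journalist":     {"asks for review material": ["press contact"]},
  }

  # Rough net value per channel (invented numbers; negative means net cost).
  channel_value = {
      "advertising": -2, "word of mouth": 3, "email campaign": -1,
      "web site": 2, "mirror download": 1, "reviews": 2,
      "email": 0, "phone": -1, "web-based system": 1, "press contact": 1,
  }

  # Print the full map together with the value assigned to each channel.
  for actor, actions in actor_actions.items():
      for action, channels in actions.items():
          for channel in channels:
              print(f"{actor:15s} | {action:25s} | {channel:18s} | value {channel_value[channel]:+d}")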

This is one of the explanations for the change in licensing by some OSS companies after an initial stage designed to maximize recognition and community contributions; Wavemaker is among the examples. As I wrote many times in the past, there is no “bad” or “good” license; the point is that the license should be adopted with a rationale. Changing license (when possible) may increase certain factors and in general modify this global channel map, for example by changing the percentage of developers that adopt our software thanks to a more permissive license. The various parameters of our model (percentage of enterprise/community users, independent adopters that integrate our software within their products, return contributions…) all depend on many different external conditions that are a priori imposed by how we manage the company. So, after the creation of our channel map, an important exercise is to try to estimate these parameters, or measure them if possible; this way, we can turn our model into a simulation, giving us insight and allowing us to experiment freely to find the best match for our needs. We will give an example of such parameters after all the pieces of our business model canvas are completed.

Next time: key resources!


How to analyse an OSS business model – part one

(now available: part two and three)

One of the activities that I love is teaching: especially, within companies, helping them assess their business model and improve it. The first part is analysis; from the dictionary definition, “to separate (a material or abstract entity) into constituent parts or elements; determine the elements or essential features of”. This separation is fundamental – lots of wrong choices are made because some of the underlying decisions are taken without a clear understanding of what the company does, how it does it, what gets paid and for what. I will give a small example of such an analysis session, using as a model the Osterwalder business model canvas, which can be found here:
[Image: Osterwalder business model canvas]
Let’s start with an imaginary company, “widgets inc.”, that uses the “community/enterprise” model: a fully open source edition (usually called “community”) and an enterprise edition, released under a different license, that includes things like support and additional features. There are lots of vendors using this model, and as such it should be easy for my readers to imagine their own favourite company listed.

The exercise is simple: we start by filling in all the boxes, answering all the questions; the order that I suggest is: customer segments, value proposition, channels, key resources, cost structure, revenue streams and the rest in any order. Let’s start!

Customer segments: who are we selling to, or interacting with? Let’s start with the initial concept that not all customers have a commercial relationship with the company. Some may be using only the community edition, but they are users as well; the fact that they are not paying (yet) does not imply that they are of no value for “widgets inc.” The company may have a single segment or many segments; some offerings may be unstructured (which is always a bad thing, as it means that the effort for producing an offer cannot be automated), or simply everything may be dumped in a single bucket. The idea is to start from the differences; that is, different channels, different relationships, different profitability, different willingness to pay – every time you have a difference, it should be reflected in a segmentation of your customers. In a lot of situations this is perceived as a useless effort – especially if the company offers a single product. But separating customers across all the different variables allows for something similar to sensitivity analysis; for example, is the directly contacted customer more or less profitable than the one acquired through an indirect channel? How much do we lose by going through an intermediary?

So, let’s imagine that our “widgets inc.” is selling directly and through a reseller network. Resellers provide additional reach, thanks to their own marketing efforts, so we have at least three different segments: users of the community edition, users of the enterprise edition that have a direct relationship with widgets inc., and users of the enterprise edition that are managed by a partner. There is a potential fourth segment, that is, users of a “community enhanced” edition, for example a commercial offering by an independent vendor that enhanced the community edition and sells it in a form similar to our enterprise offering. What can we say of these segments? The enterprise edition users are paying us (of course), and the profitability of each customer depends on the cost of servicing it (which changes if we follow it directly or through a partner); a reseller will require a percentage of revenues, but on the other hand it handles some of the support costs, and covers some of the expenses for getting the customer in the first place. The community users are not paying us, but can be leveraged in several ways: as a reference (for example, GE is an Alfresco user, even if it was not paying for the enterprise edition, and this can be a reference with a commercial value) and by conversion. In fact, community users may become enterprise users, with a conversion ratio that is quite low (from 0.5% to 3%, depending on the kind of software) but that can become substantial if the user base is large enough. MySQL is a good example of such “conversion by numbers”. Sometimes segments are interlocked, in what are called “multisided markets”. An example is a marketplace like eBay, which needs a large number of buyers and sellers to guarantee the fluidity of the market itself; it may charge sellers, buyers or both, charge only on trades performed, on publication, or not at all (for example, using advertising to recover costs).
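
As a back-of-the-envelope illustration of “conversion by numbers”, here is a tiny sketch; the user base and contract value are invented, and only the 0.5%–3% conversion range comes from the paragraph above:

  # Back-of-the-envelope "conversion by numbers": the user base and contract
  # value are invented; only the 0.5%-3% conversion range comes from the text.
  community_users = 200_000
  average_contract_value = 1_000  # hypothetical yearly enterprise fee

  for conversion_rate in (0.005, 0.01, 0.03):
      paying_customers = community_users * conversion_rate
      yearly_revenue = paying_customers * average_contract_value
      print(f"conversion {conversion_rate:.1%}: "
            f"{paying_customers:,.0f} enterprise customers, "
            f"~{yearly_revenue:,.0f} in yearly revenue")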

A common segmentation is also one based on size or revenue assumptions, so you get something like an SME offering and a large company (or public administration) offering. Thanks to data from eBusinessWatch (an observatory of the European Commission) we know that the average percentage of revenues spent on ICT is roughly the same for small and large companies, but this also implies that smaller companies have a smaller available budget, while larger companies may have a much longer (and costlier) procurement process.

Value proposition: the reason why someone would want to come to widgets inc. in the first place, because we solve a problem or satisfy a need. In our case, we have two separate propositions: one for the community edition and one for the enterprise edition. The community edition may solve a practical problem for companies (for example, document management, groupware, whatever), and thus gives a concrete value in exchange for the time necessary for the customer to install and adapt the product by themselves, including the potential risk if something does not work. The enterprise edition changes this proposition, by costing something (in monetary terms) in exchange for an easier installation or a better out-of-the-box experience, support, lower risk (knowing that it is possible to ask for support) and so on. The value proposition should be explicit (to give your customers, paying and non-paying, an idea of why it is useful to invest time or money in widgets inc. products), realistic (your company will not survive in the long term if the value advantage is not there at all) and approximately quantifiable. The value proposition may be different for different customer segments; for example, a groupware system for a small company may not need to handle thousands of users. In general, the additional value of a feature or a structural property of your product depends on whether your customer is in a position to use it, and this usually shows up in the fact that there may be different advertising for different segments, pushing only those features that are relevant.

Next part: channels and resources. See you next time!


2020 FLOSS Roadmap, 2009 Version published

Having contributed to the new edition of the 2020 FLOSS Roadmap, I am happy to forward the announcement regarding the main updates and changes of the 2020 FLOSS Roadmap document. I am especially fond of the “FOSS is like a Forest” analogy, which in my opinion captures well the hidden dynamics created when many different projects form an effective synergy, one that may be difficult to perceive for those that are not within the same “forest”.

For its first edition, the Open World Forum had launched a foresight initiative unique in the world: the 2020 FLOSS Roadmap (see 2008 version). This Roadmap is a projection of the influences that will affect FLOSS until 2020, with descriptions of all FLOSS-related trends as anticipated by an international workgroup of 40 contributors over this period of time, and highlights 7 predictions and 8 recommendations. The 2009 edition of the Open World Forum gave rise to an update of this Roadmap reflecting the evolutions noted during the last months (see OWF keynote presentation). According to Jean-Pierre LaisnĂ©, coordinator of the 2020 FLOSS Roadmap and of Bull Open Source Strategy: “For the first edition of the 2020 FLOSS Roadmap, we had the ambition to bring a new light to the debate thanks to an introspective and prospective vision. This second edition demonstrates not only that this ambition has been reached, but that the 2020 FLOSS Roadmap is actually a guide describing the paths towards a knowledge economy and society based on the intrinsic values of FLOSS.”

About 2009 version (full printable version available here)

So far, so good: Contributors to the 2020 FLOSS Roadmap estimate that their projections are still relevant. The technological trends envisioned – including the use of FLOSS for virtualization, micro-blogging and social networking – have been confirmed. Contributors consider that their predictions about Cloud Computing may have to be revised, due to accelerating adoption of the concepts by the market. The number of mature FLOSS projects addressing all technological and organizational aspects of Cloud Computing is confirming the importance of FLOSS in this area. Actually, the future of true Open Clouds will mainly depend on convergence towards a common definition of ‘openness’ and ‘open services’.

Open Cloud Tribune: following the various discussions and controversies around the topic “FLOSS and Cloud Computing”, this opinion column aims to nourish the debates on this issue by freely publishing various opinions and points of view. The 2009 article questions the impact of Cloud Computing on employment in IT.

Contradictory evolutions: while significant progress was observed in line with the 2020 FLOSS Roadmap, the 2009 Synthesis highlights contradictory evolutions: the penetration of FLOSS continues, but at the political level there is still some blocking. In spite of recognition from ‘intellectuals’, the alliance between security and proprietary software has been reinforced, and has delayed the evolution of lawful environments. In terms of public policies, progress is variable. Except for Brazil, the United Kingdom and the Netherlands, which have made notable moves, no other major stimulus for FLOSS has appeared on the radar. The 2009 Synthesis questions why governments are still reluctant to adopt a more voluntary ‘FLOSS attitude’. Because FLOSS supports new concepts of ’society’ and supports the links between technology and solidarity, it should be taken into account in public policies.

Two new issues: considering what was published in 2008, two new issues have emerged which will need to be explored in the coming months: proprietary hardware platforms, which may slow the development of FLOSS, and proprietary data, which may create critical lock-ins even when software is free.

The global economic crisis: while the global crisis may have had a negative impact on services-based businesses and service vendors specializing in FLOSS, it has proved to be an opportunity for most FLOSS vendors, who have seen their business grow significantly in 2009. When it comes to Cloud-based businesses, the facts tend to show a massive migration of applications in the coming months. Impressive growth in terms of hosting is paving the way for these migrations.

Free software and the financial system: this new theme of the 2020 FLOSS Roadmap makes its appearance in the 2009 version in order to take into account the role that FLOSS can play in a system which is currently the target of much reflection.

Sun/Oracle: The acquisition of Sun by Oracle is seen by contributors to the 2009 Synthesis as a major event, with the potential risk that it will significantly redefine the FLOSS landscape. But while the number of major IT players is decreasing, the number of small and medium-size companies focused around FLOSS is growing rapidly. This movement is structured around technology communities and business activities, with some of the business models involved being hybrid ones.

FLOSS is like forests: the 2009 Synthesis puts forward this analogy to make it easier to understand the complexity of FLOSS through a simple and rich image. Like forests and their canopies – which play host to a rich bio-diversity and diverse ecosystems – FLOSS is diverse, with multiple layers and branches both in terms of technology and of creation of wealth. Like a forest, FLOSS provides vital oxygen to industry. Like forests, which have brought both health and wealth throughout human history, FLOSS plays an important role in the transformation of society. Having accepted this analogy, contributors to the Roadmap subsequently identified different kinds of forests: ‘old-growth forests’ or ‘primary forests’, which are pure community-based FLOSS projects such as Linux; ‘cultivated forests’, which are the professional and business-oriented projects such as JBoss and MySQL; and ‘FLOSS tree nurseries’, which are communities such as Apache, OW2 and Eclipse. And finally the ‘IKEAs’ of FLOSS are companies such as Red Hat and Google.

Ego-altruism: the 2009 Synthesis insists on the need to encourage FLOSS users to contribute to FLOSS, not for altruistic reasons, but rather for egoistical ones. It literally recommends that users help only when it benefits themselves. Thanks to FLOSS, public sector bodies, NGOs, companies, citizens, etc. have full, free and fair access to technologies enabling them to communicate on a global level. To make sure that they will always have access to these powerful tools, they have to support and participate in the sustainability of FLOSS.

New recommendation: to reinforce these ideas, the 2009 Synthesis of the 2020 FLOSS Roadmap added the following to the existing list of recommendations:
Acknowledge the intrinsic value of FLOSS infrastructure for essential applications as a public knowledge asset (or ‘as knowledge commons’), and consider new means to ensure its sustainable development.

Contact: http://www.2020flossroadmap.org/contact/


On licenses, communities, business models

The debate on whether the GPL is going the way of the dodo is still raging, in a way similar to the one on open core – not surprisingly, since they are both related to similar issues, intermingling technical and emotional aspects. A recent post from Black Duck indicates that (on some metric, unfortunately not very well specified) the GPLv2 for the first time dropped below 50%, while Amy Stephen points out that the GPLv2 is used in 55% of new projects (with the LGPL at 10%), something that is comparable to the results that we found in FLOSSMETRICS for the stable projects. Why such a storm? The reason is partly related to a strong association of the GPL with a specific political and ethical stance (an association that is, in my view, negative in the long term), and partly because the GPL is considered to be antithetical to so-called “open core” models, where less invasive licenses (like the Apache or Eclipse licenses) are considered to be more appropriate.

First of all, the “open core” debate is mostly moot: the “new” open core is quite different from the initial “free demo” approach (as masterfully exemplified by Eric Barroca of Nuxeo). While in the past the open core model basically meant presenting a half-solution, barely usable for testing, open core now means a combination of services and (little) added code, like the new approach taken by Alfresco – which in the past I would probably have classified in the ITSM class (installation/training/support/maintenance, rechristened “product specialist” in recent reports). Read as an example the post from John Newton describing Alfresco’s approach:

  • We must insure that customers using our enterprise version are not locked into that choice and that open source is available to them. To that end, the core system and interfaces will remain 100% open source.
  • We will provide service and customer support that provides insurance that systems will run as expected and correct problems according our promised Service Level Agreement
  • Enterprise customers will receive fixes as a priority, but that we will make these fixes available in the next labs release. Bugs fixed by the community are delivered to the community as a priority.
  • We will provide extensions and integrations to proprietary systems to which customers are charged. It is fair for us to charge and include this in an enterprise release as well.
  • Extensions and integrations to ubiquitous proprietary systems, such as Windows and Office, will be completely open source.
  • Extensions that are useful to monitor or run a system in a scaled or production environment, such as system monitoring, administration and high availability, are fair to put into an enterprise release.

The new “open core” is really a mix of services, including enhanced documentation and training materials, SLA-backed support, stability testing and much more. In this new model, the GPL is not a barrier in any way, and can be used to implement such a model without additional difficulties. The move towards services also explains why, despite the claim that “Open Core becomes the default business model”, our work in FLOSSMETRICS found that only 19% of the companies surveyed used such a model, a number that is consistent with the 23.7% found by the 451 Group. The reality is that the first implementation of open core was seriously flawed, for several reasons:

“The model has the intrinsic downside that the FLOSS product must be valuable to be attractive for the users, but must also be not complete enough to prevent competition with the commercial one. This balance is difficult to achieve and maintain over time; also, if the software is of large interest, developers may try to complete the missing functionality in a purely open source way, thus reducing the attractiveness of the commercial version.”

and, from Matthew Aslett:

I previously noted that with the Open-Core approach the open source disruptor is disrupted by its own disruption and that in the context of Christensen’s law of Conservation of Attractive Profits it is probably easier in the long-term to generate profit from adjacent proprietary products than it is to generate profit from proprietary features deployed on top of the commoditized product.

In the process of selecting a business model, then, the GPL is not a barrier to adopting this new style of open core model, and it certainly creates a barrier against potential freeriding by competitors, something that was recognized for example by SpringSource (which adopted the Apache license for most of their products):

The GPL is well understood by the market and the legal community and has notable precedents such as MySQL, Java and the Linux kernel as GPL licensed projects. The GPL ensures that the software remains open and that companies do not take our products and sell against us in the marketplace. If this happened, we would not be able to sufficiently invest in the project and everyone would suffer.

The GPL family, at the moment, has the advantage that the majority of packages are licensed under one of its licenses, making compatibility checking easier; also, there is a higher probability of finding a GPL (v2, v3, AGPL, LGPL) package to improve rather than starting from scratch – and this should also guarantee that in the future the license mix will probably continue to be oriented towards copyleft-style restrictions. Of course, there will be a movement towards the GPLv3 (reducing the GPLv2 share, especially for new projects), but as a collective group the percentages will remain more or less similar.

This is not to say that the GPL is perfect: on the contrary, the text (even in the v3 edition) lacks clarity on derivative works, has been bent too much to accommodate anti-tivoization clauses (which contributed to a general lack of readability of the text) and lacks a worldwide vision (something that the EUPL has added). In terms of community and widespread adoption the GPL can be less effective as a tool for creating widespread platform usage; the EPL or the Apache license may be more appropriate for this role, because the FSF simply has not created a license that fulfills the same role (this time, for political reasons).

What I hope is that more companies start the adoption process, under the license that allows them to be commercially sustainable and thriving. The wrong choice may hamper growth and adoption, or may limit the choice of the most appropriate business model. The increase in adoption will also trigger what Matthew Aslett mentioned as the fifth stage of evolution (still partially undecided). I am a strong believer that there will be a move toward consortia-managed projects, something similar to what Matthew calls “the embedded age”; the availability of neutral third-party networks increases the probability and quality of contributions, in a way similar to the highly successful Eclipse Foundation.


DoD OSCMIS: a great beginning of a new OSS project

OSCMIS is a very large web-based application (more than half a GB of code), created by the Defense Information Systems Agency of the US Department of Defense, and currently in use and supporting 16,000 users (including some in critical areas of the world, like a tactical site in Iraq). It is written in ColdFusion 8, but should be executable with minimal effort using an open source CFML engine like Railo; it currently uses MSSQL, but an alternative standard SQL version already exists. The application implements, among others, the following functions:

  • Balanced Scorecard—extensive balanced scorecard application implementing DISA quad view (strategy, initiatives, issues, and goals/accomplished graph) practice. Designed and built in house after commercial vendors didn’t feel it was possible to create.
  • DISA Learning Management System. Enables fast, easy course identification and registration, with registration validation or wait listing as appropriate, and automated supervisory notifications for approvals. Educational Development Specialists have control as appropriate of course curricula, venues, funds allocation data, reporting, and more. Automated individual and group SF182’s are offered. Includes many other training tools for intern management and training, competitive training selection and management, mandatory training, mentoring at all levels, etc.
  • Personnel Locator System—completely integrated into HR, Training, Security, and other applications as appropriate. System is accessible by the entire DISA public. PLS feeds the Global Address List.
  • COR/TM Qualification Management—Acquisition personnel training and accreditation status and display. Tracks all DISA acquisition personnel and provides auto notification to personnel and management of upcoming training requirements to maintain accreditation and more. Designed and built in house after the Acquisition community and its vendors didn’t feel it possible to create.
  • Action Tracking System—automates the SF50 and process throughout a civilian personnel operation.
  • Security Suite—a comprehensive suite of Personnel and Physical Security tools, to include contractor management.
  • Force Development Program—individual and group professional development tools for military members, to include required training and tracking of training status and more.
  • Network User Agreement—automated system to gather legal documentation (CAC signed PDF’s) of network users’ agreements not to harm the government network they are using. Used by DISA worldwide.
  • Telework—comprehensive telework management tool to enable users to propose times to telework, with an automated notification system (both up and down) of approval status.
  • JTD/JTMD management—provides requirements to manage billets, personnel, vacancies, and realignments, plus more, comprehensively or down to single organizations.
  • Employee On-Boarding Tool—automates and provides automated notification in sequence of actions needed to ensure that inbound personnel are processed, provided with tools and accounts, and made operational in minimal time.
  • DISA Performance Appraisal System—automates the process of collecting performance appraisal data. Supervisors log in and enter data for their employees.  This data is output to reports which are used to track metrics and missing data. The final export of the data goes to DFAS.
  • ER/LR Tracking System—provides comprehensive tracking and status of employee relations/labor relations actions to include disciplinary actions and participants of the advance sick leave and leave transfer programs.
  • Protocol Office–comprehensive event planning and management application to track all actions and materials in detail as needed to support operations for significant events, VIP visits, etc.

This is a small snippet of the full list – at the moment covering more than 50 applications; some are specific to the military world, while some are typical of large-scale organizations of all kinds (personnel management, for example). The open source release of OSCMIS is important for several different reasons:

  • It gives the opportunity to reuse an incredible amount of work, already used and tested in production in one of the largest defence groups.
  • It creates an opportunity to enlarge, improve and create an additional economy around it, in a way similar to the release of the DoD Vista health care management system (another incredibly large contribution, that spawned several commercial successes).
  • It is an example of a well-studied, carefully planned release process; while Vista was released through an indirect process (a FOIA request that left the sources in the public domain, later re-licensed by independent groups), OSCMIS was released with a good process from the start, including a rationale for license selection from Lawrence Rosen, who acted as counsel to OSSI and DISA.

The role of the people inside DISA (like Richard Nelson, chief of the Personnel Systems Support Branch), of John Weathersby of OSSI, and I am sure of many others in preparing such a large effort cannot be overstated. This is also a good demonstration of cooperation between a competence center like OSSI and a government agency, and I hope an example for similar efforts around the world. (By the way, other efforts from OSSI are worthy of attention, including the FIPS validation of OpenSSL…)

For more information: a good overview from Military IT journal, Government Computer News, a license primer from Rosen (pdf), and the press package (pdf). The public presentation will be hosted by OSSI on the first of September in Washington.

I am indebted to Richard Nelson for the kindness and support in answering my mails, and for providing additional documentation.


Some observations on licenses and forge evolution

One of the activities we are working on to distract ourselves from the lure of beaches and mountain walks is the creation of a preliminary actor/actions model for the OSS environment, trying to estimate the effect of code and non-code contributions and the impact of OSS on firms (adopters, producers, leaders – following the model already outlined by Carbone), as well as the impact of competition-resistance measures introduced by firms (pricing and licensing changes are among the possibilities). We started with some assumptions of our own, of course: first of all, rationality of actors, the fact that OSS and traditional firms have similar financial and structural properties (something that we informally observed in our study for FLOSSMETRICS, and commented on over here), and the fact that technology adoption of OSS is similar to that of other IT technologies.

Given this set of assumptions, we obtained some initial results on licensing choices, and I would like to share them with you. License evolution is complex, and synthesis reports (like the one presented daily by Black Duck) can only show a limited view of the dynamics of license adoption. In Black Duck’s database there is no accounting for “live” or “active” projects, and actually I would suggest they add a separate report covering only the active and stable ones (3% to 7% of the total, and actually those that are used in the enterprise anyway). Our model predicts that at large scale, license compatibility and business model considerations are the main drivers for a specific license choice; in this sense, our view is that for new projects the license choice has not changed significantly in the last year, and this can be confirmed by looking at the new projects appearing on SourceForge, which maintain an overall 70% preference for copyleft licensing models (higher in some specialized forges, which reach 75%, and of course lower in communities like Codeplex). Our prediction is that license adoption follows a diffusion process similar to the one already discussed here:

[Image: diffusion curve of web server adoption]

for web server adoption (the parameters are also quite similar, as is the time frame), and so we should expect a relative stabilization, and a further reduction of “fringe” licenses. In this sense, I agree with Matthew Aslett (and the 451 CAOS 12 analysis) on the fact that, despite the claims, there is actually a self-paced consolidation. An important aspect for people working on this kind of statistical analysis is the relative change in importance of forges, and the movement toward self-management of source code by commercial OSS companies. A good example comes from the FLOSSMOLE project:
[Image: growth in the number of new forge projects (FLOSSMOLE data)]
It is relatively easy to see the reduction in the number of new projects in forges, which is only partially offset by new repositories not included in the analysis, like Google Code or Codeplex; this reduction can be explained by the fact that with an increasing number of projects it is easier to find an existing project to contribute to, instead of creating one anew. An additional explanation is the fact that commercial OSS companies are moving from the traditional hosting on SourceForge to the creation of internally managed, public repositories, where the development process is more controlled and manageable; my expectation is for this trend to continue, especially for “platform-like” products (an example is SugarForge).
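
Going back to the diffusion analogy used above for license adoption, here is a minimal sketch of a logistic adoption curve; the saturation level, growth rate and midpoint are invented placeholders rather than fitted parameters:

  import math

  def logistic_adoption(t, saturation=0.7, growth=0.9, midpoint=5.0):
      """Adoption share at time t (in years) under a simple logistic diffusion.
      All three parameters are invented placeholders, not fitted values."""
      return saturation / (1.0 + math.exp(-growth * (t - midpoint)))

  # Print an adoption curve over a ten-year window.
  for year in range(11):
      print(year, round(logistic_adoption(year), 3))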


The different reasons for company code contributions

Matt Asay recently posted an intriguing article called “Apache and the future of open-source licensing”, which starts with the phrase: “If most developers contribute to open-source projects because they want to, rather than because they’re forced to, why do we have the GNU General Public License?”

It turns out that Joachim Henkel (one of the leading European researchers in the field of open source) has already published several papers on commercial contributions to open source projects, especially in the field of embedded open source. Among them, one of my favourites is “Patterns of Free Revealing – Balancing Code Sharing and Protection in Commercial Open Source Development”, which is also available in the COSPA knowledge base (a digital collection of more than 2000 papers on open source that we created and populated in the context of the COSPA project). The paper contains a nice summary analysis of the reasons for contributing back, and one of the results is:

[Image: survey results on why companies reveal code, from Henkel’s paper]

What does it mean? That licensing issues are the main reason for publishing code back, but other reasons appear within a few percentage points: the signaling advantage (being good players), R&D sharing, and many others. In this sense, my view is that the GPL creates an initial context (by forcing the publication of source code) that produces a secondary effect – reuse and quality improvement – which appears after some time. In fact, our research shows that companies need quite some time to grasp the advantages of reuse and participation; the GPL enforces participation for enough time that companies discover the added benefits, and start shifting their motivations to economic reasons, as compared to legal enforcement or legal risk.


The new form of Open Core, or how everyone was right

Right on the heels of the 451 Group’s CAOS 12 report, I had the opportunity to compare the monetization modalities that we originally classified as open core in the first edition of our work with the more recent database of OSS companies and their adopted models (such an analysis can be found in our guide as well). An interesting observation is the shifting perspective on what open core actually is; I would like to present some examples of why I believe that the “original” open core has nearly disappeared, while a “new” model is behind the more recent claims that this has become one of the preferred models for OSS companies.

In the beginning, we used the distinction of code bases as a classification criterion: an open core company was identified by the fact that the commercial product had a different source code base (usually an extension of a totally OSS one), and the license to obtain the commercial product was exclusive (so as to distinguish this from the “dual licensing” model). In the past, open core was more or less a re-enactment of the shareware of old; that is, the open source edition was barely functional, usable only to perform some testing or evaluation, but not for use in production. The “new” open core is more a combination of services and some marginal extensions, usually targeted at integration with proprietary components or at simplifying deployment and management. In this sense, the “real” part of open core (that is, the exclusive code) is becoming less and less important – three years ago we estimated that from a functional point of view the “old” open core model separated functions at approximately 70% (the OSS edition had from 60% to 70% of the functions of the proprietary product), while now this split is around 90% or even higher, complemented with assurance services like support, documentation, knowledge bases, the certification of code and so on.

Just to show some examples. DimDim: “We have synchronized this release to match the latest hosted version and released the complete source code tree. Bear in mind that features which require the Dimdim meeting portal (scheduling & recording to note) are not available in open source. There is also no limit to the number of attendees and meetings that can be supported using the Open Source Community Edition.” If you compare the editions, it is possible to see that the difference lies (apart from the scheduling and recording) in support and the availability of professional services (like custom integration with external authentication sources).

Alfresco: The difference in source code lies in the clustering and high-availability support and the JMX management extensions (all of which may be replicated with some effort by using pure OSS tools). Those differences are clearly relevant for the largest and most complex installations; from the point of view of services, the editions are differentiated through availability of support, certification (both of binary releases and of external stacks, like database and app server), bug fixing, documentation, availability of upgrades and training options.

Cynapse (an extremely interesting group collaboration system): The code difference lies in LDAP integration and clustering; the service difference lies in support, availability of certified binaries, knowledgebase access and official documentation.

OpenClinica (a platform for the creation of Electronic Data Capture systems used in pharmaceutical trials and in data acquisition in health care); from the web site: “OpenClinica Enterprise is a fully supported version of the OpenClinica platform with a tailored set of Research Critical Services such as installation, training, validation, upgrades, help desk support, customization, systems integration, and more.”

During the compilation of the second FLOSSMETRICS database I found that the majority of “open core” models were actually moving from the original definition to a hybrid monetization model that brings together several separate models (particularly the “platform provider”, “product specialist” and the proper “open core” ones) to better address the needs of customers. The fact that the actual percentage of code that is not available under an OSS license is shrinking is in my view positive: it allows the real OSS project to stand on its own (and eventually be reused by others), and it shows that the proprietary code part is less and less important in an ecosystem where services are the real key to adding value for a customer.
