Archive for category blog

The upcoming cloud storm, or ChromeOS revisited

Google's recent I/O 2011 conference marked the first real, marketable appearance of several noteworthy additions: Angry Birds for the web, a new Android release, and the latest netbooks based on ChromeOS. The most common reaction from analysts and journalists was a basic lack of interest, with the prevalent mood being “we already saw what netbooks are, and we don’t like them” or “you can do the same with Windows, just remove everything and leave Chrome on it”.

Well, I believe that they are wrong. ChromeOS netbooks may be a failure, but the basic model is sound and there are strong economic reasons behind Google's push in the enterprise. Yes, I said enterprise, because ChromeOS is a pure enterprise play, something that I already wrote about here. I would like to point out a few specific reasons why I believe that ChromeOS, in some form or other, will probably emerge as a strong enterprise contender:

  • It is *not* a thin client. Several commentators argued that a ChromeOS netbook is essentially a rehash of some old thin client, with most people drawing a parallel between ChromeOS and Sun Ray clients. But ChromeOS is designed to execute web applications, where the computational cost of the presentation and interaction layer is borne by the client; this means that the cost per user of providing the service is an order of magnitude lower than with bitmap-based remote display approaches. How many servers would be necessary to support half a billion Facebook users if their app was not web-based, but accessed through an ICA or RDP layer? The advantages are so overwhelming that nowadays most new enterprise apps are exclusively web-based (even if hosted on local premises).
  • It is designed to reduce maintenance costs. The use of Google's synchronization features and identity services allows management-as-a-service to be introduced fairly easily; most thin-client infrastructures require a local server to act as a master for the individual client groups, and this adds cost and complexity. The extremely simple approach used to provide replication and management is also easy to grasp and highly scalable; provisioning (from opening the box to beginning useful work) is fast and requires no external support – only an internet connection and a login. This form of self-service provisioning, despite brochure claims, is still unavailable for most thin-client infrastructures, and where available it is extremely costly in terms of licensing.
  • Updates are really failsafe. Ed Bott commented that “Automatic updates are a nightmare”, and it’s clear that he has quite a deep experience with the approach used by Microsoft. It is true that automatic updates are not a new thing – but the approach used by ChromeOS is totally different from the one used by Microsoft, Apple or most Linux distributions. ChromeOS updates are distributed like satellite set-top box updates – whole, integrated new images sent through a central management service. The build process ensures that things work, because if something breaks during the build the image will simply not be built – and not distributed. Companies scared by the latest service-pack roulette (will it run? will it stop in the middle, leaving half of the machines broken in the rain?) should be happy to embrace a model where only working updates are distributed. And, just as a comment, this model is possible only because the components (with the exception of Flash) are all open source, and thus rebuildable. To those who still doubt that such a model can work, I suggest a simple experiment: go to the chromiumos developer wiki, download the source and try a full build (a sketch of the steps follows this list). It is quite an instructive process.
  • The Citrix Receiver thing is a temporary stopgap. If there is one thing that muddied the waters in the past, it was Citrix's presentation of the integrated Receiver for the ICA protocol. It helped to create the misconception that ChromeOS is a thin-client operating system, and framed user expectations in the wrong way. The reality is that Google is pushing for a totally in-browser, HTML5 app world; ICA, RDP and other remote display features are there only to support those “legacy” apps that are not HTML5-enabled. Only when the absolute majority of apps are web-based does the economics of ChromeOS make sense.
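
For the curious, the “simple experiment” above boils down to a handful of commands. This is only a minimal sketch, assuming the repo and cros_sdk tools documented on the chromiumos developer wiki; the manifest URL and board name here are illustrative assumptions and may differ from what the wiki currently prescribes:

import subprocess

# fetch the Chromium OS tree and build a complete image, per the developer wiki
steps = [
    ["repo", "init", "-u", "http://git.chromium.org/git/manifest"],  # assumed manifest URL
    ["repo", "sync"],                                   # downloads the full source tree
    ["cros_sdk", "--", "./build_packages", "--board=x86-generic"],
    ["cros_sdk", "--", "./build_image", "--board=x86-generic"],
]
for cmd in steps:
    subprocess.run(cmd, check=True)                     # abort at the first failing step

If any step fails, no image is produced at all – which is exactly the property the update model relies on.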

On the other hand, it is clear that there are still substantial hurdles – actually, Google's approach with the Chromebooks may even fail. In fact, I am not a big fan of the notebook format for enterprise computing, where a substantial percentage of users are still at a desk, and not nomadic. I believe that having a small, very low-cost device is more interesting than leasing notebooks, especially if it is possible to lower the cost of the individual box to something like $100 (yes – believe me, it is actually possible). Also, Google must make it easier to try ChromeOS on traditional PCs, something that at the moment is totally impossible; a more generic image, executed from a USB key, would go a long way towards demonstrating the usefulness of the approach.

2 Comments

Composing the Open Cloud puzzle

“When it rains, it pours”: this was my first thought while reading the RedHat announcement of yet another cloud project, OpenShift (a new PaaS) and CloudForms (IaaS). The excellent Stephen O’Grady of RedMonk provided a very complete Q&A covering the announcement and its implications quite well. This is just the latest of many, many announcements along the same lines, which makes the open source cloud environment a very active and crowded one.

What exactly was announced? RedHat leveraged its acquisition of Makara, a PaaS vendor with quite an interesting technical solution, and its own JBoss/DeltaCloud platform to create a combination of PaaS and IaaS, designed to facilitate the deployment and management of Java, PHP, Python and Ruby applications. It falls in a market that recently saw VMware's entry with CloudFoundry, alongside previous entrants like Eucalyptus, OpenNebula, Nimbus, OpenStack and the still unreleased Yahoo IaaS.

It would seem that there is really too much to choose from – but the reality is that each piece does have a place, and the biggest problem in finding this place is that we forget to look at the big picture on the puzzle box. The first point is that there is no single market for cloud/IaaS/PaaS/whatever. There are several distinct areas, and for each one there is a potentially different combination:

  • The software (SaaS) producer, which at small scale may be interested in hosting its own small IaaS+PaaS, with the opportunity to move it or dynamically expand outside (on Amazon, for example) when bursts come in. People still think that every company has a full swing of consumption, from very little to galactic scale, while the majority of SaaS producers tend to have a very static set of customers, and customer acquisition is slow (compared to procurement time) and easily managed. With a full cloud you pay extra for scalability – the capability to grow dynamically by leaps and bounds – and that extra may not be economically justified. Also, many companies have already invested in hardware and networking for their own in-house small datacenter, and until the amortization period ends those companies (for financial and fiscal reasons) will not turn to a full external cloud. So, the next best thing is to create their own little IaaS (for single-customer instances) and PaaS (for shared customers) on the hardware they already have. The economics here is related to the reduction in management costs, and the opportunity to simplify code deployment.
  • The telcos: my own sources indicate that lots of telcos are working feverishly to create their own cloud offering, now that there is a potential solution that has no license costs. They could have built it earlier with VMware or Citrix, but the licensing alone would have placed them out of the market; with the new open source stacks, a relatively limited investment makes it possible to reuse the massive stock of hardware already in house to consolidate and offer an IaaS to individual customers, usually mid-to-large companies willing to outsource their varying computational loads, or to dematerialize their server farm entirely. It is mostly a transitional market, but one that will be important for at least the next five years.
  • The system integrators: large-scale integrators and consultancies are pushing a set of integrated offerings that cover everything – technical, management, legal and procurement aspects – for companies that are willing to move their IT completely off their backs. It is not a new model – Accenture, Atos Origin et al. have been doing it since the beginning of time – but thanks to the availability of open source components it now comes with more compelling economics.
  • The small VARs: there is a large number of very small companies that target the very low end of the market, and that provide a sort of staircase to the cloud for small and mostly static workloads. It is a market of small consultants, system administrators, and consultancies that cover the part of the market that is largely invisible to larger entities, and that is starting to move towards web apps, hosted email and so on – but still needs to manage some legacy system that is currently in-house.
  • The service providers: it has just started as a movement, but in my opinion it will become quite big: the specialized, cloud-based service (an example is the freshly released Sencha.io). I believe that these specialized providers will be among the most important contributors to the individual OSS components that they use internally to deliver a service, and will provide a backbone of updates for most of the ancillary parts like storage, DB, identity and so on.

And the obvious question is: who will win? In my view, if every actor works properly, there will be more than one dominant actor. The reason is related to the two main factors of adoption, namely packaging and acceleration. Packaging is the ease of installation, deployment and management, and in general the user- and developer-friendliness of the whole. So, taking for granted that RedHat will open source it (as it did with all its previous acquisitions), the main advantage of OpenShift and CloudForms will be ease of installation for RedHat and clone users – which are still the majority of enterprise Linux deployments. It will be natural for a RHEL user to start converting some servers into a RedHat IaaS and PaaS, given that those systems will also fall under simplified management through RedHat Satellite; the main interest in this case will come from system integrators and developers that have already standardized on RedHat. A potential future enhancement could be a push towards better identity management (a sore point for most IaaS offerings, with the exception of the Grid-inspired Nimbus): RedHat has a good Java-based provisioning and identity server in-house, which can probably be expanded with external tools like ForgeRock's OpenAM/OpenIDM. The first one to solve the “cloud identity” problem will have a gold mine in its hands.

OpenStack is more relevant for bare-metal installs, and will probably get the majority of new, unbranded installs thanks to its massive acceleration – the incredible rate of change that is clearly visible in the number of new features delivered in each release. The combination of OpenStack and CloudFoundry is probably of interest mainly to large companies that want to create an internal development environment (reducing developer friction) – in this sense, I believe that the market for secondary services will probably be limited, but with a substantial contribution from “independents” that improve the platform to facilitate their own internal jobs.

The end result is that we will still see several competing platforms, each evolving to occupy a niche and taking (and giving) pieces of code from the others. The obvious consequence will be pressure on proprietary vendors, which in my opinion will have a very hard time keeping the same development pace as these collaboratively developed solutions.


No Comments

Nokia is one of the most active Android contributors, and other surprises

Updated: added other examples from WebKit, IGEL and RIM

Yes, it may be a surprise, but that’s the beauty of Open Source – you never know where your contributions will be found. In this regard, I received a kind mention from my friend Felipe Ortega of the Libresoft group of a nice snippet of research by Luis Canas Diaz, “Brief study of the Android community“. Luis studied the contributions to the Android code base and split them using the email domain of the originator, assigning those with “google.com” or “android.com” as internal, and classifying the others. Here is a sample of the results:

(Since October 2008)
# Commits Domain
69297 google.com
22786 android.com
8815 (NULL)
1000 gmail.com
762 nokia.com
576 motorola.com
485 myriadgroup.com
470 sekiwake.mtv.corp.google.com
422 holtmann.org
335 src.gnome.org
298 openbossa.org
243 sonyericsson.com
152 intel.com

Luis added: “Having a look at the names of the domains, it is very surprising that Nokia is one of the most active contributors. This is a real paradox: the company that states that Android is its main competition helps it! One of the effects of using libre software licenses for your work is that even your competition can use your code; currently there are Nokia commits in the following repositories:

git://android.git.kernel.org/platform/external/dbus

git://android.git.kernel.org/platform/external/bluetooth/bluez”

In fact, it was Nokia's participation in Maemo (and later MeeGo) and its funding of the D-Bus and BlueZ extensions that were later taken up by Google for Android. Intrigued by this result, I made a little experiment: I cloned the full Android Gingerbread (2.3) Git repo, separated the parts that come from preexisting projects like the Linux kernel and the various external dependencies (many tens of projects – including, to my surprise, the full Quake source code…), keeping for example Chromium but removing WebKit. I then took apart the external projects, counted Google's contributions there in an approximate way, and folded everything back. You get roughly 1.1GB of source code directly developed or contributed by Google, which means that around 75% of the source code of Android comes from external projects. Not bad, in terms of savings.
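
As an aside, the domain split used in Luis's study is easy to approximate with a few lines of scripting. A minimal sketch, assuming a local clone of one of the Android repositories (the internal-domain list and the 15-entry cutoff simply mirror the table above):

import subprocess
from collections import Counter

# collect the author email of every commit in the current repository
emails = subprocess.run(
    ["git", "log", "--format=%ae"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# count commits per email domain
domains = Counter(e.rsplit("@", 1)[-1] for e in emails if "@" in e)

internal = {"google.com", "android.com"}   # classified as internal in the study
for domain, commits in domains.most_common(15):
    tag = "internal" if domain in internal else "external"
    print("%8d  %-30s %s" % (commits, domain, tag))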

Update: many people commented on the strangeness of having fierce competitors working together in ways that are somehow “friendly” towards a common goal. Some of my twitter followers also found the 75% figure for non-Google contributions to be high, and this update is meant to be an answer to both. First of all, there is quite a long history of competitors working together in open source communities; the following sample of Eclipse contributors provides an initial demonstration of that:

[Image: sample of Eclipse contributors]

But there are many other examples as well. WebKit, the web rendering component used in basically all the mobile platforms (except Windows Mobile) and on the desktop within Chrome and Safari, was originally developed by the KDE free software community, taken up by Apple and more recently co-developed by Nokia, Samsung, RIM and Google:

[Image: Chromium Notes: Who develops WebKit?]

And on the WebKit page, it is possible to find the following list:

“KDE: KDE is an open source desktop environment and application development framework. The project to develop this software is an informal association. WebKit was originally created based on code from KDE’s KHTML and KJS libraries. Although the code has been extensively reworked since then, this provided the basic groundwork and seed code for the project. Many KDE contributors have also contributed to WebKit since it became an independent project, with plans that it would be used in KDE as well. This has included work on initially developing the Qt port, as well as developing the original code (KSVG2) that provides WebKit’s SVG support, and subsequent maintenance of that code.

Apple:  Apple employees have contributed the majority of work on WebKit since it became an independent project. Apple uses WebKit for Safari on Mac OS X, iPhone and Windows; on the former two it is also a system framework and used by many other applications. Apple’s contribution has included extensive work on standards compliance, Web compatibility, performance, security, robustness, testing infrastructure and development of major new features.

Collabora:  Collabora has worked on several improvements to the Qt and GTK+ ports since 2007, including NPAPI plugins support, and general API design and implementation. Collabora currently supports the development of the GTK+ port, its adoption by GNOME projects such as Empathy, and promotes its usage in several client projects.

Nokia:  Nokia’s involvement with the WebKit project started with a port to the S60 platform for mobile devices. The S60 port exists in a branch of the public WebKit repository along with various changes to better support mobile devices. To date it has not been merged to the mainline. However, a few changes did make it in, including support for CSS queries. In 2008, Nokia acquired Trolltech. Trolltech has an extensive history of WebKit contributions, most notably the Qt port.

Google:  Google employees have contributed code to WebKit as part of work on Chrome and Android, both originally secret projects. This has included work on portability, bug fixes, security improvements, and various other contributions.

Torch Mobile:  Torch Mobile uses WebKit in the Iris Browser, and has contributed significantly to WebKit along the way. This has included portability work, bug fixes, and improvements to better support mobile devices. Torch Mobile has ported WebKit to Windows CE/Mobile, other undisclosed platforms, and maintains the QtWebKit git repository. Several long-time KHTML and WebKit contributors are employed by Torch Mobile.

Nuanti:  Nuanti engineers contribute to WebCore, JavaScriptCore and in particular develop the WebKit GTK+ port. This work includes porting to new mobile and embedded platforms, addition of features and integration with mobile and desktop technologies in the GNOME stack. Nuanti believes that working within the framework of the webkit.org version control and bug tracking services is the best way of moving the project forward as a whole.

Igalia:  Igalia is a free software consultancy company employing several core developers of the GTK+ port, with contributions including bugfixing, performance, accessibility, API design and many major features. It also provides various parts of the needed infrastructure for its day to day functioning, and is involved in the spread of WebKit among its clients and in the GNOME ecosystem, for example leading the transition of the Epiphany web browser to WebKit.

Company 100:  Company 100 has contributed code to WebKit as part of work on Dorothy Browser since 2009. This work includes portability, performance, bug fixes, improvements to support mobile and embedded devices. Company 100 has ported WebKit to BREW MP and other mobile platforms.

University of Szeged:  The Department of Software Engineering at the University of Szeged, Hungary started to work on WebKit in mid 2008. The first major contribution was the ARMv5 port of the JavaScript JIT engine. Since then, several other areas of WebKit have been tackled: memory allocation, parsers, regular expressions, SVG. Currently, the Department is maintaining the official Qt build bots and the Qt early warning system.

Samsung:  Samsung has contributed code to WebKit EFL (Enlightenment Foundation Libraries) especially in the area of bug fixes, HTML5, EFL WebView, etc. Samsung is maintaining the official Efl build bots and the EFL early warning system.”

So, we see fierce competitors (Apple, Nokia, Google, Samsung) co-operating in a project that is clearly of interest to all of them. In a previous post I made a similar analysis for IGEL (a popular developer of thin clients) and HP/Palm:

“The actual results are:

  • Total published source code (without modifications) for IGEL: 1.9GB in 181 packages; total amount of patch code: 51MB in 167 files (the remaining files are not modified). Average patch size: 305KB. Patch percentage on total published code: 2.68%
  • Total published source code (without modifications) for Palm: 1.2GB in 106 packages; total amount of patch code: 55MB in 83 files (the remaining files are not modified). Average patch size: 664KB. Patch percentage on total published code: 4.58%

If we add the proprietary parts and the modified code, we end up in the same approximate range found in the Maemo study, that is around 10% to 15% of code that is either proprietary or modified OSS directly developed by the company. IGEL reused more than 50 million lines of code, and modified or developed around 1.3 million lines of code. …. Open Source allows the creation of a derived product (in both cases of substantial complexity) reducing the cost of development to 1/20, the time to market to 1/4, the total staff necessary to more than 1/4, and in general reduces the cost of maintaining the product after delivery. I believe that it would be difficult, for anyone producing software today, to ignore this kind of result.”

This is the real end result: it would be extremely difficult for companies to compete without the added advantage of Open Source. It is simply uneconomical to try to do everything from scratch while competing companies work together on non-differentiating elements; for this reason it should not be considered strange that Nokia is an important contributor to Google's Android.


29 Comments

A small WebP test

I was quite intrigued by the WebP image encoding scheme created by Google, based on the idea of a single-frame WebM movie. I performed some initial tests at its release, and found it to be good but probably not groundbreaking. But I recently had the opportunity to read a blog post by Charles Bloom with some extensive tests, which showed that WebP was clearly on a par with a good Jpeg implementation at medium and high bitrates, but substantially better for small bitrates or constrained encodings. Another well-executed test is linked there, and provides a good comparison between WebP, Jpeg and Jpeg2000, which again shows that WebP shines – really – in low-bitrate conditions.

So, I decided to see if it was true, took some photos out of my trusted Nokia N97 and tried to convert them in a sensible way. Before flaming me about the fact that the images were not in raw format: I know it, thank you. My objective is not to perform a perfect test, but to verify Google's claim that WebP can be used to reduce the bandwidth consumed by traditional, already encoded images while preserving most of the visual quality. This is not a quality comparison, but a “field test” to see if the technology works as described.

The process I used is simple: I took some photos (I know, I am not a photographer…) selected for a mix of detail and low-gradient areas; compressed them to 5% quality using GIMP with all Jpeg optimizations enabled and took note of the size; then encoded the same source image with the WebP cwebp encoder, without any parameter twiddling, using the “-size” command line option to match the size of the compressed Jpeg file. The WebP image was then decoded as PNG (a sketch of the steps is below). The full set was uploaded to Flickr here, and here are some of the results:
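
For reference, the size-matched conversion can be scripted in a few lines. A rough sketch, assuming the cwebp and dwebp tools from libwebp are on the PATH; file names are illustrative, and older cwebp builds may need a PNG intermediate instead of direct Jpeg input:

import os
import subprocess

source = "11062010037.jpg"            # original camera Jpeg
jpeg_5 = "11062010037_q5.jpg"         # the 5% quality Jpeg saved from GIMP
budget = os.path.getsize(jpeg_5)      # byte size the WebP file must match

# "-size" asks cwebp to target a given file size in bytes
subprocess.run(["cwebp", "-size", str(budget), source, "-o", "out.webp"], check=True)

# decode to PNG for side-by-side viewing
subprocess.run(["dwebp", "out.webp", "-o", "out.png"], check=True)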

[Photos: Congress Centre, Berlin – top: original Jpeg; middle: 5% Jpeg; bottom: WebP at the same size as the 5% Jpeg]

[Photos: LinuxNacht Berlin – top: original Jpeg; middle: 5% Jpeg; bottom: WebP at the same size as the 5% Jpeg]

[Photos: Salzburg castle – top: original Jpeg; middle: 5% Jpeg; bottom: WebP at the same size as the 5% Jpeg]

[Photos: Venice – top: original Jpeg; middle: 5% Jpeg; bottom: WebP at the same size as the 5% Jpeg]

There is an obvious conclusion: at small file sizes, WebP handily beats Jpeg (and a good Jpeg encoder at that – the libjpeg-based one used by GIMP) by a large margin. Using a Jpeg recompressor and repacker it is possible to even out the results a little, but only marginally. With some test materials, like cartoons and anime, the advantage increases substantially. I can safely say that, given these results, WebP is quite an effective low-bitrate encoder, with substantial size advantages over Jpeg.


1 Comment

On Symbian, Communities, and Motivation

(This is an updated repost of an article originally published on OSBR)

I have followed with great interest the evolution of the Symbian open source project – from its start, through its tentative evolution, and up to its closure this month. This process of closing down is accompanied by the claim that “the current governance structure for the Symbian platform – the foundation – is no longer appropriate.”

It seems strange. Considering the great successes of Gnome, KDE, Eclipse, and many other groups, it is curious that Symbian was not able to follow along the same path. I have always been a great believer in OSS consortia, because I think that the sharing of research and development is a main strength of the open source model, and I think that consortia are among the best ways to implement R&D sharing efficiently.

However, to work well, consortia need to provide benefits in terms of efficiency or visibility to all the actors that participate in them, not only to the original developer group. For Nokia, we know that one of the reasons to open up Symbian was to reduce the porting effort. As Eric Raymond reports, “they did a cost analysis and concluded they couldn’t afford the engineering hours needed to port Symbian to all the hardware they needed to support. (I had this straight from a Symbian executive, face-to-face, around 2002).”

But to get other people to contribute their work, you need an advantage for them as well. What can this advantage be? For Eclipse, most of the companies developing their own integrated development environment (IDE) found it economically sensible to drop their own work and contribute to Eclipse instead. It allowed them to quickly reduce their maintenance and development costs while increasing their quality as well. The Symbian foundation should have done the same thing, but apparently missed the mark, despite having a large number of partners and members. Why?

The reason is time and focus. The Eclipse foundation had, for quite some time, basically used only IBM resources to provide support and development. In a similar way, it took WebKit (which is not quite a foundation, but follows the same basic model) more than two years before it started receiving substantial contributions, as can be found here.

And WebKit is much, much smaller than Symbian and Eclipse. For Symbian, I would estimate that it would require at least three or four years before such a project could start to receive important external contributions. That is, unless it is substantially re-engineered so that the individual parts (some of which are quite interesting and advanced, despite the claims that Symbian is a dead project) can be removed and reused by other projects as well. This is usually the starting point for long-term cooperation. Some tooling was also not in place from the beginning; the need for a separate compiler chain – one that was not open source and in many aspects not as advanced as the open source ones – was an additional stumbling block that delayed participation.

Another problem was focus. More or less, everyone understood that for a substantial period of time Symbian would be managed and developed mainly by Nokia. And Nokia made a total mess of differentiating what part of the platform was real, what was a stopgap for future changes, what was end-of-life, and what was the future. Who would invest, in the long term, in a platform where the only entity that could gain from it was not even that committed to it? And before flaming me for this comment, let me say that I am a proud owner of a Nokia device, I love most Nokia products, and I think that Symbian could still have been a contender, especially through a speedier transition to Qt for the user interface. But the long list of confusing announcements and delays, changes in plans, and lack of focus on how to beat competitors like iOS and Android clearly reduced the willingness of commercial partners to invest in the venture.

Which is a pity – Symbian still powers most phones in the world and could still enter the market with some credibility. But this latest announcement sounds like a death knell. Obtain the source code through a DVD or USB key? You must be kidding. Do you really think that setting up a webpage with the code and preserving a read-only Mercurial server would be too much of a cost? The only thing it shows is that Nokia has stopped believing in an OSS Symbian.

(Update: after the change of CEO and the extraordinary change in strategy, it is clear that the reason for ditching the original EPL code was related to its inherent patent grant, which still provides a safeguard against Nokia patents embedded in the original Symbian code. There is a new release of Symbian under a different, non-OSS license; the original code is preserved in this sourceforge project, while Tyson Key preserved the incubation projects and much ancillary documentation, like wiki pages, at this Google code project.)



4 Comments

The neverending quest to prove Google evilness. Why?

Ah, my favorite online nemesis (in a good sense, as we always have a respectful and fun way of having a disagreement) Florian Mueller is working full-time to demonstrate, in his own words, “a clear pattern of extensive GPL laundering by Google, which should worry any manufacturer or developer who cares about the IP integrity of Android and its effect on their proprietary extensions or applications. It should also be of significant concern to those who advocate software freedom.” Wow. Harsh words, at that, despite the fact that Linus Torvalds himself dismissed the whole thing with “It seems totally bogus. We’ve always made it very clear that the kernel system call interfaces do not in any way result in a derived work as per the GPL, and the kernel details are exported through the kernel headers to all the normal glibc interfaces too” (he also, amusingly, suggested that “If it’s some desperate cry for attention by somebody, I just wish those people would release their own sex tapes or something, rather than drag the Linux kernel into their sordid world”. Ah, I love him.)

In fact, I expressed the same point to Florian directly (both in email and in a few tweets), but it seems very clear that the man is on a crusade, given how he describes Google's actions: “the very suspect copying of Linux headers and now these most recent discoveries, it’s hard not to see an attitude. There’s more to this than just nonchalance. Is it hubris? Or recklessness? A lack of managerial diligence?” or “It reduces the GPL to a farce — like a piece of fence in front of which only fools will stop, while “smart” people simply walk around it.”

Well, there is no such thing, and I am not saying this because I am a Google fanboy (heck, I even have a Nokia phone :-) ) but because this full-blown tempest is actually useless, and potentially damaging for the OSS debate.

I will start with the core of Florian's argument:

  • Google took GPL-licensed code headers;
  • it “sanitized” them with a script that removes copyrighted information;
  • what is left is no longer GPL (in particular, it is not copyrighted).

Florian sees this as a way to “work around” the GPL. Well, it's not, and there are sensible reasons for saying so. Let's look at one of the files in question:

#ifndef __HCI_LIB_H
#define __HCI_LIB_H

#ifdef __cplusplus
#endif
#ifdef __cplusplus
#endif
static inline int hci_test_bit(int nr, void *addr)
{
	return *((uint32_t *) addr + (nr >> 5)) & (1 << (nr & 31));
}
#endif

or, for something longer:

#ifndef __RFCOMM_H
#define __RFCOMM_H

#ifdef __cplusplus
#endif
#include <sys/socket.h>
#define RFCOMM_DEFAULT_MTU 127
#define RFCOMM_PSM 3
#define RFCOMM_CONN_TIMEOUT (HZ * 30)
#define RFCOMM_DISC_TIMEOUT (HZ * 20)
#define RFCOMM_CONNINFO 0x02
#define RFCOMM_LM 0x03
#define RFCOMM_LM_MASTER 0x0001
#define RFCOMM_LM_AUTH 0x0002
#define RFCOMM_LM_ENCRYPT 0x0004
#define RFCOMM_LM_TRUSTED 0x0008
#define RFCOMM_LM_RELIABLE 0x0010
#define RFCOMM_LM_SECURE 0x0020
#define RFCOMM_MAX_DEV 256
#define RFCOMMCREATEDEV _IOW('R', 200, int)
#define RFCOMMRELEASEDEV _IOW('R', 201, int)
#define RFCOMMGETDEVLIST _IOR('R', 210, int)
#define RFCOMMGETDEVINFO _IOR('R', 211, int)
#define RFCOMM_REUSE_DLC 0
#define RFCOMM_RELEASE_ONHUP 1
#define RFCOMM_HANGUP_NOW 2
#define RFCOMM_TTY_ATTACHED 3
#ifdef __cplusplus
#endif
struct sockaddr_rc {
	sa_family_t	rc_family;
	bdaddr_t	rc_bdaddr;
	uint8_t		rc_channel;
};
#endif

What can we say about these? They contain interfaces, definitions and constants that are imposed by compatibility or efficiency reasons. For this reason, they are not copyrightable – or, more properly, they would be excluded by the standard test for copyright infringement, the abstraction-filtration-comparison test. In fact, it would not be possible to guarantee compatibility without such an expression.

But – Florian guesses – the authors put a copyright notice on top! That means that it must be copyrighted! In fact, he claims “The fact that such notices are added to header files shows that the authors of the programs in question consider the headers copyrightable. Also, without copyright, there’s no way to put material under a license such as the GPL.”

Actually, it's simply not true. I can take something and add a claim of copyright at the beginning, but that does not imply that I have a real copyright on it. Let's imagine that I write a file containing one number, and put a (c) notice on top. Do I have a copyright on that number? No, because the number is not copyrightable in itself. The same goes for the headers included above: to test for copyright infringement, you must first remove all material that is mandated for standards compatibility, then scènes à faire (a principle in copyright law that says that certain elements of a creative work are not protected when they are mandated by or customary for an environment), then code that cannot be expressed in alternative ways for performance reasons. What is left is potential copyright infringement. Now, let's apply the test to the code I have pasted. What is left? Nothing. Which is why, up to now, most of the commentators (who work on the kernel) said that this was just a big, large, interesting but ultimately useless debate.

In fact, in the BlueZ group the same view was presented:

“#include <bluetooth/bluetooth.h> is only an interface contract. It contains only constants and two trivial macros. Therefore there is no obligation for files that include bluetooth.h to abide by the terms of the GPL license.  We will soon replace bluetooth.h with an alternate declaration of the interface contract that does not have the GPL header, so that this confusion does not arise again.” (Nick Pelly)

It is interesting that this comes up, again and again, in many projects; it happened in Wine (importing vs. recoding Windows header definitions) and I am sure in countless others. The real value of this debate would lie not in claiming that Google is almost certainly a horrible, profiteering parasite that steals GPL code, but in verifying that the headers used do not contain copyrighted material, because that would be an extremely negative thing. Has this happened? Up to now, I am still unable to find a single example. Another, totally different thing is asking whether this is impolite – taking without explicitly asking permission on a mailing list, for example. But we are not looking at headlines like “Google is impolite”; we are looking at “Google’s Android faces a serious Linux copyright issue”, or “More evidence of Google’s habit of GPL laundering in Android”.

That’s not constructive – that’s attention-seeking. I would really love to see a debate about the copyrightability of header files (I am not claiming that *all* header files are non-copyrightable, of course) or the copyrightability of assembly work (the “Yellow Book” problem). But such a debate is not happening, or it is drowned under a deluge of “Google is evil” half-proofs.

Of course, that’s my opinion.


1 Comment

Oracle, Sun, Java: lawsuits mark the exit road

I already wrote a few words on the Oracle/Google lawsuit here and here, and I would like to thank all those who found them interesting enough to read and comment on. I recently found a very interesting post by Java author extraordinaire James Gosling, where he answers some of his readers’ comments. In the post there are many interesting ideas, and a few points that I believe are not totally accurate – or, better, may be explained in a different way. In particular, I believe that the role of Java in the enterprise will remain but become “legacy” – that is, stable, plain and boring – while the real evolution will move from Java to… something else.

James clearly points out that JavaME fragmentation was a substantial hurdle for developers, and believes that, in a lesser way, this may be true for Android as well. While it is true that fragmentation was a problem for Java on mobile, this was a common aspect of mobile development at the time (go ask a Windows Mobile developer about fragmentation. And see a grown man cry, as the song says). The problem of JavaME was not fragmentation but lack of movement – the basic toolkits, the UI components and most of the libraries remained, for one reason or another, largely unchanged apart from a few bug fixes. JavaFX should have been promoted much, much earlier, and would have had a great impact on software development, like (I believe) the more recent Qt releases from Nokia and their idea of declarative user interfaces.

If we compare this with the rest of Java, we see a much stronger push towards adding libraries, components and functionality: all things that made Java one of the best choices for software developers in the enterprise space, because developers could trust Sun to update and extend the platform, making their job easier and faster. It was the same approach that made Microsoft the king of software: create lots of tools and libraries for developers, sometimes even pushing more than one approach at a time to see what sticks (like Fahrenheit), or trying very experimental, skunkworks approaches that were later turned into more mature projects (like WinG). JavaEE and JavaSE followed the same model, with a consistent stream of additions and updates that created confidence in developers – and, despite all the naysayers, for enterprise software Java was portable with very little effort, even for very large applications.

JavaME was not so lucky; partly to guarantee uniform licensing, Sun was forced to do everything on its own (a striking difference with Android, which – if you check the source code – includes tons of external open source projects), limiting the attainable rate of growth. Some features that we now take for granted (like web browsing) were not included by default, or were implemented by vendors in inconsistent ways, because Sun never gave guidance on the roadmap and product evolution; multimedia was mostly an afterthought, usually forcing developers to create (or buy) external libraries to implement anything more complex than a video or audio player. As I wrote before: JavaFX should have been announced much, much earlier, and not as a reactive answer to the competition, but as part of a long-term roadmap like the one JavaEE had and the rest of Java missed.

This is, in my opinion, one of the real reasons for the lawsuit: Sun (now Oracle) was unable to create and maintain a real roadmap outside of JavaEE (and partly JavaSE), and especially for JavaME it constantly followed – never led. This, as any developer will tell you, is never a good position; it's full of dust and you miss all the scenery. So, since Oracle is really more interested in its own markets (the DB and the applications) and does not really care about software developers, ecosystems or openness, it probably believes that lawsuits have a better return on investment.


7 Comments

OpenWorldForum 2010: Join me in the OSS governance session!

I will have the opportunity to present our most recent results on best practices for OSS adoption at the Open World Forum governance session, moderated by Martin Michlmayr (HP community manager) and Matteo Melideo (QUALIPSO project consortium leader). The program is available here, and packs quite a substantial number of high-quality talks. I hope to see you there!

The Open World Forum is the premier global summit meeting bringing together decision-makers from across the world to discuss the technological, financial and social impact of open technologies, and to cross-fertilize ideas and initiatives in these areas. At the hub of the Free/Open Source revolution, the event was first staged in 2008, and takes place every year in Paris with more than 140 speakers from 40 countries, a 1,500-strong international audience and numerous conferences, workshops and think-tanks. The 2010 Open World Forum will be held on 30 September and 1 October, under the banner of “Open is the future: Open Innovation – Open Enterprise – Open Society”. Organized by a unique network of partners including the main Free/Open Source communities and most of the leading IT players worldwide, the Open World Forum is a must-attend event to discover the latest trends in open technology, business and social issues and to explore the future of Free/Open Source initiatives. It also offers a unique opportunity to share insights and best practices with many of the most respected visionaries, entrepreneurs and community leaders, and network with technology gurus, CxOs, analysts, CIOs, researchers, government leaders and investors from six continents. To request an invitation, please visit http://www.openworldforum.org


No Comments

Oracle/Google: the strategy behind Sun, Oracle and the OSS implications

In my previous post, I tried to provide some details on what in my opinion were the most relevant legal and licensing aspects of the recently launched Oracle lawsuit against Google. I would now like to provide some perspective on what may have been the motives behind this lawsuit, and what the possible implications are for the Java and Open Source communities.

First of all, it is clear that, as I mentioned before, Google turned the lawsuit into a positive event for its (slightly battered) public image. By framing the lawsuit against Android as an attack on the open source community, Google effectively created a positive image for itself, as a David unjustly accused by the Oracle giant. It is also clear that the lawsuit itself is actually quite weak, focusing on a copyright claim that is very vague (given the fact that Google never claimed to use Java), and a set of patent claims for techniques that are probably not relevant anymore (especially in the post-Bilski era). One of the possible reasons for this is to be sure that even the widely different Dalvik machine would be at least partially covered; the other is the fact that all of Classpath was included in the OIN “System Components” covered technologies. Since both Oracle and Google are part of OIN, I suspect that Oracle wanted to avoid any potential complication coming from this “broken marriage”.

But – this is not the only relevant aspect. Actually, an angle that is probably more important is the impact of the lawsuit on Java, Java for mobile, Android and the OSS communities that were part of the Sun technology landscape.

Enterprise Java: no change at all. Java is a very strong brand among corporate developers, and I doubt that the lawsuit will change anything; in fact, all the various licensees of the full Java specification are on perfectly legal grounds, and face no problem at all. Despite the opportunistic claims by Miguel de Icaza (who suggests that Mono/C# would have been a better choice), there is nothing in the lawsuit implying that other Java or Java-related technologies may be affected (Mono and the CLR are actually in the same situation as Dalvik).

Mobile Java: as I mentioned before, the lawsuit places the last stone on JavaME's grave. The only potentially relevant route away from the land of the dead could have been JavaFX; that was too little, too late – incomplete, missing several pieces, missing a roadmap, and uselessly released as a binary-only project. Android used the Java language and extended it with mobile-specific classes that were more modern and more useful for recent smartphones and even non-phone applications (like entertainment, automotive and navigation devices). It is no surprise that, coupled with the Google brand name, Android surged in popularity so much as to become a threat.

Oracle OSS projects: Oracle has always been an opportunistic user of open source. With the term “opportunistic” I am not implying any negative connotation: it is simply the observation that Oracle dabbled in open source whenever there was an opportunity to reduce its own research and development costs. If you look at Oracle's projects, it is clear that they are all related to infrastructural functionality for the Oracle run-time and for developer tools (using Eclipse as a basis). I was not able to find any “intrinsic” open source project started or adopted by Oracle that was not focused on this approach. So, for those projects, I believe that there will be no difference; for example, I believe that activity on the Oracle-sponsored BTRFS project will not change significantly. Oracle actually does not care at all if it is seen as an “enemy”, or if its projects are not used anymore by others. What it cares about is for its patches to be included in Linux. Remember that Oracle is an “old style” company; it has two basic product lines – its database and its set of enterprise applications. Everything else is not quite necessary, and probably will be abandoned.

Sun OSS projects: as for Sun, a long preamble first. Sun has always been, first and foremost, an engineering company – something like Silicon Graphics in the past, or more recently Google. Sun open sourced things of value whenever it was necessary to establish a common platform or protocol, like NFS or NIS+; but it was the advent of Jonathan Schwartz that actually turned things towards open source. The ponytailed CEO tried to turn the Sun behemoth towards a fully open strategy, but was unable to manage the conversion before being ousted. It is a pity, actually – Sun could have leveraged its size, its large number of technical partners and its wealth of technology to become a platform provider like RedHat – but 10 times larger. The problem with this strategy is that it implies a large amount of cooperative development, and thus a substantial downsizing of the company itself. The alternative could have been an open-core-like strategy, for example creating a scalable JVM designed to auto-partition code execution across networks of computers. The basic JVM could have been dual licensed, with the enhanced one released on a proprietary basis; this could have leveraged the exceptional Sun expertise in grid and parallel computing, filesystems and introspection systems.

But Sun never managed to complete the path – it wavered left and right, with lots of subprojects that were started and abandoned. The embrace of PostgreSQL and its later abandonment; the subsequent embrace of MySQL, which was then not integrated anywhere; the creation of substantial OSS projects from its proprietary offerings, followed by a loss of interest as soon as a project started to become a threat to the proprietary edition. It is no surprise that, despite the incredible potential, Sun never recouped much of its OSS investment (despite the great growth in its latest quarters, OSS services remained a small portion of its revenues). Now that Oracle has taken control, I believe that Sun's “openness” will quickly fade towards small, utilitarian projects – so, even if now everyone looks at Oracle with anger, no one at Oracle could care less.

Why did Oracle sue? The blogosphere is exploding with possible answers; my own two hypotheses are:

  • Oracle found that a substantial technology it acquired (Java) was losing value in what is the hottest tech market today, namely mobile systems. Sun had no credible plan to update JavaME and no credible alternative, and thus Android (which is loosely Java-based) is at the same time a threat to an acquired asset and (from Oracle's point of view) a stolen technology. Since anyone can follow the same path, Oracle wants to make sure that no one else tries to leverage Java to create an unlicensed (and uncontrolled) copy.
  • Oracle wants a piece of the mobile enterprise market, and the alternatives are unavailable (Apple does not want anything to do with Java, BlackBerry is a JavaME licensee, Windows Mobile is backed by arch-rival Microsoft). Android is working well and growing incredibly fast, and Oracle wants a piece of it; Google probably rebuffed initial contacts, and now Oracle is showing its guns to make Google obey. I am skeptical, however, that Google would back down on what is becoming its most important growth path. The lawsuit itself is quite weak, and Google would risk too much by licensing the TCK from Oracle; it would basically destroy its opportunity for independent development. It is never a good idea to corner someone – if you leave no alternative, fighting is the only answer.

I believe that the first is the most probable; Larry Ellison should know that cornering Google would not be sufficient to make them capitulate – they have too much to lose. But this will not be sufficient to create an opportunity for Oracle; I believe that the lawsuit will actually bring nothing to Oracle, and lots of advantages to Google. But only time will tell; the only thing that I can predict for sure right now is that Solaris will quickly fade from sight (as it will be unable to grow at the same rate as Linux), exactly like AIX and HP-UX: mature, backroom tech, but nothing that you can base a growth strategy on.


14 Comments

Oracle/Google: the patents and the implications

Just as LinuxCon ended, Oracle announced that it has filed suit for patent and copyright infringement against Google over its implementation of Android; as an Oracle spokesperson said, “In developing Android, Google knowingly, directly and repeatedly infringed Oracle’s Java-related intellectual property. This lawsuit seeks appropriate remedies for their infringement … Android (including without limitation the Dalvik VM and the Android software development kit) and devices that operate Android infringe one or more claims of each of United States Patents Nos. 6,125,447; 6,192,476; 5,966,702; 7,426,720; RE38,104; 6,910,205; and 6,061,520.” (some more details in the copy of Oracle's complaint). Apart from the slight cowardice of waiting until after LinuxCon to announce it, the use of the Boies Schiller legal team (the same as SCO's) would be ironic on its own (someone is already calling the company SCOracle).


Let’s skip the patent analysis for a moment, and focus on the reasons behind this. Clearly, it is a move typical of mature industries: when a competitor is running past you, you try to put a wrench in its engine. It is a typical move, and one example of why doing things by the book is wrong in this modern, collaborative world. Not only that, but I believe that previous actions by Sun made this threat clearly useless – even dangerous.

Let’s clear the table of the actual patent claims: the patents themselves are quite broad and quite generic – good examples of what should not be patented (the security domain one in particular; look at sheet 5 and you will find an illuminating flowchart that boils down to: do you have the rights to do it? If yes, do it; if no, do nothing. How brilliant). Also, the Dalvik implementation is quite different from the old JRE one, and I strongly suspect that the actual Dalvik method is substantially different. But that is not important. I believe that there are two main points that Oracle should have checked before filing the complaint (but, given the use of Boies Schiller, I believe they still have to learn from the SCO debacle): first of all, Dalvik is not Java, and Google never claimed any form of Java compatibility. Second, there is protection for the patents as well, just hidden in recent history.

On the first point: in the complaint, Oracle claims that “The Android operating system software “stack” consists of Java applications running on a Java-based object-oriented application framework, and core libraries running on a “Dalvik” virtual machine (VM) that features just-in-time (JIT) compilation”. On copyrights, Oracle claims that “Without consent, authorization, approval, or license, Google knowingly, willingly, and unlawfully copied, prepared, published, and distributed Oracle America’s copyrighted work, portions thereof, or derivative works and continues to do so. Google’s Android infringes Oracle America’s copyrights in Java and Google is not licensed to do so … users of Android, including device manufacturers, must obtain and use copyrightable portions of the Java platform or works derived therefrom to manufacture and use functioning Android devices. Such use is not licensed. Google has thus induced, caused, and materially contributed to the infringing acts of others by encouraging, inducing, allowing and assisting others to use, copy, and distribute Oracle America’s copyrightable works, and works derived therefrom.”

Well, it is wrong. Wrong because Google did not copy Java – and actually never mentions Java anywhere. In fact, the Android SDK produces Dalvik (not Java) bytecodes, and the decoding and execution pattern is quite different (which is one of the reasons why older implementations of Dalvik were so slow – they were made to conserve memory bandwidth, which is quite limited in cell phone chipsets). What Google did was to “copy” (or, for a better word, take inspiration from) the Java language; but as the recent SAS-vs-WPS lawsuit found, “copyright in computer programs does not protect programming languages from being copied”. So, unless Oracle can find pieces of documentation that were lifted verbatim from Sun's, I believe that the copyright part is quite weak.

As for patents, a little reminder: while copyright covers specific expressions (a page of source code, a Harry Potter book, a music composition), software patents cover implementations of ideas – and, if the patent is broad enough, all possible implementations of an algorithm (let's skip for the moment the folly of granting monopoly protection on ideas; you already know what I think about it). So, if Oracle had in any way, now or in the past, given full access to those patents through a transferable license, Google is somehow protected there as well. And – guess what? That really happened! Sun released the entire Java JDK under the GPLv2 plus the Classpath exception, granting with that release full rights of use and redistribution of the IPR covering what was released. This is different from the TCK specification, which Google wisely never licensed, because the TCK license grants the patents only for enhancements or modifications to the basic JDK as released by Sun.

But, you would say, Dalvik is independent from OpenJDK, so the patents are not transferred there. So, include the code that is touched by the patents from OpenJDK within Dalvik – compile it, and make a connecting shim – including it in a way that is GPLv2-compatible. The idea (just an idea! and IANAL, of course…) is that through the release of the GPL code Sun gave an implicit license to the embedded patents, a license that travels with the code itself. So, if it is possible to create an aggregate entity of the Dalvik and OpenJDK code, the Dalvik part would become a derivative of the GPL-licensed code, and would obtain the same patent protection as well. That would be a good use of the GPL, don't you think?

What will be the result of the lawsuit? First of all, the open source credibility of Oracle, already damaged by the OpenSolaris affair, is now destroyed. It is a pity – they have lots of good people there, both internal and through the Sun acquisition; after all, they are among the 10 largest contributors to the Linux kernel. That is something that will be very difficult to recover.

Second, Google now has a free, quite important gift: the attention has been moved from their recent net neutrality blunder, and they are again the David of the situation. I could not imagine a better gift.

Third, with this lawsuit Oracle basically announced to the world that Java in mobile is dead. This was actually something that most people already knew – but seeing it in writing is always reassuring.

Update: Miguel de Icaza claims that “The Java specification patent grant seems to be only valid as long as you have a fully conformant implementation”, but that applies only to the Standard Implementation of Java, not OpenJDK. Sorry Miguel – nice try. More luck next time.

Update 2: cleaned up the language in the paragraph on patents, ideas and implementations, which was badly worded.

Update 3: clarified the Dalvik+OpenJDK idea.


34 Comments