Archive for November, 2009

One of the fun things about attending conferences such as ApacheCon is that you get to meet users of the software I work on and hear “real world” stories about the issues they’ve run into. Yesterday I talked to one user who told a story I’ve heard over and over again from many users, but this time it really got me thinking.

Basically, a short while ago, he was tasked with upgrading some older web services to get them off of Apache Axis. They started porting to Axis2, assuming that would be the easiest path. After hitting bugs and spending a lot of time trying to work around them, they punted on the effort and switched to CXF. For them, CXF “just worked”.

I hear the same basic thing from many users: CXF just “works”. It does what it says it will do relatively easily (“relatively” being the operative word; nothing in WS-* is designed to be easy), and bugs, when found, are usually fixed quickly. In fact, most users I’ve talked to haven’t found any real bugs to log. It just “works”.

For some reason, the person I talked to yesterday got me thinking about the various open source SOAP stacks, how “buggy” they are, and whether that could be quantified. One of the nice things about the open source stacks is that their issue trackers are also “open”, so it shouldn’t be too hard. Famous last words. :-)

There are two main issues with trying to compare bug counts:

  1. Features: While the three major open source stacks (Axis2, CXF, Metro) all implement the same basic specs, each also has unique features. For example, CXF has bugs logged for the CORBA binding, JAX-RS, Distributed OSGi, etc. that the others wouldn’t have. Axis2 has bugs for the UDP transport, which is not available in the others. And so on.
  2. Issue tracker layout: Each of the projects handles its issue tracker differently. CXF has a single project in JIRA, and all bugs for everything are logged there. Metro has its issues split into two trackers, one for the base JAX-WS implementation and another for the WS-* stuff. Axis2 splits things across a bunch of JIRAs: a core area, one for Rampart, one for Sandesha, one for Kandula, transports, etc.

With that in mind, let’s look at some numbers. To keep things simple, I looked at the full “bug” count for CXF, including all the extra features such as JAX-RS, CORBA, and DOSGi; thus, the CXF counts are somewhat inflated compared to the others. I didn’t look at feature requests, wishes, tasks, etc., just bugs. For Metro, I looked at the “defects” only for the JAX-WS RI and WSIT trackers. For Axis2, I only looked at the “bugs” in the core “Axis2” JIRA and didn’t bother looking at the others. I didn’t need to, as you’ll see.
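If you want to reproduce counts like these yourself, a JIRA filter along these lines should get you close. Note this is a sketch: the exact project key, issue type names, and field values vary from tracker to tracker, so treat them as placeholders.

```
project = CXF AND issuetype = Bug AND resolution = Unresolved
```

Adding a clause such as `AND created >= -365d` restricts the count to bugs logged in the last year, which is how the later tables were built.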

As of this morning, Nov 5, 2009, the counts are:

Stack   Open Bugs
CXF     78
Metro   173 (63 core, 110 WSIT)
Axis2   514

I think those numbers paint a pretty good picture of the state of the stacks. However, it’s not the whole picture. Another useful stat is the number of “critical” or “blocker” bugs. (For Metro, P1 or P2 level.)

Stack   Blocker   Critical
CXF     0         2
Metro   0         26
Axis2   16        90

Another interesting stat is how many of the bugs logged in the last year or the last 6 months remain open. This shows how well each community responds to users’ bug reports.

Stack   Last Year         Last 6 Months
CXF     50/441 (11%)      36/237 (15%)
Metro   59/335 (18%)      44/222 (20%)
Axis2   189/332 (57%)     96/168 (57%)
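For the curious, the percentages in that table are just "still open" divided by "total logged" in each window. A quick sketch that recomputes them from the quoted counts:

```python
# (open, total) pairs are the bug counts quoted in the table above.
counts = {
    "CXF":   {"year": (50, 441),  "six_months": (36, 237)},
    "Metro": {"year": (59, 335),  "six_months": (44, 222)},
    "Axis2": {"year": (189, 332), "six_months": (96, 168)},
}

def open_pct(open_bugs, total):
    """Percentage of reported bugs still open, rounded to a whole percent."""
    return round(100 * open_bugs / total)

for stack, windows in counts.items():
    year = open_pct(*windows["year"])
    six = open_pct(*windows["six_months"])
    print(f"{stack}: {year}% still open (last year), {six}% (last 6 months)")
```

Running it reproduces the 11%/15%, 18%/20%, and 57%/57% figures in the table.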

Anyway, that last table is quite interesting to me. Any non-trivial software (anything more than “Hello World”) has bugs, and as anyone who has implemented WS-* will tell you, WS-* is REALLY non-trivial. When selecting an open source project for use in your enterprise, the questions you need to ask are:

  1. How likely am I to hit a bug?
  2. If I do hit a bug, how serious is it likely to be?
  3. More importantly, if I hit a bug, how likely will a fix be made available in a timely manner?

That last question is critical. As I mentioned, all software has bugs. Getting those bugs fixed when encountered is important.

In an enterprise, you also need to ask whether there is a good company behind the project from which you can get a support agreement, training, consulting, etc. If there is, maybe the above questions become less relevant.

Of course, being open source, you also should ask: Can I fix it myself and submit the patch back? The communities love it when you do that. :-)

Update: (12 hours later) I just wanted to point out that I DO agree with most of the comments posted below. Raw numbers are kind of pointless and are very hard to compare; they can be interpreted many ways. For example, if you look at the last table, CXF had 441 bugs logged in the last year whereas Axis2 only had 332, so an argument could easily be made that CXF is buggier than the others. Even if I pull out the JAX-RS and DOSGi bugs, it only drops to 366.

However, the one thing that I think IS comparable is the percentages in that last chart. The CXF team is resolving over 85% of the bugs that are logged; the Metro folks are resolving over 80%. Those are pretty respectable numbers. Again, that third question is important: if you do hit a bug, how likely is it to be fixed? 40%? 80%?

Of course, with Open Source, the answer can easily be 100% chance: fix it yourself and submit a patch.