How buggy is your SOAP stack

One of the fun things about attending conferences such as ApacheCon is that you get to meet users of the software I work on and hear “real world” stories of the issues they’ve run into. Yesterday I talked to one user who told a story I’ve heard over and over again from many users, but this time it really got me thinking.

Basically, a short while ago, he was tasked with upgrading some older web services to get them off of Apache Axis. They started porting to Axis2, assuming that would be the easiest path. After hitting bugs and spending a lot of time trying to work around the issues, they punted on the effort and switched to CXF. For them, CXF “just worked”.

I hear the same basic thing from many users: CXF just “works”. It does what it says it will do relatively easily (“relatively” being the operative word; nothing in WS-* is designed to be easy), and bugs, when found, are usually fixed quickly. But most users I’ve talked to haven’t found any real bugs to log. It just “works”.

For some reason, the person I talked to yesterday got me thinking about the various open source SOAP stacks and how “buggy” they are, and wondering whether that could be quantified. One of the nice things about the open source stacks is that their issue trackers are also “open”. Thus, it shouldn’t be too hard. Famous last words. 🙂

There are two main issues with trying to compare bug counts:

  1. Features: While the three major open source stacks (Axis2, CXF, Metro) all implement the same basic specs, each also has unique features. For example, CXF has bugs logged for the CORBA binding, JAX-RS, Distributed OSGi, etc. that the others wouldn’t have. Axis2 has bugs for the UDP transport, which is not available in the others. And so on.
  2. Issue tracker layout: Each of the projects handles its issue tracker differently. CXF has a single project in JIRA, and all bugs for everything are logged there. Metro has its issues split into two trackers: one for the base JAX-WS implementation and another for the WS-* stuff. Axis2 splits things across a bunch of JIRA projects: a core area, one for Rampart, one for Sandesha, one for Kandula, transports, etc.

With that in mind, let’s look at some numbers. To keep things simple, I looked at the full “bug” count for CXF, including all of its features: JAX-RS, CORBA, DOSGi, etc. Thus, the CXF counts are relatively inflated compared to the others. I didn’t look at feature requests, wishes, tasks, and so on; just bugs. For Metro, I looked at the “defects” only, for the JAX-WS RI and WSIT trackers. For Axis2, to keep things simple, I only looked at the “bugs” in the core “Axis2” JIRA and didn’t bother with the others. I didn’t need to, as you’ll see.
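For anyone who wants to pull similar counts, JIRA’s query language (JQL) makes this kind of search easy. A minimal sketch is below; the project keys and the exact filter each tracker needs are assumptions for illustration, not the precise saved filters used for the numbers that follow.

```python
# Sketch: build JQL queries for counting unresolved bugs per project.
# "project", "issuetype", and "resolution" are real JQL fields, but the
# project keys here are illustrative and may not match the real trackers.

def open_bug_jql(project_key: str) -> str:
    """Return a JQL query for unresolved issues of type Bug in one project."""
    return (f"project = {project_key} AND issuetype = Bug "
            f"AND resolution = Unresolved")

for key in ("CXF", "AXIS2"):
    print(open_bug_jql(key))
```

Pasting such a query into each tracker’s issue navigator gives the open-bug total directly, which is all the comparison below relies on.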

As of this morning, Nov 5, 2009, the counts are:

CXF     78
Metro  173 (63 core, 110 WSIT)
Axis2  514

I think those numbers paint a pretty good picture of the state of the stacks. However, it’s not the whole picture. Another useful stat is the number of “critical” or “blocker” bugs. (For Metro, P1 or P2 level.)

        Blocker   Critical
CXF        0         2
Metro      0        26
Axis2     16        90

Another interesting stat is how many bugs logged in the last year or last 6 months remain open. This shows how well the community responds to users’ bug reports.

        Last Year        6 Months
CXF     50/441 (11%)     36/237 (15%)
Metro   59/335 (18%)     44/222 (20%)
Axis2   189/332 (57%)    96/168 (57%)
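The percentages in that table are just open divided by filed. A quick sketch, using the last-year numbers from the table above, that also shows the complementary resolution rate:

```python
# Open-vs-filed percentages for bugs logged in the last year,
# using the counts from the table above: (still open, total filed).
last_year = {
    "CXF":   (50, 441),
    "Metro": (59, 335),
    "Axis2": (189, 332),
}

for project, (open_bugs, filed) in last_year.items():
    open_pct = 100 * open_bugs / filed
    resolved_pct = 100 - open_pct
    print(f"{project}: {open_pct:.0f}% still open, {resolved_pct:.0f}% resolved")
```

The resolution rate (one minus the open percentage) is what the update at the end of the post is getting at.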

Anyway, that last table is the one I find most interesting. Any non-trivial software (anything more than “Hello World”) has bugs, and as anyone who has implemented WS-* will tell you, WS-* is REALLY non-trivial. When selecting an open source project for use in your enterprise, the questions you need to ask are:

  1. How likely am I to hit a bug?
  2. If I do hit a bug, how serious is it likely to be?
  3. More importantly, if I hit a bug, how likely will a fix be made available in a timely manner?

That last question is critical. As I mentioned, all software has bugs. Getting those bugs fixed when encountered is important.

In an enterprise, you also need to ask whether there is a good company behind the project from which you can get a support agreement, training, consulting, etc. If there is, maybe the above questions matter less.

Of course, being open source, you also should ask: Can I fix it myself and submit the patch back? The communities love it when you do that. 🙂

Update: (12 hours later) I just wanted to point out that I DO agree with most of the comments posted below. Raw numbers are kind of pointless and very hard to compare; they can be interpreted many ways. For example, if you look at the last table, CXF had 441 bugs logged in the last year whereas Axis2 only had 332. Thus, an argument could easily be made that CXF is buggier than the others. Even if I pull out the JAX-RS and DOSGi bugs, the CXF count only drops to 366.

However, the one thing that I think IS comparable is the percentages in that last chart. The CXF team is resolving over 85% of the bugs that are logged; the Metro folks are resolving over 80%. Those are pretty respectable numbers. Again, that third question is important: if you do hit a bug, how likely is it to be fixed? 40%? 80%?

Of course, with Open Source, the answer can easily be 100% chance: fix it yourself and submit a patch.

6 thoughts on “How buggy is your SOAP stack”

  1. Dan,

    Thanks for the very interesting comparison of the projects. I also respect and value your effort in giving constructive criticism of Axis2, and we always welcome feedback. But in general I’m not a believer in using bug counts as a way of comparing different projects or measuring the quality of software.

    There are a few reasons behind my belief.

    A piece of software might have 0 bugs for 10 years because there aren’t enough users, or because the users don’t care enough to post a bug. On the other hand, a project having lots of bugs doesn’t mean it doesn’t work. Maybe there are thousands of users who really use it and care about improving the project.

    There are more reasons that can be found here:

    We all know very well that CXF is good in some cases and Axis2 is good in some other cases. But we can never say this one is bad and the other is good.

    Once again, thanks for this post, and please continue helping us with this kind of feedback. We really appreciate it.

  2. Dan,

    The text of your article is all good, but the numbers are meaningless.
    When comparing bug counts you also need to consider the number of users and the ease with which bugs are filed.
    Without that information, bare bug counts have no context.

  3. Of course bug counts matter, but it’s just one metric out of many.

    It’s also a testament to the committers whether they oversee the issue tracker and react to the issues reported. If bugs are not reported with enough detail, the reporters should be asked to provide those details, or the bug can be closed as incomplete, etc.

    Remember, these are bugs, not requests for new features, improvements, etc. It’s bugs!

    So having bugs open for a long time means the maintainers are not overseeing their project well enough.

    Dan, great post. It got me inspired to take a look at the state of Apache Camel.

  4. This is actually quite interesting. I’d discount the criticality setting, as that may just be a relic of the bug tracker’s GUI. That leaves the number filed, which, as others point out, could be a function of the number of users.

    1. Perhaps it’s the ratio of bug reports filed to fixed that should be looked at, as that is how you assess the stability of a project. Is the project getting buggier or better?

    2. The same goes for bug longevity.

    3. I’d also consider grouping by SOAP feature; anything in the Java-to-WSDL area could be discounted on the grounds of Moral Wrongness.

  5. Numbers aside, I have to agree that “CXF just ‘works’. It does what it says it will do relatively easily.” I recently evaluated CXF against Axis2 and the Sun reference implementation, and chose CXF because, well, it does just work!
