Apache CXF finally dropping support for Java 5

With the release of Apache CXF 2.6.1 last week, the CXF community decided to create the 2.6.x-fixes branch and move trunk to target 2.7. One of the first things we’ve started working on for 2.7 is dropping support for Java 5. This is a topic that has come up many times over the last year or so, but it’s something we’ve tried to delay as long as possible as we do have users that still use Java 5. However, we’ve finally reached a tipping point where the community feels we need to move forward with Java 6/Java 7 only.

There are a few reasons:

  • External deps – CXF uses a bunch of third-party deps like Jetty, Spring, WSS4J, GWT, ActiveMQ, etc… Some of these have already gone Java 6 only (Jetty 8, ActiveMQ and GWT for example), so CXF has been locked down to some of the older versions. To continue getting enhancements and fixes for some of those libraries, we need to update.
  • WSS4J 2.0 – WSS4J 2.0, which will introduce much faster and more scalable WS-Security processing, has already decided to go Java 6 only. It’s not yet known whether WSS4J 2.0 will be ready for CXF 2.7, but starting work on the integration would require Java 6.
  • Build system – Java 6+ contains a bunch of the dependencies that CXF needs “built in”: things like SAAJ, Activation, jaxb-api, etc… We had some profiles in the poms to detect the JDK and add those deps if running on Java 5. Removing those profiles removed hundreds of lines from the poms.
  • Simplify some code – there are a couple of places in CXF where we had to resort to reflection to call Java 6 APIs when running on Java 6 and provide an alternative path when not. We are now able to clean some of that up (a sketch of the pattern appears after this list).
  • Deployment environments going Java 6 – the popular environments that CXF is deployed into have also gone Java 6 only or will be shortly. That includes Jetty 8, Tomcat 7, Karaf 2.3/3.0 (coming soon), JBoss, etc… Even Karaf 2.2, which does support Java 5, has a few features that only work with Java 6.
  • Quite honestly, none of the CXF developers use Java 5 on a daily basis anymore. We’ve all moved on to using Java 6 (or even Java 7). Thus, the primary Java 5 testing is now done by Jenkins, and the developers only fix things if Jenkins finds an issue. Dropping Java 5 simplifies life for the CXF developer community.
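
To give a flavor of the reflection dance mentioned above, here’s a minimal sketch of the pattern. The helper class and the choice of File.setExecutable as the “Java 6 only” API are made up for illustration; this is not the actual CXF code:

    // Hypothetical sketch of the "call a Java 6 API reflectively, fall back on Java 5" pattern.
    // File.setExecutable(boolean) only exists on Java 6+, so on Java 5 we just skip it.
    import java.io.File;
    import java.lang.reflect.Method;

    public final class ExecBitHelper {
        private ExecBitHelper() {
        }

        public static void setExecutable(File f) {
            try {
                // Look the method up reflectively so this class still loads and runs on Java 5.
                Method m = File.class.getMethod("setExecutable", Boolean.TYPE);
                m.invoke(f, Boolean.TRUE);
            } catch (NoSuchMethodException e) {
                // Running on Java 5: the API isn't there, so fall back to doing nothing
                // (or to whatever Java 5 alternative is available).
            } catch (Exception e) {
                // Any other reflection failure is treated the same as "not available".
            }
        }
    }

Dropping Java 5 means code like this can just call the API directly.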

In any case, this is something that’s been a long time coming, but it’s not something that will affect people today. In 5 months or so when CXF 2.7 is released, people will need to decide what to do then. However, keep in mind that CXF 2.6.x will continue to support Java 5 and will likely be supported for at least another year. The clock is now ticking though. Tick…. Tick…. Tick…. 🙂

CXF committers on top….

I haven’t looked at the Apache Jenkins instance leader board in a few months. It’s kind of a silly and meaningless stat, but I broke the CXF build a couple of times this week and decided to click on it to see if I had dropped by much. I was VERY surprised, but very impressed, with what I saw:

Jenkins Leaderboard

ALL 10 of the top 10 on the leader board are CXF committers. Every one of them. There is definitely something to be said for having a community that encourages frequent commits, a build system that is easy to use and promotes testing, and a general mantra instilled in the community of “Don’t break the build” (and fix it quickly if you do).

Another interesting note is that 7 of the top 14 are Talend committers. Unfortunately, the other 2 Talend committers are near the bottom of the list (negative even). I guess I need to start pushing them harder to move up the list… 😉

svnpubsub for Confluence sites

Anyone on a PMC at Apache likely saw a message sent out about a month ago from infrastructure@ that mentioned the mandate that project websites (and dist areas) change over to using svnpubsub by the end of the year. If you missed the email, it’s also nicely mentioned on the project-site page. For many of the newer projects, using the Apache CMS is definitely the easiest way to achieve that. However, for some of the older projects, this was a bit of an issue. Various projects at Apache have adopted various technologies, from Forrest to Anakia to the maven-site-plugin to Confluence to, well, lots of things. Migrating all that content to the CMS could be a lot of work that the projects just don’t have the time or resources to pursue.

Luckily, Apache does have some excellent people on the infrastructure team that have worked pretty hard to make migrating to just svnpubsub a bit easier. Joe Schaefer, in particular, has worked very hard to update the various buildbot and cms scripts and such to support external builders (see their blog) for building the actual site content. As long as the technology used to build the site can accept an “output directory” flag, it’s not a hard migration. Thus, projects can remain on their current technology choice as long as they have a build script to build it.

However, that still left out the sites that are using Confluence, such as Apache CXF and Apache Camel. Those projects don’t have a build script. They relied on a proprietary (and pretty buggy and annoying) plugin to Confluence to render the pages to a local directory on the Confluence host, then a series of rsyncs from various personal crontabs to get the pages to the live site. It worked, but it was very slow, causing several hours of delay between a change and it appearing on the live site. In any case, the Confluence based sites needed a solution to migrate to svnpubsub.

A while ago, I noticed Confluence has a SOAP interface. It’s a crappy, ancient, rpc/encoded interface, but it’s at least a usable interface. The interface provides methods to get the page information, render content to HTML, etc… Basically, everything we need to render the site externally. Thus, I used Apache CXF to create a simple program that acts as an external builder to render a site, grabbing all the attachments, applying a template, etc… With that, plugging it into the svnpubsub infrastructure at Apache is easy. The program also uses Velocity as the template engine, just like the autoexport plugin, so migrating an existing template is relatively trivial.
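
The overall flow of the exporter looks roughly like the sketch below. The ConfluenceClient interface and its method names are made-up placeholders standing in for the real SOAP calls (login, fetching page info, rendering to HTML); the real tool also handles attachments, templating and cleanup at the marked spot:

    // Rough sketch of the export loop; ConfluenceClient and its methods are hypothetical
    // stand-ins for the actual Confluence SOAP operations.
    import java.io.File;
    import java.io.FileWriter;
    import java.io.Writer;
    import java.util.List;

    interface ConfluenceClient {
        String login(String user, String password);            // returns a session token
        List<String> getPageIds(String token, String spaceKey);
        String getPageTitle(String token, String pageId);
        String renderPageToHtml(String token, String pageId);
    }

    public class SiteExporter {
        public static void export(ConfluenceClient client, String spaceKey,
                                  String user, String password, File outputDir) throws Exception {
            String token = client.login(user, password);
            for (String pageId : client.getPageIds(token, spaceKey)) {
                String html = client.renderPageToHtml(token, pageId);
                // In the real tool, this is where the Velocity template, attachment download
                // and HTML/link cleanup get applied before writing.
                File out = new File(outputDir, client.getPageTitle(token, pageId) + ".html");
                Writer w = new FileWriter(out);
                try {
                    w.write(html);
                } finally {
                    w.close();
                }
            }
        }
    }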

However, by using an external program, I was able to make it MUCH MUCH better than the crappy autoexport plugin that Confluence currently uses. This includes:

  • Caching information and rendering only the changes – the program keeps a cache of page information, detects just the pages that changed, and only renders those. That helps a lot with performance.
  • It ALSO keeps track of which pages use {include} and {children} tags and can properly re-render those if the included page changes or the children structure changes. This is a big step above the autoexport plugin, which would require a complete site regeneration for these things.
  • HTML cleanup – the generated HTML is run through TagSoup as well as a custom listener that attempts to clean up the generated HTML (a minimal sketch of this step appears after this list). Confluence generates very poor HTML with all kinds of validation errors and such. The cleanup allows many pages to actually pass the W3C validator for HTML 4.01 Transitional.
  • Link fixups – along with the HTML cleanup, it also will fix various links. Confluence generates HTML that kind of assumes it’s living on the Confluence host. When copied to the live sites, those links break. This is handled, as is adding nofollow attributes to links outside Apache.
  • Faster publishing – with all the rsyncs removed, making a change in Confluence and getting it “live” is much faster. At worst, it’s less than one hour to the next buildbot build. However, if you need to, any developer can check out the site and run the mvn script to force it immediately.
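
For the HTML cleanup step mentioned above, the core idea is simply to feed the Confluence output through TagSoup’s SAX parser and re-serialize it. The sketch below shows that minimal core, assuming TagSoup is on the classpath; the custom listener that does the link fixups and the rest of the cleanup in the real exporter is not shown:

    // Minimal sketch: run Confluence's messy HTML through TagSoup so it comes out well-formed.
    import java.io.Reader;
    import java.io.Writer;

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.sax.SAXSource;
    import javax.xml.transform.stream.StreamResult;

    import org.ccil.cowan.tagsoup.Parser;
    import org.xml.sax.InputSource;

    public final class HtmlCleaner {
        private HtmlCleaner() {
        }

        public static void clean(Reader dirtyHtml, Writer cleanedHtml) throws Exception {
            // TagSoup's Parser is a SAX XMLReader that tolerates broken HTML.
            Parser tagsoup = new Parser();
            // An identity transform re-serializes the SAX events as cleaned-up HTML.
            Transformer identity = TransformerFactory.newInstance().newTransformer();
            identity.setOutputProperty("method", "html");
            identity.transform(new SAXSource(tagsoup, new InputSource(dirtyHtml)),
                               new StreamResult(cleanedHtml));
        }
    }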

In any case, both Apache CXF and Apache Camel now have their main websites migrated over to using this new process to generate their sites from Confluence. If other projects would like to try it out, I’d suggest doing an svn checkout of the Camel website area from http://svn.apache.org/repos/asf/camel/website/ and taking a look.

Apache CXF split packages resolved!

A long time ago (ok, only 5.5 years ago), when Apache CXF was first being spun up, we made a decision to try and separate the “API” and the “implementation” of the various CXF components into separate modules. This was a very good idea for the most part. However, we did this prior to really investigating OSGi requirements (OSGi was just beginning to become well known). Thus, we didn’t take into account the OSGi split-package issue. The common pattern we used was to put the interfaces in cxf-api, and the “impl” in the exact same package in cxf-rt-core. In OSGi, that’s a big problem (without using fragments).
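
To make the problem concrete, the shape of the pattern looks roughly like this (the package and class names are invented for illustration, not actual CXF classes):

    // Illustration only -- invented names showing the split-package shape.

    // In cxf-api.jar:
    package org.apache.cxf.example;

    public interface Thing {
        void doSomething();
    }

    // In cxf-rt-core.jar -- the SAME package, but a different jar:
    //
    //     package org.apache.cxf.example;
    //
    //     public class ThingImpl implements Thing {
    //         public void doSomething() {
    //         }
    //     }
    //
    // In OSGi, an Import-Package of org.apache.cxf.example is wired to exactly one exporting
    // bundle, so a consumer sees either the api half or the rt-core half of the package, never both.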

A few years ago, when users started really pushing the idea of using CXF in OSGi, we had to come up with a solution. I’ll admit we kind of took the “easy road” at the time. As part of the CXF build, we were already creating an “uber bundle” which contained all of the individual CXF modules squashed together into a single jar. We primarily did that to simplify classpath issues, dependencies, etc… for users (single jar to deal with, single pom dependency, etc…). When looking at options for OSGi, we saw that bundle as an “easy” path to OSGi. By adding OSGi manifest entries to it, CXF could be brought up in OSGi. Since ALL the packages are in that jar, there are no split-package issues.

For the most part, that setup has worked “OK”, but it’s really not ideal. CXF has a wide range of functionality, much of which may not be required by every user. The bundle ended up importing a LOT of packages. The current bundle imports about 240 packages. Thus, we had to go through and mark many of the imports “optional”. Over 160 of those imports are marked optional (and likely some of the others could be as well). This type of setup, while it “works”, is really less than ideal. It doesn’t work well with things like the OBR resolvers. It requires extra refreshes to add functionality that may be required by new bundles. It requires a lot of extra dependencies to be loaded upfront to reduce refreshes. Etc…

Over the past few weeks, Christian Schneider and I have been re-evaluating the split-package issue in CXF to try and come up with a better solution. An initial evaluation back in December found 25 packages that were split across jars. I have spent quite a bit of time the last couple weeks looking at each of those 25 split packages to figure out what can be done. We did hit some challenges and I ended up talking quite a bit with Christian about possible solutions, but I’m really happy to say that as of my commits this morning, all 25 are now resolved. All of the CXF individual modules are loadable as individual OSGi bundles. A couple of very simple test cases are working.

What kind of problems did we hit:

  • One of the main issues, as mentioned above, was an interface in API and then some sort of implementation in rt-core. For the most part, users always looked up the implementation via the bus.getExtension(..) call, so the impl is really “private”. In those cases, the impl was moved to a new package in rt-core (see the sketch after this list).
  • Abstract base classes in rt-core – similar to above. We had the interface in api, but an abstract base class in rt-core. There were then subclasses of that base class in the other modules. This really showed that the abstract base classes should have been part of the API, which is exactly what we did: we moved the classes to API.
  • Promote some stuff to API – we did have a bunch of utility things in rt-core that were easily promoted to api. Things like our CachedOutputStream, mime attachment support, etc… that were easy to just promote up. They are heavily used in several submodules so it made sense to promote them anyway.
  • Pushed some optional stuff out of API – in particular, the WS-Policy support. We did have some ws-policy related stuff in API where we could not resolve the split-package issue by promoting things up. Thus, we pushed it down into cxf-rt-ws-policy. This may have an end user impact, as users may need to have the cxf-rt-ws-policy module on their classpath to compile their code. However, they would have needed it on their classpath anyway to run the code, so the impact should be very minimal. The good thing is that this reduces the imports in cxf-api by not requiring things like Neethi for the api module. For JAX-RS users, this is good.
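
For the “look the impl up via the bus” pattern from the first bullet, a minimal sketch is below. The bus.getExtension(..) call is the one mentioned above; the SomeManager interface and the BusFactory.getDefaultBus() lookup are just assumed here for illustration:

    // Minimal sketch of the bus extension lookup; SomeManager stands in for whatever
    // interface lives in cxf-api. Callers never name the implementation class, so the
    // impl can move to a different package/module without affecting them.
    import org.apache.cxf.Bus;
    import org.apache.cxf.BusFactory;

    public class ExtensionLookupExample {

        /** Stand-in for an api interface; invented for this example. */
        public interface SomeManager {
            void doWork();
        }

        public static void main(String[] args) {
            Bus bus = BusFactory.getDefaultBus();
            SomeManager manager = bus.getExtension(SomeManager.class);
            if (manager != null) {
                manager.doWork();
            }
        }
    }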

There were other issues caused by promoting stuff into API, but for the most part, they ended up being resolvable by loading an implementation via a bus extension. That seems to have cleaned things up quite a bit.

Anyway, it’s been a very challenging few weeks as we worked through all the split-package issues. To be honest, the end result isn’t “ideal”. There is definitely more stuff in API than I’d really like to have there, but we were able to maintain a high degree of compatibility with existing applications. That’s important. Maybe when we do a “3.0” we can re-look at that, but for now, compatibility is very important. With the split-package issue resolved, Christian and I are embarking on a new mission to examine the OSGi manifests of all the individual jars to adjust import version ranges, check for optionals, update the Karaf features file, etc… One advantage of the big bundle was that we only needed to do that in one place. Now we have many more touch points. However, the end result should definitely be a better experience for end users. I’m very excited about that.