CXF committers on top…

I haven’t looked at the Apache Jenkins instance leader board in a few months. It’s kind of a silly and meaningless stat, but I broke the CXF build a couple times this week and decided to click on it to see if I had dropped by much. I was VERY surprised, but also very impressed, by what I saw:

Jenkins Leaderboard

ALL 10 of the top 10 on the leader board are CXF committers. Every one of them. There is definitely something to be said for a community that encourages frequent commits, a build system that is easy to use and promotes testing, and a general mantra instilled in the community of “Don’t break the build” (and fix it quickly if you do).

Another interesting note is that 7 of the top 14 are Talend committers. Unfortunately, the other 2 Talend committers are near the bottom of the list (negative even). I guess I need to start pushing them harder to move up the list… 😉

svnpubsub for Confluence sites

Anyone on a PMC at Apache likely saw a message sent out about a month ago from infrastructure@ that mentioned the mandate that project websites (and dist areas) change over to using svnpubsub by the end of the year. If you missed the email, it’s also nicely mentioned on the project-site page. For many of the newer projects, using the Apache CMS is definitely the easiest way to achieve that. However, for some of the older projects, this was a bit of an issue. Various projects at Apache have adopted various technologies, from Forrest to Anakia to the maven-site-plugin to Confluence to, well, lots of things. Migrating all that content to the CMS could be a lot of work that the projects just don’t have the time or resources to pursue.

Luckily, Apache does have some excellent people on the infrastructure team that have worked pretty hard to make migrating to just svnpubsub a bit easier. Joe Schaefer, in particular, has worked very hard to update the various buildbot scripts and cms scripts and such to allow it to support external builders (see their blog) to build the actual site content. As long as the technology to build the site can accept an “output directory” flag to output the site, it’s not a hard migration. Thus, projects can remain on their current technology choice as long as they have a build script to build it.

However, that still left out the sites using Confluence, such as Apache CXF and Apache Camel. Those projects don’t have a build script. They relied on a proprietary (and pretty buggy and annoying) plugin to Confluence to render the pages to a local directory on the Confluence host, then a series of multiple rsyncs from various personal crontabs to get the pages to the live site. It worked, but it was very slow, causing delays of several hours between a change and its appearance on the live site. In any case, the Confluence based sites needed a solution to migrate to svnpubsub.

A while ago, I noticed Confluence has a SOAP interface. It’s a crappy, ancient, RPC/encoded interface, but it’s at least a usable interface. The interface provides methods to get page information, render content to HTML, etc… Basically, everything we need to render the site externally. Thus, I used Apache CXF to create a simple program that acts as an external builder to render a site, grabbing all the attachments, applying a template, etc… With that, plugging it into the svnpubsub infrastructure at Apache is easy. The program also uses Velocity as the template engine, just like the autoexport plugin, so migrating an existing template is relatively trivial.
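The overall shape of such a builder is simple: ask the service for the pages in a space, render each one through a template, and write the results into an output directory that svnpubsub can commit. Here’s a rough sketch (Python for brevity; the real tool is Java/CXF talking to the SOAP endpoint, and the class and method names below are illustrative stand-ins, not Confluence’s actual API):

```python
import os

# Illustrative stand-in for the Confluence SOAP service.  The real builder
# talks to the RPC/encoded endpoint via a CXF client; a plain dict of
# page-title -> rendered HTML keeps this sketch self-contained.
class FakeConfluenceService:
    def __init__(self, pages):
        self._pages = pages

    def get_pages(self, space):
        return list(self._pages)

    def render_content(self, space, title):
        return self._pages[title]

# Stand-in for the Velocity template the autoexport plugin would use.
TEMPLATE = "<html><body><h1>{title}</h1>{body}</body></html>"

def render_site(service, space, out_dir):
    """Render every page in a space to static HTML files under out_dir."""
    os.makedirs(out_dir, exist_ok=True)
    written = []
    for title in service.get_pages(space):
        html = TEMPLATE.format(title=title, body=service.render_content(space, title))
        path = os.path.join(out_dir, title.replace(" ", "-").lower() + ".html")
        with open(path, "w") as f:
            f.write(html)
        written.append(path)
    return written
```

From svnpubsub’s point of view, the only contract is that output directory: buildbot runs the builder, then commits whatever landed in `out_dir`.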

However, by using an external program, I was able to make it MUCH MUCH better than the crappy autoexport plugin that Confluence currently uses. This includes:

  • Caching information and incremental rendering. The program keeps a cache of page information, detects just the pages that changed, and only renders those. Helps with performance.
  • It ALSO keeps track of which pages use {include} and {children} tags and can properly re-render those if an included page changes or the children structure changes. This is a big step above the autoexport plugin, which would require a complete site regeneration for these things.
  • HTML cleanup – the generated HTML is run through tagsoup as well as a custom listener that attempts to clean up the generated HTML. Confluence generates very poor HTML with all kinds of validation errors. The cleanup allows many pages to actually pass the w3c validator for HTML 4.01 Transitional.
  • Link fixups – along with the HTML cleanup, it also fixes various links. Confluence generates HTML that assumes it’s living on the Confluence host; when the pages are copied to the live site, those links break. This is handled as well, as is adding nofollow attributes to links outside Apache.
  • Faster publishing – with all the rsyncs removed, making a change in Confluence and getting it “live” is much faster. At worst, it’s less than one hour to the next buildbot build. However, if you need to, any developer can check out the site and run the mvn script to force it immediately.
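Two of those items lend themselves to small sketches (Python for brevity; all names, patterns, and the Confluence host URL are simplified illustrations of the idea, not the actual implementation): the {include}/{children} bookkeeping is just a walk over a reverse-dependency map, and the link fixups are a couple of rewrites over the rendered HTML.

```python
import re

def pages_to_rerender(changed, included_by):
    """included_by maps a page to the set of pages that {include} it (or list
    it via {children}).  Everything reachable from the changed page must be
    re-rendered, so walk the map transitively."""
    todo, seen = [changed], set()
    while todo:
        page = todo.pop()
        if page in seen:
            continue
        seen.add(page)
        todo.extend(included_by.get(page, ()))
    return seen

def fix_links(html, confluence_host="https://cwiki.apache.org"):
    """Rewrite absolute links back to the Confluence host as site-relative,
    and add rel="nofollow" to links pointing outside apache.org.
    Host URL and patterns here are illustrative."""
    html = html.replace(confluence_host + "/confluence/display/", "/")
    def nofollow(match):
        url = match.group(1)
        return match.group(0) if "apache.org" in url else '<a rel="nofollow" href="%s"' % url
    return re.sub(r'<a href="(http[^"]+)"', nofollow, html)
```

The key point of the first function is that a single page edit only fans out to the pages that actually depend on it, instead of forcing the full-site regeneration the autoexport plugin needed.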

In any case, both Apache CXF and Apache Camel now have their main websites migrated over to using this new process to generate their sites from Confluence. If other projects would like to try it out, I’d suggest doing an svn checkout of the Camel website area from http://svn.apache.org/repos/asf/camel/website/ and taking a look.

Apache CXF split packages resolved!

A long time ago (ok, only 5.5 years ago), when Apache CXF was first being spun up, we made a decision to try and separate the “API” and the “implementation” of the various CXF components into separate modules. This was a very good idea for the most part. However, we did this prior to really investigating OSGi requirements (OSGi was just beginning to become well known). Thus, we didn’t take into account the OSGi split-package issue. The common pattern we used was to put the interfaces in cxf-api and the “impl” in the exact same package in cxf-rt-core. In OSGi, that’s a big problem (without using fragments).
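In manifest terms, the problem looks like this (package name chosen purely as an illustration):

```
# cxf-api manifest: exports the interfaces
Export-Package: org.apache.cxf.endpoint

# cxf-rt-core manifest: exports the implementations in the *same* package
Export-Package: org.apache.cxf.endpoint
```

An importing bundle gets wired to exactly one exporter of `org.apache.cxf.endpoint`, so it only ever sees half the classes in the package. Fragments are the usual workaround, but they’re awkward, hence the problem.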

A few years ago, when users started really pushing the idea of using CXF in OSGi, we had to come up with a solution. I’ll admit we kind of took the “easy road” at the time. As part of the CXF build, we already were creating an “uber bundle” which contained all of the individual CXF modules squashed together into a single jar. We primarily did that to simplify classpath issues, dependencies, etc… for users (single jar to deal with, single pom dependency, etc…). When looking at options for OSGi, we saw that bundle as an “easy” path to OSGi. By adding OSGi manifest entries, CXF can be brought up in OSGi. Since ALL the packages are in that jar, no split-package issues.

For the most part, that setup has worked “OK”, but it’s really not ideal. CXF has a wide range of functionality, much of which may not be required by every user. The bundle ended up importing a LOT of packages – the current bundle imports about 240. Thus, we had to go through and mark many of the imports “optional”. Over 160 of those imports are marked optional (and likely some of the others could be as well). This type of setup, while it “works”, is really less than ideal. It doesn’t work well with things like the OBR resolvers. It requires extra refreshes to add functionality that may be required by new bundles. It requires a lot of extra dependencies to be loaded upfront to reduce refreshes. Etc…
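For readers who haven’t seen it, marking an import optional uses the standard OSGi `resolution:=optional` directive, so the big bundle’s manifest is full of entries like these (package names illustrative, not copied from the real manifest):

```
Import-Package: org.apache.neethi;resolution:=optional,
 org.springframework.beans.factory;resolution:=optional,
 javax.xml.ws
```

The bundle resolves even if the optional packages are absent, but the framework wires them only if they happen to be there at resolve time, which is exactly why adding functionality later forces the extra refreshes mentioned above.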

Over the past few weeks, Christian Schneider and I have been re-evaluating the split-package issue in CXF to try and come up with a better solution. An initial evaluation back in December found 25 packages that were split across jars. I have spent quite a bit of time the last couple weeks looking at each of those 25 split packages to figure out what can be done. We did hit some challenges and I ended up talking quite a bit with Christian about possible solutions, but I’m really happy to say that as of my commits this morning, all 25 are now resolved. All of the CXF individual modules are loadable as individual OSGi bundles. A couple of very simple test cases are working.

What kind of problems did we hit:

  • One of the main issues, as mentioned above, was an interface in API with some sort of implementation in rt-core. For the most part, users always looked up the implementation via the bus.getExtension(..) call, so the impl is really “private”. In those cases, the impl was moved to a new package in rt-core.
  • Abstract base classes in rt-core – similar to the above. We had the interface in api, but an abstract base class in rt-core, with subclasses of that base class in the other modules. This really showed that the abstract base classes should have been part of the API, which is exactly what we did: moved the class to API.
  • Promote some stuff to API – we did have a bunch of utility things in rt-core that were easily promoted to api. Things like our CachedOutputStream, mime attachment support, etc… were easy to just promote up. They are heavily used in several submodules, so it made sense to promote them anyway.
  • Pushed some optional stuff out of API – in particular, WS-Policy support. We had some ws-policy related stuff in API for which we could not resolve the split-package issue by promoting up. Thus, we pushed it down into cxf-rt-ws-policy. This may have an end-user impact, as users may need to have the cxf-rt-ws-policy module on their classpath to compile their code. However, they would have needed it on their classpath to run the code anyway, so the impact should be very minimal. The good thing is that this reduces the imports in cxf-api by not requiring things like neethi for the api module. For JAX-RS users, this is good.
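The bus.getExtension(..) pattern from the first bullet is, at its core, a registry keyed by interface: callers in api-land only ever see the interface, so the implementation class can live in any (private) package. A minimal sketch of the idea (Python for brevity; the class names are illustrative, not CXF’s actual extension interfaces):

```python
# Minimal stand-in for CXF's Bus extension registry.
class Bus:
    def __init__(self):
        self._extensions = {}

    def set_extension(self, iface, impl):
        self._extensions[iface] = impl

    def get_extension(self, iface):
        return self._extensions[iface]

class ConduitFactory:
    """Lives in the 'api' module; this is all callers ever reference."""
    def create(self, address):
        raise NotImplementedError

class DefaultConduitFactory(ConduitFactory):
    """Lives in rt-core, in any package at all -- nobody imports it directly."""
    def create(self, address):
        return "conduit:" + address

bus = Bus()
bus.set_extension(ConduitFactory, DefaultConduitFactory())
```

Because nothing outside rt-core names the implementation class, moving it to a new package to avoid the split is completely invisible to user code.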

There were other issues caused by promoting stuff into API, but for the most part, they ended up being resolvable by loading an implementation via a bus extension. That seems to have cleaned things up quite a bit.

Anyway, it’s been a very challenging few weeks as we worked through all the split package issues. To be honest, the end result isn’t “ideal”. There is definitely more stuff in API than I’d really like to have there, but we were able to maintain a high degree of compatibility with existing applications. That’s important. Maybe when we do a “3.0” we can re-look at that, but for now, compatibility is very important. With the split package issue resolved, Christian and I are embarking on a new mission to examine the OSGi manifests of all the individual jars to adjust import version ranges, check for optionals, update the karaf features file, etc… One advantage of the big bundle was we only needed to do that in one place. Now we have many more touch points. However, the end result should definitely be a better experience for end users. I’m very excited about that.

Apache Camel in OSGi

After writing my post about setting up Apache CXF in OSGi, I’ve had a couple people ask about how to do the same with Apache Camel. Camel is a bit more complex due to all the libraries that it may pull in depending on which components you use, but it’s not really that much more complex.

As with CXF, the easiest way to get it up and running would be to grab a distribution that has it ready to go. Again, I’d recommend either the Talend Integration Factory or Talend ESB. Both have Apache Camel, Apache CXF, and Apache ActiveMQ pre-configured and ready to run. The more popular Camel components are pre-installed with their dependencies, making it fairly easy to get started.

That said, setting up your own instance is really not that hard.

  1. Follow steps 1-3 from my post about setting up Apache CXF in OSGi. This gets Karaf setup and ready.
  2. From the Karaf command line, run:
    features:addurl mvn:org.apache.activemq/activemq-karaf/5.5.0/xml/features
    features:addurl mvn:org.apache.cxf.karaf/apache-cxf/2.5.0/xml/features
    features:addurl mvn:org.apache.camel.karaf/apache-camel/2.8.2/xml/features
    

    That will add the ActiveMQ, CXF, and Camel features into Karaf’s feature resolver.

  3. If you plan on using JMS, it’s recommended to install the ActiveMQ features first. From the Karaf command line, run:
    features:install activemq
    features:install activemq-spring    (if you use spring)
    
  4. Next up would be CXF if you plan on using the CXF components. Again, from the Karaf command line, run:
    features:install cxf
    
  5. Finally, install the Camel components you are interested in using:
    features:install camel-core
    features:install camel-spring
    features:install camel-jms
    features:install camel-cxf
    etc...
    

    You can get a list of the Camel features by doing a “features:list | grep camel” from the Karaf command line.

That’s pretty much all there is to it. It’s not hard. 🙂 It’s now all set up! To try it out, I’d once again suggest grabbing the samples package from Talend Integration Factory. There are several examples in there showing how to do various Camel things, all running in OSGi.
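If you’d rather sanity-check the install yourself first, a trivial Blueprint route dropped into Karaf’s deploy/ directory works well (this assumes the camel-blueprint feature is installed; the route itself is just an illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route>
      <!-- fire every 5 seconds and write a line to the Karaf log -->
      <from uri="timer:tick?period=5000"/>
      <setBody><constant>Hello from Camel in OSGi</constant></setBody>
      <to uri="log:demo"/>
    </route>
  </camelContext>
</blueprint>
```

If the features installed cleanly, you should see the message showing up in the log (log:display from the Karaf shell).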