I’ve spent quite a bit of time over the last several weeks performance tuning and profiling the Talend ESB, and decided to share some of what I’ve learned.

How this all started: Asankha Perera contacted me in early July while preparing for round 6 of the ESB Performance benchmarks, because they had run into a security-related issue: Talend’s ESB was now rejecting the WS-Security messages due to the nonce cache that was added to prevent replay attacks. Since the benchmark is essentially a replay attack (it sends the same message over and over again), Talend’s ESB was throwing an exception and blocking the benchmark from running. This is on top of the strict timestamp checking that Talend’s ESB has always done (which caused them issues last time as well, since they had to regenerate the secure messages). From what I can tell, Talend’s ESB was the only one for which they needed to turn OFF security checks like this, which likely means it’s the most secure of the ESBs for WS-Security “out of the box”. Not too surprising, given Talend’s excellent security folks. Anyway, as part of the report, he sent along some preliminary numbers for the other tests. My initial look showed the numbers were actually lower than round 5, which concerned me, so I dug in.

The initial investigation showed a configuration change was needed. The configs in the bitbucket repo were really for Talend ESB 5.0 and needed some minor updates for 5.1 to bring the thread pool back up from 25 threads to the 300 threads the test calls for. That pretty much got the test results back to round 5 levels, so I really could have left it at that, but I decided to take a little more time and play.

Some things I discovered:

  1. System.getProperty(..) is synchronized – DON’T put it on a critical path. My very first “kill -3” (thread dump) while processing the various small messages showed over 180 of the 300 threads blocked there. Some more investigation found two major causes in the Talend ESB:
    • Bug/regression in Woodstox – Woodstox was calling it for every Reader and Writer it created. That’s 4 times per request in the proxy cases. I logged a bug with them (now resolved in their latest release) and downgraded Woodstox for the time being.
    • DocumentImpl constructor – for some reason, Sun/Oracle added a call to System.getProperty to the DocumentImpl constructor of the DOM implementation built into the JDK. Since CXF caches SOAP headers in a DOM, that added 2 more calls per request. Grabbing the latest xercesImpl and forcing it to be used solved this issue.

    Getting those fixed definitely reduced a bit of the contention.

  2. The next choke point I found was in CXF’s handling of the thread default Bus. We were getting and setting the thread default bus several times per request, but each of those calls was in a synchronized block. Re-engineering how that is handled eliminated the contention.
  3. Next up were the JMX metrics in Camel. Updating the JMX stats in Camel happens in a big synchronized block (which then calls a bunch of other synchronized methods on other objects). This is on my TODO list to revisit, but for the purposes of this test I just turned the JMX metrics off. Most likely, using atomic values would allow removing the synchronized block, but I’m not sure yet.
  4. The final major “synchronized” block I hit is the HTTP keep-alive cache in the JDK. Nothing I can do about that one short term. Longer term, I’ve started working on a new Apache HTTP Components based HTTP transport (with the help of Oleg Kalnichevski) that may help, but it’s not something for right now.
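The System.getProperty issue in item 1 generalizes to a simple rule: if a property is effectively constant for the life of the JVM, read it once at class-initialization time instead of on every request. A minimal sketch of the idea (the property name here is just illustrative, not one of the actual culprits):

```java
// Sketch: keep System.getProperty off the hot path. On the JDKs of this
// era, the call goes through java.util.Properties (a synchronized
// Hashtable), so hundreds of threads hitting it will serialize on one lock.
public final class CachedProps {
    // Read once at class init; the property name is illustrative only.
    private static final String LINE_SEP = System.getProperty("line.separator");

    private CachedProps() {}

    // Hot path: a plain field read, no lock contention.
    public static String lineSeparator() {
        return LINE_SEP;
    }

    public static void main(String[] args) {
        // The cached value matches a live lookup.
        System.out.println(CachedProps.lineSeparator()
                .equals(System.getProperty("line.separator")));
    }
}
```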

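For item 3, the atomic-values idea I mention could look roughly like this. To be clear, this is a sketch of the general technique, not Camel’s actual performance-counter code, and the counter names are hypothetical:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: lock-free per-route statistics using atomics instead of one big
// synchronized block. Writers never block each other; derived values like
// the mean are computed on read. Counter names are hypothetical.
public class RouteStats {
    private final AtomicLong exchangesCompleted = new AtomicLong();
    private final AtomicLong totalProcessingNanos = new AtomicLong();

    // Called once per completed exchange; no lock needed.
    public void completed(long processingNanos) {
        exchangesCompleted.incrementAndGet();
        totalProcessingNanos.addAndGet(processingNanos);
    }

    public long count() {
        return exchangesCompleted.get();
    }

    // Mean computed at read time, so the update path stays lock-free.
    public long meanNanos() {
        long n = exchangesCompleted.get();
        return n == 0 ? 0 : totalProcessingNanos.get() / n;
    }

    public static void main(String[] args) {
        RouteStats stats = new RouteStats();
        stats.completed(100);
        stats.completed(300);
        System.out.println(stats.count() + " " + stats.meanNanos());
    }
}
```

The trade-off is that the two counters are updated independently rather than atomically as a pair, which is usually acceptable for monitoring statistics.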
The above updates helped a little, but not much, maybe a couple of percent. The main reason is that with 300 threads, even if 150 of them are blocked, there are still plenty of threads left to do work. Removing the blocks just saved some time on context switches, cache effects, and the like.

That then got me looking into other things, primarily Camel. I quickly discovered that the XML handling in Camel is pretty poor. Eventually I’ll need to really dig into that, but for the short term I was able to bypass much of it. The first thing I saw was that in SOME cases, requests coming from CXF (which CXF passed to Camel as a streaming StaxSource) were being parsed into a DOM. What’s worse, due to the poor XML handling in Camel, the document and parser factories were being re-created to do so. That involves the SPI machinery in the JDK, which means a System property lookup (see above) plus a classpath search for /META-INF/services entries, and that search is fairly slow in OSGi. Adding some extra type converters to Camel avoided all of that and provided a big boost.
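Most of the cost here is the JAXP factory lookup, not the parse itself: every DocumentBuilderFactory.newInstance() call re-runs the SPI search described above. A hedged sketch of the caching idea, not Camel’s actual converter code:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

// Sketch: pay the JAXP SPI lookup cost once per JVM, not once per request.
public final class ParserCache {
    // newInstance() triggers the system-property check and the
    // META-INF/services classpath scan; do it a single time.
    private static final DocumentBuilderFactory FACTORY = createFactory();

    private static DocumentBuilderFactory createFactory() {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        return f;
    }

    // DocumentBuilder itself is not thread-safe, so give each thread its own.
    private static final ThreadLocal<DocumentBuilder> BUILDER =
        ThreadLocal.withInitial(() -> {
            try {
                return FACTORY.newDocumentBuilder();
            } catch (ParserConfigurationException e) {
                throw new IllegalStateException(e);
            }
        });

    private ParserCache() {}

    public static DocumentBuilder builder() {
        return BUILDER.get();
    }

    public static void main(String[] args) {
        // Repeated calls on the same thread reuse the cached builder.
        System.out.println(builder() == builder());
    }
}
```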

I then started looking at some of the specific tests. For the content based routing, the original configs had used XPath. However, the XPath support in Camel is again plagued by the poor XML handling, which forced a DOM conversion once more. I ended up changing the test to use XQuery, which performed much better.
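In Camel’s Spring DSL, that routing change looks roughly like this. The endpoint URIs and the XQuery expression are placeholders for illustration, not the benchmark’s actual config:

```xml
<!-- Before: an XPath-based choice, which forced a DOM conversion, e.g.: -->
<!--   <xpath>//order/symbol = 'IBM'</xpath>                             -->
<!-- After: XQuery, which avoided the DOM round-trip.                    -->
<route>
  <from uri="cxf:bean:proxyEndpoint"/>
  <choice>
    <when>
      <xquery>/order[symbol = 'IBM']</xquery>
      <to uri="cxf:bean:backendA"/>
    </when>
    <otherwise>
      <to uri="cxf:bean:backendB"/>
    </otherwise>
  </choice>
</route>
```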

The other thing I ended up doing was switching from PAYLOAD mode to MESSAGE mode on the CXF component for the tests that could be handled that way. This is a huge benefit: for the direct proxy and transport header cases, it allowed bypassing all XML parsing completely.
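In the camel-cxf component that switch is just the dataFormat option on the endpoint; roughly (the bean names are placeholders):

```xml
<!-- MESSAGE mode streams the raw SOAP bytes straight through, so pure
     proxy routes never touch an XML parser; PAYLOAD mode would parse the
     SOAP body first. Bean names here are placeholders. -->
<route>
  <from uri="cxf:bean:proxyEndpoint?dataFormat=MESSAGE"/>
  <to uri="cxf:bean:backendService?dataFormat=MESSAGE"/>
</route>
```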

Finally, I did some testing using Apache Tomcat for the backend service instead of the toolbox thing the esbperformance.org folks originally specified. For Talend, using Tomcat helped significantly. We had a bunch of timeouts and other errors with the toolbox service, and performance was a lot better with Tomcat. Since using Tomcat is likely more “realistic”, I argued to change the tests to use it (and provided some optimized Tomcat configs). This likely helped all the ESBs’ results. You’re welcome!
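The tuning involved is standard server.xml connector work. A representative example of the sort of settings that matter for a high-concurrency, keep-alive-heavy benchmark (these values are illustrative, not the configs actually submitted):

```xml
<!-- Representative Tomcat HTTP connector tuning for a benchmark backend:
     NIO connector, enough threads for the load generator's concurrency,
     and generous keep-alive so connections are reused. Values are
     illustrative, not the submitted config. -->
<Connector port="9000"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="300"
           acceptCount="500"
           maxKeepAliveRequests="-1"
           keepAliveTimeout="60000"
           connectionTimeout="20000"/>
```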

For the most part, things ended up pretty good. If you look at the results graph at the bottom of http://esbperformance.org/display/comparison/ESB+Performance, Talend’s ESB came out fairly well. One important measurement is the number of failures/timeouts: 0. Talend ESB and the UltraESBs were the only ones to accomplish that, which alone is pretty cool. The performance also ended up fairly good. I certainly have a lot more things to look at going forward, but for just a small amount of work, the results ended up quite good.
