ServiceMix Committer

I am currently in the middle of my Xmas vacation and I was just about to download a movie for tonight.
While downloading, I checked my emails, which I haven't really checked since Christmas Eve.

An invitation to join the Apache ServiceMix project as a committer was waiting for me on the top of my Inbox.

Of course I accepted the invitation and I immediately started blogging about it... 

That's a great ending for 2010, but it's also a serious indication that I am going to need a time transplant for 2011!


Karaf's JAAS modules in action

Karaf 2.1.0 has just been released! Among other new features, it includes a major revamp of the JAAS module support:
  1. Encryption support
  2. Database Login Module
  3. Role Policies
This post will use all 3 features, in order to create a secured Wicket application on Karaf, using Karaf's JAAS modules and Wicket's auth-roles module.

The application that we are going to build is a simple Wicket application. It will be deployed on Karaf and the user credentials will be stored in a MySQL database. For encrypting the passwords we will use Karaf's Jasypt encryption service implementation, using the MD5 algorithm with hexadecimal output.

Step 1: Creating the database
The database that we are going to create will be the simplest possible. We need a table that will hold the username and password of each user. Each user may have one or more roles, so we will need a second table to hold the roles of the users.

We are going to create a user named "iocanel", that will have the roles "manager" and "admin" and password "koala" (stored in MD5 with hex output).
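A schema along these lines would do; the table and column names below are assumptions for illustration, not taken from the original post (MySQL's MD5() function conveniently produces the hex output Karaf expects):

```sql
CREATE TABLE users (
  username VARCHAR(255) NOT NULL PRIMARY KEY,
  password VARCHAR(255) NOT NULL
);

CREATE TABLE roles (
  username VARCHAR(255) NOT NULL,
  role     VARCHAR(255) NOT NULL,
  PRIMARY KEY (username, role)
);

-- password "koala" stored as MD5 in hexadecimal format
INSERT INTO users (username, password) VALUES ('iocanel', MD5('koala'));
INSERT INTO roles (username, role) VALUES ('iocanel', 'admin');
INSERT INTO roles (username, role) VALUES ('iocanel', 'manager');
```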

Note: for cases where a schema for user credentials already exists, Karaf's database login module offers customization by allowing the user to provide custom queries for password and role retrieval.

Step 2: Creating a data source
In order to create a data source we will use Blueprint to expose a DataSource as an OSGi service.
Before we do that we need to install the MySQL bundle and its prerequisite.
They can be easily installed from the Karaf shell.

osgi:install wrap:mvn:javax.xml.stream/stax-api/1.0
osgi:install wrap:mvn:mysql/mysql-connector-java/5.1.13 

Once all prerequisites are met, the data source can be created by dropping the following XML into Karaf's deploy folder or by adding it under the OSGI-INF/blueprint folder of our bundle.
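A minimal sketch of such a blueprint descriptor; the database name, credentials and the JNDI service property value are assumptions:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <bean id="mysqlDataSource" class="com.mysql.jdbc.jdbc2.optional.MysqlDataSource">
    <property name="serverName" value="localhost"/>
    <property name="databaseName" value="users"/>
    <property name="user" value="root"/>
    <property name="password" value="root"/>
  </bean>

  <!-- Expose the data source as an OSGi service so the JAAS realm can find it -->
  <service ref="mysqlDataSource" interface="javax.sql.DataSource">
    <service-properties>
      <entry key="osgi.jndi.service.name" value="jdbc/users"/>
    </service-properties>
  </service>

</blueprint>
```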

Step 3: Creating a JAAS realm
In the same manner, the new JAAS realm can be created by dropping the blueprint XML into the deploy folder or by adding it under the OSGI-INF/blueprint folder of our bundle.

The new realm will make use of Karaf's JDBCLoginModule, and will also use MD5 encryption with hexadecimal output. Finally, it will be given a role policy that will add the "ROLE_" prefix to all role principals. This way our application can identify the role principals without depending on the Karaf implementation.

If this isn't clear: JAAS specifies the Principal interface, and implementations provide both user and role principals as implementing classes, making it impossible to distinguish between the two without either depending on the JAAS implementation or agreeing on a common convention. This is what role policies are about.
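The realm described above could look roughly like the sketch below. The realm name, queries and table names are assumptions, and the option names are as I recall them from Karaf 2.1, so check the Karaf documentation before using this verbatim:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0">

  <jaas:config name="myrealm" rank="1">
    <jaas:module className="org.apache.karaf.jaas.modules.jdbc.JDBCLoginModule"
                 flags="required">
      datasource = osgi:javax.sql.DataSource/(osgi.jndi.service.name=jdbc/users)
      query.password = SELECT password FROM users WHERE username=?
      query.role = SELECT role FROM roles WHERE username=?
      encryption.algorithm = MD5
      encryption.encoding = hexadecimal
      role.policy = prefix
      role.discriminator = ROLE_
    </jaas:module>
  </jaas:config>

</blueprint>
```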

Step 4: Creating the wicket application
Everything is set and all we need is to create the wicket application that will make use of our new JAAS realm in order to authenticate.

The first step is to create a Wicket Authenticated Session:

Now we need to tell our application to create such sessions and also where the location of our sign in page will be. For this purpose we will extend Wicket's AuthenticatedWebApplication class:
Now that everything is set up, we can restrict access to the HomePage to "admins" and "managers" by making use of Wicket's auth-roles module.
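Putting the pieces together, here is a minimal sketch of the authenticated session. The class and method names follow Wicket 1.4's auth-roles module and the standard JAAS API, but the realm name "myrealm" and the exact role names are assumptions:

```java
import java.security.Principal;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import org.apache.wicket.Request;
import org.apache.wicket.authentication.AuthenticatedWebSession;
import org.apache.wicket.authorization.strategies.role.Roles;

public class SecureSession extends AuthenticatedWebSession {

    private final Roles roles = new Roles();

    public SecureSession(Request request) {
        super(request);
    }

    @Override
    public boolean authenticate(final String username, final String password) {
        try {
            // Authenticate against the JAAS realm defined in the blueprint above.
            LoginContext ctx = new LoginContext("myrealm", new CallbackHandler() {
                public void handle(Callback[] callbacks) throws java.io.IOException {
                    for (Callback cb : callbacks) {
                        if (cb instanceof NameCallback) {
                            ((NameCallback) cb).setName(username);
                        } else if (cb instanceof PasswordCallback) {
                            ((PasswordCallback) cb).setPassword(password.toCharArray());
                        }
                    }
                }
            });
            ctx.login();
            // Role principals carry the "ROLE_" prefix thanks to the role policy.
            for (Principal p : ctx.getSubject().getPrincipals()) {
                if (p.getName().startsWith("ROLE_")) {
                    roles.add(p.getName());
                }
            }
            return true;
        } catch (LoginException e) {
            return false;
        }
    }

    @Override
    public Roles getRoles() {
        return isSignedIn() ? roles : null;
    }
}
```

The application class would extend AuthenticatedWebApplication and return SecureSession from getWebSessionClass(); the home page can then be restricted with @AuthorizeInstantiation({"ROLE_admin", "ROLE_manager"}).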

Final Words
I hope you found it useful. The source of this example will be added to this post soon, so stay tuned.

JavaOne and Oracle Develop 2010

I just returned home from JavaOne and Oracle Develop 2010 (which was also my first JavaOne) and I thought it would be a good idea to take 5 minutes and share the experience.

The city of San Francisco was awesome and I couldn't find any other place in the world better suited for the job. The weather, the size and the facilities were exactly what such an event requires. The organization was good enough and there were tons of sessions that I found exciting.

Don't let it cloud your judgment...
This is an alteration of a famous quote from "The Godfather", but it's most fitting for this year's JavaOne event. I found the excessive use of the buzzword "cloud" not only annoying but also misleading. There were tons of sessions that used this buzzword to draw attention, even though they were not that related. The only thing I didn't see was:
"Taking Sushi to the Sky: Secrets for successful cooking on the premises and in the cloud".
 Note: The name above resembles actual session titles. I am not implying anything negative about them.

And the winner is... Hadoop
For me, by far the most interesting thing I saw at JavaOne was Apache Hadoop. To put it in a sentence:

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing
I had the luck to join two great sessions about Hadoop.
  1. Extracting Real Value from Your Data with Apache Hadoop. (HOL)
  2. Hadoop vs. Relational Database: Shout-out Between a Java Guy and a Database Guy.  (BOF)
The second one will definitely be published so don't miss it.

I also liked... XSTM
Another pretty interesting session I had the chance to watch was:
  • Simpler and Faster Cloud Applications Using Distributed Transactional Memory.
This was a session about the open source project XSTM, which I found so interesting that, if I could find the time, I would definitely love to work with it.

Final Thoughts
I would definitely love to join JavaOne next year too. Here are two things that I will do next year and strongly recommend doing at such events:
  1. Don't go with the buzz.  See the detailed description beyond the buzzwords.
  2. Don't spend time on things you already know. A one-hour session can be a good introduction to an unfamiliar area, but I can't see how it can add much in an area you are already familiar with.


Apache Karaf Committer

1 week after my vacation and still suffering from "post vacation depression", this Monday seemed like a nightmare.

I went to the office feeling the urge to go get myself a huge carafe of coffee (cups have long been proven inefficient), when an incoming email drew my attention.

It was an invitation to join the Apache Karaf team as a committer.

This is the first open source project I have joined and I'm very thrilled (if not overreacting) about it, which is why I decided to blog about it.

I am looking forward to working even more closely with this team.

Well, it seems that Mondays aren't that crappy after all!


Wicket, Spring 3, JPA2 & Hibernate OSGi Application on Apache Karaf

EDIT: Hibernate is now OSGi ready, so most of this material is now completely outdated.

The full source for this post has been moved to github.

Recently I attempted to modify an existing crud web application for OSGi deployment. During the process I encountered a lot of issues, such as:
  • Lack of OSGi bundles.
  • Troubles wiring the tiers of the application together.
  • Issues on the OSGi container configuration.
  • Lack of detailed examples on the web.
So, I decided to create such a guide & provide the full source for a working example (a very simple person crud application).

The first part of this guide is Creating custom Hibernate 3.5 OSGi bundles. This part provides an example project (which includes the bundles' source) that describes how to use the custom Hibernate bundles in order to build a Wicket, Spring 3, Hibernate 3.5 / JPA 2 application and deploy it to Karaf.

Among others it describes:
  • How to wire database and web tier using the OSGi blueprint.
  • How to deploy web applications to Karaf 1.6.0.
  • A small wicket crud application.
Note: This demo application does not make use of the OSGi Enterprise spec, since it is an OSGi-fication of an existing application. The use of the spec will be a subject for future posts.


Environment Preparation
The OSGi run-time that will be used in this post is Felix/Karaf version 1.6.0.
This section describes the required configuration for deploying web applications.

Once Karaf is downloaded and extracted, it can be started by running bin/karaf
from inside the Karaf root folder.

Now, we are going to install karaf webconsole and war deployer that will allow us to deploy web applications to karaf.
features:install webconsole
features:install war

Note: In the background karaf fetches all the required bundles from maven repositories. You are going to need internet access for this. Moreover, if you are behind a proxy you will need to set up your jvm net.properties accordingly. Having the proxy configured in maven settings.xml is not enough.

Custom Bundles
Most of the bundles required for this project are available either in public maven repositories or inside the Spring Enterprise Bundle Repository. However, Hibernate 3.5.x, which is one of the key dependencies of this project, is not available as an OSGi bundle (note: earlier versions of Hibernate can be found in Spring EBR). More details on OSGi-fying Hibernate 3.5.x can be found in the previous part of the guide, "Creating custom Hibernate 3.5 OSGi bundles".

Creating the application itself
The actual demo application will be the simplest possible Wicket crud for persons (a killer application that stores/deletes/updates a person's first name and last name in the database).

The create schema script of such an application in MySQL would look like this:
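A sketch of the schema; the column names follow the entity mapping shown further below (table Person, columns ID, FIRST_NAME, LAST_NAME), while the types are assumptions:

```sql
CREATE TABLE Person (
  ID         INT NOT NULL AUTO_INCREMENT,
  FIRST_NAME VARCHAR(255),
  LAST_NAME  VARCHAR(255),
  PRIMARY KEY (ID)
);
```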
Database Tier
For the database tier we are going to create a simple bundle that will contain the entity, the dao interface and the dao implementation. The bundle will contain the necessary persistence descriptor for JPA 2.0 with hibernate as persistence provider. Finally it will use spring to create the data source, entity manager factory & JPA transaction manager. This bundle will export the dao as a service to the OSGi Registry using Spring dynamic modules.

The Person entity for the example can look like:
package net.iocanel.database.entities;

import java.io.Serializable;
import javax.persistence.Basic;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.NamedQueries;
import javax.persistence.NamedQuery;
import javax.persistence.Table;

/**
 * @author iocanel
 */
@Entity
@Table(name = "Person")
@NamedQueries({
 @NamedQuery(name = "Person.findAll", query = "SELECT p FROM Person p"),
 @NamedQuery(name = "Person.findById", query = "SELECT p FROM Person p WHERE p.id = :id"),
 @NamedQuery(name = "Person.findByFirstName", query = "SELECT p FROM Person p WHERE p.firstName = :firstName"),
 @NamedQuery(name = "Person.findByLastName", query = "SELECT p FROM Person p WHERE p.lastName = :lastName")})
public class Person implements Serializable {

 private static final long serialVersionUID = 1L;
 @Id
 @GeneratedValue(strategy = GenerationType.AUTO)
 @Basic(optional = false)
 @Column(name = "ID")
 private Integer id;
 @Column(name = "FIRST_NAME")
 private String firstName;
 @Column(name = "LAST_NAME")
 private String lastName;

 public Person() {
 }

 public Person(Integer id) {
  this.id = id;
 }

 public Integer getId() {
  return id;
 }

 public void setId(Integer id) {
  this.id = id;
 }

 public String getFirstName() {
  return firstName;
 }

 public void setFirstName(String firstName) {
  this.firstName = firstName;
 }

 public String getLastName() {
  return lastName;
 }

 public void setLastName(String lastName) {
  this.lastName = lastName;
 }

 @Override
 public int hashCode() {
  int hash = 0;
  hash += (id != null ? id.hashCode() : 0);
  return hash;
 }

 @Override
 public boolean equals(Object object) {
  // TODO: Warning - this method won't work in the case the id fields are not set
  if (!(object instanceof Person)) {
   return false;
  }
  Person other = (Person) object;
  if ((this.id == null && other.id != null) || (this.id != null && !this.id.equals(other.id))) {
   return false;
  }
  return true;
 }

 @Override
 public String toString() {
  return "net.iocanel.database.entities.Person[id=" + id + "]";
 }
}
For this entity we will create a dao interface, through which the rest of the bundles in the container can track/lookup the dao service (the actual implementation).

We want the dao service to provide simple crud operations such as, create, delete, find & findAll, so the dao interface can be something like:
package net.iocanel.database.dao;

import java.util.List;
import net.iocanel.database.entities.Person;

/**
 * @author iocanel
 */
public interface PersonDAO {

 public void create(Person person) throws Exception;
 public void edit(Person person) throws Exception;
 public void destroy(Integer id) throws Exception;
 public Person findPerson(Integer id);
 public List findAllPersons();
}

The actual JPA implementation of the dao will obtain the EntityManager via Spring (it will be injected) and will use Spring's Transactional annotation for transaction demarcation:
package net.iocanel.database.dao;

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.Query;
import javax.persistence.PersistenceContext;
import net.iocanel.database.entities.Person;
import org.springframework.transaction.annotation.Transactional;

/**
 * @author iocanel
 */
public class PersonJpaDAO implements PersonDAO {

 @PersistenceContext
 private EntityManager entityManager;

 @Transactional
 public void create(Person person) throws Exception {
  entityManager.persist(person);
 }

 @Transactional
 public void edit(Person person) throws Exception {
  entityManager.merge(person);
 }

 @Transactional
 public void destroy(Integer id) throws Exception {
  Person person = entityManager.find(Person.class, id);
  if (person != null) {
   entityManager.remove(person);
  }
 }

 public List findPersonEntities(int maxResults, int firstResult) {
  return findPersonEntities(false, maxResults, firstResult);
 }

 private List findPersonEntities(boolean all, int maxResults, int firstResult) {
  Query q = entityManager.createQuery("select object(o) from Person as o");
  if (!all) {
   q.setMaxResults(maxResults);
   q.setFirstResult(firstResult);
  }
  return q.getResultList();
 }

 public Person findPerson(Integer id) {
  return entityManager.find(Person.class, id);
 }

 public int getPersonCount() {
  Query q = entityManager.createQuery("select count(o) from Person as o");
  return ((Long) q.getSingleResult()).intValue();
 }

 public List findAllPersons() {
  Query q = entityManager.createNamedQuery("Person.findAll");
  return q.getResultList();
 }
}
For the EntityManager injection and Spring transactions, we need a spring context. Since we are going to use Spring Dynamic Modules, the spring context needs to be placed under META-INF/spring/.
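A sketch of such a context; the bean names, file name and connection details are assumptions:

```xml
<!-- META-INF/spring/context.xml -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:tx="http://www.springframework.org/schema/tx">

  <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost:3306/persons"/>
    <property name="username" value="root"/>
    <property name="password" value="root"/>
  </bean>

  <bean id="entityManagerFactory"
        class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="persistenceUnitName" value="persons"/>
  </bean>

  <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
  </bean>

  <!-- Enables the @Transactional annotation used by the dao -->
  <tx:annotation-driven transaction-manager="transactionManager"/>

  <bean id="personDAO" class="net.iocanel.database.dao.PersonJpaDAO"/>

</beans>
```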







For the creation of the EntityManagerFactory, Spring will need a persistence.xml file located under META-INF:
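A minimal sketch, assuming a resource-local unit named "persons" (the unit name and hibernate properties are assumptions):

```xml
<!-- META-INF/persistence.xml -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="persons" transaction-type="RESOURCE_LOCAL">
    <!-- Hibernate 3.5.x as the JPA 2.0 persistence provider -->
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <class>net.iocanel.database.entities.Person</class>
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect"/>
      <property name="hibernate.hbm2ddl.auto" value="validate"/>
    </properties>
  </persistence-unit>
</persistence>
```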


So far in the database tier we did what we would do in a typical application. Now we will add OSGi flavor to our module.

Creating the DAO OSGi Service
As mentioned above for the creation of the dao service we will use spring dynamic modules. So all we need is to add a descriptor under META-INF/spring that will instruct Spring's OSGi exporter to export bean personDAO as OSGi service:
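A minimal sketch of such a descriptor; the file name is an assumption, while the bean id and interface match the dao above:

```xml
<!-- META-INF/spring/osgi-context.xml -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi">

  <!-- Export the personDAO bean as an OSGi service under the PersonDAO interface -->
  <osgi:service ref="personDAO" interface="net.iocanel.database.dao.PersonDAO"/>

</beans>
```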


Finally, we need to perform a small hack. In the previous part of this guide, we created an OSGi fragment for Hibernate Validator. This fragment is attached to the validation api host, so that the validation api can find the classes of Hibernate Validator. However, we still need to instruct the validation api to look for the Hibernate Validator classes. In a non-OSGi world the validation api would look up the file META-INF/services/javax.validation.spi.ValidationProvider on the classpath and read the actual validation provider class name from it.

Passing the Validation Provider to Validation API
In the OSGi world the validation api will delegate that call to the calling bundle (in our case the database tier bundle), so we are going to make sure that it finds it. How are we going to do that? We are going to copy the file from Hibernate Validator and add it to our bundle. This approach might not seem that elegant, however it has two great advantages:
  • It's simple
  • It works
If you are aware of a more elegant alternative, feel free to share it.

The final obstacle is creating the bundle itself. The bundle will be created using the maven-bundle-plugin. As maven dependencies it will contain only what it requires for the compile scope, and its run-time dependencies (hibernate, spring, the jpa spec, cglib etc.) will be declared as OSGi Import-Packages.



Presentation/Web Tier
For the presentation tier we are going to build a Wicket OSGi application. This application will be integrated with Spring using the @SpringBean annotation (more details on this on the Wicket/Spring Wiki).

Since we are interested in taking advantage of Spring Dynamic Modules, we are going to instruct the application to load its context using OsgiBundleXmlWebApplicationContext, inside the web.xml.


So the full web deployment descriptor for Wicket/Spring/OSGi could look like this (yes, I know I am starting to sound like Bob Ross):
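A sketch of the deployment descriptor; the filter name and context file location are assumptions, while the context class and application factory are the Spring DM and wicket-spring classes mentioned in the text:

```xml
<web-app>

  <context-param>
    <param-name>contextClass</param-name>
    <param-value>org.springframework.osgi.web.context.support.OsgiBundleXmlWebApplicationContext</param-value>
  </context-param>

  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>

  <filter>
    <filter-name>wicket.web-tier</filter-name>
    <filter-class>org.apache.wicket.protocol.http.WicketFilter</filter-class>
    <init-param>
      <!-- Let Wicket pick up the application object from the Spring context -->
      <param-name>applicationFactoryClassName</param-name>
      <param-value>org.apache.wicket.spring.SpringWebApplicationFactory</param-value>
    </init-param>
  </filter>

  <filter-mapping>
    <filter-name>wicket.web-tier</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>

</web-app>
```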





The Spring context file (/WEB-INF/applicationContext.xml) that will be loaded needs to define two simple things:
  • The Wicket Application Object.
  • The PersonDAO OSGi service.

The PersonDAO service will be looked up using Spring Dynamic Modules. Inside the wicket application the PersonDAO service will be injected as if it was a normal spring bean using the @SpringBean annotation.
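A sketch of that context; the application class name net.iocanel.web.WicketApplication is hypothetical, while the osgi:reference interface matches the dao exported by the database tier:

```xml
<!-- /WEB-INF/applicationContext.xml -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi">

  <!-- The Wicket application object -->
  <bean id="wicketApplication" class="net.iocanel.web.WicketApplication"/>

  <!-- Looks up the PersonDAO OSGi service exported by the database tier -->
  <osgi:reference id="personDAO" interface="net.iocanel.database.dao.PersonDAO"/>

</beans>
```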


We are almost there. All that's left is the coding of the actual crud. I will not go into much detail, since it's beyond the scope of this blog post. However, I am going to list the key points of the crud.

The C.R.U.D.
For the CRUD part we will create a single ajax page that will display:
  • A list of all persons in the database.
  • A small form to insert/edit person details.
  • Buttons for each record to edit/remove persons in the database.
The list component that will be used is PropertyListView, and the model attached to the list will be a LoadableDetachableModel that loads all persons from the database. Finally, the person details form will consist of two text fields: First Name & Last Name.

The full source for this example (including the custom bundles) can be found at my github repository. Once you unpack it, you can mvn clean install and it will package the project bundles, the custom bundles and all the required bundles under the target/wicket-osgi.dir/deploy folder. Just copy the contents of this folder to $KARAF_HOME/deploy and you are ready to launch the application at http://localhost:8181/web-tier/.

Final thoughts
I hope you find this useful.
Once my schedule allows, I will blog on how to add JTA transactions to the example above, so stay tuned (the Hibernate bundle is JTA ready, however we need a JTA transaction manager bundle).
Feel free to send comments or suggestions.


Creating custom Hibernate OSGi bundles for JPA 2.0

Edit: I am more than happy that this post is now completely obsolete. Hibernate is now OSGi ready, Yay!

I was trying to migrate an application that uses JPA 2.0 / Hibernate to OSGi. I found out that Hibernate does not provide OSGi bundles. There are some Hibernate bundles provided in the Spring Enterprise Bundle Repository, however there are none available for Hibernate 3.5.x, which implements JPA 2.0. So I decided to create them myself and share the experience with you.

This post describes how to OSGi-fy Hibernate 3.5.2-Final with EhCache and JTA transaction support. The bundles that were created were tested on Felix Karaf, but they will probably work in other containers too.

A typical JPA 2.0 application with Hibernate as persistence provider will probably require, among others, the following dependencies:
  • hibernate-core
  • hibernate-annotations
  • hibernate-entitymanager
  • hibernate-validator
  • ehcache
Unfortunately, at the time this post was written none of the above were available as OSGi bundles. To make OSGi bundles for the above, one needs to overcome the following problems:
  • Cyclic dependencies inside Hibernate artifacts.
  • 3rd party dependencies (e.g. Weblogic/Websphere Transaction Manager).
  • Common api / impl issues for validation api and hibernate cache.
The last bullet, which may not be that clear, points to a problem where an api loads classes from the implementation using Class.forName() or similar approaches. In the OSGi world that means that the api must import packages from the implementation.

Hibernate cyclic dependencies
The creation of an OSGi bundle for each hibernate artifact is possible. However, when the bundles get deployed to an OSGi container, they will fail to resolve due to cyclic package imports.

The easiest way to overcome this issue is to merge hibernate core artifacts into one bundle. Below I am going to provide an example of how to use maven bundle plug-in to merge hibernate-core, hibernate-annotations & hibernate-entitymanager into one bundle.

A common way to use the maven-bundle-plugin to merge jars is to instruct it to embed the dependencies of a project into a bundle. However, this is not very handy in cases where you need to add custom code to the final bundle. In that case you can use the maven dependency plug-in to unpack the dependencies, the bundle plug-in to create the manifest and the jar plug-in to use the generated manifest in the package phase.


Hibernate & 3rd Party dependencies
Hibernate has a lot of 3rd party dependencies. Some of them are available as OSGi bundles, some need to be created and some can be excluded.

Examples of 3rd party dependencies that are available as OSGi bundle in the Spring Enterprise Repository are:
  • antlr
  • dom4j
  • cglib
Dependencies that are not available are:
  • jacc (javax.security.jacc)
Dependencies that can be excluded vary depending on the needs. In my case I could exclude the Weblogic/Websphere transaction managers, since I didn't intend to use them. To exclude a dependency, just add the packages to be excluded in the import packages section using the ! operator (e.g. !com.ibm.*,*).

Hibernate validator and Validation API
As mentioned above, the validation api provides a factory that builds the validator by loading the implementing class using Class.forName(). This issue can be solved in two ways:
  • Use dynamic imports in the API bundle to import the Implementation at runtime.
  • Make the implementation an OSGi fragment that will get attached to the API.
In this example the validation api is the one provided by the Spring Enterprise Bundle Repository, so the second approach was easier to apply.

More details on this issue can be found at this excellent blog post:
Having “fun” with JSR-303 Beans Validation and OSGi + Spring DM

Hibernate & EhCache
More or less the same applies to EhCache. Hibernate provides an interface which is implemented by EhCache, and loads that implementation at runtime. We will do exactly the same thing we did for hibernate validator: we will convert the ehcache jar to a fragment bundle so that it gets attached to the merged hibernate bundle.

Hibernate & JTA Transactions
I kept for last the most interesting part. This part describes what needs to be added inside the bundle so that it can support JTA transactions.

For JTA transactions Hibernate needs a reference to the transaction manager. That reference is returned by the TransactionManagerLookup class specified in the persistence.xml. In a typical JEE container the lookup class just performs a JNDI lookup to get the TransactionManager. In an OSGi container the transaction manager is very likely to be exported as an OSGi service.

This section describes how to build an OSGi based TransactionManagerLookup class. The solution presented is very simple and uses only the OSGi core framework (no blueprint implementation required).

We will add to the hibernate bundle 3 new classes:
  • TransactionManagerLocator (Service Locator).
  • OsgiTransactionManagerLookup (Lookup implementation).
  • Activator (Hibernate Bundle Activator).
TransactionManagerLocator is a ServiceLocator that uses OSGi's ServiceTracker to get a reference to the TransactionManager service.
package org.hibernate.transaction;

import javax.transaction.TransactionManager;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

/**
 * @author iocanel
 */
public class TransactionManagerLocator {

    private final String lookupFilter = "(objectClass=javax.transaction.TransactionManager)";
    private static BundleContext context;
    private static TransactionManagerLocator instance;
    private ServiceTracker serviceTracker;

    private TransactionManagerLocator() throws Exception {
        if (context == null) {
            throw new Exception("Bundle Context is null");
        } else {
            serviceTracker = new ServiceTracker(context, context.createFilter(lookupFilter), null);
            // The tracker must be opened before getService() can return anything.
            serviceTracker.open();
        }
    }

    public static synchronized TransactionManagerLocator getInstance() throws Exception {
        if (instance == null) {
            instance = new TransactionManagerLocator();
        }
        return instance;
    }

    public static void setContext(BundleContext context) {
        TransactionManagerLocator.context = context;
    }

    public TransactionManager getTransactionManager() {
        return (TransactionManager) serviceTracker.getService();
    }
}

OsgiTransactionManagerLookup is an implementation of Hibernate's TransactionManagerLookup that delegates the lookup to the TransactionManagerLocator.
package org.hibernate.transaction;

import java.util.Properties;
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;
import org.hibernate.HibernateException;

/**
 * @author iocanel
 */
public class OsgiTransactionManagerLookup implements TransactionManagerLookup {

    public TransactionManager getTransactionManager(Properties props) throws HibernateException {
        try {
            return TransactionManagerLocator.getInstance().getTransactionManager();
        } catch (Exception ex) {
            throw new HibernateException("Failed to lookup transaction manager.", ex);
        }
    }

    public String getUserTransactionName() {
        return "java:comp/UserTransaction";
    }

    public Object getTransactionIdentifier(Transaction transaction) {
        return transaction;
    }
}
Activator is just a bundle activator. Its role is to pass a static reference of the bundle context to the TransactionManagerLocator (the bundle context is required by the service tracker).
package org.hibernate;

import org.hibernate.transaction.TransactionManagerLocator;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

/**
 * @author iocanel
 */
public class Activator implements BundleActivator {

    public void start(BundleContext bc) throws Exception {
        // Pass the bundle context to the locator, as required by its service tracker.
        TransactionManagerLocator.setContext(bc);
    }

    public void stop(BundleContext bc) throws Exception {
    }
}

Example use of the bundle & bundle source code.
An example web application that uses the custom hibernate bundles can be found in this post.

If you feel tired of reading and just want to use the bundles, you can download them from here. All the custom bundles are included in the maven project under the bundles folder.

The example application uses Wicket and can be easily deployed in Karaf.
Final Thoughts
I hope you found it useful.

Any feedback is more than welcome.


Spring AOP and Reflection Pitfalls

This post intends to point out some pitfalls when using Spring AOP and reflection on the same objects. Moreover, it provides some examples of these pitfalls when combining ServiceMix & Camel with Spring JPA/Hibernate.

The two most common uses of aspect oriented programming with spring are
  • Security
  • Transaction Handling
I found myself having issues when applying these two to beans that are accessed using reflection (not in all cases) and below I am going to dig into those issues.

Spring AOP flavors
Spring aop can be used in many different flavors:
  • Compile time weaving 
  • Load time weaving
  • Using dynamic or cglib proxies  (The main focus of this post)
Cglib proxies and reflection
There are many cases where a bean needs to be accessed using reflection. A common case is to use reflection in order to access a private field. 

I could use the following piece of code in order to retrieve the privateProperty value of SomeBean using reflection:
And this would work just fine. However, if someBeanInstance is enhanced using cglib, the code above would break, resulting in a null value in privatePropertyValue.
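The pitfall can be reproduced without any Spring or cglib dependency, since a cglib proxy is essentially a runtime-generated subclass of the target. The class names below are stand-ins for illustration:

```java
import java.lang.reflect.Field;

public class ReflectionPitfall {

    // Stand-in for the Spring bean with a private field.
    static class SomeBean {
        private String privateProperty = "secret";
    }

    // A cglib proxy is essentially a runtime-generated subclass of the target.
    static class SomeBeanProxy extends SomeBean {
    }

    public static void main(String[] args) throws Exception {
        // Works on the plain bean: the field is declared on its own class.
        SomeBean plain = new SomeBean();
        Field field = plain.getClass().getDeclaredField("privateProperty");
        field.setAccessible(true);
        System.out.println(field.get(plain)); // prints: secret

        // Breaks on the "proxy": getDeclaredField does not search superclasses.
        SomeBean proxied = new SomeBeanProxy();
        try {
            proxied.getClass().getDeclaredField("privateProperty");
        } catch (NoSuchFieldException e) {
            System.out.println("privateProperty not found on proxy class");
        }
    }
}
```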
Spring's Transactional annotation and Reflection
The problem as described above might be pretty obvious, however here is a direct side effect of it that is not that obvious. Let's assume the use of Spring's transactional annotation. A possible set up could be
the bean annotated as transactional could be
If the resource is injected using traditional reflection (as described above), this would eventually result in a NullPointerException, due to the fact that the resource would fail to be injected.
Moreover, the Exception would trigger a transaction rollback and the entity would not be saved.

You might wonder "why would reflection fail?". The answer is that the cglib proxy is actually a subclass of the proxied object that is created at run-time, and thus reflection fails to find the declared field on the proxy. In order to make it work, getDeclaredField needs to be called on the super class (but that would break once you removed the aop).

Work around: Spring's ReflectionUtils to the rescue
Spring provides a class that, among others, offers a work-around for this issue. Here is an example of using Spring's ReflectionUtils on cglib proxies.
Behind the scenes Spring will attempt to find the declared field both on the target object (someBean) and all its superclasses. So if someBean is proxied, it will fail to find the declared field on the proxy, but it will succeed using its superclass (SomeBean.class).
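The superclass-walking behaviour described above can be sketched in plain JDK code (this mimics what ReflectionUtils.findField does, it is not the Spring source; the Base/Proxy classes are stand-ins):

```java
import java.lang.reflect.Field;

public class FieldLookup {

    static class Base {
        private String secret = "x";
    }

    // Simulates a cglib proxy: a subclass that does not declare the field itself.
    static class Proxy extends Base {
    }

    // Walk up the class hierarchy until the declared field is found.
    public static Field findField(Class<?> clazz, String name) {
        Class<?> current = clazz;
        while (current != null && current != Object.class) {
            try {
                return current.getDeclaredField(name);
            } catch (NoSuchFieldException e) {
                current = current.getSuperclass(); // not declared here, try the parent
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        Field f = findField(Proxy.class, "secret"); // found on Base, not on Proxy
        f.setAccessible(true);
        System.out.println(f.get(new Proxy())); // prints: x
    }
}
```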
Real examples using Apache ServiceMix & Camel
I first encountered the issue when I attempted to add the transactional annotation on the bean of a ServiceMix BeanEndpoint. A simplified version of this case is here.
ServiceMix uses reflection (as described above) in order to inject the DeliveryChannel into the MessageExchangeListener and this reproduces the problem. Unfortunately, a direct solution to this issue would require editing the BeanEndpoint itself (which is not such a bad idea). Another work-around would be to apply the transactional annotation using compile time weaving. Finally, if none of the above seems appealing, you can always create another bean that is annotated as transactional and make calls to that bean from inside the MessageExchangeListener.

Note: The bean endpoint itself uses Spring's ReflectionUtils, so it shouldn't encounter this issue, however it still does, due to the fact that the property (in this case the DeliveryChannel) is set on the proxy and not the actual object.

A similar case I encountered was the use of Camel's @RecipientList annotation combined with Spring's @Transactional annotation. I will not get into details about it, since I think that by now it's pretty obvious.

Final thoughts
If you get to understand the nature of this issue, it's not that hard to deal with it. However, I spent a great deal of time trying to identify the root cause.

In most cases you can bypass it by avoiding proxying the reflection target itself. To do so, you only pass a reference of the proxied object to the class that is accessed using reflection.

From what I read in the forums, it haunts a lot of people and this is why I decided to blog about it.

I hope you find it useful!


Extend ServiceMix Management features using Spring - Part 3

In the previous posts (Extend ServiceMix Management features using Spring - Part 1 and Part 2) I presented how to use spring to gain control over endpoint lifecycle and configuration via jmx. You might wonder by now: "what happens to those custom changes if I have to redeploy the assembly, restart servicemix or, even worse, restart the server?". The short answer is that these changes are lost. The long answer is in this blog post, which explains how to persist those changes and how to make the endpoint reload them each time it starts.

Part III: Modifying, Persisting and Loading Custom Configuration Automatically
In order to persist and auto load custom configuration upon endpoint start-up all we need is:
For persisting
  • A way to serialize the configuration in xml (jaxb2).
  • A way to persist the configuration (jpa/hibernate).
For auto loading
  • A way to intercept endpoint start and activate methods (spring aop).
  • A way to apply that configuration to the endpoint (beanutils).
The basic idea is that for each endpoint, the custom configuration can be serialized to xml and persisted and with the use of aop interceptors reloaded to the endpoint each time it starts up.

Step 1: Configuring persistence
For persisting configuration I am going to use JPA/Hibernate and MySQL.
I want to keep things as simple as possible, so I will create a table that contains only two fields:
  1. ID, the id of the endpoint which will be the primary key.
  2. CONFIGURATION, a text field that will hold the configuration in xml format.
It could look like this:
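A minimal DDL sketch for such a table (the table and column names are illustrative; adjust them to your own conventions):

```sql
-- Hypothetical table; the names ENDPOINT_CONFIGURATION, ID and CONFIGURATION are illustrative
CREATE TABLE ENDPOINT_CONFIGURATION (
    ID            VARCHAR(255) NOT NULL PRIMARY KEY, -- endpoint.getKey()
    CONFIGURATION TEXT                               -- the configuration serialized as xml
);
```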

The endpoint id can be retrieved by calling endpoint.getKey(). The configuration is the XML representation of the configuration (more details later).

The persistence unit, the entity and the data access object are things that we want to be reusable, so they had better live in a separate jar. I will call it management-support.

Let's start creating the new jar by adding the entity.
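Since the original listing is not shown here, a sketch of what such an entity could look like (class, table and column names are my own, mirroring the table above):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;

// Illustrative entity mapping the two-column table described above
@Entity
@Table(name = "ENDPOINT_CONFIGURATION")
public class EndpointConfiguration {

    @Id
    @Column(name = "ID")
    private String id;            // the endpoint key, endpoint.getKey()

    @Lob
    @Column(name = "CONFIGURATION")
    private String configuration; // the configuration serialized as xml

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getConfiguration() { return configuration; }
    public void setConfiguration(String configuration) { this.configuration = configuration; }
}
```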

Now we can create the persistence unit. Note that in this example I am adding all the database connection information inside persistence.xml, leaving pooling to hibernate. It would be better if I created a datasource, but for the sake of simplicity I will not.
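A sketch of such a persistence.xml (the unit name, entity package, url and credentials are placeholders, not the values from the example project):

```xml
<!-- Illustrative persistence.xml; unit name, url and credentials are placeholders -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="management-support" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <class>com.example.management.EndpointConfiguration</class>
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect"/>
      <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver"/>
      <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/servicemix"/>
      <property name="hibernate.connection.username" value="smx"/>
      <property name="hibernate.connection.password" value="smx"/>
      <property name="hibernate.hbm2ddl.auto" value="update"/>
    </properties>
  </persistence-unit>
</persistence>
```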

Now it's time to create a very simple dao for the EndpointConfiguration entity.
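Missing the original listing, the dao could be sketched roughly like this (class and method names are illustrative; a production version would reuse a managed EntityManager rather than creating one per call):

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Illustrative dao for the EndpointConfiguration entity
public class EndpointConfigurationDao {

    private final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("management-support");

    public void save(EndpointConfiguration configuration) {
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            em.merge(configuration); // insert or update by endpoint id
            em.getTransaction().commit();
        } finally {
            em.close();
        }
    }

    public EndpointConfiguration find(String endpointId) {
        EntityManager em = emf.createEntityManager();
        try {
            return em.find(EndpointConfiguration.class, endpointId);
        } finally {
            em.close();
        }
    }
}
```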

Step 2: Configuring Configuration Serialization
For each endpoint type whose configuration we want serialized and persisted, I am going to create a pojo that contains all the managed properties. The pojo will be annotated with Jaxb annotations so that we can easily serialize it to xml. Before serialization takes place, the pojo needs to be populated with the values of the current configuration. For this purpose I am going to use Spring's BeanUtils. Now we can update our endpoint manager and add two methods (save & load of configuration) and the ConfigurationDao that was presented above.
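The copy step itself boils down to reading each managed property from the endpoint and writing it to the configuration pojo. Here is a runnable, framework-free sketch of that idea using the JDK's own java.beans introspection in place of Spring's BeanUtils; the endpoint and pojo classes are stand-ins, not the real ServiceMix types:

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class ConfigSnapshotDemo {

    // Stand-in for an HttpEndpoint with one managed property
    public static class EndpointStub {
        private String locationURI = "http://0.0.0.0:8192/PersonService/";
        public String getLocationURI() { return locationURI; }
        public void setLocationURI(String locationURI) { this.locationURI = locationURI; }
    }

    // Stand-in for the Jaxb-annotated configuration pojo
    public static class EndpointConfig {
        private String locationURI;
        public String getLocationURI() { return locationURI; }
        public void setLocationURI(String locationURI) { this.locationURI = locationURI; }
    }

    // Copy every matching readable/writable property, like BeanUtils.copyProperties
    public static void copyProperties(Object source, Object target) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(source.getClass(), Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getReadMethod() == null) continue;
            try {
                PropertyDescriptor tpd = new PropertyDescriptor(pd.getName(), target.getClass());
                if (tpd.getWriteMethod() != null) {
                    tpd.getWriteMethod().invoke(target, pd.getReadMethod().invoke(source));
                }
            } catch (IntrospectionException ignored) {
                // the target has no such property; skip it
            }
        }
    }

    public static void main(String[] args) throws Exception {
        EndpointStub endpoint = new EndpointStub();
        EndpointConfig config = new EndpointConfig();
        copyProperties(endpoint, config);
        System.out.println(config.getLocationURI());
    }
}
```

The populated pojo can then be handed to a Jaxb marshaller and the resulting xml stored via the dao.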

The new endpoint manager will expose the saveConfiguration and loadConfiguration managed operations to jmx.

Step 3: Configuring Endpoint Lifecycle Interception
In this section I will show how to intercept the lifecycle methods of the endpoint using spring-aop, configured with cglib proxies. The goal is to intercept the start and activate methods, call loadConfiguration on the endpoint manager, and then proceed with the execution. So the interceptor needs to be aware of the endpoint that it intercepts (determined by the pointcut definition) and of the endpoint manager (injected into the bean that plays the role of the Aspect). The interceptor will look like this:

Note that we are intercepting both the start and activate methods. This is because some endpoints need to be restarted in order to refresh their configuration, while others need to be reactivated.
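The idea of the interceptor can be demonstrated without Spring at all. The sketch below uses a plain JDK dynamic proxy in place of the cglib proxy, and a list in place of the real loadConfiguration call; all names are stand-ins:

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class LifecycleInterceptorDemo {

    public interface Endpoint {
        void start();
        void activate();
        void process();
    }

    public static class EndpointStub implements Endpoint {
        public void start() { }
        public void activate() { }
        public void process() { }
    }

    // Records which lifecycle methods triggered a configuration reload
    public static final List<String> RELOADED = new ArrayList<>();

    // Wrap the endpoint so that start/activate first reload persisted configuration
    public static Endpoint intercept(Endpoint target) {
        return (Endpoint) Proxy.newProxyInstance(
                Endpoint.class.getClassLoader(),
                new Class<?>[]{Endpoint.class},
                (proxy, method, args) -> {
                    String name = method.getName();
                    if (name.equals("start") || name.equals("activate")) {
                        RELOADED.add(name); // stands in for endpointManager.loadConfiguration()
                    }
                    return method.invoke(target, args); // proceed with the original call
                });
    }

    public static void main(String[] args) {
        Endpoint endpoint = intercept(new EndpointStub());
        endpoint.start();
        endpoint.process();  // not a lifecycle method: no reload
        endpoint.activate();
        System.out.println(RELOADED); // [start, activate]
    }
}
```

In the real setup the pointcut plays the role of the method-name check, and the around advice calls loadConfiguration before proceeding.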
Step 4: Putting the pieces together
Now it's time to put all the pieces together. I am going to create a new jar, management-support, and add to it a generic endpoint manager (the base class for all endpoint managers), the endpoint configuration entity, the configuration dao and the persistence unit. The example project (wsdl-first) will be modified so that the HttpEndpointManager extends the generic endpoint manager and the http-su xbean.xml configures persistence and aop as explained above.

The generic EndpointManager
The POJO that represents HttpEndpoint configuration
The updated HttpEndpointManager

And finally the xbean.xml for the http service unit
The final configuration might seem a bit bloated. It can become a lot tidier by using xbean features, but that goes far beyond the scope of this post.
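For reference, the aop part of such an xbean.xml could be sketched roughly like this (bean ids, packages and the pointcut expression are placeholders; the real file in the example project is authoritative):

```xml
<!-- Illustrative spring-aop wiring; ids, classes and the pointcut are placeholders -->
<aop:config proxy-target-class="true">
  <aop:aspect ref="lifecycleInterceptor">
    <aop:around method="reloadAndProceed"
                pointcut="execution(* org.apache.servicemix.http.HttpEndpoint.start(..))
                          or execution(* org.apache.servicemix.http.HttpEndpoint.activate(..))"/>
  </aop:aspect>
</aop:config>

<bean id="lifecycleInterceptor" class="com.example.management.EndpointLifecycleInterceptor">
  <property name="endpointManager" ref="httpEndpointManager"/>
</bean>
```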

Preparing the container
For this example to run we need to add a few jars to servicemix

  • hibernate-entitymanager
  • hibernate-annotations
  • aspectjrt
  • spring-orm
  • the dependencies of the above
You can download the complete example here; it contains all the dependencies under wsdl-first/lib/optional.

Final words
I hope that you find it useful. Personally, I've been using it for quite some time now and I am very happy with it. Using this you can even alter the xslt of an xslt-endpoint via the jmx console without having to recompile, redeploy or restart your assembly.

Extend ServiceMix Management features using Spring - Part 2

In the previous post (Extend ServiceMix Management features using Spring - Part 1) I demonstrated a very simple technique that allows you to expose endpoint lifecycle operations via jmx. Now I am going to take it one step further and expose the endpoint configuration via jmx.

If you haven't done so already, please catch up by reading Part 1.

Part II: Modifying the configuration of a live endpoint.

I am going to use the wsdl-first servicemix sample, as modified in the previous post, and expose the locationURI property of the HttpEndpoint via jmx using Spring's @ManagedAttribute annotation.

Step 1
Open the HttpEndpointManager and delegate the getter and setter of the HttpEndpoint's locationURI property.

Step 2
Annotate both methods with @ManagedAttribute

Now the HttpEndpointManager will look like this
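Missing the original listing, a sketch of the updated manager could look like this (descriptions are my own; the delegate methods mirror HttpEndpoint's locationURI property):

```java
import org.apache.servicemix.http.HttpEndpoint;
import org.springframework.jmx.export.annotation.ManagedAttribute;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;

// Illustrative sketch of the manager from Part 1, extended with a managed attribute
@ManagedResource(description = "Manager for the http endpoint")
public class HttpEndpointManager {

    private HttpEndpoint endpoint;

    public void setEndpoint(HttpEndpoint endpoint) { this.endpoint = endpoint; }

    @ManagedAttribute(description = "The location uri of the endpoint")
    public String getLocationURI() { return endpoint.getLocationURI(); }

    @ManagedAttribute
    public void setLocationURI(String locationURI) { endpoint.setLocationURI(locationURI); }

    @ManagedOperation
    public void activate() throws Exception { endpoint.activate(); }

    @ManagedOperation
    public void deactivate() throws Exception { endpoint.deactivate(); }
}
```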

Once the assembly gets deployed, the locationURI property is exposed in the jmx console.
Note that once the new property is applied, the endpoint needs to be reactivated (call deactivate and activate from jmx as shown in the previous post).

As you can see in the picture I used jmx and changed the location uri from PersonService to NewPersonService, without editing, recompiling or redeploying the service assembly.

This approach is really simple and quite useful. Its biggest advantage is that even a person with no knowledge of ServiceMix can alter the configuration. Moreover, it simplifies monitoring production environments.

The full source code of this example can be found here.

In Part 3 I will demonstrate how these configuration changes can be persisted, and how we can intercept the endpoint lifecycle so that those changes are loaded each time the endpoint starts.


Extend ServiceMix Management features using Spring - Part 1

This is the first of a series of posts that demonstrate how to extend ServiceMix management using Spring's jmx and aop features. The version of ServiceMix used here is 3.3.3-SNAPSHOT, but I've been using this technique since 3.3 and it can probably be applied to 4.x as well.

One of the most common problems I had with servicemix was that even the simplest changes in the configuration (e.g. changing the name of the destination in a jms endpoint) required editing the xbean.xml of the service unit and redeploying. Moreover, this affected the rest of the service units contained in the service assembly, which would be restarted too.

Another common problem was that I could not start, stop or restart a single service unit. That was a major issue, since I often needed to stop sending messages while still being able to accept messages in the bus. The only option I had was to split our service units into multiple service units (e.g. an inbound service unit and an outbound service unit).

This series of blog posts will demonstrate how we used Spring in order to:

  1. Obtain service unit lifecycle management via jmx.
  2. Expose endpoint and marshaler configuration via jmx.
  3. Perform configuration changes on live production environments.
  4. Persist these changes to a database.
  5. Load custom endpoint configuration from the database.
Part I: Starting and Stopping Endpoints
Although all ServiceMix endpoints have start and stop methods, these methods are exposed neither to jmx nor to the web console.
A very simple but useful way to expose them via jmx is to use Spring's jmx auto-exporting capabilities.

As an example I will use the wsdl-first project from the servicemix samples in order to expose the lifecycle methods of the http endpoint via jmx. To do so, I will delegate its lifecycle methods (start, stop) to a spring bean annotated with @ManagedResource, and I will modify the xbean.xml of the http service unit so that it automatically exports beans annotated as @ManagedResource to jmx.

Step 1
The first step is to add spring as a provided dependency inside the http service unit.
Step 2
Create the class that will be exported to jmx by spring. I will name the class HttpEndpointManager. This class will be annotated as @ManagedResource, will have a property of type HttpEndpoint, and will delegate the lifecycle methods (activate, deactivate, start, stop) to the HttpEndpoint. These methods will be exposed to jmx by being annotated as @ManagedOperation.
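The class described above could be sketched roughly like this (descriptions are my own; each managed operation simply delegates to the endpoint):

```java
import org.apache.servicemix.http.HttpEndpoint;
import org.springframework.jmx.export.annotation.ManagedOperation;
import org.springframework.jmx.export.annotation.ManagedResource;

// Illustrative sketch; the method bodies simply delegate to the endpoint
@ManagedResource(description = "Manager for the http endpoint")
public class HttpEndpointManager {

    private HttpEndpoint endpoint;

    public HttpEndpoint getEndpoint() { return endpoint; }
    public void setEndpoint(HttpEndpoint endpoint) { this.endpoint = endpoint; }

    @ManagedOperation
    public void start() throws Exception { endpoint.start(); }

    @ManagedOperation
    public void stop() throws Exception { endpoint.stop(); }

    @ManagedOperation
    public void activate() throws Exception { endpoint.activate(); }

    @ManagedOperation
    public void deactivate() throws Exception { endpoint.deactivate(); }
}
```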
Step 3
Edit the xbean.xml of the http service unit and add the spring beans that will take care of automatically exporting the HttpEndpointManager to jmx.
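The exporting beans could be wired roughly as follows (bean ids and the endpoint reference are placeholders; the class names are Spring's standard annotation-driven jmx export classes):

```xml
<!-- Illustrative wiring; bean ids and the endpoint reference are placeholders -->
<bean id="httpEndpointManager" class="com.example.management.HttpEndpointManager">
  <property name="endpoint" ref="httpEndpoint"/>
</bean>

<bean id="attributeSource"
      class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>

<bean id="assembler"
      class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
  <property name="attributeSource" ref="attributeSource"/>
</bean>

<bean id="namingStrategy"
      class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
  <property name="attributeSource" ref="attributeSource"/>
</bean>

<bean id="mbeanExporter" class="org.springframework.jmx.export.MBeanExporter">
  <property name="assembler" ref="assembler"/>
  <property name="namingStrategy" ref="namingStrategy"/>
  <property name="autodetect" value="true"/>
</bean>
```

With autodetect enabled, the exporter picks up any bean annotated with @ManagedResource, so no explicit bean list is needed.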

You can now open the JConsole and use the HttpEndpointManager MBean to start/stop the HttpEndpoint without having to start/stop the whole service assembly.

Managing the lifecycle of endpoints in a simple assembly like the wsdl-first servicemix sample has no added value (since you can simply stop the service assembly). However, this sample was chosen because most servicemix users are pretty familiar with it. In more complex assemblies this trick is a savior (since you can stop a single endpoint while keeping the rest of the endpoints running). Moreover, it is the base for even more useful tricks that will be presented in the parts that follow.

The full source code of this example can be found here.

In the second part of the series, I will demonstrate how you can extend this trick in order to perform configuration modification via jmx.