7. Advanced jBPM capabilities – Open Source SOA

Chapter 7. Advanced jBPM capabilities

The previous two chapters on jBPM covered most of the fundamental features of this very capable BPM solution. You saw descriptions, with examples, of most of the constructs used in jBPM, such as nodes, transitions, and in chapter 6, tasks. In addition to describing how to build business processes, we also discussed how its API can be leveraged to build powerful, customized applications. Clearly, this is a very capable horse—but is it a thoroughbred? Does it possess the advanced features to warrant being considered "enterprise ready"? What characteristics must it possess to fulfill this role? We'll address some of these questions in this chapter.

This chapter covers how to handle highly complex processes by breaking them into more manageable subprocesses. These subprocesses, in turn, can be reused by other processes. Exception handling and audit logging, both essential for production deployment, are addressed through implementation examples and best practices. In addition, you'll learn how jBPM can be integrated with Apache Tuscany's Service Component Architecture (SCA) and Service Data Objects (SDOs). As you recall, SCA and SDO were the focus of chapters 3 and 4, and they provide a framework for building reusable, multilanguage components that can be easily exposed as services through a variety of communication protocols. We demonstrate how SCA/SDO can be used to service-enable jBPM, thereby making it a first-class citizen in your SOA environment. When you finish reading this chapter, you'll have all the tools you need to design, deploy, and monitor jBPM business processes. The savings that will result from your evangelism of BPM will pay off handsomely!

Important enterprise features of jBPM

The jBPM business process examples we've developed so far have all, by design, been fairly simple in nature. My goal was to ease the learning process and focus on the specific topic at hand. In a real-world scenario, you'll often find yourself creating orchestrations that contain dozens, or even hundreds, of steps. In such cases, it's useful to be able to break down, or group, the process into more manageable pieces. We'll discuss two means of accomplishing this in jBPM: superstates and subprocesses. Later, we'll provide solutions for managing exceptions that may occur as a result of any custom code you've introduced as part of a process definition.

Our focus will then turn to a feature that, while not "advanced" per se, is highly convenient: using inline code in the form of BeanShell scripts. I'll offer solutions for monitoring a process instance through the extensive logging features available in jBPM. I'll conclude this section by looking at a concept called asynchronous continuations, which enables you to distribute processing to the server in which jBPM Enterprise is running. Let's begin by looking at jBPM superstates.

Superstates for grouping

Superstates in jBPM are simply a grouping of nodes. They're useful, for example, when you want to logically associate a group of nodes. You might want to do this to delineate phases of a process or to group node activity by organizational responsibilities. For example, an employee termination process is typically cross-departmental, with various responsibilities falling in several departments. This is illustrated in the hypothetical employee termination process shown in figure 7.1.

In figure 7.1, superstates are used to group the nodes related to HR, Finance, and Security. In the jBPM Graphical Process Designer (GPD), when you deposit a superstate node into the jBPM working area or canvas, a bordered region is created where you can then place nodes and transitions. As you can see, these border areas can be resized and will sprout scroll bars where necessary. With the jPDL XML, how is a superstate defined? It's pretty simple, as this example illustrates:

<super-state name="security">
  <node name="disable ldap">
    <transition to="security-fork" name="t"></transition>
  </node>
  <!-- other nodes here -->
</super-state>

Figure 7.1. A hypothetical employee termination process illustrating superstates in use

The available options for the super-state element are fairly minimal. Its attributes include @name and @async (covered in more detail in section 7.1.6), and, like all jBPM nodes, it supports the standard node elements (see chapter 5).

Superstates provide some additional functionality beyond just diagrammatic grouping. As you learned in chapter 5, superstate-specific events are also available (superstate-enter and superstate-leave). You can associate action handlers with these events, so with this ability, you can call custom code when the superstate has been entered or exited. When would you consider using this? One use case that comes to mind is business activity monitoring, where you want to highlight when certain milestones or activities take place in a process.
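As a sketch of what this looks like in jPDL, the following fragment attaches an action to the superstate-enter event (the MilestoneLogger handler class and its package are hypothetical):

```xml
<super-state name="security">
  <event type="superstate-enter">
    <!-- MilestoneLogger is a hypothetical custom ActionHandler -->
    <action name="log-milestone" class="com.sample.action.MilestoneLogger"/>
  </event>
  <!-- nodes and transitions for the security phase go here -->
</super-state>
```

The handler would fire each time execution enters the superstate, making it a natural hook for recording milestone events.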


The code examples that accompany this section of the book demonstrate superstate events in use. Notice in particular the SuperStateTest JUnit test class, which demonstrates how you can reference nodes in the superstate.

Perhaps more importantly, you can also associate timers with the superstate (this does require the Enterprise edition of jBPM—in other words, the app server edition). In the scenario shown in figure 7.1, you could, for instance, notify a manager of the HR department that their team's work hasn't been completed in a timely fashion. Thus, when using events and timers in tandem with superstates, there's a benefit beyond the obvious achieved by providing visual hierarchy and grouping. Related in concept to superstates are subprocesses, which are intended to provide greater process composition flexibility.
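Returning to the timer idea for a moment, a superstate timer can be sketched along these lines (the duedate value and the NotifyManagerHandler class are assumptions, and remember that timers require the Enterprise edition):

```xml
<super-state name="hr">
  <timer name="hr-overdue" duedate="3 days">
    <!-- NotifyManagerHandler is a hypothetical ActionHandler that alerts the manager -->
    <action class="com.sample.action.NotifyManagerHandler"/>
  </timer>
  <!-- HR-related nodes go here -->
</super-state>
```

If execution remains inside the superstate past the due date, the timer's action fires, which is how the overdue-work notification described above could be wired up.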

Using subprocesses to manage complexity

Whereas a superstate in jBPM is used to logically group nodes, a subprocess can be used to split those grouped nodes into entirely separate processes. Thus, subprocesses provide a means to create decomposed processes. You can define a master process, which in turn calls subprocesses. A subprocess can be thought of as simply another individual node in the parent process. When the subprocess completes, execution resumes in the parent process that invoked it. In that respect, it behaves much like a state nodetype in jBPM. Using subprocesses enables you to create more complex processes without overly complicating the visual layout. Additional benefits include the ability to create reusable process modules that can be incorporated by other process definitions. In chapter 1 you learned that an important aspect of SOA is the ability to create composite services. Using subprocesses, you can achieve this same objective: subprocesses can be run in stand-alone fashion or as part of a larger orchestration.

In figure 7.1, I showed a modestly complex business process used for employee termination. In that case, superstates are used to provide logical structure to the diagram. Since the security-related nodes are the most involved, let's instead break that out into a separate subprocess rather than using a superstate (see figure 7.2).

In figure 7.2, the node named security represents the new subprocess (identified in the node icon as <<Process State>>). When this node is encountered, a new process instance for the security process is instantiated. This can be illustrated most effectively through the jBPM Console, which will clearly show a new process instance for the subprocess being created. Of course, you can also use the API, as we described in chapter 6, to identify the new process instance. In the parent process jPDL XML definition, you can see how this subprocess is defined (see listing 7.1).

Figure 7.2. The relationship between a main process and a subprocess

Listing 7.1. jPDL subprocess definition example

<process-state name="security">
  <sub-process name="security" binding="late"/>
  <variable access="read" name="name"></variable>
  <variable access="read" name="employeeId"></variable>
  <variable access="read,write" name="securityComplete"></variable>
  <transition to="join1"/>
</process-state>

The process-state element, in addition to accepting the @name attribute, also supports asynchronous continuations, covered in section 7.1.6, through the @async attribute (by default, this is off, or false). The standard set of node elements are also supported, such as timers, events, descriptions, and exception handlers.


One neat feature that's not currently available in jBPM is the ability to invoke a subprocess that resides on a different jBPM instance. Such a distributed capability would make jBPM more scalable for large implementations.

In the example from listing 7.1, the sub-process child element of process-state is where the subprocess is defined. The @name attribute of the sub-process element must equate to the name assigned to the subprocess. This corresponds to the value provided to the @name attribute of the process-definition root element of the subprocess being invoked. Additionally, the @binding attribute (which is set to late in listing 7.1) instructs the jBPM engine to wait until runtime to identify the subprocess version to invoke. Otherwise, the binding will occur when the process (not a particular process instance) is created, at which time it will attempt to identify the subprocess to use when a process instance is subsequently created. Thus, if you opt to not use late binding, you must also be sure to create the subprocess first, followed by the main or calling process. The optional @version attribute can also be used to specify the version of the subprocess definition you wish to use—in its absence, the most recent version will be used.
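For instance, a sketch of pinning the subprocess to a specific version (the version number here is hypothetical) might look like this:

```xml
<process-state name="security">
  <!-- invoke version 2 of the "security" subprocess rather than the latest -->
  <sub-process name="security" version="2"/>
  <transition to="join1"/>
</process-state>
```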


I recommend always setting the @binding attribute to late. This will help you avoid the headaches that result from trying to determine which process must be deployed first; the resulting errors can be confusing to debug.

Besides sub-process, the other important element in listing 7.1 is the variable child element. This variable is used to manage how process variables are propagated to the subprocess. It accepts three attributes:

  • @name—The name of the process variable to be passed to the subprocess.

  • @access—A comma-delimited set of values used to define the access rights permitted by the subprocess when working with the variable. Permissible values are read, which indicates the subprocess will have read-only access, and write, which indicates the value can be modified by the subprocess.

  • @mapped-name—When present, this variable allows you to specify a different name for the variable that's passed when it's received by the subprocess. This is a helpful feature if a subprocess is reused elsewhere and has different process variable name expectations.
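To make @mapped-name concrete, here's a sketch in which the parent's name variable arrives in the subprocess under a different name (employeeName is a hypothetical variable name):

```xml
<process-state name="security">
  <sub-process name="security" binding="late"/>
  <!-- the parent's "name" variable is visible to the subprocess as "employeeName" -->
  <variable name="name" access="read" mapped-name="employeeName"/>
  <transition to="join1"/>
</process-state>
```

This is handy when a reusable subprocess expects its own variable names and you don't want to rename variables in the parent process.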

We've now covered two approaches for helping you organize or decompose complex business processes—essential tools for building enterprise orchestrations. You may be thinking that you've learned enough about jBPM, and you're perhaps tempted to skip to the next chapter. At this point, you're like a doctor trained in the art of surgery but not well versed in the art of recuperation. As we all know, despite our best intentions, things don't always work out the way we anticipate. This is where exception handling comes into play.

Managing exceptions

Exception handling in jBPM is a bit different than you might imagine. When managing exceptions in jBPM, you're only dealing with those that result from any handler classes that you've created. They aren't used for any sort of internal jBPM error that may have resulted from processing within the engine itself. So, for example, if you're extending functionality with an action or assignment handler, you can trap and manage those errors using the exception-handling techniques we'll discuss.

A source of common misunderstanding about exception handling in jBPM is whether you can use this mechanism to directly alter the flow of the process. The official documentation is rather contradictory on this matter. The upshot is this: while technically you can redirect the flow using a Token.setNode(Node node) call, this approach is strongly discouraged. Instead, the proper strategy is to set a process instance variable, which can then direct subsequent flows by way of a decision node. In addition, you can use the exception mechanism to issue an alert or notification through JMS, email, and so forth so that someone can perform remedial actions.

Let's create a simple example to illustrate exception handling at work. Figure 7.3 shows a process that uses a transition action handler to purposely throw an exception (it occurs at the to-state transition). An exception handler action then creates a process instance variable called errorMsg and sets it to a String value. The downstream decision node (err-check) checks for the presence of the errorMsg process variable. If the variable is present, the decision node redirects the flow to the notify-of-error node.

In the jPDL used to define figure 7.3 (the full code is available in the source code for this chapter), the transition is expressed in XML as

Figure 7.3. An example process that illustrates exception-handling features

<transition name="to_state" to="first">
  <action name="action" class="com.sample.action.MessageActionHandlerExc2">
    <message>Going to the first state!</message>
  </action>
  <exception-handler exception-class="java.lang.RuntimeException">
    <action name="RuntimeExceptionAction"
        class="com.sample.action.RuntimeExceptionAction"/>
  </exception-handler>
</transition>

As you can see, the exception-handler element is defined alongside the action handler used to generate the exception. The exception-handler's @exception-class attribute defines the type of exception it will catch, which in this case is java.lang.RuntimeException. When this exception is caught, the handler class defined by the @class attribute will be invoked, which in this case is RuntimeExceptionAction. This is a standard action handler that implements the ActionHandler interface's execute method. In this example, it simply creates a process instance variable errorMsg using

  executionContext.setVariable("errorMsg",
      "A runtime error has occurred in node");

Later in the process, the decision node called err-check in figure 7.3 checks to see if errorMsg is defined as a process variable. If it is, err-check transitions the token to the node responsible for alerting or otherwise taking corrective action (notify-of-error). The decision node's jPDL definition is shown here:

<decision name="err-check"
  expression='#{errorMsg != null ? "err" : "okay"}'>
  <transition to="notify-of-error" name="err" />
  <transition to="end" name="okay"></transition>
</decision>

I used the @expression attribute in this case to identify which transition path to follow; the value returned by the expression is the name of the transition to take. Using Java's ternary operator, the statement checks whether the errorMsg variable is non-null. If it is non-null, the expression returns the string err, which is the name of one of the defined transitions. Otherwise, it returns the string okay, which corresponds to the transition name used to go to the end node.
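To make the routing logic concrete, here's a minimal plain-Java sketch (no jBPM dependency; the class and method names are mine) of how the ternary expression maps the errorMsg variable to a transition name:

```java
// Mirrors the jPDL expression #{errorMsg != null ? "err" : "okay"}:
// a non-null errorMsg routes to the "err" transition, otherwise "okay".
public class DecisionExpressionDemo {
    static String chooseTransition(String errorMsg) {
        return errorMsg != null ? "err" : "okay";
    }

    public static void main(String[] args) {
        // An exception handler set errorMsg, so the "err" path is taken.
        System.out.println(chooseTransition("A runtime error has occurred in node"));
        // No error variable was set, so the flow proceeds to "okay".
        System.out.println(chooseTransition(null));
    }
}
```

The point is that the exception handler never redirects the token itself; it only sets the variable, and the decision node's expression does the routing.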

In the source code for this section, you'll see how exception handlers can be defined at the root process level. Such handlers can be useful as a catchall, since exceptions will bubble up from the node and transition level if no corresponding exception handler catches the exception. In our example, java.lang.Exception was provided as the @exception-class, since many of the standard Java exceptions are subclasses of that and will thus be caught. Much like with standard Java, you can be as explicit as necessary in identifying what types of exceptions you want to trap.

Let's recap what we've learned. This section described how exception handling is managed in a jBPM process. This facility is only used for managing exceptions that occur as part of any custom code you introduce, such as handlers. A common source of confusion is knowing what to do when an error is encountered. I strongly recommended that you not alter the execution flow directly in your action handler class defined in your exception-handler element in jPDL. Instead, use exceptions for notification purposes or indirectly affect the process flow by setting process variables that can be interpreted downstream.

Up to this point, we've used Java handler classes to provide custom functionality. A more convenient approach is to use BeanShell scripts.

Scripting with BeanShell

At times it may seem like overkill to resort to writing Java code when only simple or trivial functionality needs to be introduced into your business process. Maybe you just need to introduce a few lines of programming logic. In these situations, you can use BeanShell scripts inline within your jPDL code. This approach is very convenient and allows for more rapid application development. In addition, BeanShell expressions are also used in a variety of capacities in jBPM, such as in decision node logic.

BeanShell was one of the earliest Java scripting implementations and has recently entered the Java Community Process to become a standard (JSR-274). Having enjoyed fairly wide support, BeanShell is included in a variety of applications as a lightweight scripting alternative to Java (visit the official website, http://www.beanshell.org/, for more details). The syntax and usage closely mirror those of standard Java, so Java developers can generally pick it up quickly. Table 7.1 identifies the various places in jBPM's jPDL where BeanShell scripts can be used.

Table 7.1. jPDL BeanShell scripting usage

  • create-timer, cancel-timer—When used in a create-timer element, the script will be called when the timer is first created. For cancel-timer, the script is invoked when the timer is canceled.

  • exception-handler—In the previous section, we demonstrated how a Java action handler can be invoked when an exception handler traps an exception. Instead of invoking a Java action handler, you can call a BeanShell script.

  • action—Anywhere you can specify an action element, you can use a BeanShell script. So, for example, scripts can be specified in the actions associated with transitions, the process definition root, nodes, and events.

A BeanShell script would be of limited utility if you couldn't access the jBPM process instance context. Fortunately, jBPM exposes instance variables such as executionContext, token, node, and a variety of task-related objects. Obviously, the context in which the script is called determines whether these script variables will be populated. Here's an example of using a BeanShell script in a start-state node:

<start-state name="start">
  <transition name="to_state" to="first">
     <script name="beanshell-example">
        System.out.println("Event type is: " +
            executionContext.getEvent().getEventType());
        System.out.println("Token is: " + token);
        System.out.println("Task is: " + task);
     </script>
  </transition>
</start-state>

The first println statement reports the event type as transition. The second println displays a root token ("/"), and the last println shows the task is null, since no task is associated with that node. The source code for this section contains the process shown in figure 7.4.

Figure 7.4. An example process that uses scripts for setting instance variables and decision criteria

In this simple demonstration, a random value is assigned to a process variable called evalNum. The event callout BeanShell script shown in figure 7.4 shows how this variable is set. After that event occurs with its BeanShell script in the node named first, the decision node, decision1, is encountered. This decision node is configured so that the transition path taken will depend on the random value assigned to evalNum in the BeanShell script. To accomplish this, the decision1 decision node uses a BeanShell expression, a one-line statement that must evaluate to true or false (this is true whenever an expression attribute is used). Here's the jPDL implementation for this decision node:

<decision name="decision1">
   <transition to="0 to 33" name="&lt; 33">
      <condition>#{evalNum &lt; 33}</condition>
   </transition>
   <transition to="33 to 66" name="33 to 66">
      <condition>#{evalNum &gt;= 33 &amp;&amp; evalNum &lt; 66}</condition>
   </transition>
   <transition to="66 or greater" name="&gt; 66">
      <condition>#{evalNum &gt;= 66}</condition>
   </transition>
</decision>

The condition element contains the BeanShell expression used to determine whether a given transition should be followed. If the expression evaluates to true, then the transition is selected (if multiple transitions evaluate to true, the first one present will be used). As you can see, when used in combination with decision nodes, BeanShell expressions are a convenient choice, and they're easier than resorting to a Java decision handler class. I anticipate that future releases of jBPM will add support for other scripting languages, such as Groovy, JRuby, or Jython (perhaps by embracing the Apache Bean Scripting Framework [BSF] or the Scripting API in Java 6 [JSR-223]).
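The same first-true-condition-wins routing that decision1 performs on evalNum can be sketched in plain Java (no jBPM dependency; the class and method names are mine):

```java
// Evaluates the three conditions in document order, as jBPM does,
// returning the name of the first transition whose condition is true.
public class EvalNumRouting {
    static String route(int evalNum) {
        if (evalNum < 33) return "0 to 33";
        if (evalNum >= 33 && evalNum < 66) return "33 to 66";
        return "66 or greater";
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 50, 90}) {
            System.out.println(n + " -> " + route(n));
        }
    }
}
```

Note the boundary behavior: a value of exactly 33 fails the first condition and matches the second, just as in the jPDL expressions above.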


You can pretty much do anything in BeanShell scripts that you can in Java. For example, you can use import statements to provide access to external libraries. This approach is convenient since you can easily reuse your existing libraries without having to wrap them specifically within jBPM handlers.

Regardless of whether you extend jBPM functionality with BeanShell or Java, you inevitably will want the ability to log and monitor the activity that occurs in your process instance. This is where jBPM audit logging comes in handy, as you'll see in the next section. (You'll also learn in the next chapter how this capability can be used to generate events that can be consumed by an event stream processor, thus providing real-time metrics and monitoring of your jBPM processes.)

Audit logging

By default, a variety of audit logs are produced as a result of process instance execution. Collectively, these logs will provide you with complete insight into every activity that has occurred in a process instance. How can this information be beneficial? For example, you could load it into a data warehouse for reporting and analytics. Or perhaps you could monitor the data in real time for business activity-monitoring dashboards. As with most aspects of jBPM, you can also extend the logging features to add your own capabilities. For example, perhaps you want to dynamically filter log output for only content you deem relevant. To do so, you just implement your own LoggingService class—you'll see an example in our next chapter where we cover Esper, the open source event stream processing (ESP) engine.


You can disable logging by commenting out the XML line beginning with <service name="logging...> in the jbpm.cfg.xml file. When you use the Eclipse Graphical Process Designer plug-in and specify New > Process Project, it will, by default, create a blank jbpm.cfg.xml file. If you want to selectively add entries to this file, locate the default.jbpm.cfg.xml file in the jbpm-console.war and copy the desired entries from there. Also, depending on your Eclipse configuration, it may not find your jbpm.cfg.xml file in your classpath, so be sure to specify the directory where it resides if you're running the samples directly through Eclipse (using the Open Run Dialog options).

Before we look at how to access the logs via the jBPM API, let's identify the types of logging classes (see table 7.2).

There are two ways in which you can acquire logs: via a LoggingInstance and via a LoggingSession. Let's begin by looking at the LoggingInstance.

Table 7.2. jBPM logfile types

  • org.jbpm.graph.log.*—Likely the most useful set of logs. Classes such as ActionLog, TransitionLog, and NodeLog can be used to track any activity related to these objects.

  • org.jbpm.context.log.*—Includes classes such as VariableCreateLog and VariableDeleteLog, which can be used to track variables that were created and deleted throughout the lifecycle of a process instance.

  • org.jbpm.context.log.variableinstance.*—These classes, such as StringUpdateLog, are used for tracking individual changes to supported variable types (Byte, Date, Double, String). For complex, serializable Java types, logging of individual changes isn't directly supported (chapter 5 addressed how you can create converters for other object types, for which you could then create new associated logging classes).

A LoggingInstance can be retrieved by using the ProcessInstance.getLoggingInstance() method. For example, if you're running jBPM in an embedded style manner (such as used for the JUnit tests that are generated when you create a new process definition project in GPD), this could be done using a fragment such as

ProcessDefinition processDefinition =
    ProcessDefinition.parseXmlResource("processdefinition.xml");
ProcessInstance instance = new ProcessInstance(processDefinition);
LoggingInstance loggingInstance = instance.getLoggingInstance();

Once you have an instance of the LoggingInstance class (shown here as loggingInstance), you can use it to retrieve the set of logs associated with that process instance. The number and type of logs that appear will vary based on where the execution cycle is in the process instance(s), as well as the type of nodes being used. The following is an example of how you can retrieve all of the available logs via the loggingInstance we acquired earlier:

List<Object> logs = loggingInstance.getLogs();

for (Object obj : logs) {
    System.out.println("Logtype is: " + obj.getClass().getName());
}
As you can see, we're simply fetching a List of the logs through the getLogs method, iterating through them, assigning each to a Java Object (since they can be of different types), and then printing out the object's name to the console. Depending on the log class, various methods can then be interrogated to retrieve the details of the log. For example, the VariableCreateLog log class can return a VariableInstance, from which you can get the name and value of the variable that was created.


Although LoggingInstances can be used to access logs, they're only transitory in nature, and thus may be of limited value. Once a process instance is flushed to the database or persisted, all logs in the LoggingInstance will be cleared. Instead, use LoggingSession to retrieve historical logs. You can flush the logs by issuing a JbpmContext.save(ProcessInstance) method call.

As pointed out in the callout, the logs obtained through LoggingInstance are only available while you're working in an existing process instance context. To retrieve the logs after a process instance has been persisted to the database, use LoggingSession, which is the second method we mentioned in the section introduction.

To obtain an instance of LoggingSession, use the method JbpmContext.getLoggingSession(). From there, you can retrieve all logs using the LoggingSession method findLogsByProcessInstance, which takes a process instance ID as a parameter. This will return a Map keyed by each token in the given process instance. The following example prints out the logs available for a given process instance where we're assuming only one token execution path is used (for example, no forks exist in the process). In this example, jbpmContext is a JbpmContext and instance is of type ProcessInstance:

LoggingSession loggingSession = jbpmContext.getLoggingSession();
Map logMap = loggingSession.findLogsByProcessInstance(instance.getId());
Map.Entry entry = (Entry) logMap.entrySet().iterator().next();
ArrayList<Object> sessionLogs = ((ArrayList) entry.getValue());

for (Object log : sessionLogs) {
  System.out.println("Log is: " + log.getClass().getName());
}

When run, this will result in output that resembles the following:

Log is: org.jbpm.graph.log.NodeLog
Log is: org.jbpm.graph.log.TransitionLog
Log is: org.jbpm.graph.log.ActionLog
Log is: org.jbpm.context.log.variableinstance.StringUpdateLog
Log is: org.jbpm.graph.log.ProcessInstanceEndLog

If multiple tokens can be present in the process, you'd obviously want to iterate through all the Map entries. Lastly, if you already have a handle to the process instance's token, you can use the method LoggingSession.findLogsByToken(long tokenId), which brings back the List of logs associated with that token's execution.
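Iterating every entry rather than just the first can be sketched in plain Java as follows (token ids stand in for Token objects and strings for log entries, since a live jBPM context isn't assumed here):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Walks every token's log list in the Map returned by a
// findLogsByProcessInstance-style lookup, not just the first entry.
public class LogMapDemo {
    static int countAllLogs(Map<Long, List<String>> logMap) {
        int total = 0;
        for (Map.Entry<Long, List<String>> entry : logMap.entrySet()) {
            total += entry.getValue().size(); // logs for this token
        }
        return total;
    }

    public static void main(String[] args) {
        Map<Long, List<String>> logMap = new HashMap<>();
        logMap.put(1L, Arrays.asList("NodeLog", "TransitionLog")); // root token
        logMap.put(2L, Arrays.asList("ActionLog"));                // forked child token
        System.out.println("Total logs: " + countAllLogs(logMap));
    }
}
```

With real jBPM types, the loop body would simply process each token's List of log objects the same way the single-token example above does.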


In the previous chapter's coverage of APIs, you may recall we demonstrated how to return the number of execution tokens for a given process instance. See listing 6.8.

In wrapping up our coverage of jBPM audit logs, I want to caution you not to confuse jBPM audit logs with standard Java logging, like that provided by Apache log4j. Instead, jBPM audit logs are intended for auditing purposes, where you want to keep a historical record of the execution steps that occurred in a given process instance. The logging capabilities, which are extensive, are exposed through a set of logging classes that are specific to the type of activity being logged. For instance, the TransitionLog captures the details about transitions that have taken place in the process instance. Earlier I pointed out that the LoggingSession, and not the transitory LoggingInstance, is probably how you'll want to acquire the logs. I also demonstrated how you can retrieve the logs for an instance and put them to use.

We're nearing our completion of advanced jBPM features, but we have one last topic to examine: asynchronous continuations. Perhaps because of its fancy name, you might be confused about its meaning and purpose. However, it's not as complex as it sounds; it simply enables process execution to be asynchronously performed by a server process. The benefits, as I'll show next, can be substantial.

Understanding asynchronous continuations

You've likely noticed that when you signal the execution of a process instance, it will continue to execute within the thread you're running until it encounters a wait state, such as a state or task node. At that point, you could consider the transaction to be completed. While generally this doesn't cause any problems—most transactions complete within milliseconds—there are times when that may not be the case. Do any scenarios come to mind? How about when you have a node nodetype that performs a web service call to a remote system using a traditional request/reply message exchange. In that scenario, the node (and your thread) will block and wait until the reply is received (or it times out). This could have highly undesirable consequences for your process if, for example, an immediate response is anticipated (maybe it's a web order being kicked off through the BPM process, and the user is awaiting a response with an order identifier).

You might be wondering whether it would be better to manage this asynchronously by creating a separate service apart from jBPM and using a state nodetype to call it via a JMS message. By using a state rather than a node nodetype, execution would be returned immediately to your thread, as the process instance would be persisted and put on hold until it was instructed to proceed. That approach is sound, but bear in mind one complexity: some Java process must be running to receive the results from JMS and to then interact with jBPM to access the process instance and signal the state's token to advance. While certainly not that complicated, it's not trivial either, especially when you factor in exception management.

Fortunately, there's already a built-in approach for managing this scenario directly within jBPM, called asynchronous continuations. How this works is best illustrated through a simple example. Figure 7.5 shows a simple process model.

In the process shown in figure 7.5, let's assume <<Node>>s 1, 2, and 3 include Java action handlers that perform some external action. In <<Node>>s 1 and 2, those action handlers perform their work synchronously, in the same transaction in which the process is initiated (this is the default behavior). However, the third <<Node>> is specified with @async="true". What this means is that the node, and any Java action handlers within it, will be processed by an external command executor. Since <<Node>> 3 is processed asynchronously, the transaction completes and the process instance is placed in a wait state (that is, persisted) until the <<Join>> node depicted receives <<Node>> 3's signal.
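In jPDL, marking that third node asynchronous is a one-attribute change; here's a sketch (the node name and handler class are hypothetical):

```xml
<node name="node3" async="true">
  <!-- ExternalCallHandler is a hypothetical ActionHandler; with async="true"
       it runs in the Job Executor's transaction, not the caller's thread -->
  <action class="com.sample.action.ExternalCallHandler"/>
  <transition to="join1"/>
</node>
```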

Figure 7.5. Process definition that uses one asynchronous node

The jBPM Enterprise edition, which runs within the context of an application server, is by default configured to support asynchronous continuations using its built-in Job Executor. The Job Executor receives its command message through a JMS queue that's automatically configured when jBPM Enterprise is run. In our example, this means that the jBPM Job Executor will asynchronously process <<Node>> 3 in figure 7.5. Once completed, <<Node>> 3's action handler instruction to signal continuance of the execution will be performed. Figure 7.6 shows the process instance just after initiation, where <<Node>>s 1 and 2 have been completed and now wait in the join node for the conclusion of <<Node>> 3.

As figure 7.6 illustrates, the asynchronous processing of <<Node>> 3 occurs in three steps: (1) a JMS message that includes the action handler to be executed is sent to a JMS queue, (2) the jBPM Enterprise server's command listener is listening for new messages submitted to the queue, and (3) once a message is received, it's sent to the Job Executor for processing. The Job Executor will initiate a new transaction from which the action handler is run, and will forward the execution. Although not shown, all three <<Node>>s will then have completed, the <<Join>> consummated, and execution moved to the end to complete the process instance (<<End-State>>).

Asynchronous continuations can be specified for all nodetypes as well as within action handlers. This approach provides a great deal of flexibility, and can be used as an effective way to distribute load, since the server in which jBPM Enterprise edition is running may be more suitable for CPU-intensive processing. When you're contemplating interacting with external services via a node nodetype, you'll definitely want to consider using asynchronous continuations.

This wraps up our coverage of some of the most important advanced jBPM features you'll likely use. Now let's turn our focus to a very exciting topic: integrating jBPM with SCA and SDO (the topic of chapters 3 and 4). You'll find that combining these technologies will unlock jBPM's role within the enterprise and become a key cornerstone in your SOA environment.

Figure 7.6. The process once the instance is initiated—node 3 is executed asynchronously


Some topics, such as how to create your own nodetypes and integration of jBPM Console security, have not been addressed. The jBPM User Guide provides guidance on these matters, and the forum and source code can be indispensable as well.

Integration with SCA/SDO

The Service Component Architecture (SCA) and its sister technology, Service Data Objects (SDO), are emerging standards for creating multiprotocol, multilanguage services based on the concept of reusable components. Apache Tuscany, a reference implementation of SCA/SDO, has recently achieved its 1.4 release. Chapters 3 and 4 covered Tuscany in some detail, and we'll build on the examples presented in those chapters to demonstrate how we can integrate jBPM with SCA/SDO to make a powerful SOA combination.

Nearly any nontrivial business process that's being modeled and executed using jBPM will contain requirements to call out or access external systems or services. Web services, in particular those that are based on SOAP, REST, or even plain old XML (POX) over HTTP, are becoming increasingly ubiquitous. This trend has become even more pronounced as companies scramble to adopt a SOA environment, which is predicated on the notion that reusable services can be exposed through a variety of communication protocols. While there is no lack of tools and libraries available for creating web service clients and servers, they're often tied to one of the specific communication protocols, such as SOAP. As we discussed in chapter 3, the beauty of SCA is that you can expose clients through a number of protocols, all the while keeping your code completely neutral and protocol free (in other words, plain Java classes with no dependency on a given protocol). Section 7.2.1 will demonstrate how to use an SCA client in a jBPM node to access third-party web services and SCA services directly.

One of the most frequent themes in the jBPM forums hosted by JBoss is how to expose jBPM through web services. We've demonstrated many examples of using the jBPM API, and explored the capabilities and flexibility it offers. However, it is Java specific, and clients wishing to access jBPM must embed jBPM libraries and calls in their code. This runs contrary to one of the main premises behind SOA: loose coupling. By embedding jBPM API calls in your clients, you have effectively limited your flexibility, because those clients are tightly coupled to jBPM. If at some point you migrate to a different BPM engine, wholesale client code changes will be necessary. Further, it puts the onus on developers of client applications to become jBPM experts. While I hope my book helps lower that learning barrier, jBPM is still a fairly complex product, and becoming conversant with the API is not child's play.


The most recent release of jBPM, 3.2.6, offers an experimental web services interface. However, the interface is extremely limited and is intended as an example the developer can build on.

A far better approach is to abstract some of the complexities of the API into a web service façade. This strategy simplifies client development, promotes loose coupling, and exposes jBPM as a cross-platform and cross-protocol solution. You'll learn how this can be accomplished in section 7.2.2. In the meantime, let's begin by figuring out how to use SCA as a client acting on behalf of a jBPM node.

Using SCA client components for service integration

Although SCA is primarily thought of as a technology for developing components and exposing them as services, it can also effectively be used in a client capacity for integration with any existing services, whether or not they originate from SCA (assuming the service supports one of the SCA protocols). In fact, you achieve the same benefit when using SCA as a client or server: the ability to interact in a protocol- and language-neutral capacity through a flexible component framework. The upshot is that when you need to integrate with a service from within jBPM, SCA makes an outstanding choice.

To illustrate, we'll provide a brief example that demonstrates how SCA can be used as a client with one of the web services we developed in chapter 3. As you'll recall, we created an SCA SOAP-based web service for a hypothetical problem ticket service. Intended to demonstrate how easily a component can be exposed as a web service, our example didn't provide any real functionality beyond returning a fictitious, random case number. However, this simple service will serve our purposes for this example (everything you need is included in this section's sample code). First, let's look at the jBPM process that will use this web service, shown in figure 7.7.

Figure 7.7. An example business process that invokes an SCA web service through a node

In our example, a human interface task is used to capture the details of the hypothetical problem ticket/issue using the create-ticket task. When the task is completed, the soap-sca-submit node's action handler will submit the details collected from the prior task to the web service. The WSDL for the web service is included in the sample code; it's very simple and includes a single operation called createTicket. Here are the steps to establish the web service client:

  1. Create the Java interface and implementation classes.

  2. Construct the SCA composite XML file that identifies how to interact with the web service.

  3. Create the jBPM action handler class that invokes the service through the node.

Let's look at each step.


Creating the Java interface and implementation classes

This step involves creating a Java interface and an implementation class that correspond to the interface of the web service we'll be calling. If you're working with a third-party web service, here's the most straightforward way to accomplish this:

  1. Download the WSDL locally for the remote service.

  2. Generate Java SDO classes for the XML binding required to interact with the web service (using XSD2JavaGenerator, which is described in chapter 4).

  3. Create Java methods to reflect the web services you'll be integrating with.

In our example, since the web service itself was designed using SCA and sports a simple data structure, there's no need to use SDO. We can instead use the autobinding facility that comes with SCA along with the same data object class that we used to dynamically construct the WSDL (chapter 3 describes these concepts in detail, so it may be worth reviewing). Figure 7.8 illustrates the link between the WSDL artifacts and the Java classes created to interface with the remote service.

In figure 7.8, the SOAPClient interface class contains one method, createTicket, that mirrors the operation of the same name identified in the WSDL. Within the WSDL, the createTicket operation accepts as an input the TicketDO complexType, which is shown on the top left of the figure. As you can see, the TicketDO Java class used as a parameter for the SOAPClient.createTicket method likewise mirrors the complexType definition in the WSDL schema. Had you used SDO to generate the Java classes from the WSDL schema using XSD2JavaGenerator, the generated classes would be used in lieu of the TicketDO Java class shown here.
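Because SCA's autobinding follows ordinary JavaBean conventions, the TicketDO class is little more than fields with getters and setters. A sketch is shown below; the field names are illustrative, not necessarily those used in the chapter 3 sample code:

```java
// Hypothetical sketch of the TicketDO data object mirroring the WSDL's
// TicketDO complexType; SCA's autobinding maps JavaBean properties to the
// schema's child elements.
public class TicketDO {
   private String customerEmail;
   private String subject;
   private String problemDesc;
   private int caseNumber;

   public String getCustomerEmail() { return customerEmail; }
   public void setCustomerEmail(String customerEmail) { this.customerEmail = customerEmail; }

   public String getSubject() { return subject; }
   public void setSubject(String subject) { this.subject = subject; }

   public String getProblemDesc() { return problemDesc; }
   public void setProblemDesc(String problemDesc) { this.problemDesc = problemDesc; }

   public int getCaseNumber() { return caseNumber; }
   public void setCaseNumber(int caseNumber) { this.caseNumber = caseNumber; }
}
```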

Figure 7.8. The relationship between the WSDL schema and Java SCA classes


Using SDO isn't a requirement when you're working with SCA. It's appropriate if you (a) are working with complex XML structures or (b) require some of the advanced SDO capabilities such as disconnected data sets. We'll use SDO in the next section when we talk about exposing jBPM as a service.

So far, we've depicted two Java classes: the SOAPClient interface and the TicketDO data object class. The last class we need is the actual implementation class for SOAPClient, which is shown in listing 7.2.

Listing 7.2. SOAPClientImpl implementation class

The real heavy lifting in the SOAPClientImpl class is performed by SOAPClient, which acts as a proxy to the remote web service and is injected through an SCA reference.


Constructing the SCA composite file

The SCA composite file is the glue that holds together the assembly of components for either exposing or interfacing with services. Listing 7.3 shows the composite file used for this example.

Listing 7.3. problemTicket.composite file used by the SCA client

The only component defined in this composite is SOAPComponent.
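In outline, the composite looks something like the following; the class name, reference name, and endpoint URI are illustrative, so consult the sample code for the exact values:

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="problemTicket">

   <component name="SOAPComponent">
      <implementation.java class="opensoa.book.chapter721.SOAPClientImpl"/>
      <reference name="soapClient">
         <!-- binding.ws points the reference at the remote SOAP endpoint -->
         <binding.ws uri="http://localhost:8085/TicketService"/>
      </reference>
   </component>
</composite>
```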

In the sample code that accompanies this section, you'll also find a class called opensoa.book.chapter721.SOAPClientMain, which is a Java class whose static main() method simply invokes a test request to the web service (the README.txt file in the root directory for the section details how to run the web service so that you can test against it). This class, shown in listing 7.4, creates a handle to the SOAPComponent, instantiates a TicketDO object, populates it with some dummy data, and invokes the web service by calling the SOAPClient's createTicket method.

Listing 7.4. Main class used for simple testing of a remote web service

You can run this test using the Ant build file's soap.client target at the root directory for this section's sample code. So what we've completed thus far is an SCA client that can be used to access the remote SOAP-based web service. The SOAPClientMain class can be used for testing the client from the command line.

Now that our SCA client component has been configured, we can create the jBPM node's action handler.


Creating the jBPM action handler

Figure 7.7 showed that the node soap-sca-submit is called immediately following the task node, which is used to capture the problem ticket specifics. Before we move into the action handler code, let's briefly look at the definition of the node within the jPDL:

<node name="soap-sca-submit">
   <action name="SOAPNodeAction"
      class="opensoa.book.chapter721.SOAPNodeActionHandler"/>
   <transition to="end"/>
</node>

As it turns out, this step involves little more than what we've already covered. Listing 7.5 displays the complete action handler code.

Listing 7.5. SOAPNodeActionHandler implementation class

Since this class performs as an action handler, it implements ActionHandler and its required execute method.
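The handler's shape can be sketched as follows. So the example stays self-contained, jBPM's ActionHandler and ExecutionContext contracts are stubbed inline; real code would import org.jbpm.graph.def.ActionHandler and org.jbpm.graph.exe.ExecutionContext, and the real interface declares throws Exception:

```java
import java.util.HashMap;
import java.util.Map;

public class SoapNodeSketch {

   // Inline stand-in for org.jbpm.graph.def.ActionHandler
   interface ActionHandler {
      void execute(ExecutionContext ctx);
   }

   // Inline stand-in for org.jbpm.graph.exe.ExecutionContext
   static class ExecutionContext {
      private final Map<String, Object> vars = new HashMap<String, Object>();
      private String last = "";
      Object getVariable(String name) { return vars.get(name); }
      void setVariable(String name, Object value) { vars.put(name, value); }
      void leaveNode() { last = "left-node"; } // real jBPM propagates the token
      String lastAction() { return last; }
   }

   // Sketch of the SOAPNodeActionHandler: read the task-collected details,
   // invoke the SCA client, store the returned case number, and move on.
   static class SOAPNodeActionHandler implements ActionHandler {
      public void execute(ExecutionContext ctx) {
         String problem = (String) ctx.getVariable("problem");
         // Real code calls SOAPClient.createTicket(ticketDO) here; we fake
         // the returned case number purely for illustration
         String caseNumber = "CASE-" + Math.abs(problem.hashCode() % 1000);
         ctx.setVariable("caseNumber", caseNumber);
         ctx.leaveNode(); // node action handlers must explicitly propagate execution
      }
   }
}
```

The call to leaveNode is worth noting: unlike event actions, a node's action handler is responsible for propagating execution, so forgetting it leaves the process instance stranded at the node.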

The purpose of this example was to demonstrate how SCA components can be used in a client capacity to initiate web service calls from within jBPM. Using SCA provides considerable flexibility, because it supports multiple protocols and allows non-Java languages such as Ruby to be used. For companies aggressively service enabling their enterprise, which is a prerequisite for SOA, SCA helps unleash the power of jBPM. Let's now reverse the roles—that is, let's service enable jBPM so that clients can interface with it through protocols such as SOAP or JMS.

Service enabling jBPM

In the introduction to section 7.2, we explored some of the reasons why you might want to service enable jBPM so that client applications can interact with it through a variety of protocols, including SOAP, JMS, and EJB. Obviously, the ability to use multiple protocols is beneficial in heterogeneous environments, such as when mixing Java and non-Java languages. In particular, the .NET environment has outstanding web services support, so applications based on that platform can integrate rather easily with jBPM using SOAP. You might be thinking that service enabling jBPM must be a very tall order or it would have been done before. Well, it's not entirely trivial, but you may be surprised at how easy it is to selectively expose key jBPM features as services while providing a foundation for extending them, as needed, in your organization.

Figure 7.9. Service enabling jBPM using SCA/SDO

Figure 7.9 shows how we'll achieve this goal by marrying the capabilities of jBPM with SCA/SDO using Apache Tuscany. Client applications wishing to connect to jBPM can do so using any of the supported SCA binding protocols (SOAP, JMS, JSON-RPC, EJB). Internally, jBPM functions will be wrapped as SCA components, where they can be exposed as services individually or grouped together to form composite services (that is, those that combine several lower-level components into a more coarse-grained service). These SCA components will interface with jBPM using its rich and powerful API.

Attempting to expose the entire jBPM API through SCA-based services would be overly ambitious. However, like most things, I believe the 80-20 rule (Pareto principle) applies: 80 percent of what is really used can be derived from 20 percent of the functions. Further, what I hope to demonstrate is a framework for how you can build your own services as you need them (alternatively, if sufficient demand exists, we may create a SourceForge project to build the entire catalog). Table 7.3 lists a number of API calls that have been exposed through SCA that are included in this book's sample code. Although the operations represent only a small subset of what's possible, you may find it sufficient for integration with jBPM from external systems.

Table 7.3. Exposed jBPM services using SCA

  * createProcessInstance: Creates a new process instance. Requires a process name as an input, along with optional instance-specific data. Used in conjunction with db4objects to store Java instance objects.

  * listActorTasks: Given an actorId, this operation will bring back all tasks that are assigned to a given actor or user.

  * listInstanceTasks: Given a processInstanceId, this operation will list all tasks associated with that process instance. An optional filter attribute allows you to refine which tasks are returned.

  * listInstanceTokens: Given a processInstanceId, this operation will return all tokens for a given process instance.

  * listProcessInstances: Given a processId, this call will return all process instances associated with a process. You can filter results through the optional filter attribute.

  * listProcesses: This operation will list all processes available in the jBPM server instance.

  * signalToken: Given a tokenId, this service enables various token-related operations to be performed, such as signaling.

Rather than go through each operation one by one, we'll select a couple of these operations and dissect how they were created. The two we'll select are listProcesses and createProcessInstance; the former is very simple, and the latter is a bit more complex.


Bear in mind that, to run any of these examples, you'll need to connect to a jBPM instance that has some existing business processes deployed and running.

Developing the ListProcesses service operation

The objective behind this web service operation is to return a list of all processes that reside in a given jBPM server instance. The request itself doesn't contain any expected parameters. The output will return the list of processes in a format that resembles that shown in figure 7.10.

Figure 7.10. A sample request and response for the ListProcesses SOAP operation


The sample code for this section contains a hibernate.properties file that you can use to specify the database instance you want to connect to. You'll obviously want to change the properties to reflect your own environment.

Notice in figure 7.10 that the response returns a list of processes currently installed in your jBPM server instance. For each process, it provides the assigned name; a count of the process instances that are running, ended, or suspended; the internal processId; and finally, the unique process version number. To create this operation (and the others), four main steps are involved: (1) create (or modify) WSDL entries, (2) autogenerate the SDO classes used for the request/response XML, (3) create the Java SCA implementation classes, and (4) create the SCA composite XML. Let's examine each step.


Creating the WSDL entries

Manually creating or modifying a WSDL is a tedious undertaking, even with the WSDL editors available in Eclipse and in tools such as Stylus Studio. While SCA can automatically generate a WSDL, the downside to this approach is that it can result in a proliferation of WSDLs, since each service produces its own. Furthermore, the generated WSDLs may not adhere to the desired format and structure. A better approach is to manually create a WSDL, which in the sample code for this section is called jbpm.wsdl. Obviously, it's outside the scope of this book to address the specifics of WSDL design, but I can highlight the entries necessary to construct our ListProcesses operation.

When modifying a WSDL, I find it easiest to work backward, if you will, from the service creation. Let's start by creating the service, binding, and portType definitions (using WSDL 1.1). Figure 7.11 shows the relationships between the entities.

As you may be aware, a WSDL can define multiple protocols for accessing a service. In our example, we're just using a SOAP binding, which is defined through the wsdl:binding element. The wsdl:service definition for ListProcesses ties this binding to a specific URL and port. The wsdl:portType defines the inputs and outputs required for the operations. The wsdl:portType name is referenced by the @type attribute in the wsdl:binding, as illustrated in figure 7.11. What remains is the XML Schema definition used for the request and response. This relationship is shown in figure 7.12.
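In skeletal form, with illustrative names (the actual entries live in jbpm.wsdl), the three definitions wire together like this:

```xml
<wsdl:portType name="ListProcesses">
   <wsdl:operation name="ListProcesses">
      <wsdl:input message="tns:ListProcessesRequest"/>
      <wsdl:output message="tns:ListProcessesResponse"/>
   </wsdl:operation>
</wsdl:portType>

<!-- The binding's @type attribute references the portType by name -->
<wsdl:binding name="ListProcessesSOAPBinding" type="tns:ListProcesses">
   <soap:binding style="document"
      transport="http://schemas.xmlsoap.org/soap/http"/>
   <wsdl:operation name="ListProcesses">
      <soap:operation soapAction=""/>
      <wsdl:input><soap:body use="literal"/></wsdl:input>
      <wsdl:output><soap:body use="literal"/></wsdl:output>
   </wsdl:operation>
</wsdl:binding>

<!-- The service ties the binding to a concrete URL and port -->
<wsdl:service name="ListProcessesService">
   <wsdl:port name="ListProcessesPort" binding="tns:ListProcessesSOAPBinding">
      <soap:address location="http://localhost:8080/ListProcessesService"/>
   </wsdl:port>
</wsdl:service>
```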

Figure 7.11. The service, binding, and portType definitions for the ListProcesses operation

Figure 7.12. ListProcesses WSDL XML Schema definition

The wsdl:portType's child wsdl:input and wsdl:output elements (see figure 7.11), via their @message attribute, associate to wsdl:message, which is shown in figure 7.12. The message parts identified in wsdl:message, through the wsdl:part's @element attribute, finally tie it to the XML Schema. Obviously, keeping this all straight can be a challenge, but once you set up a few operations, it's fairly easy to clone your entries. Crafting the WSDL is the most difficult, or at least most tedious, part of this whole process. Next, we'll generate our Java data objects that correspond to the XML Schema elements shown in figure 7.12.


Generating the SDO classes

For a simple XML Schema like the one we're using with ListProcesses, we could certainly create our own Java objects to represent the request and response XML (as you recall, in section 7.2.1 we did just that). However, once you begin to work with more complex types, doing so manually becomes untenable. Fortunately, you can easily generate Java binding classes using the SDO utility XSD2JavaGenerator. The suggested means of running this is within an Ant script, using a target, as shown in listing 7.6.

Listing 7.6. Ant target used to generate SDO classes

The Ant target shown in listing 7.6 (also included in the build.xml file you'll find in this section's sample code) uses the Ant java task to call XSD2JavaGenerator, and it accepts a variety of parameters, such as the targetDirectory where the generated class files should be created.
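A sketch of such a target follows; the classpath reference and target directory are illustrative, while the generated package opensoa.sca.vo.xsd matches the class names discussed next:

```xml
<target name="generate.sdo" description="Generate SDO binding classes">
   <java classname="org.apache.tuscany.sdo.generate.XSD2JavaGenerator" fork="true">
      <classpath refid="tuscany.classpath"/>
      <arg value="-targetDirectory"/>
      <arg value="src/generated"/>
      <arg value="-javaPackage"/>
      <arg value="opensoa.sca.vo.xsd"/>
      <arg value="jbpm.wsdl"/>
   </java>
</target>
```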

The ProcessVO complexType defined in the WSDL's schema describes each process returned:

<xs:complexType name="ProcessVO">
   <xs:sequence>
      <xs:element minOccurs="0" name="description" nillable="true"
         type="xs:string"/>
      <xs:element minOccurs="0" name="hasActions" type="xs:boolean"/>
      <xs:element minOccurs="0" name="hasEvents" type="xs:boolean"/>
      <xs:element minOccurs="0" name="id" type="xs:long"/>
      <xs:element minOccurs="0" name="name" nillable="true"
         type="xs:string"/>
      <xs:element minOccurs="0" name="version" type="xs:int"/>
   </xs:sequence>
   <xs:attribute name="running" type="xs:int"/>
   <xs:attribute name="suspended" type="xs:int"/>
   <xs:attribute name="ended" type="xs:int"/>
</xs:complexType>

When XSD2JavaGenerator is run on the WSDL, it generates a class called opensoa.sca.vo.xsd.ProcessVOType that corresponds to the ProcessVO complexType shown earlier. As a result, we know that this class represents the return value associated with the Java method used to process the service operation. In this case, the operation is ListProcesses (see figure 7.11). Let's examine how we implement this method and its corresponding implementation class.


Creating the SCA implementation classes

By now you're probably familiar with the standard approach for creating SCA components and exposing them as services. The first step is to create an interface class that defines the service signature. In this example, we call this class, appropriately enough, ListProcesses:

@Remotable
public interface ListProcesses {
   public ProcessVOType listProcesses();
}

The @Remotable annotation indicates that this interface can be exposed externally outside of the SCA JVM runtime. The method listProcesses, as we expect, will receive no request parameters and will return an instance of type ProcessVOType. The next step is to implement this interface, which is where the logic resides for working with the jBPM service. Listing 7.7 shows the implementation class.

Listing 7.7. ListProcesses implementation class interfacing with the jBPM service

In the previous chapters on SCA and jBPM, we covered each of the steps occurring in listing 7.7: we acquire a jBPM session connection, use it to retrieve the list of deployed processes, and populate the generated SDO response objects returned to the caller.


Creating the SCA composite files

The last step on our journey is to create the SCA composite files, which declaratively define our services and how they're published. Because we ultimately want to create many such services, not just the one we're creating now, the configuration is decomposed into several composite files. The parent composite file, which we'll call jbpm.composite, includes the child composites. This relationship is illustrated in figure 7.13.

Figure 7.13. Decomposed construction of the jbpm.composite SCA file

The service we're defining, ListProcessesService, resides in the file listservices.composite:

<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
    name="listservices">

    <service name="ListProcessesService" promote="ListProcessesComponent">
       <!-- binding.ws exposes the service as SOAP; @wsdlElement points at
            the manually created jbpm.wsdl (value elided here) -->
       <binding.ws wsdlElement="..."/>
    </service>

    <component name="ListProcessesComponent">
       <implementation.java class="opensoa.sca.impl.ListProcessesImpl"/>
       <!-- wired to the helper component defined in utility.composite -->
       <reference name="jbpmContextHelper" target="JbpmContextHelper"/>
    </component>
</composite>

The service definition for ListProcessesService promotes the component ListProcessesComponent. This component uses as its implementation ListProcessesImpl, which we developed in listing 7.7. The ListProcessesService service definition includes the binding.ws child element, which indicates that the service is to be exposed as a SOAP-based web service. The @wsdlElement instructs the binding to use the WSDL we manually developed in the first step of this process.

You may have also noticed that the component definition for ListProcessesComponent includes the reference injection for jbpmContextHelper. This component is defined in utility.composite and provides services to the component for connecting to the jBPM session or instance (see listing 7.7).


You can find the utility.composite file in the sample code for this chapter.

The only remaining task is to create a class with a static main() method to host and run the assembly within an SCA domain. We'll use the embedded server for simplicity's sake; it will serve the web service. This class, appropriately called Server (Ant target run.server), simply launches the SCA server:

public class Server {
   public static void main(String[] args) {
      Server server = new Server();
      server.run();
   }
   public void run() {
      SCADomain scaDomain = SCADomain.newInstance("jbpm.composite");
      System.out.println("jBPM services started (press Enter to shut down)");
      try {
         System.in.read();
      } catch (java.io.IOException e) {
         // ignore
      }
      scaDomain.close();
   }
}

What have we accomplished by this exercise? Our objective was to create a web service that interacted with the jBPM API to return a list of all processes running within that jBPM instance. Toward this end, we constructed a WSDL that defined the web service. That WSDL, in turn, was used to automatically generate SDO binding classes for each of the XML Schema elements and types included within the WSDL. A Java class was then developed to implement the service. It used the jBPM API to retrieve the list of processes, and populated the return response using generated SDO classes. An SCA assembly was then created with the Java class as the implementation for a component that was exposed as a web service. While there was some setup work involved in this solution, adding new services will be much more straightforward. Further, you can also expose the service through JMS or any of the other available Tuscany SCA protocols. Let's add one more example to reinforce the steps. This service will be used to instantiate a new jBPM process instance.

Developing the CreateProcessInstance service operation

Creating the CreateProcessInstance service mostly mirrors what we've already described in the previous example, so rather than going through each step, let's focus on the unique aspects of this service's implementation. In particular, when you create a process instance, such as instantiating a new employee hire process, you'll often have a significant amount of information already collected. While that data could conceivably be passed to the web service as an open-ended set of key/value pairs, doing so isn't always sensible or practical. Instead, a more intuitive approach is to create an XML Schema that fully expresses the complexity of the data you're passing. What's the downside to this approach? Your web service WSDL must potentially be modified for each process in which you wish to incorporate complex data types.

Let's examine the approach of using a separate XML Schema for each process. Let's assume that you have an existing jBPM process that's used for hiring new people. We'll call this process NewHireProcess. Let's assume that some data is already available on the new employee, perhaps originating from an Applicant Tracking System (ATS), and we want to pass this information to the process when it's initiated. To facilitate this, we'll create an employee entity and define it within an XML Schema. Thus, when the CreateProcessInstance service is called, the employee XML data will be populated and passed when the service is invoked. Listing 7.8 shows an example of the CreateProcessInstance operation using the Employee XML Schema complex type.

Listing 7.8. XML example of a CreateProcessInstance operation

The Process element's @processName attribute identifies the process to use when creating the instance.

<xs:complexType name="ProcessType">
   <xs:sequence>
      <xs:element name="key" type="xs:string" minOccurs="0"/>
      <xs:element name="ProcessVars" minOccurs="0">
         <xs:complexType>
            <xs:choice>
               <xs:element name="Applicant" type="jbpm:ApplicantType"/>
               <xs:element name="Employee" type="jbpm:EmployeeType"/>
               <xs:element name="Other"/>
            </xs:choice>
         </xs:complexType>
      </xs:element>
   </xs:sequence>
   <xs:attribute name="actorId" type="xs:string"/>
   <xs:attribute name="processName" type="xs:string" use="required"/>
</xs:complexType>

Notice the choice declaration, which accepts one of the three element types: ApplicantType, EmployeeType, or Other. You'd obviously modify this to reflect any complex types you require, and add them to the appropriate location within the XML Schema located within the WSDL (in the example for listing 7.8, the EmployeeType is being used).

The CreateProcessInstanceImpl class, the SCA implementation class behind the CreateProcessInstance web service, is analogous to the ListProcessesImpl class we created in listing 7.7. One challenge is that, depending on the process instance being created, different inbound XML will be provided in the service request. This was illustrated in listing 7.8, where the Employee node information was sent because the process instance to be created was specified as NewHireProcess. This can be addressed by triaging the inbound requests based on the process name provided (found in the @processName attribute of the Process element). For example:

if (process.getProcessName().equalsIgnoreCase("NewHireProcess")) {
   // process-specific logic, such as adding process variables
}

Within the body of the if statement, you could then perform process-specific functions, such as adding process variables to the instance. This is illustrated in the sample code for this section.
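If the number of processes grows, the chain of if statements can be replaced with a small registry keyed by process name. A self-contained sketch (the process and variable names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical dispatch for CreateProcessInstance: each process name maps to
// a handler that converts the inbound payload into jBPM process variables.
public class ProcessTriage {

   interface VariableHandler {
      Map<String, Object> toProcessVariables(Object payload);
   }

   private final Map<String, VariableHandler> handlers =
      new HashMap<String, VariableHandler>();

   public ProcessTriage() {
      // Register one handler per process; names here are illustrative
      handlers.put("newhireprocess", new VariableHandler() {
         public Map<String, Object> toProcessVariables(Object payload) {
            Map<String, Object> vars = new HashMap<String, Object>();
            vars.put("employee", payload); // e.g., the Employee data object
            return vars;
         }
      });
   }

   public Map<String, Object> route(String processName, Object payload) {
      VariableHandler handler = handlers.get(processName.toLowerCase());
      if (handler == null) {
         throw new IllegalArgumentException("No handler for process: " + processName);
      }
      return handler.toProcessVariables(payload);
   }
}
```

The variables returned would then be added to the newly created process instance, just as in the if-based version.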


In the code examples for this section, db4objects is used to store the inbound complex SDO data objects that represent an employee or applicant. There are a couple of reasons for this: (a) SDO objects can't be stored natively within the jBPM instance (or, at a minimum, it will cause errors when displaying process instance details within the jBPM Console); (b) storing them externally in a database makes them more readily accessible for reporting and other purposes; and (c) an object database such as db4objects is likely far more efficient at indexing and retrieval of native Java objects than jBPM. The index used to store them in these examples is the @objectId attribute associated with the object's root element (such as Employee).

There are undoubtedly other approaches, many of them probably superior, for handling the variability that surrounds an operation such as CreateProcessInstance. For example, Spring would likely be a great choice for declaratively managing which classes are used for different process instances. I've tried to avoid introducing too many technologies beyond the core ones we're focusing on, in an effort to keep things simple. jBPM is a wonderful and powerful BPM solution, and when coupled with SCA/SDO, it opens a world of possibilities for integration within a SOA environment.


Summary

This chapter has been quite a journey. We've covered a lot of material! Hopefully your perseverance has paid handsome dividends. This chapter was split into two main sections: advanced features of jBPM, and integration of jBPM with SCA/SDO through its Apache Tuscany implementation. The advanced features focused on some of the enterprise capabilities of jBPM, such as the ability to create superstates and subprocesses, both of which help bring greater order and manageability to complex business process definitions. We also touched on the use of asynchronous continuations, which can be used in circumstances where you're integrating with services that may not have predictable or timely responses. Asynchronous continuations can also help you create more distributed solutions.

The second main section focused on how you can integrate jBPM with SCA and its related technology, SDO. This marriage addresses some of the recurring concerns with jBPM, such as how you call external services within the context of a reusable and consistent framework. We demonstrated how you can easily integrate with web services using SCA components in a client-style capacity. We then reversed the requirement and provided a means by which the jBPM API can be exposed through SCA. The implication is that jBPM can now be integrated through any number of protocols, including SOAP and JMS. The ability to call out as a service consumer and perform as a service provider is equally important from a SOA standpoint. You may recall from our initial discussion in chapter 2 that services can be construed as high-level business processes or as more granular, component-level type services. Through the combination of SCA/SDO and jBPM, we have the full spectrum of services addressed, from fine to coarse-grained, layered upon a compelling technology stack.

The next chapter will describe how we can leverage the events derived from our services to provide complete operational insight and monitoring—an important value-added feature of a SOA environment.